Industrial IoT Edge Computing with AWS IoT Greengrass on Raspberry Pi CM5
An engineering R&D log documenting local component, artifact, and recipe creation on AWS IoT Greengrass v2 using Raspberry Pi Compute Module 5, including real deployment failures and fixes.
SciTech Edge Advance 1.1
Executive Summary
This R&D log documents how we moved from a bare AWS IoT Greengrass v2 installation to real component execution on a Raspberry Pi CM5. The focus is not cloud theory, but the actual mechanics of artifacts, recipes, permissions, and local deployment, including the mistakes that caused deployments to fail and how they were fixed.
This matters because in Industrial IoT deployments, Greengrass success is determined by recipe correctness, permissions, and runtime behavior, not by whether the console shows a green tick.
Hardware Stack
- Edge Device: Raspberry Pi
- Compute Module: Raspberry Pi CM5
- Architecture: ARM64 (aarch64)
- Deployment Context: On-device local Greengrass execution + AWS-managed control plane
Software & Tooling Stack
- AWS IoT Core
- AWS IoT Greengrass v2
- AWS CLI
- Python 3 runtime
- Amazon S3 (artifact storage)
- IAM (users, policies, certificates)
Technical Walkthrough
Installing AWS CLI on the CM5
AWS CLI is mandatory because Greengrass artifacts are fetched from S3, and recipes often reference S3 URIs.
Jayanta installs the AWS CLI locally and verifies the installation by checking its version.
At this stage, no Greengrass-specific commands are used yet. This is pure infrastructure preparation.
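For reference, one common way to install AWS CLI v2 on a 64-bit Raspberry Pi OS looks like the following sketch; the exact commands used were not shown in the log, and the aarch64 build is an assumption based on the CM5 hardware:

```shell
# Download and install AWS CLI v2 for 64-bit ARM (assumes aarch64 Raspberry Pi OS)
curl "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Verify the installation by checking the version
aws --version
```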
After installation, IAM credentials are exported locally for the IAM user created earlier.
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_DEFAULT_REGION=...

These credentials belong to an IAM user created specifically for the Raspberry Pi device.
IAM User and Permissions Design
A dedicated IAM user was created (named RB Pi in this setup).
Permissions explicitly granted:
- Full access to S3
- Full access to AWS IoT
- Full access to AWS Greengrass
- IAM permissions (for certificate and policy binding)
This IAM user's credentials are reused on the Greengrass device itself, so any missing permission will silently break deployments.
IoT Thing, Certificate, and Policy Binding
The CM5 module is registered as a Thing in AWS IoT Core.
Key points:
- Certificates were auto-generated during initial Greengrass setup
- A previously created Raspberry Pi Policy was reused
- The policy was attached directly to the certificate
Jayanta explicitly verified the policy contents in JSON format to ensure required permissions were present.
This separation matters:
- IAM user → controls AWS-side actions (S3, deployments)
- IoT Thing policy → controls device runtime permissions
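To illustrate the device-side half of this split, a minimal IoT policy of the kind attached to a Greengrass device certificate often looks roughly like the sketch below. This is illustrative only; the actual Raspberry Pi Policy JSON was verified in the console but is not reproduced here, and production policies should scope Resource far tighter than `*`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:Connect",
        "iot:Publish",
        "iot:Subscribe",
        "iot:Receive",
        "greengrass:*"
      ],
      "Resource": "*"
    }
  ]
}
```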
Artifact Design and Versioning Strategy
Artifacts were created using a strict versioned structure.
- Component name: com.example.helloWorld
- Versioned folder: 1.0.0/
- Inside 1.0.0/: main.py
main.py
The minimal Python script.
The Python code is intentionally minimal:
- Prints "Hello World"
- Runs in a loop with a 1-second delay
This simplicity is deliberate. At this stage, the goal is deployment correctness, not application complexity.
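A sketch of what such a main.py could look like. The exact script was not shown; the `iterations` parameter here is an addition purely to make the loop testable, and the deployed version would simply loop forever:

```python
import time


def run(iterations=None):
    """Print "Hello World" once per second.

    iterations=None loops forever, matching the behavior described above;
    a finite value exists only so the loop can be exercised in a test.
    """
    count = 0
    while iterations is None or count < iterations:
        print("Hello World")
        time.sleep(1)
        count += 1


if __name__ == "__main__":
    run()
```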
Recipe Creation (Why Most Deployments Fail)
Recipe filename:
com.example.hello_world_1.0.0.json
Greengrass Recipe (v1.0)
Production-tested recipe.
Critical constraints enforced during editing:
- Component name must be lowercase
- Version must match artifact version exactly
- Runtime specified as Python 3
- Entry point script must match main.py
- S3 bucket path must match artifact location exactly
Any mismatch here leads to non-obvious runtime failures.
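Under those constraints, a Greengrass v2 recipe for this component might look like the following sketch. The bucket name, publisher, and S3 path are assumptions; the author's actual recipe is not reproduced here:

```json
{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "com.example.helloWorld",
  "ComponentVersion": "1.0.0",
  "ComponentDescription": "Minimal Hello World component",
  "ComponentPublisher": "Example",
  "Manifests": [
    {
      "Platform": { "os": "linux" },
      "Lifecycle": {
        "Run": "python3 -u {artifacts:path}/main.py"
      },
      "Artifacts": [
        { "URI": "s3://my-greengrass-artifacts/com.example.helloWorld/1.0.0/main.py" }
      ]
    }
  ]
}
```

Note how the component name, version, entry point, and S3 URI each correspond to one of the constraints above; any one of them drifting out of sync produces the non-obvious failures described.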
Uploading Artifacts to S3
The main.py artifact was uploaded to S3 under the versioned path.
Jayanta used an existing helper script (storage bucket.py) to perform uploads.
(The exact upload command was not shown verbatim in the recording.)
What matters:
- Artifact path in S3 must match the recipe reference
- Version folders must exist before deployment
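Had the upload been done with the AWS CLI rather than the helper script, it would look roughly like this sketch (the bucket name is an assumption):

```shell
# Upload the artifact under its exact versioned path; the recipe's S3 URI must match this key
aws s3 cp main.py s3://my-greengrass-artifacts/com.example.helloWorld/1.0.0/main.py

# Confirm the object exists at the expected key
aws s3 ls s3://my-greengrass-artifacts/com.example.helloWorld/1.0.0/
```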
Verification was done directly from:
- AWS Console → Greengrass → Artifacts
Local vs Cloud Deployment Strategy
Before deploying from the cloud, Jayanta intentionally runs everything locally first.
Why:
- Faster iteration
- Immediate log access
- No cloud redeploy latency
At this point:
- No local deployments existed
- Greengrass CLI was initially unavailable
Greengrass CLI Not Found (Expected Failure)
When attempting to run Greengrass CLI commands, the CLI was not found. This is expected: the Greengrass CLI becomes available only after the required components are deployed from the cloud.
No changes were made on the Raspberry Pi manually.
Deploying Required Greengrass Components from AWS
From AWS Console:
- Target: CM5 IoT Thing
- Components deployed:
  - Greengrass CLI
  - Local Debugger
  - Nucleus-related dependencies
Once deployment succeeded:
- Components appeared automatically on the CM5
- Greengrass CLI became available without manual installation
This confirms cloud-to-edge synchronization is working correctly.
First Local Deployment (Failed)
Jayanta submitted the first local deployment.
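A local deployment of this kind is typically submitted through the Greengrass CLI; a sketch, with the directory paths as assumptions and the default /greengrass/v2 install root:

```shell
# Merge the component into the local deployment from local recipe/artifact folders
sudo /greengrass/v2/bin/greengrass-cli deployment create \
  --recipeDir ~/greengrass-components/recipes \
  --artifactDir ~/greengrass-components/artifacts \
  --merge "com.example.helloWorld=1.0.0"
```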
Symptoms:
- Deployment entered the running state
- Component did not execute correctly

The logs revealed a component execution failure: an incorrect lifecycle definition and a missing virtual environment reference.
Root Cause and Fix
Why it failed:
- Old lifecycle script referenced a virtual environment incorrectly
- Component name resolution failed ("failed to find component name")
Fix applied:
- Lifecycle manifest was simplified
- Old script removed
- Entry point replaced with direct main.py execution
Jayanta explicitly notes this was found after significant trial and error.
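The simplified lifecycle that ended the trial and error boils down to a direct Run step: no virtual environment activation and no wrapper script. A sketch of the relevant manifest fragment (not the author's exact recipe):

```json
"Lifecycle": {
  "Run": "python3 -u {artifacts:path}/main.py"
}
```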
Successful Execution
After redeploying:
- Component executed successfully
- "Hello World" printed continuously
- One-second delay confirmed
This validates:
- Artifact upload
- Recipe correctness
- Permission alignment
- Runtime execution
Local Greengrass Debugger
AWS Greengrass provides:
- Cloud debugger
- Local debugger (same UI, local execution)
The local debugger:
- Generates a temporary username/password
- Exposes a local HTTPS interface
- Mirrors AWS Console UI
This allows full inspection without redeploying from cloud.
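The temporary username/password for the local debug console is generated by the Greengrass CLI itself; assuming the default install root:

```shell
# Generate temporary credentials for the local debug console
sudo /greengrass/v2/bin/greengrass-cli get-debug-password
```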
Current State Verification
Healthy system indicators:
- Greengrass components visible locally
- Deployment status = running
- Logs show continuous execution
- No manual SSH-side fixes required
Verification was done via:
- Component list
- Deployment list
- Full Greengrass logs
- Local debugger UI
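The same checks can be run from the shell; a sketch assuming the default /greengrass/v2 root and the component name used above:

```shell
# List locally deployed components and their states
sudo /greengrass/v2/bin/greengrass-cli component list

# Follow the nucleus log and the component's own log
sudo tail -f /greengrass/v2/logs/greengrass.log
sudo tail -f /greengrass/v2/logs/com.example.helloWorld.log
```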
The Road Ahead
With the CM5 module stable and executing components locally, the foundation is ready:
- [ ] Clean component lifecycle definitions
- [ ] Git-based artifact management
- [ ] Cloud-triggered fleet deployments
- [ ] Debugging component failures systematically
Closing Note
This log reflects how Industrial IoT systems actually get built: broken deployments, permission mismatches, runtime assumptions, and gradual stabilization. We document these R&D steps publicly because production-grade edge systems are earned through iteration, not diagrams.
— DK Swami, Founder, Scitech Industries (Kagaku Technology Pvt Ltd)
People Behind This Work


D K Swami
Founder & Technical Lead, Scitech Industries
Also documenting the founder journey at Unscripted with DK Swami → Instagram