{"id":4443,"date":"2022-05-13T13:58:00","date_gmt":"2022-05-13T11:58:00","guid":{"rendered":"https:\/\/blog.besharp.it\/?p=4443"},"modified":"2023-03-24T18:30:40","modified_gmt":"2023-03-24T17:30:40","slug":"what-i-learned-after-a-couple-of-weeks-of-using-aws-iot-greengrass","status":"publish","type":"post","link":"https:\/\/blog.besharp.it\/what-i-learned-after-a-couple-of-weeks-of-using-aws-iot-greengrass\/","title":{"rendered":"What I\u00a0 learned after a couple of weeks of using AWS IoT Greengrass"},"content":{"rendered":"\n
In the last few days, I started looking at AWS IoT Greengrass, one of the many IoT services offered by AWS, which was upgraded to v2 last year.<\/p>\n\n\n\n
In this article, I’d like to talk about what I’ve learned and share my general takes, hoping to help anyone who approaches this service for the first time, just like I did.<\/p>\n\n\n\n
So, let’s start with\u2026<\/p>\n\n\n\n
AWS IoT Greengrass is a service that brings the computing experience we are used to on AWS to our IoT edge devices. It consists of an open-source runtime that we install on our devices and that offers features such as remote software deployment, over-the-air updates, and local data processing and streaming.<\/p>\n\n\n\n
Like most other AWS services, AWS IoT Greengrass was created to solve multiple problems in a managed manner without forcing us developers to reinvent the wheel every time. When we build IoT solutions, we need to worry about scaling and keeping our devices’ firmware up to date, and this is what AWS IoT Greengrass is trying to help us do.<\/p>\n\n\n\n
AWS IoT Greengrass makes it easy to deploy and manage device software on millions of devices remotely. We can organize our devices in groups and deploy and manage device software and configuration on a subset of devices or on all of them. AWS IoT Greengrass lets us send over-the-air updates to the software running on our machines without the risk of breaking the runtime layer and having to reach the device physically to reset it. This is achieved by decoupling the components we want to run on our devices from the Core software, which orchestrates those components and manages firmware update jobs.<\/p>\n\n\n\n
With AWS IoT Greengrass, we can perform Machine Learning Inference, data aggregation, and streaming to multiple AWS cloud services (such as Amazon S3 or Amazon Kinesis) directly from our devices, therefore allowing us to have a grasp of what data goes to the cloud so we can optimize analytics, computing, and storage costs.<\/p>\n\n\n\n
Imagine you are building an IoT solution that consists of multiple devices installed in a factory. These devices periodically gather data and send it as-is to a cloud backend. This data is then processed and stored, allowing your clients to know whether their machines are performing as they should and whether some parts need maintenance, and letting them configure custom events that trigger notifications. Now imagine this solution deployed to thousands of factories, hence millions of devices sending a payload to your backend every few minutes: the costs of your application would grow with the number of devices installed. Furthermore, data received from one factory has no connection with data received from another, because every factory behaves differently.<\/p>\n\n\n\n
Now, how can we optimize this infrastructure? With AWS IoT Greengrass, we could configure a machine that works as a processing centralizer: it gathers data, aggregates it, and periodically sends a single payload with the global status of the plant. Of course, devices might still have to send some information (such as alerts or events) as-is to our backend, but moving some of the data processing to the edge would drastically reduce the costs of our infrastructure. We’d have far fewer resources to provision on the cloud side, as they would grow not with the number of devices but with the number of plants.<\/p>\n\n\n\n
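To make this concrete, here is a minimal sketch in plain Python of the kind of aggregation such a centralizer could do. The field names and metrics are invented for the example; in a real setup this logic would run as a Greengrass component on the edge machine:

```python
import json
import statistics
from collections import defaultdict

def aggregate_readings(readings):
    """Collapse many per-device readings into one plant-level payload.

    `readings` is a list of dicts like {"device_id": ..., "metric": ..., "value": ...};
    these field names are invented for this example.
    """
    by_metric = defaultdict(list)
    for reading in readings:
        by_metric[reading["metric"]].append(reading["value"])

    return {
        "devices": len({r["device_id"] for r in readings}),
        "metrics": {
            metric: {
                "min": min(values),
                "max": max(values),
                "avg": round(statistics.mean(values), 2),
            }
            for metric, values in by_metric.items()
        },
    }

# Three per-device readings become one summary payload for the whole plant.
readings = [
    {"device_id": "press-1", "metric": "temperature", "value": 71.0},
    {"device_id": "press-2", "metric": "temperature", "value": 75.0},
    {"device_id": "press-1", "metric": "vibration", "value": 0.4},
]
payload = aggregate_readings(readings)
print(json.dumps(payload))
```

Instead of three payloads per cycle, the backend now receives one, and the cloud-side cost scales with the number of plants rather than devices.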
My approach to learning AWS IoT Greengrass has been the same one I use for other services: I usually start with a quick overview of the official documentation, then I start building something directly from the AWS console, and when I hit an obstacle I do targeted research on the web.<\/p>\n\n\n\n
My take on the official AWS IoT Greengrass documentation is that it’s very extensive, but notions are sometimes scattered. I often found myself looking for ways to fix a problem that I thought wasn’t documented, only to realize after minutes (if not hours) that the solution had been right in front of me, just in a different section of the documentation. AWS IoT Greengrass is built on many tightly interconnected concepts, so you should seriously dive into the docs before starting to build your solution.<\/p>\n\n\n\n
What I want to do here is give you a way to quickly start a project and relieve you of the hassle of trying to figure out how the features of this service work together.<\/p>\n\n\n\n
The core device is a machine with the AWS IoT Greengrass runtime installed. There are many ways to set one up; the path I chose first is the automatic provisioning of AWS IoT Greengrass in a Docker container, as it is the most straightforward approach and doesn’t require much configuration. All we need to do is create a “.env” file with our core device configuration that looks like this:<\/p>\n\n\n\n
GGC_ROOT_PATH=\/greengrass\/v2\nAWS_REGION=eu-west-1\nPROVISION=true\nTHING_NAME=P2BCGreengrassCore\nTHING_GROUP_NAME=P2BCGreengrassCoreGroup\nTES_ROLE_NAME=P2BCGreengrassV2TokenExchangeRole\nTES_ROLE_ALIAS_NAME=P2BCGreengrassCoreTokenExchangeRoleAlias\nCOMPONENT_DEFAULT_USER=ggc_user:ggc_group\n<\/code><\/pre>\n\n\n\nThis “.env” file can then be passed to the docker-compose.yaml file as the “env_file” value. If we set the PROVISION key to true, AWS IoT Greengrass will take care of creating all the resources required on AWS IoT to set up a new device. To do this, though, we need to give AWS IoT Greengrass AWS credentials in the form of an access key, secret access key, and session token. I wasn’t keen on this solution, as I don’t want to repeat this operation for every core device I set up in the future, and, above all, I want to be in charge of creating the roles and permissions needed.<\/p>\n\n\n\n
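As a side note, a typo in this file only shows up when the container starts, so a quick sanity check can save a restart cycle. A minimal sketch in Python (the list of required keys is my own assumption for this setup, not an official one):

```python
# Keys I expect in the ".env" file for this setup (my assumption, not an
# official requirement list from AWS IoT Greengrass).
REQUIRED_KEYS = {"GGC_ROOT_PATH", "AWS_REGION", "PROVISION"}

def parse_env_file(text):
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def check_env(env):
    """Return the required keys missing from the parsed file, sorted."""
    return sorted(REQUIRED_KEYS - env.keys())

env = parse_env_file("""\
GGC_ROOT_PATH=/greengrass/v2
AWS_REGION=eu-west-1
PROVISION=true
""")
print("missing keys:", check_env(env))
```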
So I decided to manually create the IoT thing, role, role alias, group, and certificates. The certificates will be used to authenticate the newly created device on AWS. This guide explains how to provision these resources directly from the command line: https:\/\/docs.aws.amazon.com\/greengrass\/v2\/developerguide\/run-greengrass-docker-manual-provisioning.html. The “.env” file now looks like this:<\/p>\n\n\n\n
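For reference, this is roughly what those manual steps automate. A simplified sketch using a boto3 IoT client (the function and parameter names are mine, and it deliberately skips the IoT policy creation and attachment that the guide also covers, so don't treat it as a complete provisioning script):

```python
def provision_core_device(iot, thing_name, thing_group, tes_role_arn, role_alias):
    """Create the main IoT resources a Greengrass core device needs.

    `iot` is a boto3 IoT client (boto3.client("iot")). Simplified sketch:
    a real setup also needs an IoT policy attached to the certificate.
    """
    # The thing represents the core device; the group lets us deploy to fleets.
    iot.create_thing(thingName=thing_name)
    iot.add_thing_to_thing_group(thingGroupName=thing_group, thingName=thing_name)

    # The certificate authenticates the device against AWS IoT Core.
    cert = iot.create_keys_and_certificate(setAsActive=True)
    iot.attach_thing_principal(thingName=thing_name,
                               principal=cert["certificateArn"])

    # The role alias lets the device exchange its certificate for AWS credentials.
    iot.create_role_alias(roleAlias=role_alias, roleArn=tes_role_arn)

    # certificatePem and keyPair are what end up in the mounted certs directory.
    return cert
```

The returned certificate material is what the config.yaml below points at.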
GGC_ROOT_PATH=\/greengrass\/v2\nAWS_REGION=eu-west-1\nPROVISION=false\nCOMPONENT_DEFAULT_USER=ggc_user:ggc_group\nINIT_CONFIG=\/tmp\/config\/config.yaml\n<\/code><\/pre>\n\n\n\nThe config.yaml contains all the references needed by the core device to find the location of the certificates and the endpoints of IoT Core. My config file looks like this:<\/p>\n\n\n\n
system:\n certificateFilePath: \"\/tmp\/certs\/device.pem.crt\"\n privateKeyPath: \"\/tmp\/certs\/private.pem.key\"\n rootCaPath: \"\/tmp\/certs\/AmazonRootCA1.pem\"\n rootpath: \"\/greengrass\/v2\"\n thingName: \"P2BCGreengrassCore-1\"\nservices:\n aws.greengrass.Nucleus:\n componentType: \"NUCLEUS\"\n version: \"2.5.3\"\n configuration:\n awsRegion: \"eu-west-1\"\n iotRoleAlias: \"P2BCGreengrassCoreTokenExchangeRoleAlias\"\n iotDataEndpoint: \"xxxxxxxxxxxx-ats.iot.eu-west-1.amazonaws.com\"\n iotCredEndpoint: \"xxxxxxxxxxxx.credentials.iot.eu-west-1.amazonaws.com\"\n<\/code><\/pre>\n\n\n\nThe last thing we need to do is configure the docker-compose.yml file with the Docker image, the paths to the certificates and configurations directories, and where we want these volumes to be mounted. Here’s my Docker compose file:<\/p>\n\n\n\n
version: '3.7'\n \nservices:\n greengrass:\n init: true\n cap_add:\n - ALL\n build:\n context: .\n container_name: aws-iot-greengrass\n image: amazon\/aws-iot-greengrass:latest\n volumes:\n - .\/greengrass-v2-config:\/tmp\/config\/:ro\n - .\/greengrass-v2-certs:\/tmp\/certs:ro \n env_file: .env\n ports:\n - \"8883:8883\"\n<\/code><\/pre>\n\n\n\nNow we can run docker compose up, and our core device should start and connect to AWS.<\/p>\n\n\n\n
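One small thing that bit me: if a certificate or the config file is missing from the mounted directories, you only find out from the container logs. A tiny preflight check helps; the file list below is assumed from my setup above, so adapt it to yours:

```python
from pathlib import Path

# Files the docker-compose volumes expect, based on the snippets above
# (adapt the names to your own certs and config).
REQUIRED_FILES = [
    "greengrass-v2-config/config.yaml",
    "greengrass-v2-certs/device.pem.crt",
    "greengrass-v2-certs/private.pem.key",
    "greengrass-v2-certs/AmazonRootCA1.pem",
    ".env",
]

def preflight(base_dir="."):
    """Return the files still missing before `docker compose up` can work."""
    base = Path(base_dir)
    return [f for f in REQUIRED_FILES if not (base / f).is_file()]

missing = preflight()
if missing:
    print("Missing before startup:", ", ".join(missing))
```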