{"id":599,"date":"2018-10-10T10:53:40","date_gmt":"2018-10-10T08:53:40","guid":{"rendered":"https:\/\/blog.besharp.it\/go-serverless-part-3-event-driven-software-e-triggers\/"},"modified":"2023-02-22T17:15:16","modified_gmt":"2023-02-22T16:15:16","slug":"go-serverless-part-3-event-driven-software-e-triggers","status":"publish","type":"post","link":"https:\/\/blog.besharp.it\/go-serverless-part-3-event-driven-software-e-triggers\/","title":{"rendered":"GO Serverless! Part 3: event-driven software e triggers"},"content":{"rendered":"
\n
\n
\n

Go to part 1<\/a>\u00a0|\u00a0Go to part 2<\/a><\/p>\n

This is the last part of our 3-article series explaining how to set up a Serverless File Sharing Platform<\/strong>. In this article, we are going to focus on the code<\/strong> we need to build both the front-end and the back-end of our software. Moreover, we will dive deep into the software architecture: we will explain the reasoning behind its design and learn more about triggers<\/strong> on AWS, which are essential to make our solution work.<\/p>\n

As anticipated in the first article, we will also examine in depth the Continuous Integration and Continuous Delivery pipelines, so that we will be able to run trusted and repeatable deploys simply by pushing to the repository.<\/p>\n

In our second article<\/a>, we built an infrastructure similar to the one we are about to fine-tune in the current article: we first need to remove the API Gateway created as a test, since the deploy of the back-end resources will be managed by the automatic pipeline<\/strong>.<\/p>\n<\/div>\n<\/div>\n<\/section>\n

\n
To complete the deploy of our solution:<\/div>\n
\n
\n
    \n
  1. Download the back-end and front-end source code<\/strong>;<\/li>\n
  2. Set up 2 CodeCommit repositories<\/strong>: one for the back-end and one for the front-end;<\/li>\n
  3. \u00a0Configure the Pipelines<\/strong>;<\/li>\n
  4. Push the code to the repositories to start the automatic deploy<\/strong>.<\/li>\n<\/ol>\n

    Here are the links to the repositories you will need in order to clone the source code:<\/p>\n

    Front-end<\/strong>:\u00a0https:\/\/github.com\/besharpsrl\/serverless-day-fe<\/a><\/p>\n

    Back-end<\/strong>:\u00a0https:\/\/github.com\/besharpsrl\/serverless-day-be<\/a><\/p>\n<\/div>\n<\/div>\n<\/section>\n

    \n
    Let\u2019s have a look at the software development best practices.<\/div>\n
    \n
    \n

    Front-end<\/h3>\n

    The front-end, built with React.js<\/strong>, is an essential part of our application. It allows users to log in to a secure space where they can upload, share and download documents.<\/p>\n

    Follow the steps below to integrate the front-end with a serverless back-end<\/strong>. In this way, the user's actions on the interface will actually be reflected on Amazon S3 and on the DynamoDB tables.<\/p>\n

    Let's start by cloning the source code<\/strong> from the repository or downloading the zip file. The project structure should be similar to the following:<\/p>\n

    [Image: front-end project structure]\n

    Make sure you are in the project root before launching the following commands:<\/p>\n

    npm install\r\nnpm start<\/pre>\n

    We can now go on and configure the AWS Amplify library<\/strong>.<\/p>\n

    Amplify is a library recommended by AWS that frees developers from the tasks most closely tied to the underlying AWS services: in this particular case, you won't need to take care of the Cognito, S3 and API Gateway integrations by hand anymore.<\/p>\n

    To properly configure the library, we created a simple custom component that we will use later as a configuration file.<\/p>\n

    [Image: AWS Amplify configuration component]\n

    All the AWS Amplify configuration parameters must be replaced with the IDs and ARNs created earlier (see our previous article<\/a>).<\/p>\n

    Once the setup is completed and all the prerequisites and dependencies are installed, you can start the application and get the following\u00a0login page<\/strong>:<\/p>\n

    [Image: login page]\n

    By manually adding users to the CognitoUserPool<\/strong>, you will be able to log in and reach the main application view:<\/p>\n

    [Image: main application view]\n

    The code we made available is extensively commented, to make it as clear as possible.<\/p>\n

    An interesting trait of the React.js development setup is the auto reload<\/strong>: each change made to the code, to the CSS or even to the libraries is immediately visible in the application shown in the browser.<\/p>\n

    The approach of React.js to the DOM is another useful feature: in a traditional Javascript front-end, it is up to the developer to take care of DOM changes; with React.js, instead, this task is completely managed by the framework.<\/p>\n

    Each React.js component is essentially a Javascript object whose context, represented by \u201cthis<\/em>\u201d, gives access to a public property called \u201cstate<\/em>\u201d, which represents the current state of the component. Every time the state changes, the render method is invoked again. That is to say: the developer's direct intervention on the DOM is not needed<\/strong> anymore; for each \u201cstate<\/em>\u201d change, the framework takes care of everything by itself.<\/p>\n

    A clear example of this is Table.js, where modals are shown and hidden simply by updating the component state, something that would otherwise require manual DOM manipulation.<\/p>\n

    Back-end<\/h3>\n

    The back-end is\u00a0served through API Gateway and Lambda<\/strong>.<\/p>\n

    Each deploy may require changes both to API Gateway and to Lambda (depending on the changes made to the source code or to the back-end structure). For this reason, we looked for a simple and effective way to manage the AWS stack automatically.<\/p>\n

    This need guided our choice of framework: Chalice<\/strong><\/a>.<\/p>\n

    Chalice is a Python framework designed for AWS<\/strong> which incorporates a router similar to Flask's and a decorator system that implements the integrations with the supported AWS services.<\/p>\n
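
    To give an idea of how concise Chalice code is, here is a minimal, self-contained example (a generic hello-world sketch, not taken from our repository; the application name is hypothetical):<\/p>\n

# Minimal Chalice application (illustrative sketch)
from chalice import Chalice

app = Chalice(app_name='hello-chalice')  # hypothetical application name


@app.route('\/')
def index():
    # Dictionaries returned by a view are automatically serialized to JSON
    return {'hello': 'world'}<\/pre>\n

    Running chalice deploy<\/em> on a project like this is enough to obtain a working API Gateway endpoint backed by a Lambda function.<\/p>\n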

    It is possible (and simple!) to add AWS triggers to our back-end through simple declarations in order to invoke methods responding to specific events or to configure complex integrations in a completely managed way (e.g., CognitoUserPool integration or API Gateway integrations).<\/p>\n

    You will find the complete solution in the repository, together with all the code, the configuration files and the utility files.<\/p>\n

    Thanks to the Dockerfile and the docker-compose file<\/strong> we included, you won't need to set up an environment or install dependencies: the application will start automatically.<\/p>\n

    To start the application using Docker, just move to the solution root and invoke:<\/p>\n

    docker-compose up<\/pre>\n

    If you prefer to avoid Docker and install the dependencies on your own system, you first need to install Python 3.6 and pip, and then all the dependencies listed in requirements.txt.<\/p>\n

    If it is not possible to use Docker, we suggest you use a dedicated VirtualEnv:<\/p>\n

    pip3 install -r requirements.txt --user<\/pre>\n

    The project structure should be similar to the one in the following picture:<\/p>\n

    [Image: back-end project structure]\n

    Let's say a few words about the \u201cchalicelib<\/em>\u201d folder: it is used to add custom modules and classes to the project so that they are packaged and recognized by the framework.<\/p>\n

    The file named \u201cconfig.py<\/em>\u201d is used to configure all the account-dependent parameters that Chalice and the application need to know.<\/p>\n
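
    As a reference, a config.py<\/em> for this kind of project might look like the sketch below; apart from S3_UPLOADS_BUCKET_NAME (which is referenced later by the S3 trigger), the parameter names and values are purely illustrative and must be replaced with those of your own AWS account:<\/p>\n

# chalicelib\/config.py - illustrative sketch, names and values are placeholders
AWS_REGION = 'eu-west-1'

# Cognito User Pool used by API Gateway to authenticate requests
COGNITO_USER_POOL_ARN = 'arn:aws:cognito-idp:eu-west-1:123456789012:userpool\/eu-west-1_XXXXXXXXX'

# S3 bucket receiving the users' uploads
S3_UPLOADS_BUCKET_NAME = 'my-sharing-uploads-bucket'

# DynamoDB table storing file metadata and sharing information
DYNAMODB_TABLE_NAME = 'my-sharing-files-table'<\/pre>\n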

    To maintain order and to keep the code as simple as possible, we divided the modules into services<\/em>, controllers<\/em>, and connectors<\/em>. With such an organization it is possible to keep the app.py<\/em> code, which contains the properly decorated back-end routes, DRY.<\/p>\n

    All the business logic is contained in the \u201ccontrollers<\/em>\u201d and \u201cservices<\/em>\u201d folders; only the routes and the output handling live in app.py<\/em>.<\/p>\n

    We implemented the interface that communicates with the database in a separate module (a connector), so that it is easy to optimize, modify or replace it.<\/p>\n
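
    To illustrate the idea, a minimal connector for DynamoDB could look like the following sketch; it is not the repository's actual code, and the table schema, key names and config parameters are hypothetical:<\/p>\n

# chalicelib\/connectors\/dynamo_connector.py - illustrative sketch, not the actual implementation
import boto3
from boto3.dynamodb.conditions import Key

from chalicelib import config


class DynamoConnector:
    # Thin wrapper around the DynamoDB table, easy to optimize, modify or replace

    def __init__(self):
        dynamodb = boto3.resource('dynamodb', region_name=config.AWS_REGION)
        self.table = dynamodb.Table(config.DYNAMODB_TABLE_NAME)

    def put_file_record(self, item):
        # item is a plain dict describing an uploaded file (owner, key, expiry, ...)
        self.table.put_item(Item=item)

    def get_files_by_owner(self, owner):
        # Assumes 'owner' is the partition key of the (hypothetical) table
        response = self.table.query(KeyConditionExpression=Key('owner').eq(owner))
        return response.get('Items', [])<\/pre>\n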

    Chalice takes care of both the deploy and the setup of API Gateway and Lambda on its own; the configuration is expressed idiomatically in the code.<\/p>\n

    Once the project is ready, we can start the automatic deploy by invoking the following command:<\/p>\n

    chalice deploy<\/pre>\n

    Let\u2019s focus on the most interesting part of our project:\u00a0decorators<\/strong>.<\/p>\n

    Thanks to the decorators we will be able to integrate AWS services in a simple way by using triggers and API Gateway configurations.<\/p>\n

    All the routes and triggers are defined in the main back-end file: app.py<\/em>.<\/p>\n

    As you can see from the code, there is a great number of decorators. They are used by the framework to declare routes and integrations with AWS.<\/p>\n

    The decorators used in this project are\u00a0@app.route<\/strong>,\u00a0@app.on_s3_event<\/strong>,\u00a0@app.schedule<\/strong>.<\/p>\n

    @app.route<\/h3>\n

    Chalice deploys the back-end (except for the methods that react to events) as a single Lambda function. The framework includes a router that invokes the right method for each request. The @app.route<\/em> decorator refers to this router: it instructs the framework to create and manage a route on API Gateway responding to one or more HTTP verbs. The decorated method will then be invoked by the router every time a request matching the defined route occurs.<\/p>\n

    In addition to creating the route, the decorator accepts a number of parameters that control the integration with API Gateway. For example, it is possible to define the CORS settings or an integration with a CognitoUserPool for API authentication.<\/p>\n

    Here is an example of the definition of an\u00a0authenticated method with authorizer and CORS settings<\/strong>:<\/p>\n

    @app.route('\/test', methods=['GET'], authorizer=authorizer, cors=CORS_CONFIG)\r\ndef mymethod():\r\n    return {'message': 'ok'}<\/pre>\n
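
    The authorizer<\/em> and CORS_CONFIG<\/em> objects used above would typically be defined once in app.py<\/em>. A plausible definition is sketched below; the application name, pool name and allowed origin are placeholders, and the User Pool ARN comes from the illustrative config.py<\/em> shown earlier:<\/p>\n

# Illustrative sketch: wiring the Cognito authorizer and the CORS settings in app.py
from chalice import Chalice, CognitoUserPoolAuthorizer, CORSConfig

from chalicelib import config

app = Chalice(app_name='serverless-day-be')

# Delegate API authentication to the Cognito User Pool created in the previous article
authorizer = CognitoUserPoolAuthorizer(
    'ServerlessDayPool',                          # logical name, placeholder
    provider_arns=[config.COGNITO_USER_POOL_ARN]
)

# Allow the front-end origin to call the API from the browser
CORS_CONFIG = CORSConfig(
    allow_origin='https:\/\/my-frontend.example.com',  # placeholder origin
    allow_headers=['Authorization', 'Content-Type']
)<\/pre>\n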

    @app.on_s3_event<\/h3>\n

    It's now time to discover how to tell the framework to invoke a specific method when an S3 event occurs. We take advantage of this trigger in our project in order to transform the users' uploaded files. The @app.on_s3_event<\/em> decorator takes as input the bucket to watch and the events we are interested in.<\/p>\n

    Here is an example:<\/p>\n

    @app.on_s3_event(bucket=config.S3_UPLOADS_BUCKET_NAME, \r\nevents=['s3:ObjectCreated:*'])\r\ndef react_to_s3_upload(event):\r\n     \u2026<\/pre>\n

    When this decorator is used, the deploy also creates a separate Lambda function, distinct from the main one, with an S3 trigger attached to it. This Lambda is automatically invoked by AWS every time an<\/p>\n

    s3:ObjectCreated:*<\/pre>\n

    event takes place on the specified S3 bucket.<\/p>\n
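
    Inside the handler, Chalice exposes the bucket name and the object key of the file that triggered the event. What the handler actually does with them is project-specific; the following is only a sketch (the logging and the final step are illustrative, not the repository's actual logic):<\/p>\n

# Illustrative sketch of an S3 event handler; in the real project this lives in app.py
import boto3
from chalice import Chalice

from chalicelib import config

app = Chalice(app_name='serverless-day-be')  # the same app object defined at the top of app.py
s3_client = boto3.client('s3')


@app.on_s3_event(bucket=config.S3_UPLOADS_BUCKET_NAME,
                 events=['s3:ObjectCreated:*'])
def react_to_s3_upload(event):
    # event.bucket and event.key identify the object that was just created
    metadata = s3_client.head_object(Bucket=event.bucket, Key=event.key)
    app.log.info('New upload: %s (%d bytes)', event.key, metadata['ContentLength'])
    # ...here the file record would be created or updated through the DynamoDB connector...<\/pre>\n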

    @app.schedule<\/h3>\n

    As the name \u201c@app.schedule<\/em>\u201d suggests, this decorator defines a schedule or a time interval at which the method is run. It creates a CloudWatch Events rule<\/strong> and manages it so that it starts a dedicated Lambda function containing the method to be run.<\/p>\n

    @app.schedule(Rate(5, unit=Rate.MINUTES))\r\ndef expire_files(event):<\/pre>\n
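
    The body of the scheduled method is omitted above. Purely as a sketch, assuming a hypothetical DynamoDB layout where every shared file carries an expires_at<\/em> timestamp, an owner and an object key, it could look like this:<\/p>\n

# Illustrative sketch of the scheduled clean-up, assuming a hypothetical table layout
import time

import boto3
from chalice import Chalice, Rate

from chalicelib import config

app = Chalice(app_name='serverless-day-be')  # the same app object defined at the top of app.py


@app.schedule(Rate(5, unit=Rate.MINUTES))
def expire_files(event):
    # Delete the shared files whose expiration timestamp has passed
    dynamodb = boto3.resource('dynamodb', region_name=config.AWS_REGION)
    table = dynamodb.Table(config.DYNAMODB_TABLE_NAME)   # hypothetical table
    s3_client = boto3.client('s3')

    now = int(time.time())
    # A scan is enough for a small table; an index on expires_at would scale better
    expired = table.scan(
        FilterExpression='expires_at < :now',
        ExpressionAttributeValues={':now': now}
    ).get('Items', [])

    for item in expired:
        s3_client.delete_object(Bucket=config.S3_UPLOADS_BUCKET_NAME, Key=item['s3_key'])
        table.delete_item(Key={'owner': item['owner'], 's3_key': item['s3_key']})<\/pre>\n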

    CD\/CI Pipelines<\/h3>\n

    At this point it would theoretically already be possible to upload the compiled front-end to the S3 bucket created in our two previous articles and to run a Chalice deploy, making sure everything is properly configured. However, we want to complete our architecture by setting up automatic code deploy pipelines<\/strong>.<\/p>\n

    We already wrote about CD\/CI in this article<\/a>.<\/p>\n

    However, the solution we are going to focus on today is slightly different.<\/p>\n

    First, let's create a CodeBuild project for the front-end and one for the back-end. We need to set up CodeBuild so that it uses the buildspec.yml<\/em> file contained in each repository root.<\/p>\n

    The front-end CodeBuild project can be based on a standard Node.js image, while the back-end one can be based on a Python 3.6 image.<\/p>\n

    [Images: CodeBuild project configuration for the front-end and the back-end]\n

    There is a buildspec.yml<\/em> file in each repository; it defines the CodeBuild steps needed to produce the package to be deployed, together with the related deploy instructions.<\/p>\n

    Once the two projects are provisioned, the only thing left to do is to create 2 Pipelines<\/strong> (one for each application). Their task is to download the source code from CodeCommit and pass it to the corresponding CodeBuild project. In neither of the two Pipelines do you need to specify a deploy stage: the deploy instructions are already contained in the buildspec.yml<\/em> file, so they run automatically after the build process.<\/p>\n

    Consequently, the IAM roles associated with the CodeBuilds must have the following permissions:<\/p>\n