{"id":618,"date":"2019-04-05T14:28:27","date_gmt":"2019-04-05T12:28:27","guid":{"rendered":"https:\/\/blog.besharp.it\/aws-fargate-services-deployment-with-continuous-delivery-pipeline\/"},"modified":"2021-03-29T17:02:41","modified_gmt":"2021-03-29T15:02:41","slug":"aws-fargate-services-deployment-with-continuous-delivery-pipeline","status":"publish","type":"post","link":"https:\/\/blog.besharp.it\/aws-fargate-services-deployment-with-continuous-delivery-pipeline\/","title":{"rendered":"AWS Fargate Services Deployment with Continuous Delivery Pipeline"},"content":{"rendered":"
In this article, we explain how we created a Continuous Delivery (CD) pipeline capable of producing a Docker image for deployment on AWS ECS Fargate.<\/p>\n
With the emergence of\u00a0AWS Fargate<\/strong>, the realization of container-based services finally takes on a whole new meaning. In fact, before Fargate\u2019s release, the only way to use Amazon ECS was to provision a cluster of EC2 instances and manage it yourself (software, updates, and configuration). This type of solution means sustaining the costs of the cluster, planning for oversizing to allow tasks to scale, and lastly, configuring and maintaining a valid autoscaling system to avoid running short of container resources.<\/p>\n AWS Fargate allows all of this\u00a0management overhead to be handed over to AWS<\/strong>, i.e., to launch container-based services while paying only for the actual execution time. No need to worry about the underlying cluster\u200a\u2014\u200athe focus can instead be placed on service development.<\/p>\n With AWS Fargate, AWS is making the container a top-tier object in computing solutions.<\/p>\n Automating the deployment of container-based services is fundamental to taking full advantage of the potential of AWS Fargate and the AWS Cloud.<\/p>\n Here is our solution for implementing a CD pipeline that puts every push on the selected repository branch into production.<\/p>\n Key infrastructure services include:<\/p>\n Amazon Elastic Container Service (Amazon ECS)<\/strong>\u00a0is a container orchestration service. It supports Docker and allows for\u00a0easily running and scaling applications<\/strong>. 
AWS Fargate makes it easy to start and orchestrate container-based services on fully AWS-managed clusters, paying on a per-container basis.<\/p>\n Amazon Elastic Container Registry (ECR)<\/strong>\u00a0is a\u00a0fully managed Docker image registry<\/strong>\u00a0that makes it easy for developers to store, manage, and distribute Docker container images.<\/p>\n The Elastic Load Balancing service can be used to route traffic to the containers.<\/p>\n AWS Elastic Load Balancing<\/strong>\u00a0automatically routes incoming application traffic across multiple targets, including EC2 instances, containers, IP addresses, and Lambda functions.<\/p>\n Elastic Load Balancing offers\u00a0three types of load balancing systems:<\/strong><\/p>\n The Application Load Balancer (ALB)<\/strong>\u00a0is the load balancer of choice for the services released on Fargate.<\/p>\n Application Load Balancers are suitable for\u00a0balancing HTTP and HTTPS traffic<\/strong>. They offer advanced request routing aimed at modern architectures, e.g., microservices and containers. These systems operate at the level of individual requests (layer 7) and route traffic based on the content of the request.<\/p>\n Without further delay, let us move on to the tutorial for creating a fully automated release pipeline.<\/p>\n Throughout the rest of the article, we will assume that the entire project code is in a\u00a0CodePipeline-compatible repository.<\/strong><\/p>\n First, prepare an image of our service so that it can be tested both locally and on AWS.<\/p>\n It is, therefore, necessary\u00a0to add a Dockerfile to the project<\/strong>, which will then be published in the repository. 
The file must contain the instructions for building a container with all the software, dependencies, libraries, and configuration, as well as the package with our service.<\/p>\n This container can be safely tested locally or in a controlled environment in order to verify that it works properly.<\/p>\n Once the local tests are satisfactory, one can proceed with the creation of an image and its publication on Amazon ECR.<\/p>\n Next comes the\u00a0creation of an ECR repository<\/strong>, and the only data it requires is a valid name. Our service requires a\u00a0Load Balancer<\/strong>\u00a0to route traffic between replica containers.<\/p>\n For this reason, we need to\u00a0create an Application Load Balancer<\/strong>\u00a0whose configuration can be left essentially empty. Defining behavioral details of the ALB is unnecessary because ECS will manage it dynamically during the containers\u2019 scaling operations.<\/p>\n As for ECS, the first thing to do is to create a cluster.\u00a0Clusters<\/strong> are nothing more than objects used to logically group services. Access the ECS dashboard and select \u201ccreate cluster\u201d.<\/p>\n From the wizard, choose \u201cNetworking only\u201d. This configuration tells AWS to use AWS Fargate for this virtual cluster.<\/p>\n In the second and last wizard step, choose a name and, if desired, create a new VPC; otherwise, use one that has already been configured on your account. The next step is to\u00a0create a task definition.<\/strong>\u00a0This object collects information about the task, i.e., name, description, IAM roles for deployment and execution, the size of the task in terms of RAM and CPU, and the specifications of the container that will host it.<\/p>\n Select the Docker image previously saved on ECR to configure the container.<\/p>\n Simply select \u201cCreate task definition\u201d from the appropriate screen in the ECS area.<\/p>\n It is essential to choose AWS Fargate in the wizard\u2019s first step. 
Then input the requested data following the instructions and provide adequate sizing for the task.<\/p>\n The last object to be configured is the service (Service).<\/strong><\/p>\n A service is defined by a task and a set of parameters that specify the minimum, desired, and maximum number of task instances required for the service to function correctly.<\/p>\n<\/div>\n The creation procedure is no different from that of the other objects configured so far.<\/p>\n Take care when selecting the VPC, subnets, and previously created load balancer.<\/p>\n At the end of the configuration, the service should be reachable by pointing the browser at the ALB URL.<\/p>\n Once the entire environment has been manually configured, it is possible to\u00a0create and configure a pipeline that automatically deploys each code change.<\/strong><\/p>\n The file named\u00a0buildspec.yml<\/strong>\u00a0needs to be added to the repository root before starting the pipeline configuration. The purpose of the file is to contain the instructions for building a new image of our service.<\/p>\n In fact, we want to automate what was previously performed by hand: building the Docker image from the Dockerfile and the code, uploading it to ECR, and lastly, updating ECS to deploy the service using the new image.<\/p>\n Here is a sample version of the file to be added (buildspec.yml):<\/p>\n The parts to be edited with the specific names of your project are marked as placeholders.<\/em><\/p>\n CodePipeline is the AWS service for implementing this automation with very low management and configuration effort. 
We will rely on CodeBuild to execute the instructions in buildspec.yml and perform the image build operations.<\/p>\n So let us start with\u00a0the pipeline configuration<\/strong>\u00a0by creating a new one on CodePipeline:<\/p>\n At this point, the pipeline will try to run automatically, and it will\u00a0fail.<\/em><\/p>\n This is an expected failure:<\/strong>\u00a0the reason lies in the fact that the wizard has created a CodeBuild role for us. However, this role does not have all the permissions necessary to push the image to ECR.<\/p>\n To solve this, identify the generated role, whose name follows the convention codebuild-build-project-name-service-role, and add the following permission:<\/p>\n AmazonEC2ContainerRegistryPowerUser<\/strong>, which allows CodeBuild to push images to ECR.<\/p>\n If everything worked as expected, the pipeline will now function, and the service will be automatically updated at every commit on the chosen branch.<\/p>\n Containers are increasingly at the center of the DevOps scene. As a result, it is important to know the available tools in order to make thoughtful and effective project choices. 
We hope to have been helpful to you in this regard.<\/p>\n Please share with us your results, observations, doubts, and ideas\u2026 Our team is looking forward to furthering this topic with you!<\/p>\n [ATTENTION, SPOILER!]<\/em><\/p>\n If you are intrigued by this subject, keep following us:\u00a0<\/em>a creative way to get a highly personalized automatic pipeline that uses containers as a means of automation is coming your way.<\/em><\/strong><\/p>\n Stay tuned!\u00a0\ud83d\ude09<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":" In this article, we explain how we have created a Continuous Delivery (CD) pipeline capable of producing a docker image […]<\/p>\n","protected":false},"author":8,"featured_media":636,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[242],"tags":[292,367,286,260,365,369],"class_list":["post-618","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-devops-en","tag-aws-codebuild-en","tag-aws-codecommit-en","tag-aws-codepipeline-en","tag-aws-fargate-en","tag-ci-cd-en","tag-continuous-delivery-en"],"yoast_head":"\n\n
Let us start with a short glossary:<\/h4>\n
\n
Part 1: Preparation of the first Docker\u00a0image<\/h4>\n
\nOur Docker image can then be uploaded to ECR by simply following the login and push instructions.<\/p>\n$(aws ecr get-login --no-include-email --region <region>)\r\ndocker build -t <image name> .\r\ndocker tag <image name>:latest <ecr url>:latest\r\ndocker push <ecr url>:latest<\/pre>\n Note that on version 2 of the AWS CLI, aws ecr get-login has been replaced by aws ecr get-login-password, whose output is piped into docker login.<\/p>\n
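As a reference, a minimal Dockerfile for a simple containerized web service might look like the following. This is only a sketch: the Python base image, the gunicorn dependency, and the `app:app` module are illustrative assumptions, so adapt them to your own stack.

```dockerfile
# Hypothetical Dockerfile for a minimal Flask/gunicorn service.
# Base image, packages, and the app module are illustrative.
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first to take advantage of Docker layer caching.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the service code into the image.
COPY . .

# The ALB target group will forward traffic to this port.
EXPOSE 80

CMD ["gunicorn", "--bind", "0.0.0.0:80", "app:app"]
```

Keeping the dependency installation in its own layer means that code-only changes rebuild quickly, which matters once the pipeline builds the image on every push.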
Part 2: Configuration of ECS Fargate and Networking<\/h4>\n
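The task definition configured through the wizard above can also be expressed as JSON and registered from the command line. The following is a minimal sketch in which the family, account ID, role ARN, image URL, container name, and CPU/memory sizing are purely illustrative:

```json
{
  "family": "my-service",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "my-container",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-service:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
```

A file like this can be registered with `aws ecs register-task-definition --cli-input-json file://taskdef.json`, which is handy when you later want to version the task definition alongside the code.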
Part 3: Automating the deployment<\/h4>\n
version: 0.2\r\nphases:\r\n  pre_build:\r\n    commands:\r\n      - echo Logging in to Amazon ECR...\r\n      - aws --version\r\n      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)\r\n      - REPOSITORY_URI=<REPLACE THIS TEXT WITH THE URL OF THE IMAGE USED ON ECR>\r\n      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)\r\n      - IMAGE_TAG=${COMMIT_HASH:=latest}\r\n  build:\r\n    commands:\r\n      - echo Build started on `date`\r\n      - echo Building the Docker image...\r\n      - docker build -t $REPOSITORY_URI:latest .\r\n      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG\r\n  post_build:\r\n    commands:\r\n      - echo Build completed on `date`\r\n      - echo Pushing the Docker images...\r\n      - docker push $REPOSITORY_URI:latest\r\n      - docker push $REPOSITORY_URI:$IMAGE_TAG\r\n      - echo Writing image definitions file...\r\n      - printf '[{\"name\":\"<replace this text with the container name used in the task definition>\",\"imageUri\":\"%s\"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json\r\nartifacts:\r\n  files: imagedefinitions.json<\/pre>\n
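The tag-derivation and image-definitions steps of the buildspec can be tried locally before wiring up the pipeline. In the sketch below, the commit hash, repository URI, and container name are illustrative placeholders; inside CodeBuild, `CODEBUILD_RESOLVED_SOURCE_VERSION` is set by the build environment.

```shell
# Local dry run of the buildspec's tag derivation and imagedefinitions.json
# generation. All concrete values here are illustrative.
CODEBUILD_RESOLVED_SOURCE_VERSION="0123456789abcdef0123456789abcdef01234567"
REPOSITORY_URI="123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-service"

# Use the first 7 characters of the commit as the image tag, falling back
# to "latest" if the variable is empty or unset.
COMMIT_HASH=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c 1-7)
IMAGE_TAG=${COMMIT_HASH:=latest}

# Write the file that CodePipeline's ECS deploy action consumes;
# "my-container" stands in for the container name from the task definition.
printf '[{"name":"%s","imageUri":"%s"}]' "my-container" "$REPOSITORY_URI:$IMAGE_TAG" > imagedefinitions.json
cat imagedefinitions.json
```

The resulting imagedefinitions.json maps the container name to the freshly pushed image tag, which is exactly what the ECS deploy stage reads to roll out the new revision.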
\n
\nChoose\u00a0Ubuntu<\/strong>\u00a0as the operating system and\u00a0Docker<\/strong>\u00a0as the runtime.<\/li>\n