{"id":1930,"date":"2020-10-30T10:53:58","date_gmt":"2020-10-30T09:53:58","guid":{"rendered":"https:\/\/blog.besharp.it\/?p=1930"},"modified":"2021-05-10T15:48:31","modified_gmt":"2021-05-10T13:48:31","slug":"how-to-setup-a-continuous-deployment-pipeline-on-aws-for-ecs-blue-green-deployments","status":"publish","type":"post","link":"https:\/\/blog.besharp.it\/how-to-setup-a-continuous-deployment-pipeline-on-aws-for-ecs-blue-green-deployments\/","title":{"rendered":"How to setup a Continuous Deployment Pipeline on AWS for ECS Blue\/Green Deployments."},"content":{"rendered":"\n
Continuous Deployment is nowadays a well-known strategy for releasing software, in which any commit that passes the automated testing phase is automatically released to production.<\/p>\n\n\n\n With Continuous Deployment, companies can eliminate the do-it-yourself effort of Continuous Delivery, increase their focus on the product, make deployments frictionless without compromising security, and create a smooth workflow across development, testing, and production environments.<\/p>\n\n\n\n In our last article, we talked about microservices, their benefits, and how to set up a Blue\/Green Deployment on AWS for an ECS service.<\/p>\n\n\n\n Blue\/Green Deployment is a technique where the old infrastructure (blue) and the new temporary infrastructure (green) co-exist. Once a new version is installed, it is possible to carry out integration\/validation tests on the green infrastructure before promoting it to production. If the tests pass, the traffic switch can be done with virtually no downtime.\n<\/p>\n\n\n\n This time we want to take another step forward from our last article and, as promised, show how to make the process automatic by defining a Continuous Deployment pipeline that manages the entire release flow.\n\nIn short, starting from a simple git push, we want to release the new software package in Blue\/Green mode through an ECS service.<\/p>\n\n\n\n At the end, we\u2019ll also propose two bonus sections: how to automate testing on the green environment, and how to skip some of the initial infrastructure boilerplate thanks to AWS CloudFormation.<\/p>\n\n\n\n Are you ready? Let\u2019s dig in!<\/p>\n\n\n\n Requirements<\/h1>\n\n\n\n Before you start preparing your pipeline, a few things must already be in place, ready to be configured to your needs:\n<\/p>\n\n\n\n Note: the following are simplified steps covering the last prerequisite; for more in-depth instructions, follow the steps provided in our previous article.<\/p>\n\n\n\n Go to your AWS account and select ECS from the search bar; then click \u201cClusters\u201d in the left panel and \u201cCreate Cluster\u201d in the new window. Leave \u201cNetworking only\u201d selected, since we want to use Fargate, and click \u201cNext\u201d.<\/p>\n\n\n\n Type your cluster name, leave all settings at their defaults, and optionally add some meaningful tags. Click \u201cCreate\u201d to generate a new blank cluster.<\/p>\n\n\n\n Another prerequisite is the Task Definition, which will host our Docker containers.<\/p>\n\n\n\n Go to \u201cTask Definitions\u201d under the \u201cAmazon ECS\u201d menu, click \u201cCreate new Task Definition\u201d, select \u201cFargate\u201d as shown in the image below, and click \u201cNext Step\u201d:\n<\/p>\n\n\n\n For now, we can assign the default roles to both the Task Role and the Task Execution Role, since they are sufficient for the operations we have to perform. Select the minimum values for memory and CPU (0.5 GB and 0.25 vCPU for this example).<\/p>\n\n\n\n We then create a container and associate it with the Docker image previously saved on ECR (see the last article), configuring its vCPU and memory with the same values as our task definition.<\/p>\n\n\n\n Select \u201cAdd Container\u201d.<\/p>\n\n\n\n A sidebar will open. 
Set a name for the container. For the image, open a new tab, navigate to the ECR dashboard, select your previously created image, and copy its URI; assign that value to \u201cImage URI\u201d.<\/p>\n\n\n\n Then add 3000 for the tcp protocol in \u201cPort mappings\u201d, leave all other parameters at their defaults, and click \u201cAdd\u201d. Finally, save your task definition with \u201cCreate\u201d.<\/p>\n\n\n\n Start by going to the cluster you created in the ECS console, click on its name, and in the bottom area of the dashboard, under the \u201cServices\u201d tab, click \u201cCreate\u201d.<\/p>\n\n\n\n In the new area, configure the options as follows:<\/p>\n\n\n\n 1. Launch Type: FARGATE<\/p>\n\n\n\n 2. Task Definition: <YOUR_TASK_DEFINITION><\/p>\n\n\n\n 3. Cluster: <YOUR_CLUSTER><\/p>\n\n\n\n 4. Service Name: <A_NAME_FOR_THE_SERVICE><\/p>\n\n\n\n 5. Number of Tasks: 1<\/p>\n\n\n\n 6. Deployment Type: Blue\/Green<\/p>\n\n\n\n 7. Deployment Configuration: CodeDeployDefault.ECSAllAtOnce<\/p>\n\n\n\n 8. Service Role for CodeDeploy: <A_SUITABLE_ROLE_WITH_ECS_PERMISSIONS><\/p>\n\n\n\n Leave the rest of the options at their defaults and click \u201cNext Step\u201d. In the new section, select a suitable VPC, one or more of its subnets, and enable auto-assign public IP.\n<\/p>\n\n\n\n Then we need to configure an Application Load Balancer for our cluster. Select an existing one or create a new one from the EC2 console. Then select your container, making sure that it shows your mapped port.<\/p>\n\n\n\n After selecting your container, click \u201cAdd to load balancer\u201d.<\/p>\n\n\n\n Select 8080 for \u201cProduction Listener Port\u201d and 8090 for \u201cTest Listener Port\u201d, and select your load balancer\u2019s target groups as shown in the figure (you\u2019ll have to configure them beforehand, or now in another tab, following this guide).<\/p>\n\n\n\n After that, you can go to the next step and leave autoscaling off (for this example). Finally, after the review check, your service will be created!\n<\/p>\n\n\n\n Now we have all the fundamental building blocks to create the pipeline in CodePipeline. Let\u2019s move on!\n<\/p>\n\n\n\n Start by pushing your sample application to your GitHub repository.<\/p>\n\n\n\n Go to your AWS account and select AWS CodePipeline from the services list. From the dashboard, click on \u201cCreate pipeline\u201d.\n<\/p>\n\n\n\n On the next screen, give a name to your pipeline; if you don\u2019t already have a suitable role, leave \u201cNew service role\u201d checked and the other options at their defaults, then click \u201cNext\u201d.\n<\/p>\n\n\n\n In the source stage, select \u201cGitHub version 2\u201d; you then have to connect to your GitHub repository. Please follow the instructions provided after clicking on \u201cConnect to GitHub\u201d. 
Remember to authorize only the repository of your solution, and make sure you are the owner of that repo; otherwise you won\u2019t be able to complete the process.<\/p>\n\n\n\n Once connected to GitHub, you\u2019ll be able to complete the stage as follows, setting repository and branch:\n<\/p>\n\n\n\n Click \u201cNext\u201d, and you\u2019ll be presented with the build stage, where we need to create our CodeBuild project to add to the pipeline.\n<\/p>\n\n\n\n This step must always produce an up-to-date Docker image of your codebase, so that the pipeline always deploys your latest code.\n<\/p>\n\n\n\n Start by giving a name to your build stage, then select CodeBuild as the \u201cAction provider\u201d, your region, and SourceArtifact as the \u201cInput artifact\u201d.<\/p>\n\n\n\n Then you need to create a new build project. Clicking on \u201cAdd project\u201d will bring up a screen similar to this:<\/p>\n\n\n\n Give a name to the project, leave \u201cManaged image\u201d with all the container properties as suggested, then check (this is very important) the \u201cPrivileged\u201d flag, which is required to build Docker images. Check your settings against the image below:<\/p>\n\n\n\n For the buildspec option, select the inline editor and paste these commands:<\/p>\n\n\n\n Note: the uppercase placeholders (YOUR_ECR_URI, YOUR_REGION) are the values you need to customize for your specific project.\n<\/p>\n\n\n\n After that, click \u201cOK\u201d, then add this CodeBuild project to your stage.\n<\/p>\n\n\n\n Start by selecting \u201cAmazon ECS (Blue\/Green)\u201d as the Deploy provider and a region for your project, then click on \u201cCreate application\u201d.\n<\/p>\n\n\n\n Give a name to the application and select \u201cAmazon ECS\u201d as the compute platform. After that, you\u2019ll be presented with a screen for creating a new deployment group.<\/p>\n\n\n\n Give a name to the deployment group, then select, in order:<\/p>\n\n\n\n A service role with suitable access.<\/p>\n\n\n\n The ECS cluster we created before.<\/p>\n\n\n\n The ECS service we created before.<\/p>\n\n\n\n The Application Load Balancer we created before, with port 8080 and Target Group 1 for the production environment, and port 8090 and Target Group 2 for the test environment.<\/p>\n\n\n\n Select a traffic rerouting strategy; for this example, use \u201cSpecify when to reroute traffic\u201d and select five minutes.<\/p>\n\n\n\n Click \u201cCreate\u201d, then return to your CodePipeline stage and select your newly created CodeDeploy application and CodeDeploy deployment group.<\/p>\n\n\n\n For \u201cInput artifacts\u201d, add BuildArtifact alongside SourceArtifact.<\/p>\n\n\n\n For \u201cAmazon ECS task definition\u201d and \u201cAWS CodeDeploy AppSpec file\u201d select SourceArtifact; then, in the last options, select BuildArtifact as the input artifact with image details and enter IMAGE as the placeholder text in the task definition. Click \u201cNext\u201d, review, and finally \u201cCreate pipeline\u201d.<\/p>\n\n\n\n We are almost there! 
To complete our pipeline we need to add a task definition and an appspec.yml to our application.<\/p>\n\n\n\n Create a new appspec.yml file in the root of your app\u2019s project and add the following code to it:<\/p>\n\n\n\n For the task definition file we can use a trick: we have already created a task definition in the prerequisites. Find it and click on \u201cEdit\u201d; you\u2019ll get a JSON editor. Copy all the text from there, paste it into a new taskdef.json file in the root of your project, and change these two lines:\n<\/p>\n\n\n\n Push everything to your repo.<\/p>\n\n\n\n Test your application before promoting it to production<\/p>\n\n\n\n To verify that the whole system is working as expected, just make a slight modification to the text on the main route of your application, commit, and wait until the pipeline finishes its tasks. Then check your URL on port 8090 and verify that it presents the updated version, while the URL on port 8080 does not. After the five minutes configured for rerouting (plus a little time for the switch), the production environment should also show the new version.\n<\/p>\n\n\n\n Your pipeline is now fully functional!<\/p>\n\n\n\n Bonus 1: apply automated testing through Lambda on your green environment<\/p>\n\n\n\n In the deploy phase, it is possible to associate one or more Lambda functions to assert the health and the functionality of your app before promoting the new version to production. This is done by configuring the deployment lifecycle hooks: you\u2019ll need to add a Lambda hook to AfterAllowTestTraffic, which runs after the test listener starts pointing at the green tasks and before production traffic is shifted.\n<\/p>\n\n\n\n Please refer to these guides by AWS to configure this extra touch with a simple test example:<\/p>\n\n\n\n We have walked through the prerequisites necessary to create an ECS cluster and its components; as we have seen, this part requires a lot of configuration, can be tedious, and is something we nonetheless want to be repeatable.<\/p>\n\n\n\n Therefore, a good idea is to create a CloudFormation template to automate and simplify the infrastructure creation process.<\/p>\n\n\n\n The following is a simplified snippet to help you get started.<\/p>\n\n\n\n This code is just a starting point: you\u2019ll need to handle parameter management yourself and add some tweaks for your specific project. If needed, also refer to these two links:\n<\/p>\n\n\n\n In this article, we have seen how to create a pipeline that makes Blue\/Green deployments on ECS completely automated.<\/p>\n\n\n\n We have also seen how Lambda functions can be used to automate the testing phase in the green environment.<\/p>\n\n\n\n To complete our tutorial, we\u2019ve also seen how an AWS CloudFormation template can be used to minimize boilerplate infrastructure creation, as well as to make it reusable and repeatable.\n<\/p>\n\n\n\n Since our aim was to trace a path to help you master automation and pipeline setup, this overview was kept deliberately simple; it is intended to be expanded and adapted by the reader to fit their particular needs.\n<\/p>\n\n\n\n Have you ever applied this configuration, or similar (and maybe more advanced) ones, to your pipelines? Let us know! We can\u2019t wait to hear from you!\n\nWe hope you enjoyed this reading and found it useful. 
\n<\/p>\n\n\n\n As always #Proud2beCloud meets you in two weeks.<\/p>\n\n\n\n Till then, happy deployments \ud83d\ude42<\/span><\/p>\n","protected":false},"excerpt":{"rendered":" Continuous Deployment is nowadays a well-known strategy for releasing software where any commit that passes the automated testing phase is […]<\/p>\n","protected":false},"author":14,"featured_media":1923,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[242],"tags":[378,292,294,286,409,369],"class_list":["post-1930","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-devops-en","tag-amazon-ecs-en","tag-aws-codebuild-en","tag-aws-codedeploy-en","tag-aws-codepipeline-en","tag-blue-green-deployment-en","tag-continuous-delivery-en"],"yoast_head":"\nRequirements<\/h1>\n\n\n\n
Create a new ECS Cluster<\/h2>\n\n\n\n
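If you prefer the AWS CLI to the console, the \u201cNetworking only\u201d cluster described above boils down to a single call (the cluster name is just an example):<\/p>\n\n\n\n aws ecs create-cluster --cluster-name my-bluegreen-cluster\n<\/pre>\n\n\n\n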
Create a new Task Definition<\/h2>\n\n\n\n
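As an alternative to the console wizard, once you have a taskdef.json file (one is shown later in this article), the same task definition can be registered from the CLI; a minimal sketch:<\/p>\n\n\n\n aws ecs register-task-definition --cli-input-json file:\/\/taskdef.json\n<\/pre>\n\n\n\n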
Create a new Service<\/h2>\n\n\n\n
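For reference, a Blue\/Green-enabled service differs from a standard one mainly in its deployment controller, which must be set to CodeDeploy. The console does this for you; the CLI equivalent is sketched below, with uppercase placeholders to adapt:<\/p>\n\n\n\n aws ecs create-service --cluster YOUR_CLUSTER --service-name YOUR_SERVICE --task-definition YOUR_TASK_DEFINITION --desired-count 1 --launch-type FARGATE --deployment-controller type=CODE_DEPLOY --load-balancers targetGroupArn=YOUR_TARGET_GROUP_ARN,containerName=YOUR_CONTAINER_NAME,containerPort=3000 --network-configuration "awsvpcConfiguration={subnets=[YOUR_SUBNET_ID],securityGroups=[YOUR_SECURITY_GROUP_ID],assignPublicIp=ENABLED}"\n<\/pre>\n\n\n\n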
Create the Deployment Pipeline<\/h1>\n\n\n\n
Create a new CodeBuild project<\/h2>\n\n\n\n
version: 0.2\nphases:\n  install:\n    runtime-versions:\n      java: corretto11\n  pre_build:\n    commands:\n      - REPOSITORY_URI=YOUR_ECR_URI\n      - echo $CODEBUILD_RESOLVED_SOURCE_VERSION\n      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)\n      - IMAGE_TAG=${COMMIT_HASH} # optional commit-based tag, unused below\n      - $(aws ecr get-login --no-include-email --region YOUR_REGION)\n  build:\n    commands:\n      - docker build -t $REPOSITORY_URI:latest .\n      - docker push $REPOSITORY_URI:latest\n      - printf '{"ImageURI":"%s"}' $REPOSITORY_URI:latest > imageDetail.json\nartifacts:\n  files:\n    - imageDetail.json\n<\/pre>\n\n\n\n
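Two notes on this buildspec. First, the imageDetail.json file it emits is the artifact the \u201cAmazon ECS (Blue\/Green)\u201d deploy action reads to resolve the IMAGE placeholder; with a hypothetical account and repository, it is a one-line JSON file like this:<\/p>\n\n\n\n {"ImageURI":"123456789012.dkr.ecr.eu-west-1.amazonaws.com\/my-repo:latest"}\n<\/pre>\n\n\n\n Second, aws ecr get-login only exists in AWS CLI v1; if your build image ships version 2 of the CLI, replace that command with the equivalent:<\/p>\n\n\n\n - aws ecr get-login-password --region YOUR_REGION | docker login --username AWS --password-stdin YOUR_ECR_URI\n<\/pre>\n\n\n\n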
Create a new CodeDeploy project<\/h2>\n\n\n\n
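The deployment group can also be created from the CLI. The following sketch is based on the JSON shapes documented for ECS Blue\/Green deployment groups; names, ARNs, and target group names are placeholders, so double-check the structure against the current AWS documentation before relying on it:<\/p>\n\n\n\n aws deploy create-deployment-group --application-name YOUR_APPLICATION --deployment-group-name YOUR_DEPLOYMENT_GROUP --service-role-arn YOUR_CODEDEPLOY_ROLE_ARN --deployment-config-name CodeDeployDefault.ECSAllAtOnce --deployment-style deploymentType=BLUE_GREEN,deploymentOption=WITH_TRAFFIC_CONTROL --ecs-services clusterName=YOUR_CLUSTER,serviceName=YOUR_SERVICE --load-balancer-info '{"targetGroupPairInfoList":[{"targetGroups":[{"name":"TARGET_GROUP_1"},{"name":"TARGET_GROUP_2"}],"prodTrafficRoute":{"listenerArns":["PROD_LISTENER_ARN"]},"testTrafficRoute":{"listenerArns":["TEST_LISTENER_ARN"]}}]}'\n<\/pre>\n\n\n\n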
version: 0.0\nResources:\n  - TargetService:\n      Type: AWS::ECS::Service\n      Properties:\n        TaskDefinition: <TASK_DEFINITION>\n        LoadBalancerInfo:\n          ContainerName: "YOUR_CONTAINER_NAME"\n          ContainerPort: 3000\n<\/pre>\n\n\n\n
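If you implement the automated test of Bonus 1, this same appspec.yml is also where the hook gets declared: append a Hooks section after Resources, pointing at a validation Lambda (the function name here is hypothetical). The function receives a DeploymentId and a LifecycleEventHookExecutionId in its event, and must report the test outcome back through CodeDeploy\u2019s PutLifecycleEventHookExecutionStatus API.<\/p>\n\n\n\n Hooks:\n  - AfterAllowTestTraffic: "arn:aws:lambda:YOUR_REGION:YOUR_ACCOUNT_ID:function:validateGreen"\n<\/pre>\n\n\n\n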
\"image\": \"<IMAGE>\"\n\"taskDefinitionArn\": \"<TASK_DEFINITION>\"<\/pre>\n\n\n\n
Bonus 2: automate prerequisite phase through CloudFormation<\/h1>\n\n\n\n
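Once the template is complete, creating or updating the whole prerequisite stack becomes a single, repeatable command; stack and file names below are just examples:<\/p>\n\n\n\n aws cloudformation deploy --template-file infrastructure.yml --stack-name ecs-bluegreen-prerequisites --capabilities CAPABILITY_IAM --parameter-overrides ProjectName=my-project\n<\/pre>\n\n\n\n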
LoadBalancer:\n  Type: AWS::ElasticLoadBalancingV2::LoadBalancer\n  Properties:\n    Name: !Ref ProjectName\n    LoadBalancerAttributes:\n      - Key: 'idle_timeout.timeout_seconds'\n        Value: '60'\n      - Key: 'routing.http2.enabled'\n        Value: 'true'\n      - Key: 'access_logs.s3.enabled'\n        Value: 'true'\n      - Key: 'access_logs.s3.prefix'\n        Value: loadbalancers\n      - Key: 'access_logs.s3.bucket'\n        Value: !Ref S3LogsBucketName\n      - Key: 'deletion_protection.enabled'\n        Value: 'true'\n      - Key: 'routing.http.drop_invalid_header_fields.enabled'\n        Value: 'true'\n    Scheme: internet-facing\n    SecurityGroups:\n      - !Ref LoadBalancerSecurityGroup\n    Subnets:\n      - !Ref SubnetPublicAId\n      - !Ref SubnetPublicBId\n      - !Ref SubnetPublicCId\n    Type: application\nHttpListener:\n  Type: AWS::ElasticLoadBalancingV2::Listener\n  Properties:\n    DefaultActions:\n      - RedirectConfig:\n          Port: '443'\n          Protocol: HTTPS\n          StatusCode: 'HTTP_301'\n        Type: redirect\n    LoadBalancerArn: !Ref LoadBalancer\n    Port: 80\n    Protocol: HTTP\nHttpsListener:\n  Type: AWS::ElasticLoadBalancingV2::Listener\n  Properties:\n    Certificates:\n      - CertificateArn: !Ref LoadBalancerCertificateArn\n    DefaultActions:\n      - Type: forward\n        TargetGroupArn: !Ref TargetGroup\n    LoadBalancerArn: !Ref LoadBalancer\n    Port: 443\n    Protocol: HTTPS\nTargetGroup:\n  Type: AWS::ElasticLoadBalancingV2::TargetGroup\n  Properties:\n    Name: !Ref ProjectName\n    HealthCheckIntervalSeconds: 30\n    HealthCheckPath: !Ref HealthCheckPath\n    HealthCheckProtocol: HTTP\n    HealthCheckPort: !Ref NginxContainerPort\n    HealthCheckTimeoutSeconds: 10\n    HealthyThresholdCount: 2\n    UnhealthyThresholdCount: 2\n    Matcher:\n      HttpCode: '200-299'\n    Port: 8080\n    Protocol: HTTP\n    TargetType: ip\n    TargetGroupAttributes:\n      - Key: deregistration_delay.timeout_seconds\n        Value: '30'\n    VpcId: !Ref VpcId\nCluster:\n  Type: AWS::ECS::Cluster\n  Properties:\n    ClusterName: !Ref ProjectName\nService:\n  Type: AWS::ECS::Service\n  Properties:\n    Cluster: !Ref Cluster\n    DeploymentConfiguration:\n      MaximumPercent: 200\n      MinimumHealthyPercent: 100\n    DesiredCount: 3\n    HealthCheckGracePeriodSeconds: 60\n    LaunchType: FARGATE\n    LoadBalancers:\n      - ContainerName: ContainerOne\n        ContainerPort: !Ref ContainerPort\n        TargetGroupArn: !Ref TargetGroup\n    NetworkConfiguration:\n      AwsvpcConfiguration:\n        AssignPublicIp: DISABLED\n        SecurityGroups:\n          - !Ref ContainerSecurityGroupId\n        Subnets:\n          - !Ref SubnetPrivateNatAId\n          - !Ref SubnetPrivateNatBId\n          - !Ref SubnetPrivateNatCId\n    ServiceName: !Ref ProjectName\n    TaskDefinition: !Ref TaskDefinition\n  DependsOn: HttpsListener\nTaskDefinition:\n  Type: AWS::ECS::TaskDefinition\n  Properties:\n    Family: !Ref ProjectName\n    ContainerDefinitions:\n      - Cpu: 2048\n        Image: !Ref ContainerImageUri\n        Memory: 4096\n        MemoryReservation: 4096\n        PortMappings:\n          - ContainerPort: !Ref ContainerPort\n            Protocol: tcp\n        Name: ContainerOne\n        LogConfiguration:\n          LogDriver: awslogs\n          Options:\n            awslogs-group: !Ref ContainerLogGroup\n            awslogs-region: !Ref AWS::Region\n            awslogs-stream-prefix: ContainerOne\n    Cpu: '2048'\n    Memory: '4096'\n    ExecutionRoleArn: !GetAtt ExecutionContainerRole.Arn\n    TaskRoleArn: !GetAtt TaskContainerRole.Arn\n    NetworkMode: awsvpc\n    RequiresCompatibilities:\n      - FARGATE\n<\/pre>\n\n\n\n
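Note that, for brevity, the template above defines a single target group and no test listener, while the Blue\/Green deployment group configured earlier expects two target groups and a test listener on port 8090. A sketch of the missing pieces, to adapt to your own port scheme:<\/p>\n\n\n\n TargetGroupGreen:\n  Type: AWS::ElasticLoadBalancingV2::TargetGroup\n  Properties:\n    Name: !Sub '${ProjectName}-green'\n    Port: 8080\n    Protocol: HTTP\n    TargetType: ip\n    VpcId: !Ref VpcId\n    HealthCheckPath: !Ref HealthCheckPath\nTestListener:\n  Type: AWS::ElasticLoadBalancingV2::Listener\n  Properties:\n    DefaultActions:\n      - Type: forward\n        TargetGroupArn: !Ref TargetGroupGreen\n    LoadBalancerArn: !Ref LoadBalancer\n    Port: 8090\n    Protocol: HTTP\n<\/pre>\n\n\n\n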
To sum up<\/h2>\n\n\n\n