{"id":645,"date":"2019-05-17T15:58:41","date_gmt":"2019-05-17T13:58:41","guid":{"rendered":"https:\/\/blog.besharp.it\/how-to-create-flexible-ci-cd-pipeline-on-aws-with-fargate-and-sqs\/"},"modified":"2021-03-29T17:17:03","modified_gmt":"2021-03-29T15:17:03","slug":"how-to-create-flexible-ci-cd-pipeline-on-aws-with-fargate-and-sqs","status":"publish","type":"post","link":"https:\/\/blog.besharp.it\/how-to-create-flexible-ci-cd-pipeline-on-aws-with-fargate-and-sqs\/","title":{"rendered":"How to create flexible CI\/CD Pipeline on AWS with Fargate and SQS"},"content":{"rendered":"
The use of Pipelines for automatic code deployment\u00a0<\/strong>is now an almost essential feature of every development project in the Cloud, since the concept of scalable architecture requires that the virtual machines (or containers) started on the Cloud to handle traffic spikes always run the most up-to-date version of the code. Furthermore, an automated pipeline\u00a0frees DevOps engineers from the manual management<\/strong>\u00a0of AMIs and Docker images, as well as eliminating the possibility of \u201chuman error\u201d in the deployment phase.<\/p>\n AWS provides DevOps engineers with a very powerful tool for creating automated Pipelines:\u00a0AWS CodePipeline.<\/strong>\u00a0This fully managed service works as an\u00a0orchestrator for a CI\/CD<\/strong>\u00a0pipeline, with functionalities similar to those offered by services such as Jenkins, which, however, must be installed on an EC2 instance and therefore, besides not being highly available, require significant configuration and maintenance effort.<\/p>\n The most common flow of an AWS CodePipeline consists of three steps: pulling the source code, building the project, and deploying the artifacts.<\/p>\n Although the features of AWS CodePipeline are sufficient for the most common use cases, some special needs require the development of one or more customized steps to gain more flexibility. 
In this article, we will see how it is possible to\u00a0create an automated pipeline able to build all the branches of a git repo hosted on AWS CodeCommit.<\/strong><\/p>\n Many projects, particularly large ones, use git flow or a similar branching model to organize the repository.<\/p>\n This means that there are two or more branches (e.g., production, staging, development) containing the code actually deployed in the relevant environments, and a large number of feature branches containing the individual features under development, each assigned to the respective developer and team, which, once completed, are merged into development and then promoted towards production.<\/p>\n However, very often it is not possible to run the entire suite of automated tests directly from the developers\u2019 workstations, both for reasons of time and because of the need to test the ever-increasing integration of the code with the various AWS managed services. To overcome these problems and reduce integration errors, it would be very convenient to launch the build and the test suite via AWS CodeBuild directly at each commit on the individual feature branches, instead of only after the feature is merged into dev through the CodePipeline created explicitly for that environment.<\/p>\n Unfortunately, at the moment, AWS CodePipeline does not support multiple sources as input: it is in fact necessary to specify both the repo and the branch. To solve the problem, at beSharp, we have developed a creative solution using the power of\u00a0CloudWatch Events, SQS, and Fargate.<\/strong><\/p>\n CloudWatch Rules:<\/strong>\u00a0the AWS service that allows you to create rules that perform operations either in response to events concerning the AWS account, such as the start of an EC2 instance or, in our case, a push to a CodeCommit repo, or at fixed time intervals.<\/p>\n SQS FIFO:<\/strong>\u00a0the fully managed and highly reliable queue service offered by AWS. 
In our case, we used the First In First Out (FIFO) version to be sure of preserving the order of the messages.<\/p>\n Fargate:<\/strong>\u00a0The third component of the solution is a Docker container deployed through Fargate (ECS), the AWS service that allows you to run containers\u00a0as a service<\/em>, without having to deal with the management of the underlying infrastructure.<\/p>\n In a similar way to the standard operation of AWS CodePipeline, we used CloudWatch Rules to prepare a rule that is triggered whenever a developer pushes to any of the branches. The rule has two configured actions:<\/p>\n The first queues a message in an SQS queue, while the second starts the Fargate container. The message entered in the queue is the JSON describing the whole event that triggered the CloudWatch Rule; it contains the name of the repo, the name of the branch, and the ID of the commit just pushed by the developer.<\/p>\n The event pattern of the rule will look like this:<\/p>\n The SQS FIFO queue, therefore,<\/strong>\u00a0contains the messages corresponding to the code push events on the repository and is consumed by the Fargate containers. To prevent corrupt messages from being re-processed indefinitely, we added a\u00a0dead letter queue<\/strong>\u00a0to which messages are transferred after two failed read attempts.<\/p>\n Once started by the CloudWatch Rule,\u00a0the Fargate container reads messages from the queue,<\/strong>\u00a0pulls the commit from the CodeCommit repository, saves the compressed code bundle on S3 and finally launches AWS CodeBuild with the correct parameters.<\/p>\n The Docker container was created using the following\u00a0Dockerfile:<\/strong><\/p>\n As shown, only standard shell utilities are required, in addition to the AWS CLI. 
The\u00a0codecommit_source.sh<\/em>\u00a0script starts when the container is turned on and executes the logic described above.<\/p>\n An example of\u00a0codecommit_source.sh<\/strong>\u00a0is shown below:<\/p>\n Finally, those who manage the source code will have to take care of\u00a0creating\/modifying the buildspec<\/strong>\u00a0to save the build\u2019s outputs on S3 with an easily readable name.<\/p>\n The solution shown here can be easily modified to work even\u00a0in the case of multiple accounts.<\/strong>\u00a0For example, two accounts may be present: the first account (\u201cmaster\u201d) containing the production environment and the repos, while the second hosts the staging\/development environments and the pipelines. To do this, it is necessary to add a\u00a0role<\/em>\u00a0to the \u201cmaster\u201d account that can be assumed by the staging account to pull the repositories. Finally, it will also be necessary to configure event buses on both accounts in order to share the repository events with the development account.<\/p>\n In conclusion, AWS CodePipeline is a very powerful tool, but for some use cases it is not enough and should therefore be combined with custom solutions, like the one proposed here, which are easily configurable using the wide suite of services made available by AWS.<\/p>\n Would you like to tell us about your innovative CI\/CD solution or get more information on the one proposed in this article? Don\u2019t hesitate to comment and\/or\u00a0contact us!<\/strong><\/a><\/p>\n","protected":false},"excerpt":{"rendered":" The use of Pipelines for automatic code deployment\u00a0is now an almost essential feature of every development project in the Cloud, […]<\/p>\n","protected":false},"author":9,"featured_media":649,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[242],"tags":[290,294,264,370],"yoast_head":"\n\n
Services used for the solution:<\/h4>\n
{\r\n \"source\": [\r\n \"aws.codecommit\"\r\n ],\r\n \"detail-type\": [\r\n \"CodeCommit Repository State Change\"\r\n ],\r\n \"resources\": [\r\n \"arn:aws:codecommit:eu-west-1:<ACCOUNT_ID>:<REPOSITORY>\",\r\n ...\r\n ],\r\n \"detail\": {\r\n \"event\": [\r\n \"referenceCreated\",\r\n \"referenceUpdated\"\r\n ]\r\n }\r\n}<\/pre>\n
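The two targets of the rule can be attached, for example, with\u00a0aws events put-targets<\/em>. The following targets JSON is only a sketch: the queue name, cluster, task definition, role, and network parameters are illustrative assumptions to be adapted to your account:<\/p>\n [\r\n    {\r\n        \"Id\": \"sqs-fifo-queue\",\r\n        \"Arn\": \"arn:aws:sqs:eu-west-1:<ACCOUNT_ID>:custom-codecommit-events.fifo\",\r\n        \"SqsParameters\": {\r\n            \"MessageGroupId\": \"codecommit-events\"\r\n        }\r\n    },\r\n    {\r\n        \"Id\": \"fargate-task\",\r\n        \"Arn\": \"arn:aws:ecs:eu-west-1:<ACCOUNT_ID>:cluster\/pipeline-cluster\",\r\n        \"RoleArn\": \"arn:aws:iam::<ACCOUNT_ID>:role\/ecs-events-role\",\r\n        \"EcsParameters\": {\r\n            \"TaskDefinitionArn\": \"arn:aws:ecs:eu-west-1:<ACCOUNT_ID>:task-definition\/codecommit-source\",\r\n            \"TaskCount\": 1,\r\n            \"LaunchType\": \"FARGATE\",\r\n            \"NetworkConfiguration\": {\r\n                \"awsvpcConfiguration\": {\r\n                    \"Subnets\": [\"subnet-xxxxxxxx\"],\r\n                    \"AssignPublicIp\": \"ENABLED\"\r\n                }\r\n            }\r\n        }\r\n    }\r\n]<\/pre>\n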
FROM ubuntu:16.04\r\n# Utilities needed by the script (git, jq, zip) plus Python 3.6 for the AWS CLI\r\nRUN apt-get update && apt-get install -y wget numactl jq zip git software-properties-common\r\nRUN add-apt-repository ppa:jonathonf\/python-3.6 -y && apt-get update && apt-get install -y python3.6\r\nRUN wget https:\/\/bootstrap.pypa.io\/get-pip.py && python3.6 get-pip.py\r\nRUN pip3.6 install --upgrade awscli boto3\r\nRUN mkdir \/pipeline_source\r\nWORKDIR \/pipeline_source\r\nADD .\/codecommit_source.sh \/pipeline_source\/codecommit_source.sh\r\nRUN chmod +x \/pipeline_source\/codecommit_source.sh\r\nCMD \/pipeline_source\/codecommit_source.sh<\/pre>\n
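The container can then be registered as a Fargate task, for example with\u00a0aws ecs register-task-definition<\/em>. The following is only a sketch: image URI, sizes, and role names are illustrative, and the task role must be allowed to read\/delete SQS messages, pull from CodeCommit, write to the S3 code bucket, and start CodeBuild builds:<\/p>\n {\r\n    \"family\": \"codecommit-source\",\r\n    \"requiresCompatibilities\": [\"FARGATE\"],\r\n    \"networkMode\": \"awsvpc\",\r\n    \"cpu\": \"256\",\r\n    \"memory\": \"512\",\r\n    \"executionRoleArn\": \"arn:aws:iam::<ACCOUNT_ID>:role\/ecsTaskExecutionRole\",\r\n    \"taskRoleArn\": \"arn:aws:iam::<ACCOUNT_ID>:role\/codecommit-source-task-role\",\r\n    \"containerDefinitions\": [\r\n        {\r\n            \"name\": \"codecommit-source\",\r\n            \"image\": \"<ACCOUNT_ID>.dkr.ecr.eu-west-1.amazonaws.com\/codecommit-source:latest\",\r\n            \"essential\": true\r\n        }\r\n    ]\r\n}<\/pre>\n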
#!\/bin\/bash\r\nset -Eeuxo pipefail<\/pre>\n
MESSAGE=$(aws sqs receive-message --queue-url https:\/\/sqs.eu-west-1.amazonaws.com\/<account-id>\/custom-codecommit-events.fifo --wait-time-seconds 20)\r\nRECEIPT_HANDLE=$(echo $MESSAGE | jq -r '.Messages | .[] | .ReceiptHandle')<\/pre>\n
if [ -n \"$RECEIPT_HANDLE\" ]\r\nthen\r\naws sqs delete-message --queue-url https:\/\/sqs.eu-west-1.amazonaws.com\/<account-id>\/custom-codecommit-events.fifo --receipt-handle \"$RECEIPT_HANDLE\"\r\nfi<\/pre>\n
if [ -n \"$MESSAGE\" ]\r\nthen\r\nEVENT=$(echo \"$MESSAGE\" | jq -r '.Messages | .[] | .Body | fromjson')\r\nREPOSITORY_NAME=$(echo \"$EVENT\" | jq -r '.detail | .repositoryName')\r\nCOMMIT_ID=$(echo \"$EVENT\" | jq -r '.detail | .commitId')\r\nBRANCH_NAME=$(echo \"$EVENT\" | jq -r '.detail | .referenceName')\r\nREPO_URL=https:\/\/git-codecommit.eu-west-1.amazonaws.com\/v1\/repos\/$REPOSITORY_NAME\r\ngit config --global credential.helper '!aws codecommit credential-helper $@'\r\ngit config --global credential.UseHttpPath true\r\ngit clone --depth 10 --branch \"$BRANCH_NAME\" \"$REPO_URL\"<\/pre>\n
cd $REPOSITORY_NAME\r\ngit checkout $COMMIT_ID\r\nrm -rf .git \r\nzip -r ..\/$COMMIT_ID.zip .\r\ncd ..<\/pre>\n
rm -rf $REPOSITORY_NAME<\/pre>\n
if [ -s $COMMIT_ID.zip ]\r\nthen\r\nCODEBUILD_PROJECT=$REPOSITORY_NAME<\/pre>\n
if [ \"$BRANCH_NAME\" != \"test\" ] && [ \"$BRANCH_NAME\" != \"develop\" ] && [ \"$BRANCH_NAME\" != \"staging\" ]\r\nthen<\/pre>\n
aws s3 cp $COMMIT_ID.zip s3:\/\/$CODE_BUCKET\/$REPOSITORY_NAME\/$BRANCH_NAME\/$COMMIT_ID.zip\r\necho s3:\/\/$CODE_BUCKET\/$REPOSITORY_NAME\/$BRANCH_NAME\/$COMMIT_ID.zip\r\naws codebuild start-build --project-name $CODEBUILD_PROJECT --environment-variables-override name=COMMIT_ID,value=$COMMIT_ID,type=PLAINTEXT --source-type-override S3 --source-location-override $CODE_BUCKET\/$REPOSITORY_NAME\/$BRANCH_NAME\/$COMMIT_ID.zip --artifacts-override type=NO_ARTIFACTS<\/pre>\n
fi\r\nfi\r\nelse\r\necho \"no message in queue\"\r\nfi<\/pre>\n
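Since the build is started with\u00a0--artifacts-override type=NO_ARTIFACTS<\/em>, the buildspec itself can upload the output to S3. The following is only a minimal sketch: the build commands, the\u00a0dist\/bundle.zip<\/em>\u00a0path, and the\u00a0<ARTIFACTS_BUCKET><\/em>\u00a0placeholder are illustrative assumptions, while\u00a0COMMIT_ID<\/em>\u00a0is the environment variable passed by the script above:<\/p>\n version: 0.2\r\nphases:\r\n  build:\r\n    commands:\r\n      # illustrative build and test commands\r\n      - .\/build.sh\r\n      - .\/run_tests.sh\r\n  post_build:\r\n    commands:\r\n      # upload the build output with an easily readable name\r\n      - aws s3 cp dist\/bundle.zip s3:\/\/<ARTIFACTS_BUCKET>\/builds\/$COMMIT_ID.zip<\/pre>\n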