myPipeline.addStage({
  stageName: 'approve',
  placement: {
    justAfter: myPipeline.stages[1],
  }
}).addAction(new aws_codepipeline_actions.ManualApprovalAction({
  actionName: `${process.env.CUSTOMER}-approve`,
  notificationTopic: new Topic(this, `${process.env.CUSTOMER}-${process.env.ENVIRONMENT}-software-sh-pipeline`),
  notifyEmails: configFile.approvalEmails,
  additionalInformation: `${process.env.CUSTOMER} deploy to ${process.env.ENVIRONMENT}`
}))
}</code></pre>

<p>When the pipeline reaches this stage, an email is sent to the addresses listed in the approvalEmails configuration shown above. The person responsible for verifying the update can then either allow the pipeline execution to continue or block it so that any errors can be fixed.</p>
<h3>Deploy repository software pipeline</h3>

<p>This stage configures the git credentials and clones the repository; it then queries the shared ALB to calculate the priority of the rule to be applied to the listener. If the deployment updates an existing rule, its priority is left unchanged.</p>
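<p>The post doesn't show the priority lookup itself, but a minimal sketch could look like the following. It uses the AWS SDK for JavaScript v3; the getNextFreeRulePriority helper is our illustration, not the actual pipeline code, and pagination of DescribeRules is ignored for brevity:</p>

<pre><code>// Hypothetical helper: find the next free rule priority on the shared listener.
// Assumes AWS SDK for JavaScript v3 and that the listener ARN is known to the caller.
import {
  ElasticLoadBalancingV2Client,
  DescribeRulesCommand,
} from '@aws-sdk/client-elastic-load-balancing-v2';

const elbv2Client = new ElasticLoadBalancingV2Client({});

export async function getNextFreeRulePriority(listenerArn: string): Promise&lt;number&gt; {
  const { Rules = [] } = await elbv2Client.send(
    new DescribeRulesCommand({ ListenerArn: listenerArn })
  );

  // The default rule has Priority === 'default'; keep numeric priorities only.
  const used = Rules
    .map((rule) => Number(rule.Priority))
    .filter((priority) => !Number.isNaN(priority));

  // Listener rule priorities must be unique (1-50000): take the highest used value + 1.
  return used.length > 0 ? Math.max(...used) + 1 : 1;
}</code></pre>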
<h2>Software pipeline repository</h2>

<h3>Application load balancer</h3>

<p>In the case of a custom domain and a production environment, a dedicated Application Load Balancer is deployed with two listeners (port 80 HTTP and port 443 HTTPS):</p>
<pre><code>myCustomAppLoadBalancer.addListener(`App-80-Listener`, {
  port: 80,
  defaultAction: elbv2.ListenerAction.redirect({
    permanent: true,
    port: '443',
    protocol: 'HTTPS',
  })
})

const myCustom443ApplicationListener =
  myCustomAppLoadBalancer.addListener(`App-443-Listener`, {
    port: 443,
    defaultAction: elbv2.ListenerAction.fixedResponse(503, {
      contentType: `text/plain`,
      messageBody: 'host not found',
    })
  })</code></pre>

<p>and the user's certificate is applied to the HTTPS listener:</p>
<pre><code>const wildcardListenerCertificate = elbv2.ListenerCertificate.fromArn(`${configFile.customer.certificate.arn}`)
myCustom443ApplicationListener.addCertificates(`Wildcard-${localUpperCustomer}-Cert`, [wildcardListenerCertificate])</code></pre>

<h2>RDS Database</h2>

<p>An RDS database is created; its settings are defined in a configuration file. Among these parameters we find (a minimal provisioning sketch follows the list):</p>
<ul>
<li>Master username</li>
<li>Cluster name</li>
<li>List of security groups to be assigned</li>
<li>DB Engine to use</li>
<li>Backup configurations</li>
</ul>
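<p>As a rough illustration, such a cluster could be provisioned with the CDK as sketched below. The configFile.rds.* field names, the engine version, and the instance sizing are our assumptions, not the article's actual schema, and the exact props depend on the aws-cdk-lib version in use:</p>

<pre><code>// Illustrative sketch only: provisioning an RDS cluster from the config file.
// The configFile.rds.* field names are hypothetical.
import { Duration } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';

const dbCluster = new rds.DatabaseCluster(this, `Rds-${localUpperCustomer}-${localUpperEnvironment}`, {
  clusterIdentifier: configFile.rds.clusterName,
  // Example engine/version: pick one supported by your aws-cdk-lib release.
  engine: rds.DatabaseClusterEngine.auroraMysql({
    version: rds.AuroraMysqlEngineVersion.VER_3_02_0,
  }),
  // Master username from the config file; the password is generated and stored in Secrets Manager.
  credentials: rds.Credentials.fromGeneratedSecret(configFile.rds.masterUsername),
  instanceProps: {
    vpc: props.vpc,
    instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
    securityGroups: props.rdsSecurityGroups,
  },
  backup: {
    retention: Duration.days(configFile.rds.backupRetentionDays),
  },
});</code></pre>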
<h2>Autoscaling Group</h2>

<p>Thanks to the AWS Auto Scaling service, we can configure thresholds that, once exceeded, trigger the creation of new instances able to handle part of the traffic.</p>

<p>To configure an Auto Scaling group it is necessary to provide a Launch Template (preferred by AWS) or a Launch Configuration (which is being phased out). For some reason, the AutoScaling construct provided by the AWS CDK still uses a Launch Configuration by default, but we expect Launch Template support to be implemented in future versions of the class!</p>
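<p>A minimal sketch of how the group and a CPU-based scaling threshold could be wired up with the CDK is shown below. Recent aws-cdk-lib versions accept an explicit launchTemplate; the names, AMI, capacities, and the 60% target are example values, not the article's actual configuration:</p>

<pre><code>// Illustrative sketch: an Auto Scaling group built from an explicit Launch Template,
// scaling out when average CPU exceeds a target.
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';

const launchTemplate = new ec2.LaunchTemplate(this, `Lt-${localUpperCustomer}-${localUpperEnvironment}`, {
  machineImage: ec2.MachineImage.latestAmazonLinux2(),
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.SMALL),
  securityGroup: props.instanceSecurityGroup,
});

const asg = new autoscaling.AutoScalingGroup(this, `Asg-${localUpperCustomer}-${localUpperEnvironment}`, {
  vpc: props.vpc,
  launchTemplate, // supported in recent aws-cdk-lib versions
  minCapacity: 2,
  maxCapacity: 6,
});

// Scale out/in to keep the average CPU utilization around 60%.
asg.scaleOnCpuUtilization('CpuScaling', {
  targetUtilizationPercent: 60,
});</code></pre>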
<h2>Target group</h2>

<p>The target group is created using the configurations provided by the user (from the config file) to manage the health checks that verify the integrity of the target resources. This target group is then associated with the Application Load Balancer.</p>
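<p>Here is a sketch of how the target group and its listener rule could be created. The health-check values, the configFile fields, and the nextFreePriority variable (the result of the priority lookup sketched earlier, resolved before synth) are illustrative assumptions:</p>

<pre><code>// Illustrative sketch: a target group with config-driven health checks,
// attached to the shared HTTPS listener with a host-header rule.
import { Duration } from 'aws-cdk-lib';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const targetGroup = new elbv2.ApplicationTargetGroup(this, `Tg-${localUpperCustomer}-${localUpperEnvironment}`, {
  vpc: props.vpc,
  port: 80,
  protocol: elbv2.ApplicationProtocol.HTTP,
  targetType: elbv2.TargetType.INSTANCE,
  healthCheck: {
    path: configFile.targetGroup.healthCheckPath, // e.g. '/healthz' (example)
    healthyHttpCodes: '200',
    interval: Duration.seconds(30),
    healthyThresholdCount: 2,
    unhealthyThresholdCount: 3,
  },
});

// Route the customer's host name to this target group on the 443 listener.
myCustom443ApplicationListener.addTargetGroups(`Rule-${localUpperCustomer}`, {
  targetGroups: [targetGroup],
  priority: nextFreePriority, // resolved before synth, e.g. via the lookup shown earlier
  conditions: [elbv2.ListenerCondition.hostHeaders([configFile.customer.domainName])],
});</code></pre>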
<h2>Software Pipeline</h2>

<p>The final destination …</p>
<p>This pipeline deploys the customer’s software within the group of dedicated EC2 instances and is divided into the following stages.</p>
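<p>To make the stage layout concrete, here is a rough skeleton of how such a pipeline could be assembled with the CDK. The action names, the artifact wiring, and the sourceRepo, buildProject, and deploymentGroup variables are assumptions for illustration, not the article's actual code:</p>

<pre><code>// Illustrative skeleton of the software pipeline stages (Source, Build, Approve, Deploy).
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as actions from 'aws-cdk-lib/aws-codepipeline-actions';

const sourceOutput = new codepipeline.Artifact();
const buildOutput = new codepipeline.Artifact();

new codepipeline.Pipeline(this, `Software-Pipeline-${localUpperCustomer}-${localUpperEnvironment}`, {
  pipelineName: `${process.env.CUSTOMER}-${process.env.ENVIRONMENT}-software-pipeline`,
  stages: [
    {
      stageName: 'Source',
      actions: [new actions.CodeCommitSourceAction({
        actionName: 'Source',
        repository: sourceRepo, // the customer's software repository (assumption)
        output: sourceOutput,
      })],
    },
    {
      stageName: 'Build',
      actions: [new actions.CodeBuildAction({
        actionName: 'Build',
        project: buildProject, // the CodeBuild project described in the next section
        input: sourceOutput,
        outputs: [buildOutput],
      })],
    },
    {
      stageName: 'Approve',
      actions: [new actions.ManualApprovalAction({ actionName: 'Approve' })],
    },
    {
      stageName: 'Deploy',
      actions: [new actions.CodeDeployServerDeployAction({
        actionName: 'Deploy',
        input: buildOutput,
        deploymentGroup, // the CodeDeploy deployment group shown further down
      })],
    },
  ],
});</code></pre>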
<h2>Build</h2>

<p>This stage runs tests on the code using CodeBuild, driven by a buildSpec.yaml file and a custom test image pulled directly from the ECR service. The buildSpec file is a YAML file that contains the configuration for the CodeBuild project:</p>
<pre><code>version: 0.2
phases:
  install:
    commands:
      - echo Entered the install phase...
    finally:
      - echo This always runs even if the update or install command fails
  pre_build:
    commands:
      - echo Entered the pre_build phase...
    finally:
      - echo This always runs even if the login command fails
  build:
    commands:
      - echo Entered the build phase...
    finally:
      - echo This always runs even if the install command fails
  post_build:
    commands:
      - echo Entered the post_build phase...
      - echo Build completed on `date`
artifacts:
  files:
    - location
    - location
  name: artifact-name</code></pre>

<p>It gives the customer full autonomy over the commands executed by the build job, divided into sections.</p>
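<p>On the CDK side, a build project could read the buildSpec from the repository and run inside the custom ECR image roughly as follows. The ECR repository handle, the image tag, and the configFile.build fields are assumptions for illustration:</p>

<pre><code>// Illustrative sketch: a CodeBuild project that reads buildSpec.yaml from the source
// repository and runs inside a custom image hosted on ECR.
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import * as ecr from 'aws-cdk-lib/aws-ecr';

const testImageRepo = ecr.Repository.fromRepositoryName(this, 'TestImageRepo', configFile.build.ecrRepositoryName);

const buildProject = new codebuild.PipelineProject(this, `Build-${localUpperCustomer}-${localUpperEnvironment}`, {
  projectName: `${process.env.CUSTOMER}-${process.env.ENVIRONMENT}-build`,
  // Commands come from the buildSpec.yaml committed in the software repository.
  buildSpec: codebuild.BuildSpec.fromSourceFilename('buildSpec.yaml'),
  environment: {
    // Custom test image pulled directly from ECR ('latest' tag is an example).
    buildImage: codebuild.LinuxBuildImage.fromEcrRepository(testImageRepo, 'latest'),
  },
});</code></pre>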
<h2>Manual approval</h2>

<p>As with the infrastructure pipeline, we want to protect ourselves from unwanted updates that could cause application downtime. For this reason, every production release must be confirmed by a human.</p>
<h2>Deployment</h2>

<p>Using the CodeDeploy service, we can automate the updating of our virtual machines and the deployment of the newly approved code.</p>
<pre><code>new codedeploy.ServerDeploymentGroup(this, `Deployment-Group-${localUpperCustomer}-${localUpperEnvironment}`, {
  deploymentGroupName: `${process.env.CUSTOMER}-${process.env.ENVIRONMENT}-deploy-group`,
  loadBalancer: codedeploy.LoadBalancer.application(props.targetGroup),
  autoScalingGroups: [props.asg],
  role: softwarePipelineRole,
  application: deployApp,
  deploymentConfig: codedeploy.ServerDeploymentConfig.ONE_AT_A_TIME,
  installAgent: true,
  autoRollback: {
    failedDeployment: true,
    stoppedDeployment: true
  }
})</code></pre>

<p>For this service, too, commands can be managed from a file: the appspec.yml file placed in the root of the software repository.</p>
<pre><code>version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  BeforeInstall: # You can use this deployment lifecycle event for preinstall tasks, such as decrypting files and creating a backup of the current version
    - location: deployScript/beforeInstall.sh
      timeout: 300
      runas: root

# INSTALL – During this deployment lifecycle event, the CodeDeploy agent copies the revision files from the temporary location to the final destination folder. This event is reserved for the CodeDeploy agent and cannot be used to run scripts.

  AfterInstall: # You can use this deployment lifecycle event for tasks such as configuring your application or changing file permissions
    - location: deployScript/afterInstall.sh
      timeout: 300
      runas: root

  ApplicationStart: # You typically use this deployment lifecycle event to restart services that were stopped during ApplicationStop
    - location: deployScript/applicationStart.sh
      timeout: 300
      runas: root

  ValidateService: # This is the last deployment lifecycle event. It is used to verify the deployment was completed successfully.
    - location: deployScript/validateService.sh
      timeout: 300
      runas: root</code></pre>

<h2>To conclude</h2>

<p>When we see the iconic <strong>Succeeded</strong> label with the green flag next to the software pipeline, the entire process is complete, and we are in possession of the machines with our software installed and ready to be used.</p>

<p>When a customer asks us for a fleet of machines, we can easily create the dedicated configuration file and launch the deployment of the infrastructure stacks (the first repository analyzed in this article), and the “magic” of IaC will do the rest, allowing the end user to concentrate their efforts solely on the development and maintenance of their own software.</p>
<p>We hope you enjoyed the journey! Much more could be said about this topic, but this is a good way to get your hands dirty.</p>
<p>What’s your experience with this topic? Did you build some kind of PaaS Virtual Host Vending Machine? Let us know in the comments!</p>
<p>See you in 14 days for a new article on <strong>Proud2beCloud</strong></p>

<h4>About Proud2beCloud</h4>

<p>Proud2beCloud is a blog by beSharp, an Italian APN Premier Consulting Partner expert in designing, implementing, and managing complex Cloud infrastructures and advanced services on AWS. Before being writers, we are Cloud Experts working daily with AWS services since 2007. We are hungry readers, innovative builders, and gem-seekers. On Proud2beCloud, we regularly share our best AWS pro tips, configuration insights, in-depth news, tips&tricks, how-tos, and many other resources. Take part in the discussion!</p>