PaaS on AWS: how to build it the perfect way - Part II

Welcome back to our 3-step blog post series about building PaaS on AWS the correct way. In Part I, we analyzed the key points for the correct implementation of a PaaS product.

In this second episode, we create a Web Server vending machine while examining the common infrastructure stack deployed for each customer. If you are new to this series, we suggest starting from Part I, as we will refer to the features and aspects discussed there.

In the repository that we will analyze, we find the stacks needed to create the following:
- Services for intercepting pushes on GitLab:
  - an API Gateway to accept GitLab webhook calls;
  - a Lambda to create a configuration file with the committed data;
  - a CodeBuild job to pull the repository and upload it to S3.
- Dedicated IAM roles per environment:
  - a role used by the infrastructure pipeline;
  - an instance profile for EC2 instances;
  - a role used by the software pipeline;
  - a role for deploying resources related to EC2 instances, such as the Auto Scaling group, the Application Load Balancer, etc.
- A VPC per environment:
  - a /16 CIDR;
  - 9 subnets: 3 private, 3 natted, 3 public;
  - a NAT instance.
- A KMS key for the encryption of every object and service in the environment.
- An Amazon S3 bucket for the files used by the pipelines (for example, the artifacts) and for collecting logs, partitioned per environment.
- An Application Load Balancer per environment:
  - a listener on port 80 with an automatic redirect to port 443 over HTTPS;
  - a listener on port 443 that returns a 503 error when no rule matches.

VPC

The VPC consists of 9 subnets – 3 per Availability Zone – in order to make the infrastructure highly available. They are divided into:

- PUBLIC: used for all services that must be reachable from the internet (such as the ALB), or in case you need to expose an EC2 instance directly by assigning it a dedicated public IP address;
- NATTED: used for all services that need internet access but must not be reachable from the outside; as the name suggests, instances created within these subnets access the internet through NAT gateways placed in the public subnets.
In our case, we opted for 3 NAT instances (one per AZ) only in the production VPC, while the other environments use a single instance;
- PRIVATE: used for all services that do not require internet access, such as the RDS database.

With the VPC construct that AWS CDK makes available, supernetting is not possible, because the construct manages the CIDRs assigned to the subnets itself. This deprives us of the possibility of grouping subnets under smaller netmasks. We therefore decided to keep the construct, but to overwrite the subnet CIDRs before deploying with this piece of code:

```typescript
myVpc.privateSubnets.forEach((subnet, index) => {
  const cidr = `${startSubnetsCidr}.${firstPrivateCidr + index}.${endSubnetsCidr}`;
  const cfnSubnet = subnet.node.defaultChild as aws_ec2.CfnSubnet;
  cfnSubnet.addPropertyOverride('CidrBlock', cidr);
  // Strip the region prefix from the AZ name (e.g. "eu-west-1a" -> "a")
  const azSuffix = subnet.availabilityZone.replace(/^\w+\-\w+\-\d/, '');
  const name = `${configFile.projectName}-${process.env.ENVIRONMENT}-natted-${azSuffix}`;
  // Export names used for cross-stack outputs (the CfnOutput declarations are omitted here)
  const subName = `Subnet-Natted-${azSuffix.toUpperCase()}-${process.env.ENVIRONMENT}-Name`;
  const subId = `Subnet-Natted-${azSuffix.toUpperCase()}-${process.env.ENVIRONMENT}-ID`;
  const subCidr = `Subnet-Natted-${azSuffix.toUpperCase()}-${process.env.ENVIRONMENT}-CIDR`;
  cdk.Aspects.of(subnet).add(new cdk.Tag('Name', name));
});
```

Once the VPC has been deployed, we can deploy all the resources needed to operate the vending machine, such as the Application Load Balancers in the public subnets, the Web Servers in the natted subnets, and the databases dedicated to the Web Servers in the private subnets.

The creation of these resources will be the subject of our next article.
Amazon S3 Bucket

The S3 bucket created by this stack stores logs, artifacts, and the result of git pushes on GitLab. In addition, the stack assigns the relevant permissions to the IAM roles, granting them full access to the bucket, and creates the removal policies for the stored logs:
```typescript
const myLifeCycleLogsRule: aws_s3.LifecycleRule = {
  id: `logs-cleared`,
  enabled: true,
  // Caveat: S3 lifecycle prefixes are literal strings (no wildcard support),
  // so this value must literally match the beginning of the object keys
  prefix: `*-${process.env.ENVIRONMENT}-log`,
  expiration: cdk.Duration.days(1)
}
```

In order to use the S3 bucket as a pipeline source, the CloudTrail service must be enabled so that the relevant events can be intercepted:
```typescript
const myTrail = new aws_cloudtrail.Trail(this, `CloudTrail-${process.env.ENVIRONMENT}`, {
  trailName: `trail-${process.env.ENVIRONMENT}`,
  sendToCloudWatchLogs: true,
  bucket: myGlobalBucketS3,
  encryptionKey: myKms,
  cloudWatchLogGroup: new aws_logs.LogGroup(this, `Logs-${upperEnvironment}`, {
    logGroupName: `logs-${process.env.ENVIRONMENT}`,
    retention: aws_logs.RetentionDays.THREE_DAYS,
    removalPolicy: RemovalPolicy.DESTROY
  }),
  cloudWatchLogsRetention: aws_logs.RetentionDays.THREE_DAYS,
  s3KeyPrefix: `logs-${process.env.ENVIRONMENT}`,
  isMultiRegionTrail: false
});
```

But this is not enough.
To ensure that the pipeline is invoked when a new file is inserted into the S3 bucket, it is necessary to configure an event selector on CloudTrail that records write operations within the S3 bucket:

```typescript
myTrail.addS3EventSelector([{
  bucket: myGlobalBucketS3,
  objectPrefix: `software/`,
}], {
  readWriteType: aws_cloudtrail.ReadWriteType.WRITE_ONLY,
});
myTrail.addS3EventSelector([{
  bucket: myGlobalBucketS3,
  objectPrefix: `infrastructure/`,
}], {
  readWriteType: aws_cloudtrail.ReadWriteType.WRITE_ONLY,
});
```

KMS Key

To ensure data encryption on S3, CloudTrail, and in the database, we created a customer-managed KMS key. We then attached a policy to this key that allows the entities operating on the encrypted services to use it:
```typescript
myKms.addToResourcePolicy(new iam.PolicyStatement({
  sid: "Allow principals in the account to decrypt log files",
  actions: [
    "kms:Decrypt",
    "kms:ReEncryptFrom"
  ],
  principals: [new iam.AccountPrincipal(`${process.env.CDK_DEFAULT_ACCOUNT}`)],
  resources: [
    `arn:aws:kms:${process.env.CDK_DEFAULT_REGION}:${process.env.CDK_DEFAULT_ACCOUNT}:key/*`,
  ],
  conditions: {
    "StringLike": {
      "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:*:trail/*"
    },
    "StringEquals": {
      "kms:CallerAccount": `${process.env.CDK_DEFAULT_ACCOUNT}`
    }
  }
}));
```

Application Load Balancer

This ALB manages access to our services, automatically redirecting requests from port 80 (HTTP) to port 443 (HTTPS):
```typescript
myAppLoadBalancer.addListener(`App-80-Listener`, {
  port: 80,
  defaultAction: elbv2.ListenerAction.redirect({
    permanent: true,
    port: '443',
    protocol: 'HTTPS',
  })
});
myAppLoadBalancer.addListener(`App-443-Listener`, {
  port: 443,
  defaultAction: elbv2.ListenerAction.fixedResponse(503, {
    contentType: `text/plain`,
    messageBody: 'host not found',
  })
});
```

To handle HTTPS requests on port 443, a certificate must be associated with the listener. We can do this using AWS Certificate Manager: the service makes it easy to create and configure certificates, and it also supports automatic renewal.
To conclude

The resources configured within this repository can be considered the foundation for the entire solution.
In the next episode, we will analyze the application stack dedicated to each customer who uses the services we have seen today.
For a solution that is solid from a security and scalability standpoint, the reliability of the underlying infrastructure must come first. For this reason, we relied solely on services managed by AWS, thus reducing the administration and monitoring effort.
Is everything running smoothly so far?
At this point, we are ready to create the resources. For this last step, see you in 14 days with the final episode!
About Proud2beCloud

Proud2beCloud is a blog by beSharp, an Italian APN Premier Consulting Partner expert in designing, implementing, and managing complex Cloud infrastructures and advanced services on AWS. Before being writers, we are Cloud Experts working daily with AWS services since 2007. We are hungry readers, innovative builders, and gem-seekers. On Proud2beCloud, we regularly share our best AWS pro tips, configuration insights, in-depth news, tips&tricks, how-tos, and many other resources. Take part in the discussion!