{"id":4476,"date":"2022-05-27T13:58:00","date_gmt":"2022-05-27T11:58:00","guid":{"rendered":"https:\/\/blog.besharp.it\/?p=4476"},"modified":"2022-06-30T15:29:48","modified_gmt":"2022-06-30T13:29:48","slug":"a-serverless-approach-for-gitlab-integration-on-aws","status":"publish","type":"post","link":"https:\/\/blog.besharp.it\/a-serverless-approach-for-gitlab-integration-on-aws\/","title":{"rendered":"A serverless approach for GitLab integration on AWS"},"content":{"rendered":"\n
Cost optimization and operational efficiency are key value drivers for a successful Cloud adoption path; using managed serverless services significantly lowers maintenance costs while speeding up operations.<\/p>\n\n\n\n
In this article, you’ll learn how to integrate GitLab pipelines with AWS using ECS Fargate in a multi-environment scenario.<\/p>\n\n\n\n
GitLab offers a lot of flexibility for computational resources: pipelines can run on Kubernetes clusters, Docker, on-premise, or custom platforms using GitLab custom executor drivers.<\/p>\n\n\n\n
The tried and tested solution to run pipelines on the AWS Cloud uses EC2 instances as computational resources. <\/p>\n\n\n\n
This approach leads to some inefficiency: starting instances on demand makes pipeline executions slower (because of the instance initialization time) and developers impatient, while keeping a spare runner always available for builds increases costs.
<\/p>\n\n\n\n
We want to find a solution that can reduce execution time, ease maintenance and optimize costs.<\/p>\n\n\n\n
Containers have a faster initialization time and help decrease costs: billing is based only on the build time actually used. Our goal is to use them for our pipeline executions, running them on ECS clusters. Additionally, we will see how to use ECS Services for autoscaling.<\/p>\n\n\n\n
Before describing our implementation, we need to cover a few concepts: GitLab Runners are software agents that execute pipeline scripts. We can configure a runner instance to manage the autoscaling of the pipeline’s computational resources, adding or removing capacity as the demand for build capacity changes.<\/p>\n\n\n\n
In our scenario, we\u2019ll also assume that we have three different environments: development, staging, and production. We’ll define a different IAM role for each environment’s runners, so each one has only the minimum privileges required to build and deploy our software.<\/p>\n\n\n\n
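For example, the development runner’s IAM role could be restricted to development resources only. The following policy is a minimal sketch (the bucket name and actions are illustrative placeholders, not part of the original setup):<\/p>\n\n\n\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Sid\": \"DevArtifactsOnly\",\n      \"Effect\": \"Allow\",\n      \"Action\": [\"s3:GetObject\", \"s3:PutObject\"],\n      \"Resource\": \"arn:aws:s3:::my-app-dev-artifacts\/*\"\n    }\n  ]\n}<\/code><\/pre>\n\n\n\n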
GitLab Runners have associated tags; when a pipeline job specifies a tag, GitLab uses it to choose the runner (and therefore the environment) that will run that job.<\/p>\n\n\n\n
In this example, you can see a pipeline that builds and deploys in different environments:<\/p>\n\n\n\n
stages: \n - build dev\n - deploy dev \n - build staging\n - deploy staging\n - build production\n - deploy production\n \nbuild-dev: \n stage: build dev \n tags: \n - dev \n script: \n - .\/scripts\/build.sh\n artifacts: \n paths: \n - .\/artifacts\n expire_in: 7d \n \ndeploy-dev: \n stage: deploy dev \n tags: \n - dev \n script: \n - .\/scripts\/deploy.sh\n\nbuild-staging: \n stage: build staging\n tags: \n - staging\n script: \n - .\/scripts\/build.sh\n artifacts: \n paths: \n - .\/artifacts\n expire_in: 7d \n\ndeploy-staging: \n stage: deploy staging\n tags: \n - staging\n script: \n - .\/scripts\/deploy.sh\n\nbuild-production: \n stage: build production\n tags: \n - production\n script: \n - .\/scripts\/build.sh\n artifacts: \n paths: \n - .\/artifacts\n expire_in: 7d \n\ndeploy-production: \n stage: deploy production\n tags: \n - production\n script: \n - .\/scripts\/deploy.sh<\/code><\/pre>\n\n\n\nMaking a base Fargate runner<\/h2>\n\n\n\n
Let’s assume that our codebase uses NodeJS: we can build a custom generic Docker image with all the dependencies (including GitLab runner).<\/p>\n\n\n\n
Dockerfile<\/strong><\/p>\n\n\n\nFROM ubuntu:20.04 \n \n# Ubuntu based GitLab runner with nodeJS, npm, and aws CLI \n# --------------------------------------------------------------------- \n# Install https:\/\/github.com\/krallin\/tini - a very small 'init' process \n# that helps process signals sent to the container properly. \n# --------------------------------------------------------------------- \nARG TINI_VERSION=v0.19.0 \n \nCOPY docker-entrypoint.sh \/usr\/local\/bin\/docker-entrypoint.sh \n \nRUN ln -snf \/usr\/share\/zoneinfo\/Europe\/Rome \/etc\/localtime && echo Europe\/Rome > \/etc\/timezone \\ \n && echo \"Installing base packages\" \\ \n && apt update && apt install -y curl gnupg unzip jq software-properties-common \\ \n && echo \"Installing awscli\" \\ \n && curl \"https:\/\/awscli.amazonaws.com\/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\" \\ \n && unzip awscliv2.zip \\ \n && .\/aws\/install \\ \n && rm -f awscliv2.zip \\ \n && apt update \\ \n && echo \"Installing packages\" \\ \n && apt install -y unzip openssh-server ca-certificates git git-lfs nodejs npm \\ \n && echo \"Installing tini and ssh\" \\ \n && curl -Lo \/usr\/local\/bin\/tini https:\/\/github.com\/krallin\/tini\/releases\/download\/${TINI_VERSION}\/tini-amd64 \\ \n && chmod +x \/usr\/local\/bin\/tini \\ \n && mkdir -p \/run\/sshd \\ \n && curl -L https:\/\/packages.gitlab.com\/install\/repositories\/runner\/gitlab-runner\/script.deb.sh | bash \\ \n && apt install -y gitlab-runner \\ \n && rm -rf \/var\/lib\/apt\/lists\/* \\ \n && rm -f \/home\/gitlab-runner\/.bash_logout \\ \n && git lfs install --skip-repo \\ \n && chmod +x \/usr\/local\/bin\/docker-entrypoint.sh \\ \n && echo \"Done\"\n\nEXPOSE 22 \n \nENTRYPOINT [\"tini\", \"--\", \"\/usr\/local\/bin\/docker-entrypoint.sh\"]<\/code><\/pre>\n\n\n\ndocker-entrypoint.sh<\/strong><\/p>\n\n\n\n#!\/bin\/sh \n \n# Create a folder to store the user's SSH keys if it does not exist. \nUSER_SSH_KEYS_FOLDER=~\/.ssh \n[ ! 
-d ${USER_SSH_KEYS_FOLDER} ] && mkdir -p ${USER_SSH_KEYS_FOLDER} \n \n# Copy contents from the `SSH_PUBLIC_KEY` environment variable \n# to the `$USER_SSH_KEYS_FOLDER\/authorized_keys` file. \n# The environment variable must be set when the container starts. \necho \"${SSH_PUBLIC_KEY}\" > ${USER_SSH_KEYS_FOLDER}\/authorized_keys \n \n# Clear the `SSH_PUBLIC_KEY` environment variable. \nunset SSH_PUBLIC_KEY \n \n# Start the SSH daemon \n\/usr\/sbin\/sshd -D<\/code><\/pre>\n\n\n\nAs you can see, there’s no environment-dependent configuration. <\/p>\n\n\n\n
Building a Runner for autoscaling (formerly Runner Manager)<\/h2>\n\n\n\nThis runner instance needs to be specialized to handle the environment configuration; we\u2019ll use the Fargate custom executor provided by GitLab to dispatch jobs to a different ECS Fargate cluster for each environment.<\/p>\n\n\n\n
We’ll automatically handle our runner registration with the GitLab server during the Docker build phase by specifying its token using variables.<\/p>\n\n\n\n
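As a sketch, the registration could be appended to the Dockerfile like this; the build arguments GITLAB_URL, REGISTRATION_TOKEN, and ENVIRONMENT are hypothetical names to be passed with --build-arg:<\/p>\n\n\n\n# Hypothetical build arguments: pass them with --build-arg at build time. \nARG GITLAB_URL \nARG REGISTRATION_TOKEN \nARG ENVIRONMENT=dev \n \n# Register this runner on the GitLab server with the custom executor \n# and tag it with its environment. \nRUN gitlab-runner register \\ \n      --non-interactive \\ \n      --url \"${GITLAB_URL}\" \\ \n      --registration-token \"${REGISTRATION_TOKEN}\" \\ \n      --name \"fargate-runner-${ENVIRONMENT}\" \\ \n      --executor custom \\ \n      --tag-list \"${ENVIRONMENT}\"<\/code><\/pre>\n\n\n\nKeep in mind that build arguments are stored in the image history, so a token passed this way is visible to anyone who can pull the image; a build secret, or registering at container start time, may be preferable in production.<\/p>\n\n\n\n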
Our Fargate custom executor will need a configuration file (“config.toml”) to specify a cluster, subnets, security groups, and task definition for our pipeline execution. We\u2019ll also handle this customization at build time.<\/p>\n\n\n\n
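A minimal “config.toml” for the Fargate driver could look like the following sketch; the cluster, region, subnet, security group, and task definition values are placeholders for the development environment:<\/p>\n\n\n\nLogLevel = \"info\" \nLogFormat = \"text\" \n \n[Fargate] \n  Cluster = \"gitlab-runners-dev\" \n  Region = \"eu-west-1\" \n  Subnet = \"subnet-0123456789abcdef0\" \n  SecurityGroup = \"sg-0123456789abcdef0\" \n  TaskDefinition = \"gitlab-runner-dev:1\" \n  EnablePublicIP = false \n \n[TaskMetadata] \n  Directory = \"\/opt\/gitlab-runner\/metadata\" \n \n[SSH] \n  Username = \"root\" \n  Port = 22<\/code><\/pre>\n\n\n\n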
First, we need to get a registration token from our GitLab server: <\/p>\n\n\n\n
Go to your project CI\/CD settings and expand the “Runners\u201d section.<\/p>\n\n\n\n