
Setting Up a Production CI/CD Pipeline with GitLab, Minikube, and AWS: Part 1


Part 1 - Laying the Foundation

Hey everyone! 👋 Welcome to the first installment of our new series, where we’ll build a robust production CI/CD (Continuous Integration/Continuous Delivery) pipeline using the powerful combination of GitLab and Amazon Web Services (AWS). Over the series, we’ll set up GitLab in containers on an EC2 instance, create our initial project, register runners to execute our pipelines, and finally craft a tailored CI/CD workflow.

In this initial article, our focus will be on establishing the essential infrastructure on AWS. We’ll then dive into the world of Kubernetes by installing Minikube and deploying GitLab as a Kubernetes pod on top of our trusty EC2 instance. So, buckle up, and let’s get started!

AWS Infrastructure: Our Digital Playground

AWS is a fantastic cloud platform, offering a vast array of services that run on a global network. It’s a go-to choice for testing and demos, thanks to its flexibility and extensive capabilities.

Let’s begin by defining our Virtual Private Cloud (VPC). Think of a VPC as your own private and isolated network within the public cloud. It’s the foundational building block for everything we’ll do on AWS.

Steps to Create Your VPC:

1. Log in to your AWS Management Console. This web-based interface is your central hub for managing all things AWS.
2. Navigate to the VPC service. You can usually find it under the “Networking & Content Delivery” section.
3. Click “Create VPC”.
4. Give your VPC a descriptive name tag (something like gitlab-cicd-vpc).
5. Specify the IPv4 CIDR block. This defines the range of private IP addresses for your VPC (a common starting point is 10.0.0.0/16).
6. Click “Create VPC”.

Congratulations, you’ve just carved out your private network in the cloud!

Next up, we’ll create Subnets. Imagine subnets as divisions within your VPC. They allow you to organize your resources and control network traffic. Subnets can be either public (with direct internet access) or private (without direct internet access).
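
Before we get to subnets, a quick aside for CLI fans: the VPC creation above can also be sketched with the AWS CLI (this assumes the CLI is installed and configured with credentials and a default region):

```shell
# Create the VPC and capture its ID for use in later commands
VPC_ID=$(aws ec2 create-vpc \
  --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=gitlab-cicd-vpc}]' \
  --query 'Vpc.VpcId' --output text)
echo "Created VPC: $VPC_ID"
```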

Steps to Configure Subnets:

1. Head over to “Subnets” in the VPC dashboard and click “Create Subnet”.
2. Choose an Availability Zone (AZ). Availability Zones are physically separate data centers within an AWS region, helping to ensure resilience.
3. Give your subnet a meaningful name tag (e.g., public-subnet-az1 or private-subnet-az1).
4. Specify the subnet’s IPv4 CIDR block. This should be a portion of your VPC’s CIDR block (for example, if your VPC is 10.0.0.0/16, a public subnet could be 10.0.1.0/24).
5. Repeat these steps to create additional subnets in different Availability Zones as needed for redundancy. Remember to designate whether each subnet should be public or private based on your requirements.

Now, let’s talk about Route Tables. A route table contains a set of rules that dictate where network traffic from your subnets should go. It acts like a virtual traffic director for your VPC.
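
Before we do, the subnet steps above have a CLI equivalent too (a sketch; the AZ name is a placeholder, and $VPC_ID is assumed to hold your VPC’s ID):

```shell
# Create a public subnet in one AZ; repeat with different CIDRs/AZs for redundancy
SUBNET_ID=$(aws ec2 create-subnet \
  --vpc-id "$VPC_ID" \
  --cidr-block 10.0.1.0/24 \
  --availability-zone us-east-1a \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=public-subnet-az1}]' \
  --query 'Subnet.SubnetId' --output text)
echo "Created subnet: $SUBNET_ID"
```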

Steps to Configure Route Tables:

1. Navigate to “Route Tables” in the VPC dashboard and click “Create Route Table”. Give it a descriptive name (e.g., public-route-table).
2. To allow our public subnet to reach the internet, we need a route that directs traffic to our Internet Gateway (IGW). Select your newly created route table, go to the “Routes” tab, and click “Edit routes”.
3. Add a new route with the Destination set to 0.0.0.0/0 (meaning all traffic) and the Target set to your Internet Gateway.
4. Finally, associate this route table with our public subnet. Go to the “Subnet associations” tab, click “Edit subnet associations”, select the public subnet you created earlier, and click “Save associations”.

Speaking of which, let’s create an Internet Gateway (IGW). An Internet Gateway is a virtual component that enables communication between your VPC and the public internet. It’s the bridge that allows your public resources to talk to the outside world.
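
As a CLI aside, the route table steps above look roughly like this (a sketch; $VPC_ID and $SUBNET_ID come from the earlier commands, and $IGW_ID is assumed to hold the ID of the Internet Gateway created in the next step):

```shell
# Create the route table, add a default route to the IGW,
# and associate the table with the public subnet
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RT_ID" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_ID"
```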

Steps to Create an Internet Gateway:

1. Navigate to “Internet Gateways” in the VPC dashboard.
2. Click “Create Internet Gateway”. Give it a name (e.g., gitlab-cicd-igw) and click “Create internet gateway”.
3. Attach the IGW to the VPC we created: select your newly created Internet Gateway, click “Actions”, then “Attach to VPC”. Choose your VPC from the dropdown and click “Attach internet gateway”.

Let’s shift our focus to Security Groups. Think of a security group as a virtual firewall for your EC2 instances. It controls inbound (incoming) and outbound (outgoing) traffic, letting you define precisely what kind of traffic is allowed to reach or leave your instances. Rules are based on protocols (like TCP or UDP), port numbers, and source/destination IP addresses, giving you granular control at the instance level. Importantly, security groups are stateful – if you allow an incoming request, the response traffic is automatically allowed back.
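
For completeness, the IGW steps above via the AWS CLI (a sketch; $VPC_ID is assumed from the earlier VPC command):

```shell
# Create the Internet Gateway and attach it to our VPC
IGW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"
```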

Steps to Configure Security Groups in AWS:

1. Navigate to “Security Groups” in the EC2 dashboard (you can find EC2 under the “Compute” section).
2. Click “Create security group”. Give your security group a meaningful name (e.g., gitlab-instance-sg) and a description. Make sure to select the VPC we created earlier.
3. Define the essential rules. For our GitLab instance, we’ll need to allow inbound traffic on ports 22 (SSH), 80 (HTTP), and 443 (HTTPS). Click “Add rule” and configure the Type, Protocol, Port range, and Source (you should restrict SSH access to your own IP address for security).
4. Configure outbound rules if you need to limit which external services your instance can reach (by default, all outbound traffic is allowed).
5. Click “Create security group” when you’re done.

Finally, let’s launch an EC2 instance. An EC2 instance is essentially a virtual server in the cloud. It’s where we’ll be installing Docker and Minikube.
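
The same security group can be sketched with the AWS CLI (replace the 203.0.113.10/32 placeholder with your own IP for the SSH rule; $VPC_ID is assumed from earlier):

```shell
# Create the security group and open SSH, HTTP, and HTTPS
SG_ID=$(aws ec2 create-security-group \
  --group-name gitlab-instance-sg \
  --description "GitLab instance firewall" \
  --vpc-id "$VPC_ID" \
  --query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 22 --cidr 203.0.113.10/32   # SSH, your IP only
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 80 --cidr 0.0.0.0/0          # HTTP
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 443 --cidr 0.0.0.0/0         # HTTPS
```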

Steps to Launch an EC2 Instance:

1. Open the EC2 Console and click the “Launch instance” button to begin the instance creation wizard.
2. Provide a descriptive Name tag for your EC2 instance (e.g., gitlab-ec2).
3. Choose an Amazon Machine Image (AMI). For this tutorial, let’s stick with Ubuntu, as it’s a popular and well-supported Linux distribution.
4. Select an Instance type. t2.micro is free-tier eligible and fine for exploring the AWS setup, but be aware that running Minikube plus a GitLab deployment needs significantly more resources (GitLab’s own guidance suggests several CPU cores and at least 8 GB of RAM), so plan to use a larger instance type for the later steps.
5. You’ll need a Key pair to securely connect to your instance via SSH. If you don’t have one already, click “Create new key pair”, give it a name, and download the .pem file to a secure location on your local machine. This private key is crucial for accessing your instance.
6. In the “Network settings” section, select the VPC we created earlier and choose one of your public subnets. Also, select the security group we just configured (gitlab-instance-sg) to apply the firewall rules. Ensure that the security group allows inbound SSH traffic from your IP address at a minimum.
7. Review your configuration and click “Launch instance”.

Once the instance is running, you can grab its public IP address from the EC2 console to connect to it via SSH.

Step 2: Setting Up Our Kubernetes Playground - Minikube Installation

Now that we have our EC2 instance up and running, let’s dive into the world of Kubernetes with Minikube. Minikube is a fantastic tool that lets you easily run a single-node Kubernetes cluster on your local machine or, in our case, on our EC2 instance. It’s perfect for testing and developing Kubernetes-native applications.
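
One last CLI footnote before we continue: the EC2 launch steps above correspond roughly to the following sketch (the AMI ID and key pair name are placeholders; $SUBNET_ID and $SG_ID are assumed from the earlier commands):

```shell
# Launch the instance into the public subnet with our security group
# (look up the current Ubuntu AMI ID for your region before running this)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --subnet-id "$SUBNET_ID" \
  --security-group-ids "$SG_ID" \
  --associate-public-ip-address \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=gitlab-ec2}]'
```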

Since Docker is the default driver for Minikube, we’ll start by installing Docker on our EC2 instance. With the Docker driver, Minikube runs the entire Kubernetes cluster inside a Docker container, and a Docker engine is available within that Minikube environment, allowing us to build and run Docker images.

Steps to Install Docker:

Connect to your EC2 instance via SSH using the private key you downloaded earlier:

Bash

ssh -i /path/to/your/private_key.pem ubuntu@your_ec2_public_ip

Once connected, update the package lists:

Bash

sudo apt update

Install the necessary prerequisite packages for apt:

Bash

sudo apt install apt-transport-https ca-certificates curl software-properties-common

Add the GPG key for the official Docker repository:

Bash

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Add the Docker repository to your APT sources:

Bash

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Run the update command again to refresh the package lists with the new Docker repository:

Bash

sudo apt update

To verify that you can install from the Docker repository instead of the default Ubuntu repository, run:

Bash

apt-cache policy docker-ce

Finally, install Docker:

Bash

sudo apt install docker-ce

Check that the Docker service is running:

Bash

sudo systemctl status docker

Now that Docker is installed, we need a tool to interact with our Kubernetes cluster. We’ll install the kubectl binary.

Steps to Install Kubectl Binary with Curl:

Download the kubectl binary using the following command:

Bash

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

Make the kubectl binary executable:

Bash

sudo chmod +x kubectl

Move the binary to a directory in your system’s PATH so you can run it easily:

Bash

sudo mv kubectl /usr/local/bin/kubectl

Verify the installation by checking the kubectl version:

Bash

kubectl version --client
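
With Docker and kubectl in place, we still need Minikube itself. A minimal sketch of installing and starting it with the Docker driver on an x86_64 Ubuntu instance (adjust the binary name for other architectures, and note that minikube start with the Docker driver requires at least 2 CPUs and roughly 2 GB of free memory):

```shell
# Download the latest Minikube release and install it into the PATH
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Allow the current user to talk to the Docker daemon without sudo
# (newgrp applies the group change to the current shell)
sudo usermod -aG docker $USER && newgrp docker

# Start the single-node cluster and confirm the node is up
minikube start --driver=docker
kubectl get nodes
```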

GitLab Kubernetes Deployment

With Minikube running and kubectl ready, we can now move on to deploying GitLab within our Kubernetes cluster!

Before we dive into the deployment, let’s briefly touch upon Git. Git is a widely used version control system that helps us track changes to our code, see who made those changes, and collaborate effectively on coding projects. Git enables us to:

- Manage projects using repositories.
- Create a local copy of a project using cloning.
- Track and control changes using staging and committing.
- Work on isolated features using branching and integrate them back using merging.
- Get the latest updates from the main project using pulling.
- Share our local updates with the main project using pushing.

In essence, Git is invaluable for managing code changes, especially when multiple people are working on the same files simultaneously.
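
These operations are easy to see in a tiny throwaway repository (the path, names, and commit messages below are purely illustrative):

```shell
# Create a throwaway repository to walk through the basic Git workflow
rm -rf /tmp/git-demo && mkdir -p /tmp/git-demo && cd /tmp/git-demo
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo User"

echo "hello" > README.md
git add README.md                    # stage the change
git commit -q -m "Add README"        # commit it

git checkout -q -b feature/greeting  # branch off for an isolated feature
echo "hi there" >> README.md
git commit -q -am "Expand greeting"

git checkout -q -                    # switch back to the original branch
git merge -q feature/greeting        # integrate the feature back
git log --oneline                    # both commits are now on the main line
```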

Now, let’s get GitLab up and running on our Minikube cluster.

The first thing we need to do is enable the Ingress addon in Minikube. An Ingress is a Kubernetes object that manages external access to services within our cluster, typically via HTTP and HTTPS. It allows us to configure routing rules to direct traffic to the correct GitLab service.

Enable the Ingress addon:

Bash

minikube addons enable ingress

Next, we’ll use Helm, a package manager for Kubernetes, to deploy the GitLab chart.
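
Helm isn’t installed by default on a fresh Ubuntu instance. One common way to get it is the Helm project’s official installer script (a sketch; verify the URL and script against the current Helm documentation before piping it to bash):

```shell
# Install Helm 3 using the official installer script, then verify
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version
```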

Clone the official GitLab chart repository:

Bash

git clone https://gitlab.com/gitlab-org/charts/gitlab.git
cd gitlab

Update the Helm dependencies:

Bash

helm dependency update

Now, deploy GitLab using the Helm chart. We’ll use a minimal configuration file suitable for Minikube:

Bash

helm upgrade --install gitlab . \
  --timeout 600s \
  -f https://gitlab.com/gitlab-org/charts/gitlab/raw/master/examples/values-minikube-minimum.yaml

Check if your GitLab instance is up and running by listing the Kubernetes pods:

Bash

kubectl get pods

To access your GitLab instance, you’ll need the URL exposed by the Ingress controller. The exact service name can vary with the chart version (check with kubectl get svc), but it will typically look something like:

Bash

minikube service gitlab-ingress-nginx-ingress-controller --url
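
The official chart auto-generates a random root password and stores it in a Kubernetes secret. Assuming the Helm release is named gitlab as above, it can usually be retrieved with:

```shell
# Print the auto-generated initial root password for the "gitlab" release
kubectl get secret gitlab-gitlab-initial-root-password \
  -o jsonpath='{.data.password}' | base64 --decode
```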

Open the obtained URL in your web browser and sign in as the root user with the initial administrator password to complete GitLab’s setup.

Wrapping Up

Wow! We’ve made significant progress today. We successfully set up our foundational AWS infrastructure, launched a virtual machine, and installed the tools needed to run Minikube. Then, we leveraged Helm to deploy a running GitLab instance on our Minikube cluster.

In the next article of this series, we’ll dive into the exciting part of creating our project pipeline and registering runners to automate the deployment of our projects.

Thanks for reading and following along on this journey! I look forward to seeing you in the next article, where we’ll take our CI/CD pipeline to the next level. Stay tuned! 👋