Orchestrating a CI/CD Pipeline Deployment with Terraform (Part 1)

Mondo Social Updated on 2024-03-06

In today's fast-paced development environment, a seamless, robust CI/CD pipeline is critical to delivering high-quality software. In this article, we'll walk through the steps to set up a deployment using Bitbucket Pipelines, Argocd GitOps, and AWS EKS, all orchestrated with Terraform. In Part 1, we introduce the first three steps of creating and deploying a CI/CD pipeline using Terraform.

We have two environments: one uses private nodes with 2 NAT gateways (dev), and the other uses public nodes with 1 NAT gateway (prod).

To demonstrate, we'll use an AWS Ubuntu 22.04.3 LTS EC2 instance with the AdministratorAccess policy attached to clone the repo:

Then change the directory to:

eks-tf-bitbucket-pipeline-argocd-gitops/eks tf/eks infra-tf (public node) -prod
Or you can use the dev version, which is exactly the same except that the dev version has 2 NAT gateways and the prod version only has 1.

Now, here are some things you need to do before you apply this to create an AWS EKS Infra.

Install the following tools on your PC (choose according to the operating system).

aws cli

terraform cli

kubectl cli

If you want to run the script on your own computer, configure an AWS access key.
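The access key can be set up with $ aws configure, or written directly into the shared credentials file. A minimal sketch of ~/.aws/credentials, with placeholder values only:

```ini
# ~/.aws/credentials -- placeholder values; substitute your own key pair
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```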

Now run the $ terraform fmt command to format the Terraform files.

Next, edit the environment name, K8S version, and region of the EKS deployment in the vars.tf file.

You can also use the ap-southeast-1 region by changing it through the default value, then save the file.

And don't forget to change desired_size and instance_types in the eks-node-groups-policy.tf file.

You should choose a medium or larger instance type for your EKS cluster nodes; otherwise you will have problems installing Argocd or other applications, since each EC2 instance type has a limit on the number of pods it can run.
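As a rough sketch of the two files involved (the variable and resource names here are assumptions; match them against the actual vars.tf and eks-node-groups-policy.tf in the repo), the edits look like:

```hcl
# vars.tf (sketch) -- change the defaults, then save
variable "environment" {
  default = "prod"
}

variable "k8s_version" {
  default = "1.28"            # the EKS Kubernetes version
}

variable "region" {
  default = "ap-southeast-1"  # change the region via this default
}

# eks-node-groups-policy.tf (sketch) -- use medium or larger instances
resource "aws_eks_node_group" "nodes" {
  # ...
  instance_types = ["t3.medium"]
  scaling_config {
    desired_size = 2
    min_size     = 1
    max_size     = 3
  }
}
```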

Now run $ terraform init to download the dependencies:

After that, you can run $ terraform plan and then $ terraform apply.

Enter yes at the prompt and wait for it to finish, which can take 10-15 minutes.

TF will create the following AWS services:

VPC subnet.

Subnet routing table.

IAM roles and policies.

Internet gateways.

NAT Gateway.

Elastic IPs.

EKS cluster and node groups.

When the tf script completes successfully, you'll see a screen similar to the following at the end:

You can also see if an EKS cluster and all resources have been created in the AWS console.

Now we need to grant kubectl access to the eks cluster, and for this we need to run the following command:

$ aws eks update-kubeconfig --region region-code --name my-cluster
You need to update region-code and my-cluster to match your environment, for example:
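For instance, with a hypothetical cluster named my-eks-cluster in ap-southeast-1, the substituted command looks like this (the snippet just assembles and prints the command so the substitution is explicit):

```shell
# Hypothetical values; substitute your own region and cluster name.
REGION="ap-southeast-1"
CLUSTER_NAME="my-eks-cluster"

# Assemble the exact command to run against your environment:
KUBECONFIG_CMD="aws eks update-kubeconfig --region ${REGION} --name ${CLUSTER_NAME}"
echo "${KUBECONFIG_CMD}"
```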

Then dismiss this warning:

To do this, you need to add the IAM username and arn to the EKS configmap using the following command:

$ kubectl edit configmap aws-auth -n kube-system
It will open a new window as shown below:

After that, add the following after the mapRoles section:

mapUsers: |
  - groups:
      - system:masters
    userarn: arn:aws:iam::xxxxxxxxxxxx:user/devashish
    username: devashish
Don't forget to change this to the IAM user you are using to access the EKS console.

If root privileges are used to create and access an EKS cluster, you must use the userarn and username of root.

Then save the file with :wq and refresh the EKS cluster page; the IAM user warning should now be gone.

In addition, you can see nodes in the compute tab of the eks cluster that did not appear before due to RBAC permission issues.

You can also run a kubectl command, for example $ kubectl get nodes, to check that the EKS cluster is connected to the kubectl CLI tool.

Now that the EKS cluster is ready and running, let's move on to the next step.

To do this, we'll use the eks-addons README.md file in the repo; just follow the steps in that file.

Install Argocd. Per the official Argo CD documentation, the standard installation is:

$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

You can check the running argocd pods using the following command:

$ kubectl get po -n argocd
Now that Argocd is installed, let's move on to the next step.

Use ACM to deploy ingress-nginx for NLB

Before we move on to the next step, we need the following:

VPC CIDR, also known as proxy-real-ip-cidr

The AWS ACM certificate ARN, which begins with arn:aws:acm:

So, if not, create them.

First, you'll need to use wget to download the ingress-nginx for NLB controller deploy script.

$ wget
Then use any text editor to open it.

$ nano deploy.yaml
Then change these values according to your configuration: the VPC CIDR (proxy-real-ip-cidr) and the ACM certificate ARN.
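In the NLB-with-TLS-termination variant of the manifest, the two values live roughly here (excerpted and abbreviated; the exact layout of your deploy.yaml may differ slightly):

```yaml
# ConfigMap: set proxy-real-ip-cidr to your VPC CIDR
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-real-ip-cidr: "10.0.0.0/16"        # your VPC CIDR
---
# Service: point the ssl-cert annotation at your ACM certificate ARN
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account-id>:certificate/<certificate-id>"
```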

After you change the values to match your setup (make sure you used a wildcard when you created the ACM certificate), run the following command:

$ kubectl apply -f deploy.yaml
Deploy the Argocd Pod Ingress service.

First, create a YAML file with $ nano ingress.yaml and paste in the manifest from the eks-addons README.md file. Don't forget to change the host value.
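The manifest in the README.md is the source of truth; as a minimal sketch of what an Argocd server ingress typically looks like (the host below is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - host: argocd.example.com        # change the host value to your subdomain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server   # the service installed by Argocd
                port:
                  number: 443
```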

Then run:

$ kubectl apply -f ingress.yaml
Deploy the argocd service ingress file on EKS. You can run the following command to check whether the service is deployed:

$ kubectl get ingress -n argocd
The address value will take some time to appear, so please be patient. Then create a DNS record that points the argocd subdomain to that NLB.

Now, you can access the Argocd UI at that subdomain:

Recover the initial password from the CLI and log in to Argocd with it; the username is admin.

Use the following command to retrieve your password:

$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
After that, you can log in successfully.

Now that argocd is up and running, let's move on to the next step.

To do this, we need to create a Bitbucket repo and an AWS ECR repo; the Bitbucket pipeline will build the application image and push it to the ECR repo.

Go to your Bitbucket account and create a new repo:

After creating the repo, we need 3 files:

main.js

Dockerfile

bitbucket-pipelines.yml

Sample files are available in the repo under the bitbucket pipeline + dockerfile folder.

So, let's create all 3 files according to your application. I'll use a sample Node JS application in the main.js file.
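As a sketch of the Dockerfile for such a sample app (this assumes main.js starts a server on port 3000; the repo's sample files may differ):

```dockerfile
# Hypothetical Dockerfile for the sample Node JS app
FROM node:18-alpine
WORKDIR /app
COPY main.js .
EXPOSE 3000
CMD ["node", "main.js"]
```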

Remember to create an AWS ECR repo before creating the Bitbucket pipeline YAML file, as it's needed for the pipeline to run. For example, the following private repo was created in AWS ECR.

Now we need to copy and paste some values from the ECR repository into the YAML file in Bitbucket Pipeline.

Make sure to update the branch name and the -profile tag as needed; otherwise the pipeline won't be able to access the IAM access key.

Now, we need to create an AWS access key pair with access to the ECR repository and add it to the Bitbucket pipeline as the following variables.

ecr_access_key

ecr_secret_key

ecr_region
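Putting these together, here is a sketch of a bitbucket-pipelines.yml that builds the image and pushes it with the atlassian/aws-ecr-push-image pipe, using the three repository variables above. The branch name, image name, and pipe version are placeholder assumptions; the sample file in the repo is authoritative.

```yaml
image: atlassian/default-image:3

pipelines:
  branches:
    main:                                # update to your branch name
      - step:
          name: Build and push to ECR
          services:
            - docker
          script:
            - docker build -t my-app .   # my-app is a placeholder image name
            - pipe: atlassian/aws-ecr-push-image:2.0.0   # check the pipe's current version
              variables:
                AWS_ACCESS_KEY_ID: $ecr_access_key
                AWS_SECRET_ACCESS_KEY: $ecr_secret_key
                AWS_DEFAULT_REGION: $ecr_region
                IMAGE_NAME: my-app
                TAGS: ${BITBUCKET_BUILD_NUMBER}   # tag with the pipeline build number
```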

In addition, to add repository variables to the Bitbucket pipeline, we first need to enable Pipelines for the repo.

Then, for ECR repo access, we need to create an IAM user with the AmazonEC2ContainerRegistryPowerUser policy attached.

Then create an AWS Access Key Pair and add it to the Bitbucket Pipelines repository variables as follows:

After that, you should have the following files in your repo:

Make sure to write the correct file name, otherwise it may not work and the pipeline will not execute.

Once all the steps are completed, the pipeline will run automatically.

In a few minutes, the pipeline should be running successfully and the container image will be deployed to the ECR repo.

We can see that the image tagged 1 was successfully uploaded to ECR. Because we used the pipeline build number as the image tag, it's easy to match an image in the ECR repo to its pipeline run.

Now that the Bitbucket pipeline deployed to the ECR Repo is up and running, it's ready to take the next step.

In Part 2, we'll cover the last two important steps in more detail and show you how to implement end-to-end Argocd Gitops on Bitbucket Pipeline using Terraform.
