
👷 CI/CD with Kubernetes

This is an optional section detailing how to set up a continuous integration (CI) and continuous deployment (CD) pipeline, which will deploy to Kubernetes using Helm.

There are many CI/CD solutions available; we will use GitHub Actions, as it's easy to set up and most developers will already have GitHub accounts. This section assumes familiarity with git and basic GitHub usage such as forking & cloning.

šŸ“ NOTE: This is not intended to be full guide or tutorial on GitHub Actions, you would be better off starting here or here.

🚩 Get Started with GitHub Actions

We'll use a fork of this repo in order to set things up, but in principle you could also start with a new/empty repo on GitHub.

Inside the .github/workflows directory, create a new file called build-release.yaml and paste in the following contents:

šŸ“ NOTE: This is special directory path used by GitHub Actions!

# Name of the workflow
name: CI Build & Release

# Triggers for running
on:
  workflow_dispatch: # This allows manually running from GitHub web UI
  push:
    branches: ["main"] # Standard CI trigger when main branch is pushed

# One job for building the app
jobs:
  buildJob:
    name: "Build & push images"
    runs-on: ubuntu-latest
    steps:
      # Checkout code from another repo on GitHub
      - name: "Checkout app code repo"
        uses: actions/checkout@v2
        with:
          repository: benc-uk/smilr

The comments in the YAML should hopefully explain what is happening, but in summary this will run a short, single-step job that simply checks out the code of the Smilr app repo. The name and filename don't reflect what the workflow does yet, but rather what we are building towards.

Now commit the changes and push to the main branch. Yes, this is not a typical way of working, but adding a code review or PR process here would merely distract from what we are doing.
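
For reference, staging and pushing the new workflow file might look something like this (the commit message is just an example):

git add .github/workflows/build-release.yaml
git commit -m "Add CI build workflow"
git push origin main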

The best place to check the status is the GitHub web site, on the 'Actions' page of your forked repo, e.g. https://github.com/{your-github-user}/kube-workshop/actions. There you should be able to look at the workflow run, its status, plus the output & other details.

šŸ“ NOTE: Itā€™s unusual for the code you are building to be a in separate repo from the workflow(s), in most cases they will be in the same code base, however it doesnā€™t really make any difference to the approach we will take.

āŒØļø Set Up GitHub CLI

Install the GitHub CLI; this will make setting up the secrets required in the next part much simpler. All commands below assume you are running them from within the cloned repo on your local machine.
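
How you install the CLI depends on your platform; for example, on macOS it's available via Homebrew, and on Windows via winget (see the GitHub CLI docs for other options):

brew install gh             # macOS, via Homebrew
winget install GitHub.cli   # Windows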

Now log in using the GitHub CLI, following the authentication steps when prompted:

gh auth login
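
If you want to double check the login worked before going further, the CLI can report its own auth status:

gh auth status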

Once the CLI is set up, we can use it to create a secret within your repo called ACR_PASSWORD; we'll reference this secret in the next section. This combines the Azure CLI and GitHub CLI into one neat command to fetch and store the credentials:

gh secret set ACR_PASSWORD --body "$(az acr credential show --name $ACR_NAME --query "passwords[0].value" -o tsv)"
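
This assumes $ACR_NAME is still set in your shell from the earlier sections. You can confirm the secret was created (the value itself is never shown) with:

gh secret list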

📦 Add CI Steps For Image Building

The workflow doesn't really do much yet: the code is checked out, but nothing gets built and no images are created or pushed anywhere. So let's update the workflow YAML to build and push the application container images, using the code checked out in the previous workflow step.

Add this at the top level of the YAML, e.g. just under the on: section. Change the __YOUR_ACR_NAME__ string to the name of the ACR you deployed previously (do not include the azurecr.io part).

env:
  ACR_NAME: __YOUR_ACR_NAME__
  IMAGE_TAG: ${{ github.run_id }}

Add this section under the "Checkout app code repo" step in the job; it will require indenting to the correct level:

      - name: "Authenticate to access ACR"
        uses: docker/login-action@master
        with:
          registry: ${{ env.ACR_NAME }}.azurecr.io
          username: ${{ env.ACR_NAME }}
          password: ${{ secrets.ACR_PASSWORD }}

      - name: "Build & Push: data API"
        run: |
          docker buildx build . -f node/data-api/Dockerfile \
            -t $ACR_NAME.azurecr.io/smilr/data-api:$IMAGE_TAG \
            -t $ACR_NAME.azurecr.io/smilr/data-api:latest
          docker push $ACR_NAME.azurecr.io/smilr/data-api:$IMAGE_TAG

      - name: "Build & Push: frontend"
        run: |
          docker buildx build . -f node/frontend/Dockerfile \
            -t $ACR_NAME.azurecr.io/smilr/frontend:$IMAGE_TAG \
            -t $ACR_NAME.azurecr.io/smilr/frontend:latest
          docker push $ACR_NAME.azurecr.io/smilr/frontend:$IMAGE_TAG

Save the file, commit and push to main just as before. Then check the status from the 'Actions' page of your forked repo in the GitHub UI.

The workflow now does three important things:

- Authenticates to the ACR, using the docker/login-action and the ACR_PASSWORD secret we created.
- Builds the data API and frontend container images with docker buildx.
- Pushes the images up to the ACR, tagged with the IMAGE_TAG value set at the top of the workflow.

The "Build & push images" job and the workflow should take around 2-3 minutes to complete.

🔌 Connect To Kubernetes

We'll be using an approach of "pushing" changes from the workflow pipeline to the cluster, essentially the same as we have been doing from our local machines with kubectl and helm commands.

To do this we need a way to authenticate, so we'll use another GitHub secret and store the cluster credentials in it.

There's a very neat 'one liner' you can run to do this. It takes the output of the az aks get-credentials command we ran previously and stores it as a secret called CLUSTER_KUBECONFIG. Run the following:

gh secret set CLUSTER_KUBECONFIG --body "$(az aks get-credentials --name $AKS_NAME --resource-group $RES_GROUP --file -)"
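
If you're curious what is actually being stored, you can run the inner command on its own; with --file - it prints a kubeconfig for the cluster to stdout rather than writing it to a file:

az aks get-credentials --name $AKS_NAME --resource-group $RES_GROUP --file - | head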

Next add a second job called releaseJob to the workflow YAML. Beware the indentation: this should sit under the jobs: key.

  releaseJob:
    name: "Release to Kubernetes"
    runs-on: ubuntu-latest
    if: ${{ github.ref == 'refs/heads/main' }}
    needs: buildJob

    steps:
      - name: "Configure kubeconfig"
        uses: azure/k8s-set-context@v2
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.CLUSTER_KUBECONFIG }}

      - name: "Sanity check Kubernetes"
        run: kubectl get nodes

This is doing a bunch of things, so let's step through it:

- needs: buildJob means this job will only start once the "Build & push images" job has completed successfully.
- The if condition restricts the release so it only runs for the main branch.
- The azure/k8s-set-context action takes the kubeconfig stored in the CLUSTER_KUBECONFIG secret and configures it for the job, so later steps can run kubectl and helm against our cluster.
- Finally, a simple kubectl get nodes acts as a sanity check that the connection to the cluster is working.

Save the file, commit and push to main just as before, and check the status using the GitHub actions page.
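
You can also keep an eye on runs from the terminal using the GitHub CLI, rather than the web UI:

gh run list --workflow "CI Build & Release" --limit 3
gh run watch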

🪖 Deploy using Helm

Nearly there! Now we want to run helm in order to deploy the Smilr app into the cluster, and make sure it deploys from the images we just built and pushed. There are two ways for Helm to access a chart: either from the local filesystem, or from a remote chart published to a chart repo. We'll be using the first approach. The Smilr git repo contains a Helm chart for us to use, so we'll check it out and then run helm to release the chart.
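
If you want to look at the chart before wiring it into the workflow, this is purely optional, you can clone the Smilr repo locally and inspect it, e.g. to see which values it accepts:

git clone https://github.com/benc-uk/smilr
helm show values ./smilr/kubernetes/helm/smilr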

Add the following two steps to the releaseJob (beware the indentation again!):

      - name: "Checkout app code repo" # Needed for the Helm chart
        uses: actions/checkout@v2
        with:
          repository: benc-uk/smilr

      - name: "Update chart dependencies"
        run: helm dependency update ./kubernetes/helm/smilr

You can save, commit and push at this point to run the workflow and check everything is OK, or press on to the next step.

Add one final step to the releaseJob, which runs the helm upgrade command to create or update a release (see the full docs on this command for all the available options):

      - name: "Release app with Helm"
        run: |
          helm upgrade myapp ./kubernetes/helm/smilr --install --wait --timeout 120s \
          --set registryPrefix=$ACR_NAME.azurecr.io/ \
          --set frontend.imageTag=$IMAGE_TAG \
          --set dataApi.imageTag=$IMAGE_TAG \
          --set mongodb.enabled=true

This command is doing an awful lot, so let's break it down:

- helm upgrade with the --install flag either creates a release called myapp, or upgrades it if it already exists.
- The chart is taken from the local path ./kubernetes/helm/smilr, which we checked out in the previous steps.
- --wait --timeout 120s makes Helm wait (for up to two minutes) for the deployments to become ready before the step is marked as successful.
- registryPrefix points the chart at our ACR, so the images we just built & pushed are the ones that get pulled.
- frontend.imageTag and dataApi.imageTag are set to the same IMAGE_TAG used in the build job, tying this release to the images from this workflow run.
- mongodb.enabled=true means MongoDB is deployed as part of the release.

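If you want to preview what this will deploy without touching the cluster, you can render the chart locally with helm template from a clone of the Smilr repo; this sketch mirrors the workflow values, using a placeholder tag and assuming $ACR_NAME is set and the chart's dependencies can be fetched:

helm dependency update ./kubernetes/helm/smilr
helm template myapp ./kubernetes/helm/smilr \
  --set registryPrefix=$ACR_NAME.azurecr.io/ \
  --set frontend.imageTag=latest \
  --set dataApi.imageTag=latest \
  --set mongodb.enabled=true
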
Phew! As you can see, Helm is a powerful way to deploy apps to Kubernetes, sometimes with a single command.

Once again save, commit and push, then check the status of the workflow. It's very likely you've made a mistake somewhere; keep committing and pushing fixes and re-running the workflow until it completes and runs green.

You can validate the deployment with the usual kubectl get pods command, and helm ls to view the Helm release. Hopefully all the pods will be running.
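
For example:

kubectl get pods
helm ls
helm status myapp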

šŸ… Bonus - Environments

GitHub has the concept of environments, which are an abstraction representing a target set of resources or a deployed application. This lets you use the GitHub UI to see the status of deployments targeting that environment, and even gives users a link to access it.

We can add an environment simply by adding the following bit of YAML under the releaseJob job:

    environment:
      name: workshop-environment
      url: http://__PUBLIC_IP_OF_CLUSTER__/

Tip: The environment section needs to be indented to the same level as the needs and if keys in the job YAML.

The name can be anything you wish, and the URL needs to point to the public IP address of your cluster which you were referencing earlier. If you've forgotten it, try running:
kubectl get svc -A | grep LoadBalancer | awk '{print $5}'

Return to Main Index 🏠 ‖ Previous Section ⏪