Setting Up GitHub Actions Self-Hosted Runners on Kubernetes

In this blog, we'll walk through setting up GitHub Actions self-hosted runners in a Kubernetes cluster. This setup keeps our runners ephemeral, scaling up and down with demand. We'll also cover customizing the Docker image for the runners and configuring the necessary Kubernetes secrets.

The examples in this blog use a GKE cluster on Google Cloud Platform, but most of the steps apply to any hosted Kubernetes platform.

Prerequisites

Before we begin, ensure you have:

  • A Kubernetes cluster, with kubectl credentials configured
  • Helm installed for deploying the required charts
  • Docker installed for building and pushing images
  • A container registry to store our images, along with credentials to push and pull from it

1. Install the Actions Runner Controller

First, we need to install the Actions Runner Controller (ARC) in our Kubernetes cluster. This controller manages self-hosted runners for GitHub Actions. We install it from the official Helm chart with the following command:

NAMESPACE="arc-systems"
helm install arc \
    --namespace "${NAMESPACE}" \
    --create-namespace \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
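Once the chart is installed, it's worth confirming the controller came up before moving on. A quick check, assuming the same release and namespace names as above:

```shell
# List the controller pods in the arc-systems namespace
kubectl get pods -n "${NAMESPACE}"

# Show the Helm release status for the controller chart
helm status arc -n "${NAMESPACE}"
```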

2. Authenticate with Google Artifact Registry

Since we’re on GCP, we use Google Artifact Registry to host our images. To allow our Kubernetes cluster to authenticate with Artifact Registry and fetch our customized Docker image, we need to create a Kubernetes secret. This secret will store the credentials necessary for accessing the Artifact Registry.

Adapt the following command to create the secret, filling in your registry server, service-account key path, and email:

kubectl create secret docker-registry artifact-registry-secret \
    --namespace arc-runners \
    --docker-server=<YOUR_ARTIFACT_REGISTRY_SERVER> \
    --docker-username=_json_key \
    --docker-password="$(cat <PATH_TO_YOUR_SERVICE_ACCOUNT_KEY_JSON>)" \
    --docker-email=<YOUR_EMAIL>
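For context, "kubectl create secret docker-registry" just wraps the credentials into a .dockerconfigjson payload. The sketch below reconstructs that payload with stand-in values; the server and key_json here are hypothetical placeholders, not a real key:

```shell
# Hypothetical registry host and a stand-in for the service-account key file contents
server="asia-south2-docker.pkg.dev"
key_json='{"type":"service_account"}'

# Docker registry auth is "username:password" base64-encoded; for Artifact
# Registry the username is the literal _json_key and the password is the key JSON
auth=$(printf '_json_key:%s' "$key_json" | base64 | tr -d '\n')

# This is the JSON stored under the secret's .dockerconfigjson key
printf '{"auths":{"%s":{"username":"_json_key","auth":"%s"}}}\n' "$server" "$auth"
```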

3. GitHub App and GitHub Actions Authentication

Next, we need to set up a GitHub App with credentials that allow us to make the required GitHub API calls from our Actions workflows. Follow the GitHub documentation on creating the app and using it to make authenticated API requests. To manage the app's credentials securely, we store its app ID, installation ID, and private key as a Kubernetes secret in our cluster so they can be consumed by the pods that need them.

kubectl create secret generic actions-runner-rw-secret \
    --namespace arc-runners \
    --from-literal=github_app_id='YOUR_GITHUB_APP_ID' \
    --from-literal=github_app_installation_id='YOUR_GITHUB_APP_INSTALLATION_ID' \
    --from-literal=github_app_private_key='YOUR_GITHUB_APP_PRIVATE_KEY'
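One easy mistake here is pasting a malformed private key. A quick local sanity check that the key file downloaded from GitHub looks like the PEM GitHub issues (the filename below is hypothetical):

```shell
# Hypothetical path to the key downloaded from the GitHub App settings page
key_file="github-app.private-key.pem"

# GitHub App keys are RSA private keys in PEM format
if [ -f "$key_file" ] && head -n1 "$key_file" | grep -q 'BEGIN RSA PRIVATE KEY'; then
    echo "key file looks like a PEM RSA key"
else
    echo "key file missing or in an unexpected format"
fi
```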

4. Create a Custom Docker Image

Since our Actions Runners are going to run in pods on our cluster, we now create a custom Docker image for the GitHub Actions runner, based on the official Actions Runner image and including necessary dependencies. Here's the Dockerfile we use:

# Source: https://github.com/dotnet/dotnet-docker
FROM mcr.microsoft.com/dotnet/runtime-deps:6.0-jammy AS build

ARG TARGETOS=linux
ARG TARGETARCH=amd64
ARG RUNNER_VERSION=2.317.0
ARG RUNNER_CONTAINER_HOOKS_VERSION=0.6.0
ARG DOCKER_VERSION=25.0.4
ARG BUILDX_VERSION=0.13.1

RUN apt update -y && apt install curl unzip -y

WORKDIR /actions-runner
RUN export RUNNER_ARCH=${TARGETARCH} \
    && if [ "$RUNNER_ARCH" = "amd64" ]; then export RUNNER_ARCH=x64 ; fi \
    && curl -f -L -o runner.tar.gz https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-${TARGETOS}-${RUNNER_ARCH}-${RUNNER_VERSION}.tar.gz \
    && tar xzf ./runner.tar.gz \
    && rm runner.tar.gz

RUN curl -f -L -o runner-container-hooks.zip https://github.com/actions/runner-container-hooks/releases/download/v${RUNNER_CONTAINER_HOOKS_VERSION}/actions-runner-hooks-k8s-${RUNNER_CONTAINER_HOOKS_VERSION}.zip \
    && unzip ./runner-container-hooks.zip -d ./k8s \
    && rm runner-container-hooks.zip

RUN export RUNNER_ARCH=${TARGETARCH} \
    && if [ "$RUNNER_ARCH" = "amd64" ]; then export DOCKER_ARCH=x86_64 ; fi \
    && if [ "$RUNNER_ARCH" = "arm64" ]; then export DOCKER_ARCH=aarch64 ; fi \
    && curl -fLo docker.tgz https://download.docker.com/${TARGETOS}/static/stable/${DOCKER_ARCH}/docker-${DOCKER_VERSION}.tgz \
    && tar zxvf docker.tgz \
    && rm -rf docker.tgz \
    && mkdir -p /usr/local/lib/docker/cli-plugins \
    && curl -fLo /usr/local/lib/docker/cli-plugins/docker-buildx \
        "https://github.com/docker/buildx/releases/download/v${BUILDX_VERSION}/buildx-v${BUILDX_VERSION}.linux-${TARGETARCH}" \
    && chmod +x /usr/local/lib/docker/cli-plugins/docker-buildx

FROM mcr.microsoft.com/dotnet/runtime-deps:6.0-jammy

ENV DEBIAN_FRONTEND=noninteractive
ENV RUNNER_MANUALLY_TRAP_SIG=1
ENV ACTIONS_RUNNER_PRINT_LOG_TO_STDOUT=1
ENV ImageOS=ubuntu22

# 'gpg-agent' and 'software-properties-common' are needed for the 'add-apt-repository' command that follows
RUN apt update -y \
    && apt install -y --no-install-recommends sudo lsb-release gpg-agent software-properties-common \
    && rm -rf /var/lib/apt/lists/*

# Configure git-core/ppa based on guidance here:  https://git-scm.com/download/linux
RUN add-apt-repository ppa:git-core/ppa \
    && apt update -y

RUN adduser --disabled-password --gecos "" --uid 1001 runner \
    && groupadd docker --gid 123 \
    && usermod -aG sudo runner \
    && usermod -aG docker runner \
    && echo "%sudo   ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers \
    && echo "Defaults env_keep += \"DEBIAN_FRONTEND\"" >> /etc/sudoers

WORKDIR /home/runner

RUN apt-get update || true && apt-get install -y openjdk-17-jdk && apt-get clean

COPY --chown=runner:docker --from=build /actions-runner .
COPY --from=build /usr/local/lib/docker/cli-plugins/docker-buildx /usr/local/lib/docker/cli-plugins/docker-buildx

RUN install -o root -g root -m 755 docker/* /usr/bin/ && rm -rf docker

USER runner
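With the Dockerfile in place, a local build and smoke test confirms the JDK and Docker CLI are present before wiring the image into the cluster. The image tag here is just an example:

```shell
# Build the runner image locally (tag is illustrative)
docker build -t github-runner-jdk17:local .

# Smoke-test: both the JDK and the Docker CLI should be on the path
docker run --rm github-runner-jdk17:local java -version
docker run --rm github-runner-jdk17:local docker --version
```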

5. Deploy the Runner

We now create a deploy.sh script that builds and pushes the Docker image, updates the Helm chart values, and deploys the custom runner. For detailed information and options on deploying runners, see the official GitHub documentation.

Here's the script we will use:

#!/bin/bash

# Variables
INSTALLATION_NAME="arc-runner-set-jdk17"
NAMESPACE="arc-runners"
VALUES_FILE="values.yaml"
IMAGE_REGISTRY="region-name-south2-docker.pkg.dev/google-project-id/artifact-registry-repository-name/docker-image-name"
GIT_COMMIT_HASH=$(git rev-parse --short HEAD)
NEW_IMAGE_TAG="dev-${GIT_COMMIT_HASH}"
FULL_IMAGE="${IMAGE_REGISTRY}:${NEW_IMAGE_TAG}"

# Extract the image tag currently set in values.yaml and compare it with the expected tag
CURRENT_IMAGE_TAG=$(grep 'image: region-name-south2-docker.pkg.dev/google-project-id/artifact-registry-repository-name/docker-image-name' ${VALUES_FILE} | awk -F ':' '{print $NF}')
if [ "${CURRENT_IMAGE_TAG}" != "${NEW_IMAGE_TAG}" ]; then
    echo "Detected change in Git commit hash. Building new Docker image..."
    
    # Build and push the Docker image
    echo "Building Docker image with tag ${FULL_IMAGE}..."
    docker build -t "${FULL_IMAGE}" .

    echo "Pushing Docker image ${FULL_IMAGE}..."
    docker push "${FULL_IMAGE}"

    # Update the image tag in the values.yaml file for the 'runner' container
    echo "Updating 'runner' container image tag in ${VALUES_FILE} to ${FULL_IMAGE}..."
    sed -i.bak "s|image: region-name-south2-docker.pkg.dev/google-project-id/artifact-registry-repository-name/docker-image-name:.*|image: ${FULL_IMAGE}|" ${VALUES_FILE}
else
    echo "No change detected in Git commit hash. Skipping Docker image build and push."
fi

# Deploy the Helm chart
echo "Deploying Helm chart with new image tag..."
helm upgrade --install "${INSTALLATION_NAME}" \
    --namespace "${NAMESPACE}" \
    --create-namespace \
    -f ${VALUES_FILE} \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set

# Check if Helm deployment was successful
if [ $? -eq 0 ]; then
  echo "Helm chart deployed successfully with image tag: ${NEW_IMAGE_TAG}"
else
  echo "Failed to deploy Helm chart. Check the error messages above."
  exit 1
fi

# Cleanup backup values.yaml file created by sed
rm -f ${VALUES_FILE}.bak
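The tag-extraction step is easy to get wrong: the tag is the text after the last ':' in the image line, so the line must be split on ':' rather than '/' (splitting on '/' would return "docker-image-name:tag", not the tag alone). A quick check of that logic in isolation:

```shell
# A sample image line in the same shape as the one in values.yaml
line='image: region-name-south2-docker.pkg.dev/google-project-id/repo/docker-image-name:dev-92df8c9'

# Split on ':' and take the last field to isolate the tag
tag=$(printf '%s\n' "$line" | awk -F ':' '{print $NF}')
echo "$tag"   # prints dev-92df8c9
```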

6. Configuration Files

Here are the key configuration files used:

values.yaml

## githubConfigUrl is the GitHub url for where you want to configure runners
## ex: https://github.com/myorg/myrepo or https://github.com/myorg
githubConfigUrl: "https://github.com/organization-name"

## githubConfigSecret is the k8s secrets to use when auth with GitHub API.
## You can choose to use GitHub App or a PAT token
#githubConfigSecret:
  ### GitHub Apps Configuration
  ## NOTE: IDs MUST be strings, use quotes
  #github_app_id: ""
  #github_app_installation_id: ""
  #github_app_private_key: |

  ### GitHub PAT Configuration
  #github_token: ""
## If you have a pre-define Kubernetes secret in the same namespace the gha-runner-scale-set is going to deploy,
## you can also reference it via `githubConfigSecret: pre-defined-secret`.
## You need to make sure your predefined secret has all the required secret data set properly.
##   For a pre-defined secret using GitHub PAT, the secret needs to be created like this:
##   > kubectl create secret generic pre-defined-secret --namespace=my_namespace --from-literal=github_token='ghp_your_pat'
##   For a pre-defined secret using GitHub App, the secret needs to be created like this:
##   > kubectl create secret generic pre-defined-secret --namespace=my_namespace --from-literal=github_app_id=123456 --from-literal=github_app_installation_id=654321 --from-literal=github_app_private_key='-----BEGIN CERTIFICATE-----*******'
githubConfigSecret: actions-runner-rw-secret

## proxy can be used to define proxy settings that will be used by the
## controller, the listener and the runner of this scale set.
#
# proxy:
#   http:
#     url: http://proxy.com:1234
#     credentialSecretRef: proxy-auth # a secret with `username` and `password` keys
#   https:
#     url: http://proxy.com:1234
#     credentialSecretRef: proxy-auth # a secret with `username` and `password` keys
#   noProxy:
#     - example.com
#     - example.org

## maxRunners is the max number of runners the autoscaling runner set will scale up to.
maxRunners: 5

## minRunners is the min number of idle runners. The target number of runners created will be
## calculated as a sum of minRunners and the number of jobs assigned to the scale set.
minRunners: 1

# runnerGroup: "default"

## name of the runner scale set to create.  Defaults to the helm release name
# runnerScaleSetName: ""

## A self-signed CA certificate for communication with the GitHub server can be
## provided using a config map key selector. If `runnerMountPath` is set, for
## each runner pod ARC will:
## - create a `github-server-tls-cert` volume containing the certificate
##   specified in `certificateFrom`
## - mount that volume on path `runnerMountPath`/{certificate name}
## - set NODE_EXTRA_CA_CERTS environment variable to that same path
## - set RUNNER_UPDATE_CA_CERTS environment variable to "1" (as of version
##   2.303.0 this will instruct the runner to reload certificates on the host)
##
## If any of the above had already been set by the user in the runner pod
## template, ARC will observe those and not overwrite them.
## Example configuration:
#
# githubServerTLS:
#   certificateFrom:
#     configMapKeyRef:
#       name: config-map-name
#       key: ca.crt
#   runnerMountPath: /usr/local/share/ca-certificates/

## Container mode is an object that provides out-of-box configuration
## for dind and kubernetes mode. Template will be modified as documented under the
## template object.
##
## If any customization is required for dind or kubernetes mode, containerMode should remain
## empty, and configuration should be applied to the template.
#containerMode:
  #type: "dind"  ## type can be set to dind or kubernetes
#   ## the following is required when containerMode.type=kubernetes
#   kubernetesModeWorkVolumeClaim:
#     accessModes: ["ReadWriteOnce"]
#     # For local testing, use https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/quickstart.md to provide dynamic provision volume with storageClassName: openebs-hostpath
#     storageClassName: "dynamic-blob-storage"
#     resources:
#       requests:
#         storage: 1Gi
#   kubernetesModeServiceAccount:
#     annotations:

## listenerTemplate is the PodSpec for each listener Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
# listenerTemplate:
#   spec:
#     containers:
#     # Use this section to append additional configuration to the listener container.
#     # If you change the name of the container, the configuration will not be applied to the listener,
#     # and it will be treated as a side-car container.
#     - name: listener
#       securityContext:
#         runAsUser: 1000
#     # Use this section to add the configuration of a side-car container.
#     # Comment it out or remove it if you don't need it.
#     # Spec for this container will be applied as is without any modifications.
#     - name: side-car
#       image: region-name-south2-docker.pkg.dev/google-project-id/artifact-registry-repository-name/docker-image-name:dev-image-tag

## template is the PodSpec for each runner Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
template:
  # template.spec will be modified if you change the container mode
  # with containerMode.type=dind, we will populate the template.spec with following pod spec
  # template:
  metadata:
    annotations:
      cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
  spec:
    imagePullSecrets:
      - name: artifact-registry-secret
    terminationGracePeriodSeconds: 60
    containers:
    - name: runner
      image: region-name-south2-docker.pkg.dev/google-project-id/artifact-registry-repository-name/docker-image-name:dev-image-tag
      command: ["/home/runner/run.sh"]
      resources:
        requests:
          memory: "2Gi"
          cpu: "2"
        limits:
          memory: "2Gi"
          cpu: "2"
  ######################################################################################################
  ## with containerMode.type=kubernetes, we will populate the template.spec with following pod spec
  ## template:
  ##   spec:
  ##     containers:
  ##     - name: runner
  ##       image: region-name-south2-docker.pkg.dev/google-project-id/artifact-registry-repository-name/docker-image-name:dev-image-tag
  ##       command: ["/home/runner/run.sh"]
  ##       env:
  ##         - name: ACTIONS_RUNNER_CONTAINER_HOOKS
  ##           value: /home/runner/k8s/index.js
  ##         - name: ACTIONS_RUNNER_POD_NAME
  ##           valueFrom:
  ##             fieldRef:
  ##               fieldPath: metadata.name
  ##         - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
  ##           value: "true"
  ##       volumeMounts:
  ##         - name: work
  ##           mountPath: /home/runner/_work
  ##     volumes:
  ##       - name: work
  ##         ephemeral:
  ##           volumeClaimTemplate:
  ##             spec:
  ##               accessModes: [ "ReadWriteOnce" ]
  ##               storageClassName: "local-path"
  ##               resources:
  ##                 requests:
  ##                   storage: 1Gi

## Optional controller service account that needs to have required Role and RoleBinding
## to operate this gha-runner-scale-set installation.
## The helm chart will try to find the controller deployment and its service account at installation time.
## In case the helm chart can't find the right service account, you can explicitly pass in the following value
## to help it finish RoleBinding with the right service account.
## Note: if your controller is installed to only watch a single namespace, you have to pass these values explicitly.
# controllerServiceAccount:
#   namespace: arc-system
#   name: test-arc-gha-runner-scale-set-controller
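Before running deploy.sh, it can be useful to render the chart locally with these values to catch YAML mistakes early. This pulls the chart from the registry, so it needs network access:

```shell
# Render the scale-set chart with our values without installing anything
helm template arc-runner-set-jdk17 \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set \
    -f values.yaml > /dev/null \
    && echo "values.yaml renders cleanly"
```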

7. Verifying the Deployment

After deploying the Helm chart, you can verify that the GitHub Actions self-hosted runners were created successfully by checking the pods in your cluster.
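Concretely, the checks look like this, with the namespaces used earlier:

```shell
# The listener pod for the scale set runs alongside the controller
kubectl get pods -n arc-systems

# Ephemeral runner pods appear here; with minRunners: 1, at least
# one idle runner should be in the Running state
kubectl get pods -n arc-runners
```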

In the final blog of this series, we will create a centralized action workflow for CI/CD that will execute on these runners. This workflow will help streamline our continuous integration and deployment processes across various projects and services.

Stay tuned for the final installment, where we will integrate all the steps into a cohesive CI/CD pipeline!

Setting Up GitHub Actions Self-Hosted Runners on Kubernetes
12-Sep-2024

In this blog, we'll walk through setting up GitHub Actions self-hosted runners in a Kubernetes cluster.  This setup ensures that our runners are ephemeral, scaling up and down based on demand. We'll also cover customizing Docker images for the runners and configuring necessary Kubernetes secrets.

The examples in this blog are on a GKE cluster on Google Cloud Platform. However, a lot of the steps would be similar on any hosted Kubernetes platform.

Prerequisites

Before we begin, ensure you have:

  • A Kubernetes cluster set up, and credentials configured etc
  • Helm installed for deploying the requiered charts.
  • Docker installed for building and pushing images.
  • A Docker/Container Registry to store our images, and credentials for the same

Install the Actions Runner Controller

First, we need to install the Actions Runner Controller in our Kubernetes cluster. This controller manages self-hosted runners for GitHub Actions. We use Helm to install it using the official Helm charts with the following command:

NAMESPACE="arc-systems"
helm install arc \
    --namespace "${NAMESPACE}" \
    --create-namespace \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller

Authenticate with Google Artifact Registry

Since we’re on GCP, we use Google Artifact Registry to host our images. To allow our Kubernetes cluster to authenticate with Artifact Registry and fetch our customized Docker image, we need to create a Kubernetes secret. This secret will store the credentials necessary for accessing the Artifact Registry.

Generalize the following command to create the secret:

kubectl create secret docker-registry artifact-registry-secret \
    --namespace arc-runners \
    --docker-server=<YOUR_ARTIFACT_REGISTRY_SERVER> \
    --docker-username=_json_key \
    --docker-password="$(cat <PATH_TO_YOUR_SERVICE_ACCOUNT_KEY_JSON>)" \
    --docker-email=<YOUR_EMAIL>

3. Github App and GitHub Actions Authentication

Next, we need to set up a GitHub app with credentials that allow us to make the required Github API calls in our Actions workflows. Follow the GitHub documentation on creating the app and using it to make authenticated API requests. We then get the GitHub token associated with the app. In order to manage it securely in our workflows, we store this token as a Kubernetes secret in our cluster to allow it to be consumed by the required pods.

kubectl create secret generic actions-runner-rw-secret \
    --from-literal=github_app_id='YOUR_GITHUB_APP_ID' \
    --from-literal=github_app_installation_id='YOUR_GITHUB_APP_INSTALLATION_ID' \
    --from-literal=github_app_private_key='YOUR_GITHUB_APP_PRIVATE_KEY'

4. Create a Custom Docker Image

Since our Actions Runners are going to run in pods on our cluster, we now create a custom Docker image for the GitHub Actions runner, based on the official Actions Runner image and including necessary dependencies. Here's the Dockerfile we use:

# Source: https://github.com/dotnet/dotnet-docker
FROM mcr.microsoft.com/dotnet/runtime-deps:6.0-jammy as build

ARG TARGETOS=linux
ARG TARGETARCH=amd64
ARG RUNNER_VERSION=2.317.0
ARG RUNNER_CONTAINER_HOOKS_VERSION=0.6.0
ARG DOCKER_VERSION=25.0.4
ARG BUILDX_VERSION=0.13.1

RUN apt update -y && apt install curl unzip -y

WORKDIR /actions-runner
RUN export RUNNER_ARCH=${TARGETARCH} \
    && if [ "$RUNNER_ARCH" = "amd64" ]; then export RUNNER_ARCH=x64 ; fi \
    && curl -f -L -o runner.tar.gz https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-${TARGETOS}-${RUNNER_ARCH}-${RUNNER_VERSION}.tar.gz \
    && tar xzf ./runner.tar.gz \
    && rm runner.tar.gz

RUN curl -f -L -o runner-container-hooks.zip https://github.com/actions/runner-container-hooks/releases/download/v${RUNNER_CONTAINER_HOOKS_VERSION}/actions-runner-hooks-k8s-${RUNNER_CONTAINER_HOOKS_VERSION}.zip \
    && unzip ./runner-container-hooks.zip -d ./k8s \
    && rm runner-container-hooks.zip

RUN export RUNNER_ARCH=${TARGETARCH} \
    && if [ "$RUNNER_ARCH" = "amd64" ]; then export DOCKER_ARCH=x86_64 ; fi \
    && if [ "$RUNNER_ARCH" = "arm64" ]; then export DOCKER_ARCH=aarch64 ; fi \
    && curl -fLo docker.tgz https://download.docker.com/${TARGETOS}/static/stable/${DOCKER_ARCH}/docker-${DOCKER_VERSION}.tgz \
    && tar zxvf docker.tgz \
    && rm -rf docker.tgz \
    && mkdir -p /usr/local/lib/docker/cli-plugins \
    && curl -fLo /usr/local/lib/docker/cli-plugins/docker-buildx \
        "https://github.com/docker/buildx/releases/download/v${BUILDX_VERSION}/buildx-v${BUILDX_VERSION}.linux-${TARGETARCH}" \
    && chmod +x /usr/local/lib/docker/cli-plugins/docker-buildx

FROM mcr.microsoft.com/dotnet/runtime-deps:6.0-jammy

ENV DEBIAN_FRONTEND=noninteractive
ENV RUNNER_MANUALLY_TRAP_SIG=1
ENV ACTIONS_RUNNER_PRINT_LOG_TO_STDOUT=1
ENV ImageOS=ubuntu22

# 'gpg-agent' and 'software-properties-common' are needed for the 'add-apt-repository' command that follows
RUN apt update -y \
    && apt install -y --no-install-recommends sudo lsb-release gpg-agent software-properties-common \
    && rm -rf /var/lib/apt/lists/*

# Configure git-core/ppa based on guidance here:  https://git-scm.com/download/linux
RUN add-apt-repository ppa:git-core/ppa \
    && apt update -y

RUN adduser --disabled-password --gecos "" --uid 1001 runner \
    && groupadd docker --gid 123 \
    && usermod -aG sudo runner \
    && usermod -aG docker runner \
    && echo "%sudo   ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers \
    && echo "Defaults env_keep += \"DEBIAN_FRONTEND\"" >> /etc/sudoers

WORKDIR /home/runner

RUN apt-get update || true && apt-get install -y openjdk-17-jdk && apt-get clean

COPY --chown=runner:docker --from=build /actions-runner .
COPY --from=build /usr/local/lib/docker/cli-plugins/docker-buildx /usr/local/lib/docker/cli-plugins/docker-buildx

RUN install -o root -g root -m 755 docker/* /usr/bin/ && rm -rf docker

USER runner

5. Deploy the Runner

We now create a deploy.sh script to build and push the Docker image, update the Helm chart values, and deploy the custom runner. For detailed information and options on deploying the runners, take a look at the official GitHub documentation.

Here's the script we will use:

#!/bin/bash

# Variables
INSTALLATION_NAME="arc-runner-set-jdk17"
NAMESPACE="arc-runners"
VALUES_FILE="values.yaml"
IMAGE_REGISTRY="region-name-south2-docker.pkg.dev/google-project-id/artifact-registry-repository-name/docker-image-name"
GIT_COMMIT_HASH=$(git rev-parse --short HEAD)
NEW_IMAGE_TAG="dev-${GIT_COMMIT_HASH}"
FULL_IMAGE="${IMAGE_REGISTRY}:${NEW_IMAGE_TAG}"

# Check if the current image tag matches the expected tag format
CURRENT_IMAGE_TAG=$(grep 'image: region-name-south2-docker.pkg.dev/google-project-id/artifact-registry-repository-name/docker-image-name' ${VALUES_FILE} | awk -F '/' '{print $NF}')
if [ "${CURRENT_IMAGE_TAG}" != "${NEW_IMAGE_TAG}" ]; then
    echo "Detected change in Git commit hash. Building new Docker image..."
    
    # Build and push Docker image
    echo "Building Docker image with tag ${FULL_IMAGE}..."
    docker build -t ${FULL_IMAGE} .

    echo "Pushing Docker image ${FULL_IMAGE}..."
    docker push ${FULL_IMAGE}

    # Update the image tag in the values.yaml file for the 'runner' container
    echo "Updating 'runner' container image tag in ${VALUES_FILE} to ${FULL_IMAGE}..."
    sed -i.bak "s|image: region-name-south2-docker.pkg.dev/google-project-id/artifact-registry-repository-name/docker-image-name:.*|image: ${FULL_IMAGE}|" ${VALUES_FILE}
else
    echo "No change detected in Git commit hash. Skipping Docker image build and push."
fi

# Deploy the Helm chart
echo "Deploying Helm chart with new image tag..."
helm upgrade --install "${INSTALLATION_NAME}" \
    --namespace "${NAMESPACE}" \
    --create-namespace \
    -f ${VALUES_FILE} \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set

# Check if Helm deployment was successful
if [ $? -eq 0 ]; then
  echo "Helm chart deployed successfully with image tag: ${NEW_IMAGE_TAG}"
else
  echo "Failed to deploy Helm chart. Check the error messages above."
  exit 1
fi

# Cleanup backup values.yaml file created by sed
rm -f ${VALUES_FILE}.bak

6. Configuration Files

Here are the key configuration files used:

values.yaml

## githubConfigUrl is the GitHub url for where you want to configure runners
## ex: https://github.com/myorg/myrepo or https://github.com/myorg
githubConfigUrl: "https://github.com/organization-name"

## githubConfigSecret is the k8s secrets to use when auth with GitHub API.
## You can choose to use GitHub App or a PAT token
#githubConfigSecret:
  ### GitHub Apps Configuration
  ## NOTE: IDs MUST be strings, use quotes
  #github_app_id: ""
  #github_app_installation_id: ""
  #github_app_private_key: |

  ### GitHub PAT Configuration
  #github_token: ""
## If you have a pre-define Kubernetes secret in the same namespace the gha-runner-scale-set is going to deploy,
## you can also reference it via `githubConfigSecret: pre-defined-secret`.
## You need to make sure your predefined secret has all the required secret data set properly.
##   For a pre-defined secret using GitHub PAT, the secret needs to be created like this:
##   > kubectl create secret generic pre-defined-secret --namespace=my_namespace --from-literal=github_token='ghp_your_pat'
##   For a pre-defined secret using GitHub App, the secret needs to be created like this:
##   > kubectl create secret generic pre-defined-secret --namespace=my_namespace --from-literal=github_app_id=123456 --from-literal=github_app_installation_id=654321 --from-literal=github_app_private_key='-----BEGIN CERTIFICATE-----*******'
githubConfigSecret: actions-runner-rw-secret

## proxy can be used to define proxy settings that will be used by the
## controller, the listener and the runner of this scale set.
#
# proxy:
#   http:
#     url: http://proxy.com:1234
#     credentialSecretRef: proxy-auth # a secret with `username` and `password` keys
#   https:
#     url: http://proxy.com:1234
#     credentialSecretRef: proxy-auth # a secret with `username` and `password` keys
#   noProxy:
#     - example.com
#     - example.org

## maxRunners is the max number of runners the autoscaling runner set will scale up to.
maxRunners: 5

## minRunners is the min number of idle runners. The target number of runners created will be
## calculated as a sum of minRunners and the number of jobs assigned to the scale set.
minRunners: 1

# runnerGroup: "default"

## name of the runner scale set to create.  Defaults to the helm release name
# runnerScaleSetName: ""

## A self-signed CA certificate for communication with the GitHub server can be
## provided using a config map key selector. If `runnerMountPath` is set, for
## each runner pod ARC will:
## - create a `github-server-tls-cert` volume containing the certificate
##   specified in `certificateFrom`
## - mount that volume on path `runnerMountPath`/{certificate name}
## - set NODE_EXTRA_CA_CERTS environment variable to that same path
## - set RUNNER_UPDATE_CA_CERTS environment variable to "1" (as of version
##   2.303.0 this will instruct the runner to reload certificates on the host)
##
## If any of the above had already been set by the user in the runner pod
## template, ARC will observe those and not overwrite them.
## Example configuration:
#
# githubServerTLS:
#   certificateFrom:
#     configMapKeyRef:
#       name: config-map-name
#       key: ca.crt
#   runnerMountPath: /usr/local/share/ca-certificates/

## Container mode is an object that provides out-of-box configuration
## for dind and kubernetes mode. Template will be modified as documented under the
## template object.
##
## If any customization is required for dind or kubernetes mode, containerMode should remain
## empty, and configuration should be applied to the template.
#containerMode:
  #type: "dind"  ## type can be set to dind or kubernetes
#   ## the following is required when containerMode.type=kubernetes
#   kubernetesModeWorkVolumeClaim:
#     accessModes: ["ReadWriteOnce"]
#     # For local testing, use https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/quickstart.md to provide dynamic provision volume with storageClassName: openebs-hostpath
#     storageClassName: "dynamic-blob-storage"
#     resources:
#       requests:
#         storage: 1Gi
#   kubernetesModeServiceAccount:
#     annotations:

## listenerTemplate is the PodSpec for each listener Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
# listenerTemplate:
#   spec:
#     containers:
#     # Use this section to append additional configuration to the listener container.
#     # If you change the name of the container, the configuration will not be applied to the listener,
#     # and it will be treated as a side-car container.
#     - name: listener
#       securityContext:
#         runAsUser: 1000
#     # Use this section to add the configuration of a side-car container.
#     # Comment it out or remove it if you don't need it.
#     # Spec for this container will be applied as is without any modifications.
#     - name: side-car
#       image: asia-south2-docker.pkg.dev/possible-point-244308/gale/github-runner-jdk17:dev-92df8c9

## template is the PodSpec for each runner Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
template:
  # template.spec will be modified if you change the container mode
  # with containerMode.type=dind, we will populate the template.spec with following pod spec
  # template:
  spec:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
    imagePullSecrets:
      - name: artifact-registry-secret
    terminationGracePeriodSeconds: 60
    containers:
    - name: runner
      image: region-name-south2-docker.pkg.dev/google-project-id/artifact-registry-repository-name/docker-image-name:dev-image-tag
      command: ["/home/runner/run.sh"]
      resources:
          requests:
            memory: "2Gi"
            cpu: "2"
          limits:
            memory: "2Gi"
            cpu: "2"
  ######################################################################################################
  ## with containerMode.type=kubernetes, we will populate the template.spec with following pod spec
  ## template:
  ##   spec:
  ##     containers:
  ##     - name: runner
  ##       image: asia-south2-docker.pkg.dev/possible-point-244308/gale/github-runner-jdk17:dev-92df8c9
  ##       command: ["/home/runner/run.sh"]
  ##       env:
  ##         - name: ACTIONS_RUNNER_CONTAINER_HOOKS
  ##           value: /home/runner/k8s/index.js
  ##         - name: ACTIONS_RUNNER_POD_NAME
  ##           valueFrom:
  ##             fieldRef:
  ##               fieldPath: metadata.name
  ##         - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
  ##           value: "true"
  ##       volumeMounts:
  ##         - name: work
  ##           mountPath: /home/runner/_work
  ##     volumes:
  ##       - name: work
  ##         ephemeral:
  ##           volumeClaimTemplate:
  ##             spec:
  ##               accessModes: [ "ReadWriteOnce" ]
  ##               storageClassName: "local-path"
  ##               resources:
  ##                 requests:
  ##                   storage: 1Gi

## Optional controller service account that needs to have required Role and RoleBinding
## to operate this gha-runner-scale-set installation.
## The helm chart will try to find the controller deployment and its service account at installation time.
## In case the helm chart can't find the right service account, you can explicitly pass in the following value
## to help it finish RoleBinding with the right service account.
## Note: if your controller is installed to only watch a single namespace, you have to pass these values explicitly.
# controllerServiceAccount:
#   namespace: arc-system
#   name: test-arc-gha-runner-scale-set-controller

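Before running the deploy script, it can be useful to preview the manifests the chart will render with these overrides. Here is a quick sanity check, assuming Helm 3.8+ (for OCI registry support) and that the overrides above are saved as values.yaml; the release name matches the INSTALLATION_NAME variable used in deploy.sh:

```shell
helm template arc-runner-set-jdk17 \
    --namespace arc-runners \
    -f values.yaml \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
```

If the rendered output references your custom runner image and the artifact-registry-secret pull secret, the values file is wired up correctly.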
7. Verifying the Deployment

After deploying the Helm chart, you can verify that the GitHub Actions self-hosted runners were created successfully by checking the Kubernetes pods in your cluster.
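For example, using the namespaces set up earlier in this series:

```shell
# Controller and listener pods
kubectl get pods -n arc-systems

# Ephemeral runner pods; with minRunners: 1 you should see one idle runner
kubectl get pods -n arc-runners
```

Runner pods are created on demand as jobs are queued and torn down when they finish, so the list in arc-runners will change as your workflows run.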

In the final blog of this series, we will create a centralized GitHub Actions workflow for CI/CD that executes on these runners, streamlining our continuous integration and deployment processes across projects and services.

Stay tuned for the final installment, where we will integrate all the steps into a cohesive CI/CD pipeline!

Written by: Rajan Suri
September 13, 2024 · 15 min read


About Greyamp

Greyamp is a boutique Management Consulting firm that works with large enterprises to help them on their Digital Transformation journeys, going across the organisation, covering process, people, culture, and technology. Subscribe here to get our latest digital transformation insights.