Self-hosted GitHub Actions runners using Actions Runner Controller on k3s.
Repository: https://github.com/The1Studio/arc-github-runners
This repository contains the configuration for managing self-hosted GitHub Actions runners across multiple repositories and organizations using Kubernetes.
- Auto-scaling runners - Automatically scale based on workload
- Multi-organization support - Separate runner pools
- Kubernetes-based - Runs on lightweight k3s
- Resource efficient - Minimal overhead (~150MB for k3s)
All runners use the custom image `the1studio/actions-runner:https-apt` with HTTPS APT sources pre-configured.
- Purpose: Default runners for all the1studio repositories
- Replicas: Min 3, Max 10 (auto-scaling)
- Labels: `self-hosted,linux,x64,arc,the1studio,org`
- Resources: 2 CPU / 4GB RAM per runner
- Purpose: Build and deploy personal website
- Replicas: Min 1, Max 5 (auto-scaling)
- Labels: `self-hosted,linux,x64,arc,personal`
- Resources: 1 CPU / 2GB RAM per runner
- Purpose: Build Android APK files
- Replicas: Min 1, Max 3 (auto-scaling, conservative)
- Labels: `self-hosted,linux,x64,arc,android,apk-builder`
- Resources: 8 CPU / 16GB RAM per runner (high resource usage)
- Purpose: Deploy Unity builds to app stores/hosting
- Replicas: Min 1, Max 5 (auto-scaling)
- Labels: `self-hosted,linux,x64,deploy`
- Resources: 4 CPU / 8GB RAM per runner
- Purpose: Build Unity Editor Docker images and sync to Harbor registry
- Replicas: Min 1, Max 2 (auto-scaling, very conservative)
- Labels: `self-hosted,linux,x64,deploy,harbor-access,harbor-host`
- Resources: 8 CPU / 24GB RAM per runner (extreme resource usage)
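The pools above are defined as ARC `RunnerDeployment` resources in k8s/runner-deployments.yaml. As a sketch of how one pool's labels and resources map into the CRD (field values taken from the org pool above; the actual manifest may differ):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: the1studio-org-runners
  namespace: arc-runners
spec:
  template:
    spec:
      # Organization-scoped runner pool
      organization: the1studio
      # Custom image with HTTPS APT sources pre-configured
      image: the1studio/actions-runner:https-apt
      # Extra labels; self-hosted/linux/x64 are added automatically
      labels:
        - arc
        - the1studio
        - org
      resources:
        requests:
          cpu: "2"
          memory: 4Gi
        limits:
          cpu: "2"
          memory: 4Gi
```

Note that the Min/Max replica counts live in the matching `HorizontalRunnerAutoscaler`, not in the `RunnerDeployment` itself.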
This issue is now fixed at the runner image level. The custom image `the1studio/actions-runner:https-apt` has pre-configured HTTPS APT sources.
You no longer need to add HTTP→HTTPS conversion in workflows!
See docker/README.md for details about the custom image.
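For orientation, the image can be thought of as a thin layer over the stock ARC runner image that rewrites APT sources to HTTPS at build time. This is only a hypothetical sketch (the base image and the `sed` approach are assumptions); the authoritative version is docker/Dockerfile:

```dockerfile
# Hypothetical sketch -- see docker/Dockerfile for the real image.
FROM summerwind/actions-runner:latest

USER root
# Rewrite all APT source entries from http:// to https://
RUN sed -i 's|http://|https://|g' /etc/apt/sources.list \
    && if ls /etc/apt/sources.list.d/* >/dev/null 2>&1; then \
         sed -i 's|http://|https://|g' /etc/apt/sources.list.d/*; \
       fi
USER runner
```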
CRITICAL: Organization-level runners cannot be used by public repositories by default.
If your workflow stays in "Queued" state forever, you need to enable public repository access:
```bash
# Enable public repositories for organization runners
gh api -X PATCH orgs/the1studio/actions/runner-groups/1 \
  -F allows_public_repositories=true

# Verify the change
gh api orgs/the1studio/actions/runner-groups/1 --jq '.allows_public_repositories'
# Should return: true
```

Alternative: Use repository-level runners instead of organization-level runners for public repositories. See examples/additional-runners.yaml.
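A repository-level runner is scoped with `repository:` instead of `organization:` in the `RunnerDeployment` spec. A minimal sketch (the name and repository below are hypothetical placeholders; see examples/additional-runners.yaml for the real template):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: my-public-repo-runners      # hypothetical name
  namespace: arc-runners
spec:
  replicas: 1
  template:
    spec:
      # Repository-scoped instead of organization-scoped,
      # so public-repository restrictions on runner groups do not apply
      repository: the1studio/some-public-repo   # hypothetical repository
      image: the1studio/actions-runner:https-apt
      labels:
        - arc
```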
For detailed troubleshooting, see docs/TROUBLESHOOTING.md.
- Linux system (tested on Arch Linux)
- `curl` and `bash`
- GitHub Personal Access Token with `repo` or `admin:org` scope
```bash
# 1. Install k3s
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644

# 2. Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# 3. Install cert-manager
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml

# 4. Wait for cert-manager
kubectl wait --for=condition=ready pod -l app=cert-manager -n cert-manager --timeout=120s

# 5. Install ARC controller
kubectl create namespace arc-systems
kubectl create namespace arc-runners
helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller
helm repo update

# Create GitHub token secret
kubectl create secret generic controller-manager \
  --namespace arc-systems \
  --from-literal=github_token="YOUR_GITHUB_PAT"

helm install arc \
  --namespace arc-systems \
  actions-runner-controller/actions-runner-controller

# 6. Deploy runners
kubectl apply -f k8s/runner-deployments.yaml
kubectl apply -f k8s/autoscalers.yaml
```

Getting Started:
- Project Overview & PDR - Project goals, requirements, and vision
- System Architecture - Detailed architecture and component design
- Code Standards - Coding conventions and best practices
- Codebase Summary - Repository structure and key components

Operations:
- Usage Guide - How to use runners in workflows
- Management - Management commands and operations
- Troubleshooting - Common issues and solutions
- Backup & Recovery - Disaster recovery procedures

Technical:
- Custom Docker Image - HTTPS APT fix details
- Example Runners - Templates for new runners
```
docs/
├── project-overview-pdr.md   # Project goals & requirements
├── system-architecture.md    # Architecture & components
├── code-standards.md         # Standards & conventions
├── codebase-summary.md       # Repository overview
├── USAGE.md                  # Workflow integration
├── MANAGEMENT.md             # Operations & commands
├── TROUBLESHOOTING.md        # Common issues
└── BACKUP.md                 # Disaster recovery
```
```
.
├── README.md
├── k8s/
│   ├── runner-deployments.yaml      # Runner deployment configurations
│   ├── autoscalers.yaml             # Auto-scaling rules
│   ├── network-policy.yaml          # Network security policies
│   ├── pod-disruption-budget.yaml   # High availability settings
│   └── examples/
│       └── additional-runners.yaml  # Template for adding more runners
├── docker/
│   ├── Dockerfile                   # Custom runner image
│   └── README.md                    # Image documentation
├── workflows/
│   └── test-arc.yml                 # Sample workflow to test runners
└── docs/
    ├── project-overview-pdr.md      # Project overview & PDR
    ├── system-architecture.md       # System architecture
    ├── code-standards.md            # Code standards
    ├── codebase-summary.md          # Codebase summary
    ├── USAGE.md                     # How to use runners
    ├── MANAGEMENT.md                # Management commands
    ├── TROUBLESHOOTING.md           # Common issues
    └── BACKUP.md                    # Backup & recovery
```
```yaml
jobs:
  build:
    runs-on: [self-hosted, linux, x64, arc, the1studio, org]
    steps:
      - uses: actions/checkout@v4
      - run: echo "Running on the1studio runner"
```

```yaml
jobs:
  deploy:
    runs-on: [self-hosted, linux, x64, arc, personal]
    steps:
      - uses: actions/checkout@v4
      - run: echo "Running on personal runner"
```

```yaml
jobs:
  build-apk:
    runs-on: [self-hosted, linux, x64, arc, android, apk-builder]
    steps:
      - uses: actions/checkout@v4
      - run: echo "Building Android APK"
```

```yaml
jobs:
  deploy-unity:
    runs-on: [self-hosted, linux, x64, deploy]
    steps:
      - uses: actions/checkout@v4
      - run: echo "Deploying Unity build"
```

```yaml
jobs:
  build-unity-image:
    runs-on: [self-hosted, linux, x64, deploy, harbor-access, harbor-host]
    steps:
      - uses: actions/checkout@v4
      - run: echo "Building Unity Editor image"
```

- `self-hosted` - Indicates a self-hosted runner
- `linux` or `Linux` - Operating system
- `x64` or `X64` - Architecture
- Additional specific labels for targeting specific runner pools
Good Examples:

```yaml
# ✅ Matches the1studio-org-runners
runs-on: [self-hosted, linux, x64, arc, the1studio, org]

# ✅ Matches android-apk-builder
runs-on: [self-hosted, linux, x64, android]

# ✅ Matches any Linux runner
runs-on: [self-hosted, linux, x64]
```

Bad Examples:

```yaml
# ❌ Missing linux and x64 - may not match properly
runs-on: [self-hosted, arc, the1studio, org]

# ❌ Too generic - will match any Linux runner
runs-on: [self-hosted]
```

```bash
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# View all runners
kubectl get pods -n arc-runners

# Check auto-scalers
kubectl get hra -n arc-runners

# View logs
kubectl logs -n arc-runners <pod-name> -c runner
```

```bash
# Scale organization runners to 5
kubectl scale runnerdeployment the1studio-org-runners \
  -n arc-runners --replicas=5

# Scale personal runners to 3
kubectl scale runnerdeployment tuha263-personal-runners \
  -n arc-runners --replicas=3
```

All runners use the PercentageRunnersBusy metric for auto-scaling decisions.
- Replicas: Min 3, Max 10
- Scale up: 75% busy → 1.5x runners
- Scale down: 25% busy → 0.5x runners
- Cooldown: 5 minutes after scale-up

- Replicas: Min 1, Max 5
- Scale up: 75% busy → 1.5x runners
- Scale down: 25% busy → 0.5x runners
- Cooldown: 3 minutes after scale-up

- Replicas: Min 1, Max 3 (limited due to high resource usage)
- Scale up: 80% busy → 1.5x runners (conservative)
- Scale down: 20% busy → 0.5x runners
- Cooldown: 10 minutes after scale-up (prevent thrashing)

- Replicas: Min 1, Max 5
- Scale up: 75% busy → 1.5x runners
- Scale down: 25% busy → 0.5x runners
- Cooldown: 5 minutes after scale-up

- Replicas: Min 1, Max 2 (very limited due to extreme resource usage)
- Scale up: 90% busy → 2x runners (very conservative)
- Scale down: 10% busy → 0.5x runners
- Cooldown: 15 minutes after scale-up (prevent resource exhaustion)
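These thresholds, factors, and cooldowns are expressed as `HorizontalRunnerAutoscaler` resources in k8s/autoscalers.yaml. A sketch using the first (Min 3, Max 10) spec above — the metadata name is hypothetical and the real manifest may differ:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: the1studio-org-runners-autoscaler   # hypothetical name
  namespace: arc-runners
spec:
  scaleTargetRef:
    name: the1studio-org-runners
  minReplicas: 3
  maxReplicas: 10
  # Cooldown: 5 minutes after scale-up
  scaleDownDelaySecondsAfterScaleOut: 300
  metrics:
    - type: PercentageRunnersBusy
      scaleUpThreshold: "0.75"    # 75% busy triggers scale-up
      scaleUpFactor: "1.5"        # 1.5x runners
      scaleDownThreshold: "0.25"  # 25% busy triggers scale-down
      scaleDownFactor: "0.5"      # 0.5x runners
```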
See docs/TROUBLESHOOTING.md for common issues.
Quick checks:
```bash
# Check if ARC controller is running
kubectl get pods -n arc-systems

# Check if runners are registered in GitHub
gh api orgs/the1studio/actions/runners

# View ARC controller logs
kubectl logs -n arc-systems -l app.kubernetes.io/name=actions-runner-controller
```

This is a personal infrastructure repository. Changes should be tested in a development environment before applying to production.
MIT