Source code for Open5GS implementation and vertical scaling application
This repository contains the complete implementation and configuration used for the engineering thesis titled:
"Vertical scaling of network functions in Open5GS Core network platform"
The K3s cluster is deployed on the 'k3s1' machine, while the 'manager' virtual machine drives Open5GS and UERANSIM. A Python-based vertical scaler dynamically adjusts the CPU limits of the UPF pod based on real-time AMF metrics exposed via Prometheus.
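In essence, the scaler runs a control loop like the one sketched below. This is a minimal shell illustration only; the metric name, pod label, container name, and threshold values are assumptions for the example, and the actual scaler is the Python/Flask application described later.
# Illustrative sketch of the scaling loop; all names below are placeholders
PROM=http://localhost:9090
QUERY='open5gs_amf_session'                                    # assumed AMF session-count metric
while true; do
  SESSIONS=$(curl -s "$PROM/api/v1/query?query=$QUERY" | jq -r '.data.result[0].value[1] // "0"')
  SESSIONS=${SESSIONS%.*}                                      # integer part for the comparison
  if [ "$SESSIONS" -ge 8 ]; then CPU=200m; else CPU=100m; fi   # thresholds come from the REST API
  POD=$(kubectl get pod -l app=open5gs-upf -o name | head -n1) # assumed UPF pod label
  kubectl patch "$POD" --patch "{\"spec\":{\"containers\":[{\"name\":\"open5gs-upf\",\"resources\":{\"limits\":{\"cpu\":\"$CPU\"}}}]}}"
  sleep 15
done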
Project Goals
- Deploy a 5G SA Core Network in a cloud-native (K3s) environment
- Simulate RAN elements using UERANSIM (gNodeB + UE)
- Enable Prometheus-based traffic monitoring
- Implement and validate vertical scaling logic (CPU) for UPF
- Analyze why horizontal scaling is not feasible in Open5GS (stateful architecture)
Stack and Architecture
- K3s – Lightweight Kubernetes cluster
- Helm – Deployment manager for Kubernetes apps
- Open5GS – Open-source 5G Core platform
- UERANSIM – UE + gNodeB simulator
- Prometheus – Metrics collection
- Flask + Python – Custom vertical scaler
- PVC + ConfigMaps – Persistent thresholds + mounted script logic
Deployment Steps
- Install K3s on the master node
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.32.3+k3s1" INSTALL_K3S_EXEC="server \
--write-kubeconfig-mode 644 \
--disable-cloud-controller \
--kube-apiserver-arg=feature-gates=InPlacePodVerticalScaling=true \
--kube-controller-manager-arg=feature-gates=InPlacePodVerticalScaling=true \
--kube-scheduler-arg=feature-gates=InPlacePodVerticalScaling=true \
--kubelet-arg=feature-gates=InPlacePodVerticalScaling=true \
--kube-proxy-arg=feature-gates=InPlacePodVerticalScaling=true" sh -
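Once the installer finishes, it is worth confirming the node is up before continuing (K3s bundles its own kubectl on the server):
sudo k3s kubectl get nodes
sudo systemctl status k3s --no-pager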
- Copy kubeconfig to the management machine
scp /etc/rancher/k3s/k3s.yaml ubuntu@<manager IP>:/home/ubuntu/.kube/config
For example:
scp /etc/rancher/k3s/k3s.yaml ubuntu@192.168.11.6:/home/ubuntu/.kube/config
Then update the server: field in the copied kubeconfig to point at the K3s master's IP.
On manager:
- Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
- Install Prometheus
Create the namespace and CRDs, then wait for them to become available before creating the remaining resources. Note that due to the size of some CRDs we use kubectl's server-side apply feature, which is generally available since Kubernetes 1.22. On earlier Kubernetes versions this feature may not be available, and you would need to use kubectl create instead.
In the kube-prometheus folder:
kubectl apply --server-side -f manifests/setup
kubectl wait \
--for condition=Established \
--all CustomResourceDefinition \
--namespace=monitoring
kubectl apply -f manifests/
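Before moving on, it is worth checking that the monitoring stack has come up (pod names vary with the kube-prometheus version):
kubectl get pods -n monitoring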
On k3s1:
Add the InPlacePodVerticalScaling feature-gate flags to the K3s service configuration (in the ExecStart section of the unit file):
ubuntu@k3s1:~$ sudo nano /etc/systemd/system/k3s.service
'--kube-apiserver-arg=feature-gates=InPlacePodVerticalScaling=true' \
'--kube-controller-manager-arg=feature-gates=InPlacePodVerticalScaling=true' \
'--kube-scheduler-arg=feature-gates=InPlacePodVerticalScaling=true' \
'--kubelet-arg=feature-gates=InPlacePodVerticalScaling=true' \
'--kube-proxy-arg=feature-gates=InPlacePodVerticalScaling=true' \
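After saving the unit file, reload systemd and restart K3s so the new flags take effect:
sudo systemctl daemon-reload
sudo systemctl restart k3s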
Port-forward Prometheus to the local machine:
kubectl port-forward svc/prometheus-server -n monitoring 9090:80
- Install Helm
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
- Get the Open5GS and UERANSIM Helm charts and install them
# Clone the repository
git clone https://github.com/igamiron/open5gs.git
cd open5gs
helm install open5gs open5gs --version 2.2.5 --values 5gSA-values-enable-metrics.yaml
helm install ueransim-gnb ueransim-gnb --version 0.2.6 --values gnb-ues-values.yaml
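Both releases should settle into Running pods; a quick check (pod names depend on the charts):
kubectl get pods -w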
- Build Docker image locally
docker build -t open5gs-upf-scaler:local .
k3s ctr images import <(docker save open5gs-upf-scaler:local)
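To confirm the import succeeded, list the containerd images on k3s1:
sudo k3s ctr images ls | grep open5gs-upf-scaler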
- Apply 'upf-scaler.yaml'
kubectl apply -f upf-scaler.yaml
Threshold Management API
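The examples below assume the scaler's API is reachable on localhost:8080. If its Service is only exposed inside the cluster, a port-forward along these lines (the service name here is an assumption) makes it reachable:
kubectl port-forward svc/open5gs-upf-scaler 8080:8080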
The scaler exposes REST endpoints:
View current thresholds
curl http://localhost:8080/thresholds
Update thresholds
curl -X POST http://localhost:8080/thresholds \
-H "Content-Type: application/json" \
-d '{"2": {"amf": 8, "cpu": "200m"}}'Restart scaler pod and verify persistence
kubectl delete pod -l app=open5gs-upf-scaler
curl http://localhost:8080/thresholds
Expected result: the returned JSON matches the previously stored thresholds.
Testing
- Session metrics (open5gs_upf_sessions_total{status="active"})
- Threshold API behavior
- Vertical scaling of UPF via kubectl patch or the auto-scaler (see the sketch after this list)
- Persistence of CPU thresholds via PVC mount
- Emulated traffic via added UE deployments
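A manual in-place resize, as exercised by these tests, looks roughly like the following; the pod and container names are assumptions, so check yours with kubectl get pods first:
kubectl patch pod open5gs-upf-0 --patch '{"spec":{"containers":[{"name":"open5gs-upf","resources":{"limits":{"cpu":"300m"}}}]}}'
# On newer Kubernetes releases the pod 'resize' subresource may be required:
# kubectl patch pod open5gs-upf-0 --subresource resize --patch '{...}'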
Appendix A: Source Code
GitHub Repository:
https://github.com/igamiron/5g-project.git
Includes:
- Dockerfiles for scaler and Open5GS
- Kubernetes YAML manifests
- Prometheus setup
- Sample test scripts and logs
License
MIT License – based on open-source tools (Open5GS, UERANSIM, Prometheus).