1 change: 1 addition & 0 deletions REUSE.toml
@@ -24,6 +24,7 @@ path = [
"README.md",
"internal/agent/hack/**",
"internal/agent/proto/**",
"test/emulation/**"
]
precedence = "aggregate"
SPDX-FileCopyrightText = "2025 SAP SE or an SAP affiliate company and IronCore contributors"
115 changes: 115 additions & 0 deletions test/emulation/clos/README.md
@@ -0,0 +1,115 @@
# SONiC Lab Topology (CLOS)

A containerized network lab environment running SONiC switches in a CLOS topology, orchestrated on Kubernetes using Clabernetes.

## Overview

This project sets up a complete network topology with:
- **2 Spine switches** (SONiC VMs)
- **2 Leaf switches** (SONiC VMs)
- **2 Client nodes** (Linux multitool containers)

The topology implements a standard data center CLOS architecture.

### Topology Diagram

![CLOS Topology](clos_topology.svg)

## Prerequisites

The following tools must be installed on the host:

- **kind** - Kubernetes in Docker (for local Kubernetes cluster)
- **kubectl** - Kubernetes command-line tool
- **sshpass** - SSH password automation utility

## Project Structure

```
clos/
├── clos.clab.yml - Network topology definition (YAML)
├── deploy.sh - Deployment automation script
├── init_setup.sh - Node initialization and agent setup
├── destroy.sh - Infrastructure cleanup script
└── README.md - This file
```

## Dependencies


### Software Packages
```
docker - Container runtime
kubernetes - Container orchestration
helm - Package manager
kubectl - Kubernetes CLI
sshpass - SSH password utility
jq - JSON processor
```

### Kubernetes Services
- Clabernetes - Deployed via Helm in `c9s` namespace
- kube-vip - RBAC and manifests applied to cluster
- kube-vip Cloud Controller - Deployed in `kube-vip` namespace


## Configuration Details

### IP Management
- kube-vip External IP Range: `172.18.1.10 - 172.18.1.250`
- Services exposed via kube-vip ARP mode on eth0
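As a quick sanity check on the pool configuration, the number of allocatable addresses in the range above can be computed in the shell (a sketch; the numbers are taken from the range shown):

```shell
# Size of the kube-vip pool 172.18.1.10 - 172.18.1.250 (last octets)
start=10
end=250
echo "Pool size: $((end - start + 1)) addresses"   # prints: Pool size: 241 addresses
```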

## Setup Steps

### 1. Prerequisites
Ensure all dependencies are installed and the Kubernetes cluster is ready.
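A minimal sketch for verifying that the required tools are on `PATH` (tool names taken from the Prerequisites section above; adjust the list as needed):

```shell
# Report which prerequisite tools are available on this host
for tool in kind kubectl sshpass; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: missing"
  fi
done
```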

### 2. Initialize Kind Cluster
Run the initialization setup from the root-level Makefile:
```bash
make setup-test-e2e
```

### 3. Deploy the Lab Environment
Deploy the full topology to Kubernetes:
```bash
./deploy.sh
```

**What it does**:
- Installs Clabernetes via Helm in `c9s` namespace
- Applies kube-vip RBAC policies
- Deploys kube-vip cloud controller
- Creates kube-vip configmap with IP range
- Deploys kube-vip ARP daemonset
- Converts containerlab topology to Kubernetes resources
- Applies topology configuration to cluster
- Waits for services to be ready (180 seconds)
- Configures DNS, then pulls and starts the SONiC agent on port 57400 on each SONiC node via SSH
- Displays external IPs for all services
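The fixed 180-second sleep in `deploy.sh` could in principle be replaced by polling for LoadBalancer IPs. A minimal sketch; `get_ip` is a hypothetical stub standing in for the real `kubectl get svc ... -o jsonpath` lookup:

```shell
#!/bin/bash
# Sketch: poll for LoadBalancer IPs instead of a fixed 180-second sleep.
# get_ip is a hypothetical stub standing in for:
#   kubectl get -n c9s-clos svc "$1" -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
get_ip() { echo "172.18.1.15"; }   # stub so the sketch runs standalone

for svc in sonic-spine1 sonic-spine2 sonic-leaf1 sonic-leaf2; do
  ip=""
  for _ in $(seq 1 36); do         # up to ~3 minutes at 5-second intervals
    ip=$(get_ip "$svc")
    [ -n "$ip" ] && break
    sleep 5
  done
  echo "$svc -> ${ip:-<none>}"
done
```

With the stub replaced by the real `kubectl` call, the loop returns as soon as every service has an IP instead of always waiting the full three minutes.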

### 4. Access the Lab
After successful deployment, retrieve external IPs:
```bash
# View all services with external IPs
kubectl get -n c9s-clos svc

# SSH into a specific SONiC node (default credentials: admin/admin)
ssh admin@<external-ip>

# Example
ssh admin@172.18.1.15
```

### 5. Cleanup
Tear down the entire lab environment:
```bash
./destroy.sh
```

**What it does**:
- Deletes the `c9s-clos` namespace (all topology resources)
- Deletes the `c9s` namespace (Clabernetes)
- Removes kube-vip configmap, daemonset, and cloud controller
- Cleans up kube-vip RBAC resources
- Removes all related Kubernetes objects
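Whether the teardown actually removed everything can be spot-checked afterwards. A standalone sketch; `ns_exists` is a hypothetical stub for a `kubectl get namespace "$1"` check:

```shell
# Sketch: verify the lab namespaces are gone after destroy.sh.
# ns_exists is a hypothetical stub for: kubectl get namespace "$1" >/dev/null 2>&1
ns_exists() { false; }   # assume cleanup succeeded in this standalone sketch

for ns in c9s-clos c9s kube-vip; do
  if ns_exists "$ns"; then
    echo "$ns: still present"
  else
    echo "$ns: removed"
  fi
done
```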
29 changes: 29 additions & 0 deletions test/emulation/clos/clos.clab.yml
@@ -0,0 +1,29 @@
name: clos

topology:
  nodes:
    sonic-spine1:
      kind: sonic-vm
      image: dberes/sonic-vs:latest
    sonic-spine2:
      kind: sonic-vm
      image: dberes/sonic-vs:latest
    sonic-leaf1:
      kind: sonic-vm
      image: dberes/sonic-vs:latest
    sonic-leaf2:
      kind: sonic-vm
      image: dberes/sonic-vs:latest
    client1:
      kind: linux
      image: ghcr.io/hellt/network-multitool:latest
    client2:
      kind: linux
      image: ghcr.io/hellt/network-multitool:latest
  links:
    - endpoints: ["sonic-spine1:eth1", "sonic-leaf1:eth1"]
    - endpoints: ["sonic-spine1:eth2", "sonic-leaf2:eth1"]
    - endpoints: ["sonic-spine2:eth1", "sonic-leaf1:eth2"]
    - endpoints: ["sonic-spine2:eth2", "sonic-leaf2:eth2"]
    - endpoints: ["sonic-leaf1:eth3", "client1:eth1"]
    - endpoints: ["sonic-leaf2:eth3", "client2:eth1"]
4 changes: 4 additions & 0 deletions test/emulation/clos/clos_topology.svg
(SVG file: CLOS topology diagram; not rendered in the diff view)
71 changes: 71 additions & 0 deletions test/emulation/clos/deploy.sh
@@ -0,0 +1,71 @@
#!/bin/bash

# SPDX-FileCopyrightText: 2025 SAP SE or an SAP affiliate company and IronCore contributors
# SPDX-License-Identifier: Apache-2.0

set -eu
HELM="docker run --network host -ti --rm -v $(pwd):/apps -w /apps \
-v $HOME/.kube:/root/.kube -v $HOME/.helm:/root/.helm \
-v $HOME/.config/helm:/root/.config/helm \
-v $HOME/.cache/helm:/root/.cache/helm \
alpine/helm:3.12.3"

CLABVERTER="sudo docker run --user $(id -u) -v $(pwd):/clabernetes/work --rm ghcr.io/srl-labs/clabernetes/clabverter"

$HELM upgrade --install --create-namespace --namespace c9s \
clabernetes oci://ghcr.io/srl-labs/clabernetes/clabernetes

kubectl apply -f https://kube-vip.io/manifests/rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml
kubectl create configmap --namespace kube-system kubevip \
--from-literal range-global=172.18.1.10-172.18.1.250 || true

# Set up the kube-vip CLI
KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | \
jq -r ".[0].name")
KUBEVIP="docker run --network host \
--rm ghcr.io/kube-vip/kube-vip:$KVVERSION"
# Install the kube-vip load balancer daemonset in ARP mode
$KUBEVIP manifest daemonset --services --inCluster --arp --interface eth0 | \
kubectl apply -f -


echo "Checking for configuration changes..."
CONFIG=$($CLABVERTER --stdout --naming non-prefixed)

if echo "$CONFIG" | kubectl diff -f - > /dev/null 2>&1; then
  echo "No changes detected, skipping apply and wait"
else
  echo "Changes detected, applying configuration..."
  echo "$CONFIG" | kubectl apply -f -

  # Wait for services to be ready
  echo "Waiting for services to be ready..."
  sleep 180

  # Run init_setup.sh on each SONiC node
  echo "Provisioning SONiC nodes..."
  for service in $(kubectl get -n c9s-clos svc -o jsonpath='{.items[*].metadata.name}' 2>/dev/null | tr ' ' '\n' | grep '^sonic-'); do
    h=$(kubectl get -n c9s-clos svc "$service" -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)
    if [ -n "$h" ]; then
      echo "Running init_setup.sh on $h"
      sshpass -p 'admin' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null admin@"$h" 'bash -s' < init_setup.sh || true
    fi
  done
fi


echo ""
echo "=========================================="
echo "SONiC Lab Topology - External IPs"
echo "=========================================="
for service in $(kubectl get -n c9s-clos svc -o jsonpath='{.items[*].metadata.name}' 2>/dev/null | tr ' ' '\n'); do
  ip=$(kubectl get -n c9s-clos svc "$service" -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)
  if [ -n "$ip" ]; then
    echo "$service -> $ip"
  fi
done

echo ""
echo "Script ended successfully"
46 changes: 46 additions & 0 deletions test/emulation/clos/destroy.sh
@@ -0,0 +1,46 @@
#!/bin/bash

# SPDX-FileCopyrightText: 2025 SAP SE or an SAP affiliate company and IronCore contributors
# SPDX-License-Identifier: Apache-2.0

set -eux

echo "Starting destruction of SONiC lab infrastructure..."

# Delete the c9s-clos namespace (contains all topology resources)
echo "Deleting c9s-clos namespace..."
kubectl delete namespace c9s-clos --ignore-not-found=true
sleep 10

# Delete the c9s namespace (contains clabernetes)
echo "Deleting c9s namespace..."
kubectl delete namespace c9s --ignore-not-found=true
sleep 10

# Remove kube-vip configmap
echo "Removing kube-vip configmap..."
kubectl delete configmap -n kube-system kubevip --ignore-not-found=true

# Remove kube-vip daemonset
echo "Removing kube-vip daemonset..."
kubectl delete daemonset -n kube-system kube-vip-ds --ignore-not-found=true

# Remove kube-vip cloud controller
echo "Removing kube-vip cloud controller deployment..."
kubectl delete deployment -n kube-vip kube-vip-cloud-provider --ignore-not-found=true

# Remove kube-vip namespace if empty
echo "Cleaning up kube-vip namespace..."
kubectl delete namespace kube-vip --ignore-not-found=true

# Remove RBAC resources for kube-vip
echo "Removing kube-vip RBAC resources..."
kubectl delete clusterrole system:kube-vip-role --ignore-not-found=true
kubectl delete clusterrole system:kube-vip-cloud-controller-role --ignore-not-found=true
kubectl delete clusterrolebinding system:kube-vip-binding --ignore-not-found=true
kubectl delete clusterrolebinding system:kube-vip-cloud-controller-binding --ignore-not-found=true
kubectl delete serviceaccount -n kube-system kube-vip --ignore-not-found=true
kubectl delete serviceaccount -n kube-vip kube-vip-cloud-controller --ignore-not-found=true

echo "Destruction complete!"
echo "All SONiC lab resources have been removed."
28 changes: 28 additions & 0 deletions test/emulation/clos/init_setup.sh
@@ -0,0 +1,28 @@
#!/bin/bash

# SPDX-FileCopyrightText: 2025 SAP SE or an SAP affiliate company and IronCore contributors
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail

IMAGE="ghcr.io/ironcore-dev/sonic-agent:sha-966298d"

echo "Configuring DNS..."
if [ -d "/etc/resolvconf/resolv.conf.d" ]; then
  echo "nameserver 8.8.8.8" | sudo tee /etc/resolvconf/resolv.conf.d/head
  sudo /sbin/resolvconf --enable-updates
  sudo /sbin/resolvconf -u
  sudo /sbin/resolvconf --disable-updates
else
  echo "Warning: resolvconf not found, skipping DNS configuration"
fi
echo "Removing old agent container if it exists..."
docker rm -f switch-operator-agent 2>/dev/null || true

echo "Pulling agent image..."
docker pull "$IMAGE"

echo "Starting agent container..."
docker run --pull always -d --name switch-operator-agent --entrypoint /switch-agent-server --network host --restart unless-stopped "$IMAGE" -port 57400

echo "Agent setup completed successfully"