Commit
[Fix]: Fix for README tets (#63)
kfirtoledo authored and GitHub Enterprise committed Jan 24, 2023
1 parent b715e43 commit 10b6b60
Showing 3 changed files with 32 additions and 30 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -1,9 +1,9 @@
# Multi-cloud Border Gateway (MBG) project
Through the Multi-cloud border gateway, users can simplify the connection between various application services that are located in different domains, networks, and cloud infrastructures.
For more details, see the document: TBD
This project contains two main components:
1) MBG - the Multi-cloud Border Gateway that allows secure connections between different services in different network domains.
The MBG has different APIs, such as hello, expose, and connect, enabling service connectivity.
The MBG can also apply some network functions (TCP split, compression, etc.).
2) mbgctl - a CLI implementation that uses the MBG APIs to send control messages to the MBG.
The mbgctl uses commands like expose, connect, and disconnect to create connectivity to services in different network domains using the MBG.
@@ -16,8 +16,8 @@ The MBG can be set up and run on different environments: local environment (Kind
### <ins> Run MBG in local environment (Kind) </ins>
MBG can run in any K8s environment, such as Kind.
To run the MBG in a Kind environment, follow one of the examples:
1) Performance example - Run an iPerf3 test between an iPerf3 client and server using MBG components. This example is used for performance measuring. Instructions can be found [Here](tests/iperf3/kind/README.md).
2) Application example - Run the BookInfo application in different clusters using MBG components. This example demonstrates communication between distributed applications (in different clusters) with different policies. Instructions can be found [Here](tests/bookinfo/kind/README.md).

### <ins>Run MBG in Bare-metal environment with 2 hosts</ins>
Follow the instructions from [Here](tests/bare-metal/commands.txt)
2 changes: 0 additions & 2 deletions tests/bookinfo/kind/test.py
@@ -153,7 +153,6 @@ def connectSvc(srcSvc,destSvc,srcK8sName):
runcmd(f"kind load docker-image maistra/examples-bookinfo-reviews-v2 --name={mbg2ClusterName}")
runcmd(f"kind load docker-image maistra/examples-bookinfo-ratings-v1:0.12.0 --name={mbg2ClusterName}")
runcmd(f"kubectl create -f {folReview}/review-v2.yaml")
runcmd(f"kubectl create service nodeport {review2svc} --tcp={srcK8sSvcPort}:{srcK8sSvcPort} --node-port={review2DestPort}")
runcmd(f"kubectl create -f {folReview}/rating.yaml")
mbgctl2name, mbgctl2Ip= buildMbgctl(mbgctl2Name, mbgMode="inside")
destMbg2Ip = f"{getPodIp(podMbg2)}:{mbg2cPortLocal}"
@@ -169,7 +168,6 @@ def connectSvc(srcSvc,destSvc,srcK8sName):
runcmd(f"kind load docker-image maistra/examples-bookinfo-reviews-v3 --name={mbg3ClusterName}")
runcmd(f"kind load docker-image maistra/examples-bookinfo-ratings-v1:0.12.0 --name={mbg3ClusterName}")
runcmd(f"kubectl create -f {folReview}/review-v3.yaml")
runcmd(f"kubectl create service nodeport {review3svc} --tcp={srcK8sSvcPort}:{srcK8sSvcPort} --node-port={review3DestPort}")
runcmd(f"kubectl create -f {folReview}/rating.yaml")
mbgctl3name, mbgctl3Ip= buildMbgctl(mbgctl3Name , mbgMode="inside")
destMbg3Ip = f"{getPodIp(podMbg3)}:{mbg3cPortLocal}"
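The `test.py` lines above drive kind and kubectl through a `runcmd` helper, whose definition is outside this diff. A minimal sketch of what such a helper plausibly does (echo the command, shell out, fail fast on a non-zero exit); the real implementation in the repository may differ:

```python
import subprocess


def runcmd(cmd: str) -> str:
    """Echo and run a shell command, raising on a non-zero exit code.

    A guess at the helper test.py uses; the actual helper may differ.
    """
    print(f"+ {cmd}")
    result = subprocess.run(cmd, shell=True, check=True,
                            capture_output=True, text=True)
    return result.stdout
```

Failing fast matters here: each `kubectl create` depends on the previous one having succeeded, so swallowing errors would only surface failures much later.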
52 changes: 28 additions & 24 deletions tests/iperf3/kind/README.md
@@ -1,12 +1,12 @@
# <ins>iPerf3 Connectivity and Performance Test</ins>
In this test we check iPerf3 connectivity between different Kind clusters using the MBG components.
This setup uses two Kind clusters:
1) MBG1 cluster - contains MBG, mbgctl (MBG control component), and an iPerf3 client.
2) MBG2 cluster - contains MBG, mbgctl (MBG control component), and an iPerf3 server.


## <ins> Prerequisite installations </ins>
To run a Kind test, check that all prerequisites are installed (Go, Docker, kubectl, Kind):

export PROJECT_FOLDER=`git rev-parse --show-toplevel`
cd $PROJECT_FOLDER
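The prerequisite check can also be automated; a small sketch (the tool list comes from the sentence above, and `shutil.which` simply probes the PATH):

```python
import shutil


def missing_tools(tools=("go", "docker", "kubectl", "kind")):
    """Return the prerequisite binaries that are not found on PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]


if __name__ == "__main__":
    missing = missing_tools()
    if missing:
        raise SystemExit(f"missing prerequisites: {', '.join(missing)}")
    print("all prerequisites found")
```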
@@ -25,16 +25,16 @@ Build MBG docker image:
make docker-build

### <ins> Step 2: Create kind clusters with MBG image </ins>
In this step, we build the kind cluster with an MBG image.
Build the first kind cluster with MBG, mbgctl, and iperf3-client:
1) Create a Kind cluster with MBG image:

kind create cluster --config $PROJECT_FOLDER/manifests/kind/mbg-config1.yaml --name=mbg-agent1
kind load docker-image mbg --name=mbg-agent1

2) Create a MBG deployment:

kubectl create -f $PROJECT_FOLDER/manifests/mbg/mbg.yaml
kubectl create -f $PROJECT_FOLDER/manifests/mbg/mbg-client-svc.yaml

3) Create a mbgctl deployment:
@@ -43,7 +43,7 @@ Build the first kind cluster with MBG, mbgctl, and iperf3-client:
kubectl create -f $PROJECT_FOLDER/manifests/mbgctl/mbgctl-svc.yaml
4) Create an iPerf3-client deployment:

kubectl create -f $PROJECT_FOLDER/tests/iperf3/manifests/iperf3-client/iperf3-client.yaml

Build the second kind cluster with MBG, mbgctl, and iperf3-server:
1) Create a Kind cluster with MBG image:
@@ -67,44 +67,48 @@ Check that container statuses are Running.

kubectl get pods

### <ins> Step 3: Start running MBG and mbgctl </ins>
In this step, we start running the MBG and mbgctl.
First, initialize the parameters of the test (pods' names and IPs):

kubectl config use-context kind-mbg-agent1
export MBG1=`kubectl get pods -l app=mbg -o custom-columns=:metadata.name`
export MBG1IP=`kubectl get nodes -o jsonpath={.items[0].status.addresses[0].address}`
export MBG1PODIP=`kubectl get pod $MBG1 --template '{{.status.podIP}}'`
export MBGCTL1=`kubectl get pods -l app=mbgctl -o custom-columns=:metadata.name`
export MBGCTL1IP=`kubectl get pod $MBGCTL1 --template '{{.status.podIP}}'`
export IPERF3CLIENT_IP=`kubectl get pods -l app=iperf3-client -o jsonpath={.items[*].status.podIP}`
export IPERF3CLIENT=`kubectl get pods -l app=iperf3-client -o custom-columns=:metadata.name`

kubectl config use-context kind-mbg-agent2
export MBG2=`kubectl get pods -l app=mbg -o custom-columns=:metadata.name`
export MBG2IP=`kubectl get nodes -o jsonpath={.items[0].status.addresses[0].address}`
export MBG2PODIP=`kubectl get pod $MBG2 --template '{{.status.podIP}}'`
export MBGCTL2=`kubectl get pods -l app=mbgctl -o custom-columns=:metadata.name`
export MBGCTL2IP=`kubectl get pod $MBGCTL2 --template '{{.status.podIP}}'`
export IPERF3SERVER_IP=`kubectl get pods -l app=iperf3-server -o jsonpath={.items[*].status.podIP}`
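The `jsonpath`/`--template` queries above extract single fields from the JSON objects kubectl returns. For readers unfamiliar with that syntax, the same lookups expressed in Python over sample data of the same shape (a sketch, not tied to a live cluster):

```python
def pod_ip(pod: dict) -> str:
    # Equivalent of: kubectl get pod $NAME --template '{{.status.podIP}}'
    return pod["status"]["podIP"]


def node_address(nodes: dict) -> str:
    # Equivalent of:
    # kubectl get nodes -o jsonpath={.items[0].status.addresses[0].address}
    return nodes["items"][0]["status"]["addresses"][0]["address"]


# Sample objects mimicking the shape kubectl emits (IPs are made up):
sample_pod = {"status": {"podIP": "10.244.0.5"}}
sample_nodes = {"items": [{"status": {"addresses": [
    {"type": "InternalIP", "address": "172.18.0.2"}]}}]}

print(pod_ip(sample_pod))          # 10.244.0.5
print(node_address(sample_nodes))  # 172.18.0.2
```

Note that the node query takes the first address of the first node; in a single-node Kind cluster that is the node's internal IP.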

Start MBG1 (the MBG creates an HTTP server, so it is better to run this command in a separate terminal, e.g. using tmux, or to run it in the background):

kubectl config use-context kind-mbg-agent1
kubectl exec -i $MBG1 -- ./mbg start --id "MBG1" --ip $MBG1IP --cport 30443 --cportLocal 8443 --dataplane mtls --rootCa ./mtls/ca.crt --certificate ./mtls/mbg1.crt --key ./mtls/mbg1.key
Initialize mbgctl (mbg control):

kubectl exec -i $MBGCTL1 -- ./mbgctl start --id "hostCluster" --ip $MBGCTL1IP --mbgIP $MBG1PODIP:8443 --dataplane mtls --rootCa ./mtls/ca.crt --certificate ./mtls/mbg1.crt --key ./mtls/mbg1.key

Create a K8s NodePort service to connect the MBG cport to the MBG local cport:

kubectl create service nodeport mbg --tcp=8443:8443 --node-port=30443

Start MBG2 (the MBG creates an HTTP server, so it is better to run this command in a separate terminal, e.g. using tmux, or to run it in the background):

kubectl config use-context kind-mbg-agent2
kubectl exec -i $MBG2 -- ./mbg start --id "MBG2" --ip $MBG2IP --cport 30443 --cportLocal 8443 --dataplane mtls --rootCa ./mtls/ca.crt --certificate ./mtls/mbg2.crt --key ./mtls/mbg2.key
Initialize mbgctl (mbg control):

kubectl exec -i $MBGCTL2 -- ./mbgctl start --id "destCluster" --ip $MBGCTL2IP --mbgIP $MBG2PODIP:8443 --dataplane mtls --rootCa ./mtls/ca.crt --certificate ./mtls/mbg2.crt --key ./mtls/mbg2.key

Create a K8s NodePort service to connect the MBG cport to the MBG local cport:

@@ -117,12 +121,12 @@ In this step, we set the communication between the MBGs.
First, send MBG2's details to MBG1 using mbgctl:

kubectl config use-context kind-mbg-agent1
kubectl exec -i $MBGCTL1 -- ./mbgctl addPeer --id "MBG2" --ip $MBG2IP --cport 30443

Send Hello message from MBG1 to MBG2:

kubectl exec -i $MBGCTL1 -- ./mbgctl hello
### <ins> Step 5: Add services </ins>
In this step, we add the iperf3 services for each MBG.
Add an iperf3-client service to MBG1:

kubectl exec -i $MBGCTL1 -- ./mbgctl addService --serviceId iperf3-client --serviceIp $IPERF3CLIENT_IP
@@ -137,11 +141,11 @@ In this step, we expose the iperf3-server service from MBG2 to MBG1.
kubectl exec -i $MBGCTL2 -- ./mbgctl expose --serviceId iperf3-server

### <ins> Step 7: iPerf3 test </ins>
In this step, we can check the secure communication between the iPerf3 client and server by sending the traffic using the MBGs.

kubectl config use-context kind-mbg-agent1
export MBG1PORT_IPERF3SERVER=`python3 $PROJECT_FOLDER/tests/aux/getMbgLocalPort.py -m $MBG1 -s iperf3-server`
kubectl exec -i $IPERF3CLIENT -- iperf3 -c $MBG1PODIP -p $MBG1PORT_IPERF3SERVER
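If the iPerf3 run above is repeated with the `--json` flag, the report can be checked programmatically rather than read by eye. A sketch of pulling the received TCP throughput out of such a report (field names follow iPerf3's JSON output; shown against a trimmed sample rather than live output):

```python
import json


def received_gbps(report_json: str) -> float:
    """Return received throughput in Gbit/s from an `iperf3 --json` TCP report."""
    report = json.loads(report_json)
    return report["end"]["sum_received"]["bits_per_second"] / 1e9


# Trimmed sample with the same shape as a real iPerf3 TCP report:
sample = '{"end": {"sum_received": {"bits_per_second": 2.5e9}}}'
print(received_gbps(sample))  # 2.5
```

A check like this is how a CI job could assert that the MBG data path (mTLS in this setup) sustains some minimum throughput.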


### <ins> Cleanup </ins>
