Cleanup sources and comments in NC for transport app.
stevelorenz committed Dec 6, 2019
1 parent 20beafc commit d4f909a
Showing 12 changed files with 602 additions and 478 deletions.
6 changes: 6 additions & 0 deletions README.md
@@ -53,6 +53,12 @@ Common facts about ComNetsEmu:

- ComNetsEmu is developed with **Python 3.6**.

- Examples and applications in this repository are mainly developed in high-level scripting languages for simplicity.
These programs are **not** performance-oriented or optimized.
Contact us if you want a highly optimized implementation of the concepts introduced in this book.
For example, we have a [DPDK](https://www.dpdk.org/)-accelerated version (implemented in C) of low-latency
(sub-millisecond) Network Coding (NC) as a network function.

#### Main Features

- Use Docker hosts in Mininet topologies.
97 changes: 71 additions & 26 deletions app/network_coding_for_transport/README.md
@@ -12,47 +12,92 @@
Programs: Client --- Encoder --- Recoder 1 --- Recoder 2 --- .... --- Recoder N-
Topology: Host 1 --- Host 2 --- Host 3 --- Host 4 --- .... --- Host N-2 --- Host N-1 --- Host N
```

This folder contains the following files:

1. Dockerfile: The Dockerfile used to build the encoder, recoder and decoder VNF Docker containers.

1. build_kodo_lib.sh: Script to build the Kodo library on the system running the testbed. Because Kodo requires a
[license](http://steinwurf.com/license.html), the binaries cannot be released. The dynamic library file kodo.so
must be built first and located in this directory to run the emulation. This script builds the library (sources
are downloaded to "$HOME/kodo-python") and copies it to this directory.

1. build_docker_images.sh: Script to build all coder container images.

1. encoder.py, recoder.py, decoder.py, common.py, rawsock_helpers.py and log.py: Python programs for the coders and
their helpers. Since the VNFs work on the network layer and handle Ethernet frames, the
[Linux Packet Socket](http://man7.org/linux/man-pages/man7/packet.7.html) is used (a minimal packet-socket sketch is
shown after this list). These programs are copied into the VNF containers when ./build_docker_images.sh is run.

1. multihop_topo.py, adaptive_redundancy.py, adaptive_rlnc_sdn_controller.py and redundancy_calculator.py:
Python programs for the multi-hop topology and different profiles (test setups/scenarios).
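
The coders open raw packet sockets to read and write whole Ethernet frames. The following is a minimal sketch of such
a socket in Python; the interface name is only an example, and the real send/receive helpers are implemented in
rawsock_helpers.py:

```python
#!/usr/bin/env python3
# Minimal sketch: open a Linux packet socket (AF_PACKET) and forward raw
# Ethernet frames on one interface. The interface name below is an example;
# the actual helpers used by the coders live in rawsock_helpers.py.
import socket

ETH_P_ALL = 0x0003  # capture frames of every protocol

sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
sock.bind(("h2-s2", 0))  # bind to the VNF's data-plane interface

frame = sock.recv(4096)  # one raw frame, including the 14-byte Ethernet header
sock.send(frame)         # in this sketch the frame is simply sent back out unmodified
```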

This application creates a chain topology with Docker hosts (the multihop_topo.py setup requires at least seven hosts).
The links between them have loss rates (currently all links have the same fixed loss rate).

There are two main profiles (test setups/scenarios), implemented in multihop_topo.py and adaptive_redundancy.py.

In all profiles, the encoder and decoder use [on-the-fly full vector RLNC](https://github.com/steinwurf/kodo-python/blob/master/examples/encode_on_the_fly.py).
Coding parameters, including field size, generation size and payload size, are defined in [common.py](./common.py).
The recoder can either store-and-forward (act as a dummy relay) or recode-and-forward, depending on the configuration.
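
For orientation, the snippet below sketches the kind of constants common.py defines. The names match the ones
referenced by the coder programs, but the concrete values here are illustrative assumptions; check
[common.py](./common.py) for the real ones.

```python
# Illustrative sketch of the RLNC parameters that common.py defines.
# All values below are assumptions for illustration only.
FIELD_SIZE = 2 ** 8       # finite field GF(2^8), a common choice for full-vector RLNC
GENERATION_SIZE = 16      # number of source symbols coded together in one generation
SYMBOL_SIZE = 1400        # size of one coded symbol (payload) in bytes
META_DATA_LEN = 60        # bytes reserved per frame for coding metadata
UDP_PORT_DATA = 8888      # UDP port of the Iperf data stream
```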

Before running the following profiles, run these commands to prepare the required libraries and container images:

```bash
$ bash ./build_kodo_lib.sh
$ sudo bash ./build_docker_images.sh
$ sudo ./install_dependencies.sh
```

### Profile 1: Multi-hop Topology with Mobile Recoder ###

In this scenario, due to the latency/computation overhead, only one recoder enables the recode-and-forward mode at a time.
All other recoders perform store-and-forward.
In this deterministic scenario, the recode function is enabled on the recoders from left to right, one by one.
For each placement of the active recoder, UDP traffic is generated from client to server with Iperf to measure throughput
and packet losses.
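
The control flow of this sweep can be sketched as follows. The two helpers are simplified stand-ins for what
multihop_topo.py actually does with the ComNetsEmu/Mininet APIs; only the loop structure is meant to be illustrative:

```python
# Sketch of the deterministic recoder sweep (simplified; multihop_topo.py is authoritative).

def set_recode_mode(recoder, enabled):
    """Stand-in: switch one recoder between recode-and-forward and store-and-forward."""
    print("{}: recode {}".format(recoder, "on" if enabled else "off"))

def run_iperf_udp(client, server, duration=10):
    """Stand-in: run a UDP Iperf flow from client to server, return (throughput, loss)."""
    return 0.0, 0.0

def sweep_recoder_placements(recoders, client, server):
    results = []
    for active in recoders:
        # Exactly one recoder recodes per run; all others only store-and-forward.
        for r in recoders:
            set_recode_mode(r, enabled=(r == active))
        results.append((active,) + run_iperf_udp(client, server))
    return results

print(sweep_recoder_placements(["recoder1", "recoder2", "recoder3"], "client", "server"))
```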

You can simply run the automated emulation of the mobile\_recoder\_deterministic profile with the following command:

```bash
$ sudo python3 ./multihop_topo.py
```

### Profile 2: Multi-hop Topology with Adaptive Redundancy ###

#### Motivation and Setup

This example demonstrates how to leverage the SDN controller's knowledge of the network parameters to flexibly adapt
the redundancy created by Random Linear Network Coding (RLNC) in order to repair losses in the transmission.

A simple Client - Encoder - Decoder - Server topology is created with an unknown loss ratio between the encoder and decoder.
The goal of this example is to show that the controller can estimate these losses and precisely adapt the amount of
redundancy to guarantee a certain delivery probability for each transmitted packet.

The following figure depicts the setup:

```text
Control plane:                    SDN Controller
                                 /              \
Data plane:    Switch1 --- Switch2 --- Dummy --- Switch4 --- Switch5
                  |           |                     |           |
Hosts:          Host1       Host2                 Host3       Host4
                  |           |                     |           |
Programs:      Client      Encoder   Loss emu.   Decoder     Server
```
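
As a rough illustration of the underlying math, the sketch below computes how many coded packets have to be sent per
generation so that enough of them survive a given loss ratio. This assumes independent losses and a simple binomial
model; it is not necessarily what redundancy_calculator.py implements.

```python
# Sketch: how many coded packets n must be sent so that, with independent loss
# probability p per packet, at least g of them arrive with probability >= target?
# (One possible model only; the actual redundancy_calculator.py may differ.)
from math import comb

def packets_needed(g, p, target=0.99, n_max=512):
    """Smallest n with P(Binomial(n, 1 - p) >= g) >= target."""
    q = 1.0 - p
    for n in range(g, n_max + 1):
        p_enough = sum(comb(n, k) * q ** k * p ** (n - k) for k in range(g, n + 1))
        if p_enough >= target:
            return n
    raise ValueError("target not reachable within n_max")

# Example: generation size 16, 20 % estimated losses, 99 % delivery target
# -> roughly 26 coded packets, i.e. about 60 % added redundancy.
print(packets_needed(g=16, p=0.2))
```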

#### Running the experiment

The experiment can be run with:

```bash
$ sudo python3 adaptive_redundancy.py
```

For all profiles, UDP traffic is generated from the Client to the Server with Iperf to measure throughput and packet losses.
Coding parameters can be found in [common.py](./common.py).
40 changes: 0 additions & 40 deletions app/network_coding_for_transport/adaptive_redundancy.md

This file was deleted.

92 changes: 23 additions & 69 deletions app/network_coding_for_transport/adaptive_redundancy.py
@@ -3,7 +3,7 @@
# vim:fenc=utf-8

"""
About: Example of using Network Coding (NC) for transport with adaptive redundancy.
"""


@@ -20,92 +20,46 @@
from mininet.term import makeTerm
from mininet.cli import CLI

# Just for prototyping...
# Should be replaced with SDN controller application
# ------------------------------------------------------------------------------


def get_ofport(ifce):
"""Get the openflow port based on iterface name
:param ifce (str): Name of the interface.
"""
return check_output(split("ovs-vsctl get Interface {} ofport".format(ifce))).decode(
"utf-8"
)


def add_ovs_flows(net, switch_num):
"""Add OpenFlow rules for ARP/PING packets and other general traffic"""

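    # Forward direction: traffic entering from the client side is forwarded
    # hop by hop along the switch chain towards the server side.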
check_output(split('ovs-ofctl add-flow s1 "priority=1,in_port=1,actions=output=2"'))
check_output(split('ovs-ofctl add-flow s2 "priority=1,in_port=2,actions=output=3"'))
check_output(split('ovs-ofctl add-flow s3 "priority=1,in_port=2,actions=output=3"'))
check_output(split('ovs-ofctl add-flow s4 "priority=1,in_port=2,actions=output=3"'))
check_output(split('ovs-ofctl add-flow s5 "priority=1,in_port=2,actions=output=1"'))

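    # Reverse direction: the mirror rules for traffic flowing back towards the client.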
check_output(split('ovs-ofctl add-flow s1 "priority=1,in_port=2,actions=output=1"'))
check_output(split('ovs-ofctl add-flow s2 "priority=1,in_port=3,actions=output=2"'))
check_output(split('ovs-ofctl add-flow s3 "priority=1,in_port=3,actions=output=2"'))
check_output(split('ovs-ofctl add-flow s4 "priority=1,in_port=3,actions=output=2"'))
check_output(split('ovs-ofctl add-flow s5 "priority=1,in_port=1,actions=output=2"'))


def dump_ovs_flows(switch_num):
"""Dump OpenFlow rules of first switch_num switches"""
for i in range(switch_num):
ret = check_output(split("sudo ovs-ofctl dump-flows s{}".format(i + 1)))
ret = check_output(split("ovs-ofctl dump-flows s{}".format(i + 1)))
info("### Flow table of the switch s{} after adding flows:\n".format(i + 1))
print(ret.decode("utf-8"))


# ------------------------------------------------------------------------------


def disable_cksum_offload(switch_num):
"""Disable RX/TX checksum offloading"""
for i in range(switch_num):
ifce = "s%s-h%s" % (i + 1, i + 1)
check_output(split("sudo ethtool --offload %s rx off tx off" % ifce))


def save_hosts_info(hosts):
"""Save host's info (name, MAC, IP) in a CSV file
:param hosts:
"""
info = list()
for i, h in enumerate(hosts):
mac = str(h.MAC("h{}-s{}".format(i + 1, i + 1)))
ip = str(h.IP())
info.append([h.name, mac, ip])

with open("hosts_info.csv", "w+") as csvfile:
writer = csv.writer(
csvfile, delimiter=",", quotechar="|", quoting=csv.QUOTE_MINIMAL
)
for i in info:
writer.writerow(i)
check_output(split("ethtool --offload %s rx off tx off" % ifce))


def deploy_coders(mgr, hosts):
@@ -126,7 +80,7 @@
"decoder",
hosts[-2].name,
"nc_coder",
"sudo python3 ./decoder.py h%d-s%d" % (len(hosts) - 1, len(hosts) - 1),
"python3 ./decoder.py h%d-s%d" % (len(hosts) - 1, len(hosts) - 1),
wait=3,
docker_args={},
)
@@ -135,7 +89,7 @@ def deploy_coders(mgr, hosts):
"encoder",
hosts[1].name,
"nc_coder",
"sudo python3 ./encoder.py h2-s2",
"python3 ./encoder.py h2-s2",
wait=3,
docker_args={},
)
@@ -180,7 +134,7 @@ def run_iperf_test(h_clt, h_srv, proto, time=10, print_clt_log=False):
"bw": "100K",
"time": time,
"interval": 1,
"length": str(SYMBOL_SIZE - 60),
"length": str(SYMBOL_SIZE - META_DATA_LEN),
"proto": "-u",
"suffix": "> /dev/null 2>&1 &",
}
@@ -189,7 +143,9 @@
iperf_client_para["suffix"] = ""

h_srv.cmd(
"iperf -s -p {} -i 1 {} > /tmp/iperf_server.log 2>&1 &".format(UDP_PORT_DATA, iperf_client_para["proto"])
"iperf -s -p {} -i 1 {} > /tmp/iperf_server.log 2>&1 &".format(
UDP_PORT_DATA, iperf_client_para["proto"]
)
)

iperf_clt_cmd = """iperf -c {server_ip} -p {port} -t {time} -i {interval} -b {bw} -l {length} {proto} {suffix}""".format(
@@ -315,5 +271,3 @@ def run_adaptive_redundancy(host_num, coder_log_conf):
coder_log_conf = {"encoder": 1, "decoder": 1, "recoder": 1}

run_adaptive_redundancy(5, coder_log_conf)

check_output("../../util/emu_cleanup.sh")