Table of Contents

About The Project • Getting Started • Usage • Related • License
This project is built with Vagrant, using Docker as a provider and Ansible for provisioning. Once deployed, the virtual environment is intended for testing multipath transport protocols such as MPQUIC and MPTCP.

This configuration allows testing different scenarios, with varying network conditions on each interface, using the Linux Traffic Control (TC) and Network Emulator (NetEm) tool suite. The lab thus aims to explore the mechanisms of these multipath protocols, observe their performance under different traffic types and conditions, and evaluate their use for live video streaming applications (bandwidth aggregation, latency).
To properly build this lab, the following setup is required:

- A host machine running a recent Linux distribution, such as Ubuntu 22.04 LTS, that ships a recent Linux kernel with built-in MPTCP support enabled by default (kernel version ≥ 5.15.x).
- MPTCP enabled in the kernel:

```sh
sudo sysctl -a | grep mptcp.enabled
```
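Optionally, the same check can be done from Python (3.10 or newer, which exposes `socket.IPPROTO_MPTCP`). This small probe is only a convenience and is not part of the lab itself:

```python
import socket

# Creating an MPTCP socket fails with AttributeError on Python < 3.10
# and with OSError (e.g. EPROTONOSUPPORT) if the kernel has MPTCP disabled.
try:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_MPTCP)
    sock.close()
    print("MPTCP sockets are available")
except (AttributeError, OSError) as exc:
    print(f"MPTCP not available: {exc}")
```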
- Clone this repository:

```sh
git clone https://github.com/paulbertin/multipath-network-protocols-lab
```

- Build the custom image from the Dockerfile, with the tag `vagrant-ubuntu-22.04`:

```sh
docker build -t vagrant-ubuntu-22.04 multipath-network-protocols-lab/docker/ubuntu
```

- Run the Vagrantfile:

```sh
cd multipath-network-protocols-lab
vagrant up
```

- Check that the containers are running:

```sh
# Using the vagrant command
vagrant status

# Alternatively, use the Docker CLI to get more details
docker ps
```

- Now, the Grafana and InfluxDB dashboards should be accessible from the host:
| Dashboard | Address | Credentials (login:password) |
|---|---|---|
| Grafana | http://localhost:3000 | admin:grafana |
| InfluxDB | http://localhost:8086 | admin:influxdb |
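As an optional sanity check (it only assumes the default ports from the table above), a few lines of Python can confirm that both dashboards answer from the host:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

# Any HTTP answer (even a redirect to a login page or a 404 on the root path)
# means the container is up and its port is published on the host.
for name, url in [("Grafana", "http://localhost:3000"), ("InfluxDB", "http://localhost:8086")]:
    try:
        with urlopen(url, timeout=3) as resp:
            print(f"{name}: reachable (HTTP {resp.status})")
    except HTTPError as exc:
        print(f"{name}: reachable (HTTP {exc.code})")
    except URLError as exc:
        print(f"{name}: not reachable ({exc.reason})")
```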
This lab is composed of 5 containers:
| Container name | Description |
|---|---|
| drone | Runs a proxy and a client application that sends traffic to the server |
| router | Transparently forwards traffic between the client and the server |
| server | Runs a proxy and a server application that receives traffic from the client |
| influxdb | A time series database used to store the metrics collected by Telegraf. Telegraf is a data collection agent running on each of the 3 containers (drone, router, server) that collects metrics from the network interfaces (using the Net monitoring plugin) and sends them to the influxdb container. |
| grafana | A data visualization and monitoring solution that is connected to influxdb and displays the metrics collected by Telegraf. |
The proxies on `drone` and `server` are used to forward traffic between the client and the server using either MPTCP or MPQUIC. The client and server applications are used to generate traffic and evaluate the performance of these multipath transport protocols.

Most applications do not support multipath natively and require development work to enable this feature. As a workaround, the provided proxies work in pairs (on both the client and server side) to split the TCP connection and transmit all TCP-based application data over a multipath transport layer.

Note: While proxying allows us to quickly test the behavior of these protocols under various application traffic patterns, it is not an efficient solution for production. Applications need specific algorithms and adaptations to leverage multipath for their use cases.
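To illustrate the splitting principle, below is a minimal Python sketch of a client-side forwarder (plain TCP in, MPTCP out). It is only an illustration of the idea, not the repository's mptcp_forwarders implementation; it assumes Python ≥ 3.10 on an MPTCP-enabled kernel, and the listening and destination addresses are placeholders. The server-side forwarder is the mirror image (MPTCP in, plain TCP out).

```python
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 1111)     # where the plain-TCP application connects
FORWARD_ADDR = ("10.0.0.30", 3333)  # remote multipath-side forwarder (placeholder)

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until EOF or error, then close both sockets."""
    try:
        while chunk := src.recv(65536):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client: socket.socket) -> None:
    # The outgoing leg uses an MPTCP socket, so the kernel can spread the
    # connection over several subflows/interfaces.
    upstream = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_MPTCP)
    upstream.connect(FORWARD_ADDR)
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

def main() -> None:
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # plain TCP listener
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN_ADDR)
    listener.listen()
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```

The MPQUIC forwarder follows the same pattern, except that the multipath leg is carried by a multipath-capable QUIC stack instead of a kernel MPTCP socket.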
This demo assumes that the virtual environment is already deployed and running. It features:

- TCP application traffic proxying via MPTCP or MPQUIC (sections A) or B)).
- Traffic generation using the `ffmpeg` application for live video streaming from the `drone` to the `server` container.
- Real-time per-interface network bandwidth usage visualization in Grafana.
Show instructions
- Start the MPTCP proxy on the `server` container:

```sh
# SSH into the server container
vagrant ssh server

# Start the MPTCP proxy (MPTCP to TCP)
./mptcp_forwarders/mptcp-tcp-forwarder.py -l 10.0.0.30:3333 -d 127.0.0.1:3333
```

- Start the MPTCP proxy on the `drone` container:

```sh
# SSH into the drone container
vagrant ssh drone

# Start the MPTCP proxy (TCP to MPTCP)
./mptcp_forwarders/tcp-mptcp-forwarder.py -l <eth0 ip address>:1111 -d 10.0.0.30:3333
```

Note: The IP address of the `eth0` interface can be retrieved with the `ip a` command (a small Python helper is also sketched just below). This IP address is dynamically allocated by Docker (default network) and is reachable from the host machine. It is used to connect to the proxy from the client application, which can be executed either on the host or within the `drone` container.
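For scripted runs, the interface address can also be fetched programmatically. The snippet below is a hypothetical standard-library helper (Linux only), equivalent to reading the output of `ip a`:

```python
import fcntl
import socket
import struct

def iface_ipv4(ifname: str) -> str:
    """Return the IPv4 address of a network interface (Linux only)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    packed = struct.pack("256s", ifname.encode()[:15])
    addr = fcntl.ioctl(s.fileno(), 0x8915, packed)[20:24]  # SIOCGIFADDR
    s.close()
    return socket.inet_ntoa(addr)

print(iface_ipv4("eth0"))
```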
The two forwarder commands above set up the following forwarding rules:

- The proxy on the `drone` container listens for TCP connections on `eth0` (port 1111).
- Incoming TCP traffic is forwarded from `drone` to the MPTCP proxy running on the `server` container (port 3333).
- The proxy on the `server` container receives MPTCP traffic on `eth1` (port 3333).
- Incoming MPTCP traffic is forwarded to the TCP application server running locally on the `server` container (port 3333).
Show instructions
- Start the MPQUIC proxy on the `server` container:

```sh
# SSH into the server container
vagrant ssh server

# Start the MPQUIC proxy (MPQUIC to TCP)
./mpquic_forwarder/start_server_demo.sh
```

- Start the MPQUIC proxy on the `drone` container:

```sh
# SSH into the drone container
vagrant ssh drone

# Start the MPQUIC proxy (TCP to MPQUIC)
./mpquic_forwarder/start_client_demo.sh
```

Note: By default, the MPQUIC proxy on the `drone` container listens for TCP connections on 127.0.0.1:1111. You can change the listening address by replacing the `listen_addr` value at line 268 in mpquic_forwarder/src/client.rs. Alternatively, use this one-liner to replace the value:

```sh
sed -i "s/127.0.0.1/<drone_eth0_ipaddr>/g" mpquic_forwarder/src/client.rs
```
Show instructions
- Start the `ffplay` server application on the `server` container:

```sh
# SSH into the server container
vagrant ssh server

# Start the ffplay server application (listening on the same port the proxy forwards to)
ffplay -fflags nobuffer -flags low_delay -framedrop tcp://localhost:3333?listen
```

- Start the `ffmpeg` client application (on the host machine):

```sh
# Start the ffmpeg client application (connect to the proxy on drone)
ffmpeg -re -f v4l2 -i /dev/video0 -pix_fmt yuv420p -preset ultrafast -tune zerolatency -fflags nobuffer -b:v 2000k -vf scale=960x720 -f mpegts tcp://<drone_eth0_ipaddr>:1111
```

The above instruction runs ffmpeg on the host machine, using the laptop camera as the video source:

- The `ffmpeg` client application connects to the proxy on the `drone` container.
- The proxy forwards the traffic to the `ffplay` server application on the `server` container.
- The `ffplay` application displays the video stream on the host machine (via X11).

Please refer to the ffmpeg documentation for more information on the command line options. It is also possible to stream a pre-recorded video file (as in the demonstration) using the following command:

```sh
ffmpeg -re -i <video_file> -preset ultrafast -tune zerolatency -fflags nobuffer -b:v 2000k -vf scale=1920x1080 -f mpegts tcp://<drone_eth0_ipaddr>:1111
```
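For repeated or scripted test runs, the same pre-recorded-file command can be wrapped in a small Python helper. This is only a sketch: it assumes ffmpeg is installed on the host, and the drone address and file path are placeholders you supply:

```python
import subprocess

def stream_file(video_file: str, drone_addr: str, port: int = 1111) -> None:
    """Stream a pre-recorded file through the drone-side proxy with ffmpeg."""
    cmd = [
        "ffmpeg", "-re", "-i", video_file,
        "-preset", "ultrafast", "-tune", "zerolatency",
        "-fflags", "nobuffer", "-b:v", "2000k",
        "-vf", "scale=1920x1080",
        "-f", "mpegts", f"tcp://{drone_addr}:{port}",
    ]
    subprocess.run(cmd, check=True)  # blocks until the stream ends

# Example (placeholder values):
# stream_file("sample.mp4", "172.17.0.2")
```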
Using Network Emulator (NetEm), this testbed allows emulating different network conditions on each of the four interfaces used for multipath on `drone`.

By default, when the environment is deployed, homogeneous network conditions are applied on all interfaces. This sets a 40 ms RTT, 1 Mbps bandwidth, and 0% packet loss (see the default values in traffic_control/default/main.yml).

One can modify the existing scenarios or create new ones. Some example scenarios are provided in the playbooks/scenarios folder. These are YAML files defining variables for every tc-netem parameter applied on each network interface.
Show example
```yaml
# ./playbooks/scenarios/default.yml
# This scenario adds a 20ms delay (with a 10ms jitter)
# and limits the bandwidth to 1mbit on all interfaces.
tc_already_setup: true
tc_rate:
  eth1: 1mbit
  eth2: 1mbit
  eth3: 1mbit
  eth4: 1mbit
tc_delay:
  eth1: 20ms
  eth2: 20ms
  eth3: 20ms
  eth4: 20ms
tc_delay_jitter:
  eth1: 10ms
  eth2: 10ms
  eth3: 10ms
  eth4: 10ms
tc_delay_correlation:
  eth1: 25%
  eth2: 25%
  eth3: 25%
  eth4: 25%
tc_loss:
  eth1: 0%
  eth2: 0%
  eth3: 0%
  eth4: 0%
  eth5: 0.1% # Better to add loss on the router side rather than locally
```

Run the traffic_control.yml Ansible playbook to modify the network conditions on the fly. Variables files are used to apply the desired parameters.
For example, use the following command to apply the high_latency.yml scenario:

```sh
ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory traffic_control.yml -e ./playbooks/scenarios/high_latency.yml
```

Note: The `vagrant_ansible_inventory` file is generated by Vagrant and contains the list of all the containers. It is used by Ansible to connect to the containers.
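For reference, the parameters in these scenario files map onto plain tc-netem commands. The sketch below is a hypothetical Python equivalent of what such a playbook ends up applying; it is not the repository's actual role, needs root inside the target container, and assumes PyYAML is installed:

```python
import subprocess
import yaml  # pip install pyyaml

def apply_scenario(path: str) -> None:
    """Apply the tc_* variables of a scenario file with tc-netem (rough equivalent)."""
    with open(path) as f:
        scenario = yaml.safe_load(f)

    # Collect every interface mentioned in any of the per-interface dictionaries.
    interfaces = set()
    for key, value in scenario.items():
        if key.startswith("tc_") and isinstance(value, dict):
            interfaces.update(value)

    for iface in sorted(interfaces):
        netem_args = [
            "delay",
            str(scenario.get("tc_delay", {}).get(iface, "0ms")),
            str(scenario.get("tc_delay_jitter", {}).get(iface, "0ms")),
            str(scenario.get("tc_delay_correlation", {}).get(iface, "0%")),
            "loss", str(scenario.get("tc_loss", {}).get(iface, "0%")),
            "rate", str(scenario.get("tc_rate", {}).get(iface, "1gbit")),
        ]
        # 'replace' creates the netem qdisc if absent and updates it otherwise.
        subprocess.run(
            ["tc", "qdisc", "replace", "dev", iface, "root", "netem", *netem_args],
            check=True,
        )

if __name__ == "__main__":
    apply_scenario("playbooks/scenarios/default.yml")
```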
- The current MPTCP and MPQUIC proxies are limited to single-port forwarding.
- Some applications, like iPerf3, do not work through the provided proxies. For MPTCP, consider using mptcpize as a workaround.
- The MPQUIC forwarder may corrupt TCP packets exceeding the maximum UDP packet size. A packet fragmentation hotfix has been developed; it helps reduce errors but does not solve this issue for all traffic patterns.
- MPQUIC forwarder - A proxy developed alongside this project to forward traffic between a client and a server using MPQUIC. It is based on Quentin De Coninck's pull request that introduces multipath support to Cloudflare's Quiche library.
- Cloudflare Quiche - A Rust implementation of the QUIC transport protocol and HTTP/3, supported by Cloudflare and based on the IETF specifications.
- MPQUIC IETF Draft - IETF specification of the Multipath Extension for QUIC (MPQUIC).
- MPTCP RFC - IETF specification of the Multipath TCP (MPTCP) protocol.
- MPTCP documentation - Documentation of the out-of-tree kernel implementation of MPTCP. For more information on the built-in MPTCP in newer Linux kernels, refer to this website.
Distributed under the MIT license. See LICENSE.md for more information.
Paul Bertin - pro.paulbertin@gmail.com
