[[Category:Research]]

=Controlling Queuing Delays for Real-Time Communication: The Interplay of E2E and AQM Algorithms=

Real-time media communication requires not only congestion control, but also the minimization of queuing delays to provide interactivity. In this work we consider the case of real-time communication between web browsers (WebRTC) and we focus on the interplay of an end-to-end delay-based congestion control algorithm, i.e. the Google Congestion Control (GCC), with two delay-based AQM algorithms, namely CoDel and PIE, and two flow queuing schedulers, i.e. SFQ and FQ_CoDel. Experimental investigations show that, when only GCC flows are considered, the end-to-end algorithm is able to contain queuing delays without AQMs. Moreover, the interplay of GCC flows with PIE or CoDel leads to higher packet losses with respect to the case of a DropTail queue. In the presence of concurrent TCP traffic, PIE and CoDel reduce the queuing delays with respect to DropTail at the cost of increased packet losses. In this scenario, flow queuing schedulers offer a better solution.

==Experimental settings employed to evaluate the Google Congestion Control for WebRTC==

This web page provides the guidelines and scripts required to reproduce the experiments that evaluate the Google Congestion Control (GCC) for WebRTC.

The guidelines can be used to reproduce the results obtained in the papers published in the context of the ''Google Faculty Research Award'' 2014, reported [http://c3lab.poliba.it/index.php?title=GoogleFacultyAward here].

== Experimental settings and scripts ==

Figure 1 shows an example of the experimental testbed topology. Two nodes, connected through an Ethernet cable, run instances of the Chromium browser, in which GCC is deployed, to generate WebRTC traffic, and IPerf-like applications to generate concurrent TCP traffic. The experiments are orchestrated by the Controller, which sends ''ssh'' commands to automatically start the WebRTC calls, start the TCP traffic, and set the bottleneck parameters (a minimal sketch is given below Figure 1).

[[Immagine:WebRTCTestbed.png|center|400px|'''Experimental Testbed''']]

<center>'''Figure 1: Experimental Testbed Example'''</center>

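The orchestration performed by the Controller can be summarized by a few ''ssh'' invocations. The following is only a minimal sketch: the host names, user names, and script names (set_bottleneck.sh, start_call.sh) are hypothetical placeholders for the scripts referenced in the sections below.

<pre>
# Minimal orchestration sketch run on the Controller.
# Host names, user names, and script names are hypothetical placeholders.

# 1. Configure the bottleneck on Node 1 (capacity, one-way delay, queue discipline).
ssh user@node1 "sudo ./set_bottleneck.sh 1mbit 50ms codel"

# 2. Start the WebRTC call between the two Chromium instances.
ssh user@node1 "DISPLAY=:0 ./start_call.sh caller &"
ssh user@node2 "DISPLAY=:0 ./start_call.sh callee &"

# 3. Start the concurrent TCP traffic (see the TCP Settings section).
ssh user@node2 "iperf -s > /dev/null &"
ssh user@node1 "iperf -c node2 -t 300 > tcp_flow.log &"
</pre>
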
== WAN Link Emulation ==

The following repository contains the scripts used to emulate the WAN bottleneck link, which in Figure 1 is shown on Node 1 as the traffic shaper:
* [https://github.com/GaetanoCarlucci/Wan-Emulation-TC-and-Netem Wan emulation TC/NetEm]

The bottleneck link queue can be governed by a DropTail queue, by AQM algorithms, or by flow queuing schedulers. These scripts employ the iproute2 package and the NetEm Linux module. With tc (traffic control) it is possible to set the queuing discipline, limit the link capacity, and much more. The NetEm module can be employed to set the propagation delay.
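As a reference, a minimal bottleneck configuration is sketched below. The interface names (eth1, eth2) and the numeric values (1 Mbit/s capacity, 50 ms delay) are only example assumptions; the linked repository contains the complete scripts.

<pre>
# Minimal sketch: 1 Mbit/s bottleneck with a CoDel queue (example values).
# Interface names are placeholders; see the linked repository for the full scripts.

# Limit the link capacity with HTB and attach the queue discipline under test.
tc qdisc add dev eth1 root handle 1: htb default 10
tc class add dev eth1 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit
tc qdisc add dev eth1 parent 1:10 handle 20: codel
# Replace "codel" with pfifo, pie, sfq, or fq_codel for the other scenarios.

# Add the propagation delay with NetEm (here on the reverse-path interface).
tc qdisc add dev eth2 root netem delay 50ms
</pre>
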
== Chromium Settings ==

The Google Congestion Control algorithm is implemented in the Google Chrome browser, which is updated daily. To carry out the experimental investigation with the GCC version corresponding to the results the reader wants to reproduce, we recommend downloading and compiling the open-source Chromium browser:
* [https://www.chromium.org/developers/how-tos/get-the-code Download and Compile Chromium]
The reader can check out the release version of interest, since releases are tagged in the git repository (a minimal sketch is given below).

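The sketch below assumes that depot_tools is installed and on the PATH; the tag name is only an example and should be replaced with the release matching the results to reproduce.

<pre>
# Minimal sketch (assumes depot_tools is installed and on the PATH).
fetch --nohooks chromium
cd src

# Check out the release tag matching the results to reproduce
# (the tag below is only an example).
git checkout -b gcc_experiments tags/50.0.2661.94
gclient sync --with_branch_heads --with_tags

# Then build the browser following the linked "Download and Compile" guide.
</pre>
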
== Google Congestion Control Settings ==
The implementation of GCC can be found in two main directories of the git repository:

: 1. [https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webrtc/modules/remote_bitrate_estimator/ Remote Bitrate Estimator]
:: This directory contains the delay-based controller implementation in the form of three main classes:
:: 1. Over-Use Estimator (overuse_estimator.cc), which estimates the one-way delay variation between the incoming packets with a Kalman filter;
:: 2. Over-Use Detector (overuse_detector.cc), which compares the one-way delay variation with a threshold and generates a signal that reports the status of the network (over-used, normal, under-used);
:: 3. Rate Control (aimd_rate_control.cc), which uses this signal to drive a finite state machine that computes the sending bitrate.
: 2. [https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webrtc/modules/bitrate_controller/ Bitrate Controller]
:: This directory contains the loss-based controller implementation:
:: 1. Bandwidth Estimation (send_side_bandwidth_estimation.cc), which computes the sending bitrate based on the fraction of lost packets reported in the RTCP reports.

Relevant metrics, such as the sending bitrate, the RTT, and the packet losses, can be measured with different approaches. Here we cite some:
# Using the traces from the Chromium source code after re-compilation. For example, the video bitrate can be extracted [https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webrtc/modules/rtp_rtcp/source/rtp_sender.cc&l=225 here].
# Using the JavaScript getStats() API. An example is provided [https://github.com/muaz-khan/getStats here]. This approach does not require re-compilation.
# Using Linux tools, for example ''tcpdump'' to measure the sending bitrate and ''ping'' to measure the RTT (a minimal sketch is given after this list). [https://webrtc.org/testing/wireshark/#capturing-rtp-streams This guideline] explains how to use ''wireshark'' to capture and analyze the RTP stream.

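For the third approach, a minimal sketch is reported below; the interface name (eth1) and the peer address (10.0.0.2) are placeholders that depend on the testbed.

<pre>
# Minimal sketch (eth1 and 10.0.0.2 are placeholders for the outgoing
# interface and the remote peer address).

# Capture the outgoing RTP/UDP packets for offline bitrate analysis.
tcpdump -i eth1 -w webrtc_call.pcap udp &

# Sample the RTT towards the remote peer once per second.
ping -i 1 10.0.0.2 > rtt.log &
</pre>
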
== Web Server Settings ==

In order to establish WebRTC calls between the Chromium browsers running on the nodes of Figure 1, a Web server is required. In particular, the Web server provides the HTML pages that handle the signaling between the peers using the WebRTC JavaScript API.

For this purpose we have used the [https://appr.tc video chat demo app], which can be installed locally by following this guide:
* https://github.com/webrtc/apprtc

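As a reference, a minimal sketch of the local installation is reported below; it only fetches the sources, while the build and run steps depend on the AppRTC version and are described in its README.

<pre>
# Minimal sketch: fetch the AppRTC sources for a local installation.
git clone https://github.com/webrtc/apprtc.git
cd apprtc
# Follow the repository README to build and run the server locally,
# then open the served room URL in both Chromium instances.
</pre>
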
== Video Settings ==

The same video sequence is used in all experiments to enforce reproducibility. We have used the “Four People” YUV test sequence, which can be found here:
* https://people.xiph.org/~thdavies/x264_streams/FourPeople_1280x720_30/

In order to feed the Chromium browser with this video sequence, different approaches can be used:
# The Linux kernel module [https://github.com/umlaeute/v4l2loopback v4l2loopback] can be used to create a virtual webcam device which cyclically repeats the video test sequence.
# The video test sequence can be fed directly to Chromium using the flag --use-file-for-fake-video-capture (see the sketch after this list). Other useful flags for testing are available [https://webrtc.org/testing/ here].
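For the second approach, a minimal sketch of the Chromium invocation is reported below; it assumes that the test sequence has been converted to the .y4m format expected by the flag, and the binary and file paths are placeholders.

<pre>
# Minimal sketch (binary path and .y4m file path are placeholders).
./out/Default/chrome \
  --use-fake-device-for-media-capture \
  --use-file-for-fake-video-capture=/path/to/FourPeople_1280x720_30.y4m
</pre>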
 
Chromium encodes the raw video source with the VP8/VP9 video encoder; the H.264/H.265 implementation is not ready yet. The video encoder limits the dynamics of the sending bitrate to the range [50, 2000] kbps.

== TCP Settings ==

The TCP sources employ the CUBIC congestion control, which is the default in Linux kernels. To enable CUBIC, run the following command as root:

<nowiki># echo cubic > /proc/sys/net/ipv4/tcp_congestion_control</nowiki>

To generate the TCP flows, an IPerf-like application can be used (a minimal usage sketch follows the link):
* https://iperf.fr/
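A minimal usage sketch is reported below; the peer address and the flow duration are placeholder values.

<pre>
# Minimal sketch: one long-lived TCP flow through the bottleneck.

# On the receiving node:
iperf -s

# On the sending node (10.0.0.2 and the 300 s duration are placeholders):
iperf -c 10.0.0.2 -t 300
</pre>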
