:: 1. Over Use Estimator: (overuse_estimator.cc) which estimates the one-way delay variation between the incoming packets with a Kalman filter;
:: 2. Over Use Detector: (overuse_detector.cc) which compares the one-way delay variation with a threshold and generates a signal that reports the status of the network (over-used, normal, under-used);
:: 3. Rate Control: (aimd_rate_control.cc) which uses this signal to drive a finite state machine that computes the sending bitrate.
: 2. [https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webrtc/modules/bitrate_controller/ Bitrate Controller]
:: This directory contains the loss-based controller implementation:
This web page provides the guideline and scripts required to reproduce the experiments to evaluate the Google Congestion Control (GCC) for WebRTC.
The guideline can be used to reproduce the results obtained in the papers published in the context of the Google Faculty Research Award 2014 reported here.
Figure 1 shows an example of the experimental testbed topology. Two nodes, connected through an Ethernet cable, run instances of the Chromium browser, where GCC is deployed, to generate WebRTC traffic, and iPerf-like applications to generate concurrent TCP traffic. The experiments are orchestrated by the Controller, which sends ssh commands to automatically start the WebRTC calls, start the TCP traffic, and set the bottleneck parameters.
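The orchestration described above can be sketched as follows. This is only a hypothetical illustration: hostnames, users, and script names are placeholders, not the actual scripts of this testbed.

```shell
# Hypothetical orchestration from the Controller (all names are placeholders)
ssh user@node1 'sudo ./set_bottleneck.sh'    # configure the traffic shaper on Node 1
ssh user@node1 './start_webrtc_call.sh' &    # start the WebRTC call
ssh user@node2 './start_tcp_traffic.sh' &    # start the concurrent TCP traffic
wait                                         # wait for the experiment to finish
```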
This repository contains the scripts used to emulate a WAN bottleneck link, which in Figure 1 is shown on Node 1 as Traffic shaper:
The bottleneck link queue can be governed by a Drop Tail queue, AQM algorithms, or flow-queuing schedulers. These scripts employ the iproute2 package and the NetEm Linux module. With tc (“traffic control”) it is possible to set the queuing discipline, limit the link capacity, and much more. The NetEm Linux module can be employed to set the propagation delay.
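As a minimal sketch of such a bottleneck configuration: the interface name, capacity, and delay below are assumptions for illustration, not the values used in the papers.

```shell
# Sketch of a WAN bottleneck with tc/NetEm (eth0, 1 Mbit/s, 50 ms are assumptions)
DEV=eth0
# add 50 ms of propagation delay with NetEm as the root qdisc
sudo tc qdisc add dev $DEV root handle 1:0 netem delay 50ms
# cap the link capacity at 1 Mbit/s with a token bucket filter as a child qdisc
sudo tc qdisc add dev $DEV parent 1:1 handle 10: tbf rate 1mbit burst 32kbit latency 400ms
```

The shaper can be removed with `sudo tc qdisc del dev $DEV root`; other queuing disciplines (e.g. an AQM such as CoDel) can be attached in place of the token bucket filter.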
The Google Congestion Control algorithm is implemented in the Google Chrome browser, which is updated daily. To carry out the experimental investigation with the version of GCC matching the results the reader wants to reproduce, we recommend downloading and compiling the open source Chromium browser:
The reader can work with the appropriate release version, since releases are tagged in the git repository.
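Checking out a tagged release can be done as follows; the tag name below is only a placeholder, to be replaced with the version matching the results to reproduce.

```shell
# Switch the Chromium checkout to a tagged release (tag name is a placeholder)
git fetch --tags           # make sure all release tags are available locally
git tag -l                 # list the available release tags
git checkout <release-tag> # e.g. a tag listed by the previous command
```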
The implementation of the GCC can be found in two main directories of the git repository:
Relevant metrics, such as sending bitrate, RTT, and packet losses, can be measured by employing different approaches. Here we cite some:
In order to establish WebRTC calls between the Chromium browsers running on the nodes of Figure 1, a Web server is required. In particular, the Web server provides the HTML pages that handle the signaling between the peers using the WebRTC JavaScript API.
To this purpose we have used the video chat demo app, which can be installed locally by following this guide:
The same video sequence is used in all experiments to enforce reproducibility. We have used the “Four People” YUV test sequence, which can be found here:
Different approaches can be used to feed the Chromium browser with this video sequence.
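One possible approach uses Chromium's fake capture device flags, which replace the webcam with a video file; note that these flags expect a Y4M file, so the raw YUV sequence may need to be wrapped in a Y4M container first. The binary path and file name below are assumptions.

```shell
# Feed Chromium with a file instead of a real webcam (paths are assumptions)
./chrome --use-fake-device-for-media-stream \
         --use-file-for-fake-video-capture=/path/to/FourPeople.y4m
```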
Chromium encodes the raw video source with the VP8/VP9 video encoder. The implementation of H.264/H.265 is not ready yet. The video encoder constrains the sending bitrate to the range [50, 2000] kbps.
The TCP sources employ the CUBIC congestion control algorithm, the default in Linux kernels. To enable CUBIC, run:
# echo cubic > /proc/sys/net/ipv4/tcp_congestion_control
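After setting it, the active and available congestion control algorithms can be checked (a quick verification step, not part of the original guide):

```shell
# Show the congestion control currently in use and the ones available
cat /proc/sys/net/ipv4/tcp_congestion_control
cat /proc/sys/net/ipv4/tcp_available_congestion_control
```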
To run the TCP flows, an iPerf-like application can be used:
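For example, with the standard iperf tool (the address, port, and duration below are assumptions, not the parameters of the papers):

```shell
# On the receiving node: start an iperf TCP server (port is an assumption)
iperf -s -p 5001

# On the sending node: one TCP flow towards the receiver for 120 seconds
iperf -c 192.168.0.2 -p 5001 -t 120
```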