Low Delay Protocols

Delay-sensitive applications require not only congestion control but also minimization of queuing delays to preserve interactivity.

Controlling Queuing Delays for Real-Time Communication
  • G. Carlucci, L. De Cicco, and S. Mascolo
    Controlling Queuing Delays for Real-Time Communication: Interplay of E2E and AQM Algorithms
    ACM SIGCOMM Computer Communication Review, July 2016


This paper considers the case of real-time communication between web browsers (WebRTC) and focuses on the interplay of an end-to-end delay-based congestion control algorithm with delay-based AQM algorithms, namely CoDel and PIE, and with flow queuing schedulers, namely SFQ and FQ-CoDel.

Motivation

For an increasingly important class of Internet applications – such as video conferencing and personalized live streaming – high delay, rather than limited bandwidth, is the main obstacle to improved performance. A common problem that impacts this class of applications is “bufferbloat”, where excess buffering in the network causes high latency and jitter. Solutions to the persistently full buffer problem, namely active queue management (AQM) schemes such as the original RED, have been known for two decades. Yet, while RED is simple and effective at reducing persistent queues, it is not widely or consistently configured and enabled in routers, and is sometimes simply unavailable.

[Figure: Controlling queuing delays]


A recent focus on bufferbloat has brought a number of new AQM proposals, including PIE and CoDel, which explicitly control the queuing delay and have no knobs for operators, users, or implementers to adjust. This paper considers the interplay between these AQM algorithms and the new end-to-end delay-based congestion control algorithm, Google Congestion Control (GCC), which is part of the WebRTC framework.
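To make the idea of explicit delay control concrete, the sketch below illustrates CoDel's control law in simplified form: a packet is dropped at dequeue time once its sojourn time has stayed above the target for at least one interval, and subsequent drops are spaced as interval divided by the square root of the drop count. This is an illustrative simplification in Python, not the full RFC 8289 state machine nor the Linux implementation.

from math import sqrt

class CoDelSketch:
    """Simplified sketch of CoDel's control law (not the full RFC 8289 state machine)."""

    def __init__(self, target=0.005, interval=0.100):
        self.target = target        # queuing-delay target (5 ms default)
        self.interval = interval    # observation window (100 ms default)
        self.first_above = None     # when the sojourn time first exceeded target
        self.drop_next = None       # time of the next scheduled drop
        self.count = 0              # drops in the current dropping episode

    def should_drop(self, now, sojourn):
        """Decide, at dequeue time, whether to drop the packet in hand."""
        if sojourn < self.target:
            # Delay is back under control: reset the dropping state.
            self.first_above = self.drop_next = None
            self.count = 0
            return False
        if self.first_above is None:
            self.first_above = now
            return False
        if self.drop_next is None:
            if now - self.first_above < self.interval:
                return False        # above target, but not yet for a full interval
        elif now < self.drop_next:
            return False            # wait for the next scheduled drop
        # Drop, and schedule the next drop closer in time: interval / sqrt(count).
        self.count += 1
        self.drop_next = now + self.interval / sqrt(self.count)
        return True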

Testbed

[Figure: Testbed setup]


The figure above shows an essential view of the experimental testbed employed to evaluate the interaction: a WebRTC sender and a TCP sender run on Node 1, and a WebRTC receiver and a TCP receiver run on Node 2. The bottleneck link buffer is governed by different queuing disciplines, with the goal of assessing the interaction between the end-to-end GCC algorithm and the queuing discipline. Performance is assessed in terms of metrics such as packet loss ratio, average bitrate, and queuing delay, which are known to be well correlated with QoE metrics.

Bottleneck configuration

We have employed the settings reported in the table below. In the case of CoDel, the suggested target value is 13 ms when the link capacity is 1 Mbps; otherwise the default value of 5 ms is used. Regarding PIE, we have used the default tuning parameters employed in the Linux implementation. In the case of DropTail and SFQ, we have set the queue size to 300 ms, which is the time taken to drain the queue when it is completely full (a sketch of this conversion follows the table). Throughout the experiments we consider the DropTail (DT/300) queuing discipline as the baseline for performance comparison, i.e. the case in which only the end-to-end congestion control is active.


Algorithm          Parameter   Value
DropTail (DT/300)  queue size  300 ms
PIE                tupdate     30 ms
                   target      20 ms
                   limit       1000 pkts
CoDel              interval    100 ms
                   target      13 ms at 1 Mbps, 5 ms otherwise
                   limit       1000 pkts
SFQ (DT/300)       queue size  300 ms
FQ-CoDel           interval    100 ms
                   target      13 ms
                   limit       10240 pkts
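The 300 ms queue size used for DropTail and SFQ is quoted as a drain time rather than a packet count. The sketch below shows the straightforward conversion to a packet limit, assuming hypothetical 1000-byte packets; it is an illustration of the arithmetic, not the configuration script used in the paper.

def queue_limit_packets(drain_time_s, capacity_bps, pkt_size_bytes=1000):
    """Number of packets a link of capacity_bps drains in drain_time_s seconds."""
    bytes_drained = capacity_bps / 8 * drain_time_s
    return int(bytes_drained // pkt_size_bytes)

# A 300 ms queue corresponds to roughly 37 full-size packets at 1 Mbps
# and roughly 375 packets at 10 Mbps.
print(queue_limit_packets(0.300, 1_000_000))    # -> 37
print(queue_limit_packets(0.300, 10_000_000))   # -> 375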

Main Results

We investigate how the performance of real-time video flows is impacted by the interaction of the GCC congestion control algorithm, employed at the end points, with the various queuing disciplines employed at the bottleneck queue.

Real-time Video Flow in isolation

[Figure: Real-time video flow in isolation: loss ratio and queuing delay]


This test case compares the video metrics obtained when one video flow runs in isolation over a bottleneck governed by either DT/300, PIE, or CoDel, using the settings reported in the table above. We have considered two values for the link capacity, b ∈ {1, 2} Mbps, and the round trip propagation delay has been set to 50 ms. For the packet loss ratio, average values and standard deviations are shown. Queuing delays are depicted using a box and whisker plot: the bottom and top of the box are respectively the 25th and 75th percentiles, the band in the box is the median, and the ends of the whiskers represent the 5th and 95th percentiles. The figure does not depict the channel utilization, since it is higher than 90% in every experiment.
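For readers reproducing such plots, the box and whisker positions are simply sample percentiles of the per-packet queuing delays. A minimal sketch is given below, assuming the delay samples are available as an array; it mirrors the plotting convention, not the paper's own scripts.

import numpy as np

def box_whisker_stats(queuing_delays_ms):
    """Return the 5th, 25th, 50th, 75th, and 95th percentiles of the samples."""
    return np.percentile(queuing_delays_ms, [5, 25, 50, 75, 95])

# Example with synthetic samples: 2000 delays uniformly spread between 0 and 60 ms.
print(box_whisker_stats(np.random.uniform(0, 60, size=2000)))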

Results show that if only real-time video traffic is considered, the end-to-end congestion control is able to contain the queuing delay with zero losses. On the other hand, PIE and CoDel provide roughly the same queuing delay as DropTail, but with the drawback of introducing packet losses.

Real-time Video flow versus multiple TCP flows

[Figure: A single video flow with nTCP concurrent TCP flows over a 10 Mbps bottleneck link]


This test case considers one real-time video flow competing with a number nTCP of concurrent TCP flows. We consider either AQM algorithms, namely PIE and CoDel, or flow queuing schedulers, namely SFQ and FQ-CoDel. In the figure above, each bar represents the average value and the error bar the standard deviation; bars are grouped according to the number of concurrent TCP flows. Let us now focus on the loss ratio. Under DT/300, the loss ratio is contained in the range (0, 0.5]% for any number of concurrent TCP flows; this means that GCC has been driven into its loss-based mode, in which it behaves more aggressively in order to fairly share the link with TCP. Video packet losses are higher than 1% when PIE or CoDel is used. In particular, we notice that, as the number of concurrent TCP flows increases, the losses induced by PIE, CoDel, and FQ-CoDel also increase, which is detrimental to real-time video quality.
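For reference, the loss-based mode mentioned above adapts the sending rate to the fraction of packets reported lost via RTCP receiver reports. The sketch below follows the thresholds described in the GCC design (back off when more than 10% of packets are lost, probe when fewer than 2% are lost); it is a schematic illustration, not the WebRTC source code.

def gcc_loss_based_rate(current_rate_bps, fraction_lost):
    """Update the target sending rate from the RTCP-reported loss fraction."""
    if fraction_lost > 0.10:
        # Heavy losses: reduce the rate proportionally to the loss fraction.
        return current_rate_bps * (1 - 0.5 * fraction_lost)
    if fraction_lost < 0.02:
        # Negligible losses: multiplicatively probe for more bandwidth.
        return current_rate_bps * 1.05
    return current_rate_bps  # between 2% and 10% losses: hold the rate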

[Figure: Video flow and TCP flow rates, RTT, and video flow loss fraction dynamics with 99 concurrent TCP flows]


To gain further insight into the different behavior of GCC with SFQ or with FQ-CoDel in the presence of concurrent TCP traffic, the figure above compares the dynamics of rates, RTT, and loss fraction in the two cases when nTCP = 99 and the link capacity is equal to 100 Mbps. As expected, in both cases the video flow reaches the fair share of 1 Mbps with an RTT small enough to guarantee real-time interaction. However, the GCC flow does not experience any losses in the case of SFQ, whereas FQ-CoDel provokes losses due to the CoDel algorithm.
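The flow isolation behind this result can be sketched as follows: packets are hashed on a flow identifier (e.g. the 5-tuple) into separate FIFO queues that the scheduler serves round-robin, so the video flow is never queued behind the 99 TCP flows. The code is a minimal illustration of the principle, not the Linux sfq or fq_codel implementation.

from collections import deque

class FlowQueueSketch:
    """Minimal flow-queuing scheduler: per-flow FIFOs served round-robin."""

    def __init__(self, n_queues=1024):
        self.queues = [deque() for _ in range(n_queues)]
        self.next_q = 0                      # round-robin pointer

    def enqueue(self, flow_id, packet):
        # Hash the flow identifier to pick the per-flow queue.
        self.queues[hash(flow_id) % len(self.queues)].append(packet)

    def dequeue(self):
        """Serve non-empty queues in round-robin order, one packet per turn."""
        for _ in range(len(self.queues)):
            q = self.queues[self.next_q]
            self.next_q = (self.next_q + 1) % len(self.queues)
            if q:
                return q.popleft()
        return None                          # all queues are empty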


Summary

Our analysis has shown that, if only real-time video traffic is considered, the end-to-end congestion control is able to contain the queuing delay with zero losses. On the other hand, PIE and CoDel provide roughly the same queuing delay as DropTail, but with the drawback of introducing packet losses. When concurrent TCP traffic is considered, both PIE and CoDel are able to effectively reduce the queuing delays compared to DropTail, but they provoke packet losses on the video flow that increase with the number of TCP flows.

Moreover, we show that flow queuing schedulers offer a better solution, since they provide flow isolation. The best interplay is obtained with SFQ, which achieves the best performance in terms of both queuing delay and packet losses.
