Researchers discover major barrier to reducing network congestion

Congestion can occur when users want to send data over the Internet faster than the network can handle it – much like traffic congestion snarls the morning commute into a large city.

Computers and devices that transmit data over the Internet break the data into smaller packets and use a congestion control algorithm to decide how fast to send those packets. These algorithms try to fully discover and utilize available network capacity while sharing it fairly with other users on the same network, and to minimize the delay caused by packets waiting in queues in the network.
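
As a rough illustration of the loop such algorithms run, the sketch below shows a toy delay-based controller in Python. It is not BBR or any algorithm from the study; the rate unit, the delay threshold, and the additive-increase/multiplicative-decrease rule are simplifications chosen purely for illustration.

```python
# A toy, illustrative delay-based congestion controller -- not an algorithm from
# the paper or from BBR. It raises its sending rate while the measured round-trip
# delay stays near a baseline, and backs off when delay grows.

class ToyDelayController:
    def __init__(self, base_rtt_ms, init_rate_pkts=10.0):
        self.base_rtt_ms = base_rtt_ms   # assumed known propagation delay (hypothetical)
        self.rate = init_rate_pkts       # packets per RTT (hypothetical unit)

    def on_rtt_sample(self, rtt_ms, delay_threshold_ms=5.0):
        """Adjust the rate from one RTT measurement."""
        queueing_delay = rtt_ms - self.base_rtt_ms
        if queueing_delay <= delay_threshold_ms:
            self.rate += 1.0                        # little queueing observed: probe for more bandwidth
        else:
            self.rate = max(1.0, self.rate * 0.8)   # delay rising: back off to drain the queue
        return self.rate


if __name__ == "__main__":
    ctl = ToyDelayController(base_rtt_ms=40.0)
    for rtt in [41, 42, 43, 60, 75, 44, 42]:        # made-up RTT samples in milliseconds
        print(f"rtt={rtt:>3} ms -> rate={ctl.on_rtt_sample(rtt):.1f} pkts/RTT")
```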

Over the past decade, researchers in industry and academia have developed several algorithms that attempt to achieve high rates while keeping delays low. Some of these, such as the BBR algorithm developed by Google, are now widely used by many websites and applications.

But a team of MIT researchers has found that these algorithms can be deeply unfair. In a new study, they show that there will always be a network scenario where at least one sender receives almost no bandwidth compared to other senders; that is, a problem known as "starvation" cannot be avoided.

“What’s really surprising about this paper and the results is that when you take into account the real-world complexity of network paths and all the things they can do to data packets, it’s basically impossible for delay-controlling congestion control algorithms to avoid starvation using current methods,” says Mohammad Alizadeh, associate professor of electrical engineering and computer science (EECS).

While Alizadeh and his co-authors were unable to find an existing congestion control algorithm that could avoid starvation, there may be algorithms in a different class that can circumvent this problem. Their analysis also suggests that changing the way these algorithms work, so that they allow larger variations in delay, could help prevent starvation in some network situations.

Alizadeh co-authored the paper with first author and EECS graduate student Venkat Arun and senior author Hari Balakrishnan, Fujitsu Professor of Computer Science and Artificial Intelligence. The research will be presented at the ACM Special Interest Group on Data Communications (SIGCOMM) conference.

Controlling congestion

Congestion control is a fundamental problem in networking that researchers have been trying to tackle since the 1980s.

A user’s computer does not know how fast to send data packets over the network because it lacks information, such as the quality of the network connection or how many other senders are using the network. Sending packets too slowly makes poor use of the available bandwidth, but sending them too quickly can overwhelm the network, causing packets to be dropped. These packets must be resent, which leads to longer delays. Delays can also be caused by packets waiting in queues for a long time.

Congestion control algorithms use packet loss and delay as signals to infer congestion and decide how fast to send data. But the Internet is complex, and packets can be delayed and lost for reasons unrelated to network congestion. For example, data could be held in a queue along the way and then released in a burst with other packets, or the receiver’s acknowledgment could be delayed. The authors call delays that are not caused by congestion “jitter.”
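
To make that concrete, the small calculation below (my own illustration, not the paper's model) shows why the ambiguity exists on the sender's side: congestive queueing delay and jitter simply add up into the one number the sender can measure.

```python
# Illustration of why jitter confuses a delay-based sender: the RTT it measures is
# the sum of propagation delay, congestive queueing delay, and non-congestive
# "jitter" -- and only the total is visible. All values here are made up.

import random

BASE_RTT_MS = 40.0  # hypothetical propagation delay

def observed_rtt(queueing_delay_ms, jitter_ms):
    """What the sender actually measures: one number, with the causes mixed together."""
    return BASE_RTT_MS + queueing_delay_ms + jitter_ms

random.seed(1)
for _ in range(5):
    congestion = random.uniform(0, 10)   # queueing delay caused by real congestion
    jitter = random.uniform(0, 10)       # delay from bursts, delayed ACKs, etc.
    rtt = observed_rtt(congestion, jitter)
    # The sender can only infer "extra delay"; it cannot split it into its causes.
    inferred_extra = rtt - BASE_RTT_MS
    print(f"true congestion={congestion:4.1f} ms, jitter={jitter:4.1f} ms "
          f"-> sender sees extra delay={inferred_extra:4.1f} ms")
```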

Even if a congestion control algorithm measures delay perfectly, it cannot tell the difference between delay caused by congestion and delay caused by jitter. The delay caused by jitter is unpredictable and confuses the sender. Because of this ambiguity, senders begin estimating delay differently, which causes them to send packets at unequal rates. Eventually, this leads to a situation where starvation occurs and someone gets shut out completely, Arun explains.

“We started the project because we lacked a theoretical understanding of congestion control behavior in the presence of jitter. To put it on a firmer theoretical footing, we built a mathematical model that was simple enough to think about, yet able to capture some of the complexities of the Internet. It has been very rewarding to have the math tell us things we didn’t know and that have practical relevance,” he says.

Studying starvation

The researchers fed their mathematical model to a computer, gave it a series of commonly used congestion control algorithms, and asked the computer to find an algorithm that could avoid starvation under their model.
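
The sketch below gives a flavor of that kind of automated check; it is a crude, made-up fluid model, not the researchers' actual model or tooling. Two copies of a simple delay-based controller share a link, one of them also sees non-congestive jitter in its delay samples, and the script tests whether that sender's share of the bandwidth collapses.

```python
# Hypothetical illustration of a starvation check. Sender B sees extra
# non-congestive jitter in its delay samples, so it keeps backing off while
# sender A takes the link. Model, thresholds, and constants are all made up.

def run_two_senders(jitter_ms, rounds=200, capacity=100.0):
    """Very crude fluid model: queueing delay grows with total rate above capacity."""
    rates = [10.0, 10.0]                                # packets per RTT for senders A and B
    for _ in range(rounds):
        total = sum(rates)
        queueing = max(0.0, (total - capacity) * 0.5)   # made-up congestive delay
        for i in range(2):
            extra = queueing + (jitter_ms if i == 1 else 0.0)   # sender B also sees jitter
            if extra <= 5.0:
                rates[i] += 1.0                          # additive increase
            else:
                rates[i] = max(0.1, rates[i] * 0.8)      # multiplicative decrease
    return rates

for jitter in [0.0, 8.0]:
    a, b = run_two_senders(jitter)
    share_b = b / (a + b)
    starved = share_b < 0.05                             # arbitrary starvation threshold
    print(f"jitter={jitter:3.1f} ms -> shares: A={1 - share_b:.2f}, B={share_b:.2f}"
          f"{'  <- starvation' if starved else ''}")
```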

“We couldn’t do it. We tried every algorithm that we are aware of, and some new ones we made up. Nothing worked. The computer always found a situation where some people get all the bandwidth and at least one person gets basically nothing,” Arun says.

The researchers were surprised by this result, especially since these algorithms are widely believed to be reasonably fair. They began to suspect that it may not be possible to avoid starvation, an extreme form of unfairness. This led them to define a class of algorithms they call “delay-convergent algorithms,” which they proved will always suffer from starvation under their network model. All existing congestion control algorithms that control delay (that the researchers are aware of) are delay-convergent.

The fact that such simple failure modes of these widely used algorithms remained unknown for so long shows how difficult it is to understand the algorithms through empirical testing alone, Arun says. This underscores the importance of a solid theoretical foundation.

But all hope is not lost. While every algorithm they tested failed, there may be algorithms outside the delay-convergent class that are able to avoid starvation. Their analysis suggests one way to fix the problem: designing congestion control algorithms that vary the delay range more widely, so the range is larger than any delay that could be caused by jitter in the network.

“To control delays, algorithms have also tried to constrain the variations in delay about a desired equilibrium, but there is nothing wrong with potentially creating greater delay variation to get better measurements of congestive delays. It is just a new design philosophy you have to adopt,” says Balakrishnan.
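
The sketch below is one speculative reading of that idea, not the authors' design: a controller that tolerates delay swings up to an assumed jitter bound and only backs off when delay moves well beyond what jitter alone could explain. The jitter bound, the 2x margin, and the sample values are all assumptions made for illustration.

```python
# Speculative sketch of the design idea above (my illustration, not the authors'
# proposal): let queueing delay swing over a range wider than any plausible jitter,
# so that delay excursions beyond that range can only mean real congestion.

JITTER_BOUND_MS = 10.0   # assumed worst-case non-congestive delay
BASE_RTT_MS = 40.0       # assumed propagation delay

class WideSwingController:
    def __init__(self, init_rate=10.0):
        self.rate = init_rate

    def on_rtt_sample(self, rtt_ms):
        extra = rtt_ms - BASE_RTT_MS
        if extra > 2 * JITTER_BOUND_MS:
            # Delay is well outside anything jitter could explain: treat as congestion.
            self.rate = max(1.0, self.rate * 0.7)
        else:
            # Within the tolerated swing: keep probing instead of reacting to noise.
            self.rate += 1.0
        return self.rate

if __name__ == "__main__":
    ctl = WideSwingController()
    for rtt in [45, 48, 55, 62, 70, 50, 46]:   # made-up RTT samples in milliseconds
        print(f"rtt={rtt} ms -> rate={ctl.on_rtt_sample(rtt):.1f}")
```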

Now, the researchers want to keep pushing to see if they can find or create an algorithm that will eliminate starvation. They also want to apply this approach of mathematical modeling and computational proofs to other thorny, unsolved problems in network systems.

“We are increasingly dependent on computer systems for very important things, and we need to put their reliability on a strong conceptual basis. We have shown the surprising things you can discover when you take the time to come up with formal specifications of these problems,” Alizadeh says.
