Monday, September 8, 2008

Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks

This paper was published after the previous paper on congestion control, and it evaluates different techniques for congestion avoidance. Its conclusion, of course, is that additive increase with multiplicative decrease is the best solution. The candidates evaluated are essentially the four combinations of additive or multiplicative increase paired with additive or multiplicative decrease. The paper proposes several metrics for the comparison, such as efficiency, fairness, convergence time, and size of oscillation.
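To make the four combinations concrete, here is a minimal sketch of the linear control rules in the spirit of the paper's x(t+1) = a + b·x(t) form. The specific parameter values are illustrative, not taken from the paper.

```python
# The four linear control rules; parameter values are made up for illustration.

def additive_increase(x, a=1.0):
    return x + a            # x(t+1) = x(t) + a_I, with a_I > 0

def multiplicative_increase(x, b=1.5):
    return x * b            # x(t+1) = b_I * x(t), with b_I > 1

def additive_decrease(x, a=1.0):
    return max(x - a, 0.0)  # x(t+1) = x(t) - a_D, with a_D > 0

def multiplicative_decrease(x, b=0.5):
    return x * b            # x(t+1) = b_D * x(t), with 0 < b_D < 1

# The policies under study pair one increase rule with one decrease rule:
# AIAD, AIMD, MIAD, and MIMD.
```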

The paper defines efficiency as the closeness of the total load on the resource to its knee: basically maximizing bandwidth use without going over. Fairness is between the users of the network and the bandwidth each is allowed, not letting any one user dominate. Distributedness is an interesting requirement, because it rules out centralized control. So instead of a central unit telling a particular user to back off, the paper assumes binary feedback, where all users get the same one-bit signal: congested or not congested. And finally, convergence is the speed with which the network reaches its equilibrium state.
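The fairness measure the paper uses is Jain's fairness index, which maps an allocation vector to a value between 1/n and 1. A small sketch (the sample allocations are made up):

```python
def fairness_index(xs):
    """Jain's fairness index: F(x) = (sum x_i)^2 / (n * sum x_i^2).
    Equals 1 when every user gets the same allocation and falls
    toward 1/n as a single user dominates."""
    n = len(xs)
    return sum(xs) ** 2 / (n * sum(x * x for x in xs))

print(fairness_index([25, 25, 25, 25]))  # 1.0   -> perfectly fair
print(fairness_index([97, 1, 1, 1]))     # ~0.27 -> one user dominates
```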

It's interesting to read about the way the different combinations are proved out against these metrics. I think a good way to think about it is this: with additive increase/additive decrease (or multiplicative for both), the network tries its best to relieve congestion by having users decrease load, but users then add load back as fast as they decreased it, so congestion recurs very quickly. Obviously multiplicative increase with additive decrease will do no good at all. Only by decreasing at a faster rate than you increase can you control the network: multiplicative decrease shrinks the gap between users proportionally while additive increase grows everyone by the same amount, so allocations drift toward the fair share and bandwidth ramps back up slowly, lengthening the time until the network is congested again.
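To see that convergence argument play out, here is a toy two-user simulation under the binary-feedback model described above: every step, both users apply the same rule depending on a shared one-bit congestion signal. The capacity, parameters, and starting loads are all assumptions for illustration, not values from the paper.

```python
# Toy two-user simulation with binary feedback: each step, the same
# congestion bit goes to every user, and all apply the same rule.
CAPACITY = 100.0

def simulate(increase, decrease, x=(10.0, 60.0), steps=200):
    x = list(x)
    for _ in range(steps):
        congested = sum(x) > CAPACITY             # same bit for all users
        rule = decrease if congested else increase
        x = [rule(xi) for xi in x]
    return x

# AIMD: the gap between users halves at each congestion event, while
# additive increase keeps it constant, so allocations converge toward
# equal shares, oscillating around the fair split.
aimd = simulate(lambda x: x + 1.0, lambda x: x * 0.5)

# MIAD: multiplicative increase widens the gap and additive decrease
# never closes it, so the smaller user is eventually starved out.
miad = simulate(lambda x: x * 1.5, lambda x: max(x - 1.0, 0.0))

print("AIMD:", [round(v, 1) for v in aimd])
print("MIAD:", [round(v, 1) for v in miad])
```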
