Monday, September 15, 2008

Random Early Detection Gateways for Congestion Avoidance

This paper presented the Random Early Detection (RED) gateway algorithm for avoiding congestion in a packet-switched network. The basic idea is that, to avoid congestion at the gateway level, the gateway maintains an average queue size in order to provide steady throughput. To keep the average queue size in check, packets are dropped randomly with a probability that grows with the average queue size, which reflects the congestion level of the network. If the average queue size passes a maximum threshold, all incoming packets are dropped.

Of course this technique relies on the end-to-end protocol to guarantee delivery, because the gateway never contacts the sender directly to tell it to shrink its sending window. Instead it uses a passive method: it simply drops the packet and waits for the sender to realize that a packet has been lost and conclude that it needs to slow down. The paper provided detailed algorithms for each of the decisions that need to be made, including how to compute the probability of dropping a packet. It also set minimum and maximum thresholds on the average queue size: if the average is below the minimum threshold, the network is underutilized, so no packets are dropped or marked; if it reaches the maximum threshold, all incoming packets are dropped.
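The decision logic described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pseudocode; the parameter values (thresholds, maximum probability, averaging weight) are made up for the example, and the linear drop-probability ramp between the thresholds matches the paper's general approach.

```python
import random

class RedGateway:
    """Sketch of a RED gateway's per-packet drop decision (illustrative values)."""

    def __init__(self, min_th=5, max_th=15, max_p=0.02, weight=0.002):
        self.min_th = min_th    # below this average queue size: never drop
        self.max_th = max_th    # at/above this average: drop every arrival
        self.max_p = max_p      # drop probability just below max_th
        self.weight = weight    # weight for the moving average of queue size
        self.avg = 0.0          # smoothed (average) queue size

    def drop_probability(self):
        """Drop probability as a function of the smoothed queue size."""
        if self.avg < self.min_th:
            return 0.0          # network underutilized: no drops or marks
        if self.avg >= self.max_th:
            return 1.0          # past the max threshold: drop everything
        # Between the thresholds, probability ramps up linearly.
        return self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)

    def on_arrival(self, queue_len):
        """Update the average on each arrival and decide whether to drop."""
        # Exponentially weighted moving average smooths out transient bursts,
        # so short-lived spikes don't trigger drops.
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        return random.random() < self.drop_probability()
```

The moving average is the key design choice: the gateway reacts to sustained congestion rather than momentary bursts, which is what lets it keep throughput steady instead of oscillating.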

It was also interesting how they defined fairness in this paper. Instead of defining fairness as giving all nodes the same amount of bandwidth, fairness here means that every node's packets face the same drop probability: the gateway does not discriminate against certain nodes when deciding whose packets to drop. This is different from the papers we read before, and it certainly does not guarantee that all nodes get the same bandwidth; the mechanism is used purely for congestion control.
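A back-of-the-envelope illustration of this notion of fairness: if every arriving packet is dropped with the same probability, then a source's expected losses are simply proportional to how much it sends. The flow names and numbers below are made up for the example.

```python
# Uniform per-packet drop probability (hypothetical value).
p = 0.01

# Packets sent by each flow during some interval (hypothetical numbers).
flows = {"A": 1000, "B": 100}

# Expected drops are proportional to each flow's share of the traffic:
# the heavy sender absorbs proportionally more of the drops.
expected_drops = {name: sent * p for name, sent in flows.items()}
```

So a flow sending 10x the packets expects 10x the drops, even though no individual packet is treated differently, which is exactly the "fair drop probability" reading rather than a "fair bandwidth share" one.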

I can see that, working together with TCP or any higher-level protocol with reliable delivery, this would control congestion, because end nodes can detect that a packet was dropped and then shrink their windows. Intuitively, though, randomly dropping packets does not seem like the most efficient approach, and it really relies on the transport protocol providing reliable delivery. The alternative, however, is to have the gateways notify the nodes by actually sending messages to them, which would only create more traffic and use up more bandwidth. There is always a tradeoff there.

1 comment:

Randy H. Katz said...

Previous definitions of fairness had to do with fair bandwidth share. This paper is about fairness of who gets dropped.