Monday, September 1, 2008

End-to-End Arguments in System Design

This paper summarizes one of the key concepts and tradeoffs in designing the Internet, or any network built from multiple interacting layers. The argument, at its core, is about how much functionality should be duplicated across layers for a guarantee to hold robustly end to end. Most of the paper's examples concern robustness and reliability, but the argument really applies to any functionality implemented in the network.

The paper gives some detailed examples, starting with careful file transfer. It lists the failures that can occur during a simple file transfer and some methods to combat them. For example, if the application itself does the error checking, and resends the file whenever the transfer is corrupted, then the lower-level implementations don't need to worry about error checking at all. However, if errors occur frequently, this results in a huge performance loss and possibly network congestion, because the whole file must be resent over and over. Conversely, even if we implement checking at a lower level, the transfer can still fail end to end, so a lower-level implementation does not remove the need for the end-to-end check at a higher level. It is only a matter of how much to do at each level.
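The application-level check described above can be sketched in a few lines. This is a minimal illustration, not the paper's protocol: `send` and `receive` are hypothetical transport callables standing in for the (possibly unreliable) lower layers, and the sender's checksum is simply re-verified after reassembly, retrying the whole transfer on mismatch.

```python
import hashlib

def transfer_with_end_to_end_check(send, receive, data, max_retries=5):
    """End-to-end verification sketch: compute a checksum over the whole
    file at the source and re-check it at the destination, resending the
    entire file on any mismatch. `send`/`receive` are hypothetical
    transport hooks; the network between them may corrupt the data."""
    expected = hashlib.sha256(data).hexdigest()
    for _ in range(max_retries):
        send(data)            # push the file through the unreliable network
        received = receive()  # reassemble the file on the far side
        if hashlib.sha256(received).hexdigest() == expected:
            return received   # end-to-end check passed
    raise IOError("transfer failed after %d attempts" % max_retries)
```

Note that the retry unit here is the entire file, which is exactly the performance cost the paper points out when errors are frequent; lower-level checks reduce how often this loop fires, but cannot replace the final end-to-end comparison.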

The paper then walks through different scenarios and application requirements to show that there is no single correct answer; everything is a tradeoff. In particular, even the same type of application with slightly different requirements can lead to a different design. A real-time voice application (a phone call) will probably care less about dropping one or two packets and more about latency, since the users can easily ask each other to repeat the information. A voicemail system, by contrast, cares more about the integrity and quality of the content than about latency, since there is no one to ask to repeat it.

The examples used in the paper were very clear and really conveyed the ideas behind them. Rather than giving an absolute verdict on the end-to-end argument itself, the paper serves more as a guideline for designing a communication system: identify the application's needs, then choose among the tradeoffs.

I actually read this paper before the one in the previous blog post, and I thought that was the perfect order to read the two papers of the week. This paper is a precursor to the design difficulties faced in designing the Internet, especially the choice of the datagram as the building block rather than a more reliable one. Ultimately the goal of the Internet was to support a wide range of networks, and because of that, reliable error checking and retransmission mechanisms might not be needed by every service and application the Internet supports. As the argument states, implementing these functionalities in the lower layers does not remove the need to implement them at the higher layers, so there was no reason to require a reliable building block when a datagram suffices. This was clearly one of the considerations taken into account when the Internet was designed.
