This paper also talks about the mess of middleboxes used for traffic management, especially in data centers, where massive amounts of data are sorted, stored, and looked up. Firewalls, load balancers, and the like are all wired into the network, and the complicated structure makes it hard to change policies or manage the network. Several difficulties arise in current practice, including tricky configuration, manipulating link costs, and creating separate VLANs just to force traffic through the right boxes. The current approach is neither flexible nor scalable to manage, so this paper introduces a new switch that connects the components of the data center into a more manageable and flexible architecture.
The proposed switch is called a pswitch, and it introduces a new policy-aware layer between layer 2 and layer 3. Middleboxes are taken off the main network path and attached directly to the pswitches. Inside the switches are policy and rule tables that network managers can easily change. The pswitch routes traffic through the desired middleboxes before passing it on to the next hop, which is a much cleaner way to handle the tangle of middleboxes used to manage traffic in data centers.
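As a rough mental model of the rule-table idea (not the paper's actual data structures; the class and field names below are made up), a pswitch can be pictured as an ordered list of match/action entries where the action names the next middlebox to send matching traffic through:

```python
# Hypothetical sketch of a pswitch-style policy table: each rule matches a
# traffic class and names the next middlebox instance to forward to.
from dataclasses import dataclass

@dataclass
class PolicyRule:
    src_prefix: str      # e.g. "10.0.1." (toy prefix match)
    dst_port: int        # e.g. 80
    next_middlebox: str  # e.g. "firewall-1" or "loadbalancer-2"

class PSwitch:
    def __init__(self, rules):
        self.rules = rules  # ordered: first match wins

    def next_hop(self, src_ip, dst_port):
        for rule in self.rules:
            if src_ip.startswith(rule.src_prefix) and dst_port == rule.dst_port:
                return rule.next_middlebox
        return "default-forward"  # no policy applies: normal forwarding

switch = PSwitch([PolicyRule("10.0.1.", 80, "firewall-1"),
                  PolicyRule("10.0.1.", 443, "loadbalancer-2")])
print(switch.next_hop("10.0.1.7", 80))  # -> firewall-1
```

The point of the sketch is just that the policy lives in a table an operator can edit, instead of being encoded in VLANs or link costs.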
Tuesday, November 25, 2008
Improving MapReduce Performance in Heterogeneous Environments
This paper is about scheduling MapReduce tasks across a cluster of machines. MapReduce is the technique of splitting a job into many smaller tasks and mapping them out so that thousands of tasks can execute simultaneously. A popular open-source implementation, Hadoop, developed largely at Yahoo, is commonly used to run MapReduce jobs on clusters. However, Hadoop makes several inherent assumptions that cause it to perform poorly in certain environments, chiefly the assumption that the nodes in the cluster are roughly homogeneous. As a result, when task progress is monitored to decide which tasks to speculatively re-execute on idle nodes, the progress estimates can be badly off, and node computing power gets wasted on the wrong tasks.
This paper proposes a new scheduler for MapReduce. The LATE scheduler focuses on the slow tasks that actually affect response time, and only re-launches the tasks that are estimated to be farthest from finishing, duplicating their execution on a faster node. This decreases the overall response time of the MapReduce job, and in a heterogeneous environment it gives much better estimates and better use of speculative execution. The reported performance is up to 2 times faster than the original scheduler.
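Here's a back-of-the-envelope sketch of the heuristic described above (illustrative only; the function names and thresholds are mine, not Hadoop's or the paper's exact formulas): estimate each running task's progress rate, derive an estimated time to completion, and speculatively relaunch the slow task that is farthest from finishing.

```python
def pick_task_to_speculate(running_tasks, now, slow_fraction=0.25):
    """running_tasks: list of (task_id, progress_score, start_time) tuples.
    Estimate each task's progress rate and time left; among the slowest
    fraction of tasks, speculate the one farthest from finishing.
    (Illustrative thresholds, not the actual Hadoop/LATE code.)"""
    stats = []
    for task_id, progress, start in running_tasks:
        elapsed = max(now - start, 1e-9)
        rate = progress / elapsed                        # progress per second
        time_left = (1.0 - progress) / rate if rate > 0 else float("inf")
        stats.append((rate, time_left, task_id))
    stats.sort()                                         # slowest progress rates first
    slow = stats[:max(1, int(len(stats) * slow_fraction))]
    return max(slow, key=lambda s: s[1])[2]              # longest estimated time left

# Toy usage: task "t3" has made the least progress per unit time.
tasks = [("t1", 0.9, 0.0), ("t2", 0.5, 0.0), ("t3", 0.1, 0.0)]
print(pick_task_to_speculate(tasks, now=100.0))          # -> t3
```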
Wednesday, November 19, 2008
Scalable Application Layer Multicast
This paper is again on multicast at the application layer, and it focuses on a protocol for maintaining an overlay topology for efficient delivery of multicast data. The paper introduces the NICE tree, which is the topology of the overlay network. The idea is to use the proximity of nodes to group nearby nodes into clusters. The center node of each cluster is the cluster leader, and it talks to the other cluster leaders. The whole NICE tree is hierarchical, with multiple layers: all nodes are in the lowest layer, and each cluster leader is also a member of the next layer up along with the other leaders. Data is delivered by passing it up to the highest layer a node belongs to and then down the tree. Because nodes are clustered by proximity, communication within a cluster is fast, and across clusters the leaders talk to each other on the shared layer above. It's a bit like routing within an AS versus across ASes.
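A toy sketch of the layered cluster structure (the data structures and the grouping rule are mine, not the paper's; NICE actually groups by measured latency): each layer is a set of clusters, each cluster has a leader, and every leader is also a member of the layer above.

```python
# Toy model of a NICE-style layered hierarchy (illustrative only).
class Cluster:
    def __init__(self, members):
        self.members = members
        self.leader = min(members)   # stand-in for picking the cluster "center"

def build_layers(nodes, cluster_size=3):
    """Group nearby nodes into clusters; each cluster's leader is also a
    member of the next layer up (toy model: grouping is just by sort order)."""
    layers = []
    current = sorted(nodes)
    while len(current) > 1:
        clusters = [Cluster(current[i:i + cluster_size])
                    for i in range(0, len(current), cluster_size)]
        layers.append(clusters)
        current = [c.leader for c in clusters]   # leaders move up a layer
    return layers

for i, layer in enumerate(build_layers(["A", "B", "C", "D", "E", "F", "G"])):
    print(f"layer {i}:", [(c.leader, c.members) for c in layer])
```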
With a protocol like this, the most important things to look at are how nodes join and leave and how the hierarchy is continuously maintained. The NICE tree has a Rendezvous Point that tells a new node where to start looking; the node then traverses down the hierarchy until it finds its spot, as sketched below. Another important situation is when a cluster leader leaves the network or goes down and a new leader must be selected; that process is summarized from another paper and isn't really covered in this one. The topology of this network definitely seems workable, and the simulation results weren't bad either. Seemed like a great paper.
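Here's a rough sketch of the join walk-down as I understand it (a made-up approximation, not the paper's pseudocode): starting from the top layer the Rendezvous Point points at, the new node probes the cluster leaders it is referred to, picks the closest one, and descends into that leader's cluster, layer by layer, until it reaches the bottom.

```python
def join(new_node, layers, distance):
    """layers: list from the top layer down to the bottom layer; each layer is
    a dict mapping a cluster leader to the members of its cluster.
    distance(a, b): a latency estimate between two nodes (assumed measurable).
    Toy walk-down: at each layer, probe the leaders we were referred to and
    descend into the closest one's cluster."""
    candidates = list(layers[0].keys())   # the RP refers the node to the top layer
    for layer in layers:
        closest = min(candidates, key=lambda leader: distance(new_node, leader))
        candidates = layer[closest]       # members = leaders of clusters one layer down
    return closest                        # leader of the bottom-layer cluster to join

layers = [
    {"A": ["A", "D"]},                             # top layer: one cluster led by A
    {"A": ["A", "B", "C"], "D": ["D", "E", "F"]},  # bottom-layer clusters
]
fake_distance = lambda a, b: abs(ord(a) - ord(b))  # stand-in latency metric
print(join("G", layers, fake_distance))            # -> "D", the closest bottom leader
```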
A reliable multicast framework for light-weight sessions and application level framing
This week our focus is on multicast in the network; both papers are about application-level overlays used for multicast. This paper takes the approach of building a generic framework protocol for multicast. The analogy they use is TCP for unicast: they want a framework for reliable delivery (though here it is best-effort, with eventual delivery of the message). It starts by pointing out several unicast mechanisms that are poorly suited to multicast. ACK-based reliability is a bad idea for multicast because of ACK implosion, and because the sender doesn't necessarily know all the receivers and participants involved. Another poor fit is per-connection state such as sequence numbers, since participants can leave and join at any time, so multicast is better suited to application data units. The paper demonstrates its multicast framework with a whiteboard program, which lets users hold an online conference and see the same whiteboard or presentation.
From what I read, the protocol relies heavily on random backoff timers to ensure that only one copy of a repair or recovery request gets sent, with all other receivers suppressing their requests for the same data. Data is synced by page (for the whiteboard program), and the paper mentions that you can request old pages, but it doesn't give specifics. The framework seems designed to work over different network topologies, and the paper compares several of them, but it still appears to make some assumptions about the underlying topology. The next paper is more focused on maintaining an overlay topology for efficient delivery of multicast data.
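A minimal sketch of that suppression idea (a simplified simulation, not SRM's actual timer formulas): each receiver that detects a loss schedules its repair request after a random delay, and if it hears someone else's request for the same data before its timer fires, it cancels its own.

```python
import random

def simulate_repair_requests(receivers, max_backoff=1.0):
    """Each receiver that detected the same loss picks a random backoff; the
    first timer to fire multicasts a repair request and every other receiver,
    having heard it, suppresses its own. (Toy model: real SRM also scales
    timers by each receiver's distance from the point of loss.)"""
    timers = {r: random.uniform(0, max_backoff) for r in receivers}
    requester = min(timers, key=timers.get)        # first timer to fire
    suppressed = [r for r in receivers if r != requester]
    return requester, suppressed

requester, suppressed = simulate_repair_requests(["r1", "r2", "r3", "r4"])
print(f"{requester} multicasts the repair request; suppressed: {suppressed}")
```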
Monday, November 17, 2008
An Architecture for Internet Data Transfer
When I started reading this paper, since we've been reading papers on network architecture all semester, it took me a moment to realize that this is not a paper on the architecture of the Internet itself, but more of an overlay where a service is implemented on top of the existing network for convenience. Several of its points are appealing, including the fact that different applications that send or receive the same data could share a cache, because they all go through the same transfer service, and so save the time to transmit or fetch the data again.
The idea of separating data transfer from the negotiation of the transfer is certainly not new; FTP is the perfect example of a protocol that uses one connection for control and another for the data. The difference here is that this provides a general interface for all applications to use, not just FTP. The implementation is in C++, hooking into the network transfer paths and issuing callbacks as data is transferred. The interface approach is interesting, but I wonder how applicable it is to most applications on the Internet, since there are already a lot of existing services for data transfer.
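To make the "general interface with callbacks" idea concrete, here is a hypothetical sketch of what such a transfer-service API could look like from an application's point of view (the names and shape are mine, not the paper's actual C++ API): the application hands the service an object identifier and a completion callback, and the service decides how to get the bytes, possibly straight out of a shared cache.

```python
# Hypothetical transfer-service interface, sketched in Python for brevity.
class TransferService:
    def __init__(self):
        self.cache = {}  # content cache shared across applications

    def put(self, object_id, data):
        """Sender side: register data under an identifier and return it."""
        self.cache[object_id] = data
        return object_id

    def get(self, object_id, on_done):
        """Receiver side: fetch by identifier, then fire the completion callback.
        A cached object can be returned without touching the network."""
        data = self.cache.get(object_id)  # a real service would fetch remotely on a miss
        on_done(object_id, data)

svc = TransferService()
oid = svc.put("report-v1", b"...file bytes...")
svc.get(oid, lambda oid, data: print(f"received {oid}: {len(data)} bytes"))
```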
A Delay Tolerant Network Architecture for Challenged Internets
This paper addresses the problem of network links with very long delays, unreliable connectivity, and generally poor operating characteristics; the paper calls these challenged networks. It states that the current Internet makes certain assumptions about the underlying links, namely basic reliability and reasonable performance. I think this is only true in some cases; the basic Internet architecture itself doesn't really make that assumption, which is why this research is even possible. The main idea I took from the paper is to insert nodes (they call them DTN gateways) into the network to split it into regions, and then define the network characteristics within each region. The DTN gateways are responsible for retransmission and reliable transfer. The rest of the paper describes the routing and naming schemes of this overlay network. It reminds me of the split-TCP mechanism, where a midpoint in the TCP connection takes on a similar responsibility.
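As a rough illustration of what "keeping retransmission responsibility inside the network" looks like (an invented sketch, not the paper's protocol details), a DTN gateway can take custody of a message: it stores the bundle and keeps retrying the next hop until that hop accepts it, so the original sender doesn't have to stay connected across the unreliable link.

```python
class DTNGateway:
    """Toy store-and-forward gateway: it takes custody of bundles and keeps
    retrying the next hop until the hand-off succeeds (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.custody = {}  # bundle_id -> (bundle, next_hop)

    def accept(self, bundle_id, bundle, next_hop):
        """Store the bundle before acknowledging the sender."""
        self.custody[bundle_id] = (bundle, next_hop)

    def retry_pending(self, try_send):
        """try_send(next_hop, bundle) -> True if the next hop accepted the bundle.
        Called periodically; an intermittent link simply fails until it comes up."""
        for bundle_id in list(self.custody):
            bundle, next_hop = self.custody[bundle_id]
            if try_send(next_hop, bundle):
                del self.custody[bundle_id]  # responsibility handed off downstream

gw = DTNGateway("region-A-gateway")
gw.accept("b1", b"sensor data", next_hop="region-B-gateway")
gw.retry_pending(lambda hop, bundle: False)  # link down: bundle stays in custody
gw.retry_pending(lambda hop, bundle: True)   # link up: bundle handed off
```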
The idea presented isn't completely new, but one of the main points of the original Internet architecture was not to keep state in the network. Now, with much wider use of the network, there seem to be real benefits to keeping state in some nodes. With challenged networks, the unreliability pretty much requires some retransmission point inside the network. The counterargument is that this is what the layers are for: we could implement it at the link layer, or use a stronger end-to-end retransmission mechanism. Nonetheless, with the Internet evolving and more and more kinds of networks wanting to join it, some state in the network seems unavoidable.
Thursday, November 13, 2008
X-Trace: A Pervasive Network Tracing Framework
This paper takes a different approach to measuring and tracing the performance of the Internet than the previous paper. Instead of passively collecting packets and analyzing traces, it proposes a more active method: a framework that inserts tracing metadata into packets to help trace them through the network. This framework lets us trace across layers and protocols and analyze the causal relationships between flows. The concept is really effective, because a lot of network traffic is indeed caused by some other flow. For example, a DNS query is often triggered by an HTTP request or an outgoing email, and a single website visit might lead to further requests to ad servers, image servers, and so on. So an effective way to categorize network traffic is to take these causal relationships into account.
The framework adds metadata to packets, along with two propagation primitives, pushNext and pushDown. pushDown copies the metadata from one layer to the layer below, and pushNext carries it to the next hop. Based on the collected metadata, you reconstruct the task tree and analyze it. The paper also gives several use cases and applications.
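A small sketch of how I picture the two primitives working (simplified, with made-up field names; the real X-Trace metadata has a task ID and operation IDs): each operation carries the task ID plus the ID of the operation that caused it, pushNext propagates that to the next hop in the same layer, and pushDown propagates it into the layer below, so the task tree can be rebuilt from the (parent, child) edges in the reports.

```python
import itertools

_op_counter = itertools.count()

def new_metadata(task_id, parent_op=None):
    """Simplified X-Trace-style metadata: the task id, this operation's id,
    and the id of the operation that caused it (field names are made up)."""
    return {"task_id": task_id, "op_id": next(_op_counter), "parent_op": parent_op}

def push_next(md):
    """Carry the metadata to the next hop in the same layer: same task,
    new operation, current operation recorded as the parent."""
    return new_metadata(md["task_id"], parent_op=md["op_id"])

def push_down(md):
    """Copy the metadata into the layer below (e.g. HTTP -> TCP); in this toy
    model it builds the same kind of edge, only the destination differs."""
    return new_metadata(md["task_id"], parent_op=md["op_id"])

# Toy example: an HTTP request forwarded to a proxy while riding on TCP.
http_req  = new_metadata(task_id=42)
tcp_seg   = push_down(http_req)    # HTTP layer pushes down into TCP
http_hop2 = push_next(http_req)    # HTTP request carried to the next hop
edges = [(m["parent_op"], m["op_id"]) for m in (tcp_seg, http_hop2)]
print(edges)   # (parent, child) edges from which the task tree is rebuilt
```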
This method of course involves changing the network and introducing more traffic in order to trace traffic, compared to the previous method of just being a bystander watching packets go by. Setting aside the security implications of the trace data, I wonder whether the tracing metadata and mechanisms could themselves affect the traffic, skewing the measured data or the analysis.