Design Considerations for Ethernet/802.3 Networks

Network Latency

Latency is the time a frame or a packet takes to travel from the source station to the final destination. Users of network-based applications experience latency when they have to wait many minutes to access data stored in a data center or when a website takes many minutes to load in a browser. Latency has at least three sources.
First, there is the time it takes the source NIC to place voltage pulses on the wire, and the time it takes the destination NIC to interpret these pulses. This is sometimes called NIC delay, typically around 1 microsecond for a 10BASE-T NIC.
Second, there is the actual propagation delay as the signal takes time to travel through the cable. Typically, this is about 0.556 microseconds per 100 m for Cat 5 UTP. Longer cable and slower nominal velocity of propagation (NVP) result in more propagation delay.
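The propagation figure above follows directly from the cable length and the NVP. The sketch below shows the arithmetic; the NVP value of 0.6 is an assumed typical figure for Cat 5 UTP, not a value from the text, so check the cable datasheet for real planning work.

```python
# Propagation delay from cable length and nominal velocity of propagation (NVP).
# NVP of 0.6 is an assumed typical value for Cat 5 UTP.
C = 299_792_458  # speed of light in a vacuum, m/s

def propagation_delay_us(length_m: float, nvp: float = 0.6) -> float:
    """Return the one-way propagation delay in microseconds.

    nvp is the signal speed as a fraction of c; lower NVP or longer
    cable means more delay, as the text notes.
    """
    return length_m / (nvp * C) * 1e6

# 100 m of Cat 5 at NVP 0.6 works out to about 0.556 microseconds,
# matching the per-100 m figure quoted above.
print(round(propagation_delay_us(100), 3))
```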
Third, latency is added by the network devices, whether Layer 1, Layer 2, or Layer 3, that sit in the path between the two end stations. Each device must receive at least part of the frame before it can act on it, so every hop adds processing time.
Latency does not depend solely on distance and number of devices. For example, if three properly configured switches separate two computers, the computers may experience less latency than if two properly configured routers separate them. This is because routers perform more complex and time-intensive functions. For example, a router must analyze Layer 3 data, while a switch analyzes only Layer 2 data. Because the Layer 2 header appears earlier in the frame than the Layer 3 header, switches can begin processing the frame sooner. Switches also support the high transmission rates of voice, video, and data networks by employing application-specific integrated circuits (ASICs) to provide hardware support for many networking tasks. Additional switch features, such as port-based memory buffering, port-level QoS, and congestion management, also help to reduce network latency.
Switch-based latency may also be due to an oversubscribed switch fabric. Many entry-level switches do not have enough internal throughput to handle full bandwidth on all ports simultaneously, so the switch must be sized for the peak traffic expected on the network. As switching technology has improved, however, latency through the switch itself is rarely the bottleneck; network latency in a switched LAN is now more a function of the media in use, the routing protocols deployed, and the types of applications running on the network.
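The three latency sources described above can be combined into a rough end-to-end estimate. In the sketch below, the NIC delay and the per-100 m propagation figure come from the text; the per-switch and per-router delays are illustrative assumptions only, chosen to show why a router hop typically costs more than a switch hop.

```python
# Back-of-the-envelope path latency estimate. NIC and propagation figures
# are from the text; per-device delays are assumed illustrative values.
NIC_DELAY_US = 1.0               # ~1 us for a 10BASE-T NIC (per end station)
SWITCH_DELAY_US = 10.0           # assumed per-switch forwarding delay
ROUTER_DELAY_US = 100.0          # assumed per-router delay (Layer 3 lookup costs more)
PROP_DELAY_US_PER_100M = 0.556   # Cat 5 UTP propagation figure from the text

def path_latency_us(cable_m: float, switches: int = 0, routers: int = 0) -> float:
    """Sum NIC delay at both ends, propagation delay, and per-device delays."""
    return (2 * NIC_DELAY_US
            + (cable_m / 100) * PROP_DELAY_US_PER_100M
            + switches * SWITCH_DELAY_US
            + routers * ROUTER_DELAY_US)

# Two hosts 300 m apart: a three-switch path vs. a two-router path.
print(path_latency_us(300, switches=3))
print(path_latency_us(300, routers=2))
```

With these assumed numbers, the three-switch path comes out well under the two-router path, which is the point the section makes about routers performing more time-intensive work.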
Network Congestion
The primary reason for segmenting a LAN into smaller parts is to isolate traffic and to achieve better use of bandwidth per user. Without segmentation, a LAN quickly becomes clogged with traffic and collisions. The figure shows a network that is subject to congestion by multiple node devices on a hub-based network.
These are the most common causes of network congestion:
Increasingly powerful computer and network technologies. Today, CPUs, buses, and peripherals are much faster and more powerful than those used in early LANs; as a result, they can send and process more data at higher rates through the network.
Increasing volume of network traffic. Network traffic is now more common because remote resources are necessary to carry out basic work. Additionally, broadcast messages, such as address resolution queries sent out by ARP, can adversely affect end-station and network performance.
High-bandwidth applications. Software applications are becoming richer in their functionality and are requiring more and more bandwidth. Desktop publishing, engineering design, video on demand (VoD), electronic learning (e-learning), and streaming video all require considerable processing power and speed.
LAN Segmentation
LANs are segmented into a number of smaller collision and broadcast domains using routers and switches. Previously, bridges were used, but this type of network equipment is rarely seen in a modern switched LAN. The figure shows the routers and switches segmenting a LAN.
In the figure the network is segmented into two collision domains using the switch.
Bridges and Switches
Although bridges and switches share many attributes, several distinctions differentiate these technologies. Bridges are generally used to segment a LAN into a couple of smaller segments. Switches are generally used to segment a large LAN into many smaller segments. Bridges have only a few ports for LAN connectivity, whereas switches have many.
Even though the LAN switch reduces the size of collision domains, all hosts connected to the switch are still in the same broadcast domain. Because routers do not forward broadcast traffic by default, they can be used to create broadcast domains. Creating additional, smaller broadcast domains with a router reduces broadcast traffic and provides more available bandwidth for unicast communications. Each router interface connects to a separate network, containing broadcast traffic within the LAN segment in which it originated.
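The rules in this paragraph can be reduced to a simple counting exercise: each in-use switch port bounds its own collision domain, while only router interfaces bound broadcast domains. The helper below is a toy sketch under those assumptions (one host or segment per switch port, no hubs, no VLANs); the function name and parameters are illustrative, not from the text.

```python
# Toy domain counter for a simple switched LAN. Assumes each in-use switch
# port is a separate collision domain and each router interface bounds a
# broadcast domain; hubs and VLANs are deliberately out of scope.
def count_domains(switch_ports_in_use: int, router_interfaces: int) -> tuple[int, int]:
    """Return (collision_domains, broadcast_domains) for the toy topology.

    Switch ports and router interfaces each terminate a collision domain;
    with no router present, the whole switched LAN is one broadcast domain.
    """
    collision = switch_ports_in_use + router_interfaces
    broadcast = max(router_interfaces, 1)
    return collision, broadcast

# A 12-port switch fully in use behind a router with 2 LAN interfaces:
print(count_domains(switch_ports_in_use=12, router_interfaces=2))
```

This mirrors the section's point: adding the switch multiplies collision domains, but only the router splits the broadcast domain.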

