Tuesday, December 23, 2008

Network Congestion

The primary reason for segmenting a LAN into smaller parts is to isolate traffic and to achieve better use of bandwidth per user. Without segmentation, a LAN quickly becomes clogged with traffic and collisions. The figure shows a hub-based network subject to congestion caused by its many connected nodes.

These are the most common causes of network congestion:


- Increasingly powerful computer and network technologies. Today, CPUs, buses, and peripherals are much faster and more powerful than those used in early LANs; as a result, they can send and process more data at higher rates through the network.
- Increasing volume of network traffic. Network traffic is now more common because remote resources are necessary to carry out basic work. Additionally, broadcast messages, such as address resolution queries sent out by ARP, can adversely affect end-station and network performance.
- High-bandwidth applications. Software applications are becoming richer in their functionality and are requiring more and more bandwidth. Desktop publishing, engineering design, video on demand (VoD), electronic learning (e-learning), and streaming video all require considerable processing power and speed.

LAN Segmentation

LANs are segmented into a number of smaller collision and broadcast domains using routers and switches. Previously, bridges were used, but this type of network equipment is rarely seen in a modern switched LAN. The figure shows the routers and switches segmenting a LAN.

In the figure, the network is segmented into two collision domains by the switch.

Bridges and Switches


Although bridges and switches share many attributes, several distinctions differentiate these technologies. Bridges are generally used to segment a LAN into a couple of smaller segments. Switches are generally used to segment a large LAN into many smaller segments. Bridges have only a few ports for LAN connectivity, whereas switches have many.

Routers

Even though the LAN switch reduces the size of collision domains, all hosts connected to the switch are still in the same broadcast domain. Because routers do not forward broadcast traffic by default, they can be used to create broadcast domains. Creating additional, smaller broadcast domains with a router reduces broadcast traffic and provides more available bandwidth for unicast communications. Each router interface connects to a separate network, containing broadcast traffic within the LAN segment in which it originated.
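The rule of thumb above can be sketched in a few lines: each active switch port bounds its own collision domain, while each router interface bounds its own broadcast domain. The function below is a hypothetical illustration, not part of the course material:

```python
def domain_counts(switch_ports_in_use: int, router_interfaces: int) -> dict:
    """Rule of thumb: every active switch port is its own collision
    domain; every router interface bounds its own broadcast domain,
    because routers do not forward broadcasts by default."""
    return {
        "collision_domains": switch_ports_in_use,
        "broadcast_domains": router_interfaces,
    }

# A 24-port switch with all ports in use, behind a router with 2 LAN interfaces:
print(domain_counts(switch_ports_in_use=24, router_interfaces=2))
# {'collision_domains': 24, 'broadcast_domains': 2}
```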

Controlling Network Latency


When designing a network to reduce latency, you need to consider the latency caused by each device on the network. Switches can introduce latency when they are oversubscribed on a busy network. For example, if a core-level switch has to support 48 ports, each one capable of running at 1000 Mb/s full duplex, the switch should support around 96 Gb/s internal throughput if it is to maintain full wirespeed across all ports simultaneously. In this example, the throughput requirements stated are typical of core-level switches, not of access-level switches.
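The 96 Gb/s figure comes from straightforward arithmetic: at full duplex, each port can send and receive at line rate at the same time, so the switching fabric must carry twice the per-port speed across all ports. A minimal sketch of that calculation (the function name is illustrative, not from the source):

```python
def required_fabric_gbps(ports: int, port_speed_mbps: int) -> float:
    """Wirespeed requirement for a non-blocking switch fabric.
    Full duplex means each port transmits AND receives at line rate,
    hence the factor of 2; divide by 1000 to convert Mb/s to Gb/s."""
    return ports * port_speed_mbps * 2 / 1000

# 48 ports at 1000 Mb/s full duplex:
print(required_fabric_gbps(48, 1000))  # 96.0 Gb/s
```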

The use of higher layer devices can also increase latency on a network. When a Layer 3 device, such as a router, needs to examine the Layer 3 addressing information contained within the frame, it must read further into the frame than a Layer 2 device, which creates a longer processing time. Limiting the use of higher layer devices can help reduce network latency. However, appropriate use of Layer 3 devices helps prevent contention from broadcast traffic in a large broadcast domain or the high collision rate in a large collision domain.

Removing Bottlenecks


Bottlenecks on a network are places where high network congestion results in slow performance.


The figure shows six computers and a single server, all connected to the same switch. Each workstation and the server connect using a 1000 Mb/s NIC. What happens when all six computers try to access the server at the same time? Does each workstation get 1000 Mb/s dedicated access to the server? No, all the computers have to share the 1000 Mb/s connection that the server has to the switch. Cumulatively, the computers are capable of 6000 Mb/s to the switch. If each connection were used at full capacity, each computer would be able to use only 167 Mb/s, one-sixth of the 1000 Mb/s bandwidth. To reduce the bottleneck at the server, additional network cards can be installed, increasing the total bandwidth the server is capable of receiving. The figure shows five NICs in the server, providing approximately five times the bandwidth. The same logic applies to network topologies. When switches with multiple nodes are interconnected by a single 1000 Mb/s connection, a bottleneck is created at this single interconnect.
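The fair-share arithmetic above can be captured in a short helper. This is a hedged sketch assuming the hosts split the server's uplink capacity evenly; the function and parameter names are illustrative:

```python
def per_host_share_mbps(uplink_mbps: int, hosts: int, server_nics: int = 1) -> float:
    """Bandwidth each host gets when all hosts share the server's
    uplink(s) equally; extra NICs multiply the server's aggregate capacity."""
    return uplink_mbps * server_nics / hosts

# Six hosts sharing one 1000 Mb/s server connection:
print(round(per_host_share_mbps(1000, 6)))  # 167 Mb/s each

# With five NICs in the server, each host's share grows roughly fivefold:
print(per_host_share_mbps(1000, 6, server_nics=5))
```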

Higher capacity links (for example, upgrading from 100 Mb/s to 1000 Mb/s connections) and multiple links combined through link aggregation technologies (for example, treating two links as one to double a connection's capacity) can help to reduce the bottlenecks created by inter-switch links and router links. Although configuring link aggregation is outside the scope of this course, it is important to consider a device's capabilities when assessing a network's needs. How many ports does the device have, and at what speeds? What is the internal throughput of the device? Can it handle the anticipated traffic loads considering its placement in the network?
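Those capacity questions reduce to comparing offered load against link capacity. A minimal sketch, assuming aggregated links simply sum their capacities (real link-aggregation hashing can distribute flows less evenly):

```python
def aggregated_capacity_mbps(link_mbps: int, links: int = 1) -> int:
    """Ideal aggregate capacity of `links` bundled connections."""
    return link_mbps * links

def is_bottleneck(offered_load_mbps: int, link_mbps: int, links: int = 1) -> bool:
    """True if the anticipated load exceeds what the link(s) can carry."""
    return offered_load_mbps > aggregated_capacity_mbps(link_mbps, links)

# 1500 Mb/s of inter-switch traffic over a single 1000 Mb/s uplink:
print(is_bottleneck(1500, 1000))           # True: the link is saturated
print(is_bottleneck(1500, 1000, links=2))  # False: two aggregated links cope
```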
