Friday, October 5, 2007

Bandwidth management
In computer networking, bandwidth management is the process of measuring and controlling the communications (traffic, packets) on a network link, to avoid filling the link to capacity or overfilling the link, which would result in network congestion and poor performance.

Overview
The sole user of a connection will probably know which application caused a problem, or (barring spyware that hides itself deep within the system) can figure it out pretty quickly. However, this task is much harder for a network administrator, who does not necessarily know what applications other people are running on their computers, or how they use the network.
Conversely, this task is much more important for network administrators. A user downloading large files can happily go and do something else while they wait for the download to finish. But on a network, if one user does this, the others will start complaining that they can't access web pages, or their access is slow, and demand that the administrator fix it.

Finding the culprit
To keep your Internet connection working fast and smoothly, you must control your use of bandwidth, to stay below the maximum capacity of the network link. To control something, you must be able to measure it.
These tasks are usually viewed separately: much software exists for network traffic measurement and network traffic control, but the two are normally not integrated. And it may not be necessary to integrate them. Once the cause of the heavy traffic is identified, it is usually simpler, and may be more effective, to shut it down or reschedule it than to try to manage its bandwidth use.
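When you do decide to manage a heavy flow rather than shut it down, the classic control mechanism is a token bucket: tokens (bytes) accumulate at the target rate up to a burst limit, and a packet may only be sent if enough tokens are available. Here is a minimal sketch in Python; the class and parameter names are illustrative, not taken from any particular traffic-shaping tool, and the clock is passed in explicitly so the refill arithmetic is easy to follow:

```python
class TokenBucket:
    """Toy token-bucket shaper: allow a packet only if enough
    tokens (bytes) have accumulated; refill at a fixed rate."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # refill rate, bytes per second
        self.capacity = burst_bytes   # maximum burst size
        self.tokens = burst_bytes     # start with a full bucket
        self.last = 0.0               # timestamp of the last check

    def allow(self, packet_bytes, now):
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # conforming: send now
        return False      # non-conforming: queue or drop

# Roughly a 1 Mbit/s link (125,000 bytes/s) with a 3,000-byte burst.
bucket = TokenBucket(rate_bps=125_000, burst_bytes=3_000)
print(bucket.allow(1500, now=0.0))   # True  (burst allowance)
print(bucket.allow(1500, now=0.0))   # True  (burst exhausted)
print(bucket.allow(1500, now=0.0))   # False (no tokens left)
print(bucket.allow(1500, now=0.02))  # True  (20 ms refilled 2,500 bytes)
```

The burst size is the knob that trades smoothness against tolerance for short spikes: a tiny bucket shapes traffic tightly but punishes normal web-browsing bursts.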
Many aspects of the Internet protocol suite prevent communications links from reaching their maximum capacity in practice. Therefore, it is necessary to keep the link utilisation below the maximum theoretical capacity of the link, in order to ensure fast responsiveness and eliminate bottleneck queues at the link endpoints, which increase latency. This is called congestion avoidance.
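The latency cost of running a link near capacity shows up even in the simplest queueing model. Assuming an M/M/1 queue (a textbook simplification, not something the article itself specifies), the mean time a packet spends in the system is 1/(mu - lambda), where mu is the service rate and lambda the arrival rate:

```python
def mean_delay(service_rate, arrival_rate):
    """Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda).
    Both rates in packets per second; arrival rate must be below capacity."""
    assert arrival_rate < service_rate
    return 1.0 / (service_rate - arrival_rate)

mu = 1000.0  # the link can drain 1000 packets per second
for load in (0.5, 0.8, 0.9, 0.99):
    delay_ms = mean_delay(mu, load * mu) * 1000
    print(f"utilisation {load:.0%}: mean delay {delay_ms:.1f} ms")
# utilisation 50%: mean delay 2.0 ms
# utilisation 80%: mean delay 5.0 ms
# utilisation 90%: mean delay 10.0 ms
# utilisation 99%: mean delay 100.0 ms
```

Note how delay roughly doubles between 50% and 80% load but explodes past 90%: this non-linearity is why keeping utilisation comfortably below the theoretical maximum matters.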
Some issues which limit the performance of a given link are:

TCP probes the capacity of a connection by ramping up its sending rate until packets start being dropped (slow start)
Queueing in routers results in higher latency and jitter as the network approaches (and occasionally exceeds) capacity
TCP global synchronisation when the network reaches capacity wastes bandwidth
Burstiness of web traffic requires spare bandwidth to rapidly accommodate the bursty traffic
Lack of widespread support for explicit congestion notification and Quality of Service management on the Internet
Internet Service Providers typically retain control over queue management and quality of service at their end of the link
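The first three points above can be caricatured with a toy trace of TCP's congestion window: the window doubles each round trip during slow start, grows by one segment per round trip in congestion avoidance, and collapses when the link is overfilled. This sketch uses a Tahoe-style restart to a one-segment window; real TCP implementations differ in the details, and a full window over capacity standing in for packet loss is a deliberate simplification:

```python
def simulate_cwnd(capacity, rtts):
    """Toy trace of TCP's congestion window (in segments) per RTT:
    exponential growth in slow start, +1 per RTT in congestion
    avoidance, and a restart when the window exceeds link capacity."""
    cwnd, ssthresh, trace = 1, float("inf"), []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd > capacity:               # stand-in for packet loss
            ssthresh = max(cwnd // 2, 1)  # remember half the loss point
            cwnd = 1                      # Tahoe-style restart
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start
        else:
            cwnd += 1                     # congestion avoidance
    return trace

print(simulate_cwnd(capacity=32, rtts=12))
# → [1, 2, 4, 8, 16, 32, 64, 1, 2, 4, 8, 16]
```

The overshoot to 64 on a 32-segment link is exactly the "flooding until packets are dropped" behaviour described above, and the repeated sawtooth is what synchronises across flows when many connections share one bottleneck.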
