Bandwidth management is the process of measuring and controlling the communications (traffic, packets) on a network link, to avoid filling the link to or beyond its capacity, which would result in network congestion and poor network performance. Bandwidth is described by bit rate and measured in units of bits per second (bit/s) or bytes per second (B/s).
Bandwidth management mechanisms may be used to further engineer performance and include:
Traffic shaping (rate limiting):
Token bucket
Leaky bucket
TCP rate control - artificially adjusting TCP window size as well as controlling the rate of ACKs being returned to the sender
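To illustrate how a shaper smooths bursts, here is a minimal tick-based leaky-bucket sketch (an illustrative model with unit-sized packets, not any particular implementation):

```python
from collections import deque

def leaky_bucket(arrivals, capacity, rate):
    """Leaky-bucket shaper sketch: packets enter a queue of bounded
    `capacity`; each tick the bucket 'leaks' at most `rate` packets at a
    steady pace, smoothing bursts. `arrivals[t]` is the number of packets
    arriving at tick t; returns (packets_sent_per_tick, packets_dropped)."""
    q = deque()
    dropped = 0
    sent = []
    for n in arrivals:
        for _ in range(n):
            if len(q) < capacity:
                q.append(1)
            else:
                dropped += 1          # bucket overflow: excess is dropped
        out = 0
        while q and out < rate:       # leak at a constant rate
            q.popleft()
            out += 1
        sent.append(out)
    return sent, dropped

# A burst of 5 packets is emitted at a steady 2 packets per tick;
# one packet overflows the 4-packet bucket.
print(leaky_bucket([5, 0, 0], capacity=4, rate=2))  # → ([2, 2, 0], 1)
```

The burst is absorbed by the queue and replayed at the configured drain rate, which is the essential difference from a token bucket (which permits bursts up to the bucket depth).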
Scheduling algorithms:
Weighted fair queuing (WFQ)
Class based weighted fair queuing
Weighted round robin (WRR)
Deficit weighted round robin (DWRR)
Hierarchical Fair Service Curve (HFSC)
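A minimal sketch of one of these schedulers, deficit (weighted) round robin, under simplified assumptions: each class is a FIFO queue of packet sizes in bytes, and every class receives the same quantum of credit per round (weights would simply scale the quantum):

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """Deficit Round Robin sketch: each round, every backlogged queue
    earns `quantum` bytes of credit and may send packets as long as its
    accumulated deficit covers the packet at the head of its queue.
    Returns the transmission order as (queue_index, packet_size) pairs."""
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0       # idle queues accumulate no credit
                continue
            deficits[i] += quantum
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt
                sent.append((i, pkt))
    return sent

# Queue 0 holds large packets, queue 1 small ones; with a 300-byte
# quantum each class still gets roughly equal bytes per round.
qs = [deque([300, 300]), deque([100, 100, 100, 100, 100, 100])]
print(drr_schedule(qs, quantum=300, rounds=2))
```

The deficit counter is what lets DRR serve variable-size packets fairly in bytes rather than in packet counts, which plain weighted round robin cannot do.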
Congestion avoidance:
RED, WRED - reduces the chance of tail drops in port queue buffers, which in turn lowers the likelihood of TCP global synchronization
Policing (marking/dropping the packet in excess of the committed traffic rate and burst size)
Explicit congestion notification
Buffer tuning - allows you to modify the way a router allocates buffers from its available memory, and helps prevent packet drops during a temporary burst of traffic.
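The RED decision above can be sketched as a small function (a simplified model operating on an already-computed average queue length; real implementations track an exponentially weighted moving average and scale the probability by the count of packets since the last drop):

```python
import random

def red_drop(avg_qlen, min_th, max_th, max_p):
    """Random Early Detection sketch: below min_th never drop; above
    max_th always drop; in between, drop with a probability that grows
    linearly up to max_p. Dropping *early and randomly* makes different
    TCP flows back off at different times, avoiding the synchronized
    sawtooth of tail-drop queues."""
    if avg_qlen < min_th:
        return False                  # queue is short: admit everything
    if avg_qlen >= max_th:
        return True                   # queue is long: drop everything
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p
```

WRED applies the same curve per traffic class, giving each class its own thresholds so low-priority traffic is dropped first.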
Bandwidth reservation protocols / algorithms
Resource Reservation Protocol (RSVP) - the means by which applications communicate their requirements to the network in an efficient and robust manner.
Constraint-based Routing Label Distribution Protocol (CR-LDP)
Top-nodes algorithm
Traffic classification - categorising traffic according to policy so that the techniques above can be applied to each class of traffic differently
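A classifier can be as simple as a function mapping packet header fields to a class name; the following sketch matches on a simplified 5-tuple, and the class names and port choices are illustrative policy, not a standard:

```python
def classify(pkt):
    """Map a packet (dict with 'proto' and 'dport', a simplified
    5-tuple) to a traffic class that shaping, scheduling, or policing
    rules can then act on per class."""
    if pkt["proto"] == "udp" and pkt["dport"] in (5060, 5004):
        return "voice"                # SIP signalling / RTP media
    if pkt["proto"] == "tcp" and pkt["dport"] in (80, 443):
        return "web"
    if pkt["proto"] == "tcp" and pkt["dport"] == 22:
        return "interactive"          # e.g. SSH
    return "best-effort"              # everything unmatched

print(classify({"proto": "tcp", "dport": 443}))  # → web
```

In DiffServ-style deployments the result of classification is typically encoded in the packet itself (DSCP marking) so downstream routers need not re-classify.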
Issues which may limit the performance of a given link include:
TCP determines the capacity of a connection by flooding it until packets start being dropped (slow start)
Queueing in routers results in higher latency and jitter as the network approaches (and occasionally exceeds) capacity
TCP global synchronization, in which many flows lose packets at the same moment when a queue overflows, back off simultaneously, and then ramp up in lockstep, wasting capacity
The token bucket is an algorithm used in packet-switched and telecommunications networks. It can be used to check that data transmissions, in the form of packets, conform to defined limits on bandwidth and burstiness (a measure of the unevenness or variations in the traffic flow). It can also be used as a scheduling algorithm to determine the timing of transmissions that will comply with the limits set for the bandwidth and burstiness: see network scheduler.
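The conformance check described above can be sketched in a few lines (a minimal single-bucket meter; parameter names and the use of `time.monotonic` are this sketch's choices, not part of any standard):

```python
import time

class TokenBucket:
    """Token-bucket conformance sketch: tokens accrue continuously at
    `rate` tokens per second up to a depth of `burst`; a packet of
    `size` tokens conforms if enough tokens are available, and
    non-conformant packets are left to be shaped, marked, or dropped."""

    def __init__(self, rate, burst):
        self.rate = rate              # long-term average rate (tokens/s)
        self.burst = burst            # bucket depth: max tolerated burst
        self.tokens = burst           # start full: an initial burst conforms
        self.last = time.monotonic()

    def conforms(self, size):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=1000, burst=1500)
print(tb.conforms(1500))  # full bucket: the initial burst conforms
print(tb.conforms(1500))  # bucket drained: immediate repeat does not
```

The bucket depth bounds burstiness while the refill rate bounds the long-term average, which is exactly the two limits the paragraph above describes.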
In computer networks, network traffic measurement is the process of measuring the amount and type of traffic on a particular network. This is especially important with regard to effective bandwidth management. Network performance can be measured using either active or passive techniques. Active techniques (e.g. Iperf) are more intrusive but are arguably more accurate. Passive techniques add less network overhead and can therefore run continuously in the background to trigger network management actions.
In computer networks, rate limiting is used to control the rate of requests sent or received by a network interface controller. It can be used to prevent DoS attacks and limit web scraping. Research indicates that flooding rates from a single zombie machine exceed 20 HTTP GET requests per second, while legitimate rates are much lower. Hardware appliances can limit the rate of requests on layer 4 or 5 of the OSI model. Rate limiting can be induced by the network protocol stack of the sender due to a received ECN-marked packet, and also by the network scheduler of any router along the way.