The Internet has evolved from a loose federation of networks used primarily by academic institutions into a global entity that has revolutionized communication, commerce, and computing. Early in this evolution, it was recognized that unrestricted access to the Internet resulted in poor performance, in the form of low network utilization and high packet loss rates. This phenomenon, known as congestion collapse, led to the development of the first congestion control algorithm for the Internet. The basic idea behind the algorithm is to detect congestion in the network through packet losses: upon detecting a packet loss, the source reduces its transmission rate; otherwise, it increases the rate. The original algorithm has undergone many minor but important changes, yet the essential features of its increase and decrease phases have remained unchanged through the various versions of TCP, such as TCP-Tahoe, Reno, NewReno, and SACK [17, 54]. An exception is the TCP Vegas algorithm, which uses queueing delay in the network, rather than packet loss, as the indicator of congestion. One of the goals of the book is to understand the dynamics of Jacobson’s algorithm through simple mathematical models, and to develop tools and techniques that improve the algorithm and make it scalable to networks with very large capacities, very large numbers of users, and very large round-trip times.
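The loss-driven increase/decrease rule described above is commonly modeled as additive-increase/multiplicative-decrease (AIMD). A minimal sketch of these dynamics, assuming a hypothetical fixed link capacity of 20 packets and the standard parameters (increase the window by one packet per round trip, halve it on loss), illustrates the characteristic sawtooth behavior:

```python
def aimd_step(cwnd, loss, alpha=1.0, beta=0.5):
    """One round-trip update of the congestion window.

    On loss: multiplicative decrease (cwnd scaled by beta).
    Otherwise: additive increase (cwnd grows by alpha).
    """
    if loss:
        return max(1.0, cwnd * beta)
    return cwnd + alpha

# Simulate 60 round trips; a loss occurs whenever the window
# exceeds a hypothetical capacity of 20 packets.
cwnd = 1.0
trace = []
for _ in range(60):
    loss = cwnd > 20
    cwnd = aimd_step(cwnd, loss)
    trace.append(cwnd)

print(trace)  # window climbs linearly, then halves: a sawtooth near capacity
```

This toy model ignores slow start, timeouts, and queueing, but it captures the feedback loop the mathematical models in the book analyze: the window oscillates around the available capacity rather than converging to it.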
Keywords: Packet Loss · Arrival Rate · Congestion Control · Simple Mathematical Model · Congestion Management