Understanding TCP/IP’s Transport and Application Layers

When computers communicate with one another, certain rules, or protocols, are required to allow them to transmit and receive data in an orderly fashion. Throughout the world, the most widely adopted protocol suite is TCP/IP. Understanding how TCP/IP functions is important for a larger understanding of how data is transmitted in network environments.

The way in which IP delivers a packet of data across a network is a fundamental concept in the TCP/IP architecture used in large networks. Understanding how data is transmitted via IP is central to understanding how the TCP/IP suite of protocols functions overall. This, in turn, adds to an understanding of how data that is communicated across networks can be prioritized, restricted, secured, optimized, and maintained. This lesson describes the sequence of steps in IP packet delivery and the concepts and structures involved, such as packets, datagrams, and protocol fields, to provide a view of how data is transmitted over large networks.

For the Internet and internal networks to function correctly, data must be delivered reliably. You can ensure reliable delivery of data through development of the application and by using the services provided by the network protocol. In the OSI reference model, the transport layer manages the process of reliable data delivery. The transport layer hides details of any network-dependent information from the higher layers by providing transparent data transfer. The User Datagram Protocol (UDP) and TCP operate at the transport layer, between the network layer and the application layer. Learning how UDP and TCP function there provides a more complete understanding of how data is transmitted in a TCP/IP networking environment. This section describes the function of the transport layer and how UDP and TCP operate.

The Transport Layer

Residing between the application and network layers, the transport layer, Layer 4, is in the core of the TCP/IP layered network architecture. The transport layer has the critical role of providing communication services directly to the application processes running on different hosts. Learning how the transport layer functions provides an understanding of how data is transmitted in a TCP/IP networking environment. The transport layer protocol places a header on data that is received from the application
layer. The purpose of this protocol is to identify the application from which the data was received and create segments to be passed down to the Internet layer. Some transport layer protocols also perform two additional functions: flow control (provided by sliding windows) and reliability (provided by sequence numbers and acknowledgments). Flow control is a mechanism that enables the communicating hosts to negotiate how much data is transmitted each time. Reliability provides a mechanism for guaranteeing the delivery of each packet.

Two protocols are provided at the transport layer:

  • TCP: A connection-oriented, reliable protocol. In a connection-oriented environment, a connection is established between both ends before transfer of information can begin. TCP is responsible for breaking messages into segments, reassembling them at the destination station, resending anything that is not received, and reassembling messages from the segments. TCP supplies a virtual circuit between end user applications.
  • UDP: A connectionless and unacknowledged protocol. Although UDP is responsible for transmitting messages, no checking for segment delivery is provided at this layer. UDP depends on upper-layer protocols for reliability.
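The distinction between the two protocols is visible directly in the sockets API that applications use. The following sketch (Python; the socket is created but never connected, so no real network traffic occurs) shows that an application chooses TCP or UDP simply by the socket type it requests:

```python
import socket

# TCP: connection-oriented byte stream. A connection must be
# established (connect/accept) before any data flows.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# UDP: connectionless datagrams. Each send is an independent,
# unacknowledged message with no prior setup.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

print(tcp_sock.type, udp_sock.type)
tcp_sock.close()
udp_sock.close()
```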

When devices communicate with one another, they exchange a series of messages. To understand and act on these messages, devices must agree on the format and the order of the messages exchanged, as well as the actions taken on the transmission or receipt of a message.

An example of how a protocol can be used to provide this functionality is a conversational exchange between a student and a teacher in a classroom:

  1. The teacher is lecturing on a particular subject. The teacher stops to ask, “Are there any questions?” This question is a broadcast message to all students.
  2. You raise your hand. This action is an implicit message back to the teacher.
  3. The teacher responds with “Yes, what is your question?” Here, the teacher has acknowledged your message and signals you to send your next message.
  4. You ask your question. You transmit your message to the teacher.
  5. The teacher hears your question and answers it. The teacher receives your message and transmits a reply back to you.
  6. You nod to the teacher that you understand the answer. You acknowledge receipt of the message from the teacher.
  7. The teacher asks if everything is all clear.

The transmission and receipt of messages and a set of conventional actions taken when sending and receiving these messages are at the heart of this question-and-answer protocol.

TCP provides transparent transfer of data between end systems using the services of the network layer below to move packets between the two communicating systems. TCP is a transport layer protocol. IP is a network layer protocol.

Similar to the OSI reference model, TCP/IP separates a full network protocol suite into a number of tasks. Each layer corresponds to a different facet of communication. Conceptually, you can envision TCP/IP as a protocol stack. The services provided by TCP run in the host computers at either end of a connection, not in the network. Therefore, TCP is a protocol for managing end-to-end connections. Because end-to-end connections can exist across a series of point-to-point connections, these end-to-end connections are called virtual circuits. The characteristics of TCP are as follows:

  • Connection-oriented: Two computers set up a connection to exchange data. The end systems synchronize with one another to manage packet flows and adapt to congestion in the network.
  • Full-duplex operation: A TCP connection is a pair of virtual circuits, one in each direction. Only the two synchronized end systems can use the connection.
  • Error checking: A checksum technique verifies that packets are not corrupted.
  • Sequencing: Packets are numbered so that the destination can reorder packets and determine if a packet is missing.
  • Acknowledgments: Upon receipt of one or more packets, the receiver returns an acknowledgment to the sender indicating that it received the packets. If packets are not acknowledged, the sender can retransmit the packets or terminate the connection if the sender thinks the receiver is no longer on the connection.
  • Flow control: If the sender is overflowing the buffer of the receiver by transmitting too quickly, the receiver drops packets. Failed acknowledgments alert the sender to slow down or stop sending. The receiver can also lower the flow to slow the sender down.
  • Packet recovery services: The receiver can request retransmission of a packet. If packet receipt is not acknowledged, the sender resends the packets.

TCP is a reliable transport layer protocol. Reliable data delivery services are critical for applications such as file transfers, database services, transaction processing, and other mission-critical applications in which delivery of every packet must be guaranteed.

An analogy to TCP protocol services would be sending certified mail through the postal service. For example, someone who lives in Lexington, Kentucky, wants to send this book to a friend in New York City, New York, but for some reason, the postal service handles only letters. The sender could rip the pages out and put each one in a separate envelope. To ensure the receiver reassembles the book correctly, the sender numbers each envelope. Then, the sender addresses the envelopes and sends the first envelope certified mail. The postal service delivers the first envelope by any truck and any route. Upon delivery of that envelope, the carrier must get a signature from the receiver and return that certificate of delivery to the sender.

The sender mails several envelopes on the same day. The postal service again delivers each envelope by any truck using any route. The sender returns to the post office each day sending several envelopes each requiring a return receipt. The receiver signs a separate receipt for each envelope in the batch as they are received. If one envelope is lost in transit, the sender would not receive a certificate of delivery for that numbered envelope. The sender might have already sent the pages that follow the missing one, but would still be able to resend the missing page. After receiving all the envelopes, the receiver puts the pages in the right order and pastes them back together to make the book. TCP provides these levels of services.

UDP is another transport layer protocol that was added to the TCP/IP protocol suite. This transport layer protocol uses a smaller header and does not provide the reliability available with TCP.

The early IP suite consisted only of TCP and IP, although IP was not differentiated as a separate service. However, some end user applications needed timeliness rather than accuracy. In other words, speed was more important than packet recovery. In real-time voice or video transfers, a few lost packets are tolerable. Recovering packets creates excessive overhead that reduces performance.

To accommodate this type of traffic, TCP architects redesigned the protocol suite to include UDP. The basic addressing and packet-forwarding service in the network layer was IP. TCP and UDP are in the transport layer on top of IP, and both use IP services.

UDP offers only minimal, nonguaranteed transport services and gives applications direct access to the IP layer. UDP is used by applications that do not require the level of service of TCP or that want to use communications services such as multicast or broadcast delivery, not available from TCP.

An analogy of the UDP protocol services would be using the postal service to send fliers notifying all of your neighbors of your garage sale. In this example, you make a flier advertising the day, time, and location of your garage sale. You address each flier with the specific name and address of each neighbor within a 2-mile radius of your house. The postal service delivers each flier by any truck and any route. However, it is not important if a flier is lost in transit or if a neighbor acknowledges receipt of the flier.
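The flier analogy maps directly onto UDP's send-and-forget behavior. This self-contained sketch (assuming only the loopback interface is available; the message text is made up) delivers one datagram with no handshake and no transport-layer acknowledgment:

```python
import socket

# Receiver: bind a UDP socket; no connection setup is needed.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
receiver.settimeout(2)
port = receiver.getsockname()[1]

# Sender: address the datagram and send it. There is no handshake,
# and UDP itself never confirms delivery.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"garage sale Saturday 9am", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data)  # b'garage sale Saturday 9am'
sender.close()
receiver.close()
```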

TCP/IP Applications

In addition to including the IP, TCP, and UDP protocols, the TCP/IP protocol suite also includes applications that support other services such as file transfer, e-mail, and remote login. Some of the applications that TCP/IP supports include the following:

FTP: FTP is a reliable, connection-oriented service that uses TCP to transfer files between systems that support FTP. FTP supports bidirectional binary and ASCII file transfers.
TFTP: TFTP is an application that uses UDP. Routers use TFTP to transfer configuration files and Cisco IOS images and to transfer files between systems that support TFTP.
Terminal Emulation (Telnet): Telnet provides the capability to remotely access another computer. Telnet enables a user to log on to a remote host and execute commands.
E-mail (SMTP): Simple Mail Transfer Protocol allows users to send and receive messages to e-mail applications throughout the internetwork.

Transport Layer Functionality

The transport layer hides details of any network-dependent information from the higher layers by providing transparent data transfer. Learning how the TCP/IP transport layer and the TCP and UDP protocols function provides a more complete understanding of how data is transmitted with these protocols in a TCP/IP networking environment.

Transport services enable users to segment and reassemble several upper-layer applications onto the same transport layer data stream. This transport layer data stream provides end-to-end transport services. The transport layer data stream constitutes a logical connection between the endpoints of the internetwork, the originating or sending host and the destination or receiving host.

A user of a reliable transport layer service must establish a connection-oriented session with its peer system. For reliable data transfer to begin, both the sending and the receiving applications inform their respective operating systems that a connection is to be initiated, as shown in Figure 1-41.

One machine initiates a connection that must be accepted by the other. Protocol software modules in the two operating systems communicate by sending messages across the network to verify that the transfer is authorized and that both sides are ready.

Figure 1-41 Network Connection
building-simple-network-1.41

After successful synchronization has occurred, the two end systems have established a connection, and data transfer can begin. During transfer, the two machines continue to verify that the connection is still valid.

Encapsulation is the process by which data is prepared for transmission in a TCP/IP network environment. This section describes the encapsulation of data in the TCP/IP stack.

The data container looks different at each layer, and at each layer the container goes by a different name, as shown in Figure 1-42.

Figure 1-42 Names for Encapsulated Data by Layer
building-simple-network-1.42

The names for the data containers created at each layer are as follows:

  • Message: The data container created at the application layer is called a message.
  • Segment or datagram: The data container created at the transport layer, which encapsulates the application layer message, is called a segment if it comes from the transport layer’s TCP protocol. If the data container comes from the transport layer’s UDP protocol, it is called a datagram.
  • Packet: The data container at the network layer, which encapsulates the transport layer segment, is called a packet.
  • Frame: The data container at the data link layer, which encapsulates the packet, is called a frame. This frame is then turned into a bit stream at the physical layer.
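The nesting of these containers can be sketched in a few lines. The bracketed strings below are stand-ins for real binary headers, not actual protocol formats:

```python
# Simplified model: each layer prepends its header to the container
# handed down from the layer above. The bracketed tags are placeholders.
def encapsulate(header: bytes, payload: bytes) -> bytes:
    return header + payload

message = b"GET /index.html"                  # application layer: message
segment = encapsulate(b"[TCP]", message)      # transport layer: segment
packet = encapsulate(b"[IP]", segment)        # network layer: packet
frame = encapsulate(b"[ETH]", packet)         # data link layer: frame

print(frame)  # b'[ETH][IP][TCP]GET /index.html'
```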

A segment or packet is the unit of end-to-end transmission containing a transport header and the data from the above protocols. In general, in discussion about transmitting information from one node to another, the term packet is used loosely to refer to a piece of data. However, this book refers to data formed in the transport layer as a segment, data at the network layer as a datagram or packet, and data at the link layer as a frame.

To provide communications between the segments, each protocol uses a particular header, as discussed in the next section.

TCP/UDP Header Format

TCP is known as a connection-oriented protocol because the end stations are aware of each other and are constantly communicating about the connection. A classic nontechnical example of connection-oriented communication is a telephone conversation between two people. First, a protocol lets the participants know that they have connected and can begin communicating. This protocol is analogous to an initial conversation of “Hello.”

UDP is known as a connectionless protocol. An example of a connectionless conversation is the normal delivery of U.S. postal service. You place the letter in the mail and hope that it gets delivered. Figure 1-43 illustrates the TCP segment header format, the field definitions of which are described in Table 1-4. These fields provide the communication between end stations to control the conversation.

Figure 1-43 TCP Header Format
building-simple-network-1.43

Table 1-4 TCP Header Field Descriptions
building-simple-network-1.4 ip

Figure 1-44 shows a data capture of an Ethernet frame with the TCP header field expanded.

Figure 1-44 TCP Header
building-simple-network-1.44

The TCP header is 20 bytes. Transporting multiple packets with small data fields results in less efficient use of available bandwidth than transporting the same amount of data with fewer, larger packets. This situation is like placing several small objects into several boxes, which could hold more than one object, and shipping each box individually instead of filling one box
completely with all of the objects and sending only that box to deliver all the objects.
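The 20-byte layout can be verified by packing a minimal TCP header by hand with Python's struct module. The port, sequence, and window values below are illustrative, and the checksum is left at zero rather than computed:

```python
import struct

# A hand-built minimal TCP header (no options), network byte order,
# per RFC 793: ports, sequence, acknowledgment, offset/flags, window,
# checksum, urgent pointer.
header = struct.pack(
    "!HHIIHHHH",
    1025,        # source port (dynamically assigned)
    21,          # destination port (FTP, well known)
    100,         # sequence number
    0,           # acknowledgment number
    (5 << 12),   # data offset = 5 32-bit words -> 20 bytes; flags clear
    8192,        # window size advertisement
    0,           # checksum (not computed in this sketch)
    0,           # urgent pointer
)
assert len(header) == 20  # the minimal TCP header is 20 bytes

src, dst, seq, ack, off_flags, window, csum, urg = struct.unpack("!HHIIHHHH", header)
print(dst, seq, window)  # 21 100 8192
```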

Figure 1-45 illustrates the UDP segment header format, the field definitions for which are described in Table 1-5. The UDP header length is always 64 bits.

Figure 1-45 UDP Header
building-simple-network-1.45

Table 1-5 UDP Header Field Descriptions
building-simple-network-1.5 ip

Figure 1-46 shows a data capture of an Ethernet frame with the UDP header field expanded.

Protocols that use UDP include TFTP, SNMP, Network File System (NFS), and DNS.
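The contrast with TCP's 20-byte header is easy to see by packing a UDP header the same way: the entire header is four 16-bit fields (source port, destination port, length, checksum) for a fixed total of 8 bytes. The field values here are illustrative:

```python
import struct

# A hand-built UDP header: four 16-bit fields, network byte order.
# The length field covers the header plus the data.
payload = b"example dns query"
header = struct.pack("!HHHH", 1025, 53, 8 + len(payload), 0)
assert len(header) == 8  # the UDP header is always 8 bytes (64 bits)

src, dst, length, csum = struct.unpack("!HHHH", header)
print(src, dst, length)  # 1025 53 25
```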

Figure 1-46 UDP Header
building-simple-network-1.46

How TCP and UDP Use Port Numbers

Both TCP and UDP use port numbers to pass information to the upper layers. Port numbers keep track of different conversations crossing the network at the same time. Figure 1-47 defines some of the port numbers as used by TCP and UDP.

Figure 1-47 Port Numbers
building-simple-network-1.47

Application software developers agree to use well-known port numbers that are controlled by the IANA. For example, any conversation bound for the FTP application uses the standard port number 21. Conversations that do not involve an application with a well-known port number are assigned port numbers randomly chosen from within a specific range instead. These port numbers are used as source and destination addresses in the TCP segment.

Some ports are reserved in both TCP and UDP, but applications might not be written to support them. Port numbers have the following assigned ranges:

  • Numbers below 1024 are considered well-known or assigned ports.
  • Numbers 1024 and above are dynamically assigned ports.
  • Registered ports are those registered for vendor-specific applications. Most are above 1024.
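On most systems, the mapping from well-known service names to their assigned ports is available through the operating system's services database. A quick check (the printed values assume a standard services file is present):

```python
import socket

# Look up well-known ports by service name; these mappings come from
# the OS services database (e.g., /etc/services on Unix-like systems).
for name, proto in [("ftp", "tcp"), ("telnet", "tcp"), ("domain", "udp")]:
    print(name, socket.getservbyname(name, proto))
# Expected on a standard system: ftp 21, telnet 23, domain (DNS) 53
```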

NOTE
Some applications, such as DNS, use both transport layer protocols. DNS uses UDP for name resolution and TCP for server zone transfers.

Figure 1-48 shows how well-known port numbers are used by hosts to connect to the application on the end station. It also illustrates the selection of a source port so that the end station knows how to communicate with the client application.

RFC 1700, “Assigned Numbers,” defines all the well-known port numbers for TCP/IP. For a listing of current port numbers, refer to the IANA website at http://www.iana.org.

End systems use port numbers to select the proper application. Originating source port numbers are dynamically assigned by the source host, some number greater than 1023.

Figure 1-48 Port Number Example
building-simple-network-1.48

Establishing a TCP Connection: The Three-Way Handshake
TCP is connection-oriented, so it requires connection establishment before data transfer begins. For a connection to be established or initialized, the two hosts must synchronize on each other’s initial sequence numbers (ISN). Synchronization is done in an exchange of connection-establishing segments carrying a control bit called SYN (for synchronize) and
the initial sequence numbers. As shorthand, segments carrying the SYN bit are also called “SYNs.” Hence, the solution requires a suitable mechanism for picking an initial sequence number and a slightly involved handshake to exchange the ISN.

The synchronization requires each side to send its own initial sequence number and to receive a confirmation of its successful transmission within the acknowledgment (ACK) from the other side. Here is the sequence of events:

  1. Host A to Host B SYN: My sequence number is 100, and ACK number is 0. SYN bit is set; ACK bit is not set.
  2. Host B to Host A SYN, ACK: I expect to see 101 next, and my sequence number is 300. SYN and ACK bits are both set.
  3. Host A to Host B ACK: I expect to see 301 next, and my sequence number is 101. ACK bit is set; SYN bit is not set.

NOTE
The initial sequence numbers are actually large random numbers chosen by each host.
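The sequence/acknowledgment arithmetic of the three steps can be modeled in a few lines. This is a toy model, not a real TCP implementation; the key rule is that a SYN consumes one sequence number, so each side acknowledges the other's ISN plus one:

```python
# Toy model of the three-way handshake (real ISNs are large random
# numbers; 100 and 300 follow the simplified example in the text).
isn_a, isn_b = 100, 300   # initial sequence numbers chosen by each host

# 1. A -> B: SYN, seq=100
syn = {"SYN": True, "ACK": False, "seq": isn_a}

# 2. B -> A: SYN+ACK, seq=300, ack=101 (B expects A's next byte)
syn_ack = {"SYN": True, "ACK": True, "seq": isn_b, "ack": syn["seq"] + 1}

# 3. A -> B: ACK, seq=101, ack=301 (A expects B's next byte)
ack = {"SYN": False, "ACK": True, "seq": syn_ack["ack"], "ack": syn_ack["seq"] + 1}

print(syn_ack["ack"], ack["seq"], ack["ack"])  # 101 101 301
```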

This exchange is called the three-way handshake and is illustrated in Figure 1-49.

Figure 1-49 Three-Way Handshake
building-simple-network-1.49

Figure 1-50 shows a data capture of the three-way handshake. Notice the sequence numbers in the three frames.

A three-way handshake is necessary because sequence numbers are not tied to a global clock in the network, and IP stacks might have different mechanisms for picking the ISN. Because the receiver of the first SYN has no way of knowing whether the segment was an old delayed one, unless it remembers the last sequence number used on the connection (which is not always possible), it must ask the sender to verify this SYN. Figure 1-51 illustrates the acknowledgment process.

The window size determines how much data, in bytes, the receiving station accepts at one time before an acknowledgment is returned. With a window size of 1 byte (as shown in Figure 1-51), each segment must be acknowledged before another segment is transmitted. This results in inefficient use of bandwidth by the hosts.

Figure 1-50 Capture of Three-Way Handshake
building-simple-network-1.50

Figure 1-51 Simple Acknowledgment
building-simple-network-1.51

TCP provides sequencing of segments with a forward reference acknowledgment. Each segment is numbered before transmission. At the receiving station, TCP reassembles the segments into a complete message. If a sequence number is missing in the series, that segment is retransmitted. Segments that are not acknowledged within a given time period are also retransmitted. Figure 1-52 illustrates the role that acknowledgment numbers play when datagrams are transmitted.

Figure 1-52 Acknowledgment Numbers
building-simple-network-1.52

Session Multiplexing
Session multiplexing is an activity by which a single computer, with a single IP address, is able to have multiple sessions occur simultaneously. A session is created when a source machine needs to send data to a destination machine. Most often, this involves a reply, but a reply is not mandatory. The session is created and controlled within the IP network application, which contains the functionality of OSI Layers 5 through 7.

A best-effort session is very simple. The session parameters are sent to UDP. A best-effort session sends data to the indicated IP address using the port numbers provided. Each transmission is a separate event, and no memory or association between transmissions is retained.

When using the reliable TCP service, a connection must first be established between the sender and the receiver before any data can be transmitted. TCP opens a connection and negotiates connection parameters with the destination. During data flow, TCP maintains reliable delivery of the data and, when complete, closes the connection.

For example, you enter a URL for Yahoo! into the address line in the Internet Explorer window, and the Yahoo! site corresponding to the URL appears. With the Yahoo! site open, you can open the browser again in another window and type in another URL (for example, Google). You can open another browser window and type the URL for Cisco.com, and it will open. Three sites are open using only one IP address, because the session layer is sorting the separate requests based on the port number.

Segmentation
TCP takes data chunks from the application layers and prepares them for shipment onto the network. Each chunk is broken up into smaller segments that fit the maximum transmission unit (MTU) of the underlying network layers. UDP, being simpler, does no checking or negotiating and expects the application process to give it data that works.
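A minimal sketch of that segmentation step, assuming a typical Ethernet-derived maximum segment size of 1460 bytes:

```python
# Hypothetical helper: break an application-layer chunk into segments
# no larger than the maximum segment size (MTU minus header overhead).
def segment(data: bytes, mss: int) -> list[bytes]:
    return [data[i:i + mss] for i in range(0, len(data), mss)]

chunks = segment(b"A" * 3500, 1460)   # 1460 is a common Ethernet MSS
print([len(c) for c in chunks])       # [1460, 1460, 580]
```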

Flow Control for TCP/UDP
To govern the flow of data between devices, TCP uses a flow control mechanism. The receiving TCP reports a “window” to the sending TCP. This window specifies the number of bytes, starting with the acknowledgment number, that the receiving TCP is currently prepared to receive.

TCP window sizes are variable during the lifetime of a connection. Each acknowledgment contains a window advertisement that indicates how many bytes the receiver can accept. TCP also maintains a congestion control window that is normally the same size as the receiver’s window but is cut in half when a segment is lost (for example, when you have congestion). This approach permits the window to be expanded or contracted as necessary to manage buffer space and processing. A larger window size allows more data to be processed.

NOTE
TCP window size is documented in RFC 793, “Transmission Control Protocol,” and RFC 813, “Window and Acknowledgment Strategy in TCP,” which you can find at http://www.ietf.org/rfc.html.

In Figure 1-53, the sender sends three 1-byte packets before expecting an ACK. The receiver can handle a window size of only 2 bytes (because of available memory). So, it drops packet 3, specifies 3 as the next byte to be received, and specifies a window size of 2. The sender resends packet 3 and also sends the next 1-byte packet, but still specifies its own window size of 3. (For example, it can still accept three 1-byte packets.) The receiver acknowledges bytes 3 and 4 by requesting byte 5 and continuing to specify a window size of 2 bytes.

Many of the functions described in these sections, such as windowing and sequencing, have no meaning in UDP. Recall that UDP has no fields for sequence numbers or window sizes. Application layer protocols can provide for reliability. UDP is designed for applications that provide their own error recovery process. It trades reliability for speed.

Figure 1-53 TCP Windowing
building-simple-network-1.53

TCP, UDP, and IP and their headers are key in the communications between networks. Layer 3 devices use an internetwork protocol like TCP/IP to provide communications between remote systems.

Acknowledgment
TCP performs sequencing of segments with a forward reference acknowledgment. A forward reference acknowledgment comes from the receiving device and tells the sending device which segment the receiving device is expecting to receive next.

For the purpose of this lesson, the complex operation of TCP is simplified in a number of ways. Simple incremental numbers are used as the sequence numbers and acknowledgments, although in reality the sequence numbers track the number of bytes received. In a TCP simple acknowledgment, the sending computer transmits a segment, starts a timer, and waits for acknowledgment before transmitting the next segment. If the timer expires before receipt of the segment is acknowledged, the sending computer retransmits the segment and starts the timer again.

Imagine that each segment is numbered before transmission (remember that it is really the number of bytes that are tracked). At the receiving station, TCP reassembles the segments into a complete message. If a sequence number is missing in the series, that segment and all subsequent segments can be retransmitted. The steps involved with the acknowledgment process are as follows:

Step 1 The sender and receiver agree that each segment must be acknowledged before another can be sent. This occurs during the connection setup procedure by setting the window size to 1.
Step 2 The sender transmits segment 1 to the receiver. The sender starts a timer and waits for acknowledgment from the receiver.
Step 3 The receiver receives segment 1 and returns ACK = 2. The receiver acknowledges the successful receipt of the previous segment by stating the expected next segment number.
Step 4 The sender receives ACK = 2 and transmits segment 2 to the receiver. The sender starts a timer and waits for acknowledgment from the receiver.
Step 5 The receiver receives segment 2 and returns ACK = 3. The receiver acknowledges the successful receipt of the previous segment.
Step 6 The sender receives ACK = 3 and transmits segment 3 to the receiver. This process continues until all data is sent.
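The steps above amount to a stop-and-wait exchange, which can be modeled as follows (timers and segment loss are omitted from this toy model):

```python
# Toy stop-and-wait model: with a window of 1, each segment must be
# acknowledged before the next is sent.
def stop_and_wait(segments):
    log = []
    for seg in segments:
        log.append(f"send {seg}")
        # The receiver acknowledges by naming the segment it expects next.
        log.append(f"ack {seg + 1}")
    return log

print(stop_and_wait([1, 2, 3]))
# ['send 1', 'ack 2', 'send 2', 'ack 3', 'send 3', 'ack 4']
```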

Windowing
The TCP window controls the transmission rate at a level where receiver congestion and data loss do not occur.

Fixed Windowing
In the most basic form of reliable, connection-oriented data transfers, ignoring network congestion issues, the recipient acknowledges the receipt of each data segment to ensure the integrity of the transmission. However, if the sender must wait for an acknowledgment after sending each segment, throughput is low, depending on the round-trip time (RTT)
between sending data and receiving the acknowledgment.

Most connection-oriented, reliable protocols allow more than one segment to be outstanding at a time. This approach can work because time is available after the sender completes a segment transmission and before the sender processes any acknowledgment of receipt. During this interval, the sender can transmit more data, provided the window at the receiver is large enough to handle more than one segment at a time. The window is the number of data segments the sender is allowed to send without getting acknowledgment from the receiver, as shown in Figure 1-54.

Windowing enables a specified number of unacknowledged segments to be sent to the receiver, thereby reducing latency. Latency in this instance refers to the amount of time it takes for data to be sent and the acknowledgment to be returned.

Example: Throwing a Ball
Think of two people standing 50 feet apart. One person throws a football to the other, and that portion of the trip takes 3 seconds. The second person receives the football, throws a ball back (acknowledgment), and that portion of the trip takes 3 seconds. The round trip takes a total of 6 seconds. To do this process 3 times would take a total of 18 seconds. Now imagine that the first person has three balls and throws them one after the other. This part of the trip still takes 3 seconds. The second person throws back one ball to acknowledge the receipt of the third ball, and that portion of the trip again takes 3 seconds. The round trip takes a total of 6 seconds. (Of course, this ignores processing time and so on.)

Figure 1-54 Fixed Windowing
building-simple-network-1.54

The following steps describe the windowing process in a TCP connection:

Step 1 The sender and receiver set an initial window size: three segments before an acknowledgment must be sent. This occurs during the connection setup procedure.
Step 2 The sender transmits segments 1, 2, and 3 to the receiver. The sender transmits the segments, starts a timer, and waits for acknowledgment from the receiver.
Step 3 The receiver receives segments 1, 2, and 3 and returns ACK = 4. The receiver acknowledges the successful receipt of the previous segments.
Step 4 The sender receives ACK = 4 and transmits segments 4, 5, and 6 to the receiver. The sender transmits the segments, starts a timer, and waits for acknowledgment from the receiver.
Step 5 The receiver receives segments 4, 5, and 6 and returns ACK = 7. The receiver acknowledges the successful receipt of the previous segments.

The numbers used in this example are simplified for ease of understanding. These numbers actually represent octets (bytes) and would be increasing in much larger numbers representing the contents of TCP segments, not the segments themselves.
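The fixed-windowing exchange in the steps above can be sketched as a simple loop. Like the lesson's example, this toy model counts segments rather than bytes:

```python
# Toy model of fixed windowing: the sender transmits up to `window`
# segments, then waits for one cumulative acknowledgment.
def fixed_window_send(total_segments: int, window: int) -> list[str]:
    events = []
    next_seg = 1
    while next_seg <= total_segments:
        burst = list(range(next_seg, min(next_seg + window, total_segments + 1)))
        events.append(f"send {burst}")
        next_seg = burst[-1] + 1
        events.append(f"ack {next_seg}")   # receiver expects this segment next
    return events

print(fixed_window_send(6, 3))
# ['send [1, 2, 3]', 'ack 4', 'send [4, 5, 6]', 'ack 7']
```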

TCP Sliding Windowing

TCP uses a sliding window technique to specify the number of segments, starting with the acknowledgment number that the receiver can accept.

In fixed windowing, the window size is established and does not change. In sliding windowing, the window size is negotiated at the beginning of the connection and can change dynamically during the TCP session. A sliding window results in more efficient use of bandwidth because a larger window size allows more data to be transmitted pending acknowledgment. Also, if a receiver reduces the advertised window size to 0, this effectively stops any further transmissions until a new window greater than 0 is sent.

In Figure 1-55, the window size is 3. The sender can transmit three segments to the receiver. At that point, the sender must wait for acknowledgment from the receiver. After the receiver acknowledges receipt of the three segments, the sender can transmit three more. However, if resources at the receiver become scarce, the receiver can reduce the window size so that it does not become overwhelmed and have to drop data segments.

Figure 1-55 Sliding Windowing
building-simple-network-1.55

Each acknowledgment transmitted by the receiver contains a window advertisement that indicates the number of bytes the receiver can accept (the window size). This allows the window to be expanded or contracted as necessary to manage buffer space and processing.

TCP maintains a separate congestion window size (CWS) parameter, which is normally the same size as the window size of the receiver, but the CWS is cut in half when segments are lost. Segment loss is perceived as network congestion. TCP invokes sophisticated back off and restart algorithms so that it does not contribute to network congestion. The following steps are taken during a sliding window operation:

Step 1 The sender and the receiver exchange their initial window size values. In this example, the window size is 3 segments before an acknowledgment must be sent. This occurs during the connection setup procedure.
Step 2 The sender transmits segments 1, 2, and 3 to the receiver. The sender waits for an acknowledgment from the receiver after sending segment 3.
Step 3 The receiver receives segments 1 and 2, but now can handle a window size of only 2 (ACK = 3 WS = 2). The receiver’s processing might slow down for many reasons, such as when the CPU is searching a database or downloading a large graphic file.
Step 4 The sender transmits segments 3 and 4. The sender waits for an acknowledgment from the receiver after sending segment 4, because it then has two outstanding segments, which fills the new window size of 2.
Step 5 The receiver acknowledges receipt of segments 3 and 4, but still maintains a window size of 2 (ACK = 5 WS = 2). The receiver acknowledges the successful receipt of segments 3 and 4 by requesting transmission of segment 5.
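The exchange above, including the mid-session shrink from a window of 3 to a window of 2, can be modeled with a toy simulator (again counting segments rather than bytes, with no timers or loss):

```python
# Toy sliding-window model: the receiver may re-advertise a smaller
# window with each acknowledgment, and the sender adapts.
def sliding_window(total, advertised):
    """advertised: function mapping step number -> receiver's window size."""
    events, next_seg, step = [], 1, 0
    window = advertised(step)
    while next_seg <= total:
        burst = list(range(next_seg, min(next_seg + window, total + 1)))
        events.append(("send", burst))
        step += 1
        window = advertised(step)          # receiver re-advertises its window
        acked = min(len(burst), window)    # it accepts only what fits
        next_seg = burst[0] + acked
        events.append(("ack", next_seg, window))
    return events

# Window starts at 3, then the receiver drops it to 2, as in the steps above.
print(sliding_window(4, lambda s: 3 if s == 0 else 2))
# [('send', [1, 2, 3]), ('ack', 3, 2), ('send', [3, 4]), ('ack', 5, 2)]
```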

Maximize Throughput
The congestion windowing algorithm manages the rate of sent data. This minimizes both data drop and the time spent recovering dropped data; therefore, efficiency is improved.

Global Synchronization
While the congestion windowing algorithm improves efficiency in general, it can also have an extremely negative effect on efficiency by causing global synchronization of the TCP process. Global synchronization occurs when many senders use the same algorithm and their behavior synchronizes. The senders all perceive the same congestion and all back off at the same time. Then, because the senders are all using the same algorithm, they all come back at the same time, which creates waves of congestion.
