Chapter 2 discusses the differences between a bridge and a switch. Cisco identifies the Catalyst as a LAN switch; a switch is a more complex bridge. The switch can be configured to behave as multiple bridges by defining internal virtual bridges (i.e., VLANs). Each virtual bridge defines a new broadcast domain because no internal connection exists between them. Broadcasts for one virtual bridge are not seen by any other. Only routers (either external or internal) should connect broadcast domains together. Using a bridge to interconnect broadcast domains merges the domains and creates one giant domain. This defeats the reason for having individual broadcast domains in the first place.
Switches make forwarding decisions in the same way a transparent bridge does, but vendors offer different switching modes that determine when the switch begins to forward a frame. Three modes in particular dominate the industry: store-and-forward, cut-through, and fragment-free. Figure 3-3 illustrates the trigger point for each method.
Figure 3-3. Switching Mode Trigger Points
Each mode has advantages and trade-offs, discussed in the sections that follow. Because of the different trigger points, the effective differences between the modes lie in error handling and latency. Table 3-3 compares the approaches on these two characteristics and shows which members of the Catalyst family use the available modes.
Table 3-3. Switching Mode Comparison
| Switching Mode | Errored Frame Handling | Latency | CAT Member* |
|---|---|---|---|
| Store-and-forward | Drops all errored frames | Variable, based on frame length | 5000, 6000 |
| Cut-through | Forwards errored frames | Low fixed | — |
| Fragment-free | Drops if error detected in first 64 octets | Moderate fixed | 3000, 2820 |

*Note that when a model supports more than one switching mode, adaptive cut-through may be available. Check model specifics to confirm.
One of the objectives of switching is to provide more bandwidth to the user. Each port on a switch defines a new collision domain that offers full media bandwidth. If only one station attaches to an interface, that station has full dedicated bandwidth and does not need to share it with any other device. All the switching modes defined in the sections that follow support the dedicated bandwidth aspect of switching.
To determine the best mode for your network, consider your applications' latency requirements and your network's reliability. Do your network components or cabling infrastructure generate errors? If so, fix your network problems and use store-and-forward. Can your applications tolerate the additional latency of store-and-forward switching? If not, use cut-through switching. Note that you must use store-and-forward with the Catalyst 5000 and 6000 families of switches. This is acceptable because latency is rarely an issue, especially with high-speed links and processors and modern windowing protocols. Finally, if the source and destination segments are different media types, you must use store-and-forward mode.
The store-and-forward switching mode receives the entire frame before beginning the switching process. When it receives the complete frame, the switch examines it for the source and destination addresses and any errors it may contain, and then applies any special filters created by the network administrator to modify the default forwarding behavior. If the switch observes any errors in the frame, the switch discards it, preventing errored frames from consuming bandwidth on the destination segment. If your network experiences a high rate of frame alignment or FCS errors, the store-and-forward switching mode may be best. The absolute best solution, though, is to fix the cause of the errors; using store-and-forward in this case is simply a bandage, not the fix.
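The store-and-forward decision can be sketched in a few lines. This is a minimal illustration, not a switch implementation: it assumes the last four octets of the received frame carry the Ethernet FCS (a CRC-32 over the preceding octets) and ignores the administrator-defined filters mentioned above. The function name is hypothetical.

```python
import zlib

def store_and_forward(frame: bytes) -> bool:
    """Return True to forward the frame, False to drop it.

    Sketch only: treats the last 4 octets as the Ethernet FCS,
    a CRC-32 computed over the rest of the frame.
    """
    if len(frame) < 64:                # runt/fragment: below minimum frame size
        return False
    body, fcs = frame[:-4], frame[-4:]
    computed = zlib.crc32(body).to_bytes(4, "little")
    return computed == fcs             # drop on FCS mismatch
```

Because the check runs only after the whole frame has arrived, an errored frame never reaches the destination segment, which is exactly the property the text describes.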
If your source and destination segments use different media, then you must use this mode. Different media often have issues when transferring data. The section “Source-Route Translation Bridging” discusses some of these issues. Store-and-forward mode is necessary to resolve this problem in a bridged environment.
Because the switch must receive the entire frame before it can start to forward, transfer latency varies based on frame size. In a 10BaseT network, for example, the minimum frame of 64 octets takes 51.2 microseconds to receive. At the other extreme, a 1518-octet frame requires at least 1.2 milliseconds. Latency for 100BaseX (Fast Ethernet) networks is one-tenth of the 10BaseT figures.
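The latency figures above follow directly from the serialization delay: the number of bits in the frame divided by the link rate. A quick sketch of the arithmetic (function name is illustrative):

```python
def serialization_delay_us(frame_octets: int, link_mbps: float) -> float:
    """Time to receive a frame off the wire, in microseconds.

    A store-and-forward switch must absorb this entire delay
    before it can begin forwarding.
    """
    return frame_octets * 8 / link_mbps

serialization_delay_us(64, 10)     # minimum 10BaseT frame -> 51.2 us
serialization_delay_us(1518, 10)   # maximum frame -> 1214.4 us (~1.2 ms)
serialization_delay_us(1518, 100)  # Fast Ethernet -> one-tenth: 121.44 us
serialization_delay_us(6, 10)      # destination address only -> 4.8 us
```

The last line previews the cut-through case: waiting only for the six-octet destination address costs 4.8 microseconds on 10BaseT.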
Cut-through mode enables a switch to start the forwarding process as soon as it receives the destination address. This reduces latency to the time necessary to receive the six-octet destination address: 4.8 microseconds. But cut-through cannot check a frame for errors before forwarding it. Errored frames pass through the switch, wasting bandwidth on the destination segment; the receiving device ultimately discards them.
As network and internal processor speeds increase, the latency issues become less relevant. In high speed environments, the time to receive and process a frame reduces significantly, minimizing advantages of cut-through mode. Store-and-forward, therefore, is an attractive choice for most networks.
Some switches support both cut-through and store-and-forward mode. Such switches usually offer a third mode called adaptive cut-through. These multimodal switches use cut-through as the default switching mode and selectively activate store-and-forward. The switch monitors each frame as it passes through, looking for errors. Although the switch cannot stop an errored frame, it counts how many it sees. If the switch observes that too many frames contain errors, it automatically activates store-and-forward mode. Adaptive cut-through has the advantage of providing low latency while the network operates well, along with automatic protection for the outbound segment if the inbound segment experiences problems.
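The fallback logic can be sketched as an error counter evaluated over a window of frames. The class name, window size, and threshold below are illustrative assumptions; real switches use vendor-specific values and may also switch back to cut-through when the error rate drops.

```python
class AdaptiveCutThrough:
    """Sketch of adaptive cut-through: default to cut-through and
    fall back to store-and-forward once too many errored frames
    are observed within a monitoring window."""

    def __init__(self, window: int = 100, max_errors: int = 10):
        self.window = window          # frames per monitoring window
        self.max_errors = max_errors  # error threshold for fallback
        self.frames = 0
        self.errors = 0
        self.mode = "cut-through"

    def observe(self, frame_had_error: bool) -> str:
        """Record one forwarded frame; return the current mode."""
        self.frames += 1
        self.errors += int(frame_had_error)
        if self.frames >= self.window:
            if self.errors > self.max_errors:
                self.mode = "store-and-forward"
            self.frames = self.errors = 0   # start a new window
        return self.mode
```

Note that, as the text says, the switch cannot stop the errored frames it counts; the fallback only protects frames forwarded after the threshold is crossed.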
Another alternative offers some of the advantages of both cut-through and store-and-forward switching. Fragment-free switching behaves like cut-through in that it does not wait for an entire frame before forwarding. Rather, fragment-free forwards a frame after it receives the first 64 octets (longer than the six octets for cut-through, and therefore higher latency). Fragment-free switching protects the destination segment from fragments, an artifact of half-duplex Ethernet collisions. In a correctly designed Ethernet system, devices detect a collision before the source finishes transmitting the first 64 octets of its frame (this is driven by the slotTime described in Chapter 1).
When a collision occurs, a fragment (a frame less than 64 octets long) is created. This is a useless Ethernet frame, and in the store-and-forward mode, it is discarded by the switch. In contrast, a cut-through switch forwards the fragment if at least a destination address exists. Because collisions must occur during the first 64 octets, and because most frame errors will show up in these octets, the fragment-free mode can detect most bad frames and discard them rather than forward them. Fragment-free has a higher latency than cut-through, however, because it must wait for an additional 58 octets before forwarding the frame. As described in the section on cut-through switching, the advantages of fragment-free switching are minimal given the higher network speeds and faster switch processors.
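The contrast between the three modes comes down to how much of the frame must arrive, and whether the FCS is checked, before forwarding. The following sketch makes that comparison concrete; the function name is hypothetical, and `fcs_ok` stands in for the error check a real switch performs on the received octets.

```python
def forwarding_decision(frame: bytes, mode: str, fcs_ok: bool) -> bool:
    """Illustrative comparison of when each switching mode forwards.

    Frame boundaries follow the text: the 6-octet destination
    address for cut-through, the 64-octet minimum for the others.
    """
    if mode == "cut-through":
        return len(frame) >= 6              # forwards once the DA has arrived
    if mode == "fragment-free":
        return len(frame) >= 64             # collision fragments are shorter
    if mode == "store-and-forward":
        return len(frame) >= 64 and fcs_ok  # whole frame received and valid
    raise ValueError(f"unknown mode: {mode}")

fragment = bytes(30)   # a 30-octet collision fragment
forwarding_decision(fragment, "cut-through", fcs_ok=False)    # forwarded: wastes bandwidth
forwarding_decision(fragment, "fragment-free", fcs_ok=False)  # dropped
```

A collision fragment carries a destination address, so cut-through forwards it, while fragment-free drops anything shorter than 64 octets; only store-and-forward also rejects a full-length frame with a bad FCS.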