A Brief ATM Tutorial

This section looks at some basic ATM concepts and terminology before diving into the deep waters of LANE. The principal concepts covered here include the following:

  • Understanding ATM Cells (Five Questions about Cells That You Were Always Afraid to Ask)
  • ATM is Connection Oriented
  • ATM Addressing
  • ATM Devices
  • ATM Overhead Protocols
  • When to Use ATM

Understanding ATM Cells: Five Questions about Cells (That You Were Always Afraid to Ask)

The single most important characteristic of ATM is that it uses cells. Whereas other technologies transport large and variable-length units of data, ATM is completely based around small, fixed-length units of data called cells.

Why Cells?

Most readers are probably aware that ATM uses fixed-length packages of data called cells. But what’s the big deal about cells? It takes quite a lot of work for network device hardware to cellify all of its data, so what is the payoff to justify all of this extra work and complexity? Fortunately, cells do have many advantages, including the following:

  • High throughput
  • Advanced statistical multiplexing
  • Low latency
  • Facilitate multiservice traffic (voice, video, and data)

Each of these advantages of ATM cells is addressed in the sections that follow.

High Throughput

High throughput has always been one of the most compelling benefits of ATM. At the time ATM was conceived, routers were slow devices that required software-based processing to handle the complex variable-length and variable-format multiprotocol (IP, IPX, and so forth) traffic. The variable data lengths resulted in inefficient processing and many complex buffering schemes (to illustrate this point, just issue a show buffers command on a Cisco router). The variable data formats required every Layer 3 protocol to utilize a different set of logic and routing procedures. Run this on a general-purpose CPU and the result is a low-throughput device.

The ATM cell was designed to address both of these issues. Because cells are fixed in length, buffering becomes a trivial exercise of simply carving up buffer memory into fixed-length cubbyholes. Because cells have a fixed-format, 5-byte header, switch processing is drastically simplified. The result: it becomes much easier to build very high-speed, hardware-based switching mechanisms.

Advanced Statistical Multiplexing

Large phone and data carriers funded the majority of early ATM research. One of the key factors that motivated carriers to explore ATM was their desire to improve utilization and statistical multiplexing in their networks. At the time, it was very common (and still is) to link sites using channelized T1 circuits (or E1 circuits outside the United States). Figure 9-1 illustrates a typical use of this approach.

Figure 9-1. A Typical Channelized Network

Figure 9-1 illustrates a small corporate network with three sites: headquarters is located in New York City with two remote sites in Washington, DC and Los Angeles. The NY site has a single T1 to the carrier’s nearest Central Office (CO). This T1 has been channelized into two sections: one channel to DC and another to LA.

T1 technology uses something called time-division multiplexing (TDM) to allow up to 24 voice conversations to be carried across a single 4-wire circuit. Each of these 24 conversations is assigned a timeslot that allows it to send 8 bits of information at a time (typically, these 8 bits are pulse code modulation [PCM] digital representations of human voice conversations). Repeat this pattern 8,000 times per second and you have the illusion that all 24 conversations are using the wire at the same time. Also note that this results in each timeslot receiving 64,000 bits/second (bps) of bandwidth (8 bits per timeslot × 8,000 frames per second).

However, the data network in Figure 9-1 does not call for 24 low-bandwidth connections—the desire is for two higher-bandwidth connections. The solution is to group the timeslots into two bundles. For example, a 14-timeslot bundle can be used to carry data to the remote site in DC, whereas the remaining 10 timeslots are used for data traveling to the remote site in LA. Because each timeslot represents 64 Kbps of bandwidth, DC is allocated 896 Kbps and LA receives 640 Kbps. This represents a form of static multiplexing. It allows two connections to share a single link, but it prevents a dynamic reconfiguration of the 896/640 bandwidth split. In other words, if no traffic is being transferred between NY and LA, the NY-to-DC circuit is still limited to 896 Kbps. The 640 Kbps of bandwidth allocated to the other link is wasted.
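The timeslot arithmetic above can be verified with a few lines of Python:

```python
# TDM arithmetic from the text: each timeslot carries 8 bits,
# repeated 8,000 times per second.
BITS_PER_TIMESLOT = 8
FRAMES_PER_SECOND = 8000

timeslot_bps = BITS_PER_TIMESLOT * FRAMES_PER_SECOND  # 64,000 bps per timeslot
dc_bundle_bps = 14 * timeslot_bps                     # 14 timeslots to DC
la_bundle_bps = 10 * timeslot_bps                     # 10 timeslots to LA

print(timeslot_bps, dc_bundle_bps, la_bundle_bps)     # 64000 896000 640000
```

Note that the split is fixed at configuration time: no matter what the traffic looks like, the DC bundle never gets more than 896 Kbps.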

Figure 9-2 shows an equivalent design utilizing ATM.

Figure 9-2. An ATM-Based Network Using Virtual Circuits

In Figure 9-2, the NY office still has a T1 to the local CO. However, this T1 line is unchannelized—it acts as a single pipe to the ATM switch sitting in the CO. The advantage of this approach is that cells are only sent when there is a need to deliver data (other than some overhead cells). In other words, if no traffic is being exchanged between the NY and LA sites, 100 percent of the bandwidth can be used to send traffic between NY and DC. Seconds later, 100 percent of the bandwidth might be available between NY and LA. Notice that, although the T1 still delivers a constant flow of 1,544,000 bps, this fixed amount of bandwidth is better utilized because there are no hard-coded multiplexing patterns that inevitably lead to unused bandwidth. The result is a significant improvement over the static multiplexing configured in Figure 9-1. In fact, many studies have shown that cell multiplexing can double the overall bandwidth utilization in a large network.

Low Latency

Latency is a measurement of the time that it takes to deliver information from the source to the destination. Latency comes from two primary sources:

  • Propagation delay
  • Switching delay

Propagation delay is based on the amount of time that it takes for a signal to travel over a given type of media. In most types of copper and fiber optic media, signals travel at approximately two-thirds the speed of light (in other words, about 200,000 kilometers per second). Because this delay is ultimately controlled by the speed of light, propagation delay cannot be eliminated or minimized (unless, of course, the two devices are moved closer together).
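A quick sketch of the propagation-delay math; the NY-to-LA fiber distance used here is an illustrative assumption:

```python
# Signals travel at roughly two-thirds the speed of light in copper
# and fiber, or about 200,000 kilometers per second.
PROPAGATION_KM_PER_SEC = 200_000

def propagation_delay_ms(distance_km):
    """One-way propagation delay in milliseconds."""
    return distance_km / PROPAGATION_KM_PER_SEC * 1000

# A rough NY-to-LA fiber run of 4,000 km is an illustrative assumption.
print(propagation_delay_ms(4000))  # 20.0 ms, no matter how fast the switches are
```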

Switching delay results from the time it takes for data to move through some internetworking device. Two factors come into play here:

  • The length of the frame— If very large frames are in use, it takes a longer period of time for the last bit of a frame to arrive after the first bit arrives.
  • The switching mechanics of the device— Software-based routers can add several hundred microseconds of delay during the routing process, whereas hardware-based devices can make switching and routing decisions in only several microseconds.

Cells are an attempt to address both of these issues simultaneously. Because cells are small, the difference between the arrival time of the first and last bit is minimized. Because cells are of a fixed size and format, they readily allow for hardware-based optimizations.
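To see how much the small cell size helps, compare the serialization delay of one cell against a maximum-size Ethernet frame; the 155.52-Mbps OC-3c rate used below is simply a representative ATM link speed:

```python
# Serialization delay: the gap between the first and last bit of a unit
# of data arriving on a link.
def serialization_delay_us(size_bytes, link_bps):
    return size_bytes * 8 / link_bps * 1_000_000

LINK_BPS = 155_520_000  # OC-3c line rate, a common ATM speed

cell_us = serialization_delay_us(53, LINK_BPS)     # one ATM cell
frame_us = serialization_delay_us(1518, LINK_BPS)  # maximum Ethernet frame
print(round(cell_us, 2), round(frame_us, 2))
```

The cell finishes arriving almost 30 times sooner than the full-size frame.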

Facilitates Multiservice Traffic

One of the most touted benefits of ATM is its capability to simultaneously support voice, video, and data traffic over the same switching infrastructure. Cells play a large part in making this possible by allowing all types of traffic to be put in a single, ubiquitous container.

Part of this multiservice benefit is derived from points already discussed. For example, one of the biggest challenges facing voice over IP is the large end-to-end latency present in most existing IP networks. The low latency of cell switching allows ATM to easily accommodate existing voice applications. In addition, the advanced multiplexing of cells allows ATM to instantly make bandwidth available to data applications when video and voice traffic is reduced (either through compression or tearing down unused circuits). Furthermore, the small size of cells prevents data logjams that can result in many other architectures when small packets clump up behind large packets (much like small cars clumping up behind trucks on the highway).

Why 53 Bytes?

As discussed in the previous section, every cell is a fixed-length container of information. After considerable debate, the networking community settled on a 53-byte cell. This 53-byte unit is composed of two parts: a 5-byte header and a 48-byte payload. However, given that networking engineers have a long history of using even powers of two, 53 bytes seems like a very strange number. As it turns out, 53 bytes was the result of an international compromise. In the late 1980s, the European carriers wanted to use ATM for voice traffic. Given the tight latency constraints required by voice, the Europeans argued that a 32-byte cell payload would be most useful. U.S. carriers, interested in using ATM for data traffic, were more interested in the efficiency that would be possible with a larger, 64-byte payload. The two groups compromised on the mid-point value, resulting in the 48-byte payload still used today. The groups then debated the merits of various header sizes. Although several sizes were proposed, the 5-byte header was ultimately chosen.
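The compromise arithmetic is easy to check:

```python
# The 53-byte cell: a 5-byte header plus the 48-byte payload that split
# the difference between the 32-byte and 64-byte proposals.
HEADER_BYTES, PAYLOAD_BYTES = 5, 48
CELL_BYTES = HEADER_BYTES + PAYLOAD_BYTES

assert (32 + 64) // 2 == PAYLOAD_BYTES  # the international compromise

efficiency = PAYLOAD_BYTES / CELL_BYTES
print(CELL_BYTES, round(efficiency * 100, 1))  # 53 bytes, 90.6% of it payload
```

The roughly 9.4 percent consumed by headers is often called the "cell tax."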

How Does an IP Packet Fit Inside a Cell?

To answer this question, this section examines the three-step process that ATM uses to transfer information:

  1. Slice & Dice
  2. Build Header
  3. Ship Cells

Each of these steps equates to a layer in the ATM stack shown in Figure 9-3.

Figure 9-3. Three-Layer ATM Stack

First, ATM must obviously chop up large IP packets before transmission. The technical term for this function is the ATM Adaptation Layer (AAL); however, I use the more intuitive term Slice & Dice Layer. The purpose of the Slice & Dice Layer is to act like a virtual Cuisinart® that chops up large data into small, fixed-size pieces. This is frequently referred to as SAR, a term that stands for Segmentation And Reassembly, and accurately portrays this Slice & Dice function (it is also one of the two main functions performed by the AAL).

Just as Cuisinarts are available with a variety of different blades, the AAL blade can Slice & Dice in a variety of ways. In fact, this is exactly how ATM accommodates voice, video, and data traffic over a common infrastructure. In other words, the ATM Adaptation Layer adapts all types of traffic into common ATM cells. However, regardless of which Slice & Dice blade is in use, the AAL is guaranteed to pass a fixed-length, 48-byte payload down to the next layer, the ATM layer.

The middle layer in the ATM stack, the ATM layer, receives the 48-byte slices created by the AAL. Note the potential for confusion here: all three layers form the ATM stack, but the middle layer represents the ATM layer of the ATM stack. This layer builds the 5-byte ATM cell header, the heart of the entire ATM process. The primary function of this header is to identify the remote ATM device that should receive each cell. After this layer has completed its work, the cell is guaranteed to be 53 bytes in length.
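The cooperation between these two layers can be sketched as a toy model. This is not real AAL5 (which appends a trailer and CRC before slicing), and the 5-byte header built here is simplified to carry only the VPI/VCI values:

```python
# Toy model: the AAL slices a packet into 48-byte payloads (padding the
# final slice), and the ATM layer prepends a simplified 5-byte header.
def slice_and_dice(packet: bytes, payload_size: int = 48):
    """AAL step: pad the packet out and cut it into fixed-length payloads."""
    packet += b"\x00" * ((-len(packet)) % payload_size)
    return [packet[i:i + payload_size] for i in range(0, len(packet), payload_size)]

def build_cells(packet: bytes, vpi: int, vci: int):
    """ATM-layer step: attach a simplified 5-byte header to every payload."""
    header = vpi.to_bytes(1, "big") + vci.to_bytes(2, "big") + b"\x00\x00"
    return [header + payload for payload in slice_and_dice(packet)]

cells = build_cells(b"A" * 100, vpi=0, vci=50)
print(len(cells), len(cells[0]))  # 3 cells, 53 bytes each
```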

  • Note
    Technically, there is a small exception to the statement that the ATM Layer always passes 53-byte cells to the physical layer. In some cases, the physical layer is used to calculate the cell header’s CRC, requiring only 52-byte transfers. In practice, this minor detail can be ignored.

At this point, the cells are ready to leave the device in a physical layer protocol. This physical layer acts like a shipping department for cells. The vast majority of campus ATM networks use Synchronous Optical Network (SONET) as a physical layer transport. SONET was developed as a high-speed alternative to the T1s and E1s discussed earlier in this chapter.

  • Note
    SONET is very similar to T1s in that it is a framed, physical layer transport mechanism that repeats 8,000 times per second and is used for multiplexing across trunk links. On the other hand, they are very different in that SONET operates at much higher bit rates than T1s while also maintaining much tighter timing (synchronization) parameters. SONET was devised to provide efficient multiplexing of T1, T3, E1, and E3 traffic.

Think of a SONET frame as a large, 810-byte moving van that leaves the shipping dock every 1/8000th of a second (810 bytes is the smallest/slowest version of SONET; higher speeds use an even larger frame!). The ATM Layer is free to pack as many cells as it can fit into each of these 810-byte moving vans. On a slow day, many of the moving vans might be almost empty. However, on a busy day, most of the vans are full or nearly full.
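The moving-van numbers work out as follows:

```python
# An STS-1 SONET frame is 810 bytes and leaves every 1/8000th of a second.
FRAME_BYTES = 810
FRAMES_PER_SECOND = 8000

line_rate_bps = FRAME_BYTES * 8 * FRAMES_PER_SECOND
print(line_rate_bps)  # 51840000 -- the standard 51.84-Mbps STS-1/OC-1 rate
```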

One of the most significant advantages of ATM is that it doesn’t require any particular type of physical layer. That is, it is media independent. Originally, ATM was designed to run only over SONET. However, the ATM Forum wisely recognized that this would severely limit ATM’s potential for growth and acceptance, and developed standards for many different types and speeds of physical layers. In fact, the Physical Layer Working Group of the ATM Forum has been its most prolific group. Currently, ATM runs over just about any media this side of barbed wire.

Why Is It Asynchronous Transfer Mode?

The term Asynchronous Transfer Mode has created significant confusion. Many ask, “What’s asynchronous about it? Is this a throwback to using start and stop bits like my modem?”

ATM is asynchronous in the sense that cells are generated asynchronously on an as-needed basis. If two ATM devices have a three-second quiet period during an extended file transfer, ATM does not need to send any cells during this three-second interval. This unused bandwidth is instantly available to any other devices sharing that link. In other words, it is the asynchronous nature of ATM that delivers the advanced statistical multiplexing capabilities discussed earlier.

This confusion is further complicated by the common use of SONET as a physical layer for ATM. Because Synchronous Optical Network is obviously synchronous, how can ATM be asynchronous if SONET is in use? To answer this question, apply the logic of the previous paragraph: ATM is only referring to the generation of cells as being asynchronous. How cells get shipped from point A to point B is a different matter. In the case of SONET, the shipment from point A to point B is most definitely synchronous in that SONET moving vans (frames) leave the shipping dock (the ATM device) at exactly 1/8000th of a second intervals. However, filling these moving vans is done on an as-needed or asynchronous basis. Again, this is exactly what gives ATM its amazing statistical multiplexing capabilities. All of the empty space in the moving van is instantly available to other devices and users in the network.

What Is the Difference Between ATM and SONET?

As discussed in the previous sections, ATM and SONET are closely related technologies. In fact, they were originally conceived to always be used together in a global architecture called BISDN (Broadband Integrated Services Digital Network). However, they have been developed (and often implemented) as two different technologies. ATM is a cloud technology that deals with the high-speed generation and transportation of fixed-size units of information called cells. SONET, on the other hand, is a point-to-point technology that deals with the high-speed transportation of anything, including ATM cells.

In other words, referring back to the three-layer model, ATM makes the cells and SONET ships the cells.

  • Tip
    ATM and SONET are two different concepts. ATM deals with cells and SONET is simply one of the many physical layers available for moving ATM cells from point to point.

ATM Is Connection Oriented

ATM is always connection oriented. Before any data can be exchanged, the network must negotiate a connection between two endpoints. ATM supports two types of connections:

  • Permanent virtual circuits (PVCs)
  • Switched virtual circuits (SVCs)

PVCs act like virtual leased lines in that the circuits are always active. PVCs are manually built based on human intervention (either via a command-line interface [CLI] or some action at a management console).

SVCs are “dialup” ATM connections. Think of them as ATM phone calls. When two devices require connectivity via an SVC, one device signals the ATM network to build this temporary circuit. When the connection is no longer required, one of the devices can destroy the SVC via signaling.

ATM always requires that one of these two types of circuits be built. At the cell layer, it is not possible for ATM to provide a connectionless environment like Ethernet. However, it is entirely possible for this connection-oriented cell layer to emulate a connectionless environment by providing a connectionless service at some higher layer. The most common example of such a connectionless service is LANE, the very subject of this chapter.

ATM supports two configurations for virtual circuits (VCs):

  1. Point-to-point Virtual Circuits
  2. Point-to-multipoint Virtual Circuits

Figure 9-4 illustrates both types of circuits.

Figure 9-4. Point-to-Point and Point-to-Multipoint VCs

Point-to-point virtual circuits behave exactly as the name suggests: one device can be located at each end of the circuit. This type of virtual circuit is also very common in technologies such as Frame Relay. These circuits support bi-directional communication—that is, both end points are free to transmit cells.

Point-to-multipoint virtual circuits allow a single root node to send cells to multiple leaf nodes. Point-to-multipoint circuits are very efficient for this sort of one-to-many communication because they allow the root to generate a given message only once. It then becomes the duty of the ATM switches to pass a copy of the cells that comprise this message to all leaf nodes. Because of their unidirectional nature, point-to-multipoint circuits only allow the root to transmit. If the leaf nodes need to transmit, they need to build their own virtual circuits.

  • Tip
    Do not confuse the root of a point-to-multipoint ATM VC with the Spanning Tree Root Bridge and Root Port concepts discussed in Chapter 6, “Understanding Spanning Tree.” They are completely unrelated concepts.

ATM Addressing

As with all other cloud topologies, ATM needs some method to identify the intended destination for each unit of information (cell) that gets sent. Unlike most other topologies, ATM actually uses two types of addresses to accomplish this task: Virtual Path Indicator/Virtual Channel Indicator (VPI/VCI) addresses and Network Services Access Point (NSAP) addresses, both of which are discussed in greater detail in the sections that follow.

VPI/VCI Addresses

VPI/VCI, the first type of address used by ATM, is placed in the 5-byte header of every cell. This address actually consists of two parts: the Virtual Path Indicator (VPI) and the Virtual Channel Indicator (VCI). They are typically written with a slash separating the VPI and the VCI values—for example, 0/100. The distinction between VPI and VCI is not important to a discussion of LANE. Just remember that together these two values are used by an ATM edge device (such as a router) to indicate to ATM switches which virtual circuit a cell should follow. For example, Figure 9-5 adds VPI/VCI detail to the network illustrated in Figure 9-2 earlier.

Figure 9-5. VPI/VCI Usage in an ATM Network

The NY router uses a single physical link connected to Port 0 on the NY ATM switch carrying both virtual circuits. How, then, does the ATM switch know where to send each cell? It simply makes decisions based on VPI/VCI values placed in ATM cell headers by the router. If the NY router (the ATM edge device) places the VPI/VCI value 0/50 in the cell header, the ATM switch uses a preprogrammed table indicating that the cell should be forwarded out Port 2, sending it to LA.

Also, note that this table needs to instruct the switch to convert the VPI/VCI value to 0/51 as the cells leave the Port 1 interface (only the VCI is changed). The ATM switch in LA has a similar table indicating that the cell should be switched out Port 1 with a VPI/VCI value of 0/52. However, if the NY router originates a cell with the VPI/VCI value 0/65, the NY ATM switch forwards the cell to DC. The ATM switching table in the NY ATM switch would contain the entries listed in Table 9-1.

Table 9-1. Switching Table in NY ATM Switch

Input                    Output
Port   VPI   VCI         Port   VPI   VCI
0      0     50          2      0     51
0      0     65          1      0     185
1      0     185         0      0     65
2      0     51          0      0     50

Notice the previous paragraph mentions that the ATM switch had been preprogrammed with this switching table. How this preprogramming happens depends on whether the virtual circuit is a PVC or an SVC. In the case of a PVC, the switching table is programmed through human intervention (for example, through a command-line interface). However, with SVCs, the table is built dynamically at the time the call is established.
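The switching table in Table 9-1 can be modeled as a simple lookup keyed on the input port and VPI/VCI, which is essentially what the switch hardware does for every arriving cell:

```python
# The NY ATM switch's table as a dictionary mapping
# (input port, VPI, VCI) -> (output port, VPI, VCI).
switching_table = {
    (0, 0, 50):  (2, 0, 51),   # from the router, toward LA
    (0, 0, 65):  (1, 0, 185),  # from the router, toward DC
    (1, 0, 185): (0, 0, 65),   # from DC, toward the router
    (2, 0, 51):  (0, 0, 50),   # from LA, toward the router
}

def switch_cell(in_port, vpi, vci):
    """Look up the circuit and return the rewritten (port, VPI, VCI)."""
    return switching_table[(in_port, vpi, vci)]

print(switch_cell(0, 0, 50))  # (2, 0, 51): the cell heads for LA
```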

NSAPs

The previous section mentioned that SVCs build the ATM switching tables dynamically. This requires a two-step process:

  1. The ATM switch must select a VPI/VCI value for the SVC.
  2. The ATM switch must determine where the destination of the call is located.

Step 1 is a simple matter of having the ATM switch look in the table to find a value currently not in use on that port (in other words, the same VPI/VCI can be in use on every port of the same switch; it just cannot be used twice on the same port).
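Step 1 can be sketched as a linear scan for an unused value on the port. Starting the scan at VCI 32 is an assumption here; real switches reserve the low VCI values for overhead circuits such as signaling and ILMI:

```python
# Step 1 of SVC setup: find a VPI/VCI pair not already in use on a port.
def pick_free_vci(in_use_on_port, vpi=0, first_vci=32, last_vci=1023):
    for vci in range(first_vci, last_vci + 1):
        if (vpi, vci) not in in_use_on_port:
            return vpi, vci
    raise RuntimeError("no free VPI/VCI left on this port")

port1_in_use = {(0, 32), (0, 33)}
print(pick_free_vci(port1_in_use))  # (0, 34)
```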

To understand Step 2, consider the following example. If the NY router places an SVC call to DC, how does the New York ATM switch know that DC is reachable out Port 1, not Port 2? The details of this process involve a complex protocol called Private Network-Network Interface (PNNI) that is briefly discussed later in this chapter. For now, just remember that the NY switch utilizes an NSAP address to determine the intended destination. NSAP addresses function very much like regular telephone numbers.

Just as every telephone on the edge of a phone network gets a unique phone number, every device on an ATM network gets a unique NSAP address. Just as you must dial a phone number to call your friend named Joe, an ATM router must signal an NSAP address to call a router named DC. Just as you can look at a phone number to determine the city and state in which the phone is located, an NSAP tells you where the router is located.

However, there is one important difference between traditional phone numbers and NSAP addresses: the length. NSAPs are fixed at 20 bytes in length. When written in their standard hexadecimal format, these addresses are 40 characters long! Try typing in a long list of NSAP addresses and you quickly learn why an ATM friend of mine refers to NSAPs as Nasty SAPs! You also learn two other lessons: the value of cut-and-paste and that even people who understand ATM can have friends (there’s hope after all!).

NSAP addresses consist of three sections:

  • A 13-byte prefix. This is a value that uniquely identifies every ATM switch in the network. Logically, it functions very much like the area code and exchange of a U.S. phone number (for example, the 703-242 of 703-242-1111 identifies a telephone switch in Vienna, Virginia). Cisco’s campus ATM switches are preconfigured with 47.0091.8100.0000, followed by a unique MAC address that gets assigned to every switch. For example, a switch that contains the MAC address 0010.2962.E801 uses a prefix of 47.0091.8100.0000.0010.2962.E801. No other ATM switch in your network can use this prefix.
  • A 6-byte End System Identifier (ESI). This value identifies every device connected to an ATM switch. A MAC address is typically used (but not required) for this value. Logically, it functions very much like the last four digits of a U.S. phone number (for example, the 1111 of 703-242-1111 identifies a particular phone attached to the Vienna, Virginia telephone switch).
  • A 1-byte selector byte. This value identifies a particular software process running in an ATM-attached device. It functions very much like an extension number associated with a telephone number (for example, 222 in 703-242-1111 x222). Cisco devices typically use a subinterface number for the selector byte.

Figure 9-6 illustrates the ATM NSAP format.

Figure 9-6. ATM NSAP Format

An actual NSAP appears as follows:

47.0091.8100.0000.0060.8372.56A1 . 0000.0c33.BFC1 . A1

Extra spaces have been inserted to clearly delineate the three sections. The pattern of dots above is optional. If you are a particularly adept typist, feel free to completely omit the dots. However, most people find a pattern such as the one above to be a very useful typing aid.
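Given the fixed 13/6/1-byte layout, splitting an NSAP into its three sections is mechanical; the dots are purely cosmetic and can simply be stripped:

```python
# An NSAP is always 40 hex characters: 26 for the prefix, 12 for the ESI,
# and 2 for the selector byte. The dots are only a typing aid.
def parse_nsap(nsap: str):
    digits = nsap.replace(".", "").replace(" ", "")
    assert len(digits) == 40, "an NSAP is always 40 hex characters"
    return {
        "prefix": digits[:26],     # 13 bytes: identifies the ATM switch
        "esi": digits[26:38],      # 6 bytes: identifies the edge device
        "selector": digits[38:],   # 1 byte: identifies a software process
    }

parts = parse_nsap("47.0091.8100.0000.0060.8372.56A1.0000.0c33.BFC1.A1")
print(parts["esi"], parts["selector"])  # 00000c33BFC1 A1
```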

Using NSAP and VPI/VCI Addresses Together

In short, the purpose of the two types of ATM addresses can be summarized as follows:

  • NSAP addresses are used to build SVCs.
  • VPI/VCIs are used after the circuit (SVC or PVC) has already been built to deliver cells across the circuit.

  • Tip
    ATM NSAP addresses are only used to build an SVC. After the SVC is built, only VPI/VCI addresses are used.

Figure 9-7 illustrates the relationship between NSAP and VPI/VCI addresses. The NSAPs represent the endpoints whereas VPI/VCIs are used to address cells as they cross each link.

Figure 9-7. Using NSAP Addresses to Build VPI/VCI Values for SVCs

Table 9-2 documents the characteristics of NSAP and VPI/VCI addresses for easy comparison.

Table 9-2. NSAP Versus VPI/VCI Address Characteristics

NSAP Addresses                                       VPI/VCI Addresses
40 hex characters (20 bytes) in length               24 to 28 bits in length (depending on what
                                                     type of link the cell is traversing)
Globally significant (that is, globally unique)      Locally significant (that is, only need to be
                                                     unique to a single link)
Physically reside in the signaling messages          Physically reside in the 5-byte header of
used to build SVCs                                   every ATM cell
Not used for PVCs                                    Used for PVCs and SVCs

ATM Device Types

ATM devices can be divided into one of two broad categories, each of which is covered in greater detail in the sections that follow:

  • ATM edge devices
  • ATM switches

ATM Edge Devices

ATM edge devices include equipment such as workstations, servers, PCs, routers, and switches with ATM interface cards (for example, a Catalyst 5000 containing a LANE module). These devices act as the termination points for ATM PVCs and SVCs. In the case of routers and switches, edge devices must also convert from frame-based media such as Ethernet to ATM cells.

ATM Switches

ATM switches only handle ATM cells. Cisco’s campus switches include the LightStream LS1010 and 8500 MSR platforms (Cisco also sells several carrier-class switches developed as a result of their Stratacom acquisition). ATM switches are the devices that contain the ATM switching tables referenced earlier. They also contain advanced software features (such as PNNI) to allow calls to be established and high-speed switching fabrics to shuttle cells between ports. Except for certain overhead circuits, ATM switches generally do not act as the termination point of PVCs and SVCs. Rather, they act as the intermediary junction points that exist for the circuits connected between ATM edge devices.

Figure 9-8 illustrates the difference between edge devices and ATM switches.

Figure 9-8. Catalyst ATM Edge Devices Convert Frames to Cells, Whereas ATM Switches Handle Only Cells

  • Tip
    Remember the difference between ATM edge devices and ATM switches. ATM edge devices sit at the edge of the network, but ATM switches actually are the ATM network.

Products such as the Catalyst 5500 and 8500 actually support multiple functions in the same chassis. The 5500 supports LS1010 ATM switching in its bottom five slots while simultaneously accommodating one to seven LANE modules in the remaining slots. However, it is often easiest to think of these as two separate boxes that happen to share the same chassis and power supplies. The bottom five slots (slots 9–13) accommodate ATM switch modules while slots 2–12 accommodate ATM edge device modules (note that slots 9–12 can support either service).

  • Tip
    Cisco sells several other devices that integrate ATM and frame technologies in a single platform in a variety of different ways. These include the Catalyst 5500 Fabric Integration Module (FIM), the Catalyst 8500 MSR, and the ATM Router Module (ARM). See Cisco’s Product Catalog for more information on these devices.

ATM Overhead Protocols

Although ATM theory can be extremely complex, the good news is that it can be amazingly easy to implement in most networks. This plug-and-play nature is due in large part to two automation protocols: Integrated Local Management Interface (ILMI) and Private Network-Network Interface (PNNI).

ILMI

Integrated Local Management Interface (ILMI) is a protocol created by the ATM Forum to handle various automation responsibilities. Initially called the Interim Local Management Interface, ILMI utilizes SNMP to allow ATM devices to “automagically” learn the configuration of neighboring ATM devices. The most common use of ILMI is a process generally referred to as address registration. Recall that NSAP addresses consist of three parts: the switch’s prefix and the edge device’s ESI and selector byte. How do the two devices learn about each other’s addresses? This is where ILMI comes in. Address registration allows the edge device to learn the prefix from the switch and the switch to learn the ESI from the edge device (because the selector byte is locally significant, the switch doesn’t need to acquire this value).
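The outcome of address registration can be sketched as simple concatenation of the learned pieces; the MAC addresses and selector value below are purely illustrative:

```python
# The result of ILMI address registration: the edge device learns the
# 13-byte prefix from its switch, the switch learns the 6-byte ESI, and
# the selector byte stays locally significant.
def build_nsap(prefix_hex, esi_hex, selector_hex):
    nsap = prefix_hex + esi_hex + selector_hex
    assert len(nsap) == 40, "26 prefix + 12 ESI + 2 selector hex digits"
    return nsap

prefix = "47009181000000" + "00102962E801"  # well-known part + switch MAC
nsap = build_nsap(prefix, esi_hex="00000C33BFC1", selector_hex="01")
print(nsap)
```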

PNNI

Private Network-Network Interface (PNNI) is a protocol that allows switches to dynamically establish SVCs between edge devices. However, edge devices do not participate in PNNI—it is a switch-to-switch protocol (as the Network-Network Interface portion of the name suggests). Network-Network Interface (NNI) protocols consist of two primary functions:

  • Signaling
  • Routing

Signaling allows devices to issue requests that create and destroy ATM SVCs (because PVCs are manually created, they do not require signaling for setup and tear down).

Routing is the process that ATM switches use to locate the destinations specified by the NSAP addresses in signaling requests. Note that this is very different from IP-based routing. IP routing is a connectionless process that is performed for each and every IP datagram (although various caching and optimization techniques do exist). ATM routing is only performed at the time of call setup. After the call has been established, all of the traffic associated with that SVC utilizes the VPI/VCI cell-switching table. Note that this distinction allows ATM to simultaneously fulfill the conflicting goals of flexibility and performance. The unwieldy NSAP addresses provide flexible call setup schemes, and the low-overhead VPI/VCI values provide high-throughput and low-latency cell switching.

  • Tip
    Do not confuse ATM routing and IP routing. ATM routing is only performed during call setup when an ATM SVC is being built. On the other hand, IP routing is a process that is performed on each and every IP packet.

When to Use ATM

Although ATM has enjoyed considerable success within the marketplace, the debate continues: which is better—ATM or competing technologies such as Gigabit Ethernet in the campus and Packet Over SONET in the WAN? Well, like most other issues in the internetworking field, the answer is “it depends.”

Specifically, ATM has distinct advantages in the following areas:

  • Full support for timing-critical applications
  • Full support for Quality of Service (QoS)
  • Communication over long geographic distances
  • Theoretically capable of almost unlimited throughput

Die-hard geeks often refer to timing-critical applications as isochronous applications. Isochronous is a fancy term used to describe applications such as voice and video that have very tight timing requirements. Stated differently, the chronos (Greek for time) must be isos (Greek for equal). Traditional techniques used to encode voice and video, such as PCM for voice and H.320 for video, are generally isochronous and can benefit greatly from ATM’s circuit emulation capabilities.

If your voice and video traffic is isochronous and you want to use a single network infrastructure for voice, video, and data, ATM is about your only choice other than bandwidth-inefficient TDM circuits. However, note that there is a growing movement away from isochronous traffic. For example, voice over IP and H.323 video are non-isochronous mechanisms that can run over frame-based media such as Ethernet.

At the time of writing, ATM is the only data technology in common use that reliably supports Quality of Service (QoS). This allows bandwidth and switch processing to be reserved and guaranteed for critical applications like voice and video. Although Ethernet, IP, and other data communication technologies are beginning to offer QoS, these efforts are still in their infancy. Many ATM users argue that Ethernet and IP-based forms of QoS are better termed Class of Service (CoS) because the reservation and isolation mechanisms are not as strong as can be found in ATM (ATM was built from the ground up to support QoS).

For many network designers, one of the most compelling advantages of ATM is freedom from distance constraints. Even without repeaters, ATM supports much longer distances than any form of Ethernet. With repeaters (or additional switches), ATM can cover any distance. For example, with ATM it is very simple and cost-effective to purchase dark fiber between two sites that are up to 40 kilometers apart (much longer distances are possible) and connect the fiber to OC-12 long-reach ports on LS1010 ATM switches (no repeaters are required). By using repeaters and additional switches, ATM can easily accommodate networks of global scale. On the other hand, a number of vendors have introduced forms of Gigabit Ethernet ports capable of reaching 100 kilometers without a repeater, such as Cisco’s ZX GBIC. Although this does not allow transcontinental Ethernet connections, it can accommodate many campus requirements.

ATM has historically been considered one of the fastest (if not the fastest) networking technologies available. However, this point has recently become the subject of considerable debate. The introduction of hardware-based, Gigabit-speed routers (a.k.a. Layer 3 switches) has nullified the view that routers are slow, causing many to argue that modern routers can be just as fast as ATM switches. On the other hand, ATM proponents argue that ATM’s low-overhead switching mechanisms will always allow for higher bandwidth than Layer 3 switches can support. Only time will tell.

In short, the decision to use ATM is no longer a clear-cut choice. Each organization must carefully evaluate its current requirements and plans for future growth. For additional guidelines on when to use ATM and when not to use ATM, see Chapter 15, “Campus Design Implementation.”
