Design 1: Using MLS to Blend Layer 2 and Layer 3 Processing

The first proposal presented to Happy Homes utilizes MLS for Layer 3 switching as illustrated in Figure 17-3.

Figure 17-3. MLS Design

Design Discussion

This section introduces some of the design choices that were made for the first design. However, before diving into the specifics, it is worth pausing to look at the big picture of Design 1. As discussed earlier, both designs use Layer 3 switching in the MDF/distribution layer devices. This isolates each building behind a Layer 3 barrier to provide scalability and stability. By placing each building behind the safety of an intelligent Layer 3 router, it is much more difficult for problems to spread throughout the entire campus. Also, by providing a natural hierarchy, routers (Layer 3 switches) simplify the troubleshooting and maintenance of the network.

However, notice that Layer 2-oriented Catalysts, such as the Catalyst 5000s used in this design, do not automatically provide this Layer 3 barrier. In other words, simply plugging in a bunch of Catalyst 5000s or non-Native IOS Mode Catalyst 6000s (see Chapter 18, “Layer 3 Switching and the Catalyst 6000/6500s,” for more information) adds every VLAN to every switch (recall that VTP defaults to server mode) and creates no barrier at all. Only by manipulating VTP and carefully pruning selected VLANs from certain links can Layer 3 hierarchy be achieved when using technologies with a strong Layer 2 component (such as RSMs, MLS, and Catalyst 5000s and 6000s without any Layer 3 hardware/software).

For example, in this case the traffic from the end-user VLANs 11–14 and 21–24 could be forced through a separate VLAN in the core (VLAN 250) to create a true Layer 3 barrier. If left to the defaults where all of the devices are VTP servers in the same domain and therefore contain the full list of VLANs, routing might still be required between VLANs, but a Layer 3 barrier of scalability is not created. For more information on this point, see Chapter 14 and the section “MLS versus 8500s” in Chapter 11.

It is extremely important to recognize that most of the devices in Cisco’s product line can be used to build either Layer 2 or Layer 3 designs. This chapter focuses on each device’s relative strengths and default behavior. For example, as Chapter 11 pointed out, Catalyst 8500s can be used to build either Layer 2 or Layer 3 networks. However, by default, the 8500s function as switching routers, where every interface is a uniquely routed subnet/VLAN. Although you can use 8500s in Layer 2 designs, this generally involves the use of IRB, something that can easily become difficult to manage as the network grows.

Similarly, MLS can easily be used to build all of the Layer 3 topologies discussed in this chapter. However, many people are misled into believing that they automatically have Layer 3 hierarchy simply because they paid for some Layer 3 switching cards. As stated in the preceding text, this is not the case. Therefore, although MLS is suitable for almost all Layer 3 campus topologies, it does not maximize the scalability benefits of Layer 3 switching by default (you need to intervene to control VTP and implement selective VLAN pruning).

Finally, it is worth noting that the MSFC Native IOS Mode, discussed in Chapter 18, is equally adept at both designs. Consider it the multipurpose tool of Layer 3 campus switching.

Although both Design 1 and Design 2 create a Layer 3 barrier, for the reasons mentioned in previous paragraphs, the way in which the Layer 3 switching is implemented constitutes the primary difference between the two designs. In the case of Design 1, the Layer 3 barrier is created at the point where traffic enters and leaves the building. The result: traffic can continue to maintain a Layer 2 path within each building. In effect, Layer 3 switching has been implemented in such a way that Layer 2 triangles have been maintained within each building (the IDF switch represents one corner of the triangle, with the other two corners being the MDF switches). By breaking the Layer 2 processing into clearly defined and well-contained regions, this approach can provide a very scalable, high-performance, and cost-effective solution for campus networks.

By contrast, later sections of the chapter explore an alternate approach to Layer 3 switching used in Design 2. This design uses 8500-style hardware-based routing to implement routing both between and within the buildings. Although, as discussed in Chapter 11, Catalyst 8500s can be configured to provide a mixture of Layer 2 and Layer 3 switching, these devices are most comfortable as a pure Layer 3 device (this is from a configuration and maintenance standpoint, not from the standpoint of the data forwarding rate). This effectively chops off the bottom of the Layer 2 triangles in Design 1 to create Layer 2 V’s.

Note that MLS can also be used to create Layer 2 V’s by simply pruning the MDF-to-MDF link of the IDF VLANs. Although this is a popular design choice successfully used by many organizations, this chapter does not utilize it in an attempt to maximize the differences between Design 1 and Design 2.

Although the difference between these two designs might seem trivial, it can be dramatic from a network implementation standpoint. By looking at specific configuration requirements and commands used by these two approaches, this chapter explores in detail the many implications of these two approaches to campus design.

Hardware Selection

Because of their high port densities and proven flexibility, Catalyst 5500s were chosen for the bulk of the devices used in Design 1. The horizontal wiring from end stations connects to an IDF/access layer switch located on each floor. Except for the third floor of Building 2, Catalyst 5509s have been selected as the IDF switches. Because the mahogany sales department offices on the third floor take up considerably more space than other offices within Happy Homes, the number of end stations located there is dramatically reduced. As a result, a Catalyst 2820 will be deployed on the third floor of Building 2.

The IDF switches will then connect via redundant links to a pair of MDF/distribution layer switches located in the basement of each building. Because they provide both ATM and Ethernet switching capabilities, Catalyst 5500s will be used in the MDFs. Route Switch Modules (RSMs) and MLS will also play a key role here.

The design also calls for a small server farm located in the basement of Building 1. This facility is designed to handle Happy Homes’ server farm needs until construction can be completed on a separate data center building. The server farm will use a Catalyst 2948G switch to provide 10/100 Ethernet connectivity to the servers and Gigabit Ethernet uplinks to the Cat-B1-0A and Cat-B1-0B switches.

VLAN Design

The design utilizes five VLANs in each building plus an additional VLAN for the backbone. The first VLAN in each building is reserved for the management VLAN and only contains Catalyst SC0 interfaces (or ME1 interfaces on some models). The other four VLANs are used for end users: Sales, Marketing, Engineering, and Finance. Table 17-1 presents the VLAN names and numbers recommended by the design.

Table 17-1. VLAN Names and Numbers

Building 1 Building 2 Backbone
Name Number Name Number Name Number
Management 10 Management 20 Backbone 250
Sales 11 Sales 21
Marketing 12 Marketing 22
Engineering 13 Engineering 23
Finance 14 Finance 24

In other words, the first digit (or two digits in the case of the Backbone) of the VLAN number specifies the building number, and the last digit specifies the VLAN within the building.

The backbone VLAN, VLAN 250, corresponds to an ELAN named Backbone. Finally, notice that although the same user communities exist in both buildings, separate broadcast domains are maintained because of the Layer 3 barrier created by MLS and the RSMs in the distribution layer.

Also note that this approach implements the recommendation made in Chapters 14 and 15 to separate end-user and management traffic. This is done to isolate the Catalyst CPU from the broadcast traffic that might be present in the end-user VLANs. By doing so, the stability of the network can be improved (for example, the CPU is not deprived of cycles for such important tasks as network management and the Spanning-Tree Protocol).

IP Addressing

Each VLAN utilizes a single IP subnet. Happy Homes will use network with Network Address Translation (NAT) to reach the Internet. The design document calls for an IP addressing scheme of the form 10.Building.VLAN.Node.

The subnet mask will be /24 (or for all links. For example, the thirtieth address on the Sales VLAN in Building 1 would be Because HSRP will be in use, three node addresses are reserved for routers on each subnet. The .1 node address is reserved for the shared HSRP address, whereas .2 and .3 will be used for the real addresses associated with each router (.1 will be the default gateway address used by the end users).

This scheme results in the IP subnets presented in Table 17-2.

Table 17-2. IP Subnets for Design 1

Use Building VLAN Subnet
Management 1 10
Sales 1 11
Marketing 1 12
Engineering 1 13
Finance 1 14
Management 2 20
Sales 2 21
Marketing 2 22
Engineering 2 23
Finance 2 24
Server Farm N/A 100
Backbone N/A 250

The server farm is listed with a building of N/A because it has its own addressing space that falls outside the 10.Building.VLAN.Node convention. This was done because the server farm will originally be located in the basement of Building 1 and later be relocated to a separate building.

Happy Homes would like to start using DHCP in the new network. The first 20 addresses on each segment will be reserved for devices that do not (or should not) utilize DHCP, such as printers, servers, and router addresses. The remaining addresses in each subnet will be divided between a pair of DHCP servers for redundancy. For example, the Marketing subnet in Building 1 ( will have two DHCP scopes: the first DHCP server will be configured with–, and the second server will receive– Therefore, if the first DHCP server fails, the second server will have its own block of unique addresses for every subnet.

DHCP scopes are typically split in this fashion because the DHCP protocol currently does not specify a mechanism for server-to-server communication. For example, if the scopes did overlap and one of the servers failed, the second server would have no way of knowing what new leases were issued while it was down. Therefore, it might try to issue the same IP address again and create a duplicate IP address problem. Future enhancements to the DHCP standards (as well as proprietary DHCP implementations) can be used to avoid this problem. See Chapter 11 for more information on using DHCP.

IPX Addressing

Although Happy Homes expects most new applications to be IP based, it currently makes extensive use of Novell servers and the IPX protocol. For consistency, the design recommends that the IPX network numbers should be based on the IP subnet values. IPX network numbers are 32 bits in length, the same as a full IP address. Therefore, IP subnets can be converted from the usual dotted quad notation to an eight-character hex value suitable for use as an IPX network number. For example, the Sales VLAN in Building 1 uses IP subnet By converting each of these four decimal values into its hex equivalent, the corresponding IPX network number would be 0x0A010B00.

  • Tip
    For IPX internal network numbers on NetWare servers, the full IP address assigned to the server’s NIC can be converted to hex.

Table 17-3 presents the IPX addresses along with the corresponding IP subnet values.

Table 17-3. IPX Network Addresses

Use Building VLAN IPX Network Subnet
Management 1 10 0A010A00
Sales 1 11 0A010B00
Marketing 1 12 0A010C00
Engineering 1 13 0A010D00
Finance 1 14 0A010E00
Management 2 20 0A021400
Sales 2 21 0A021500
Marketing 2 22 0A021600
Engineering 2 23 0A021700
Finance 2 24 0A021800
Server Farm N/A 100 0A646400
Backbone N/A 250 0AFAFA00


VTP and Trunking

To maximize the Layer 2 orientation of this design, the proposal calls for the use of VTP server mode. However, to avoid some of the scalability issues of VTP, each building will use a unique VTP domain. Two mechanisms will be used to partition the VTP traffic:

  • The removal of VLAN 1 from the backbone
  • Separate VTP domain names

Because the backbone utilizes LANE as a trunking technology, VLAN 1 can be removed from the core of the network by simply not creating a “default” ELAN that maps to VLAN 1 (note that VLAN 1 cannot be removed from Ethernet trunks). Because VTP traffic must be carried in VLAN 1, this action prevents VTP information from propagating between buildings. However, it is not advisable to rely only on this technique—if someone accidentally enabled VLAN 1 on the backbone, it could seriously corrupt the VTP information as discussed in the “VLAN Table Deletion” section of Chapter 12, “VLAN Trunking Protocol.”

To prevent this sort of VTP database corruption between buildings, separate VTP domains should be employed (however, note that using anything other than VTP transparent mode still allows VLAN corruption to occur within a single building). Because Catalysts only exchange VTP information if their VTP domain names match, this creates an effective barrier for VTP. Design 1 calls for Building 1 to use the domain Happy-B1, whereas Building 2 uses Happy-B2.

  • Tip
    By creating a VTP barrier, the use of unique VTP domain names in each building also modifies the Catalyst behavior to create a Layer 3 barrier at the edge of every building. Keep this technique in mind when you create your own campus designs.


To enhance the stability and scalability of the network, Design 1 calls for several optimizations on trunk links. First, it recommends that manual configuration be used to override all speed, duplex, and trunk state negotiation protocols. Relying on autonegotiation of 10/100 Ethernet speed and duplex can lead to many frustrating hours of troubleshooting and network downtime. To avoid these issues, important trunk and server links should be hard-coded. End-station ports generally continue to use speed and duplex autonegotiation protocols to maximize freedom of movement in PC hardware deployment. Similarly, the trunk links have hard-coded trunk state information. By not relying on DISL and DTP negotiation, network stability can be improved.

Second, the design recommends that the trunk links be pruned of unnecessary VLANs. Because this restricts unnecessary broadcast flooding, it can be an important optimization in Layer 2-oriented networks. For example, broadcasts and multicasts for VLANs 22–24 are not flooded to Cat-B2-3A because it only participates in VLANs 20 and 21 (the management and Sales VLANs). The need for pruning becomes even greater in very flat networks without the Layer 3 barriers of scalability that automatically reduce broadcast and multicast flooding.

Load Balancing

Because of the Layer 2 orientation of Design 1, Spanning Tree load balancing must be employed. As discussed in Chapter 7, the Root Bridge placement form of Spanning Tree load balancing is both effective and simple to configure and maintain, provided that your topology supports it. One of the advantages of the Layer 2 triangles employed by this design is that they easily facilitate this form of load balancing. For example, by making Cat-B1-0A the Root Bridge for VLAN 11, traffic in the B1_Sales VLAN automatically uses the left-hand riser link. Design 1 calls for the A MDF devices (Cat-B1-0A and Cat-B2-0A) to act as the Root Bridge for the traffic for the odd-numbered VLANs, whereas the B devices (Cat-B1-0B and Cat-B2-0B) handle the even-numbered VLANs.

To create a cohesive load balancing scheme, the Spanning Tree Root Bridge placement should be coordinated with HSRP. This can be done by using the HSRP priority command to alternate the active HSRP peer for odd and even VLANs.

Spanning Tree

In addition to Root Bridge placement, several other Spanning Tree parameters should be tuned in Design 1. Because the Layer 3 barrier in Design 1 limits Layer 2 connectivity to small triangles, the longest Layer 2 path between two end stations crosses three bridges. For example, if the link between Cat-B1-1A and Cat-B1-0B failed, traffic flowing between an end station connected to Cat-B1-1A and the RSM in Cat-B1-0B would have to cross three Layer 2 switches (Cat-B1-1A, Cat-B1-0A, and Cat-B1-0B). This is illustrated in Figure 17-4 (note that the Catalyst backplane is being counted as a link here).

Figure 17-4. Path from an End User to the RSM in Cat-B1-0B after a Link Failure

Therefore, the Spanning Tree Max Age and Forward Delay parameters can be safely reduced to 12 and 9 seconds, respectively (assuming the default Hello Time of 2 seconds). The safest and simplest way to accomplish this is to use the set spantree root macro to automatically modify the appropriate Spanning Tree parameters. As a result, convergence time can be reduced from a default of 30–50 seconds to 18–30 seconds.

To further speed Spanning Tree convergence, UplinkFast, BackboneFast, and PortFast can be implemented. UplinkFast is only configured on the IDF switches and can reduce failover of uplinks to less than 3 seconds. BackboneFast, if in use, must be enabled on every switch in a Layer 2 domain and can reduce convergence time of indirect failures to 18 seconds (given the Forward Delay of 9 seconds specified in the previous paragraph). Although PortFast is not helpful in the failure of trunk links, it can be a useful enhancement to allow end stations more immediate access to the network and reduce the impact of Spanning Tree Topology Change Notifications (see Chapter 6, “Understanding Spanning Tree,” and Chapter 7 for more information on TCNs).


Sample Configurations

This section presents sample configurations used for Design 1. Rather than include all of the configurations, an example of each type of device is presented. First, you see an IDF/access layer switch. Next, you see coverage of the various components of an MDF/distribution layer switch: the Supervisor, the RSM module, and the LANE module. This section concludes with discussion of a configuration for one of the ATM switches in the core.

IDF Supervisor Configuration

Because Catalyst configurations are far less readable than IOS-based router configurations, two sections are devoted to coverage of Catalyst Supervisors. First, you see the interactive output of the necessary configuration steps. This allows you to focus only on the commands necessary for a typical MLS design. Second, you see the full Supervisor configuration. However, because Catalysts show all commands in the configuration listing (unlike routers, which list only non-default commands), these listings can be rather lengthy.

Cisco is working on a feature that will only show non-default configuration commands. This should be available in the future.

Configuring an IDF Supervisor: Cat-B2-1A

The first floor switch in Building 2 (Cat-B2-1A) is a representative example of an IDF switch. To begin configuring this device, first assign a name as in Example 17-1.

Example 17-1 Catalyst Name Configuration
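Using the device name called for by the design, this step amounts to a single CatOS command (a minimal sketch; interactive prompt output is omitted):

```
set system name Cat-B2-1A
```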

Early releases of code also required the set prompt command to include the name in the display prompt. However, starting in 4.X Catalyst images, this step is done automatically.

Next, create the VTP domain and add the appropriate VLANs as in Example 17-2.

Example 17-2 VTP Configuration
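A sketch of this step, using the domain name from the design and the VLAN names and numbers from Table 17-1 (the exact ordering of the set vlan commands is an assumption):

```
set vtp domain Happy-B2
set vtp mode server
set vlan 20 name Management
set vlan 21 name Sales
set vlan 22 name Marketing
set vlan 23 name Engineering
set vlan 24 name Finance
```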

Because Design 1 uses VTP server mode, the domain name must be set before the VLANs can be added. Although VTP defaults to server mode, the second command ensures that the default setting has not been changed.

Next, assign an IP address to the SC0 logical interface as in Example 17-3.

Example 17-3 Catalyst Supervisor IP Address Configuration
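A sketch of this step. The .11 host address chosen for the switch is an assumption (any address from the block reserved for non-DHCP devices would do); the default gateway is the shared HSRP address on the Building 2 management subnet:

```
set interface sc0 20
set ip route default
```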

Notice that SC0 is assigned to VLAN 20, the management VLAN for Building 2. Next, the set ip route command is used to provide a single default gateway for the Catalyst. This default gateway address uses HSRP on the RSMs to provide redundancy (see the RSM section later).

Example 17-4 shows how to configure the Spanning-Tree Protocol for the IDF switch.

Example 17-4 Spanning Tree Configuration
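A sketch of the Spanning Tree commands for the IDF switch; the assumption here is that the end-user ports occupy module 3:

```
set spantree portfast 3/1-24 enable
set spantree backbonefast enable
set spantree uplinkfast enable
```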

The first command (set spantree portfast) enables PortFast on all of the end-user ports. Notice that trunk links are not included (you can set PortFast on trunk ports and it will be ignored, but it is best to avoid this because it can lead to administrative confusion). Next, BackboneFast is enabled (set spantree backbonefast enable) to improve STP convergence time associated with an indirect failure. As discussed in Chapter 7, this command must be enabled on every Catalyst in a Layer 2 domain. The last command (set spantree uplinkfast enable) enables UplinkFast.

Unlike BackboneFast, UplinkFast should only be enabled on leaf-node IDF switches. You can also see that enabling UplinkFast automatically modifies several Spanning Tree parameters to reinforce this leaf-node behavior. First, it increases the Bridge Priority to 49,152 so that the current bridge does not become the Root Bridge (unless there are no other bridges available). Second, the Path Cost is increased to greater than 3000 to encourage downstream bridges to use some other path to the Root Bridge (however, if no path is available, this bridge handles the traffic normally).

Next, configure the trunk links as in Example 17-5.

Example 17-5 Port Name and Trunk Configuration
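A sketch under the assumption that the riser links ride on Supervisor uplink ports 1/1 and 2/1 (with 1/2 and 2/2 as spares) and that Cat-B2-1A carries the management VLAN plus all four end-user VLANs (20–24):

```
set port name 1/1 Uplink-to-Cat-B2-0A
set port name 1/2 Spare-Uplink-A
set port name 2/1 Uplink-to-Cat-B2-0B
set port name 2/2 Spare-Uplink-B
set trunk 1/1 on isl
set trunk 2/1 on isl
clear trunk 1/1 2-19,25-1005
clear trunk 2/1 2-19,25-1005
```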

The first four commands assign a name to the trunk ports, useful information when trying to troubleshoot and maintain the network. Next, the 1/1 and 2/1 ports are forced into ISL trunking mode with the set trunk command. If you know that a port is going to be a trunk, it is best to hard-code the trunking state rather than rely on the auto and negotiate settings (these mechanisms have been known to fail and also require that the VTP domain names match). Finally, the clear trunk command is used to remove unnecessary VLANs from the 1/1 and 2/1 links. This sort of pruning can significantly improve the scalability of your network.

The code in Example 17-6 sets up passwords in the form of SNMP community strings and login passwords.

Example 17-6 SNMP and Password Configuration
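A sketch with placeholder strings; the actual community strings are site-specific, and the set password and set enablepass commands prompt interactively for their values:

```
set snmp community read-only <ro-string>
set snmp community read-write <rw-string>
set snmp community read-write-all <rwa-string>
set password
set enablepass
```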

Because SNMP is enabled by default with widely known community strings (“public”, “private”, and “secret”), you should always modify the SNMP community strings. Do not forget to modify all three. (Most devices only use two community strings, one for reading and one for writing. Catalysts have a third community string that also allows the community strings themselves to be modified.) Finally, because community strings are not encrypted (either in the configuration or as they travel through the network), it is best to make them different from the console/Telnet login passwords.

The bottom section of the Example 17-6 sets both the user and privileged passwords. Unlike Cisco routers that do not allow any remote access until passwords have been configured, Catalysts allow full access by default. Therefore, always remember to change the passwords.

Next, you need to configure a variety of management commands as in Example 17-7.

Example 17-7 Banner, Contact Information, and DNS Configuration
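A sketch of these housekeeping commands; the contact, location, DNS server address, and domain name values are all assumptions:

```
set banner motd #Authorized Happy Homes personnel only!#
set system contact Network Operations 555-0100
set system location Building 2, Floor 1 IDF
set ip dns server
set ip dns domain happyhomes.com
set ip dns enable
```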

Although none of the commands in Example 17-7 are essential for Catalyst operation, they can all be useful when maintaining a network over the long term.

Example 17-8 creates an IP permit list to limit Telnet access to the device.

Example 17-8 IP Permit List to Limit Telnet Access to the Catalyst
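A sketch that limits Telnet access to the server farm subnet, where the management stations are assumed to be located:

```
set ip permit
set ip permit enable
```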

Because Design 1 calls for Supervisor IIIs with NetFlow Feature Cards (NFFCs), useful IDF features such as IGMP Snooping (to reduce multicast flooding) and Protocol Filtering (to reduce broadcast flooding) can be enabled as in Example 17-9.

Example 17-9 Enabling IGMP Snooping and Protocol Filtering
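These two NFFC-based features are each enabled with a single command:

```
set igmp enable
set protocolfilter enable
```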

Next, you need to provide a variety of SNMP traps as in Example 17-10.

Example 17-10 SNMP Trap Configuration
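A sketch; the trap receiver at is assumed to be a management station in the server farm, and the trap community string is a placeholder:

```
set snmp trap enable all
set snmp trap <trap-community>
set port trap 1/1 enable
set port trap 2/1 enable
```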

Enabling SNMP traps causes the Catalyst to report information it detects related to issues such as Spanning Tree changes, device resets, and hardware failures. Link up/down traps are enabled for the important uplink ports (because of the potential volume of data, it is almost always best not to enable this on end-station ports).

Finally, the commands in Example 17-11 configure the Catalyst to send Syslog information to the network management station.

Example 17-11 Syslog Configuration
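A sketch, again assuming a management station at in the server farm:

```
set logging server
set logging server enable
```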

Full IDF Supervisor Listing: Cat-B2-1A

Example 17-12 presents the full configuration file that results for Cat-B2-1A after the previous sequence of configuration steps is completed.

Example 17-12 Full Catalyst Configuration for Cat-B2-1A

MDF Supervisor Configuration

The second switch in Building 2 (Cat-B2-0B) is a representative example of an MDF/distribution layer switch. As with the IDF/access layer switch, the Supervisor configuration is presented in two sections: one showing the interactive configuration steps and another showing the resulting complete listing.

Configuring an MDF Supervisor: Cat-B2-0B
As with the IDF switch, the name, VTP, and SC0 parameters are configured as in Example 17-13.

Example 17-13 Configuring the Catalyst Name, VTP, and IP Address Parameters
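A sketch for Cat-B2-0B; the .5 host address is an assumption, and, as noted in the text that follows, the set vtp domain command may be optional:

```
set system name Cat-B2-0B
set vtp domain Happy-B2
set interface sc0 20
set ip route default
```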

Notice that because VTP server mode is in use, the VLANs do not need to be manually added to this switch. In fact, assuming that the Supervisor contained an empty configuration, Cat-B2-0B would have also automatically learned the VTP domain name (making the set vtp domain Happy-B2 command optional). Because all of the devices in Building 2 share a single management VLAN, Cat-B2-0B receives an IP address for the same IP subnet and uses the same default gateway address.

Next, you need to modify the Spanning Tree parameters as in Example 17-14.

Example 17-14 Spanning Tree Configuration
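A sketch of the commands described in the paragraphs that follow. The dia (diameter) value of 3 reflects the three-switch Layer 2 triangles in this design, and the assumption is that server ports occupy module 6:

```
set spantree root 20 dia 3
set spantree root 22 dia 3
set spantree root 24 dia 3
set spantree root secondary 21 dia 3
set spantree root secondary 23 dia 3
set spantree root secondary 250
set spantree portfast 6/1-24 enable
set spantree backbonefast enable
```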

To implement load balancing, the MDF switches require more Spanning Tree configuration than the IDF switches. The first six set spantree root commands configure Cat-B2-0B’s portion of the Root Bridge placement for Building 2 (one command is required for each of the six VLANs in use). Notice that Cat-B2-0B is configured as the primary Root Bridge for the even-numbered VLANs (20, 22, and 24) and the secondary Root Bridge for the odd-numbered VLANs (21 and 23). Cat-B2-0A would have the opposite configuration for VLANs 20–24 (primary for odd VLANs and secondary for even VLANs). For VLAN 250, the backbone VLAN, Cat-B1-0A is configured as the primary Root Bridge (not shown here) with Cat-B2-0B as the secondary. This allows Cat-B2-0B to take over as the Root Bridge for the core in the event that connectivity is lost to Building 1.

PortFast is configured for all the ports on module six. In the event that some of the Building 2 servers are connected here using fault-tolerant NICs that toggle link state (most fault-tolerant NICs do not do this), this allows the NICs to quickly bring up the backup ports without waiting through the Spanning Tree Listening and Learning states.

The last command enables BackboneFast (as discussed earlier, it must be enabled on all switches to work correctly). Finally, notice that UplinkFast is not enabled on the MDF switches. Doing so disturbs the Root Bridge placement carefully implemented with the earlier set spantree root command.

Example 17-15 shows how to configure the trunk ports.

Example 17-15 Port and Trunk Configuration
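A sketch of the MDF trunk configuration. The Gigabit uplinks on ports 3/1–3/2 and the third-floor link on port 1/1 are assumptions based on the discussion that follows; port 5/3 is the MDF-to-MDF link named in the text:

```
set port name 3/1 Gig-to-Cat-B2-1A
set port name 3/2 Gig-to-Cat-B2-2A
set port name 1/1 FastEthernet-to-Cat-B2-3A
set port name 5/3 Link-to-Cat-B2-0A
set port speed 1/1 100
set port duplex 1/1 full
set trunk 1/1 on isl
set trunk 3/1 on isl
set trunk 3/2 on isl
set trunk 5/3 on isl
clear trunk 1/1 2-19,22-1005
clear trunk 3/1 2-19,25-1005
clear trunk 3/2 2-19,25-1005
clear trunk 5/3 2-19,25-1005
```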

As with the IDF switch, the ports are labeled with names and hard-coded to be ISL trunks. The 10/100 Supervisor connection to Cat-B2-3A is also hard-coded to 100 Mbps and full-duplex. The Gigabit Ethernet links to Cat-B2-1A and Cat-B2-2A do not require this step because the 3-port Gigabit Ethernet Catalyst 5000 module is fixed at 1000 Mbps and full-duplex.

The clear trunk command manually prunes VLANs from the trunk links. Because the Catalyst on the third floor will only contain ports in the Sales VLAN, all VLANs except 20 and 21 have been removed from the 1/1 uplink. Happy Homes is less certain about the location of employees on the first two floors of Building 2. Although the immediate plans call for engineering to be located on the first floor and for finance and marketing to share the second floor, the company knows that there will be a large amount of movement between these floors for the next two years. As a result, both Cat-B2-1A and Cat-B2-2A will be configured with all four end-user VLANs. However, other VLANs (2–19 and 25–1005) have still been pruned.

  • Tip
    When manually pruning VLANs, be careful not to prune the Management VLAN. If you do, Telnet, SNMP, and other IP-based communication with the Supervisor is not possible. If you are using VLAN 1 for the Management VLAN, this is not an issue because VLAN 1 cannot be cleared from a trunk link.

It is important to notice that the backbone VLAN, VLAN 250, has been excluded from every link within the building, including the link between the two MDF switches (Port 5/3 on Cat-B2-0B). In other words, the only port configured for VLAN 250 on the four MDF switches should be the ATM link into the campus core. Doing this guarantees a loop-free core with more deterministic and faster converging traffic flows, as discussed in the section “Make Layer 2 Cores Loop Free” in Chapter 15.

  • Tip
    When using a Layer 2 core, be sure to remove the core VLAN from all links within each distribution block.

The commands in Example 17-16 complete the configuration and are almost identical to the IDF configuration discussed with Examples 17-6 through 17-11.

Example 17-16 Configuring Passwords, Banner, System Information, DNS, IP Permit List, IGMP Snooping, SNMP, and Syslog

The only significant difference between Examples 17-6 through 17-11 and Example 17-16 is that Protocol Filtering is not enabled.

Full MDF Supervisor Listing: Cat-B2-0B

Example 17-17 presents the full configuration listing for the Cat-B2-0B MDF switch configured in Examples 17-13 through 17-16.

Example 17-17 Full Catalyst Configuration for Cat-B2-0B

MDF RSM Configuration: Cat-B2-0B

To provide high-performance Layer 3 switching between each building and the campus backbone, the MDF/distribution layer switches are configured for MLS.

First, notice that no commands were required to enable MLS on the Supervisor in the previous section. As discussed in Chapter 11, a Supervisor located in the same chassis with an RSM requires no configuration to support MLS. However, if the design called for an external router-on-a-stick, the Supervisor would need to be configured with the IP address of the router.

Although using an RSM does eliminate the need for MLS configuration on the Supervisor, MLS must still be enabled on the RSM itself. Example 17-18 shows the required commands to enable MLS on the RSM.

Example 17-18 Full RSM Configuration for Cat-B2-0B
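A condensed sketch of the RSM configuration (two of the six VLAN interfaces are shown; the HSRP priorities, DHCP server addresses, and EIGRP AS number are assumptions, the exact set of mls rp subcommands varies by software release, and management commands such as SNMP, banners, and logging are omitted):

```
hostname RSM-B2-0B
!
ipx routing
mls rp ip
!
! Even-numbered VLAN: Cat-B2-0B is the active HSRP peer
interface Vlan20
 ip address
 standby 20 priority 110
 standby 20 preempt
 standby 20 ip
 ipx network 0A021400
 mls rp vtp-domain Happy-B2
 mls rp management-interface
 mls rp ip
!
! Odd-numbered VLAN: Cat-B2-0A is active; this peer keeps the default priority
interface Vlan21
 ip address
 standby 21 ip
 ipx network 0A021500
 ip helper-address
 ip helper-address
 mls rp vtp-domain Happy-B2
 mls rp ip
!
! Prevent flooding of NetBIOS name/datagram services relayed by ip helper-address
no ip forward-protocol udp 137
no ip forward-protocol udp 138
!
router eigrp 100
 passive-interface Vlan20
 passive-interface Vlan21
```

The remaining VLAN interfaces (22–24 and 250) follow the same pattern, with VLAN 250 left non-passive so that EIGRP neighbors form across the backbone.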

Each VLAN interface has been configured with a separate HSRP group for default gateway redundancy with Cat-B2-0A. Because Happy Homes will require NetWare and IPX services for the foreseeable future, the RSM has been configured with IPX network addresses (notice that IPX clients automatically locate a new gateway when the primary fails [although a reboot might be required] and therefore do not require the support of a feature such as HSRP).

Each interface is also configured with a pair of ip helper-address commands to forward DHCP traffic to the Server Farm. If desired, a single ip helper-address could have been specified using the server farm subnet’s broadcast address ( Also notice the two no ip forward-protocol udp statements. These prevent the flooding of chatty NetBIOS over TCP/IP name resolution traffic, a potentially important enhancement in networks with large numbers of Microsoft-based end stations.

EIGRP has been configured as the IP routing protocol (IPX uses IPX RIP by default). Because EIGRP includes interfaces on a classful basis, the passive-interface command has been used to keep routing traffic off the IDF segments. Although this is not going to save much update traffic with a protocol such as EIGRP (in this case, it only prevents EIGRP hello packets from being sent), it prevents a large number of unnecessary EIGRP neighbor relationships (by default, there is one for every pair of routers in every VLAN). By reducing these peering relationships, you can improve the performance and stability of the routing protocol.

  • Tip
    Reducing unnecessary peering can be especially useful in the Catalyst 8500s where excessive control plane traffic can overwhelm the CPU. However, it is an important optimization for all VLAN-based router platforms.

The RSM has also been configured with many of the same management features as Catalyst Supervisors, including the following:

  • SNMP community strings
  • SNMP host and location information
  • SNMP traps
  • A message-of-the-day banner
  • Passwords
  • A VTY access-class to limit Telnet access from segments other than the Server Farm
  • DNS
  • Syslog logging
  • Timestamps of logging information
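A condensed sketch of these management features in IOS syntax; the community strings, addresses, and access list below are hypothetical placeholders:

```
service timestamps log datetime
!
snmp-server community <read-only-string> ro
snmp-server host 10.1.250.20 <trap-community>
snmp-server location Building 2 MDF
!
banner motd #Authorized access only#
!
ip name-server 10.1.250.53
logging 10.1.250.21
!
access-list 10 permit 10.1.250.0 0.0.0.255
line vty 0 4
 access-class 10 in
 login
```

The access-class 10 in statement on the VTY lines is what restricts Telnet access to sessions originating from the Server Farm subnet.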

MDF LANE Module Configuration: Cat-B2-0B

As you probably gathered from Chapter 9, “Trunking with LAN Emulation,” the theory of LANE and ATM is fairly complex. However, a large part of that complexity is designed to make ATM as plug-and-play as possible. As a result, configuring most of the LANE components becomes a trivial exercise. For instance, Example 17-19 shows the configuration for the LANE module in Cat-B2-0B.

Example 17-19 Full LANE Module Configuration for Cat-B2-0B

Only five lines differ from the default configuration:

  • The LANE module has been named with the hostname command.
  • A multipoint subinterface was created for the Backbone ELAN.
  • The LAN Emulation Server (LES) and Broadcast and Unknown Server (BUS) are created with the lane server-bus command.
  • A LAN Emulation Client (LEC) is created with the lane client command.
  • PHY B (PHY is short for Physical) is selected as the preferred port (the reason why is discussed in the next section).
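The five changes might look like the following sketch (the ELAN name Backbone and VLAN 250 come from the design; the hostname is an assumption):

```
hostname Cat-B2-0B-LANE
!
interface ATM0
 atm preferred phy B
!
interface ATM0.1 multipoint
 lane server-bus ethernet Backbone
 lane client ethernet 250 Backbone
```

On the Catalyst LANE module, the lane client command takes both a VLAN number and an ELAN name because it is the LEC that binds the Ethernet VLAN to the ATM ELAN.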

LS1010 Configuration: LS1010-A

In general, ATM switches that fully support protocols such as ILMI and PNNI require virtually no configuration. However, because this design calls for the LS1010s to act as the LAN Emulation Configuration Servers (LECSs), the configuration is somewhat more involved. Example 17-20 shows the configuration for LS1010-A.

Example 17-20 Full ATM Switch Configuration for LS1010-A

Both of the LS1010s require four configuration items to support LANE under Design 1:

  • The addresses of the LECSs (in this case, the LS1010s themselves) must be configured with the atm lecs-address-default command. Because the design calls for SSRP, both ATM switches are configured with two LECS addresses. See Chapter 9 for more information on SSRP.
  • The LECS database. Again because of SSRP, there are two LES/BUS devices in use. Because both LES/BUSs are using dual-PHY connections to different ATM switches, a total of four different LES addresses are possible and must all be included in the database.
  • The configuration on logical interface atm 2/0/0 (the ATM Switch Processor [ASP] itself) of the lane config auto-config-atm-address and lane config database commands to start the LECS process.
  • The configuration on the logical subinterface atm 2/0/0.1 of a LANE client to provide an in-band management channel for the ATM switch.
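The four items can be sketched as follows; the database name is an assumption, and the NSAP addresses are shown as placeholders rather than real 20-byte values:

```
lane database HAPPY
 name Backbone server-atm-address <Cat-B1-0A_PHY-A_NSAP>
 name Backbone server-atm-address <Cat-B2-0B_PHY-B_NSAP>
 name Backbone server-atm-address <Cat-B1-0A_PHY-B_NSAP>
 name Backbone server-atm-address <Cat-B2-0B_PHY-A_NSAP>
!
atm lecs-address-default <LS1010-A_LECS_NSAP> 1
atm lecs-address-default <LS1010-B_LECS_NSAP> 2
!
interface ATM2/0/0
 lane config auto-config-atm-address
 lane config database HAPPY
!
interface ATM2/0/0.1 multipoint
 lane client ethernet <Management_ELAN>
```

The four server-atm-address entries are deliberately listed in the order discussed in the next section (primary, secondary, tertiary, last resort).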

In addition to the in-band management channel provided by the LEC located on interface atm 2/0/0.1, an additional connection is provided for occasions where the ATM network is down. One way to accomplish this is to provide a modem on the AUX port of the ASP. However, in campus networks, it is often more effective to utilize the ASP’s Ethernet management port. In this case, the port is configured with an IP address on the Building 1 Management VLAN and then connected to a 10/100 port on Cat-B1-0B.
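A minimal sketch of the out-of-band path; both the interface numbering of the ASP's Ethernet port and the management-VLAN address are assumptions:

```
interface Ethernet2/0/0
 ip address 10.1.1.9 255.255.255.0
```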

The order of the statements in the LECS database deserves special notice. Figure 17-5 shows a detailed view of the ATM links specified in Design 1.

Figure 17-5. Detailed View of ATM Links

Recall from Chapter 9 that careful planning of the order of the LECS database entries can avoid unnecessary backtracking. Because Cat-B1-0A is the primary LES/BUS and is configured with PHY A as its preferred port, the combination of LS1010-A’s prefix and Cat-B1-0A’s ESI is listed first in the database. If this port fails, it takes 10 or more seconds for Cat-B1-0A’s PHY B to become active, making it a poor choice for the secondary LES. Because Cat-B2-0B’s preferred port, PHY B, should already be fully active, it is more efficient as a secondary LES address. If Cat-B2-0B’s PHY B fails, the tertiary LES address can be Cat-B1-0A’s PHY B. As a last resort, Cat-B2-0B’s PHY A is used. For more information on this issue, see the “Dual-PHY” section of Chapter 9.

Finally, the LS1010 is configured with many of the same management options as earlier devices: SNMP, passwords, logging, a banner, and DNS.

Design Alternatives

Although an endless variety of design alternatives exist, several are common enough to deserve special mention. One popular design alternative involves pruning the IDF VLANs from the link that connects the MDFs together. This effectively converts the Layer 2 triangles discussed in this design into the Layer 2 V’s used in Design 2 (a Catalyst 8500-based design). It is exactly this sort of minor change in a campus topology that can have a dramatic impact on Spanning Tree and the overall design. For details on how this affects the network, refer to Design 2 (from a Spanning Tree and load balancing perspective, this modification to Design 1 makes it equivalent to Design 2).

In addition, network designers wanting to fully utilize the Layer 2 features of their networks might want to implement Dynamic VLANs and VMPS. Given the Layer 2-orientation of MLS and the approach presented in Design 1, this enhancement is fairly simple to configure. For more information on Dynamic VLANs and VMPS, see Chapter 12.

Furthermore, VTP pruning can be used to automate the removal of VLANs from trunk links. This eliminates the need for the manual pruning via the clear trunk command discussed earlier.
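Enabling VTP pruning is a single CatOS command on a VTP server:

```
Cat-B2-0B> (enable) set vtp pruning enable
```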

Also, when implementing a design that maintains any sort of Layer 2 loops, you should at least consider implementing a loop-free topology within the management VLAN. As discussed in Chapter 15, loops in the management VLAN can quickly lead to collapse of the entire network. Although one of the great benefits of creating a Layer 3 barrier is that it isolates this failure to a single building (and it further helps by making the Layer 2 domains small enough that loops are unlikely to form), some form of looping is always a possibility when using Layer 2 technology.

In another common change, many organizations like to make trunk and server links high priority using the set port level command.
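For example, assuming the trunk uplinks occupy ports 1/1 and 1/2 (a hypothetical placement), the CatOS commands would be:

```
Cat-B2-0B> (enable) set port level 1/1 high
Cat-B2-0B> (enable) set port level 1/2 high
```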

Finally, the servers can be directly connected to the ATM core by supplying them with ATM NICs. However, one of the downsides to this approach is the question of how to handle default gateway routing from the servers to the routers located in the MDF switches. For example, if the servers are configured with a default gateway of the interface VLAN 250 address on Cat-B2-0B’s RSM, all traffic is directed to Building 2. Traffic destined for Building 1 would therefore incur an additional routing hop and cross the backbone twice (unless ICMP redirects were supported). Another problem with using default gateways is redundancy. Although HSRP can be configured, it exacerbates the previous issue by disabling ICMP redirects on the router. In general, the best solution is to run a routing protocol on your servers (which also requires migrating the RSMs in this design from EIGRP to something like OSPF).
