Design 2: Maximizing Layer 3 with Catalyst 8500 Switching Routers

This section presents Design 2, an approach that relies on Catalyst 8500-style hardware-based routing (in other words, the 8500 is a switching router). Figure 17-6 illustrates Design 2.

Figure 17-6. Design 2 Network Diagram

Several differences from the physical layout used in Design 1 are important. First, the ATM core has been replaced with Gigabit Ethernet. Second, the switch in the third-floor IDF of Building 2 has been replaced with a Catalyst 5509. However, both designs are similar in that a pair of redundant MDF devices is used in each basement with two riser links going to each IDF.

Design Discussion

Whereas Design 1 sought to blend Layer 2 and Layer 3 technology, Design 2 follows an approach that maximizes the Layer 3 content in the MDF/distribution layer switches. Although this change is somewhat subtle, it has a dramatic impact on the rest of the design.

The most important change created by this design is that all IDF VLANs are terminated at the MDF switch. In other words, users connected to different IDFs always fall in different VLANs. As discussed in Chapter 11, although it is possible to have a limited number of VLANs traverse a Catalyst 8500 using IRB, this is not a technique that you want to use many times throughout your campus (it is appropriate for one or two special-case VLANs). In other words, this style of Layer 3 switching is best used as a fast version of normal routing.

The second most important change, a simplification of Spanning Tree, is discussed in the next section.

Spanning Tree

Although some view the loss of IDF-to-IDF VLANs as a downside to the approach taken in Design 2, it is important to offset this with the simplifications that hardware-based routing makes possible. One of the most important simplifications involves the area of Layer 2 loops and the Spanning-Tree Protocol. In fact, hardware-based routing has completely eliminated the Layer 2 loops between the IDF and MDF switches. Whereas Design 1 used Layer 2 triangles, this design uses Layer 2 V’s.

Note
As was stressed in the discussion of Design 1, MLS can be used to build loop-free Layer 2 V’s. However, it is important to realize that switching routers such as the 8500 do this by default, whereas MLS (and routing switches) require you to manually prune certain VLANs from selected links. See the earlier section “Trunks” for more information.

Because this design removes all Layer 2 loops (at least the ones that are intentionally formed), some organizations have decided to completely disable Spanning Tree when using this approach. However, because disabling Spanning Tree does not protect against unintentional loops on a single IDF switch (generally the result of a cabling mistake), other network designers prefer to maintain a Spanning Tree security blanket on their IDF switches. It is important to recognize that even in cases where Spanning Tree remains enabled (as it does in Design 2), the operation of the Spanning-Tree Protocol is dramatically simplified for a variety of reasons.

First, Root Bridge placement becomes a non-issue. Because each IDF switch is not aware of any other switches, it naturally elects itself as the Root Bridge.

  • Tip
    It can still be a good idea to lower the Bridge Priority in case someone plugs in another bridge some day.

In addition, Spanning Tree load balancing is not required (or, for that matter, possible).

Also, features such as UplinkFast and BackboneFast are no longer necessary for fast convergence.

Finally, the Spanning Tree network diameter has been reduced to the IDF switch itself. As a result, the Max Age and Forward Delay timers can be aggressively tuned without concern. For example, Design 2 specifies a Max Age of 10 seconds and a Forward Delay of 7 seconds. Although somewhat more aggressive values can be used, these were chosen as a conservative compromise. With these timers, failover performance where a loop exists is between 14 seconds (twice the Forward Delay) and 24 seconds (Max Age plus twice the Forward Delay). However, because the topology is loop free at Layer 2, there should be no Blocking ports during normal operation. As a result, IDF uplink failover performance is governed by HSRP, not Spanning Tree, and the network can recover from uplink failures in as little as one second (assuming that the HSRP timers are lowered).

  • Tip
    The Spanning-Tree Protocol does not affect failover performance in this network.

VLAN Design

Although the concept of a VLAN begins to blur (or fade) in this design, the IDF switches are configured with the same end user VLAN names as used in Design 1. However, notice that all of the VLANs use essentially the same numbers throughout this version of the design. The management VLAN in all switches is always VLAN 1 (even though they are different IP subnets). Similarly, the first end-user VLAN on an IDF switch is VLAN 2. If more than one VLAN is required on a given IDF switch, VLANs 3 and greater can be created.

Notice that this brings a completely different approach to user mobility than Design 1. Design 1 attempted to place all users in the same community of interest located within a single building in the same VLAN. In the case of Design 2, that is no longer possible without enabling IRB on the Catalyst 8500. Here, it is expected that users in the same community of interest may very well fall into different subnets. However, because DHCP is in use, IP addressing is transparent to the users. Furthermore, because the available Layer 3 bandwidth is so high with 8500 technology, the use of routing (Layer 3 switching) does not impair the network’s performance.

Note
Note that a similar case for Layer 3 performance can be made for the Catalyst 6000/6500. See Chapter 18 for more detail.

IP and IPX Addresses

Because Design 2 is less flat than Design 1, it requires more IP subnets (and IPX networks). For example, every link through the core is a separate subnet. Furthermore, every IDF uses a separate subnet as a management VLAN (remember, all VLANs terminate at the MDF switches). To avoid consuming an excessive amount of address space, variable-length subnet masking (VLSM) has been specified in Design 2.

Although address conservation is not a real concern for most organizations using a Class A network such as 10.0.0.0, VLSM provides another benefit: it makes the subnets appear similar to the subnets used in Design 1. For example, whereas Design 1 uses a single backbone subnet of 10.250.250.0/24, Design 2 uses multiple 10.250.250.0/29 subnets. Just as Design 1 uses 10.1.10.0/24 and 10.2.20.0/24 for management VLANs, Design 2 uses multiple smaller /29 subnets within 10.1.10.0 and 10.2.20.0.

As a result, Design 2 uses two subnet masks:

  • /24 (255.255.255.0) for end-user segments
  • /29 (255.255.255.248) for management VLANs, loopback addresses, and backbone links

Although it is possible to further optimize the address space utilization by using a /30 mask (255.255.255.252) for loopback interfaces and backbone links, a common mask was chosen for simplicity (furthermore, this one-bit optimization quickly reaches a point of diminishing returns when working with a Class A address!). Table 17-4 shows the IP subnets along with the corresponding IPX network numbers.

Table 17-4. IP Subnets and IPX Networks for Design 2

| Use | Description | Bldg | VLAN | Subnet | Mask | IPX Net |
|-----|-------------|------|------|--------|------|---------|
| B1_Mgt | Cat-B1-1A SC0 | 1 | 1 | 10.1.10.8 | /29 | 0A010A08 |
| B1_Mgt | Cat-B1-2A SC0 | 1 | 1 | 10.1.10.16 | /29 | 0A010A10 |
| B1_Mgt | Cat-B1-3A SC0 | 1 | 1 | 10.1.10.24 | /29 | 0A010A18 |
| B1_Sales | End-user segment | 1 | 2 | 10.1.11.0 | /24 | 0A010B00 |
| B1_Mkting | End-user segment | 1 | 3 | 10.1.12.0 | /24 | 0A010C00 |
| B1_Eng | End-user segment | 1 | 2 | 10.1.13.0 | /24 | 0A010D00 |
| B1_Finance | End-user segment | 1 | 2 | 10.1.14.0 | /24 | 0A010E00 |
| B2_Mgt | Cat-B2-1A SC0 | 2 | 1 | 10.2.20.8 | /29 | 0A021408 |
| B2_Mgt | Cat-B2-2A SC0 | 2 | 1 | 10.2.20.16 | /29 | 0A021410 |
| B2_Mgt | Cat-B2-3A SC0 | 2 | 1 | 10.2.20.24 | /29 | 0A021418 |
| B2_Sales | End-user segment | 2 | 2 | 10.2.21.0 | /24 | 0A021500 |
| B2_Mkting | End-user segment | 2 | 3 | 10.2.22.0 | /24 | 0A021600 |
| B2_Eng | End-user segment | 2 | 2 | 10.2.23.0 | /24 | 0A021700 |
| B2_Finance | End-user segment | 2 | 2 | 10.2.24.0 | /24 | 0A021800 |
| Svr. Farm | Server Farm segment | Backbone | 100 | 10.100.100.0 | /24 | 0A646400 |
| Loopback | Cat-B1-0A | Backbone | N/A | 10.200.200.8 | /29 | 0AC8C808 |
| Loopback | Cat-B1-0B | Backbone | N/A | 10.200.200.16 | /29 | 0AC8C810 |
| Loopback | Cat-B2-0A | Backbone | N/A | 10.200.200.24 | /29 | 0AC8C818 |
| Loopback | Cat-B2-0B | Backbone | N/A | 10.200.200.32 | /29 | 0AC8C820 |
| Backbone | Cat-B1-0A to Cat-B1-0B | Backbone | N/A | 10.250.250.8 | /29 | 0AFAFA08 |
| Backbone | Cat-B1-0A to Cat-B2-0B | Backbone | N/A | 10.250.250.16 | /29 | 0AFAFA10 |
| Backbone | Cat-B1-0A to Cat-B2-0A | Backbone | N/A | 10.250.250.24 | /29 | 0AFAFA18 |
| Backbone | Cat-B1-0B to Cat-B2-0B | Backbone | N/A | 10.250.250.32 | /29 | 0AFAFA20 |
| Backbone | Cat-B1-0B to Cat-B2-0A | Backbone | N/A | 10.250.250.40 | /29 | 0AFAFA28 |
| Backbone | Cat-B2-0A to Cat-B2-0B | Backbone | N/A | 10.250.250.48 | /29 | 0AFAFA30 |

VTP

Given the Layer 3 nature of Design 2, VTP server mode has little meaning (8500s do not propagate VTP frames). Therefore, Design 2 calls for VTP transparent mode. Although not a requirement, the design also calls for a VTP domain name of Happy (unlike server and client modes, transparent mode does not require a VTP domain name).

As a result, each IDF switch must be individually configured with the list of VLANs it must handle. However, this is rarely a significant issue because each IDF switch usually only handles a small number of VLANs.

  • Tip
    If the VLAN configuration tasks are a concern (or, for that matter, any other configuration task), consider using tools such as Perl and Expect. Both run on a wide variety of UNIX platforms as well as Windows NT.

Trunks

To present an alternative approach, Design 2 uses Fast EtherChannel links between the MDF and IDF switches. To provide adequate bandwidth in the core, Gigabit Ethernet links are used.

Server Farm

This design calls for a separate Server Farm building (a third building at the corporate headquarters campus will be used). The Server Farm could have easily been placed in Building 1 as it was with Design 1; however, an alternate approach was used here for variety.

Configurations

This section presents the configurations for Design 2. As with Design 1, you see only one example of each type of device. First, you see configurations for and discussion of a Catalyst 5509 IDF switch, followed by configurations for and discussion of a Catalyst 8540 MDF switch.

IDF Supervisor Configuration

As with Design 1, this section is broken into two parts:

  • The interactive configuration output
  • The full configuration listing

Configuring an IDF Supervisor: Cat-B2-1A

As with the IDF switch in Design 1, begin by configuring the system name and VTP domain name as in Example 17-21.

Example 17-21 System Name and VTP Configuration
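
A minimal sketch of this step, assuming CatOS syntax (the system name and the Happy domain name come from this design):

```
set system name Cat-B2-1A
set vtp domain Happy
```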

Unlike Design 1, this design utilizes VTP transparent mode and requires only a single end-user VLAN for Cat-B2-1A as shown in Example 17-22.

Example 17-22 VTP and VLAN Configuration
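
A minimal sketch, assuming CatOS syntax and that the single end-user VLAN on Cat-B2-1A is B2_Sales (VLAN 2 in Table 17-4); the end-user port range is an illustrative assumption:

```
set vtp mode transparent
set vlan 2 name B2_Sales
set vlan 2 4/1-24
```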

The SC0 interface also uses a different configuration under Design 2. First, the IP address and netmask are obviously different. Second, SC0 is left in VLAN 1, the default. Third, Design 2 calls for two default gateway addresses to be specified with the ip route command (this feature was first supported in Version 4.1 of Catalyst 5000 code). This can simplify the overall configuration and maintenance of the network by not requiring a separate HSRP group to be maintained for each management subnet/VLAN. Example 17-23 demonstrates these steps.

Example 17-23 IP Configuration
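
A minimal sketch, assuming CatOS syntax; the SC0 host address (.9) and the two default gateway addresses (.13 and .14) within the 10.2.20.8/29 management subnet are illustrative assumptions:

```
set interface sc0 1 10.2.20.9 255.255.255.248
set ip route default 10.2.20.13
set ip route default 10.2.20.14
```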

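Next, the Spanning Tree parameters are tuned as in Example 17-24.

Example 17-24 Spanning Tree Configuration

A minimal sketch, assuming CatOS syntax; dia 2 is shown on the assumption that the set spantree root macro computes a Max Age of 10 seconds and a Forward Delay of 7 seconds for that diameter, and the VLAN list and port ranges are illustrative:

```
set spantree root 1 dia 2
set spantree root 2 dia 2
set spantree portfast 4/1-24,5/1-24,6/1-24,7/1-24,8/1-24 enable
```
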
The first two commands (set spantree root) lower the Max Age and Forward Delay timers to 10 and 7 seconds, respectively. For consistency, this also forces the IDF switch to be the Root Bridge. (Although this is useful in the event that other switches or bridges have been cascaded off the IDF switch, in most situations this has no impact on the actual topology under Design 2.) Finally, PortFast is enabled on all of the end-user ports in slots 4–8.

Next, the trunk ports are configured as in Example 17-25.

Example 17-25 Port and Trunk Configuration
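
A minimal sketch, assuming CatOS syntax; the uplink module and port numbers (1/1-2 toward one MDF switch and 2/1-2 toward the other) are illustrative assumptions:

```
set port channel 1/1-2 on
set port channel 2/1-2 on
set trunk 1/1 on isl
set trunk 2/1 on isl
```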

As mentioned earlier, Design 2 uses Fast EtherChannel links from Cat-B2-1A and Cat-B2-2A to the MDF switches. For stability, these are hard-coded to the port channel on state. The resulting EtherChannel bundles are also hard-coded as ISL trunks. Also notice that although the set trunk command is only applied to a single port, the Catalyst automatically applies it to every port in the EtherChannel bundle.

The commands in Example 17-26 are very similar to those used in Example 17-16 of Design 1.

Example 17-26 Configuring SNMP, Password, Banner, System Information, DNS, IP Permit List, IGMP Snooping, Protocol Filtering, and Syslog
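
A minimal sketch of a few of these housekeeping commands, assuming CatOS syntax; the community string, contact information, and banner text are illustrative placeholders:

```
set snmp community read-only public
set system contact netadmin@happyhomes.com
set banner motd ^Authorized access only^
set ip dns enable
set logging server enable
```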

Full IDF Supervisor Listing: Cat-B2-1A

Example 17-27 presents the configuration code that results from the previous sequence of configuration steps.

Example 17-27 Full Catalyst Configuration

MDF Configuration: Cat-B2-0B

Example 17-28 presents the full configuration listing for Cat-B2-0B, an 8540 MDF switch. The chassis contains a 16-port 100BaseFX module in slot 0 and 2-port Gigabit Ethernet modules in slots 1 and 2. Because IOS-based router configurations are shorter (they list only non-default commands) and easier to read than those of XDI/CatOS-based Catalyst images, this section does not show a separate listing of the interactive command output.

Example 17-28 Full Catalyst 8540 Configuration

Three logical port-channel interfaces are configured to handle the links to the three IDF switches. Because the EtherChannels are using ISL encapsulation to trunk multiple VLANs to the IDFs, each port-channel is then configured with multiple subinterfaces, one for each IDF VLAN. For example, interface port-channel 2 is used to connect to Cat-B2-2A on the second floor. Subinterface port-channel 2.1 is created for the management VLAN, 2.2 for the Finance VLAN, and 2.3 for the Marketing VLAN. Each subinterface is configured with an encapsulation isl statement and the appropriate IP and IPX Layer 3 information.

The subinterfaces supporting end-user traffic are also configured with two HSRP groups. As explained in Chapter 11, HSRP load balancing should be employed in designs where a single end-user VLAN is used on each IDF and there are no Layer 2 loops (making Spanning Tree load balancing impossible). To enable HSRP load balancing, a technique called Multigroup HSRP (MHSRP) is used. Under MHSRP, two (or more) HSRP groups are created for every subnet.

By having each MDF device be the active HSRP peer for one of the two HSRP groups, load balancing can be achieved. For example, Design 2 calls for two HSRP groups per end-user subnet (as mentioned earlier, the management VLANs use multiple default gateways instead). The first HSRP group uses .1 in the fourth octet of the IP address, and the second group uses .2. By making Cat-B2-0A the active peer for the first group and Cat-B2-0B the active peer for the second group, both router ports can be active at the same time.
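
As an illustration, a sketch of the Cat-B2-0B subinterface for the Finance VLAN on Cat-B2-2A might look like the following; the subnet, VLAN number, IPX network, and helper-address ordering come from Table 17-4 and the surrounding discussion, while the physical address ending in .4, the HSRP priorities, and the lowered timers are illustrative assumptions:

```
interface Port-channel2.2
 description B2_Finance (VLAN 2) on Cat-B2-2A
 encapsulation isl 2
 ip address 10.2.24.4 255.255.255.0
 ip helper-address 10.100.100.81
 ip helper-address 10.100.100.33
 ipx network 0A021800
 standby 1 ip 10.2.24.1
 standby 1 priority 100
 standby 1 timers 1 3
 standby 2 ip 10.2.24.2
 standby 2 priority 110
 standby 2 timers 1 3
 standby 2 preempt
```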

Note
Note that the recommendation to use MHSRP is predicated upon the fact that a single VLAN is being used on the IDF switches (as discussed in Chapter 11, this is often done to facilitate ease of network management). If you are using multiple VLANs on the IDFs, you can simply alternate active HSRP peers between the VLANs. See Chapter 11 for more information and configuration examples.

The catch with this approach is finding some technique to have half of the end stations use the .1 default gateway address and the other half use .2. Chapter 11 suggests using DHCP for this purpose. For example, Happy Homes is planning to deploy two DHCP servers (from the ip helper-address statements, we can determine that the IP addresses are 10.100.100.33 and 10.100.100.81). All leases issued by the first DHCP server, 10.100.100.33, specify .1 as the default gateway.

On the other hand, all leases issued by the second DHCP server, 10.100.100.81, specify .2 as the default gateway. To help ensure a fairly random distribution of leases between the two DHCP servers, the order of the ip helper-address statements can be inverted between the two MDF switches. For example, the configuration for Cat-B2-0B shows 10.100.100.81 as the first ip helper-address on every end-user subinterface. On the other MDF switch, Cat-B2-0A, 10.100.100.33 should be listed first.

Further down in the configuration, the actual Fast Ethernet ports are shown. Notice that these do not contain any direct configuration statements (the entire configuration is done on the logical port-channel interface). The only statement added to each interface is a channel-group command that includes the physical interface in the appropriate logical port-channel interface.
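
A sketch of two of these physical interfaces, assuming the 8540's slot/subslot/port numbering for the 100BaseFX module in slot 0 (the specific ports assigned to each bundle are illustrative):

```
interface FastEthernet0/0/2
 no ip address
 channel-group 2
!
interface FastEthernet0/0/3
 no ip address
 channel-group 2
```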

Because the Gigabit Ethernet interfaces are not using EtherChannel, the configuration is placed directly on the interface itself. Each interface receives an IP address and an IPX network statement. Because these interfaces do not connect to any end stations, HSRP and IP helper addresses are not necessary.
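
A sketch of one Gigabit Ethernet backbone interface, using the Cat-B1-0B-to-Cat-B2-0B subnet from Table 17-4 (the host address .34 and the interface numbering are illustrative assumptions):

```
interface GigabitEthernet1/0/0
 ip address 10.250.250.34 255.255.255.248
 ipx network 0AFAFA20
```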

The remaining configuration commands set up the same management features discussed in the earlier configurations.

Design Alternatives

As with Design 1, hundreds of permutations are possible for Design 2. This section briefly discusses some of the more common alternatives.

First, as shown in Figure 17-6, Design 2 calls for a pair of 8500s for the server farm. Figure 17-7 illustrates a potential layout for the server farm under Design 2.

Figure 17-7. Detail of Server Farm for Design 2

In this plan, a pair of Catalyst 6500 switches is directly connected to the backbone via Cat-B1-0B and Cat-B2-0B. By using the Catalyst 6500’s MSFC Native IOS Mode, you can leverage the capability of these devices to simultaneously behave as both routing switches and switching routers (see Chapter 18 for more information on this capability). This gives you the flexibility to provide Layer 2 connectivity within the server farm while also utilizing Layer 3 to reach the backbone. In essence, the server farm becomes a miniature version of one of the buildings, but all contained within a pair of devices (the 6500s are acting like MDF and IDF devices at the same time).

As an alternative, some organizations have used the design shown in Figure 17-8.

Figure 17-8. Layer 2 Server Farm Design

In this example, the Layer 2 Catalysts (in this case, 4003s) have been directly connected to the existing 8540s, Cat-B1-0B and Cat-B2-0B. The advantage of this approach is that it saves the expense of two Layer 3 switches and potentially removes one router hop from the typical end-user data path.

Unfortunately, this design is susceptible to the same default gateway issues discussed earlier in association with directly connecting servers to the LANE cloud in Design 1. As a result, it can actually add router hops by unnecessarily forwarding traffic to the wrong building. (You can run HSRP, but all traffic is directed to the active peer. MHSRP can be used, but it is generally less effective with servers than end users because of their extremely high bandwidth consumption.) If you do implement this design, consider running a routing protocol on your servers.

However, potentially the most serious problem involves IP addressing and link failures. Consider the case where the Gigabit Ethernet link between the 4000s fails: both 8500s continue trying to send all traffic destined for the server farm subnet out their rightmost port. For example, Cat-B2-0B still tries to reach servers connected to Server Farm A by sending the traffic first to Server Farm B. Because the link between Server Farm B and Server Farm A is down, the traffic never reaches its destination. This is a classic case of the discontinuous subnet problem.

  • Tip
    Look for potential discontinuous subnets in your network. This can be especially important in mission-critical areas of your network such as a server farm.

Probably the most common modification to Design 2 entails using a Layer 2 core rather than directly connecting the MDF switches to each other with a full or partial mesh of Gigabit Ethernet links. Although the approach used in Design 2 is fine for smaller networks, a Layer 2 core is more scalable for several reasons:

  • It is easier to add distribution blocks.
  • It is easier to upgrade access bandwidth to one building block (simply upgrade the links to the Layer 2 core versus upgrading all the meshed bandwidth).
  • Routing protocol peering is reduced from the distribution layer to the core.

The most common implementation is to use a pair of Layer 2 switches for redundancy (however, be careful to remove all Layer 2 loops in the core).

A third potential modification to Design 2 involves VLAN numbering. Notice that Design 2 uses the pattern-based VLAN numbering scheme discussed in Chapter 15. Because designs with a strong Layer 3 switching component effectively nullify the concept of VLANs being globally-unique broadcast domains, this approach is appropriate for designs such as Design 2. However, some organizations prefer to maintain globally-unique VLAN numbers even when utilizing Layer 3 switching. In this case, every subnet is mapped to a unique VLAN number. See Chapter 15 for more information on pattern-based versus globally-unique VLAN numbering schemes.

Finally, another option is to deploy Gigabit EtherChannel within the core and server farm. By offering considerably more available bandwidth, this can provide additional room for growth within the Happy Homes campus.
