General Recommendation: Multilayer Model

As discussed in the previous section, the multilayer model is the most appropriate approach for most modern campus networks for a variety of reasons. This section explains some specific considerations of this model.

Distribution Blocks

A large part of the benefit of the multilayer model centers around the concept of a modular approach to access (IDF) and distribution (MDF) switches. Given a pair of redundant MDF switches, each IDF/access layer switch forms a triangle of connectivity as shown in Figure 14-7. If there are ten IDF switches connected to a given set of MDF switches, ten triangles are formed (such as might be the case in a ten-story building). The collection of all triangles formed by two MDF switches is referred to as a distribution block. Most commonly, a distribution block equates to all of the IDF and MDF switches located in a single building.

Figure 14-7. Triangles of Connectivity within a Distribution Block

Because of its simplicity, the triangle creates the ideal building block for a campus network. By having two vertical links (IDF uplink connections), it automatically provides redundancy. Because the redundancy is formed in a predictable, consistent, and uncomplicated fashion, it is much easier to provide uniformly fast failover performance.

  • Tip
    Use the concept of a distribution block to simplify the design and maintenance of your network.

The multilayer model does not take a dogmatic stance on Layer 2 versus Layer 3 switching (although it is based around the theme that some Layer 3 processing is a requirement in large networks). Instead, it seeks to create the optimal blend of both Layer 2 and Layer 3 technology to achieve the competing goals of low cost, high performance, and scalability.

To provide cost-effective bandwidth, Layer 2 switches are generally used in the IDF (access layer) wiring closets. As discussed earlier, the NetFlow Feature Card can add significant value in the wiring closet with features such as Protocol Filtering and IGMP Snooping.

To provide control, Layer 3 switching should be deployed in the MDF (distribution layer) closets. This is probably the single most important aspect of the entire design. Without the Layer 3 component, the distribution blocks are no longer self-contained units. A lack of Layer 3 processing in the distribution layer causes Spanning Tree, VLANs, and broadcast domains to spread throughout the entire network. This increases the interdependency of various pieces of the network, making the network far less scalable and far more likely to suffer a network-wide outage.

By making use of Layer 3 switching, each distribution block becomes an independent switching system. The benefits discussed in the “Advantages of Routing” section are baked into the network. Problems that develop in one part of the network are prevented from spreading to other parts of the network.

You should also be careful to not circumvent the modularity of the distribution block concept with random links. For example, Links 1 and 2 in Figure 14-8 break the modularity of the multilayer model.

Figure 14-8. Links 1 and 2 Break the Modularity of the Multilayer Design

The intent here was good: provide a direct, Layer 2 path between three IDF switches containing users in the same workgroup. Although this does eliminate one or two router hops from the paths between these IDF switches, it causes the entire design to start falling apart. Soon another exception is made, then another, and so on. Before long, the entire network begins to resemble an interconnected mess more like a bowl of spaghetti than a carefully planned campus network. Just remember that the scalability and long-term health of the network are more important than a short-term boost in bandwidth. Avoid “spaghetti networks” at all costs.

  • Tip
    Be certain to maintain the modularity of distribution blocks. Do not add links or inter-VLAN bridging that violate the Layer 3 barrier that the multilayer model uses in the distribution layer.

Without descending too far into “marketing speak,” it is useful to note the potential application of Layer 4 switching in the distribution layer. By considering transport layer port numbers in addition to network layer addressing, Layer 4 switching can more easily facilitate policy-based networking. However, from a scalability and performance standpoint, Layer 4 switching does not have a major impact on the overall multilayer model—it still creates the all-important Layer 3 barrier at the MDF switches.

On the other hand, the choice of Layer 3 switching technology can make a difference in matters such as addressing and load balancing.

Switching Router (8500) MDFs

In the case of 8500-style switching routers, the MDF switches make a complete break in the Layer 2 topology by default. As a result, the triangles of connectivity appear as two unique subnets—one that crosses the IDF switch and one that sits between the MDF switches as illustrated in Figure 14-9.

Figure 14-9. Switching Router MDF Switches Break the Network into Two Subnets

The resulting network is completely free of Layer 2 loops. Although some network designers have viewed this as an opportunity to completely disable the Spanning-Tree Protocol, this is generally not advisable because misconfiguration errors can easily create loops in the IDF wiring closet or end-user work areas (therefore possibly taking down the entire IDF). However, it does mean that STP load balancing cannot be used. Recall from Chapter 7 that STP load balancing requires two characteristics to be present in the network. First, it requires redundant paths, something that exists in Figure 14-9. Second, it requires that these redundant paths form Layer 2 loops, something that the routers in Figure 14-9 prevent. Therefore, some other load balancing technique must be employed.

The decision of whether or not the Spanning-Tree Protocol should be disabled can be complex. This book recommends leaving Spanning Tree enabled (even in Layer 2 loop-free networks such as the one in Figure 14-9) because it provides a safety net for any loops that might be accidentally formed through the end-user ports. Currently, most organizations building large-scale campus networks want to take this conservative stance. This choice seems especially wise when you consider that Spanning Tree does not impose any failover delay for important topology changes such as a broken IDF uplink. In other words, the use of Spanning Tree in this environment provides an important benefit while having very few downsides.

For more discussion on the technical intricacies of the Spanning-Tree Protocol, see Chapters 6 and 7. For more detailed and specific recommendations on using the Spanning-Tree Protocol in networks utilizing the various forms of Layer 3 switching, see Chapter 15.

In general, some form of HSRP load balancing is the most effective solution. As discussed in the “HSRP” section of Chapter 11, if the IDF switch contains multiple end-user VLANs, the VLANs can be configured to alternate active HSRP peers between the MDF switches. For example, the left switch in Figure 14-9 could be configured as the active HSRP peer for the odd VLANs, whereas the right switch would handle the even VLANs. However, if the network only contains a single VLAN on the IDF switch (this is often done to simplify network administration by making it more like the router and hub model), the Multigroup HSRP (MHSRP) technique is usually the most appropriate technology. Figure 14-10 illustrates the MHSRP approach.
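This alternating arrangement can be sketched as follows, using hypothetical VLAN numbers and addresses with IOS-style interface configuration (exact syntax varies by platform and software version):

```
! MDF-A: active HSRP peer for the odd VLANs (hypothetical addressing)
interface Vlan11
 ip address 10.1.11.2 255.255.255.0
 standby 1 ip 10.1.11.1
 standby 1 priority 110
 standby 1 preempt
!
interface Vlan12
 ip address 10.1.12.2 255.255.255.0
 standby 1 ip 10.1.12.1
 standby 1 priority 90
 standby 1 preempt
! MDF-B mirrors this configuration with the priorities reversed,
! making it the active peer for the even VLANs.
```

With preemption enabled, each MDF switch reclaims its active role for its assigned VLANs after recovering from a failure.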

Figure 14-10. MHSRP Load Balancing

In Figure 14-10, two HSRP groups are created for a single subnet/VLAN, each with its own standby IP address. Notice that both addresses intentionally fall within the same subnet. Half of the end stations connected to the IDF switch are then configured to use the first standby address as their primary default gateway, and the other half use the second (this can be automated with DHCP). For more information on this technique, see the “MHSRP” section of Chapter 11 and the “Use DHCP to Solve User Mobility Problems” section of Chapter 15.
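A sketch of the MHSRP configuration on one MDF switch, using hypothetical addresses rather than the specific values shown in Figure 14-10 (multiple standby groups per interface also require platform support):

```
! MDF-A: active for group 1, standby for group 2 (hypothetical addresses)
interface Vlan10
 ip address 10.1.10.3 255.255.255.0
 standby 1 ip 10.1.10.1
 standby 1 priority 110
 standby 1 preempt
 standby 2 ip 10.1.10.2
 standby 2 priority 90
 standby 2 preempt
! MDF-B reverses the priorities so that it is active for group 2.
! Half of the end stations would use 10.1.10.1 as their default
! gateway and the other half 10.1.10.2 (assignable via DHCP).
```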

  • Tip
    In general, implementing load balancing while using switching routers in the distribution layer requires multiple IDF VLANs (each with a separate HSRP standby group) or MHSRP for a single IDF VLAN.

Routing Services (MLS) MDFs

However, if the MDF switches use routing switch (MLS-style) Layer 3 switching, the design might be very different. In this case, it is entirely possible to have Layer 2 loops. Rather than being pure routers as with the switching router approach, the MDF switches are normal Layer 2 devices that have been enhanced with Layer 3 caching technology. Therefore, MLS devices pass Layer 2 traffic by default (this default can be changed). For example, Figure 14-11 illustrates the Layer 2 loops that commonly result when MLS is in use.

Figure 14-11. MLS Often Creates Layer 2 Loops that Require STP Load Balancing

Both VLANs 2 and 3 are assigned to all three trunk links, forming a Layer 2 loop. In this case, STP load balancing is required. As shown in Figure 14-11, the cost for VLAN 3 on the 1/1 IDF port can be increased to 1000, and the same can be done for VLAN 2 on Port 1/2. For more detailed information on STP load balancing, please see Chapter 7.
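Assuming CatOS-style IDF switches, the cost adjustments described above might be entered as follows (command syntax varies by software version):

```
Console> (enable) set spantree portvlancost 1/1 cost 1000 3
Console> (enable) set spantree portvlancost 1/2 cost 1000 2
```

This raises the Spanning Tree cost for VLAN 3 on Port 1/1 and for VLAN 2 on Port 1/2, steering each VLAN's forwarding path onto a different uplink.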

  • Tip
    The Layer 2/3 hybrid nature of MLS generally requires STP load balancing.


Core

Designing the core of a multilayer network is one of the areas where creativity and careful planning can come into play. Unlike the distribution blocks, there is no set design for a multilayer core. This section discusses some of the design factors that should be taken into consideration.

One of the primary concerns when designing a campus core backbone should be fast failover and convergence behavior. Because of the reliance on Layer 3 processing in the multilayer design, fast-converging routing protocols can be used instead of the slower Spanning-Tree Protocol. However, be careful to avoid unexpected Spanning Tree slowdowns within the core itself.

Another concern is that of VLANs. In some cases, the core can utilize a single flat VLAN that spans one or more Layer 2 core switches. In other cases, traffic can be segregated into VLANs for a variety of reasons. For example, multiple VLANs can be used for policy reasons or to separate the different Layer 3 protocols. A separate management VLAN is also desirable when using Layer 2-oriented switches.

Broadcast and multicast traffic are other areas of concern. As much as possible, broadcasts should be kept off of the network’s core. Because the multilayer model uses Layer 3 switching in the MDF devices, this usually isn’t an issue. Likewise, multicast traffic also benefits from the use of routers in the multilayer model. If the core makes use of routing, Protocol Independent Multicast (PIM) can be used to dynamically build optimized multicast distribution trees. If sparse-mode PIM is used, the rendezvous point (RP) can be placed on a Layer 3 switch in the core. If the core is comprised of Layer 2 switches only, then CGMP or IGMP Snooping can be deployed to reduce multicast flooding within the core.

One of the important decisions facing every campus network designer has to do with the choice of media and switching technology. The majority of campus networks currently utilize Fast and Gigabit Ethernet within the core. However, ATM can be a viable choice in many cases. Because it supports a wide range of services, can integrate well with wide area networks, and provides extremely low-latency switching, ATM has many appealing aspects. Also, Multiprotocol Label Switching (MPLS, also known as Tag Switching), traditionally seen as a WAN-only technology, is likely to become increasingly common in very large campus backbones. Because it provides excellent traffic engineering capabilities and very tight integration between Layer 2 and 3, MPLS can be extremely useful in all sorts of network designs.

However, the most critical decision has to do with the switching characteristics of the core. In some cases, a Layer 2 core is optimal; other networks benefit from a Layer 3 core. The following sections discuss issues particular to each.

Layer 2 Core

Figure 14-12 depicts the typical Layer 2 core in a multilayer network.

Figure 14-12. A Layer 2 Core

This creates a L2/L3/L2 profile throughout the network. The network’s intelligence is contained in the distribution-layer MDF switches. Both the access (IDF) and core switches utilize Layer 2 switching to maintain a high price/performance ratio. To provide redundancy, a pair of switches form the core. Because the core uses Layer 2 processing, this approach is most suitable for small to medium campus backbones.

When building a Layer 2 core, Spanning Tree failover performance should be closely analyzed. Otherwise, the entire network can suffer from excessively slow reconvergence. Because the equipment comprising the campus core should be housed in tightly controlled locations, it is often desirable to disable Spanning Tree entirely within the core of the network.

  • Tip
    I recommend that you only disable Spanning Tree in the core if you are using switching routers in the distribution layer. If MLS is in use, its Layer 2 orientation makes it too easy to misconfigure a distribution switch and create a bridging loop.

One way to accomplish this is through the use of multiple VLANs that have been carefully assigned to links in a manner that creates a loop-free topology within each VLAN. An alternate approach consists of physically removing cables that create Layer 2 loops. For example, consider Figure 14-13.

Figure 14-13. A Loop-Free Core

In Figure 14-13, the four Layer 2 switches forming the core have been kept loop free at Layer 2. Although a redundant path does exist through each distribution (MDF) switch, the pure routing behavior of these nodes prevents any Layer 2 loops from forming.

If Spanning Tree is required within the core, blocked ports should be closely analyzed. Because STP load balancing can be very tricky to implement in the network core, compromises might be necessary.

In addition to Spanning Tree, there are several other issues to look for in a Layer 2 core. First, be careful that multicast flooding is not a problem. As mentioned earlier, IGMP Snooping and CGMP can be useful tools in this situation (also see Chapter 13). Second, keep an eye on router peering limits as the network grows. Because each MDF switch is a router under the multilayer model, a Layer 2 core creates the appearance of many routers sitting around a single subnet. If the number of routers becomes too large, this can easily lead to excessive state information, erratic behavior, and slow convergence. In this case, it can be desirable to break the network into multiple VLANs that reduce peering.

  • Tip
    Be careful to avoid excessive router peering when using Catalyst 8500s. One of the easiest ways to accomplish this is through the use of a Layer 3 core (see the next section).

A Layer 2 core can provide a very useful campus backbone. However, because of the potential issues and scaling limits, it is most appropriate in small to medium campus networks.

  • Tip
    A Layer 2 core can be a cost-effective solution for smaller campus networks.

Layer 3 Core

Figure 14-14 redraws Figure 14-12 with a Layer 3 core.

Figure 14-14. A Layer 3 Core

Although Figure 14-12 and Figure 14-14 look very similar, the use of Layer 3 switching within the core makes several important changes to the network.

First, the path determination is no longer contained only within the distribution layer switches. With a Layer 3 core, the path determination is spread throughout the distribution and core layer switches. This more decentralized approach can provide many benefits:

  • Higher aggregate forwarding capacity
  • Superior multicast control
  • Flexible and easy-to-configure load balancing
  • Improved scalability
  • Reduced router peering
  • IOS features available throughout a large percentage of the network

In short, the power and flexibility of Layer 3 processing eliminate many of the issues discussed concerning Layer 2 backbones. For example, the switches can be connected in a wide variety of looped configurations without concern for bridging loops or STP performance. By cross-linking core switches, redundancy and performance can be maximized. Also, by placing routing nodes within the campus core, the router mesh and peering between the distribution switches can be dramatically reduced (however, it is still advisable to watch for areas of excessive router peering).

Notice that a Layer 3 core does add additional hops to the path of most traffic. In the case of a Layer 2 core, most traffic requires two hops, one through the end user’s MDF switch and the other through the server farm’s MDF switch. In the case of a Layer 3 core, an additional hop (or two) is added. However, several factors minimize this concern:

  • The consistent and modular design of the multilayer model guarantees a consistent and small number of router hops. In general, no more than four router hops within the campus should ever be necessary.
  • Many Layer 3 switches have latencies comparable to Layer 2 switches.
  • Windowing protocols (such as TCP or IPX Burst Mode) reduce the impact of latency for most applications.
  • Switching latency is often a very small part of overall latency. In other words, latency is not as big an issue as most people make it out to be.
  • The scalability benefits of Layer 3 are generally far more important than any latency concerns.

Larger campus networks benefit from a Layer 3 core.

Server Farm Design

Server farm design is an important part of almost all modern networks. The multilayer model easily accommodates this requirement. First, the server farm can easily be treated as its own distribution block. A pair of redundant Layer 3 switches can be used to provide physical redundancy as well as network layer redundancy with protocols such as HSRP. In addition, the Layer 3 switches create an ideal place to apply server-related policy and access lists. Figure 14-15 illustrates a server farm distribution block.

Figure 14-15. The Server Farm Can Form Another Distribution Block

Although enterprise-wide servers should generally be deployed in a central location, workgroup servers can be attached directly to access or distribution level switches. Two examples of this are shown in Figure 14-15.

  • Tip
    An enterprise server farm is usually best implemented as another distribution block that connects to the core.

Specific tips for server farm design are discussed in considerably more detail in the “Server Farms” section of Chapter 15.

Using a Unique VTP Domain for Each Distribution Block

When using the MLS approach to Layer 3 switching in the MDF closets, it might be advantageous to make each distribution block a separate VTP domain. Because of the Layer 2 orientation to MLS, VLANs propagate throughout the entire network by default (see Chapter 12 for more information on VTP). However, the multilayer model is designed to constrain VLANs to an individual distribution block. By innocently using the default behavior, your network can become unnecessarily burdened by extraneous VLANs and STP computations.

Assigning a unique VTP domain name to each distribution block is a simple but effective way to have VLAN propagation mirror the intended design. When a new VLAN is added within a distribution block, it automatically is added to every other switch in that block. However, because other distribution blocks are using a different domain name, they do not learn about this new VLAN.
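For example, each building's switches might be assigned their own domain name with CatOS-style syntax (the domain names here are hypothetical):

```
Console> (enable) set vtp domain Building1
```

The switches in the next distribution block would instead use `set vtp domain Building2`, preventing VLANs defined in one building from propagating into the other.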

  • Tip
    The MLS approach to Layer 3 switching can lead to excessive VLAN propagation. Use a different VTP domain name for each distribution block to overcome this default behavior.

When VTP domains are in use, it is usually best to make the names descriptive of the distribution block (for example, Building1 and Building2).

  • Tip
    Recall from Chapter 8 that when using trunk links between different VTP domains, the trunk state will need to be hard-coded to on. The use of auto and desirable will not work across VTP domain names (in other words, the DISL and DTP protocols check for matching VTP domain names).
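For example, a trunk that crosses a VTP domain boundary might be hard-coded with CatOS-style syntax (exact options depend on the trunk type and software version):

```
Console> (enable) set trunk 1/1 on
```

Because the `on` state does not rely on DISL/DTP negotiation succeeding, the trunk forms even though the domain names on each end differ.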

IP Addressing

In a very large campus network, it is usually best to assign bitwise contiguous blocks of address space to each distribution block. This allows the routers in each distribution block to summarize all of the subnets within that block into a single advertisement that gets sent into the core backbone. For example, a single /20 advertisement (/20 is a shorthand way to represent the subnet mask 255.255.240.0) can summarize an entire range of 16 /24 subnets (/24 is equivalent to the subnet mask 255.255.255.0). This is illustrated in Figure 14-16.

Figure 14-16. Using IP Address Summarization

As shown in Figure 14-16, the /20 and /24 subnet masks (or network prefixes) differ by four bits (in other words, /20 is four bits “shorter” than /24). These are the only four bits that differ between the 16 /24 subnet addresses. In other words, because all 16 /24 subnet addresses match in the first 20 bits, a single /20 address can be used to summarize all of them.

In a real-world distribution block, the 16 individual /24 subnets can be applied to 16 different end-user VLANs. However, outside the distribution block, a classless IP routing protocol can distribute the single summarized /20 route.
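Assuming OSPF as the routing protocol and a hypothetical 10.1.16.0/20 block assigned to one distribution block, the summarization might be configured on the MDF routers as follows:

```
! MDF router acting as an OSPF ABR (hypothetical addressing)
router ospf 1
 network 10.0.0.0 0.0.0.255 area 0
 network 10.1.16.0 0.0.15.255 area 1
 ! Advertise a single /20 summary into the backbone
 ! instead of 16 individual /24 routes
 area 1 range 10.1.16.0 255.255.240.0
```

Here the distribution block is placed in its own OSPF area, and the `area range` command sends one summary route toward the area 0 core.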

  • Tip
    In very large campus networks, try to plan for future growth and address summarization by pre-allocating bitwise contiguous blocks of address space.

Scaling Link Bandwidth

Note that the modular nature of the multilayer model allows individual links to easily scale to higher bandwidth. Not only does the architecture accommodate entirely different media types, it is easy to add additional links and utilize Fast or Gigabit EtherChannel.
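For example, a pair of parallel Gigabit links between an MDF switch and the core might be bundled with CatOS-style syntax (syntax varies by platform and software version):

```
Console> (enable) set port channel 1/1-2 on
```

The two physical links then act as a single logical EtherChannel link, doubling the available bandwidth without changing the design.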

Network Migrations

Finally, the modularity of the multilayer model can make migrations much easier. In general, the entire old network can appear as a single distribution block to the rest of the new network (for example, imagine that the server farm distribution block in Figure 14-15 is the old network). Although the old network generally does not have all of the benefits of the multilayer model, it provides a redundant and routed linkage between the two networks. After the migration is complete, the old network can be disabled.
