When the word switching is brought up, the first thing that comes to most network engineers' minds is the subject of VLANs. The use of VLANs can make or break a campus design. This section discusses some of the most important issues to remember when designing and implementing VLANs in your network.
The Appropriate Use of VLANs
Given that VLANs are associated so closely with switching, people most often think of what Chapter 14, “Campus Design Models,” referred to as campus-wide VLANs. Given the popularity of campus-wide VLANs as both a concept and a design, this section discusses their pros and cons, as well as an alternate design for consideration.
The popularity of campus-wide VLANs is due in large part to several well-publicized benefits of this approach. First, it can allow direct Layer 2 paths between all of the devices located in the same community of interest. This can remove routers from the path of high-volume traffic such as that going to a departmental file server. Assuming that software-based routers are in use, there is the potential for a tremendous increase in available bandwidth.
Second, campus-wide VLANs make it possible to use technology like Cisco’s User Registration Tool (URT). By functioning as a sophisticated extension to the VLAN membership policy server (VMPS) technology discussed in Chapter 5, “VLANs,” URT allows VLAN placement to be transparently determined by authentication servers such as Windows NT Domain Controllers and NetWare Directory Services (NDS). Organizations such as universities have found this feature very appealing because they can create one or more VLANs for professors and administrative staff while creating separate VLANs for students. Consequently, the same physical campus infrastructure can be used to logically segregate the student traffic while still supporting roving laptop users.
The third benefit of campus-wide VLANs is actually implied by the second benefit—campus-wide VLANs allow these roving users to be controlled by a centralized set of access lists. For example, a university using campus-wide VLANs might utilize a pair of 7500 routers located in the data center for all inter-VLAN routing. As a result, access lists between the VLANs only need to be configured in two places. Consider the alternative where routers (or Layer 3 switches) might be deployed in every building on campus. To maintain user mobility, each of these routers needs to be configured with all of the VLANs and access lists used throughout the entire campus. This can obviously lead to a situation where potentially hundreds of access lists must be maintained.
Although campus-wide VLANs have several well-publicized benefits and are quite popular, they create many network design and management issues. Try to avoid using campus-wide VLANs.
Although these advantages are very alluring, many organizations that implement this approach quickly discover their downsides. Most of the disadvantages are the result of one characteristic of campus-wide VLANs: a lack of hierarchy. Specifically, this lack of hierarchy creates significant scalability problems that can affect the network’s stability and maintainability. Furthermore, these problems are often difficult to troubleshoot because of the dynamic and non-deterministic nature of campus-wide VLANs (not to mention that it can be difficult to know where to start troubleshooting in a flat network). For more information on these issues, please refer to Chapter 14, “Campus Design Models,” Chapter 11, “Layer 3 Switching,” and Chapter 17, “Case Studies: Implementing Switches.”
Although many books and vendors discuss campus-wide VLANs as simply the way to use switching, Layer 3 switching introduces a completely different approach that is definitely worthy of consideration. Chapter 14 discussed these Layer 3 approaches under the heading of the multilayer campus design model. Although this approach cannot match the support for centralized access lists available under campus-wide VLANs, it can allow you to build and maintain much larger networks than is typically possible with campus-wide VLANs. Layer 3 switching can also be used with the Dynamic Host Configuration Protocol (DHCP), a proven and scalable technique for handling user mobility (see the next section). Therefore, as a general rule of thumb, use the multilayer model as your default design choice and only use flat earth designs if there is a compelling reason to justify the risks. For more information on the advantages and implementation details of the multilayer model, see Chapter 11, Chapter 14, and Chapter 17.
Note that this implies a fundamental difference in how VLANs are used between the two design models. In the case of campus-wide VLANs, VLANs are used to create logical partitions unique to the entire campus network. In the case of the multilayer model, they are used to create logical partitions that may be unique to a single IDF/access layer wiring closet.
The multilayer design model uses VLANs in a completely different fashion from the campus-wide VLANs model. In the multilayer model, VLANs are very often only unique to a single IDF device whereas campus-wide VLANs are globally unique.
Use DHCP to Solve User Mobility Problems
Many network engineers feel that campus-wide VLANs are the only way to handle mobile users and unwittingly saddle themselves with a flat network that requires high maintenance. As mentioned in the previous section, many user-mobility problems can be solved with DHCP. Because DHCP fits well into hierarchical designs that utilize Layer 3 processing for scalability, it can be a much safer choice than using campus-wide VLANs. As discussed in Chapter 11 and Chapter 17, the use of DHCP simply requires one or more ip helper-address statements on each router (or Layer 3 switch) interface.
When using IP helper addresses for DHCP, consider using the no ip forward-protocol command to disable the forwarding of unwanted traffic types that are enabled by default (the ip helper-address command automatically enables forwarding of the following UDP ports: 37, 49, 53, 67, 68, 69, 137, and 138). Most commonly, UDP ports 137 and 138 are removed to prevent excessive forwarding of NetBIOS name registration traffic.
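As an illustration of this combination (the interface, VLAN, and IP addresses below are purely hypothetical), the relevant IOS configuration might look something like this:

```
! Relay DHCP broadcasts received on this interface to a central
! DHCP server (10.1.99.10 is a hypothetical server address)
interface Vlan10
 ip address 10.1.10.1 255.255.255.0
 ip helper-address 10.1.99.10
!
! Globally disable forwarding of the NetBIOS name and datagram
! services (UDP ports 137 and 138) enabled by ip helper-address
no ip forward-protocol udp 137
no ip forward-protocol udp 138
```

Note that the no ip forward-protocol commands are global and therefore affect every interface configured with an ip helper-address statement.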
Be careful not to simply enter no ip forward-protocol udp. Prior to 12.0, entering this command disabled all of the default UDP ports, including ports 67 and 68, which are used by DHCP. Although no ip forward-protocol udp does not disable DHCP in early releases of 12.0, proceed with caution. For an example of ip helper-address and no ip forward-protocol, see Chapter 17.
Use a Consistent VLAN Numbering Scheme
Although VLAN numbering is a very simple task, having a well thought-out plan can help make the network easier to understand and manage in the long run. In general, there are two approaches to VLAN numbering:
- Globally-unique VLAN numbers
- Pattern-based VLAN numbers
In globally-unique VLAN numbers, every VLAN has a unique numeric identifier. For example, consider the network shown in Figure 15-1. Here, the VLANs in Building 1 use numbers 10–13, Building 2 uses 20–23, and Building 3 uses 30–33.
Figure 15-1. Globally-Unique VLANs
When using globally-unique VLANs, try to establish an easily remembered scheme such as the one used in Figure 15-1 (Building 1 uses VLANs 1X, Building 2 uses 2X, and so on).
In the case of pattern-based VLAN numbers, the same VLAN number is used for the same purpose in each building. For example, Figure 15-2 shows a network where the management VLAN is always 1, the first end user VLAN is 2, the second end user VLAN is 3, and so on.
Figure 15-2. Pattern-Based VLANs
Which approach you use is primarily driven by what type of design model you adopt. If you have utilized the campus-wide VLANs model, you are essentially forced to use globally-unique VLAN numbers. Although there are special cases and “hacks” where this may not be true, not using unique VLANs in flat designs can lead to cross-mapped VLANs and widespread connectivity problems.
Use globally-unique VLAN numbers with campus-wide VLANs.
If you are using the multilayer model, either numbering scheme can be adopted. Because VLANs are terminated at MDF/distribution layer switches, there is no underlying technical requirement that the VLAN numbers must match (this is especially true when using switching router platforms such as the Catalyst 8500). In fact, even if the VLAN numbers do match, they are still maintained as completely separate broadcast domains because of Layer 3 switching/routing.
If you like the simplicity of knowing that the management VLAN is always VLAN 1, the pattern-based approach might be more appropriate. On the other hand, some organizations prefer to keep every VLAN number unique just as every IP subnet is unique (this approach often ties the VLAN number to the subnet number—for example, VLAN 25 might be 10.1.25.0/24). In other cases, a blend of the two numbering schemes works best. Here, organizations typically adopt a single number for use in all management VLANs but use unique numbers for end-user VLANs.
The multilayer model can be used with both globally-unique VLANs and pattern-based VLANs.
Use Meaningful VLAN Names
Although common sense dictates that clearly-named VLANs serve as a form of documentation, networks are frequently built with useless VLAN names. Recall from Chapter 5 that if you do not specify a VLAN name, the Catalysts use a very creative name such as VLAN0002 for VLAN 2 and VLAN0003 for VLAN 3. In other cases, organizations do specify a VLAN name as a parameter to the set vlan command, but the names are cryptic or poorly maintained.
It is usually a far better choice to create VLAN names that actually describe the function of that broadcast domain. This is especially true when using campus-wide VLANs and globally-unique VLAN numbers. The dynamic and non-hierarchical nature of these networks makes troubleshooting challenging enough without having to waste time trying to determine what VLAN a problem involves. Having clearly-defined and descriptive VLAN names can save critical time during a network outage (as well as avoiding the confusion that might cause an administrator to misconfigure a device and thus create a network outage).
Descriptive VLAN names are especially important when using campus-wide VLANs.
Although VLAN names are less important when the multilayer design model is in use, the names should at least differentiate management and end-user traffic. Try to include the name of the department or IDF/access layer closet where the VLAN is used. Also, some organizations like to include the IP subnet number in the VLAN name.
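For example (the department names, closet designations, and subnets below are purely illustrative), descriptive names can be assigned with the set vlan command discussed in Chapter 5:

```
! Name the management VLAN and two end-user VLANs; including the
! IDF closet and IP subnet in the name aids troubleshooting
set vlan 2  name Mgmt_Bldg1
set vlan 25 name Engineering_IDF2-1_10.1.25.0
set vlan 26 name Marketing_IDF2-2_10.1.26.0
```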
Use Separate Management VLANs
When first exposed to VLANs, many network administrators find them confusing and therefore decide to adopt a policy of placing only a single VLAN on every switch. Although this can have an appealing simplicity, it can seriously destabilize your network. In short, you want to always use at least two VLANs on every Layer 2 Catalyst switch. At a minimum, you want one VLAN for management traffic and a separate VLAN for end-user traffic.
Make sure every Layer 2 switch participates in at least two VLANs: one that functions as the management VLAN and one or more for end-user VLANs.
However, this is not to suggest that having a large number of VLANs is a good idea. To the contrary, the simplicity of maintaining a single end-user VLAN (or at least a small number of them) can be very beneficial for network maintenance.
Why, then, is it so important to have at least two VLANs? Think back to the material discussed in Chapter 5 regarding the impact of broadcasts on end stations. Because broadcasts are not filtered by hardware on-board the network interface card (NIC), every broadcast is passed up to Layer 3 using an interrupt to the central CPU. The more time that the CPU spends looking at unwanted broadcast packets, the less time it has for more useful tasks (like playing Doom!).
Well, the CPU on a Catalyst’s Supervisor is no different. The CPU must inspect every broadcast packet to determine if it is an ARP destined for its IP address or some other interesting broadcast packet. However, if the level of uninteresting traffic becomes too large, the CPU can become overwhelmed and start dropping packets. If it drops Doom packets, no harm is done. On the other hand, if it drops Spanning Tree BPDUs, the whole network could destabilize.
Note that this section is referring to Layer 2 Catalysts such as the 2900s, 4000s, 5000s, and 6000s. Because these devices currently have one IP address that is only assigned to a single VLAN, the selection of this VLAN can be important. On the other hand, this point generally does not apply to router-like Catalysts such as the 8500. Because these platforms generally have an IP address assigned to every VLAN, trying to pick the best VLAN for an IP address obviously becomes irrelevant. For more information on the Catalyst 8500, see Chapter 11.
In fact, this Spanning Tree problem is one of the more common issues in flat earth campus networks. The story usually goes something like this: The network is humming along fine until a burst of broadcast data in the management VLAN causes a switch to become overwhelmed to the point where it starts dropping packets. Because some of these packets are BPDUs, the switch falls behind in its Spanning Tree information and inadvertently creates a Layer 2 loop in the network. At this point, the broadcasts in the network go into a full feedback loop as discussed in Chapter 6, “Understanding Spanning Tree.”
If this loop occurs in one or more VLANs other than the management VLAN, it can quickly starve out all remaining trunk bandwidth throughout the entire campus in a flat network. However, the Supervisor CPUs are insulated by the VLAN switching ASICs and continue operating normally (recall that all data forwarding is handled by ASICs in Catalyst gear).
On the other hand, if the loop occurs in the management VLAN (the VLAN where SC0 is assigned), the results can be truly catastrophic. Suddenly, every switch CPU is hit with a tidal wave of broadcast traffic, completely crushing every switch in a downward spiral that virtually eliminates any chance of the network recovering from this problem. If a network is utilizing campus-wide VLANs, this problem can spread to every switch within a matter of seconds.
Recall that SC0 is the management interface used in Catalyst switches such as the 4000s, 5000s, and 6000s. This is where the management IP address is assigned to a Catalyst Supervisor. Because the CPU processes all broadcast packets (and some multicast packets) received on this interface, it is important to not overwhelm the CPU.
How do you know if your CPU is struggling to keep up with traffic in the network? First, you can use the Catalyst 5000 show inband command (this is used for Supervisor IIIs; use show biga on Supervisor Is and IIs [biga stands for Backplane Interface Gate Array]) to display low-level statistics for the device. Look under the Receive section for the RsrcErrors field. This lists the number of received frames that were dropped by the CPU. Second, to view the load directly on the CPU, use the undocumented command ps -c. The final line of this display lists the CPU idle time (subtract it from 100 to calculate the load). Note that ps -c has been replaced by show proc cpu in newer images.
Use the show inband, show biga, ps -c, and show proc cpu commands to determine if your CPU is overloaded.
If you find that you are facing a problem of CPU overload, also read the section “Consider Using Loop-Free Management VLANs” later in this chapter.
Deciding What Number Should be Used for the Management VLAN
A common question surrounds the issue of VLAN numbering for the management VLAN. To appropriately answer this question, you must consider the three types of traffic that pass through Catalyst switches:
- Control traffic
- Management traffic
- End-user traffic
Control traffic encompasses plug and play-oriented protocols such as DISL/DTP (used for trunk state negotiation), CDP, PAgP, and VTP. These protocols always use VLAN 1.
Management traffic includes end-to-end and IP-based protocols such as Telnet, SNMP, and VQP (the protocol used by VMPS). These protocols always use the VLAN assigned to SC0.
End-user traffic is all of the remaining traffic on your network. Obviously, this represents the majority of traffic on most networks.
The overriding principle concerning Management VLAN design is to never mix end-user traffic with the control and management traffic. Failing to abide by this rule will open your network up to the sort of network meltdown scenarios discussed in the previous section.
Never mix end-user traffic with control and management traffic.
When implementing this principle, you must generally choose one of two designs:
- Use VLAN 1 for all control and management traffic while placing end-user traffic in other VLANs (VLANs 2–1000).
- Use VLAN 1 for control traffic, another VLAN (such as VLAN 2) for management traffic, and the remaining VLANs (such as VLANs 3–1000) for end-user traffic.
The first option combines control and management traffic in VLAN 1. The advantage of this approach is management simplicity (it is the default setting and uses a single VLAN). The primary disadvantage of this approach centers around the default behavior of VLAN 1—because VLAN 1 cannot currently be removed from trunk links, it is easy for this VLAN to become extremely large. For example, the use of Ethernet trunks throughout a network along with MLS Layer 3 switching in the MDF/distribution layer will result in VLAN 1 spanning every link and every switch in the campus, exactly what you do not want for your all-important management VLAN. Therefore, placing SC0 in such a large and flat VLAN can be risky.
Although VLAN 1 cannot be removed from Ethernet trunks in current versions of Catalyst code, Cisco is developing a feature that will provide this capability in the future. In short, this feature is expected to allow VLAN 1 to be removed from both trunk links and the VTP VLAN database. Therefore, from a user-interface perspective, enabling this feature effectively removes VLAN 1 from the device. However, from the point of view of the Catalyst internals, the VLAN will actually remain in use, but only for control traffic such as VTP and CDP (for example, a Sniffer will reveal these packets tagged with a VLAN 1 header on trunk links). In other words, this feature will essentially convert VLAN 1 into a “reserved” VLAN that can only be used for control traffic.
This risk can be avoided with the second option where the control and management traffic are separated. Whereas the control traffic must use VLAN 1, the management traffic is relocated to a different VLAN (many organizations choose to use VLAN 2, 999, or 1000). As a result, SC0 and the CPU will be insulated from potential broadcast problems in VLAN 1. This optimization can be particularly important in extremely large campus networks that are lacking in Layer 3 hierarchy.
For the most conservative management/control VLAN design, only use VLAN 1 for control traffic while placing SC0 in its own VLAN (in other words, no end-user traffic will use this VLAN).
Also, when using the upcoming feature that “removes” VLAN 1 from a Catalyst, you are effectively forced to use this approach.
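As a sketch of this conservative design on a CatOS-based Catalyst (the VLAN number and IP addresses used here are hypothetical), the management VLAN and SC0 can be configured as follows:

```
! Create a dedicated management VLAN and move SC0 into it
set vlan 2 name Management
set interface sc0 2 10.1.2.11 255.255.255.0
! Configure a default gateway so management traffic can reach
! the rest of the network
set ip route default 10.1.2.1
```

No end-user ports would be assigned to VLAN 2, insulating the Supervisor CPU from end-user broadcast storms.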
Be Careful When Moving SC0’s VLAN
Although some traffic always uses VLAN 1, other management traffic changes VLANs as SC0 is reassigned. This includes all of the end-to-end protocols (as opposed to the link-by-link protocols that only use VLAN 1) such as:
- Telnet
- SNMP
- The VQP protocol used by VMPS
For these protocols to function, SC0 must be assigned to the correct VLAN with a valid IP address and one or more functioning default gateways to reach the rest of the network. The most common problem here is that people often move SC0 to a different VLAN for troubleshooting purposes and forget to move it back when they are done. Although this can help troubleshoot the immediate problem, it is almost guaranteed to create more problems! Another common problem is failing to use an IP address that is appropriate for the VLAN assigned to SC0.
If you reconfigure SC0 for troubleshooting (or other) purposes, be sure to return it to its original state.
Prune VLANs from Trunks
Two generic technologies are available for creating trunks that share multiple VLANs:
- Implicit tagging
- Explicit tagging
When using implicit tagging, some information already contained in the frame serves as an indicator of VLAN membership. Many vendors have created equipment that uses MAC addresses for this purpose (other possibilities include Layer 3 addresses or Layer 4 port numbers). The downside of this approach is that you must devise some technique for sharing these tags. For example, when using MAC addresses, all of the switches must be told what VLAN every MAC address has been assigned to. Maintaining and synchronizing these potentially huge tables can be a real problem.
To avoid these synchronization issues, Cisco has adopted the approach of using explicit tagging through ISL and 802.1Q. There are two advantages to explicit tagging. First, because the tag is carried in an extra header field that is added to the original frame, VLAN membership becomes completely unambiguous (therefore preventing problems associated with frames bleeding through between VLANs). Second, each switch needs to know only the VLAN assignments of its directly-connected ports (in implicit tagging, the shared tables require every switch to maintain knowledge of every MAC address/end station). As a result, the amount of state information required by each switch is dramatically reduced.
Cisco’s use of explicit tagging creates significant scalability benefits.
However, there is a hidden downside to the advantage of every switch not needing to know what VLANs other switches are using—flooded traffic must be sent to every switch in the Layer 2 network. In other words, by default, one copy of every broadcast, multicast, and unknown unicast frame is flooded across every trunk link in a Layer 2 domain.
Two approaches can be used to reduce the impact of this flooding. First, note that if you are using campus-wide VLANs, this flooding problem also becomes campus-wide. Therefore, one of the simplest and most scalable ways to reduce this flooding is to partition the network with several Layer 3 barriers that utilize routing (Layer 3 switching) technology. This breaks the network into smaller Layer 2 pockets and constrains the flooding to each pocket.
Where Layer 3 switching cannot prevent unnecessary flooding (such as with campus-wide VLANs or within each of the Layer 2 pockets created by Layer 3 switching), a second technique of VLAN pruning can be employed. By using the clear trunk command discussed in Chapter 8, “Trunking Technologies and Applications,” unused VLANs can be manually pruned from a trunk. Therefore, when a given switch needs to flood a frame, it only sends it out access ports locally assigned to the source VLAN and trunk links that have not been pruned of this VLAN. For example, an MDF switch can be configured to flood frames only for VLANs 1 and 2 to a given IDF switch if the switch only participates in these two VLANs. To automate the process of pruning, VTP pruning can be used. For more information on VTP pruning, please refer to Chapter 12, “VLAN Trunking Protocol.”
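For example (the port and VLAN numbers here are hypothetical), an MDF switch carrying only VLANs 1 and 2 toward a given IDF switch could be configured as follows:

```
! Manually prune all VLANs except 1 and 2 from trunk port 1/1
clear trunk 1/1 3-1000
! Alternatively, enable VTP pruning to automate the process
set vtp pruning enable
```

Recall from Chapter 8 that VLAN 1 itself cannot be cleared from an Ethernet trunk with this command.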
One of the most important uses of manual VLAN pruning involves the use of a Layer 2 campus core, the subject of the next section.
VLAN pruning on trunk lines is one of the most important keys to the successful implementation of a network containing Layer 2 Catalyst switching.
Make Layer 2 Cores Loop Free
When using a Layer 2 core in association with the multilayer model, strive to eliminate links that create loops. On one hand, this sounds completely counter-intuitive. After all, most network engineers spend countless hours trying to improve the resiliency of their network’s core. However, by carefully pruning your network of certain links and VLANs, you can eliminate Spanning Tree convergence delays while still maintaining a high degree of redundancy and network resiliency. In other words, simply throwing more links (and VLANs) at a Layer 2 core can actually degrade network reliability by introducing Spanning Tree delays.
Furthermore, there is another advantage to using loop-free Layer 2 cores. When loops exist, Spanning Tree automatically places ports in the Blocking state and therefore reduces the capability to load balance across the core. By eliminating loops and therefore removing Spanning Tree Blocking ports, every path through the core can be utilized to maximize available bandwidth in this important area of the network.
For example, consider the collapsed Layer 2 backbone illustrated in Figure 15-3.
Figure 15-3. A Loop-Free Collapsed Layer 2 Core
The core in Figure 15-3 is formed by a pair of redundant Layer 2 switches each carrying a single VLAN. All four of the MDF switches connect to one of the core switches (Core-A or Core-B), allowing any single link or switch to fail without creating a permanent outage. If the four MDF switches are configured with Catalyst 8500-style switching routers, this automatically results in a loop-free core. On the other hand, the use of Layer 3 routing switches (MLS) in the MDF devices requires more careful planning. Specifically, the core VLAN must be removed from the links to IDF switches as well as from the link between MDF switches.
When using MLS (and other forms of routing switches), be certain that you remove the core VLAN from links within the distribution block (the triangles of connectivity formed by MDF and IDF switches).
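For example, if VLAN 100 were the core VLAN (the VLAN and port numbers here are hypothetical), each MDF switch could be configured as follows:

```
! Prune the core VLAN from the trunks that remain inside the
! distribution block: the two IDF uplinks and the link to the
! peer MDF switch
clear trunk 3/1 100
clear trunk 3/2 100
clear trunk 3/3 100
```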
Larger Layer 2 campus cores require even more careful planning. For example, Figure 15-4 shows a network that covers a larger geographic area and therefore uses four Layer 2 switches within the core. This design is often referred to as a “split Layer 2” core.
Figure 15-4. A Split Layer 2 Core
In this case, the key to creating a fast-converging and resilient core is to actually partition the core into two separate VLANs and not cross-link the switches to each other. The first core VLAN is used for the pair of switches on the left, and the second VLAN is used for the pair of switches on the right. If the core switches in Figure 15-4 were cross-linked or fully meshed and a single VLAN were deployed, Spanning Tree convergence and load balancing issues would become a problem.
Finally, notice that creating a loop-free core requires the use of Layer 3 switching in the MDF/distribution layer closets. When using campus-wide VLANs, the only way to achieve a loop-free core is to remove all loops from the entire network, obviously a risky endeavor if you are at all concerned about redundancy. Again, follow the suggestion of this chapter’s first section and try to always use the multilayer model and the scalability benefits it achieves through the use of Layer 3 switching.
When using split Layer 2 cores, some network designers choose to segregate the traffic by protocol to provide additional control. For example, the Core-A and Core-C switches could be used for IP traffic while Core-B and Core-D carry IPX traffic. This can be a useful way of guaranteeing a certain amount of bandwidth for each protocol.
It is especially useful when you have non-routable protocols that require bridging throughout a large section of the network. This will allow one half of the core to carry the non-routable/bridged traffic while the other half carries the multiprotocol routed traffic.
This section has repeatedly discussed the pruning of VLANs from links. Obviously, one way to accomplish this is to use the clear trunk command discussed in the “Restricting VLANs on a Trunk” section of Chapter 8. However, the simplest and most effective approach for removing VLANs from a campus core is to just use non-trunk links. By merely assigning these ports to the core VLAN, you will automatically prevent VLANs from spanning the core and creating flat earth VLANs.
Use non-trunk links in the campus core to avoid campus-wide end-user VLANs.
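For example (the port and VLAN numbers below are hypothetical), the MDF uplink into the core can be configured as an ordinary access port:

```
! Disable trunking on the core uplink and assign the port to the
! core VLAN; only this single VLAN can now cross the link
set trunk 1/1 off
set vlan 100 1/1
```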
In fact, this technique is also the most effective method of removing VLAN 1 from the core. Recall that current versions of Catalyst code do not allow you to prune VLAN 1 from Ethernet trunks. Therefore, as discussed earlier, this can result in a single campus-wide VLAN in the all-important VLAN 1 (the last place you want to have loops and broadcast problems).
Use non-trunk links in the campus core to avoid a campus-wide VLAN in VLAN 1 (this is where you least want a flat earth VLAN, especially if SC0 is assigned to VLAN 1).
Don’t Forget PLANs
When creating a new design or when your first one or two attempts at solving a particular problem fail, redraw your VLAN design using physical LANs (PLANs). In other words, take the logical topology created through the use of virtual LANs and redraw it using PLANs.
PLAN is a somewhat tongue-in-cheek term the author coined to describe a very serious issue. For some reason, the human brain is almost guaranteed to forget all knowledge of IP subnetting when faced with virtual LANs. People spend days looking at Sniffer traces of complex things like ISL trunks and Spanning Tree to only learn in the end that someone “fat fingered” one digit in an IP address.
So, you ask, what the heck is a PLAN? To answer this mystery, first consider Figure 15-5, a drawing of a typical network using VLANs.
Figure 15-5. Virtual LANs (VLANs)
Figure 15-6 redraws Figure 15-5 using PLANs.
Figure 15-6. Physical LANs (PLANs)
Each VLAN in Figure 15-6 has been redrawn as a separate segment connected to a different router interface. This replaces the logical separation of VLANs with the physical separation used in traditional router and hub designs. However, from a Layer 3 perspective, both networks are identical.
Redrawing the network in this fashion makes it extremely easy to understand. In fact, it makes it painfully obvious that this network contains a problem—the host using 10.0.2.183 is located on the wrong segment/VLAN (it should be on the Blue VLAN).
Although this might seem like a simple example, simple addressing issues trip up even the best of us from time to time. Why not use a technique that removes VLANs as an extra layer of obfuscation? However, PLANs can be useful in many situations other than for your own troubleshooting. Even if you understand why a network is having a problem, PLANs can be useful for explaining it to other people who might not see the problem as clearly. PLANs can also be used to simplify a new design and help you better analyze the traffic flows and any potential problems.
PLANs are no joke—use them to help troubleshoot and explain your network.
How to Handle Non-Routable Protocols
Chapter 11 discussed various approaches to integrating Layer 3 routing with Layer 2 bridging, including options such as bridging between VLANs, Concurrent Routing and Bridging (CRB), and Integrated Routing and Bridging (IRB). Most organizations utilize one of these techniques because of the need to have users in two different VLANs communicate via a non-routable protocol such as NetBEUI or LAT. Although the techniques discussed in Chapter 11 can provide relief in limited situations, it is almost always better to avoid their use entirely. Instead, try to place all users of a particular non-routable protocol in a single VLAN. In situations where Catalyst 8500-style switching routers are in use, this might require IRB to be enabled (the Layer 2 nature of MLS does not require the use of IRB).
For more information, see the “Integration Between Routing and Bridging” section of Chapter 11.
Try to avoid “bridging between VLANs” at all costs.