Servers play a critical role in modern networks. Given this importance, they should be considered early in the design process. This section discusses some common issues associated with server farm design.
Where to Place Servers
Most organizations are moving toward centralized server farms to allow better support and management of the servers themselves. Given this trend, it is generally best to position a centralized server farm as another distribution block attached to the campus core. This concept is illustrated in Figure 15-15.
Figure 15-15. Centralized Server Farm
The servers in Figure 15-15 can be connected by a variety of means. The figure shows the servers directly connected to the pair of Layer 3 switches that link to the campus core. An alternative design is to use one or more Layer 2 switches within the server farm. These Layer 2 devices can then be connected to the Layer 3 switches through Gigabit Ethernet or Gigabit EtherChannel. Although some servers connect to only a single switch, servers equipped with redundant NICs can attach to both Layer 3 switches for a measure of fault tolerance.
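As a rough sketch of the Layer 2 alternative, the uplinks from a server farm Layer 2 switch to the Layer 3 switches might be bundled into a Gigabit EtherChannel trunk using Catalyst (CatOS) commands similar to the following. The module/port numbers and trunk encapsulation are hypothetical, and exact syntax varies by software release:

```
! On the server farm Layer 2 switch:
! Bundle two Gigabit ports into a single EtherChannel uplink
set port channel 1/1-2 on

! Carry the server farm VLAN(s) across the uplink as an ISL trunk
set trunk 1/1 on isl
set trunk 1/2 on isl
```

A matching channel and trunk configuration would be applied on the Layer 3 switch at the other end of the bundle.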
The key to this design is the Layer 3 barrier created by the pair of Layer 3 switches that link the server farm to the core. Not only does this insulate the server farm from the core, but it also creates a much more modular design.
Some network designs directly connect the servers to the core as shown in Figure 15-16.
Figure 15-16. Connecting Servers Directly to the Campus Core
Figure 15-16 illustrates a popular method used for core-attached servers—using an ATM core. By installing LANE-capable ATM NICs in the servers, the servers can directly join the ELAN used in the campus core. A similar design could have been built using ISL or 802.1Q NICs in the servers.
Most organizations run into one of two problems when using servers directly connected to the campus core:
- Inefficient flows
- Poor performance
The first problem occurs with implementations of the multilayer model where the routing component contained in the MDF/distribution layer devices can lead to inefficient flows. For example, consider Figure 15-16. Assume that one of the servers needs to communicate with an end user in Building 1. When using default gateway technology, the server does not know which MDF Layer 3 switch to send the packets to. Some form of Layer 3 knowledge is required as packets leave the server farm. One way to achieve this is to run a routing protocol on the servers themselves. However, this can limit your choice of routing protocols throughout the remainder of the network, and many server administrators are reluctant to configure routing protocols on their servers. A cleaner approach is to simply position the entire server farm behind a pair of Layer 3 switches, as shown in Figure 15-15.
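To make the cleaner approach concrete, the pair of Layer 3 switches fronting the server farm would typically share a single default gateway address using a protocol such as HSRP, so the servers need no Layer 3 knowledge of their own. A minimal sketch in Cisco IOS syntax, assuming a hypothetical server farm VLAN 10 and subnet 10.1.10.0/24:

```
! On the primary server farm Layer 3 switch
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt

! The second Layer 3 switch uses its own address (for example,
! 10.1.10.3) with the default priority. All servers simply point
! at 10.1.10.1 as their default gateway; if the primary switch
! fails, the standby switch takes over the shared address.
```

This keeps routing intelligence out of the servers entirely while still providing gateway redundancy.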
The second problem occurs with implementations of campus-wide VLANs where the servers can be made to participate in every VLAN used throughout the campus (for example, most LANE NICs allow multiple ELANs to be configured). Although this sounds extremely attractive on paper (it can eliminate most of the need for routers in the campus), these multi-VLAN NICs often have poor performance and are subject to frequent episodes of strange behavior (for example, problems with browsing services in a Microsoft-based network). Moreover, this approach suffers from all of the scalability concerns discussed earlier in this chapter and in Chapters 14 and 17.
In general, it is best to always place a centralized server farm behind Layer 3 switches. Not only does this provide intelligent forwarding to the MDF switches located throughout the rest of the campus, but it also provides a variety of other benefits:
- This placement encourages fast convergence.
- Access lists can be configured on the Layer 3 switches to secure the server farm.
- Server-to-server traffic is kept off of the campus core. This can not only improve performance, but it can also improve security.
- It is highly scalable.
- Layer 3 switches have excellent multicast support, an important consideration for campuses making widespread use of multicast technology.
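As an illustration of the access-list benefit mentioned above, the Layer 3 switches in front of the server farm can filter which services the rest of the campus may reach. A hypothetical IOS sketch, assuming the server farm uses VLAN 10 and subnet 10.1.10.0/24 and hosts only web servers:

```
! Permit campus users to reach web services in the server
! farm subnet; deny (and log) everything else bound for it
access-list 110 permit tcp any 10.1.10.0 0.0.0.255 eq www
access-list 110 deny   ip any 10.1.10.0 0.0.0.255 log

! Apply the filter on traffic leaving the Layer 3 switch
! toward the server farm
interface Vlan10
 ip access-group 110 out
```

The real access list would obviously be tailored to the protocols actually hosted in the farm; the point is that a single pair of Layer 3 switches provides one convenient enforcement point for the entire server farm.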
Consider Distributed Server Farms
Although centralized server farms are becoming increasingly common because they simplify server management, they do create problems from a bandwidth management perspective because the aggregate data rate can be extremely high. Although high-speed Layer 2 and Layer 3 switches have mitigated this problem to a certain extent, network designers should look for opportunities to intelligently distribute servers throughout the organization. Although this point is obviously true with regard to wide-area links, it can also be true of campus networks.
One occasion where servers can fairly easily be distributed is in the case of departmental servers (servers that are dedicated to a single organizational unit). These devices can be directly connected to the distribution block network they serve. In general, these servers are attached in one of two locations:
- They can be directly connected to the IDF switch that handles the given department.
- They can be attached to the MDF switches in that building or distribution block. This also presents the opportunity to create mini server farms in the MDF closets of every building. Departmental file and print servers can be attached here, whereas enterprise and high-maintenance servers can be located in the centralized server farm.
Use Fault-Tolerant NICs
Many organizations spend numerous hours and millions of dollars creating highly redundant campus networks. However, much of this money and effort can go to waste unless the servers themselves are also redundant. A fairly simple way to improve a server’s redundancy is to install some sort of redundant NICs.
Although using redundant NICs can be as simple as just installing two normal NICs in each server, this approach can lead to problems in the long run. Because most network operating systems require each of these NICs to use different addresses, clients need some mechanism to fail over to the address assigned to the secondary NIC when the primary fails. This can be challenging to implement.
Instead, it is advisable to use special NICs that automatically support failover using a single MAC and Layer 3 address. In this case, the failover can be completely transparent to the end stations. A variety of these fault-tolerant NICs are available (some also support multiple modes of fault tolerance, allowing customization of network performance).
Fault-tolerant NICs allow two (or more) server NICs to share a single Layer 2 and Layer 3 address.
When selecting a fault-tolerant NIC, also consider what sort of load balancing it supports (some do no load balancing, and others only load balance in one direction). Finally, closely analyze the technique used by the NICs to inform the rest of the network that a change has occurred. For example, many NICs perform a gratuitous ARP to force an update in neighboring switches.
In some cases, this update process can be fairly complex and require a compromise of timer values. For example, when using fault-tolerant Ethernet NICs in conjunction with a LANE backbone, it is not enough to simply update the Layer 2 CAM tables and Layer 3 ARP tables. If redundant LANE modules are used to access the server farm, the LANE LE-ARP tables (containing MAC address to ATM NSAP address mappings) also need to be updated. When faced with this issue, you might be forced to disable PortFast and intentionally incur a Spanning Tree delay. The upside of this delay is that it triggers a LANE topology change message and forces the LE-ARP tables to update.
Obviously, redundant NICs should be carefully planned and thoroughly tested before a real network outage occurs.
You may need to disable PAgP on server ports using fault-tolerant NICs to support the binding protocols used by some of these NICs during initialization.
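The PortFast and PAgP adjustments described above are made per port on the switch. On a CatOS-based Catalyst, the commands might look like the following sketch (module/port numbers are hypothetical, and channel-mode syntax varies by software release, with later versions using a `mode` keyword):

```
! Disable PortFast on the server ports so that a failover
! incurs the Spanning Tree delay and triggers a LANE
! topology change (updating the LE-ARP tables)
set spantree portfast 3/1 disable
set spantree portfast 3/2 disable

! Turn off PAgP negotiation on the same ports to avoid
! interfering with the NICs' own binding protocols
set port channel 3/1-2 off
```

As the text notes, these settings trade startup and failover speed for correct operation of the fault-tolerant NICs, so they should be tested carefully before an outage forces the issue.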
Use Secured VLANs in Server Farms
Cisco is developing a new model for VLANs to provide simple but effective security for applications such as very large server farms. With this feature, one or more ports on each switch are designated as uplink ports; these link the servers to one or more default gateways and support two-way access to all servers within the VLAN. However, the remaining ports within the VLAN, designated as access or server ports, cannot communicate with each other.
This creates an easy-to-administer environment where the servers have full communication with the network's backbone/core but no risk of the servers communicating with each other. This feature will be extremely useful in situations such as Internet service provider (ISP) web hosting facilities where communication between servers from different clients must be tightly controlled. Whereas earlier solutions generally involved creating hundreds of small VLANs and IP subnets, Cisco's new VLAN model will be much easier to implement and maintain (all of the servers can use a single VLAN and IP subnet) while providing tight security.
This feature had not received an official name at the time this book went to press. Contact your Cisco sales team for additional information.