CCDA Notes Exploring Basic Campus and Data Center Network Design


The multilayer design strategy uses a modular approach, which adds scalability to a design. This section examines how the multilayer design approach can be applied to both the enterprise campus and the enterprise data center.

Understanding Campus Design Considerations

An enterprise campus might be composed of multiple buildings that share centrally located campus resources.

Enterprise campus design considerations fall under three categories:

  • Network application considerations—A network’s applications might include the following:
    • Peer-to-peer applications (for example, file sharing, instant messaging, IP telephony, and videoconferencing)
    • Client/local server applications (for example, applications on servers located close to their clients, typically on the same LAN)
    • Client/server farm applications (for example, e-mail, file sharing, and database applications)
    • Client/enterprise edge server applications (for example, Internet-accessible web and e-commerce applications)
  • Environmental considerations—Network environmental considerations vary with the scope of the network. Three scopes are as follows:
    1. Intrabuilding—An intrabuilding network provides connectivity within a building. The network contains both building access and building distribution layers. Typical transmission media include twisted pair, fiber optics, and wireless technology.
    2. Interbuilding—An interbuilding network provides connectivity between buildings that are within two kilometers of each other. Interbuilding networks contain the building distribution and campus core layers. Fiber-optic cabling is typically used as the transmission media.
    3. Remote buildings—Buildings separated by more than two kilometers might be interconnected by company-owned fiber, a company-owned WAN, or by service provider offerings (for example, metropolitan-area network [MAN] offerings).

Common transmission media choices include the following:

  • Twisted pair
    100-m distance limit
    10-Gbps speed limit
    Low cost
  • Multimode fiber
    2-km distance limit (Fast Ethernet) or 550-m distance limit (Gigabit Ethernet)
    10-Gbps speed limit
    Moderate cost

NOTE
The core diameter in a multimode fiber is large enough to permit multiple paths (that is, modes) for light to travel. This might cause different photons (that is, light particles) to take different amounts of time to travel through the fiber. As distance increases, this leads to multimode delay distortion. Therefore, multimode fiber has a distance limitation of approximately 2 km.

  • Single-mode fiber
    80-km distance limit (Fast Ethernet or 10 Gigabit Ethernet)
    Speed limit of 10-Gbps or greater
    High cost

NOTE
The core diameter in a single-mode fiber is only large enough to permit one path for light to travel. This approach eliminates multimode delay distortion, thus increasing the maximum distance supported.

  • Wireless
    500-m distance limit (at a rate of 1 Mbps)
    Speed limit of 54 Mbps
    Moderate cost

Infrastructure device considerations include the following:

  • When selecting infrastructure devices, Layer 2 switches are commonly used for access layer devices, whereas multilayer switches are typically found in the distribution and core layers.
  • Selection criteria for switches include the need for QoS, the number of network segments to be supported, required network convergence times, and the cost of the switch.

Understanding the Campus Infrastructure Module

When designing the enterprise campus, different areas of the campus (that is, building access, building distribution, campus core, and server farm) require different device characteristics (that is, Layer 2 versus multilayer technology, scalability, availability, performance, and per-port cost).

  • Building access best practices
    Limit the scope of most VLANs to a wiring closet. A VLAN is a single broadcast domain.
    If you use the Spanning Tree Protocol (STP), select Rapid Per VLAN Spanning Tree Plus (RPVST+) for improved convergence.
    When using trunks to support the transmission of traffic from multiple VLANs across a single physical link, set both ends of the trunk to desirable, which causes the switches at each end of the link to send Dynamic Trunking Protocol (DTP) frames in an attempt to negotiate a trunk. Also, set the trunk encapsulation mode to negotiate, to support DTP negotiation.
    Remove (that is, “prune”) unneeded VLANs from trunks.
    Set the VLAN Trunking Protocol (VTP) mode to transparent, because a hierarchical design has little need for a VLAN to span multiple switches.
    When using an EtherChannel, set the Port Aggregation Protocol (PAgP) mode to desirable, which causes both sides of the connection to send PAgP frames in an attempt to create an EtherChannel.
    Consider the potential benefits of implementing routing at the access layer to achieve, for example, faster convergence times.
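These trunking, VTP, and EtherChannel practices map to a handful of Cisco IOS commands. The following is a minimal sketch for an access switch's uplinks; the interface names and VLAN numbers are illustrative, and exact syntax varies by platform and IOS release.

```
! Use Rapid PVST+ for faster STP convergence
spanning-tree mode rapid-pvst
!
! Keep VTP from propagating VLAN changes beyond this switch
vtp mode transparent
!
! Uplink pair: DTP negotiates the trunk, unneeded VLANs are pruned,
! and PAgP desirable attempts to form an EtherChannel
interface range GigabitEthernet0/1 - 2
 switchport mode dynamic desirable
 switchport trunk allowed vlan 10,20
 channel-group 1 mode desirable
```

With both ends set to desirable (rather than auto), the trunk and the EtherChannel still form even if one side is later reset to its default negotiation mode.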
  • Building-distribution considerations
    Switches selected for the building distribution layer require wire-speed performance on all their ports. The need for such high performance stems from the roles of a building distribution layer switch: acting as an aggregation point for access layer switches and supporting high-speed connectivity to campus core layer switches.
    The key roles of a building distribution layer switch demand redundant connections to the campus core layer. Design redundancy such that a distribution layer switch can perform equal-cost load balancing to the campus core layer; however, if a link were to fail, the remaining link(s) should have enough capacity to carry the increased traffic load.
    Redundancy technologies such as Stateful Switchover (SSO) and Nonstop Forwarding (NSF) offer failover times in the range of one to three seconds. Also, some platforms support the In Service Software Upgrade (ISSU) feature, which allows you to upgrade a switch’s Cisco IOS image without taking the switch out of service.
    Building distribution layer switches should support network services such as high availability, quality of service (QoS), and policy enforcement.
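On platforms with redundant supervisors (for example, Catalyst 6500 series switches), SSO and NSF are enabled along these lines; OSPF is shown only as an example routing protocol, and exact syntax varies by platform and IOS release.

```
! Stateful Switchover between the redundant supervisors
redundancy
 mode sso
!
! Nonstop Forwarding lets the data plane keep forwarding
! while the routing protocol gracefully restarts
router ospf 1
 nsf
```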
  • Campus core considerations
    Evaluate whether a campus core layer is needed. Campus core layer switches interconnect building distribution layer switches, and Cisco recommends that you deploy a campus core layer when interconnecting three or more buildings or four or more pairs of building distribution layer switches.
    Determine the number of high-speed ports required to aggregate the building distribution layer.
    For high-availability purposes, the campus core should always include at least two switches, each of which can provide redundancy to the other.
    Decide how the campus core layer connects to the enterprise edge and how WAN connectivity is provided. Some designs use edge distribution switches in the core to provide enterprise edge and WAN connectivity. For larger networks that include a data center, enterprise edge and WAN connectivity might be provided through the data center module.
  • Server farm considerations
    Determine server placement in the network. For networks with moderate server requirements, common types of servers can be grouped together in a separate server farm module connected to the campus core using multilayer switches. Access control lists (ACL) in these multilayer switches limit access to these servers.
    All server-to-server traffic should be kept within the server farm module and not be propagated to the campus core.
    For large network designs, consider placing the servers in a separate data center, which could potentially reside in a remote location.
    Consider using network interface cards (NIC) in servers that provide at least two ports. One NIC port could be active, with the other port in standby mode. Alternatively, some NICs support EtherChannel, which could increase the effective throughput between a server and the switch to which it connects.
    For security, place servers with similar access policies in the same VLANs, and then limit interconnections between servers in different policy domains using ACLs on the server farm’s multilayer switches.
    Understand the traffic patterns and bandwidth demands of applications deployed on the servers. Some applications (for example, backup applications or real-time interactive applications) place a high bandwidth demand on the network. By understanding such application characteristics, you can better size the server farm uplinks to prevent oversubscription.
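The ACL-based access control described above can be sketched on a server farm multilayer switch as follows; the subnet, VLAN number, and permitted ports are purely illustrative.

```
! Allow only HTTP/HTTPS from the rest of the network to the web server VLAN
ip access-list extended WEB-SERVERS-OUT
 permit tcp any 10.1.10.0 0.0.0.255 eq www
 permit tcp any 10.1.10.0 0.0.0.255 eq 443
 deny   ip any any
!
! Apply outbound on the SVI, filtering traffic routed toward the servers
interface Vlan10
 ip access-group WEB-SERVERS-OUT out
```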

Understanding Enterprise Data Center Considerations

An enterprise data center’s architecture uses a hierarchical design, much like the campus infrastructure. However, there are subtle differences in these models. Large networks that contain many servers traditionally consolidated server resources in a data center. However, data center resources tended not to be effectively used because the supported applications required a variety of operating systems, platforms, and storage solutions. These diverse needs resulted in multiple application silos, which can be thought of as separate application “islands.”

Today, the former server-centric data center model is migrating to a service-centric model. The main steps in this migration are as follows:

  1. Use virtual machine software, such as VMware, to remove the requirement that applications running on different operating systems must be located on different servers.
  2. Remove network storage from the individual servers, and consolidate the storage in shared storage pools.
  3. Consolidate I/O resources, such that servers have on-demand access to I/O resources to reach other resources (for example, other servers or storage resources).

The Cisco enterprise data center architecture consists of two layers:

  • Networked Infrastructure Layer—The Networked Infrastructure Layer contains computing and storage resources, which are connected in such a way as to meet bandwidth, latency, and protocol requirements for user-to-server, server-to-server, and server-to-storage connectivity.
  • Interactive Services Layer—The Interactive Services Layer supports such services as Application Networking Services (ANS) (for example, application acceleration) and infrastructure enhancing services (for example, intrusion prevention).

Data centers can leverage the Cisco enterprise data center architecture to host a wide range of legacy and emerging technologies, including N-tier applications, web applications, blade servers, clustering, service-oriented architecture (SOA), and mainframe computing.

An enterprise data center infrastructure design requires sufficient port density and L2/L3 connectivity at the access layer. The design must also support security services (for example, ACLs, firewalls, and intrusion detection systems [IDS]) and server farm services (for example, content switching and caching). Consider the following design best practices for an enterprise data center’s access, aggregation, and core layers:

  • Data center access layer design best practices
    Provide for both Layer 2 and Layer 3 connectivity.
    Ensure sufficient port density to meet server farm requirements.
    Support both single-attached and dual-attached servers.
    Use RPVST+ as the STP approach for loop-free Layer 2 topologies.
    Offer compatibility with a variety of uplink options.
  • Data center aggregation layer design best practices
    Use the data center aggregation layer to aggregate traffic from the data center access layer.
    Provide for advanced application and security options.
    Maintain state information for connections, so that hardware failover can occur more rapidly.
    Offer Layer 4 through 7 services, such as firewalling, server load balancing, Secure Sockets Layer (SSL) offloading, and IDS.
    Provision processor resources to accommodate a large STP processing load.
  • Data center core layer design best practices
    Evaluate the need for a data center core layer by determining whether the campus core switches have sufficient 10-Gigabit Ethernet ports to support both the campus distribution and data center aggregation modules.
    If you decide to use a data center core, use the separate cores (that is, the campus core and the data center core) to create separate administrative domains and policies (for example, QoS policies and ACLs).
    If you decide that a data center core is not currently necessary, anticipate how future growth might necessitate the addition of a data center core. Determine whether it would be worthwhile to initially install a data center core, instead of adding one in the future.

Designers commonly use modular chassis (for example, Cisco Catalyst 6500 or 4500 series switches) in an enterprise access layer. Although this design approach does offer high performance and scalability, challenges can emerge in a data center environment. Server density has increased thanks to 1RU (one rack unit) and blade servers, resulting in the following issues:

  • Cabling—Each server typically contains three to four connections, making cable management between high-density servers and modular switches more difficult.
  • Power—Increased server and switch port density requires additional power to feed a cabinet of equipment.
  • Heat—Additional cabling under a raised floor and within a cabinet can restrict the airflow required to cool equipment located in cabinets. Also, due to higher-density components, additional cooling is required to dissipate the heat generated by switches and servers.

One approach to addressing these concerns is simply not to deploy high-density designs. Another approach is to use rack-based switching, with 1RU top-of-rack switches, which allows the cables between the servers and switches to be confined within a cabinet. If you prefer to use modular switches, an option is to locate modular switches (for example, Cisco Catalyst 6500 series switches) much like “bookends” on each end of a row of cabinets. This approach reduces administration overhead because you have fewer switches to manage compared to using multiple 1RU switches.

About the author

Scott
