Changing Traffic Patterns
Any effective campus design must take traffic patterns into account. Otherwise, switching and link bandwidth are almost certainly wasted. The good news is that most modern campus networks follow several trends that create unmistakable flows. This section discusses the traditional campus traffic patterns and shows how popular new technologies have drastically changed them.
The earliest seeds of today’s campus networks began with departmental servers. In the mid-1980s, the growth of inexpensive PCs led many organizations to install small networks utilizing Ethernet, ARCnet, Token Ring, LocalTalk, and a variety of proprietary solutions. Many of these networks utilized PC-based server platforms such as Novell’s NetWare. Not only did this promote the sharing of information, but it also allowed expensive hardware such as laser printers to be shared.
Throughout the late-1980s, these small networks began to pop up throughout most corporations. Each network was built to serve a single workgroup or department. For example, the finance department would have a separate network from the human resources department. Most of these networks were extremely decentralized. In many cases, they were installed by non-technical people employed by the local workgroup (or outside consultants hired by the workgroup). Although some companies provided centralized support and guidelines for deploying these departmental servers, few companies provided links between these pockets of network computing.
In the early 1990s, multiprotocol routers began to change all of this. Routers suddenly provided the flexibility and scalability to begin hooking all of these “network islands” into one unified whole. Although routers allowed media-independent communication across the many different types of data links deployed in these departmental networks, Ethernet and Token Ring became the media of choice. Routers were also used to provide seamless communication across wide-area links.
Early routers were obviously extremely bandwidth-limited compared to today’s products. How, then, did these networks function when even the Gigabit networks of today strain to keep up? Two main factors explain this: the quantity of traffic and the type of traffic.
First, campus networks carried considerably less traffic when these early router-based designs were popular. Simply put, fewer people used the network, and those who did tended to run less network-intensive applications.
However, this is not to say that early networks were like a 15-lane highway with only three cars on it. Given the lower available bandwidth of these networks, many had very high average and peak utilization levels. For instance, before the rise of client/server computing, many databases utilized file servers as a simple “hard drive at the end of a long wire.” Thousands of dBase and Paradox applications were deployed that essentially pulled the entire database across the wire for each query. Therefore, although the quantity of traffic has grown dramatically, another factor is required to explain the success of these older, bandwidth-limited networks.
To explain this difference, the type of traffic must be considered. Although central MIS organizations used routers and hubs to merge the network into a unified whole, most of the traffic remained on the local segment. In other words, although the networks were linked together, the workgroup servers remained within the workgroups they served. For example, a custom financial application developed in dBase needed to use only the finance department’s server; it never needed to access the human resource server. The growing amount of file and printer server traffic also tended to follow the same patterns.
These well-established and localized traffic flows allowed designers to utilize the popular 80/20 rule. Eighty (or even 90+) percent of the traffic in these networks remained on the local segment. Hubs (or possibly early “switching hubs”) could support this traffic with relative ease. Because only 20 (or even less than 10) percent of the traffic needed to cross the router, the limited performance of these routers did not pose significant problems.
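The 80/20 rule translates directly into a capacity calculation: only the non-local fraction of each segment’s traffic ever has to be forwarded by the router. The following sketch illustrates this; the segment counts and utilization figures are hypothetical examples, not values from the text.

```python
# Illustrative sketch of the 80/20 rule: only the non-local fraction
# of each segment's traffic must be forwarded by the router.
# All figures below are hypothetical, not measurements.

def router_load_mbps(segments, traffic_per_segment_mbps, local_fraction):
    """Aggregate traffic (Mbps) that must cross the router."""
    return segments * traffic_per_segment_mbps * (1.0 - local_fraction)

# Ten 10-Mbps shared segments, each carrying ~6 Mbps, with 80 percent
# of that traffic staying on the local wire:
load = router_load_mbps(segments=10, traffic_per_segment_mbps=6.0,
                        local_fraction=0.80)
print(f"Router must forward ~{load:.0f} Mbps")
```

Even a modest early router could keep up with the roughly 12 Mbps that results, which is why these designs worked despite their limited forwarding performance.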
With blinding speed, all of this began to change in the mid-1990s. First, enterprise databases were deployed. These were typically large client/server systems that utilized a small number of highly centralized servers. In theory, this dramatically cut the amount of traffic on the network. Instead of pulling the entire database across the wire, the application used technologies such as Structured Query Language (SQL) to allow intelligent database servers to filter the data before transmitting it back to the client. In practice, though, client/server systems began to significantly increase the utilization of network resources for a variety of reasons. First, the use of client/server technology grew at a staggering rate.
Although each query might generate only one-fourth of the traffic of earlier systems, many organizations saw the number of transactions increase by a factor of 10–100. Second, the centralized nature of these applications completely violated the 80/20 rule. For this traffic component, 100 percent needed to cross the router and leave the local segment.
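The net effect of smaller queries but far more transactions can be worked out directly. The one-fourth and 10–100x multipliers below are the figures quoted above; everything else is arithmetic.

```python
# Worked example of the shift described above. Each client/server
# query carries roughly 1/4 the traffic of the old "hard drive at
# the end of a long wire" approach, but transaction volume grows
# by 10-100x. The net effect is more traffic, not less.

per_query_factor = 0.25          # ~1/4 the traffic per query
transaction_growth = (10, 100)   # transactions grow 10x to 100x

for growth in transaction_growth:
    net = per_query_factor * growth
    print(f"{growth}x transactions -> {net:g}x total traffic")
```

Total traffic therefore grows 2.5–25 times, and because this component now crosses the router 100 percent of the time rather than roughly 20 percent, the load on the routed backbone grows even faster than these totals suggest.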
Although client/server applications began to tax traditional network designs, it took the rise of Internet and intranet technologies to completely outstrip available router (and hub) capacity. With Internet-based technology, almost 100 percent of the traffic was destined to centralized servers. Web and e-mail traffic generally went to a small handful of large UNIX boxes running HTTP, Simple Mail Transfer Protocol (SMTP), and Post Office Protocol (POP) daemons. Internet-bound traffic was just as centralized because it needed to funnel through a single firewall device (or bank of redundant devices).
This trend of centralization was further accelerated with the rise of server farms that began to consolidate workgroup servers. Instead of high-volume file and print server traffic remaining on the local wire, everything began to flow across the corporate backbone.
As a result, the traditional 80/20 rule has become inverted. In fact, most modern networks have less than five percent of their traffic constrained to the local segment. When this is combined with the fact that these new Internet-based technologies are wildly popular, it is clear that the traditional router and hub design is no longer appropriate.
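The inversion can be stated as simple arithmetic, using the 80 percent local figure from the traditional rule and the less-than-five-percent figure quoted above:

```python
# Cross-router share of traffic under the traditional 80/20 pattern
# versus the inverted modern pattern (<5 percent local). Only the
# local-fraction figures come from the text; the rest is arithmetic.

def cross_router_share(local_fraction):
    """Fraction of traffic that must leave the local segment."""
    return 1.0 - local_fraction

old = cross_router_share(0.80)   # traditional 80/20 design
new = cross_router_share(0.05)   # modern, server-farm-centric design
print(f"Old: {old:.0%} of traffic crosses the router")
print(f"New: {new:.0%} of traffic crosses the router")
print(f"Relative increase: {new / old:.2f}x per unit of traffic")
```

Per unit of traffic, nearly five times as much now hits the router, and that is before accounting for the overall growth in traffic volume.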
Be sure to consider changing traffic patterns when designing a campus backbone. In doing so, try to incorporate future growth and provide adequate routing performance.