Network Security Basics and the Need for Network Security

In this section, we examine some of the key principles involved in creating a secure network and establish the building blocks that will be used in formulating an effective security policy. Specifically, we do the following:

  • Show how open networks and knowledgeable attackers with sophisticated attack methods create the requirement for flexible, dynamic network security policies.
  • Examine the CIA triad: confidentiality, integrity, and availability.
  • Define data classification categories in the public and private sectors.
  • Examine the three top-level types of security controls: administrative, technical, and physical.
  • Explore some of the incident response methods when a security breach has occurred.
  • List key laws and ethical codes by which INFOSEC professionals are bound.

The following section illustrates how the advent of sophisticated attack methods, combined with open networks, has resulted in a growing need for network security and for flexible security policies that can be dynamically adjusted to meet the threat.

The Threats

According to Cisco, there are two major categories of threats to network security:

  • Internal threats. Examples are network misuse and unauthorized access.
  • External threats. Examples are viruses and social engineering.

The most foolproof way of protecting a network against external threats would be to sever its connections to public networks completely. In theory, this works; in practice, it does not, because many businesses require connectivity to public networks, such as the Internet, to conduct e-commerce in today’s connected world. The challenge, therefore, is to strike a balance among three often-competing needs:

  • Evolving business requirements
  • Freedom of information initiatives
  • Protection of data: private, personal, and intellectual property

It is axiomatic in the field of network security that the tradeoff is largely between the first two items, which are necessary for a business or government organization to reach the public, and the last item. Essentially, the battle is fought between these opposing camps—openness vs. security. Often, more security means less openness, and vice versa.

Internal Threats

According to Cisco, internal threats are the most serious because insiders often have the most intimate knowledge of the network. They leverage that knowledge of the internal network to achieve security breaches, and they often don’t need to crack passwords because they already have sufficient access.

Insider attacks often render technical security solutions ineffective. This problem is exacerbated because human nature dictates that often the last place we look for security breaches is within the fortification! We are so busy looking for the enemy climbing the outside walls that we don’t look behind us. A best practice for hardening systems from internal (as well as external) threats includes following the systems’ vendor recommendations.

External Threats

External attackers lack the insider’s knowledge and often rely on technical tools to breach your network’s security. Technical tools such as Intrusion Prevention Systems (IPSs), firewalls, and routers with access control lists (ACLs) are usually effective in mitigating an organization’s vulnerability to this type of attack.

Firewalls and ACLs are discussed in Chapter 5, “Using Cisco IOS Firewalls to Implement a Network Security Policy.” IPSs are discussed in Chapter 8, “Network Security Using Cisco IOS IPS.”
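The first-match, implicit-deny behavior of a router ACL can be sketched in a few lines of Python. This is a conceptual illustration only; the rule format and string-prefix matching used here are simplified assumptions, not Cisco IOS syntax.

```python
# Sketch of how a router evaluates an access control list (ACL):
# rules are checked top-down, the first match wins, and an unmatched
# packet falls through to an implicit "deny" at the end of the list.

def evaluate_acl(rules, packet):
    """rules: list of (action, src_prefix, dst_port); packet: (src_ip, dst_port).
    A dst_port of None in a rule means 'any port'."""
    src_ip, dst_port = packet
    for action, src_prefix, port in rules:
        if src_ip.startswith(src_prefix) and (port is None or port == dst_port):
            return action           # first matching rule decides
    return "deny"                   # implicit deny-all at the end

acl = [
    ("permit", "10.1.1.", 80),      # allow web traffic from the branch subnet
    ("deny",   "10.1.",   None),    # block everything else from 10.1.0.0/16
    ("permit", "",        443),     # allow HTTPS from anywhere
]

print(evaluate_acl(acl, ("10.1.1.5", 80)))    # permit (first rule)
print(evaluate_acl(acl, ("10.1.2.9", 443)))   # deny (second rule matches first)
print(evaluate_acl(acl, ("192.0.2.7", 22)))   # deny (implicit deny)
```

Note that rule order matters: swapping the first two rules would block the branch subnet’s web traffic, which is exactly the kind of mistake the chapters above help you avoid.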

Know the difference between internal and external threats and how they may be mitigated.

Other Reasons for Network Insecurity

An alarming trend is that as the sophistication of hacker tools increases, the technical knowledge required to use them decreases. According to the 2007 CSI/FBI Computer Crime and Security Survey, organizations suffered a two-fold increase in financial losses despite slightly fewer reported attacks over the report’s four-year period. Financial fraud has overtaken viruses as the greatest cause of loss.

The 2007 CSI/FBI Computer Crime and Security Survey can be downloaded from this site.

In the past, hackers have been motivated as much by notoriety and intellectual challenge as by profit. A disturbing recent trend is what Cisco calls “custom” threats, which focus on the application layer of the OSI model. These attacks may be written to exploit a known vulnerability in an organization’s own customized application. Traditional signature-based intrusion detection systems (IDSs) and IPS products will not detect this type of attack because they match traffic against a database of known attack signatures. Even following best practices in ensuring that vendor patches are tested and applied regularly to application servers may prove ineffective. Compounding the issue, the applications themselves may have been written by programmers who have little or no formal training in network security, let alone an appreciation for the subject. According to Theresa Lanowitz of Gartner Inc., 75 percent of all attacks today are application layer attacks, with three out of four businesses vulnerable to this type of attack.

You can read more about the emergence of custom threats and their ability to go undetected by traditional signature-based intrusion detection systems (IDSs) and IPS products at this site:

The CIA Triad

This section describes the three primary purposes of network security, which are to ensure the confidentiality, integrity, and availability of an organization’s data—the C-I-A triad. Here are some basic definitions:

  • Confidentiality. Ensuring that only authorized users have access to sensitive data
  • Integrity. Ensuring that only authorized entities can change sensitive data. May also guarantee origin authentication (see the following note), meaning an assurance that the data originated from an authorized entity (like an individual).
  • Availability. Ensuring that systems and the data that they provide access to remain available for authorized users.

Origin authentication is often overlooked in designing network security architecture. In some texts, this is the “A” in CIA.

A security professional must constantly weigh the tradeoffs between threats, their likelihood, the costs to implement security countermeasures, and cost versus benefit. In the end, someone has to pay for security (more on this later in the chapter), and there must be a solid business case and return on investment (ROI) for the measures implemented.
Let’s look at confidentiality, integrity, and availability separately.


Confidentiality

Confidentiality is often discussed in the context of hiding an organization’s data with encryption technologies—using a Virtual Private Network (VPN), for example. In a broader context, assuring confidentiality involves any method of separating an organization’s data from its adversaries. Here are some other thoughts about confidentiality:

  • Confidentiality means that only authorized users can read sensitive data.
  • Confidentiality countermeasures provide separation of data from users through the use of:
      • Physical separation
      • Logical separation

Thus, the risk of confidentiality breaches can be minimized by effective enforcement of access control, thereby limiting access to the following:

  • Network resources through use of VLANs, firewall policies, and physical network separation.
  • Files and objects through use of operating system-based controls, such as Microsoft Active Directory and domain controls and Unix host security.
  • Data through use of authentication, authorization, and accounting (AAA) at the application level.
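The layered access controls listed above can be sketched as follows. This is a minimal illustration with made-up user, VLAN, and file names; in practice, each layer is enforced by network devices, the operating system, and the application itself.

```python
# Illustrative sketch (not any specific product's API) of layered access
# control: a user must pass a network-level check (VLAN membership) before
# an application-level authorization check is even consulted.

NETWORK_ACL = {"alice": "engineering-vlan", "bob": "guest-vlan"}   # assumed network separation
FILE_PERMISSIONS = {"design.doc": {"alice"}}                       # assumed app-level authorization

def can_read(user, filename, required_vlan="engineering-vlan"):
    if NETWORK_ACL.get(user) != required_vlan:          # logical separation layer
        return False
    return user in FILE_PERMISSIONS.get(filename, set())  # AAA-style authorization layer

print(can_read("alice", "design.doc"))  # True
print(can_read("bob", "design.doc"))    # False: wrong VLAN, never reaches the file check
```

The point of the layering is defense in depth: a mistake in the file permissions alone does not expose the data to users outside the permitted network segment.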

When attackers successfully read sensitive data that they are not authorized to view, a breach has occurred. Such a breach is almost impossible to detect because the attacker may simply copy the data from the network and examine it with tools offline, leaving no trace. This is why much of the focus of network security, in the context of confidentiality, is on preventing the breach in the first place. Technologies such as Virtual Private Networks (VPNs) are an example; these are discussed in Chapter 7, “Virtual Private Networks with IPsec.”


Integrity

Data integrity guarantees that only authorized entities can change sensitive data. It can also provide optional authentication, proving that only authorized entities created the sensitive data. This provides for data authenticity. There are a number of methods to ensure data integrity and authenticity, including the use of hashing functions and digital signatures. Some of these methods are described in Chapter 6, “Introducing Cryptographic Services,” and will not be discussed here. Integrity services provide for some guarantee that:

  • Data cannot be changed except by authorized users.
  • Changes made by unauthorized users can be detected.
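Both guarantees above can be illustrated with a keyed hash (HMAC), one of the hashing-based mechanisms covered in Chapter 6. Because only holders of the shared key can produce a valid tag, a verified tag also supports origin authenticity. The key and message below are made up for the example.

```python
import hashlib
import hmac

# Minimal integrity/authenticity sketch: compute an HMAC tag over a message,
# then verify it. Any change to the message invalidates the tag.

key = b"shared-secret-key"                       # assumed shared secret
message = b"transfer $100 to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)    # constant-time comparison

print(verify(key, message, tag))                           # True: unmodified
print(verify(key, b"transfer $9999 to account 666", tag))  # False: tampering detected
```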


Availability

Availability refers to the safeguards that provide for uninterrupted access to data and other computing resources on a network during either accidental or deliberate network or computer disruptions. Given the complexity of systems and the variety of current attack methods, this is one of the most difficult security services to guarantee. Attacks that prevent legitimate users from accessing system or network resources are called Denial of Service (DoS) attacks. DoS attacks are usually caused by one of two things:

  • A device or an application becomes unresponsive because it is unable to handle an unexpected condition.
  • An attack (remember, this can be accidental!) creates a large amount of data, causing a device or application to fail.

DoS attacks are relatively easy to launch, often with freely downloadable tools such as vulnerability assessment tools. There is a fine line between a network probe designed to determine a network’s resiliency against various types of attack and an actual DoS attack. Some vulnerability assessment tools even let the user choose whether to enable probes that are known to be dangerous when leveraged against vulnerable networks.
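One common availability countermeasure against the flood-style DoS condition just described is rate limiting. The token-bucket sketch below is a generic illustration; the rate and burst parameters are made-up values, and real devices implement this in hardware or in the network stack. Timestamps are passed in explicitly to keep the example deterministic.

```python
# Illustrative token-bucket rate limiter: each request consumes one token,
# and tokens refill at a fixed rate up to a maximum burst size. Excess
# traffic is refused instead of being allowed to exhaust the service.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst        # refill rate (tokens/s), bucket size
        self.tokens, self.last = float(burst), 0.0

    def allow(self, now):
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, burst=5)                      # made-up example limits
results = [bucket.allow(now=i * 0.001) for i in range(8)]   # 8 requests, 1 ms apart
print(results)   # [True, True, True, True, True, False, False, False]
```

The first five requests drain the burst allowance; the rest arrive faster than tokens refill and are dropped, keeping the device responsive for legitimate users.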

Know the difference between (C)onfidentiality, (I)ntegrity, and (A)vailability. Understand that confidentiality is proof against reading data. Understand that integrity is proof against changing data, as well as providing for data authenticity. Understand that availability countermeasures provide for uninterrupted access to data.

Data Classification

Proper data classification indicates what level of confidentiality, integrity, and availability services is required to safeguard an organization’s data. It recognizes that not all data has the same inherent value and that the divulgence of some data may even cause embarrassment to an organization. It also helps focus the development of the security policy so that more attention can be given to the data that needs the most protection. In addition, some laws require that information be classified for an organization to be compliant.

Classification Levels

Classification levels are typically different for private (non-government) and public (government) sectors.
The following are the levels of classification for data in the public sector:

  • Unclassified. Data with minimum confidentiality, integrity, or availability requirements; thus, little effort is made to secure it.
  • Sensitive but Unclassified (SBU). Data that would cause some embarrassment if revealed, but not enough to constitute a security breach.
  • Confidential. First level of classified data. This data must comply with confidentiality requirements.
  • Secret. Data that requires concerted effort to keep secure. Typically, only a limited number of people are authorized to access this data—certainly fewer than those who are authorized to access confidential data.
  • Top Secret. The greatest effort is used to secure this data and to ensure its secrecy. Only those people with a “need to know” typically have access to data classified at this level.

There are no specific industry standards or definitions for data classification in the private sector. Standards, where they exist, will vary from country to country. That aside, Cisco makes these specific recommendations for data classification in the private sector:

  • Public. Data that is often displayed for public consumption such as that found on public websites and in marketing literature.
  • Sensitive. Similar to SBU data in the public-sector model.
  • Private. Data that is important to the organization and whose safeguarding is required for legal compliance. Some effort is exerted to maintain both the secrecy (confidentiality) and accuracy (integrity) of the data.
  • Confidential. The greatest effort is taken to safeguard this data. Trade secrets, intellectual property, and personnel files are examples of data commonly classified as confidential.
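An ordered classification scheme lets a policy make clearance decisions mechanically. The sketch below encodes the private-sector levels above as an ordered scale; the function and its names are illustrative assumptions, not part of any Cisco model.

```python
# The private-sector classification levels, encoded least-to-most sensitive,
# so that "is this user cleared to read this data?" becomes a comparison.

LEVELS = ["public", "sensitive", "private", "confidential"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def cleared(user_clearance, data_label):
    """A user may read data classified at or below their clearance level."""
    return RANK[user_clearance] >= RANK[data_label]

print(cleared("private", "sensitive"))       # True
print(cleared("sensitive", "confidential"))  # False
```

The same structure works for the public-sector levels (Unclassified through Top Secret); only the `LEVELS` list changes.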

Classification Criteria

There are four basic metrics that determine at what level data should be classified and consequently what level of protection is required to safeguard that data:

  • Value. Most important and perhaps the most obvious.
  • Age. Data’s sensitivity typically decreases over time.
  • Useful Life. Data can be made obsolete by newer inventions.
  • Personal Association. Some data is particularly sensitive because of its association with an individual. Compromise of this data can lead to guilt by association.

Information Classification Roles

Another advantage of properly classifying data is that it helps define the roles of the personnel that will be working with and safeguarding the data:

  • Owner. Has ultimate responsibility for the data; usually management, and different from the custodian.
  • Custodian. Responsible for the routine safeguarding of classified data. Usually an IT resource.
  • User. These persons use the data according to the organization’s established operational procedures.

Security Controls

Now that the information classification roles have been established, the types of security controls over an organization’s data can be defined. Controls are the engine of a security policy. They define the levels of passive and active tools necessary for a custodian to enact a security policy and to meet the three objectives (remember those?!) of confidentiality, integrity, and availability. This is essential in order to provide defense in depth. Subcategories or “types” of controls are investigated a little later in this section.

Controls can be divided into three broad categories, as follows:

  • Administrative. Mostly policies and procedures.
  • Technical. Involving network elements, hardware, software, other electronic devices, and so on.
  • Physical. Mostly mechanical.

Here’s a useful way to remember these categories of controls. If they are in place, you can “stand pat.” PAT = Physical, Administrative, Technical.

Here are some of the attributes of administrative, technical, and physical controls.

Administrative Controls

The following are attributes of administrative controls:

  • Security awareness training
  • Security policies and standards
  • Security audits and tests
  • Good hiring practices
  • Background checks of employees and contractors

Technical Controls

IT staff usually think of network security as a technical solution because that is their natural inclination. That said, the devices and systems in this category, while important, should not be the sole component of an effective Information Security (INFOSEC) program. Here is a list of some common technologies, with examples, that fit into the category of technical controls:

  • Network devices. Firewalls, IPSs, VPNs, Routers with ACLs.
  • Authentication systems. TACACS+, RADIUS, OTP.
  • Security devices. Smart cards, Biometrics, NAC systems.
  • Logical access control mechanisms. Virtual LANs (VLANs), Virtual Storage Area Networks (VSANs).

The focus of this Exam Cram is largely a technical one because this is the primary focus of the Cisco course material and therefore also the exam. It is important, however, to note that technical controls should only be implemented as part of a broader security policy.

Physical Controls

If the purpose of your security policy is to build a castle around your data with technical controls and manage it with administrative controls, how effective will it be if you leave the drawbridge down, or forget to lock the front gate or at least post sentinels at it? This is where physical controls come in. Physical controls consist of the following:

  • Monitoring Equipment. Intruder detection systems.
  • Physical Security Devices. Locks, safes, equipment racks.
  • Environmental Controls. Uninterruptible Power Supplies (UPSs), fire suppression systems, positive air flow systems.
  • Security Guards. Human, canine.

Types of Controls

A control “type” is a further subdivision of a control “category” (refer to the next Exam Alert):

  • Preventative. Controls that prevent access.
  • Deterrent. Controls that deter access.
  • Detective. Controls that detect access.

It is important to note that although the three broadest categories are administrative, technical, and physical, these can be further subdivided by type. The hierarchy is Category of Control -> Type of Control. For example, an IPS would be an example of a Technical -> Preventative system, whereas an IDS would be an example of a Technical -> Detective system.
Remember this definition: A security control is any mechanism that you put in place to reduce the risk of compromise of any of the three objectives: confidentiality, integrity, and availability.
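The Category -> Type hierarchy can be captured in a simple data structure. The IPS and IDS pairings below come directly from the text; the other two assignments are plausible examples added for illustration.

```python
# Security controls modeled as (category, type) pairs, reflecting the
# hierarchy Category of Control -> Type of Control.

CONTROLS = {
    "IPS":            ("Technical",      "Preventative"),  # from the text
    "IDS":            ("Technical",      "Detective"),     # from the text
    "security guard": ("Physical",       "Deterrent"),     # assumed example
    "security audit": ("Administrative", "Detective"),     # assumed example
}

def controls_of_type(control_type):
    """List all controls of a given type, regardless of category."""
    return sorted(name for name, (_, t) in CONTROLS.items() if t == control_type)

print(controls_of_type("Detective"))   # ['IDS', 'security audit']
```

Grouping controls this way makes gaps visible: if a query for Detective controls comes back empty, you have no way of knowing a breach occurred.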

Now that you have built comprehensive security controls into your network design, what do you do when a security breach occurs? What internal procedures do you follow? Who do you notify? How do you contain the damage? What steps do you take to document the breach? How do you recover compromised data? So many questions! Adding to this complexity is the whole quagmire of the law, law enforcement agencies, and the question of legal and ethical responsibilities in reporting a breach, as well as whether you may be somehow responsible for the breach because of bad network design and a lack of due diligence. Let’s look at answering these questions in two different contexts:

  • Incident response
  • Laws and ethics

Incident Response

So it’s happened. Someone has hacked into your network and either accessed your confidential data or denied access to your network by authorized users. Assuming that you have implemented Technical -> Detective controls, and you have evidence that a breach has, in fact, occurred, you must decide how to move forward and use the evidence gathered to improve your existing network and/or prosecute the hacker. Let’s look at some of the complex issues involved in prosecuting computer crimes.

You shouldn’t decide what response you will take at the moment that the breach has been detected. You should plan an incident response as part of a comprehensive Network Security Policy. This is discussed a little later in this chapter.

Computer Crime Investigations

For successful prosecution of computer crimes, law enforcement investigators must prove three things: motive, opportunity, and means (MOM). Anyone who enjoys watching crime shows on television will recognize these:

  • Motive. Did the individuals have something to gain from committing the crime?
  • Opportunity. Were the individuals available to commit the crime?
  • Means. Did the individuals have the ability to commit the crime?

You will often see the term “chain of custody” in discussions about incident response. Chain of custody means that you can prove that from the time that the incident occurred, the copies you made of your system (see below) never left your control and were never changed while under your control and before they were presented as evidence to the investigating agency. Lawyers might question the completeness of this definition, but it is sufficient for this discussion.

Although it is advisable to immediately quarantine a breached system from the network, basic rules must be followed in order to collect evidence and to preserve its integrity:

  • Make a copy of the system. A complete copy of the system, both persistent and non-persistent storage, should be made. This means that the contents of RAM should be dumped to a file and multiple images should be made of the hard drive(s), flash drive(s), and so on.
  • Photograph the system. Photograph the system before it is moved or disconnected.
  • Handle evidence carefully. The chain of custody must be preserved.
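The first and third rules above can be supported by a simple practice: record a cryptographic hash of every evidence image at acquisition time, so you can later demonstrate that the copy never changed while in your custody. A minimal sketch follows; the "image" here is just a stand-in byte string written to a temporary file.

```python
import hashlib
import os
import tempfile

# Chain-of-custody sketch: hash an evidence image at acquisition, then
# re-verify the same digest before presenting the image as evidence.

def sha256_of_file(path, chunk_size=65536):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):   # hash large images incrementally
            digest.update(chunk)
    return digest.hexdigest()

# Simulate acquiring a disk image.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"raw disk image contents")
    image_path = f.name

acquisition_hash = sha256_of_file(image_path)   # record this in the custody log

# Any later verification must reproduce the same digest.
print(sha256_of_file(image_path) == acquisition_hash)   # True: image unaltered
os.remove(image_path)
```

In practice, the acquisition hash is written into the custody log alongside the who/when/where details, and every transfer of the media is re-verified against it.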

Laws and Ethics

As if computer crime isn’t complicated enough, a security expert also needs to deal with the jurisdictional, procedural, and legal issues within the framework of the law of the land.

Types of Laws

There are three types of law found in most countries:

  • Criminal. Concerned with crimes. Penalties usually involve possible fines (paid to the court) and/or imprisonment of the offender.
  • Civil (also called “tort”). Concerned with righting wrongs that do not involve crimes or criminal intent. Penalties are typically monetary and paid to the party who wins the lawsuit.
  • Administrative. Concerns government agencies in the course of enforcing regulations. Monetary awards are divided between the government agency and the victim (if any) of the contravened regulation.

Although these categories are common for most countries, some governments do not follow or even recognize them. Further complicating this is that computer crimes often cross international boundaries, meaning that jurisdiction must be established before the crime can be prosecuted.


Ethics

Sometimes we are motivated to do something not because we will be punished if we don’t, but because we know it’s the right thing to do. This is the realm of ethics. Codes of ethics are as follows:

  • Moral principles that constitute a higher standard (or “code”) than the law.
  • Guides for the conduct of individuals or groups.
  • Supported by a number of organizations in the INFOSEC field:
      • ISC2 (International Information Systems Security Certification Consortium, Inc.) Code of Ethics
      • Computer Ethics Institute
      • IAB (Internet Activities Board)
      • GASSP (Generally Accepted System Security Principles)

A good example of why codes of ethics are an important INFOSEC principle is the subject of entrapment. Entrapment is the process of luring someone into committing an illegal act that they might not otherwise commit were the opportunity not there. They might have motive. They might have means. You have provided them opportunity. An example of this might be a “Honey Pot” consisting of a deliberately easy-to-compromise system. You may have deployed this system to see what bees are interested in your honey and as an early warning system for penetration of your network. Private use of the data so collected may be legitimate from a security control (Technical -> Detective) perspective, but it may contravene legal, regulatory, and ethical standards if used for prosecution. Seek legal and ethical advice before deploying such a system as part of your network security architecture.


Liability

Organizations are responsible for the proper protection of their systems against compromise. If a loss of service occurs due to a security breach, and if it is discovered that the organization did not have adequate security controls in place, that organization might be held liable for damages. Organizations are required to practice the following:

  • Due Diligence. Concerns itself with the implementation of adequate security controls (administrative, technical, and physical) and establishing best practices for ongoing risk assessment and vulnerability testing.
  • Due Care. Operating and maintaining security controls that have been implemented through due diligence.

Security practitioners are very fond of using the terms “due care” and “due diligence” when describing exposure to liability. Cisco’s definitions are listed previously, and you need to know them for the exam, but they still look very similar, don’t they? Think of due diligence as being exercised in the planning and overall design of a network security architecture. This includes all the security controls (discussed in a previous section) put in place to meet expected threats. It is relatively static. Due care, on the other hand, is more dynamic and involves the day-to-day operating, maintaining, and tweaking of the security architecture. Remember the old axiom, “Security is a process, not a product.” Due care is that process.

Legal and Government Policy Issues

Here are some examples of U.S. government regulations that have been introduced to enforce network and system security and to raise awareness of privacy and (more recently) INFOSEC issues:

  • Gramm-Leach-Bliley Act (GLBA) of 1999. Enacted to allow banks, securities firms, and insurance companies to merge and share information with one another.
  • Health Insurance Portability and Accountability Act (HIPAA) of 1996. Requires national standards for the confidentiality of electronic patient records.
  • Sarbanes-Oxley (SOX) Act of 2002. Law to ensure transparency of corporations’ accounting and reporting practices.
  • Security and Freedom Through Encryption Act (SAFE) of 1997. Entrenches the rights of U.S. citizens to any kind of encryption of data without the requirement of a key escrow.
  • Computer Fraud and Abuse Act. Last amended in 2001 by the USA PATRIOT Act (Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism). Intention of this act is to reduce hacking by defining specific penalties when damages result from a compromised system.
  • Privacy Act of 1974. Privacy of individuals is to be respected unless a written release is obtained.
  • Federal Information Security Management Act (FISMA) of 2002. Intended to strengthen IT security in the U.S. federal government by requiring yearly audits.
  • Economic Espionage Act of 1996. Enacted to criminalize the misuse of trade secrets.

Be able to recognize these pieces of legislation on the exam.
