Exploring PKI and Asymmetric Encryption
Understanding Asymmetric Algorithms
Asymmetric algorithms support two of the primary objectives of any form of security: confidentiality and authentication. To meet these objectives, these algorithms are based on mathematical formulas that are far more complex, and therefore slower to compute, than those of symmetric algorithms. Because of the greater security this complexity provides, asymmetric algorithms are quite often used as key exchange protocols for symmetric algorithms, which have no inherent key exchange mechanism of their own. This section explores asymmetric algorithms and their usage.
Exploring Asymmetric Encryption Algorithms
Asymmetric algorithms employ a two-key technology: a public key and a private key. Often this is simply called public-key encryption. In this key pair, the “public” key may be distributed freely, whereas the “private” key must be closely guarded. If it is compromised, the system as a whole fails. In fact, calling this just public-key encryption oversimplifies this process, because both keys are required, with the complementary key being used to provide decryption. Figure 14-1 shows the use of asymmetric encryption algorithms.
With public-key encryption, the public key is used to encrypt the data. After it is encrypted, only the private key can decrypt the data. The opposite is also true. If data is encrypted by the private key, the public key may be used to decrypt the data.
A number of public-key encryption algorithms exist. Although each algorithm differs, they all share a common trait in that the mathematics behind them is quite complicated. Here are some of the most popular algorithms:
- RSA
- Digital Signature Algorithm (DSA)
- Diffie-Hellman (DH)
- ElGamal
- Elliptic Curve Cryptography (ECC)
The design of asymmetric algorithms is such that the key used for encryption is substantially different from the key used for decryption. This is done so that an attacker cannot, in any reasonable amount of time, calculate the decryption key from the encryption key, and vice versa. These keys come in varying lengths, but the general range for a key built using asymmetric algorithms is from 512 to 4096 bits. As another security feature, the key lengths for asymmetric algorithm keys cannot be directly compared to symmetric algorithm key lengths. This is because these two forms of algorithms differ greatly in the structure of their design.
As mentioned, a number of asymmetric cryptographic algorithms exist, but the most widely known and used are RSA, ElGamal, and elliptic curve algorithms. It is generally true, with regard to key length, that an RSA encryption key of 2048 bits is roughly equivalent to a 128-bit key of RC4 in terms of its ability to resist brute-force attacks.
Using Public-Key Encryption to Achieve Confidentiality
To achieve confidentiality, the encryption process begins with the public key. Using the public key to encrypt data ensures that only the private key can decrypt the protected data. Confidentiality is assured in this manner because only one host has the private key necessary for decryption. This process hinges on the integrity of the private key. Should the private key become compromised, this guarantee of confidentiality is lost, and another key pair must be generated to replace the compromised key. It is not possible to re-create the compromised key, so both keys in the pair are replaced.
Let’s examine an example in which a public key pair is used, with the goal being to provide confidentiality. This exchange is shown in Figure 14-2 and is detailed in the following steps:
Step 1 Addison gains access to Matthew’s public key.
Step 2 Addison uses Matthew’s public key to encrypt a message to be sent to Matthew. This process often uses a symmetric key, with an agreed-upon algorithm.
Step 3 Addison sends the encrypted message to Matthew.
Step 4 Matthew uses his private key to decrypt the message and reveal the contents.
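The four steps above can be sketched with a toy RSA key pair. The primes, exponents, and message below are illustrative assumptions only; they are far too small for real use, where moduli of 2048 bits or more are standard:

```python
# Toy RSA key generation (numbers are for illustration only)
p, q = 61, 53               # small primes; real RSA uses much larger ones
n = p * q                   # public modulus (3233)
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent (2753)

message = 65
# Step 2: Addison encrypts with Matthew's public key (e, n)
ciphertext = pow(message, e, n)
# Step 4: only Matthew's private key (d, n) can recover the message
recovered = pow(ciphertext, d, n)
assert recovered == message
```

Because only the holder of the private exponent d can decrypt, the public values (e, n) may be distributed freely without endangering confidentiality.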
Providing Authentication with a Public Key
Authentication with an asymmetric algorithm is achieved when the encryption process is begun using the private key rather than the public key, as you saw when the goal was confidentiality. In this instance, the private key is used to encrypt the data, and the public key must then be used to decrypt the data. The same rules apply, however, in that only one host has the private key. In this case, this means that only that host can encrypt the message. In addition to providing security, this authenticates the sender, because only that host has the private key.
The public key is just that—public—and generally no attempt is made to preserve the secrecy of this key. That means that any number of hosts may have this public key and, therefore, any or all could decrypt the message. After a message has been successfully decrypted using the host’s public key, that host trusts that the message was encrypted by the sender’s private key. This serves to verify the identity of the sender, providing a form of authentication.
Much as we discussed with the public key, should the private key become compromised, another key pair needs to be generated to replace the compromised key. Let’s examine an example in which the private key is used to provide authentication as two individuals exchange data. Figure 14-3 illustrates the following steps:
Step 1 Addison encrypts the message to be sent with her private key using an agreed-upon algorithm.
Step 2 Addison sends the encrypted message to Matthew.
Step 3 Matthew acquires Addison’s public key.
Step 4 Matthew uses Addison’s public key to decrypt the message and reveal its contents. This also serves to authenticate that the message is indeed from Addison, because Addison is the only person with the corresponding private key.
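Using the same toy RSA pair as before (an illustrative assumption, not real-world key sizes), the authentication flow simply reverses which key encrypts:

```python
# Toy RSA pair: n = 61 * 53, with e public and d private (illustration only)
n, e, d = 3233, 17, 2753

message = 123
# Step 1: Addison "encrypts" (signs) with her PRIVATE key
signed = pow(message, d, n)
# Step 4: Matthew recovers the message with Addison's PUBLIC key
revealed = pow(signed, e, n)
# Successful decryption authenticates Addison, the sole holder of d
assert revealed == message
```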
Understanding the Features of the RSA Algorithm
RSA, invented by Ron Rivest, Adi Shamir, and Len Adleman in 1977, is one of the most common asymmetric algorithms in use today. This public-key algorithm was patented until September 2000, when the patent expired, making the algorithm part of the public domain. RSA has been widely embraced over the years, in part because of its ease of implementation and flexibility. This flexibility is because of RSA’s use of a variable key length. This allows implementers to trade speed for the security of the algorithm if they so choose.
RSA has stood the test of time, having withstood more than three decades of extensive cryptanalysis. Although the security of RSA has been neither proven nor disproven, its longevity suggests, if nothing more, a strong level of confidence in the RSA algorithm, whose keys are generally 512 to 2048 bits long. The security provided by RSA is based on the difficulty of factoring very large numbers into their multiplicative factors. Should researchers or attackers derive an efficient method of factoring these large numbers, RSA’s effectiveness would cease to exist.
The RSA algorithm is based on the premise that each entity has two keys, a public key and a private key. As discussed earlier, the public key can readily be published and given away. The private key, on the other hand, must be kept secret and should be available only to the owner of the key pair. Determining the makeup of either key based on the other key is not computationally feasible. Taken together, what one key can encrypt, the other can decrypt. The nature of these RSA keys is to be long-term. Generally they are either changed or renewed after several months of usage. Some even stay in use for years.
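The dependence on factoring can be demonstrated with a deliberately tiny modulus. Trial division recovers the private exponent instantly here, which is exactly why real moduli must be far larger; the numbers are illustrative assumptions:

```python
import math

def break_toy_rsa(n, e):
    """Recover the private exponent by factoring n with trial division.
    Feasible only because n is tiny; a 2048-bit modulus defeats this."""
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            q = n // p
            return pow(e, -1, (p - 1) * (q - 1))

# n = 61 * 53 factors immediately, exposing the private key
assert break_toy_rsa(3233, 17) == 2753
```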
Working with RSA Digital Signatures
Modern digital signatures rely on more than public-key operations. They actually combine a hash function with a public-key algorithm to create a more secure signature, as shown in Figure 14-4.
Let’s examine the steps involved in the signature process:
Step 1 To uniquely identify the document and its contents, the signer makes a hash or fingerprint of the document.
Step 2 The signer’s private key is used to encrypt the hash.
Step 3 The signature (the encrypted hash) is appended to the document.
Continuing the process, the following steps outline verification:
Step 4 The verifier obtains the signer’s public key.
Step 5 The signer’s public key is used to decrypt the signature. This step reveals the signer’s assumed hash value.
Step 6 The verifier makes a hash of the received document, without its signature.
This is compared to the decrypted signature hash. If the two hashes match, the document is thought to be authentic. In other words, it was signed by the assumed signer, and it has not been altered since it was signed.
In the exchange just depicted, you can see how both the authenticity and integrity of the message are ensured, even though the actual text is public. To ensure that the message remains private and that it has not been altered, both encryption and digital signatures are required.
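Steps 1 through 6 can be traced end to end with a hash function and a toy RSA pair. Real implementations pad the full digest (with a scheme such as PKCS #1 v1.5 or PSS) rather than reducing it mod n as this illustration does, and the key sizes here are assumptions chosen only for readability:

```python
import hashlib

n, e, d = 3233, 17, 2753   # toy RSA key pair (illustrative sizes only)

document = b"Pay Matthew $100"

# Steps 1-3: hash the document, then encrypt the hash with the private key
digest = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
signature = pow(digest, d, n)

# Steps 4-6: the verifier decrypts the signature with the public key
# and compares it to a freshly computed hash of the received document
assumed_digest = pow(signature, e, n)
fresh_digest = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
assert assumed_digest == fresh_digest   # document is authentic and unaltered
```

If even one byte of the document were altered in transit, the fresh hash would differ from the decrypted signature hash and verification would fail.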
Guidelines for Working with RSA
Although RSA is widely accepted and has a long history, it is certainly not the fastest algorithm. When compared to Data Encryption Standard (DES) in software, it is approximately 100 times slower. When compared to DES in a hardware implementation, it is nearly 1000 times slower. Because of these speed issues, RSA generally is used to protect only small amounts of data. In fact, RSA is used for two main reasons:
- To perform encryption to ensure the confidentiality of data
- To generate digital signatures to provide authentication of data, nonrepudiation of data, or both
Examining the Features of the Diffie-Hellman Key Exchange Algorithm
The Diffie-Hellman (DH) Key Exchange Algorithm was invented by Whitfield Diffie and Martin Hellman in 1976. The Diffie-Hellman algorithm derives its strength from the difficulty of calculating the discrete logarithms of very large numbers. The functional usage of this algorithm is to provide secure key exchange over insecure channels such as the Internet. DH is also often used to provide keying material for other symmetric algorithms, such as DES, 3DES, or AES.
The DH algorithm serves as the basis for many of our modern automatic key exchange methods. It is used within the Internet Key Exchange (IKE) protocol in IP Security (IPsec) virtual private networks (VPN). In this role it provides a reliable and trusted method for key exchange over untrusted channels such as the Internet.
Before the DH exchange may begin, the two parties involved must agree on two nonsecret numbers. The first number selected is used as the generator and is termed g, for generator. The second number is called p, and it serves as the modulus. There is no need to keep these numbers secret; generally they are chosen from a table of known values. In most cases g is usually a very small number, a single integer such as 2, 3, or 4, and p is a very large prime number. After these numbers are selected, each party generates its own secret value. Finally, these numbers are used together. Based on the values of g and p, as well as the secret value of each party, each party calculates its public value. The following formula is used to compute the public value:
Y = g^x mod p
where x represents the entity’s secret value, and Y is the entity’s public value.
After these public values have been computed by both parties, they are exchanged. Then each party exponentiates the public value it received with its own secret value. This step computes a common shared secret value. When the algorithm finishes, each party has the same shared secret.
If an attacker is listening on the channel, he cannot compute the shared secret, because only g, p, YA, and YB are known. To calculate the shared secret value, at least one secret value is needed. Given the nature of this process, for an attacker to obtain the shared secret, he would have to be able to compute the discrete logarithm of the equation we discussed to recover XA or XB.
Steps of the Diffie-Hellman Key Exchange Algorithm
Let’s take a closer look at the steps involved in the DH key exchange:
Step 1 Matthew and Abby agree on generator g and modulus p.
Step 2 Matthew selects a random large integer XA and sends Abby his public value, YA, where YA = g^XA mod p.
Step 3 Abby selects a random large integer XB and sends Matthew her public value, YB, where YB = g^XB mod p.
Step 4 Matthew computes k = YB^XA mod p.
Step 5 Abby computes k’ = YA^XB mod p.
Step 6 Both k and k’ are equal to g^(XA*XB) mod p.
Now that Matthew and Abby have gone through this process, they have a shared secret (k = k’). Even if an attacker has been able to listen in on an untrusted channel, there is no way he could compute the secret from the captured information. As mentioned earlier, computing the discrete logarithm of YA or YB is computationally infeasible.
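The six steps above can be traced with deliberately small values of g and p. Real deployments use primes of 2048 bits or more; every number here is an illustrative assumption:

```python
g, p = 5, 23                 # public generator and modulus (toy sizes)

xa = 6                       # Matthew's secret value XA
ya = pow(g, xa, p)           # Matthew's public value, YA = g^XA mod p

xb = 15                      # Abby's secret value XB
yb = pow(g, xb, p)           # Abby's public value, YB = g^XB mod p

k = pow(yb, xa, p)           # Matthew computes YB^XA mod p
k_prime = pow(ya, xb, p)     # Abby computes YA^XB mod p
assert k == k_prime          # both arrive at g^(XA*XB) mod p
```

Only ya and yb ever cross the channel; an eavesdropper who captures them still lacks either secret exponent needed to derive k.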
Working with a PKI
By implementing a Public Key Infrastructure (PKI), organizations can provide an underlying basis for a number of security services, such as encryption, authentication, and nonrepudiation. You are likely familiar with encryption and authentication, but nonrepudiation may be somewhat unfamiliar. Nonrepudiation blocks the false denial of a particular action.
PKI is often a central authentication source for corporate VPNs, because it provides a scalable solution to meet the growing needs of today’s organizations.
A number of terms are specific to the PKI structure. The following sections explore these and examine how a PKI works.
Examining the Principles Behind a PKI
To understand all that a PKI has to offer, first you must understand its components. A PKI provides organizations with the framework needed to support large-scale public-key-based technologies. Taken as a whole, a PKI is a set of technical, organizational, and legal components that combine to establish a system that enables large-scale use of public-key cryptography. Via a PKI, an organization can provide authenticity, confidentiality, integrity, and nonrepudiation services. This section examines the principles of implementing a PKI.
Understanding PKI Terminology
The following are two very important PKI terms:
- Certificate authority (CA): A trusted third party responsible for signing the public keys of entities in a PKI-based system.
- Certificate: A document issued and signed by the CA that binds the name of the entity and its public key.
In a PKI, the certificate issued to a user is always signed by a CA. Each CA also has a certificate of its own. This certificate, called a CA certificate or a root certificate, contains its public key and is signed by the CA itself. This is why it is also called a self-signed CA certificate.
Components of a PKI
Creating a large PKI involves more than simply the CA and users who obtain certificates. It also involves substantial organizational and legal work. When we consider this in its entirety, we see that five main areas constitute the PKI:
- CAs to provide management of keys
- PKI users (people, devices, servers)
- Storage and protocols
- Supporting organizational framework (practices) and user authentication through Local Registration Authorities (LRA)
- Supporting legal framework

A number of vendors provide effective CA servers. These act as a managed service or may be an end-user product; this varies by vendor. The primary providers are as follows:
- Microsoft
- Cybertrust
- VeriSign
- Entrust Technologies
- RSA
- Novell
Classes of Certificates
CAs can issue a number of different classes of certificates. These classes vary, depending on how trusted a certificate is. For instance, an outsourcing vendor such as VeriSign or RSA might run a single CA, issuing certificates of different classes. The customers who obtain these certificates can then choose the class they need based on their desired level of trust.
Certificate classes are defined by a number, 0 through 4. The higher the number, the more trusted the certificate. So what determines the “trust” in a given certificate?
Trust in the certificate generally is determined by how rigorous the verification process was with regard to the holder’s identity at the time the certificate was issued. Let’s consider an example.
If an organization wanted a class 0 certificate, it might be issued without any checks. This form of certificate might be used for testing purposes internally. A class 1 certificate, in contrast, would likely require an e-mail reply from the holder to confirm her wish to enroll. This is still a very weak form of authentication for the user, but again, a class 1 certificate is not highly trusted. If an organization requires a higher level of trust for its certificate, it may go through the process to obtain a class 3 or 4 certificate. Before these certificates are issued, the future holder is required to prove her identity. The applicant must authenticate her public key by appearing in person, with a minimum of two official ID documents. As you can see, the various classes of certificates range greatly in their degree of trust to meet an organization’s needs.
Examining the PKI Topology of a Single Root CA
In addition to offering a number of different certificates with varying levels of trust, PKIs form different topologies of trust. Here we will examine the most simple of these models, a single CA (see Figure 14-5).
This topology is often called a root CA. This single CA is responsible for issuing all the certificates to the end users. The initial attraction of this PKI topology is its simplicity; however, it also has a number of pitfalls:
- It is difficult to scale this topology to a large environment.
- This topology needs a strictly centralized administration.
- There is a critical vulnerability in using a single signing private key. If it is stolen, the whole PKI falls apart, because the CA can no longer be trusted as a unique signer.
This form of topology may be used to support VPNs. In some cases, this topology may be used when there is not a greater need beyond the VPN for the PKI.
Examining the PKI Topology of Hierarchical CAs
For organizations that want to avoid the pitfalls of the single-root CA, more complex CA structures can be devised and implemented. This section examines the hierarchical CA structure and its application, as shown in Figure 14-6.
The hierarchical CA structure is a more robust and complicated implementation of the PKI. In this topology, CAs may issue certificates to both end users and subordinate CAs. These subordinate CAs then may issue their certificates to end users, other CAs, or both. This topology creates a tree-like structure of CAs and end users in which each higher-level CA may issue certificates to any lower-level CAs and end users. This structure gets around the issues that we saw with the single-root CA.
For many organizations that implement this topology, the main benefit they achieve is a significant increase in scalability and manageability. In this topology, trust decisions may be hierarchically distributed to smaller branches lower in the tree. This distribution fits well with the structure of many larger enterprise organizations. Let’s take a look at an example.
A large enterprise organization may choose to have a root CA in its headquarters that is responsible for issuing certificates to level-2 CAs both locally and in regional locations. It then falls on these level-2 CAs to issue all certificates to the end users.
This solution also addresses security, because the root-signing key, held by the root CA, is seldom used after the subordinate CA certificates are issued. This means that in this topology its exposure is limited and, therefore, more readily trusted. This structure also addresses the threat of having a key stolen from a subordinate CA. Should this occur, only that branch of the PKI is rendered untrusted. All other users simply no longer trust that particular CA.
Even though this hierarchical topology has great benefits, some matters must be considered. Given the complex nature of a structure with numerous branches, one issue can be finding the certification path for a certificate. Finding this path allows you to understand the signing process. If a great number of CAs exist between the root CA and the end user, determining and verifying this certification path can be quite difficult.
Examining the PKI Topology of Cross-Certified CAs
Cross-certifying represents another form of hierarchical PKI topology. This structure has a number of flat, single-root CAs. Each of these CAs establishes a trust relationship horizontally by cross-certifying its own CA certificates, as shown in Figure 14-7.
Understanding PKI Usage and Keys
Depending on the PKI’s structure and implementation, it may offer or even require the use of two key pairs for each entity involved:
- The first public and private key pair is used only for encryption. In this combination, the public key encrypts, and the private key decrypts.
- The second key pair is intended exclusively for signing. In this case, the private key signs, and the public key is used to verify the signature.
These key pair combinations go by different names. You might hear them called “special keys” or “usage keys.” In either case, they serve the same purpose. These key pairs may differ in key length. They may also differ in the choice of the public-key algorithm they employ.
If the PKI that is employed requires two key pairs per entity, the user has two certificates as well. These certificates contain the following two components:
- An encryption certificate containing the user’s public key, which encrypts the data
- A signature certificate containing the user’s public key, which verifies the user’s digital signature

Usage keys may be employed in a number of situations. Let’s examine a few of these situations so that we may look more closely at their application:
- Where encryption is used more frequently than signing, a certain public and private key pair is more exposed because of this frequent usage. Its lifetime is shortened, and it changes more frequently. The separate signing private and public key pair could have a much longer lifetime.
- If key recovery is desired, such as when a copy of a user’s private key is kept in a central repository for various backup reasons, usage keys allow for backing up only the private key of the encrypting pair. In this instance, the signing private key remains with the user, allowing for true nonrepudiation.
- When different levels of encryption and digital signing are required because of legal, export, or performance issues, usage keys allow you to assign different key lengths to the pairs.
Working with PKI Server Offload
As we have discussed, the CA plays a central role in the PKI with its private key. Its security is a critical element in making the PKI work successfully. To make the operation of the CA more secure, a great many of the key management tasks may be effectively offloaded to registration authorities (RA). These RAs are PKI servers that are responsible for performing management tasks on behalf of the CA. Having an RA in place allows the CA to focus on the signing process.
Having an RA in place allows for the offloading of three main tasks:
- Authentication of users when they enroll with the PKI
- Key generation for users who cannot generate their own keys
- Distribution of certificates after enrollment
Understanding PKI Standards
As discussed in an earlier section, the market has a number of PKI vendors, making standardization and interoperability an issue when interconnecting PKIs. Some progress has been made in this area by the X.509 standards and the Internet Engineering Task Force (IETF) Public Key Infrastructure X.509 (PKIX) workgroup. Together they have worked toward publishing a common set of standards to be used for PKI protocols and data formats.
In addition to striving toward these standards, it is important to understand the supporting services used by a PKI, such as Lightweight Directory Access Protocol (LDAP)-accessible X.500 directories.
One reason that there is such concern about interoperability between a PKI and its supporting services is that many vendors have proposed and implemented proprietary solutions. Currently, interoperability is in the most basic of states, even though it has been ten years since the development of PKI software solutions.
As mentioned, the IETF is one organization working toward standardization in this area. It has formed a working group dedicated to promoting and standardizing PKI in the Internet. This group has created and published a draft set of standards that detail common data formats and PKI-related protocols to be used in a network. You may review this draft at http://www.ietf.org/html.charters/pkix-charter.html.
Understanding X.509v3
X.509 is a well-known industry standard that has been incorporated to define basic PKI formats. Areas that are based on this include both the certificate and certificate revocation list (CRL) format. Using this common standard in this manner underlies the basic interoperability we see in the majority of PKIs. Of course, PKI is not the only technology to take advantage of X.509. It is a widely used standard for many Internet applications, including Secure Sockets Layer (SSL) and IPsec.
The format of a digital certificate is defined by X.509 version 3 (X.509v3). This format is currently in use throughout the Internet. Table 14-2 lists some of the ways in which it is currently being used.
As part of the authentication procedure, when contacting the PKI, the user first securely obtains a copy of the CA’s public key. This public key is used to verify all the certificates issued by the CA and is therefore central to the proper functioning of the PKI.
Recall that certificates contain the binding between the names and public keys of entities to which they are issued. Generally, these are published in a centralized directory. This is done so that other PKI users can easily access them.
The CA also can distribute its public key in the form of a certificate issued by the CA itself. This certificate is called a self-signed certificate, because in this case the signer of the certificate and the holder of the certificate are the same entity. These self-signed certificates are issued only by a root CA.
Understanding Public Key Cryptography Standards (PKCS)
Public Key Cryptography Standards (PKCS) is used to provide basic interoperability for applications that employ public-key cryptography. Taken together, PKCS defines a set of low-level standardized formats for the secure exchange of arbitrary data. For instance, PKCS defines a standard format for an encrypted piece of data, a signed piece of data, and so on.
Now that you have a sense of these standards and the various areas that they address, let’s examine a couple of them in greater detail:
- PKCS #7: The Cryptographic Message Syntax Standard defines the syntax of several kinds of cryptographically protected messages. This includes defining the standard for encrypted messages and messages with digital signatures. One place that we see PKCS #7 extensively is S/MIME. PKCS #7 is the basis for the S/MIME secure e-mail specification and as such has been widely implemented. PKCS #7 is not limited to working with mail messages. It also has become a basis for message security in a number of diverse systems. PKCS #7 provides message security in the Secure Electronic Transaction (SET) specifications for bank card payments, the World Wide Web Consortium (W3C) digital signature initiative, and PKCS #12, the Personal Information Exchange Syntax Standard.
- PKCS #10: The Certification Request Syntax Standard defines the syntax for how certification requests are made in a PKI. Certification requests are made up of various parts: the distinguished name (DN), a public key, and optionally a set of attributes. If included, these attributes are signed by the entity requesting certification. All certification requests are sent to a CA. The CA must accept the request and verify the authenticity of the information provided by the applicant. After this occurs, the CA transforms each request into an X.509 public-key certificate. When the CA returns the newly signed certificate, it is presented in a specific form; a PKCS #7 message is one possibility.
If this optional set of attributes is provided, other application-specific information about a given entity may be added to enhance security and flexibility. For example, if a “challenge password” is added, the entity can later request certificate revocation. In addition to these electronic means of certificate requests, CAs may require nonelectronic forms of request and may return nonelectronic replies.
Understanding Simple Certificate Enrollment Protocol (SCEP)
As we have discussed, public-key technology is widely used today and is incorporated in various standards-based security protocols. This increasing emphasis on public-key technology makes it all the more important that there be a certificate management protocol that PKI clients and CA servers can rely on to support all certificate life-cycle operations. Simple Certificate Enrollment Protocol (SCEP), illustrated in Figure 14-8, addresses the need for a certificate management protocol to handle certificate enrollment and revocation, as well as certificate and CRL access. The goal of SCEP is to provide a scalable means to support the secure issuance of certificates, while using existing technology wherever possible. One current use of SCEP is in IPsec VPNs, where it is used by IPsec VPN endpoints for certificate enrollment. This represents a significant improvement over manual, file-based enrollment.
Let’s examine the enrollment transaction in greater detail:
- An end entity creates a certificate request using PKCS #10.
- The request is enveloped using PKCS #7 and is sent to the CA or RA based on the topology in place.
- When the CA or RA receives the request, either it is automatically approved and the certificate is sent back, or the end entity has to wait until the operator can manually authenticate the identity of the requesting end entity.
Exploring the Role of Certificate Authorities and Registration Authorities in a PKI
One central tenet behind the use of a PKI and trusted third-party protocols is that all participating parties agree to accept the word of a neutral third party. Should two parties need to validate each other, they turn to this trusted third party, which in turn provides in-depth authentication of the parties involved. This is done rather than having each party perform its own authentication.
These entities rely on the third party (the CA) to conduct an in-depth investigation of each entity before any credentials are issued. Furthermore, these entities rely on this trusted third party to issue credentials that are extremely difficult to forge. With these “assumptions” in place, from this point forward, all individuals who trust the third party agree to readily accept the credentials that it issues. If any of these assumptions are incorrect, the validity of this process is called into question, and the security of all entities is at risk.
Because of networking constraints, processor overhead, and general practicality, it is not reasonable for all parties in a large organization to continuously exchange identification documents for all communications. If you think about it, this is not that different from how your own organization may approach measures of physical security.
For example, you may work for an organization that issues each employee an ID badge.
Before someone is given an employee badge, various measures probably are taken in conjunction with general hiring procedures. Perhaps a background check is run, or documentation is collected, such as a copy of the employee’s driver’s license and birth certificate, to ensure that the employee is who he claims to be. As soon as the employee passes this authentication process, he receives his employee badge. Of course, in addition to these authentication steps, the badge itself is made in such a way that it would be difficult to duplicate or forge. This adds another layer of trust. After the badge is issued, it is accepted as proof of the individual’s identity and authority to work within your organization.
You might be wondering what would provide this validation if you did not have a process such as this in place. Let’s assume that ten individuals within the organization need to validate each other, and there is no trusted third-party proof of identity, such as company ID badges. What might be involved? If no trusted third party is in place, each of the ten individuals must separately validate the other nine, resulting in 10 × 9 = 90 separate validations before everyone has validated everyone else. If that sounds messy, consider adding just one more individual to the group. This single addition would require an additional 20 validations, because each of the original group of ten individuals would need to authenticate the new person, and then the new person would need to authenticate the original ten. As you can see, this approach does not scale well. It becomes practically impossible for organizations of considerable size.
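The scaling argument can be checked with a short calculation. Because validations run in both directions, a group of n parties needs n × (n − 1) of them, and the count grows quadratically:

```python
def validations(n: int) -> int:
    # Each of the n parties must validate every other party, and
    # validations are directional, so the total is n * (n - 1).
    return n * (n - 1)

print(validations(10))                     # 90 validations for ten parties
print(validations(11) - validations(10))   # 20 extra for one new member
print(validations(100))                    # 9900 for a modest organization
```

The jump from 90 to 9900 when the group grows from 10 to 100 members is what makes the trusted-third-party model so attractive.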
Certificate servers act as this trusted third party so that entities can provide the utmost level of trust between themselves, without the time-consuming complexities that would be involved if each individual entity needed to directly validate the others. As the example indicates, this would be impractical.
Examining Identity Management
CA-based solutions give an organization a means of identity management. This is accomplished in two primary ways:
- Through the CA’s acting as the trusted third party in PKI implementations.
- Through the use of the X.509 standard, which defines a certificate format for identifying an entity and carrying its public key. The format of the X.509 certificate and the syntax of its fields are described in Abstract Syntax Notation One (ASN.1).
The concept of a trusted third party embodied in the CA is a product of the merger of the X.509 standard with public-key encryption. The CA holds a key pair: a set of asymmetric keys consisting of a private key and a public key. The X.509 certificate is created to identify the CA, and it contains specific information for this purpose:
- The CA’s identity
- The CA’s public key
- The CA’s signature, created using its private key
- Parameters such as serial number, algorithms used, and validity
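To make these fields concrete, the following is a minimal sketch of a self-signed CA certificate. It uses textbook RSA with deliberately tiny primes so the arithmetic is visible; real CAs use keys of 2048 bits or more, and all the names and field values here are hypothetical:

```python
import hashlib
import json

# Toy key pair (textbook RSA with tiny primes -- illustration only).
p, q = 61, 53
n = p * q                       # modulus, part of the public key
e = 17                          # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, kept secret by the CA

def sign(data: bytes) -> int:
    # Hash the certificate body, then "encrypt" the hash with the private key.
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)

# The certificate body carries the fields listed above.
cert_body = {
    "issuer": "Example CA",           # the CA's identity (hypothetical name)
    "subject": "Example CA",          # self-signed: subject equals issuer
    "public_key": {"n": n, "e": e},   # the CA's public key
    "serial": 1001,                   # serial number
    "signature_algorithm": "toy-RSA-SHA256",
    "validity": {"not_before": "2008-01-01", "not_after": "2010-01-01"},
}
signature = sign(json.dumps(cert_body, sort_keys=True).encode())
certificate = {"body": cert_body, "signature": signature}
print(certificate["signature"])
```

Anyone holding the public key (n, e) can check that the signature matches the body, which is exactly what makes free distribution of the certificate workable.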
As discussed in earlier sections, the CA’s certificate is freely distributed. Therefore, it is incumbent on the recipient of the CA’s certificate to verify its authenticity out of band.
Retrieving the CA Certificate
Figure 14-9 shows the process that occurs when the CA certificate is retrieved, as described in the following list:
- Abby and Matt request the CA certificate that contains the CA public key.
- After the CA certificate is received, Abby and Matt’s systems verify the validity of the certificate. This is done using public-key cryptography.
- Abby and Matt go beyond the technical verification done by their systems by telephoning the CA administrator to verify the public key and the serial number of the certificate. This out-of-band process is a necessary step for true certainty.
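What is actually compared over the phone is typically a fingerprint: a hash of the received certificate, formatted so it can be read aloud. The following sketch (the helper name and placeholder bytes are illustrative, not a real API) shows the idea:

```python
import hashlib

def fingerprint(cert_bytes: bytes) -> str:
    # Hash the certificate and format the first 8 bytes as colon-separated
    # hex pairs -- short enough to read aloud and compare out of band.
    digest = hashlib.sha256(cert_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, 16, 2))

# Placeholder bytes standing in for the real encoded CA certificate.
received_cert = b"...CA certificate bytes..."
print(fingerprint(received_cert))
```

If the fingerprint the CA administrator reads over the phone matches the one computed locally, Abby and Matt can be confident they received the genuine CA certificate and not a substitute.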
Understanding the Certificate Enrollment Process
After the users have retrieved the CA certificate, they need to submit certificate requests to the CA. This process is shown in Figure 14-10 and described in the following steps:
- Abby and Matt’s systems forward a certificate request that includes their public key along with some identifying information. All of this information is encrypted using the CA’s public key.
- After the certificate request is received, the CA administrator telephones Abby and Matt to confirm that they submitted the request and to verify the public key.
- The CA administrator adds data to the certificate and then digitally signs it before issuing it.
When this process is complete, one of two things happens. The end user manually retrieves the certificate, or SCEP automatically retrieves it. After it is obtained, the certificate is installed on the system.
Examining Authentication Using Certificates
After the parties involved have installed certificates signed by the same CA, they may authenticate each other, as shown in Figure 14-11. This is done when the two parties exchange certificates. The CA’s part in this process is finished, so it is not involved in this exchange.
At this point, each party involved verifies the digital signature on the certificate. This is done by hashing the plain-text portion of the certificate, decrypting the digital signature using the CA’s public key, and then comparing the results.
For the certificate to be valid, the results must match when this comparison is conducted. If this is the case, the certificate is verified as being signed by a trusted third party, and the verification by the CA that each party is who it claims to be is accepted.
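The three verification steps just described can be sketched in a few lines. This is a minimal illustration using textbook RSA with toy parameters (the same tiny key sizes as earlier; real deployments use 2048-bit or larger keys and padded signature schemes), and the certificate body shown is hypothetical:

```python
import hashlib

# Toy RSA key for the CA: modulus n, public exponent e, private exponent d.
# Only the CA knows d; everyone holding the CA certificate knows (n, e).
n, e, d = 3233, 17, 2753

def sign(body: bytes) -> int:
    # Done by the CA at issuance: hash the body, then apply the private key.
    h = int.from_bytes(hashlib.sha256(body).digest(), "big") % n
    return pow(h, d, n)

def verify(body: bytes, signature: int) -> bool:
    # Step 1: hash the plain-text portion of the certificate.
    h = int.from_bytes(hashlib.sha256(body).digest(), "big") % n
    # Step 2: "decrypt" the signature using the CA's public key.
    recovered = pow(signature, e, n)
    # Step 3: the certificate is valid only if the two results match.
    return recovered == h

body = b"subject=Abby; public_key=...; serial=42"   # illustrative body
sig = sign(body)
print(verify(body, sig))              # an untampered certificate verifies
print(verify(body + b"X", sig))       # any change to the body breaks the match
```

Because only the CA’s private key could have produced a signature that the public key recovers correctly, a successful match proves the certificate was issued by the trusted third party.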
Examining Features of Digital Certificates and CAs
A number of authentication mechanisms are available to organizations. The following characteristics are unique to the use of a PKI:
- Authentication of each party involved begins with the parties each obtaining the CA’s certificate and their own certificate. To be secure, this process involves out-of-band verification. When it is complete, the presence of the CA is no longer required until one of the certificates involved expires.
- PKI systems use asymmetric keys. One key is public, and the other is private. A feature of these algorithms is that whatever one key encrypts, only the other key may decrypt, providing true nonrepudiation.
- Very long lifetimes, generally in terms of years, may be set for the certificates because of the strength of the algorithms involved.
- Key management is greatly simplified, because two users may freely exchange the certificates. The validity of the received certificates is verified using the CA’s public key. Each user has this in his or her possession, making this process easy to undertake.
Understanding the Caveats of Using a PKI
To this point we have discussed the strengths that a PKI can bring to an organization. In addition to these strengths, it is important to understand the caveats involved in implementing a PKI in the enterprise so that you can make an informed decision. Table 14-4 describes some of the caveats.
Even when certificates are employed in an IP network, public-key authentication alone is not a wholly secure solution. In these instances, you should combine your public-key authentication with another authentication mechanism to provide greater security and more authorization options.
An example of combining mechanisms might be something like working with IPsec using certificates for authentication and then combining this with Extended Authentication (XAUTH) that has one-time password hardware tokens. This combination provides a superior authentication scheme over using only certificates. Of course, some limitations exist.
One notable limitation should be mentioned. If an organization moves from pre-shared keys (PSK) to digital certificates, it can experience time-related issues with its routers. For instance, if a router boots with incorrect time information, significant issues can occur. Similarly, if a device’s internal battery dies, it may fall back to a default clock setting, such as the year 1998. If you are working with a certificate that was issued in 2008 and expires in 2010, the VPN will not be functional, and your site-to-site connection will go down. In a case like this, the issue is very hard to troubleshoot, because the key areas that normally are examined, such as the routing table, appear normal and without issue, and users on the network can still get out to the Internet. Because of this limitation, it is important to understand that the correct time setting on your devices now plays a critical role in the stability of VPN tunnels.
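The failure mode comes down to a simple comparison: a certificate is only honored inside its validity window. A minimal sketch of that check, using the example dates from the scenario above:

```python
from datetime import date

def cert_is_valid(today: date, not_before: date, not_after: date) -> bool:
    # A certificate is honored only inside its validity window, so a device
    # clock outside that window breaks authentication even though routing,
    # interfaces, and the rest of the configuration all look healthy.
    return not_before <= today <= not_after

issued, expires = date(2008, 1, 1), date(2010, 1, 1)
print(cert_is_valid(date(2009, 6, 1), issued, expires))  # True: clock is correct
print(cert_is_valid(date(1998, 6, 1), issued, expires))  # False: dead battery reset the clock
```

A device whose clock reset to 1998 rejects a perfectly good 2008 certificate as "not yet valid," which is why the tunnel drops while everything else appears healthy.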
A second issue to be aware of is that if a router fails because of something such as a random reboot, a power supply failure, or a bad interface, you cannot simply paste the former router’s configuration into its replacement and get it back online. The new device must first generate a certificate request and enroll with the CA. After the CA approves the request from the new router, you must install the certificates to bring the VPN tunnels back up. This is a little different from working with PSKs, where all the administrator must do is copy over the configuration, and the tunnel is up again.
Understanding How Certificates Are Employed
Certificates first found their use in providing strong authentication for applications. When employed in this manner, each application may implement the actual authentication process differently, but they all use a similar type of certificate in the X.509 format.
Secure Sockets Layer (SSL) is one of the most widely used and best-known means of certificate-based authentication. With the emergence of e-commerce, SSL’s ability to negotiate keys that encrypt the SSL session is readily used to secure everything from online purchases to online banking. Among applications that rely on SSL, one of the most widely used is HTTPS. With the availability of SSL, other applications that previously employed lesser forms of authentication with no encryption were modified to use SSL. Among these are such popular applications as Simple Mail Transfer Protocol (SMTP), LDAP, and Post Office Protocol version 3 (POP3).
One of the most important extensions to secure communications is Multipurpose Internet Mail Extensions (MIME), which allows arbitrary data to be included in an e-mail. A further extension, Secure MIME (S/MIME), focuses on providing greater security to entire mail messages or parts of messages. With S/MIME you can authenticate and encrypt e-mail messages.
Certificates may also be used at either the network or application layer by network devices. For instance, Cisco routers, Cisco VPN concentrators, and Cisco PIX firewalls can use certificates to authenticate IPsec peers.
End devices and devices connecting to the LAN may be authenticated by Cisco switches. This authentication process employs 802.1X between the adjacent devices and may be proxied to a central access control server (ACS) via Extensible Authentication Protocol with TLS (EAP-TLS). In addition, Cisco routers can now use SSL to establish secure TN3270 sessions rather than providing Telnet 3270 support that lacks encryption and strong authentication.
Figure 14-12 shows certificates being used for various purposes within a network. As you can see, a single CA server may facilitate a number of different applications that require digital certificates for authentication purposes. Using CA servers in these instances provides a solution that simplifies the management of the authentication process. It also provides significant security based on the cryptographic mechanisms used in combination with digital certificates.