CCNP Switch: IP Telephony

Power over Ethernet (PoE)

A Cisco IP Phone is like any other node on the network—it must have power to operate. Power can come from two sources:

  • An external AC adapter
  • Power over Ethernet (DC) over the network data cable

The external AC adapter plugs into a normal AC wall outlet and provides 48V DC to the phone. These adapters, commonly called “wall warts,” are handy if no other power source is available. However, if a power failure occurs to the room or outlet where the adapter is located, the IP Phone will fail.

A more elegant solution is available as inline power or Power over Ethernet (PoE). Here, the same 48V DC supply is provided to an IP Phone over the same unshielded twisted-pair cable that is used for Ethernet connectivity. The DC power’s source is the Catalyst switch itself. No other power source is needed, unless an AC adapter is required as a redundant source.

PoE has the benefit that it can be managed, monitored, and offered only to an IP Phone. In fact, this capability isn’t limited to Cisco IP Phones—any device that can request and use inline power in a compatible manner can be used. Otherwise, if a nonpowered device such as a normal PC is plugged into the same switch port, the switch will not offer power to it.

The Catalyst switch also can be connected to an uninterruptible power supply (UPS) so that it continues to receive and offer power even if the regular AC source fails. This allows an IP Phone or other powered device to be available for use even across a power failure.

How Power over Ethernet Works

A Catalyst switch can offer power over its Ethernet ports only if it is designed to do so. It must have one or more power supplies that are rated for the additional load that will be offered to the connected devices. PoE is available on many platforms, including the Catalyst 3750-PWR, Catalyst 4500, and Catalyst 6500.

Two methods provide PoE to connected devices:

  • Cisco Inline Power (ILP)—A Cisco-proprietary method developed before the IEEE 802.3af standard
  • IEEE 802.3af—A standards-based method that offers vendor interoperability

Detecting a Powered Device

The switch always keeps the power disabled when a switch port is down. However, the switch must continually try to detect whether a powered device is connected to a port. If it is, the switch must begin providing power so that the device can initialize and become operational. Only then will the Ethernet link be established.

Because there are two PoE methods, a Catalyst switch tries both to detect a powered device. For IEEE 802.3af, the switch begins by supplying a small voltage across the transmit and receive pairs of the copper twisted-pair connection. It then can measure the resistance across the pairs to detect whether current is being drawn by the device. If a resistance of 25K ohms is measured, a powered device is indeed present.

The switch also can apply several predetermined voltages to test for corresponding resistance values. These values are applied by the powered device to indicate which of the five IEEE 802.3af power classes it belongs to. Knowing this, the switch can begin allocating the appropriate maximum power needed by the device. Table 16-2 defines the power classes.

Table 16-2 IEEE 802.3af Power Classes

The default class is used if either the switch or the powered device doesn’t support or doesn’t attempt the optional power class discovery. At press time, class 4 is not used; it holds a place for future devices that will require a greater power budget than the current Catalyst switches can supply.

Cisco inline power device discovery takes a totally different approach than IEEE 802.3af. Instead of offering voltage and checking resistance, the switch sends out a 340-kHz test tone on the transmit pair of the twisted-pair Ethernet cable. A tone is transmitted instead of DC power because the switch first must detect an inline power-capable device before offering it power. Otherwise, other types of devices (normal PCs, for example) could be damaged.

A powered device such as a Cisco IP Phone loops the transmit and receive pairs of its Ethernet connection while it is powered off. When it is connected to an inline power switch port, the switch can “hear” its test tone looped back. Then it safely assumes that a known powered device is present, and power can be applied to it.

Supplying Power to a Device

A switch first offers a default power allocation to the powered device. On a Catalyst 3750-24-PWR, for example, an IP Phone first receives 15.4W (0.32 amps at 48V DC). Power can be supplied in two ways:

  • For Cisco ILP, inline power is provided over data pairs 2 and 3 (RJ-45 pins 1,2 and 3,6) at 48V DC.
  • For IEEE 802.3af, power can be supplied in the same fashion (pins 1,2 and 3,6) or over pairs 1 and 4 (RJ-45 pins 4,5 and 7,8).
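As a quick sanity check of the per-port figure mentioned above, the default 15.4W budget at 48V DC works out to roughly 0.32 amps, using nothing more than P = V × I (variable names here are illustrative):

```python
# Per-port PoE budget check using P = V * I.
voltage_dc = 48.0        # volts DC supplied by the switch
power_budget_w = 15.4    # default per-port budget in watts
current_a = power_budget_w / voltage_dc
print(round(current_a, 2))  # about 0.32 A
```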

Now the device has a chance to power up and bring up its Ethernet link, too. The power budget offered to the device can be changed from the default to a more appropriate value. This can help prevent the switch from wasting its total power budget on devices that use far less power than the per-port default. With IEEE 802.3af, the power budget can be changed by detecting the device’s power class.

For Cisco ILP, the switch can attempt a Cisco Discovery Protocol (CDP) message exchange with the device. If CDP information is returned, the switch can discover the device type (Cisco IP Phone, for example) and the device’s actual power requirements. The switch then can reduce the inline power to the amount requested by the device.

To see this in operation, look at Example 16-1. Here, the power was reduced from 15,000 mW to 6300 mW. This output was produced by the debug ilpower controller and debug cdp packets commands.

Example 16-1 Displaying Inline Power Adjustment

Configuring Power over Ethernet

PoE or inline power configuration is simple. Each switch port automatically can detect the presence of an inline power-capable device before applying power, or the feature can be disabled to ensure that the port never can detect or offer inline power. By default, every switch port attempts to discover an inline-powered device. To change this behavior, use the following interface-configuration commands:
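A sketch of the syntax, as found on Catalyst 3750-class switches (verify the exact options against your platform's command reference):

```
Switch(config)# interface fastethernet 0/1
Switch(config-if)# power inline {auto | static} [max milli-watts]
Switch(config-if)# power inline never
```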

By default, every switch interface is configured for auto mode, where the device and power budget automatically is discovered. In addition, the default power budget is 15.4W. You can change the maximum power offered as max milli-watts (4000 to 15400).

You can configure a static power budget for a switch port if you have a device that can’t interact with either of the powered device-discovery methods. Again, you can set the maximum power offered to the device with max milli-watts. Otherwise, the default value of 15.4W is used. If you want to disable PoE on a switch port, use the never keyword. Power never will be offered and powered devices never will be detected on that port.

Verifying Power over Ethernet

You can verify the power status for a switch port with the following EXEC command:
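The command in question can be run for all ports or limited to a single interface; this is a sketch based on Catalyst 3750-class syntax:

```
Switch# show power inline
Switch# show power inline fastethernet 0/1
```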

Example 16-2 provides some sample output from this command. If the class is shown as n/a, Cisco ILP has been used to supply power. Otherwise, the IEEE 802.3af power class (0 through 4) is shown.

Example 16-2 Displaying PoE Status for Switch Ports

CAUTION A Catalyst switch waits for 4 seconds after inline power is applied to a port to see if an IP Phone comes alive. If not, the power is removed from the port. Be careful if you plug an IP phone into a switch port, and then remove it and plug in a normal Ethernet device. The inline power still could be applied during the 4-second interval, damaging a nonpowered device. Wait 10 seconds after unplugging an IP Phone before plugging anything back into the same port.

Voice VLANs

A Cisco IP Phone provides a data connection for a user’s PC, in addition to its own voice data stream. This allows a single Ethernet drop to be installed per user. The IP Phone also can control some aspects of how the packets (both voice and user data) are presented to the switch. Most Cisco IP Phone models contain a three-port switch, connecting to the upstream switch, the user’s PC, and the internal VoIP data stream, as illustrated in Figure 16-1. The voice and user PC ports always function as access-mode switch ports. The port that connects to the upstream switch, however, can operate as an 802.1Q trunk or as an access-mode (single VLAN) port.

Figure 16-1 Basic Connections to a Cisco IP Phone

The link mode between the IP Phone and the switch is negotiated; you can configure the switch to instruct the phone to use a special-case 802.1Q trunk or a single VLAN access link. With a trunk, the voice traffic can be isolated from other user data, providing security and QoS capabilities. As an access link, both voice and data must be combined over the single VLAN. This simplifies other aspects of the switch configuration because a separate voice VLAN is not needed, but it could compromise the voice quality, depending on the PC application mix and traffic load.

Voice VLAN Configuration

Although you can configure the IP Phone uplink as a trunk or nontrunk, the real consideration pertains to how the voice traffic will be encapsulated. The voice packets must be carried over a unique voice VLAN (known as the voice VLAN ID or VVID) or over the regular data VLAN (known as the native VLAN or the port VLAN ID, PVID). The QoS information from the voice packets also must be carried.

To configure the IP Phone uplink, just configure the switch port where it connects. The switch instructs the phone to follow the mode that is selected. In addition, the switch port does not need any special trunking configuration commands if a trunk is wanted. If an 802.1Q trunk is needed, a special-case trunk automatically is negotiated by the Dynamic Trunking Protocol (DTP) and CDP.

Use the following interface-configuration command to select the voice VLAN mode that will be used:
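A sketch of the command syntax (interface and VLAN numbers in the example line are illustrative):

```
Switch(config)# interface fastethernet 0/1
Switch(config-if)# switchport voice vlan {vlan-id | dot1p | untagged | none}
```

For instance, `switchport voice vlan 110` instructs the attached phone to tag its voice packets with VLAN 110 over the special-case 802.1Q trunk.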

Figure 16-2 shows the four different voice VLAN configurations. Pay particular attention to the link between the IP Phone and the switch.

Table 16-3 documents the four different voice VLAN configurations

Table 16-3 Trunking Modes with a Cisco IP Phone

The default condition for every switch port is none, where a trunk is not used. All modes except for none use the special-case 802.1Q trunk. The only difference between the dot1p and untagged modes is the encapsulation of voice traffic. The dot1p mode puts the voice packets on VLAN 0, which requires a VLAN ID (not the native VLAN) but doesn’t require a unique voice VLAN to be created. The untagged mode puts voice packets in the native VLAN, requiring neither a VLAN ID nor a unique voice VLAN.

The most versatile mode uses the vlan-id, as shown in case A in Figure 16-2. Here, voice and user data are carried over separate VLANs. VoIP packets in the voice VLAN also carry the CoS bits in the 802.1p trunk encapsulation field.

Figure 16-2 Trunking Modes for Voice VLANs with a Cisco IP Phone

Be aware that the special-case 802.1Q trunk automatically is enabled through a CDP information exchange between the switch and the IP Phone. The trunk contains only two VLANs—a voice VLAN (tagged VVID) and the data VLAN. The switch port’s access VLAN is used as the data VLAN that carries packets to and from a PC that is connected to the phone’s PC port.

If an IP Phone is removed and a PC is connected to the same switch port, the PC still will be capable of operating because the data VLAN still will appear as the access VLAN—even though the special trunk no longer is enabled.

Verifying Voice VLAN Operation

You can verify the switch port mode (access or trunk) and the voice VLAN by using the show interface switchport command. As demonstrated in Example 16-3, the port is in access mode and uses access VLAN 10 and voice VLAN 110.
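The verification might look like the following sketch (the interface name and the key output lines to look for are illustrative; exact output varies by platform and software release):

```
Switch# show interfaces fastethernet 0/1 switchport
! Look for lines similar to these in the output:
!   Operational Mode: static access
!   Access Mode VLAN: 10 (VLAN0010)
!   Voice VLAN: 110
```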

Example 16-3 Verifying Switch Port Mode and Voice VLAN

When the IP Phone trunk is active, it is not shown in the trunking mode from any Cisco IOS Software show command. However, you can verify the VLANs being carried over the trunk link by looking at the Spanning Tree Protocol (STP) activity. STP runs with two instances—one for the voice VLAN and one for the data VLAN, which can be seen with the show spanning-tree interface command.
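A sketch of that check (interface and VLAN numbers are illustrative):

```
Switch# show spanning-tree interface fastethernet 0/1
! Two STP instances appear when the phone trunk is active:
!   VLAN0010 ...   (data/access VLAN)
!   VLAN0110 ...   (voice VLAN)
```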

For example, suppose a switch port is configured with access VLAN 10, voice VLAN 110, and native VLAN 99. Example 16-4 shows the switch port configuration and STP information when the switch port is in access mode.

Example 16-4 IP Phone Trunk Configuration and STP Information

Voice QoS

On a quiet, underutilized network, a switch generally can forward packets as soon as they are received. However, if a network is congested, packets can’t always be delivered in a timely manner. Traditionally, network congestion has been handled by increasing link bandwidths and switching hardware performance. This does little to address how one type of traffic can be preferred or delivered ahead of another.

Quality of service (QoS) is the overall method used in a network to protect and prioritize time-critical or important traffic. The most important aspect of transporting voice traffic across a switched campus network is maintaining the proper QoS level. Voice packets must be delivered in the most timely fashion possible, with little jitter, little loss, and little delay.

Remember, a user expects to receive a dial tone, a call to go through, and a good-quality audio connection with the far end when an IP Phone is used. Above that, any call that is made could be an emergency 911 call. It is then very important that QoS be implemented properly.

QoS Overview

The majority of this book has discussed how Layer 2 and Layer 3 Catalyst switches forward packets from one switch port to another. On the surface, it might seem that there is only one way to forward packets—just look up the next packet’s destination in a Content Addressable Memory (CAM) or Cisco Express Forwarding (CEF) table and send it on its way. But that addresses only whether the packet can be forwarded, not how it can be forwarded.

Different types of applications have different requirements for how their data should be sent end to end. For example, it might be acceptable to wait a short time for a web page to be displayed after a user requests it. That same user probably cannot tolerate the same delays in receiving packets that belong to a streaming video presentation or an audio telephone call. Any loss or delay in packet delivery could ruin the purpose of the application.

Three basic things can happen to packets as they are sent from one host to another across a network:

  • Delay—As a packet is sent from one network device to another, its delivery is delayed by some amount of time. This can be caused by the time required to send the packet serially across a wire, the time required for a router or switch to perform table lookups or make decisions, the time required for the data to travel over a geographically long path, and so on. The total delay from start to finish is called the latency. This is seen most easily as the time from when a user presses a key until the time the character is echoed and displayed in a terminal session.
  • Jitter—Some applications involve the delivery of a stream of related data. As these packets are delivered, variations can occur in the amount of delay so that they do not all arrive at predictable times. The variation in delay is called jitter. Audio streams are particularly susceptible to jitter; if the audio data is not played back at a constant rate, the resulting speech or music sounds choppy.
  • Loss—In extreme cases, packets that enter a congested or error-prone part of the network simply are dropped without delivery. Some amount of packet loss is acceptable and recoverable by applications that use a reliable, connection-oriented protocol such as TCP. Other application protocols are not as tolerant, and dropped packets mean data is missing.

To address and alleviate these conditions, a network can employ three basic types of QoS:

  • Best-effort delivery
  • Integrated Services model
  • Differentiated Services model

Keep in mind that QoS works toward making policies or promises to improve packet delivery from a sender to a receiver. The same QoS policies should be used on every network device that connects the sender to the receiver. QoS must be implemented end to end before it can be totally effective.

Best-Effort Delivery

A network that simply forwards packets in the order they were received has no real QoS. Switches and routers then make their “best effort” to deliver packets as quickly as possible, with no regard for the type of traffic or the need for priority service.

To get an idea of how QoS operates in a network, consider a fire truck or an ambulance trying to quickly work its way through a crowded city. The lights are flashing and the siren is sounding to signal that this is a “priority” vehicle needing to get through ahead of everyone else. The priority vehicle does not need to obey normal traffic rules.

However, the best effort scenario says that the fire truck must stay within the normal flow of traffic. At an intersection, it must wait in the line or queue of traffic like any other vehicle—even if its lights and siren are on. It might arrive on time or too late to help, depending on the conditions along the road.

Integrated Services Model

One approach to QoS is the Integrated Services (IntServ) model. The basic idea is to prearrange a path for priority data along the complete path, from source to destination. Beginning with RFC 1633, the Resource Reservation Protocol (RSVP) was developed as the mechanism for scheduling and reserving adequate path bandwidth for an application.

The source application itself is involved by requesting QoS parameters through RSVP. Each network device along the way must check to see if it can support the request. When a complete path meeting the minimum requirements is made, the source is signaled with a confirmation. Then the source application can begin using the path.

Applying the fire truck example to the IntServ model, a fire truck would radio ahead to the nearest intersection before it left the firehouse. Police stationed at each intersection would contact each other in turn, to announce that the fire truck was coming and to assess the traffic conditions. The police might reserve a special lane so that the fire truck could move at full speed toward the destination, regardless of what other traffic might be present.

Differentiated Services Model

As you might imagine, the IntServ model does not scale very well when many sources are trying to compete with each other to reserve end-to-end bandwidth. Another approach is the Differentiated Services (DiffServ) model, which permits each network device to handle packets on an individual basis. Each router or switch can be configured with QoS policies to follow, and forwarding decisions are made accordingly.

DiffServ requires no advance reservations; QoS is handled dynamically, in a distributed fashion. In other words, whereas IntServ applies QoS on a per-flow basis, DiffServ applies it on a per-hop basis to a whole group of similar flows. DiffServ also bases its QoS decisions on information contained in each packet header.

Continuing with the emergency vehicle analogy, here police are stationed at every intersection, as before. However, none of them knows a fire truck is coming until they see the lights or hear the siren. At each intersection, a decision is made as to how to handle the approaching fire truck. Other traffic can be held back, if needed, so that the fire truck can go right through.

Giving premium service to voice traffic focuses almost entirely on the DiffServ model. QoS is a complex and intricate topic in itself. The BCMSN course and exam cover only the theory behind DiffServ QoS, along with the features and commands that address voice QoS specifically.

DiffServ QoS

DiffServ is a per-hop behavior, with each router or switch inspecting each packet’s header to decide how to go about forwarding that packet. All the information needed for this decision is carried along with each packet in the header. The packet itself cannot affect how it will be handled. Instead, it merely presents some flags, classifications, or markings that can be used to make a forwarding decision based on QoS policies that are configured into each switch or router along the path.

Layer 2 QoS Classification

Layer 2 frames themselves have no mechanism to indicate the priority or importance of their contents. One frame looks just as important as another. Therefore, a Layer 2 switch can forward frames only according to a best-effort delivery.

When frames are carried from switch to switch, however, an opportunity for classification occurs. Recall that a trunk is used to carry frames from multiple VLANs between switches. The trunk does this by encapsulating the frames and adding a tag indicating the source VLAN number. The encapsulation also includes a field that can mark the class of service (CoS) of each frame. This can be used at switch boundaries to make some QoS decisions. After a trunk is unencapsulated at the far-end switch, the CoS information is removed and lost.

The two trunk encapsulations handle CoS differently:

  • IEEE 802.1Q—Each frame is tagged with a 12-bit VLAN ID and a User field. The User field contains three 802.1p priority bits that indicate the frame CoS, a unitless value ranging from 0 (lowest-priority delivery) to 7 (highest-priority delivery). Frames from the native VLAN are not tagged (no VLAN ID or User field), so they receive a default CoS that is configured on the receiving switch.
  • Inter-Switch Link (ISL)—Each frame is tagged with a 15-bit VLAN ID. In addition, next to the frame Type field is a 4-bit User field. The lower 3 bits of the User field are used as a CoS value. Although ISL is not standards based, Catalyst switches make CoS seamless by copying the 802.1p CoS bits from an 802.1Q trunk into the User CoS bits of an ISL trunk. This allows CoS information to propagate along trunks of differing encapsulations.
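To make the 802.1Q tag layout concrete, the following sketch pulls the 3-bit 802.1p priority (CoS) and the 12-bit VLAN ID out of a 16-bit Tag Control Information (TCI) field; the helper names are made up for illustration:

```python
def dot1q_cos(tci):
    """Extract the 3-bit 802.1p priority (CoS) from a 16-bit 802.1Q TCI.
    TCI layout: priority (3 bits) | CFI/DEI (1 bit) | VLAN ID (12 bits)."""
    return (tci >> 13) & 0x7

def dot1q_vlan(tci):
    """Extract the 12-bit VLAN ID from the low bits of the TCI."""
    return tci & 0x0FFF

# A voice frame tagged with CoS 5 on VLAN 110:
tci = (5 << 13) | 110
print(dot1q_cos(tci))   # 5
print(dot1q_vlan(tci))  # 110
```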

Layer 3 QoS Classification with DSCP

From the beginning, IP packets have always had a type of service (ToS) byte that can be used to mark packets. This byte is divided into a 3-bit IP Precedence value and a 4-bit ToS value. This offers a rather limited mechanism for QoS because only the 3 bits of IP Precedence are used to describe the per-hop QoS behavior.

The DiffServ model keeps the existing IP ToS byte but uses it in a more scalable fashion. This byte also is referred to as the Differentiated Services (DS) field, with a different format, as shown in Figure 16-3. The 6-bit DS value is known as the Differentiated Service Code Point (DSCP) and is the one value that is examined by any DiffServ network device.

Don’t be confused by the dual QoS terminology—the ToS and DS bytes are the same, occupying the same location in the IP header. Only the names are different, along with the way the value is interpreted. In fact, the DSCP bits have been arranged to be backward compatible with the IP precedence bits so that a non-DiffServ device still can interpret some QoS information.

Figure 16-3 ToS and DSCP Byte Formats

The DSCP value is divided into a 3-bit class selector and a 3-bit Drop Precedence value. Refer to Table 16-4 to see how the IP Precedence, DSCP per-hop behavior, and DSCP codepoint names and numbers relate.

Table 16-4 Mapping of IP Precedence and DSCP Fields

The three class selector bits (DS5 through DS3) coarsely classify packets into one of eight classes:

  • Class 0, the default class, offers only best-effort forwarding.
  • Classes 1 through 4 are called Assured Forwarding (AF) service levels. Higher AF class numbers indicate the presence of higher-priority traffic.
    Packets in the AF classes can be dropped, if necessary, with the lower-class numbers the most likely to be dropped. For example, packets with AF Class 4 will be delivered in preference to packets with AF Class 3.
  • Class 5 is known as Expedited Forwarding (EF), with those packets given premium service. EF is the least likely to be dropped, so it always is reserved for time-critical data such as voice traffic.
  • Classes 6 and 7 are called Internetwork Control and Network Control, respectively, and are set aside for network control traffic. Usually, routers and switches use these classes for things such as the Spanning Tree Protocol and routing protocols. This ensures timely delivery of the packets that keep the network stable and operational.

Each class represented in the DSCP also has three levels of drop precedence, contained in bits DS2 through DS0 (DS0 is always zero):

  • Low (1)
  • Medium (2)
  • High (3)

Within a class, packets marked with a higher drop precedence have the potential for being dropped before those with a lower value. In other words, a lower drop precedence value gives better service. This gives finer granularity to the decision of what packets to drop when necessary.

TIP The DSCP value can be given as a codepoint name, with the class selector providing the two letters and a number followed by the drop precedence number. For example, class AF Level 2 with drop precedence 1 (low) is written as AF21. The DSCP commonly is given as a decimal value. For AF21, the decimal value is 18. The relationship is confusing, and Table 16-4 should be a handy aid.
You should try to remember a few codepoint names and numbers. Some common values are EF (46) and most of the classes with low drop precedences: AF41 (34), AF31 (26), AF21 (18), and AF11 (10). Naturally, the default DSCP has no name (0).
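The codepoint arithmetic follows directly from the bit layout: the class occupies DS5 through DS3 and the drop precedence DS2 through DS1, so the decimal DSCP is (class × 8) + (drop precedence × 2). A quick sketch that reproduces the common values listed above (the function name is illustrative):

```python
def dscp_decimal(class_selector, drop_precedence=0):
    """Decimal DSCP value from its class-selector and drop-precedence bits.
    Bit layout: class in DS5-DS3, drop precedence in DS2-DS1, DS0 always 0."""
    return (class_selector << 3) | (drop_precedence << 1)

print(dscp_decimal(2, 1))  # AF21 -> 18
print(dscp_decimal(4, 1))  # AF41 -> 34
print(dscp_decimal(3, 1))  # AF31 -> 26
print(dscp_decimal(1, 1))  # AF11 -> 10
print(dscp_decimal(5, 3))  # EF   -> 46 (bits 101110)
print(dscp_decimal(0))     # default -> 0
```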

Implementing QoS for Voice

To manipulate packets according to QoS policies, a switch somehow must identify which level of service each packet should receive. This process is known as classification. Each packet can be classified by the type of traffic (UDP or TCP port number, for example), by parameters matched in an access list, or by something more complex, such as stateful inspection of a traffic flow.

Recall that IP packets carry a ToS or DSCP value within their headers as they travel around a network. Frames on a trunk also can have CoS values associated with them. A switch then can decide whether to trust the ToS, DSCP, or CoS values already assigned to inbound packets. If it trusts any of these values, the values are carried over and used to make QoS decisions inside the switch.

If the QoS values are not trusted, they can be reassigned or overruled. This way, a switch can set the values to something known and trusted, and something that falls within the QoS policies that must be met. This prevents nonpriority users in the network from falsely setting the ToS or DSCP values of their packets to inflated levels so that they receive priority service.

Every switch must decide whether to trust incoming QoS values. Generally, an organization should be able to trust QoS parameters anywhere inside its own network. At the boundary with another organization or service provider, QoS typically should not be trusted. It is also prudent to trust only QoS values that have been assigned by the network devices themselves. Therefore, the QoS values produced by the end users should not be trusted until the network can verify or override them.

The perimeter formed by switches that do not trust incoming QoS is called the trust boundary. Usually, the trust boundary exists at the farthest reaches of the enterprise network (access-layer switches and WAN or ISP demarcation points). When the trust boundary has been identified and the switches there are configured with untrusted ports, everything else inside the perimeter can be configured to blindly trust incoming QoS values.

TIP Every switch and router within a network must be configured with the appropriate QoS features and policies. The BCMSN course and exam limit QoS coverage to the switches at the access layer where the trust boundary is configured. Other more involved QoS topics are dealt with in the “Implementing Cisco Quality of Service (QoS)” course.

Figure 16-4 shows a simple network in which the trust boundary is defined at the edges where end users and public networks join. On Catalyst A, port GigabitEthernet2/1 is configured to consider inbound data as untrusted. Catalyst B’s port FastEthernet0/2 connects to a PC that also is untrusted. The Cisco IP Phone on Catalyst B port FastEthernet0/1 is a special case because it supports its own voice traffic and an end user’s PC. Therefore, the trust boundary can’t be defined clearly on that switch port.

Figure 16-4 A QoS Trust Boundary Example

Configuring a Trust Boundary

When a Cisco IP Phone is connected to a switch port, think of the phone as another switch (which it is). If you install the phone as a part of your network, you probably can trust the QoS information relayed by the phone.
However, remember that the phone also has two sources of data:

  • The VoIP packets native to the phone—The phone can control precisely what QoS information is included in the voice packets because it produces those packets.
  • The user PC data switch port—Packets from the PC data port are generated elsewhere, so the QoS information cannot necessarily be trusted to be correct or fair.

A switch instructs an attached IP Phone through CDP messages on how it should extend QoS trust to its own user data switch port. To configure the trust extension, use the following configuration steps:
Step 1 Enable QoS on the switch:
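A sketch of the global command, as on Catalyst 3750-class switches:

```
Switch(config)# mls qos
```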

By default, QoS is disabled globally on a switch and all QoS information is allowed to pass from one switch port to another. When you enable QoS, all switch ports are configured as untrusted, by default.
Step 2 Define the QoS parameter that will be trusted:
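A sketch of the interface-configuration syntax (verify against your platform's command reference):

```
Switch(config-if)# mls qos trust {cos | ip-precedence | dscp}
```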

You can choose to trust the CoS, IP precedence, or DSCP values of incoming packets on the switch port. Only one of these parameters can be selected. Generally, for Cisco IP Phones, you should use the cos keyword because the phone can control the CoS values on its two-VLAN trunk with the switch.
Step 3 Make the trust conditional:
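A sketch of the conditional-trust command:

```
Switch(config-if)# mls qos trust device cisco-phone
```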

You also can make the QoS trust conditional if a Cisco IP Phone is present. If this command is used, the QoS parameter defined in step 2 is trusted only if a Cisco phone is detected through CDP. If a phone is not detected, the QoS parameter is not trusted.

Step 4 Instruct the IP Phone on how to extend the trust boundary:
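A sketch of the trust-extension syntax:

```
Switch(config-if)# switchport priority extend {cos value | trust}
```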

Normally, the QoS information from a PC connected to an IP Phone should not be trusted. This is because the PC’s applications might try to spoof CoS or Differentiated Services Code Point (DSCP) settings to gain premium network service. In this case, use the cos keyword so that the IP Phone overwrites the CoS bits of packets forwarded from the PC with a configured value. If CoS values from the PC cannot be trusted, they should be overwritten to a value of 0.

In some cases, the PC might be running trusted applications that are allowed to request specific QoS or levels of service. Here, the IP Phone can extend complete QoS trust to the PC, allowing the CoS bits to be forwarded through the phone unmodified. This is done with the trust keyword.

By default, a switch instructs an attached IP Phone to consider the PC port as untrusted. The phone will overwrite the CoS values to 0.

What about switch ports that don’t connect to end-user or phone devices? Switch uplinks always should be considered as trusted ports. QoS parameters are trusted or overwritten at the network edge, as packets enter the trusted domain. After that, every switch inside the trusted boundary implicitly can trust and use the QoS parameters in any packet passing through. You can configure a switch uplink port to be trusted with the following commands:
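A sketch of an unconditionally trusted uplink configuration (interface name is illustrative):

```
Switch(config)# interface gigabitethernet 0/1
Switch(config-if)# mls qos trust cos
```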

Here, the trust is not conditional. The switch will trust only the CoS values that are found in the incoming packets.

TIP A Cisco switch also has a CoS-to-DSCP map that is used to convert inbound CoS values to DSCP values. The CoS information is useful only on trunk interfaces because it can be carried within the trunk encapsulation. CoS must be converted to DSCP or IP Precedence, which can be carried along in the IP packet headers on any type of connection. Switches use a default CoS-to-DSCP mapping, which can be configured or changed. However, this is beyond the scope of the BCMSN course and exam.

Verifying Voice QoS

A switch port can be configured with a QoS trust state with the connected device. If that device is an IP Phone, the switch can instruct the phone on whether to extend QoS trust to an attached PC. To verify how QoS trust has been extended to the IP Phone itself, use the following EXEC command:
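A sketch of the command, which can be given with or without a specific interface:

```
Switch# show mls qos interface
Switch# show mls qos interface fastethernet 0/1
```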

If the port is trusted, all traffic forwarded by the IP Phone is accepted with the QoS information left intact. If the port is not trusted, even the voice packets can have their QoS information overwritten by the switch. Example 16-5 demonstrates some sample output from the show mls qos interface command, where the switch port is trusting CoS information from the attached IP Phone.

Example 16-5 Verifying QoS Trust to the IP Phone

Next, you can verify how the IP Phone has been instructed to treat incoming QoS information from its attached PC or other device. This is shown in the trust device: line in Example 16-5, where the trusted device is the Cisco IP Phone. You also can use the following EXEC command:

Again, the IP Phone’s device is not being trusted. If the switch port was configured with the switchport priority extend trust command, the appliance trust would show trusted.
Example 16-7 shows the configuration commands that have been added to a switch interface where a Cisco IP Phone is connected.

Example 16-7 Switch Port Configuration Commands to Support a Cisco IP Phone

If the IP Phone is not connected to the switch port, it is not detected and the trust parameter is not enabled, as Example 16-8 demonstrates.

Example 16-8 Displaying IP Phone Connection and Trust Status

When a Cisco IP Phone is connected, power is applied and the phone is detected. Then the conditional QoS trust (CoS, in this case) is enabled, as demonstrated in Example 16-9.

Example 16-9 Conditional Trust (CoS) Enabled on a Cisco IP Phone
