CCNP Voice FAQ: Congestion Avoidance, Policing, Shaping, and Link Efficiency Mechanisms

Q1. Which of the following is not a tail drop flaw?
A. TCP synchronization
B. TCP starvation
C. TCP slow start
D. No differentiated drop

Answer: C

Q2. Which of the following statements is not true about RED?
A. RED randomly drops packets before the queue becomes full.
B. RED increases the drop rate as the average queue size increases.
C. RED has no per-flow intelligence.
D. RED is always useful, without dependency on flow (traffic) types.

Answer: D

Q3. Which of the following is not a main parameter of a RED profile?
A. Mark probability denominator
B. Average transmission rate
C. Maximum threshold
D. Minimum threshold

Answer: B

Q4. Which of the following is not true about WRED?
A. You cannot apply WRED to the same interface as CQ, PQ, and WFQ.
B. WRED treats non-IP traffic as precedence 0.
C. You normally use WRED in the core routers of a network.
D. You should apply WRED to the voice queue.

Answer: D

Q5. Which of the following is not true about traffic shaping?
A. It is applied in the outgoing direction only.
B. Shaping can re-mark excess packets.
C. Shaping buffers excess packets.
D. It supports interaction with Frame Relay congestion indication.

Answer: B

Q6. Which of the following is not true about traffic policing?
A. You apply it in the outgoing direction only.
B. It can re-mark excess traffic.
C. It can drop excess traffic.
D. You can apply it in the incoming direction.

Answer: A

Q7. Which command is used for traffic policing in a class within a policy map?
A. police
B. drop
C. remark
D. maximum-rate

Answer: A

Q8. Which of the following does not apply to class-based shaping?
A. It does not support FRF.12.
B. It classifies per DLCI or subinterface.
C. It understands FECN and BECN.
D. It is supported via MQC

Answer: B

Q9. Which of the following is not a valid statement about compression?
A. Many compression techniques remove as much redundancy in data as possible.

B. A single algorithm might yield different compression ratios for different data types.

C. If available, compression is always recommended.

D. Compression can be hardware based, hardware assisted, or software based.

Answer: C

Q10. Which of the following is not true about Layer 2 payload compression?
A. It reduces the size of the frame payload.

B. It reduces serialization delay.

C. Software-based compression might yield better throughput than hardware-based compression.

D. Layer 2 payload compression is recommended on all WAN links.

Answer: D

Q11. Which of the following is the only true statement about header compression?
A. RTP header compression is not a type of header compression.
B. Header compression compresses the header and payload.
C. Header compression may be class based.
D. Header compression is performed on a session-by-session (end-to-end) basis.

Answer: C

Q12. Which of the following is not true about fragmentation and interleaving?
A. Fragmentation and interleaving is recommended when small delay-sensitive packets are present.

B. Fragmentation result is not dependent on interleaving.

C. Fragmentation and interleaving might be necessary, even if LLQ is configured on the interface.

D. Fragmentation and interleaving is recommended on slow WAN links.

Answer: B

Q13. Name two of the limitations and drawbacks of tail drop.

Answer: The limitations and drawbacks of tail drop include TCP global synchronization, TCP starvation, and lack of differentiated (or preferential) dropping.

Q14. Explain TCP global synchronization.

Answer: When tail drop happens, TCP-based traffic flows simultaneously slow down (go into slow start) by reducing their TCP send window size. At this point, the bandwidth utilization drops significantly (assuming there are many active TCP flows), interface queues become less congested, and TCP flows start to increase their window sizes. Eventually, interfaces become congested again, tail drops happen, and the cycle repeats. This situation is called TCP global synchronization.

Q15. Explain TCP starvation.

Answer: When traffic is excessive and there is no remedy, queues become full, tail drop occurs, and aggressive flows are not selectively punished. After tail drops begin, TCP flows slow down simultaneously, but non-TCP flows, such as UDP and non-IP traffic, do not. Consequently, non-TCP traffic starts filling up the queues and leaves little or no room for TCP packets. This situation is called TCP starvation.

Q16. Explain why RED does not cause TCP global synchronization.

Answer: Because RED drops packets from some flows and not all of them (statistically, from the more aggressive ones), the flows do not all slow down and speed up at the same time, so global synchronization does not occur.

Q17. What are the three configuration parameters for RED?

Answer: RED has three configuration parameters: minimum threshold, maximum threshold, and mark probability denominator (MPD). While the queue size is smaller than the minimum threshold, RED does not drop packets. As the queue size grows beyond the minimum threshold, the rate of packet drops grows with it. When the queue size exceeds the maximum threshold, all arriving packets are dropped (tail drop behavior). The mark probability denominator is an integer that tells RED to drop, at most, one out of every MPD packets while the queue size is between the minimum and maximum thresholds.
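This profile can be sketched as a simple function (a hypothetical illustration; the threshold and MPD values below are assumptions, not IOS defaults):

```python
def red_drop_probability(avg_queue, min_th, max_th, mpd):
    """Return the RED drop probability for a given average queue size."""
    if avg_queue < min_th:
        return 0.0          # below the minimum threshold: no drops
    if avg_queue >= max_th:
        return 1.0          # above the maximum threshold: tail-drop behavior
    # Between the thresholds, the probability rises linearly toward 1/MPD.
    return (avg_queue - min_th) / (max_th - min_th) / mpd

# Example profile: min threshold 20, max threshold 40, MPD 10
print(red_drop_probability(10, 20, 40, 10))  # 0.0  -> no drops yet
print(red_drop_probability(30, 20, 40, 10))  # 0.05 -> one in 20 dropped
print(red_drop_probability(40, 20, 40, 10))  # 1.0  -> tail drop
```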

Q18. Briefly explain how WRED is different from RED.

Answer: Weighted random early detection (WRED) has the added capability of differentiating between high- and low-priority traffic, compared to RED. With WRED, you can set up a different profile (with a minimum threshold, maximum threshold, and mark probability denominator) for each traffic priority. Traffic priority is based on IP precedence or DSCP values.

Q19. Explain how class-based weighted random early detection is implemented.

Answer: When CBWFQ is the deployed queuing discipline, each queue performs tail drop by default. Applying WRED inside a CBWFQ system yields CBWRED; within each queue, packet drop profiles are based on IP precedence or DSCP value.
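A minimal IOS sketch of CBWRED might look like the following (the policy and class names, bandwidth, and threshold values are hypothetical):

```
policy-map WAN-EDGE
 class bulk-data
  bandwidth 256
  random-detect dscp-based
  random-detect dscp af11 30 40 10
  random-detect dscp af13 20 40 10
```

Here AF13 packets begin to be dropped at a smaller queue depth than AF11 packets, giving the differentiated drop behavior that tail drop lacks.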

Q20. Explain how assured forwarding per-hop behavior is implemented on Cisco routers.

Answer: Currently, the only way to enforce assured forwarding (AF) per-hop behavior (PHB) on a Cisco router is by applying WRED to the queues within a CBWFQ system. Note that LLQ is composed of a strict-priority queue (policed) and a CBWFQ system. Therefore, applying WRED to the CBWFQ component of the LLQ also yields AF behavior.

Q21. List at least two of the main purposes of traffic policing.

Answer: The purposes of traffic policing are to enforce subrate access, to limit the traffic rate for each traffic class, and to re-mark traffic.
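As a sketch, subrate enforcement with the `police` command in MQC might look like the following (the policy name, rate, and interface are hypothetical):

```
policy-map SUBRATE-IN
 class class-default
  police cir 1000000 conform-action transmit exceed-action drop
!
interface Serial0/0
 service-policy input SUBRATE-IN
```

Replacing `exceed-action drop` with a set action (for example, `set-dscp-transmit`) re-marks excess traffic instead of dropping it.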

Q22. List at least two of the main purposes of traffic shaping.

Answer: The purposes of traffic shaping are to slow down the rate of traffic being sent to another site through a WAN service such as Frame Relay or ATM, to comply with the subscribed rate, and to send different traffic classes at different rates.
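Shaping to a subscribed rate can be sketched in MQC as follows (the policy name, rate, and interface are hypothetical):

```
policy-map SHAPE-OUT
 class class-default
  shape average 128000
!
interface Serial0/0
 service-policy output SHAPE-OUT
```

Note that the policy is applied in the output direction; unlike policing, shaping cannot be applied to inbound traffic.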

Q23. List at least four of the similarities and differences between traffic shaping and policing.

Answer: The similarities and differences between traffic shaping and policing include the following:

  • Both traffic shaping and traffic policing measure traffic. (Sometimes, different traffic classes are measured separately.)
  • Policing can be applied to the inbound and outbound traffic (with respect to an interface), but traffic shaping applies only to outbound traffic.
  • Shaping buffers excess traffic and sends it according to a preconfigured rate, whereas policing drops or re-marks excess traffic.
  • Shaping requires memory for buffering excess traffic, which creates variable delay and jitter; policing does not require extra memory, and it does not impose variable delay.
  • Policing can re-mark traffic, but traffic shaping does not re-mark traffic.
  • Traffic shaping can be configured to shape traffic based on network conditions and signals, but policing does not respond to network conditions and signals.

Q24. In the token bucket scheme, how many tokens are needed for each byte of data to be transmitted?

Answer: To transmit one byte of data, the bucket must have one token.

Q25. Explain in the single bucket, single rate model when conform action and exceed action take place.

Answer: If the size of data to be transmitted (in bytes) is smaller than the number of tokens, the traffic is called conforming. When traffic conforms, as many tokens as the size of data are removed from the bucket, and the conform action, which is usually forward data, is performed. If the size of data to be transmitted (in bytes) is larger than the number of tokens, the traffic is called exceeding. In the exceed situation, tokens are not removed from the bucket, but the action performed (exceed action) is either buffer and send data later (in the case of shaping) or drop or mark data (in the case of policing).
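The conform/exceed decision can be sketched as follows (a simplified single-bucket model; the names and values are illustrative only):

```python
def police(tokens, packet_bytes):
    """Single-bucket, single-rate decision: one token buys one byte.
    Returns the action taken and the remaining token count."""
    if packet_bytes <= tokens:
        # Conforming: remove as many tokens as the packet size.
        return "conform", tokens - packet_bytes
    # Exceeding: the bucket is left untouched.
    return "exceed", tokens

print(police(tokens=1500, packet_bytes=1200))  # ('conform', 300)
print(police(tokens=300, packet_bytes=1200))   # ('exceed', 300)
```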

Q26. What is the formula showing the relationship between CIR, Bc, and Tc?

Answer: The formula showing the relationship between CIR, Bc, and Tc is as follows:
CIR (bits per second) = Bc (bits) / Tc (seconds)
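For example, with a Bc of 8,000 bits and a Tc of 125 ms (assumed values):

```python
bc_bits = 8000       # committed burst sent per interval
tc_seconds = 0.125   # shaping interval
cir_bps = bc_bits / tc_seconds
print(cir_bps)  # 64000.0 -> a 64-kbps CIR
```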

Q27. Compare and contrast Frame Relay traffic shaping and class-based traffic shaping.

Answer: Frame Relay traffic shaping controls Frame Relay traffic only and can be applied to a Frame Relay subinterface or Frame Relay DLCI. Whereas Frame Relay traffic shaping supports Frame Relay fragmentation and interleaving (FRF.12), class-based traffic shaping does not. On the other hand, both class-based traffic shaping and Frame Relay traffic shaping interact with and support Frame Relay network congestion signals such as BECN and FECN. A router that is receiving BECNs shapes its outgoing Frame Relay traffic to a lower rate. If it receives FECNs—even if it has no traffic for the other end—it sends test frames with the BECN bit set to inform the other end to slow down.

Q28. Briefly explain compression.

Answer: Compression is a technique used in many of the link efficiency mechanisms. It reduces the size of data to be transferred; therefore, it increases throughput and reduces overall delay. Many compression algorithms have been developed over time. One main difference between compression algorithms is often the type of data that the algorithm has been optimized for. The success of compression algorithms is measured and expressed by the ratio of raw data to compressed data. When possible, hardware compression is recommended over software compression.

Q29. Briefly explain Layer 2 payload compression.

Answer: Layer 2 payload compression, as the name implies, compresses the entire payload of a Layer 2 frame. For example, if a Layer 2 frame encapsulates an IP packet, the entire IP packet is compressed. Layer 2 payload compression is performed on a link-by-link basis; it can be performed on WAN connections such as PPP, Frame Relay, HDLC, X.25, and LAPB. Cisco IOS supports Stacker, Predictor, and Microsoft Point-to-Point Compression (MPPC) as Layer 2 compression methods. The primary difference between these methods is their overhead and utilization of CPU and memory. Because Layer 2 payload compression reduces the size of the frame, serialization delay is reduced. An increase in available bandwidth (hence throughput) depends on the algorithm efficiency.

Q30. Provide a brief explanation for header compression.

Answer: Header compression reduces serialization delay and results in less bandwidth usage, yielding more throughput and more available bandwidth. As the name implies, header compression compresses headers only. For example, RTP header compression compresses the RTP, UDP, and IP headers, but it does not compress the application data. This makes header compression especially useful when the application payload size is small. Without header compression, the header (overhead)-to-payload (data) ratio is large; with header compression, the overhead-to-data ratio becomes much smaller.
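The savings are easy to quantify. For a 20-byte voice payload (typical of G.729) carried behind 40 bytes of IP/UDP/RTP headers, compressing the headers to roughly 2 bytes (a commonly cited cRTP figure; 2 is assumed here) changes the ratio dramatically:

```python
payload = 20        # G.729 voice payload bytes per packet
header_plain = 40   # IP (20) + UDP (8) + RTP (12) headers
header_crtp = 2     # compressed RTP header (roughly 2-4 bytes)

print(header_plain / payload)  # 2.0 -> overhead is twice the payload
print(header_crtp / payload)   # 0.1 -> overhead is one tenth of the payload
```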

Q31. Is it possible to mitigate the delay imposed by the large data units ahead of delay-sensitive packets in the hardware (Tx) queue?

Answer: Yes. You must enable fragmentation on the link and specify the maximum data unit size (called the fragment size). Fragmentation must be accompanied by interleaving; otherwise, it has no effect. Interleaving allows packets of other flows to be transmitted between the fragments of large data units in the queue.
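A common sizing rule is to choose the fragment size so that serializing one fragment does not exceed the delay budget. With a 64-kbps link and a 10-ms budget (assumed values):

```python
link_bps = 64000         # slow WAN link speed (bits per second)
target_delay_ms = 10     # serialization budget per fragment (assumed)

# Bits sent in the budget interval, converted to bytes.
fragment_bytes = link_bps * target_delay_ms // 1000 // 8
print(fragment_bytes)  # 80 -> use a fragment size of about 80 bytes
```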

Q32. Where should link efficiency mechanisms be applied?

Answer: Link efficiency mechanisms might not be necessary on all interfaces and links. It is important that you identify network bottlenecks and work on the problem spots. On fast links, many link efficiency mechanisms are not supported, and if they are, they might have negative results. On slow links and where bottlenecks are recognized, you must calculate the overhead-to-data ratios, consider all compression options, and make a choice. On some links, you can perform full link compression. On some, you can perform Layer 2 payload compression, and on others, you will probably perform header compression such as RTP or TCP header compression only. Link fragmentation and interleaving is always a good option to consider on slow links.

About the author

Scott
