High DRAM (Dynamic Random Access Memory) utilization on the RE (Routing Engine). The SNMP variable being reported is jnxOperatingBuffer.9.1.0.0. The OID value for this variable is 1.3.6.1.4.1.2636.3.1.13.1.11.9.1.0.0.
The problem related to this syslog message is described in the following sections:
The RMON_EVENT_re_high_dram_utilization message is logged each time Routing Engine memory usage rises above the configured 80% utilization threshold.
When a RMON_EVENT_re_high_dram_utilization event occurs, a message similar to the following is reported:
snmpd: SNMPD_RMON_EVENTLOG: ais_re0_high_dram_utilization: Event 50001 triggered by Alarm 50001, rising threshold (80) crossed, (variable: jnxOperatingBuffer.9.1.0.0, value: 83)
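Alarms and events of this kind are defined under the [edit snmp rmon] hierarchy. The following configuration sketch shows how such an alarm might be set up; the interval, falling threshold, sample type, and operating-buffer instance index are illustrative assumptions, not values taken from this message:

    snmp {
        rmon {
            alarm 50001 {
                interval 60;
                variable jnxOperatingBuffer.9.1.0.0;
                sample-type absolute-value;
                rising-threshold 80;
                falling-threshold 70;
                rising-event-index 50001;
            }
            event 50001 {
                type log-and-trap;
            }
        }
    }

With this configuration, snmpd samples the variable every 60 seconds and logs the event shown above when the sampled value crosses 80 in the rising direction.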
Increased memory usage on a Routing Engine can be due to several factors:
- The routing table has an increasing number of entries, particularly from a BGP session that has restarted or been added.
- There are a large number of routing instances in use.
- Several traceoptions have been set up in the configuration to do logging.
- A task is experiencing a memory leak.
- The ‘/mfs’ directory is held in memory, and files are being created to fill this area.
Examine the following output to help determine the cause of this message:
show log messages
show system processes extensive
show task memory
show task memory detail
show route summary
show krt queue
show krt state
show system storage
Look for any related events that occurred at or just before the RMON_EVENT_re_high_dram_utilization message in the log messages output.
The other commands can be run several times, a few minutes apart, to see whether any process, queue, in-memory directory, or the routing table is steadily increasing in size.
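For example, assuming the platform's CLI supports the ‘refresh’ pipe, a command can be re-run automatically at a fixed interval rather than by hand:

    show route summary | refresh 300

This redisplays the route summary every 300 seconds; comparing successive samples shows whether the number of route entries keeps growing.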
Perform these steps:
- If the RMON_EVENT_re_high_dram_utilization message was due to a routing process restart or a BGP session restart, then this message can be a normal part of that processing. The reported memory used by the Routing Engine may remain high afterwards because the kernel is ‘lazy’ about reclaiming freed memory.
- If the ‘show krt’ outputs report a route stuck in the krt queue, it is possible to get the queue unstuck by committing any configuration change under the chassis hierarchy. This will not affect transit traffic, and it can be reverted afterwards to the previous configuration via rollback or by using ‘commit confirmed’.
- If possible, consider increasing the amount of memory available in the Routing Engine.
- If the RMON_EVENT_re_high_dram_utilization messages continue, or a task or in-memory directory is showing an increase in memory usage over time, open a case with your technical support representative to investigate the issue further.
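If a route stuck in the krt queue is suspected, the harmless commit under the chassis hierarchy mentioned above can be sketched as follows; ‘craft-lockout’ is only an arbitrary example statement, and any innocuous chassis-level change works equally well:

    user@router> configure
    user@router# set chassis craft-lockout
    user@router# commit
    user@router# rollback 1
    user@router# commit
    user@router# exit

When opening a case, your support representative will usually want the outputs listed earlier; on most platforms the full set can be captured with ‘request support information | save /var/tmp/rsi.txt’.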