# Performance Monitoring Events for Intel(R) Xeon(R) processor E5 family Based on the Sandy Bridge-EP Microarchitecture - V20
# 9/16/2016 11:35:10 AM
# Copyright (c) 2007 - 2016 Intel Corporation. All rights reserved.
Unit EventCode UMask EventName Description Counter MSRValue Filter Internal
CBO 0x0 0x0 UNC_C_CLOCKTICKS tbd 0,1,2,3 0 null 0
CBO 0x1f 0x0 UNC_C_COUNTER0_OCCUPANCY Since occupancy counts can only be captured in the Cbo's counter 0, this event allows a user to capture occupancy-related information by filtering the Cbo occupancy count captured in counter 0. The filtering available is found in the control register - threshold, invert and edge detect. For example, setting the threshold to 1 effectively monitors how many cycles the monitored queue has at least one entry. 1,2,3 0 null 0
CBO 0x21 0x0 UNC_C_ISMQ_DRD_MISS_OCC tbd 0,1 0 null 0
CBO 0x34 0x3 UNC_C_LLC_LOOKUP.DATA_READ Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoFilter[22:18] bits correspond to [FMESI] state. 0,1 0 CBoFilter[22:18] 0
CBO 0x34 0x41 UNC_C_LLC_LOOKUP.NID Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoFilter[22:18] bits correspond to [FMESI] state. 0,1 0 CBoFilter[22:18], CBoFilter[17:10] 0
CBO 0x34 0x9 UNC_C_LLC_LOOKUP.REMOTE_SNOOP Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoFilter[22:18] bits correspond to [FMESI] state. 0,1 0 CBoFilter[22:18] 0
CBO 0x34 0x5 UNC_C_LLC_LOOKUP.WRITE Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoFilter[22:18] bits correspond to [FMESI] state. 0,1 0 CBoFilter[22:18] 0
CBO 0x37 0x2 UNC_C_LLC_VICTIMS.E_STATE Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. 0,1 0 null 0
CBO 0x37 0x8 UNC_C_LLC_VICTIMS.MISS Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. 0,1 0 null 0
CBO 0x37 0x1 UNC_C_LLC_VICTIMS.M_STATE Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. 0,1 0 null 0
CBO 0x37 0x40 UNC_C_LLC_VICTIMS.NID Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. 0,1 0 CBoFilter[17:10] 0
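# A minimal sketch of programming the Cbo events above, assuming the Cbo PMON
# event-select layout from Intel's Sandy Bridge-EP uncore guide (event [7:0],
# umask [15:8], edge 18, enable 22, invert 23, threshold [28:24]); the helper
# only packs register values, it does not touch MSRs. Verify the bit
# positions against your documentation before use.

def cbo_evtsel(event, umask, edge=0, invert=0, threshold=0):
    """Pack a Cbo PMON event-select value with the enable bit set."""
    return (event | (umask << 8) | (edge << 18) | (1 << 22)
            | (invert << 23) | (threshold << 24))

# Per the LLC_LOOKUP description, filter mask bit 0 must be set and at least
# one FMESI state chosen in CBoFilter[22:18], or the event counts nothing.
F, M, E, S, I = (1 << b for b in range(22, 17, -1))
llc_data_read = cbo_evtsel(0x34, 0x3)
cbo_filter = F | M | E | S          # match any valid state, exclude I

# UNC_C_COUNTER0_OCCUPANCY with threshold=1, as its description suggests,
# counts cycles in which the queue tracked by counter 0 has an entry.
occ_nonempty = cbo_evtsel(0x1F, 0x0, threshold=1)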
CBO 0x37 0x4 UNC_C_LLC_VICTIMS.S_STATE Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. 0,1 0 null 0
CBO 0x39 0x8 UNC_C_MISC.RFO_HIT_S Miscellaneous events in the Cbo. 0,1 0 null 0
CBO 0x39 0x1 UNC_C_MISC.RSPI_WAS_FSE Miscellaneous events in the Cbo. 0,1 0 null 0
CBO 0x39 0x4 UNC_C_MISC.STARTED Miscellaneous events in the Cbo. 0,1 0 null 0
CBO 0x39 0x2 UNC_C_MISC.WC_ALIASING Miscellaneous events in the Cbo. 0,1 0 null 0
CBO 0x1b 0x4 UNC_C_RING_AD_USED.DOWN_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD because they are on opposite sides of the ring. 2,3 0 null 0
CBO 0x1b 0x8 UNC_C_RING_AD_USED.DOWN_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD because they are on opposite sides of the ring. 2,3 0 null 0
CBO 0x1b 0x1 UNC_C_RING_AD_USED.UP_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD because they are on opposite sides of the ring. 2,3 0 null 0
CBO 0x1b 0x2 UNC_C_RING_AD_USED.UP_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD because they are on opposite sides of the ring. 2,3 0 null 0
CBO 0x1c 0x4 UNC_C_RING_AK_USED.DOWN_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD because they are on opposite sides of the ring. 2,3 0 null 0
CBO 0x1c 0x8 UNC_C_RING_AK_USED.DOWN_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD because they are on opposite sides of the ring. 2,3 0 null 0
CBO 0x1c 0x1 UNC_C_RING_AK_USED.UP_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD because they are on opposite sides of the ring. 2,3 0 null 0
CBO 0x1c 0x2 UNC_C_RING_AK_USED.UP_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD because they are on opposite sides of the ring. 2,3 0 null 0
CBO 0x1d 0x4 UNC_C_RING_BL_USED.DOWN_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD because they are on opposite sides of the ring. 2,3 0 null 0
CBO 0x1d 0x8 UNC_C_RING_BL_USED.DOWN_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD because they are on opposite sides of the ring. 2,3 0 null 0
CBO 0x1d 0x1 UNC_C_RING_BL_USED.UP_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD because they are on opposite sides of the ring. 2,3 0 null 0
CBO 0x1d 0x2 UNC_C_RING_BL_USED.UP_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD because they are on opposite sides of the ring. 2,3 0 null 0
CBO 0x5 0x2 UNC_C_RING_BOUNCES.AK_CORE tbd 0,1 0 null 0
CBO 0x5 0x4 UNC_C_RING_BOUNCES.BL_CORE tbd 0,1 0 null 0
CBO 0x5 0x8 UNC_C_RING_BOUNCES.IV_CORE tbd 0,1 0 null 0
CBO 0x1e 0xf UNC_C_RING_IV_USED.ANY Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring in JKT. Therefore, if one wants to monitor the 'Even' ring, they should select both UP_EVEN and DN_EVEN. To monitor the 'Odd' ring, they should select both UP_ODD and DN_ODD. 2,3 0 null 0
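# The ring-utilization arithmetic these events support, as a sketch: each
# RING_*_USED subevent counts busy cycles at this stop for one direction and
# polarity, so dividing by UNC_C_CLOCKTICKS over the same interval gives a
# busy fraction. Combining subevents into a single figure is our own
# convention, not something the table defines.

def ring_busy_fraction(used_cycles, cbo_clockticks):
    """Fraction of uncore cycles this ring stop saw traffic."""
    return used_cycles / cbo_clockticks

# Per the IV description, JKT has a single IV ring: monitor the 'Even' ring
# by summing UP_EVEN and DN_EVEN, and the 'Odd' ring with UP_ODD and DN_ODD.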
CBO 0x6 0x1 UNC_C_RING_SINK_STARVED.AD_CACHE tbd 0,1 0 null 0
CBO 0x6 0x2 UNC_C_RING_SINK_STARVED.AK_CORE tbd 0,1 0 null 0
CBO 0x6 0x4 UNC_C_RING_SINK_STARVED.BL_CORE tbd 0,1 0 null 0
CBO 0x6 0x8 UNC_C_RING_SINK_STARVED.IV_CORE tbd 0,1 0 null 0
CBO 0x7 0x0 UNC_C_RING_SRC_THRTL tbd 0,1 0 null 0
CBO 0x12 0x2 UNC_C_RxR_EXT_STARVED.IPQ Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues. 0,1 0 null 0
CBO 0x12 0x1 UNC_C_RxR_EXT_STARVED.IRQ Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues. 0,1 0 null 0
CBO 0x12 0x4 UNC_C_RxR_EXT_STARVED.ISMQ Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues. 0,1 0 null 0
CBO 0x12 0x8 UNC_C_RxR_EXT_STARVED.ISMQ_BIDS Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues. 0,1 0 null 0
CBO 0x13 0x4 UNC_C_RxR_INSERTS.IPQ Counts the number of allocations per cycle into the specified Ingress queue. 0,1 0 null 0
CBO 0x13 0x1 UNC_C_RxR_INSERTS.IRQ Counts the number of allocations per cycle into the specified Ingress queue. 0,1 0 null 0
CBO 0x13 0x2 UNC_C_RxR_INSERTS.IRQ_REJECTED Counts the number of allocations per cycle into the specified Ingress queue. 0,1 0 null 0
CBO 0x13 0x10 UNC_C_RxR_INSERTS.VFIFO Counts the number of allocations per cycle into the specified Ingress queue. 0,1 0 null 0
CBO 0x14 0x4 UNC_C_RxR_INT_STARVED.IPQ Counts cycles in internal starvation. This occurs when one (or more) of the entries in the ingress queue are being starved out by other entries in that queue. 0,1 0 null 0
CBO 0x14 0x1 UNC_C_RxR_INT_STARVED.IRQ Counts cycles in internal starvation. This occurs when one (or more) of the entries in the ingress queue are being starved out by other entries in that queue. 0,1 0 null 0
CBO 0x14 0x8 UNC_C_RxR_INT_STARVED.ISMQ Counts cycles in internal starvation. This occurs when one (or more) of the entries in the ingress queue are being starved out by other entries in that queue. 0,1 0 null 0
CBO 0x31 0x4 UNC_C_RxR_IPQ_RETRY.ADDR_CONFLICT Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases. 0,1 0 null 0
CBO 0x31 0x1 UNC_C_RxR_IPQ_RETRY.ANY Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases. 0,1 0 null 0
CBO 0x31 0x2 UNC_C_RxR_IPQ_RETRY.FULL Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases. 0,1 0 null 0
CBO 0x31 0x10 UNC_C_RxR_IPQ_RETRY.QPI_CREDITS Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases. 0,1 0 null 0
CBO 0x32 0x4 UNC_C_RxR_IRQ_RETRY.ADDR_CONFLICT tbd 0,1 0 null 0
CBO 0x32 0x1 UNC_C_RxR_IRQ_RETRY.ANY tbd 0,1 0 null 0
CBO 0x32 0x2 UNC_C_RxR_IRQ_RETRY.FULL tbd 0,1 0 null 0
CBO 0x32 0x10 UNC_C_RxR_IRQ_RETRY.QPI_CREDITS tbd 0,1 0 null 0
CBO 0x32 0x8 UNC_C_RxR_IRQ_RETRY.RTID tbd 0,1 0 null 0
CBO 0x33 0x1 UNC_C_RxR_ISMQ_RETRY.ANY Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. 0,1 0 null 0
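# A sketch of the queue-depth/latency derivation the ingress events enable.
# UNC_C_RxR_OCCUPANCY accumulates the entry count every cycle (and is
# restricted to counter 0, per its Counter column below), while
# UNC_C_RxR_INSERTS counts allocations; the Little's-law division is our own
# derivation, not something the table states.

def irq_stats(occupancy_sum, inserts, clockticks):
    avg_depth = occupancy_sum / clockticks      # average entries per cycle
    avg_residency = occupancy_sum / inserts     # average cycles per entry
    return avg_depth, avg_residency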
CBO 0x33 0x2 UNC_C_RxR_ISMQ_RETRY.FULL Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. 0,1 0 null 0
CBO 0x33 0x20 UNC_C_RxR_ISMQ_RETRY.IIO_CREDITS Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. 0,1 0 null 0
CBO 0x33 0x10 UNC_C_RxR_ISMQ_RETRY.QPI_CREDITS Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. 0,1 0 null 0
CBO 0x33 0x8 UNC_C_RxR_ISMQ_RETRY.RTID Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. 0,1 0 null 0
CBO 0x11 0x4 UNC_C_RxR_OCCUPANCY.IPQ Counts the number of entries in the specified Ingress queue in each cycle. 0 0 null 0
CBO 0x11 0x1 UNC_C_RxR_OCCUPANCY.IRQ Counts the number of entries in the specified Ingress queue in each cycle. 0 0 null 0
CBO 0x11 0x2 UNC_C_RxR_OCCUPANCY.IRQ_REJECTED Counts the number of entries in the specified Ingress queue in each cycle. 0 0 null 0
CBO 0x11 0x10 UNC_C_RxR_OCCUPANCY.VFIFO Counts the number of entries in the specified Ingress queue in each cycle. 0 0 null 0
CBO 0x35 0x4 UNC_C_TOR_INSERTS.EVICTION Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0,1 0 null 0
CBO 0x35 0xa UNC_C_TOR_INSERTS.MISS_ALL Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0,1 0 null 0
CBO 0x35 0x3 UNC_C_TOR_INSERTS.MISS_OPCODE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0,1 0 CBoFilter[31:23] 0
CBO 0x35 0x48 UNC_C_TOR_INSERTS.NID_ALL Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0,1 0 CBoFilter[17:10] 0
CBO 0x35 0x44 UNC_C_TOR_INSERTS.NID_EVICTION Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0,1 0 CBoFilter[17:10] 0
CBO 0x35 0x4a UNC_C_TOR_INSERTS.NID_MISS_ALL Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0,1 0 CBoFilter[17:10] 0
CBO 0x35 0x43 UNC_C_TOR_INSERTS.NID_MISS_OPCODE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0,1 0 CBoFilter[31:23], CBoFilter[17:10] 0
CBO 0x35 0x41 UNC_C_TOR_INSERTS.NID_OPCODE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0,1 0 CBoFilter[31:23], CBoFilter[17:10] 0
CBO 0x35 0x50 UNC_C_TOR_INSERTS.NID_WB Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0,1 0 CBoFilter[17:10] 0
CBO 0x35 0x1 UNC_C_TOR_INSERTS.OPCODE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0,1 0 CBoFilter[31:23] 0
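# The DRD-local-miss example from the descriptions above, as a sketch:
# program UNC_C_TOR_INSERTS.MISS_OPCODE (event 0x35, umask 0x3) and place the
# DRD opcode (0x182) in the CBoFilter opcode field, bits [31:23] per the
# Filter column. The enable-bit position (22) is assumed from the uncore
# programming guide.

DRD = 0x182
tor_miss_drd_evtsel = 0x35 | (0x3 << 8) | (1 << 22)
tor_miss_drd_filter = DRD << 23     # Cn_MSR_PMON_BOX_FILTER.opc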
CBO 0x35 0x10 UNC_C_TOR_INSERTS.WB Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0,1 0 null 0
CBO 0x36 0x8 UNC_C_TOR_OCCUPANCY.ALL For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0 0 null 0
CBO 0x36 0x4 UNC_C_TOR_OCCUPANCY.EVICTION For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0 0 null 0
CBO 0x36 0xa UNC_C_TOR_OCCUPANCY.MISS_ALL For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0 0 null 0
CBO 0x36 0x3 UNC_C_TOR_OCCUPANCY.MISS_OPCODE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0 0 CBoFilter[31:23] 0
CBO 0x36 0x48 UNC_C_TOR_OCCUPANCY.NID_ALL For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0 0 CBoFilter[17:10] 0
CBO 0x36 0x44 UNC_C_TOR_OCCUPANCY.NID_EVICTION For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0 0 CBoFilter[17:10] 0
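# A sketch of pairing the two TOR events: TOR_OCCUPANCY integrates matching
# entries per cycle and TOR_INSERTS counts them, so with the same subevent
# and opcode filter on both, occupancy/inserts gives the average lifetime of
# a matching entry in uncore clocks (e.g. average LLC miss latency with
# MISS_OPCODE and opc=0x182). This derivation is ours; the table only
# defines the raw counts.

def avg_tor_latency(tor_occupancy_sum, tor_inserts):
    return tor_occupancy_sum / tor_inserts      # uncore clocks per entry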
CBO 0x36 0x4a UNC_C_TOR_OCCUPANCY.NID_MISS_ALL For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0 0 CBoFilter[17:10] 0
CBO 0x36 0x43 UNC_C_TOR_OCCUPANCY.NID_MISS_OPCODE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0 0 CBoFilter[31:23], CBoFilter[17:10] 0
CBO 0x36 0x41 UNC_C_TOR_OCCUPANCY.NID_OPCODE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0 0 CBoFilter[31:23], CBoFilter[17:10] 0
CBO 0x36 0x1 UNC_C_TOR_OCCUPANCY.OPCODE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). 0 0 CBoFilter[31:23] 0
CBO 0x4 0x0 UNC_C_TxR_ADS_USED tbd 0,1 0 null 0
CBO 0x2 0x1 UNC_C_TxR_INSERTS.AD_CACHE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. 0,1 0 null 0
CBO 0x2 0x10 UNC_C_TxR_INSERTS.AD_CORE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. 0,1 0 null 0
CBO 0x2 0x2 UNC_C_TxR_INSERTS.AK_CACHE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. 0,1 0 null 0
CBO 0x2 0x20 UNC_C_TxR_INSERTS.AK_CORE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. 0,1 0 null 0
CBO 0x2 0x4 UNC_C_TxR_INSERTS.BL_CACHE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. 0,1 0 null 0
CBO 0x2 0x40 UNC_C_TxR_INSERTS.BL_CORE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. 0,1 0 null 0
CBO 0x2 0x8 UNC_C_TxR_INSERTS.IV_CACHE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. 0,1 0 null 0
CBO 0x3 0x2 UNC_C_TxR_STARVED.AK Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time. 0,1 0 null 0
CBO 0x3 0x4 UNC_C_TxR_STARVED.BL Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time. 0,1 0 null 0
PCU 0x0 0x0 UNC_P_CLOCKTICKS The PCU runs off a fixed 800 MHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate, making it a good measure of actual wall time. 0,1,2,3 0 null 0
PCU 0x3 0x0 UNC_P_CORE0_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core. 0,1,2,3 0 null 1
PCU 0x4 0x0 UNC_P_CORE1_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core. 0,1,2,3 0 null 1
PCU 0x5 0x0 UNC_P_CORE2_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core. 0,1,2,3 0 null 1
PCU 0x6 0x0 UNC_P_CORE3_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core. 0,1,2,3 0 null 1
PCU 0x7 0x0 UNC_P_CORE4_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core. 0,1,2,3 0 null 1
PCU 0x8 0x0 UNC_P_CORE5_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core. 0,1,2,3 0 null 1
PCU 0x9 0x0 UNC_P_CORE6_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core. 0,1,2,3 0 null 1
PCU 0xa 0x0 UNC_P_CORE7_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core. 0,1,2,3 0 null 1
PCU 0x1e 0x0 UNC_P_DEMOTIONS_CORE0 Counts the number of times a configurable core had a C-state demotion. 0,1,2,3 0 PCUFilter[7:0] 0
PCU 0x1f 0x0 UNC_P_DEMOTIONS_CORE1 Counts the number of times a configurable core had a C-state demotion. 0,1,2,3 0 PCUFilter[7:0] 0
PCU 0x20 0x0 UNC_P_DEMOTIONS_CORE2 Counts the number of times a configurable core had a C-state demotion. 0,1,2,3 0 PCUFilter[7:0] 0
PCU 0x21 0x0 UNC_P_DEMOTIONS_CORE3 Counts the number of times a configurable core had a C-state demotion. 0,1,2,3 0 PCUFilter[7:0] 0
PCU 0x22 0x0 UNC_P_DEMOTIONS_CORE4 Counts the number of times a configurable core had a C-state demotion. 0,1,2,3 0 PCUFilter[7:0] 0
PCU 0x23 0x0 UNC_P_DEMOTIONS_CORE5 Counts the number of times a configurable core had a C-state demotion. 0,1,2,3 0 PCUFilter[7:0] 0
PCU 0x24 0x0 UNC_P_DEMOTIONS_CORE6 Counts the number of times a configurable core had a C-state demotion. 0,1,2,3 0 PCUFilter[7:0] 0
PCU 0x25 0x0 UNC_P_DEMOTIONS_CORE7 Counts the number of times a configurable core had a C-state demotion. 0,1,2,3 0 PCUFilter[7:0] 0
PCU 0xb 0x0 UNC_P_FREQ_BAND0_CYCLES Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency. 0,1,2,3 0 PCUFilter[7:0] 0
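# Converting UNC_P_CLOCKTICKS to wall time, per its description of a fixed
# 800 MHz pclk; a trivial sketch.

PCLK_HZ = 800_000_000
def pcu_seconds(pcu_clockticks):
    return pcu_clockticks / PCLK_HZ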
PCU 0xc 0x0 UNC_P_FREQ_BAND1_CYCLES Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency. 0,1,2,3 0 PCUFilter[15:8] 0
PCU 0xd 0x0 UNC_P_FREQ_BAND2_CYCLES Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency. 0,1,2,3 0 PCUFilter[23:16] 0
PCU 0xe 0x0 UNC_P_FREQ_BAND3_CYCLES Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency. 0,1,2,3 0 PCUFilter[31:24] 0
PCU 0x7 0x0 UNC_P_FREQ_MAX_CURRENT_CYCLES Counts the number of cycles when current is the upper limit on frequency. 0,1,2,3 0 null 0
PCU 0x4 0x0 UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES Counts the number of cycles when thermal conditions are the upper limit on frequency. This is related to the THERMAL_THROTTLE CYCLES_ABOVE_TEMP event, which always counts cycles when we are above the thermal temperature. This event (STRONGEST_UPPER_LIMIT) is sampled at the output of the algorithm that determines the actual frequency, while THERMAL_THROTTLE looks at the input. 0,1,2,3 0 null 0
PCU 0x6 0x0 UNC_P_FREQ_MAX_OS_CYCLES Counts the number of cycles when the OS is the upper limit on frequency. 0,1,2,3 0 null 0
PCU 0x5 0x0 UNC_P_FREQ_MAX_POWER_CYCLES Counts the number of cycles when power is the upper limit on frequency. 0,1,2,3 0 null 0
PCU 0x1 0x0 UNC_P_FREQ_MIN_IO_P_CYCLES Counts the number of cycles when the IO P Limit is preventing us from dropping the frequency lower. This algorithm monitors the needs of the IO subsystem on both local and remote sockets and will maintain a frequency high enough to maintain good IO bandwidth. This is necessary when all the IA cores on a socket are idle but a user still would like to maintain high IO bandwidth. 0,1,2,3 0 null 1
PCU 0x2 0x0 UNC_P_FREQ_MIN_PERF_P_CYCLES Counts the number of cycles when Perf P Limit is preventing us from dropping the frequency lower. Perf P Limit is an algorithm that takes input from remote sockets when determining if a socket should drop its frequency down. This is largely to minimize increases in snoop and remote read latencies. 0,1,2,3 0 null 1
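# A sketch of the four frequency bands described above. Each band's
# threshold occupies one byte of PCUFilter ([7:0], [15:8], [23:16], [31:24]
# for bands 0-3, per the Filter column); we assume the thresholds are
# expressed in 100 MHz units, which should be verified against the uncore
# programming guide for this part.

def pcu_band_filter(band_ghz):
    """Pack up to four band thresholds (in GHz) into a PCUFilter value."""
    value = 0
    for i, ghz in enumerate(band_ghz[:4]):
        value |= (int(round(ghz * 10)) & 0xFF) << (8 * i)  # 100 MHz steps, assumed
    return value

pcu_filter = pcu_band_filter([1.2, 2.0, 2.7, 3.5])  # example thresholds
# Residency at or above band n: UNC_P_FREQ_BANDn_CYCLES / UNC_P_CLOCKTICKS.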
PCU 0x0 0x0 UNC_P_FREQ_TRANS_CYCLES Counts the number of cycles when the system is changing frequency. This cannot be filtered by thread ID. One can also use it with the occupancy counter that monitors the number of threads in C0 to estimate the performance impact that frequency transitions had on the system. 0,1,2,3 0 null 1
PCU 0x2f 0x0 UNC_P_MEMORY_PHASE_SHEDDING_CYCLES Counts the number of cycles that the PCU has triggered memory phase shedding. This is a mode that can be run in the iMC physicals that saves power at the expense of additional latency. 0,1,2,3 0 null 0
PCU 0x80 0x40 UNC_P_POWER_STATE_OCCUPANCY.CORES_C0 This is an occupancy event that tracks the number of cores that are in C0. It can be used by itself to get the average number of cores in C0, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details. 0,1,2,3 0 null 0
PCU 0x80 0x80 UNC_P_POWER_STATE_OCCUPANCY.CORES_C3 This is an occupancy event that tracks the number of cores that are in C3. It can be used by itself to get the average number of cores in C3, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details. 0,1,2,3 0 null 0
PCU 0x80 0xc0 UNC_P_POWER_STATE_OCCUPANCY.CORES_C6 This is an occupancy event that tracks the number of cores that are in C6. It can be used by itself to get the average number of cores in C6, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details. 0,1,2,3 0 null 0
PCU 0xa 0x0 UNC_P_PROCHOT_EXTERNAL_CYCLES Counts the number of cycles that we are in external PROCHOT mode. This mode is triggered when a sensor off the die determines that something off-die (like DRAM) is too hot and must throttle to avoid damaging the chip. 0,1,2,3 0 null 0
PCU 0x9 0x0 UNC_P_PROCHOT_INTERNAL_CYCLES Counts the number of cycles that we are in internal PROCHOT mode. This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip. 0,1,2,3 0 null 0
PCU 0xb 0x0 UNC_P_TOTAL_TRANSITION_CYCLES Number of cycles spent performing core C state transitions across all cores. 0,1,2,3 0 null 1
PCU 0x3 0x0 UNC_P_VOLT_TRANS_CYCLES_CHANGE Counts the number of cycles when the system is changing voltage. There is no filtering supported with this event. One can use it as a simple event, or use it in conjunction with the occupancy events to monitor the number of cores or threads that were impacted by the transition. This event is calculated by OR'ing together the increasing and decreasing events. 0,1,2,3 0 null 0
PCU 0x2 0x0 UNC_P_VOLT_TRANS_CYCLES_DECREASE Counts the number of cycles when the system is decreasing voltage. There is no filtering supported with this event. One can use it as a simple event, or use it in conjunction with the occupancy events to monitor the number of cores or threads that were impacted by the transition. 0,1,2,3 0 null 0
PCU 0x1 0x0 UNC_P_VOLT_TRANS_CYCLES_INCREASE Counts the number of cycles when the system is increasing voltage. There is no filtering supported with this event. One can use it as a simple event, or use it in conjunction with the occupancy events to monitor the number of cores or threads that were impacted by the transition. 0,1,2,3 0 null 0
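# A sketch of the thresholding use the POWER_STATE_OCCUPANCY description
# suggests. Unfiltered, occupancy/cycles gives the average core count in the
# state; with a threshold of N the counter instead counts cycles with at
# least N cores there, which builds a histogram one bucket at a time. The
# event-select bit positions (enable 22, invert 23, threshold [28:24]) are
# assumed from the uncore programming guide.

def pcu_evtsel(event, umask, invert=0, threshold=0):
    return (event | (umask << 8) | (1 << 22)
            | (invert << 23) | (threshold << 24))

avg_cores_c0 = lambda occ_sum, ticks: occ_sum / ticks
cycles_4plus_cores_c0 = pcu_evtsel(0x80, 0x40, threshold=4)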
PCU 0x32 0x0 UNC_P_VR_HOT_CYCLES tbd 0,1,2,3 0 null 0
UBOX 0x42 0x8 UNC_U_EVENT_MSG.DOORBELL_RCVD Virtual Logical Wire (legacy) messages received from the Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. 0,1 0 null 0
UBOX 0x42 0x10 UNC_U_EVENT_MSG.INT_PRIO Virtual Logical Wire (legacy) messages received from the Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. 0,1 0 null 0
UBOX 0x42 0x4 UNC_U_EVENT_MSG.IPI_RCVD Virtual Logical Wire (legacy) messages received from the Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. 0,1 0 null 0
UBOX 0x42 0x2 UNC_U_EVENT_MSG.MSI_RCVD Virtual Logical Wire (legacy) messages received from the Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. 0,1 0 null 0
UBOX 0x42 0x1 UNC_U_EVENT_MSG.VLW_RCVD Virtual Logical Wire (legacy) messages received from the Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. 0,1 0 null 0
UBOX 0x41 0x2 UNC_U_FILTER_MATCH.DISABLE Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. 0,1 0 null 0
UBOX 0x41 0x1 UNC_U_FILTER_MATCH.ENABLE Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. 0,1 0 UBoxFilter[3:0] 0
UBOX 0x41 0x8 UNC_U_FILTER_MATCH.U2C_DISABLE Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. 0,1 0 null 0
UBOX 0x41 0x4 UNC_U_FILTER_MATCH.U2C_ENABLE Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. 0,1 0 UBoxFilter[3:0] 0
UBOX 0x44 0x0 UNC_U_LOCK_CYCLES Number of times an IDI Lock/SplitLock sequence was started. 0,1 0 null 0
UBOX 0x47 0x1 UNC_U_MSG_CHNL_SIZE_COUNT.4B Number of transactions on the message channel filtered by request size. This includes both reads and writes. 0,1 0 null 1
UBOX 0x47 0x2 UNC_U_MSG_CHNL_SIZE_COUNT.8B Number of transactions on the message channel filtered by request size. This includes both reads and writes. 0,1 0 null 1
UBOX 0x45 0x2 UNC_U_PHOLD_CYCLES.ACK_TO_DEASSERT PHOLD cycles. Filtered by the source CoreID. 0,1 0 null 1
UBOX 0x45 0x1 UNC_U_PHOLD_CYCLES.ASSERT_TO_ACK PHOLD cycles. Filtered by the source CoreID. 0,1 0 null 1
UBOX 0x46 0x1 UNC_U_RACU_REQUESTS.COUNT tbd 0,1 0 null 1
UBOX 0x43 0x10 UNC_U_U2C_EVENTS.CMC Events coming from the Uncore can be sent to one or all cores. 0,1 0 null 0
UBOX 0x43 0x4 UNC_U_U2C_EVENTS.LIVELOCK Events coming from the Uncore can be sent to one or all cores. 0,1 0 null 0
UBOX 0x43 0x8 UNC_U_U2C_EVENTS.LTERROR Events coming from the Uncore can be sent to one or all cores. 0,1 0 null 0
UBOX 0x43 0x1 UNC_U_U2C_EVENTS.MONITOR_T0 Events coming from the Uncore can be sent to one or all cores. 0,1 0 null 0
UBOX 0x43 0x2 UNC_U_U2C_EVENTS.MONITOR_T1 Events coming from the Uncore can be sent to one or all cores. 0,1 0 null 0
UBOX 0x43 0x80 UNC_U_U2C_EVENTS.OTHER Events coming from the Uncore can be sent to one or all cores. 0,1 0 null 0
UBOX 0x43 0x40 UNC_U_U2C_EVENTS.TRAP Events coming from the Uncore can be sent to one or all cores. 0,1 0 null 0
UBOX 0x43 0x20 UNC_U_U2C_EVENTS.UMC Events coming from the Uncore can be sent to one or all cores. 0,1 0 null 0
QPI LL 0x14 0x0 UNC_Q_CLOCKTICKS Counts the number of clocks in the QPI LL. This clock runs at 1/8th the 'GT/s' speed of the QPI link. For example, an 8 GT/s link will have a qfclk of 1 GHz. JKT does not support dynamic link speeds, so this frequency is fixed. 0,1,2,3 0 null 0
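# The qfclk arithmetic from the UNC_Q_CLOCKTICKS description: the link layer
# clocks at 1/8th of the link's GT/s rate, so an 8 GT/s link has a 1 GHz
# qfclk, and (since JKT's link speed is fixed) the count converts directly
# to wall time. A trivial sketch.

def qfclk_hz(link_gts=8.0):
    return link_gts * 1e9 / 8

def qpi_seconds(qpi_clockticks, link_gts=8.0):
    return qpi_clockticks / qfclk_hz(link_gts)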
QPI LL 0x38 0x0 UNC_Q_CTO_COUNT Counts the number of CTO (cluster trigger outs) events that were asserted across the two slots. If both slots trigger in a given cycle, the event will increment by 2. You can use edge detect to count the number of cases when both events triggered. 0,1,2,3 0 null 1
QPI LL 0x13 0x2 UNC_Q_DIRECT2CORE.FAILURE_CREDITS Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos. 0,1,2,3 0 null 0
QPI LL 0x13 0x8 UNC_Q_DIRECT2CORE.FAILURE_CREDITS_RBT Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos. 0,1,2,3 0 null 0
QPI LL 0x13 0x4 UNC_Q_DIRECT2CORE.FAILURE_RBT Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos. 0,1,2,3 0 null 0
QPI LL 0x13 0x1 UNC_Q_DIRECT2CORE.SUCCESS Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos. 0,1,2,3 0 null 0
QPI LL 0x12 0x0 UNC_Q_L1_POWER_CYCLES Number of QPI qfclk cycles spent in L1 power mode. L1 is a mode that totally shuts down a QPI link. Use edge detect to count the number of instances when the QPI link entered L1. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. Because L1 totally shuts down the link, it takes a good amount of time to exit this mode. 0,1,2,3 0 null 0
QPI LL 0x10 0x0 UNC_Q_RxL0P_POWER_CYCLES Number of QPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the QPI lanes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA optimized workloads that largely only utilize QPI for snoops and their responses. Use edge detect to count the number of instances when the QPI link entered L0p. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. 0,1,2,3 0 null 0
QPI LL 0xf 0x0 UNC_Q_RxL0_POWER_CYCLES Number of QPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event. 0,1,2,3 0 null 0
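# Link power-state residency, sketched from the *_POWER_CYCLES events above:
# each counts qfclk cycles in L0, L0p, or L1, so dividing by
# UNC_Q_CLOCKTICKS over the same interval gives the fraction of time spent
# in that state (tracked per link and per direction, as the descriptions
# note).

def power_state_residency(state_cycles, qpi_clockticks):
    return state_cycles / qpi_clockticks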
QPI LL 0x9 0x0 UNC_Q_RxL_BYPASSED Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency. 0,1,2,3 0 null 0
QPI LL 0x3 0x1 UNC_Q_RxL_CRC_ERRORS.LINK_INIT Number of CRC errors detected in the QPI Agent. Each QPI flit incorporates 8 bits of CRC for error detection. This counts the number of flits where the CRC was able to detect an error. After an error has been detected, the QPI agent will send a request to the transmitting socket to resend the flit (as well as any flits that came after it). 0,1,2,3 0 null 0
QPI LL 0x3 0x2 UNC_Q_RxL_CRC_ERRORS.NORMAL_OP Number of CRC errors detected in the QPI Agent. Each QPI flit incorporates 8 bits of CRC for error detection. This counts the number of flits where the CRC was able to detect an error. After an error has been detected, the QPI agent will send a request to the transmitting socket to resend the flit (as well as any flits that came after it). 0,1,2,3 0 null 0
QPI LL 0x1e 0x1 UNC_Q_RxL_CREDITS_CONSUMED_VN0.DRS Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. 0,1,2,3 0 null 1
QPI LL 0x1e 0x8 UNC_Q_RxL_CREDITS_CONSUMED_VN0.HOM Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. 0,1,2,3 0 null 1
QPI LL 0x1e 0x2 UNC_Q_RxL_CREDITS_CONSUMED_VN0.NCB Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. 0,1,2,3 0 null 1
QPI LL 0x1e 0x4 UNC_Q_RxL_CREDITS_CONSUMED_VN0.NCS Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. 0,1,2,3 0 null 1
QPI LL 0x1e 0x20 UNC_Q_RxL_CREDITS_CONSUMED_VN0.NDR Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. 0,1,2,3 0 null 1
QPI LL 0x1e 0x10 UNC_Q_RxL_CREDITS_CONSUMED_VN0.SNP Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. 0,1,2,3 0 null 1
QPI LL 0x1d 0x0 UNC_Q_RxL_CREDITS_CONSUMED_VNA Counts the number of times that an RxQ VNA credit was consumed (i.e. message uses a VNA credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. 0,1,2,3 0 null 1
QPI LL 0xa 0x0 UNC_Q_RxL_CYCLES_NE Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. 0,1,2,3 0 null 0
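# Average RxQ depth while occupied, following the hint in the
# UNC_Q_RxL_CYCLES_NE description. The 'Flit Buffer Occupancy Accumulator'
# event it refers to is not part of this excerpt; we assume it accumulates
# the buffer's entry count each cycle like the other occupancy events here.

def avg_rxq_depth_when_busy(occupancy_sum, cycles_not_empty):
    return occupancy_sum / cycles_not_empty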
QPI LL 0x1 0x2 UNC_Q_RxL_FLITS_G0.DATA Counts the number of flits received from the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p. 0,1,2,3 0 null 0
QPI LL 0x1 0x1 UNC_Q_RxL_FLITS_G0.IDLE Counts the number of flits received from the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p. 0,1,2,3 0 null 0
QPI LL 0x1 0x4 UNC_Q_RxL_FLITS_G0.NON_DATA Counts the number of flits received from the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p. 0,1,2,3 0 null 0
QPI LL 0x2 0x18 UNC_Q_RxL_FLITS_G1.DRS Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1
QPI LL 0x2 0x8 UNC_Q_RxL_FLITS_G1.DRS_DATA Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1
QPI LL 0x2 0x10 UNC_Q_RxL_FLITS_G1.DRS_NONDATA Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1
0,1,2,3 0 null 1 QPI LL 0x2 0x6 UNC_Q_RxL_FLITS_G1.HOM Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x2 0x4 UNC_Q_RxL_FLITS_G1.HOM_NONREQ Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x2 0x2 UNC_Q_RxL_FLITS_G1.HOM_REQ Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time.
0,1,2,3 0 null 1 QPI LL 0x2 0x1 UNC_Q_RxL_FLITS_G1.SNP Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x3 0xc UNC_Q_RxL_FLITS_G2.NCB Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x3 0x4 UNC_Q_RxL_FLITS_G2.NCB_DATA Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time.
0,1,2,3 0 null 1 QPI LL 0x3 0x8 UNC_Q_RxL_FLITS_G2.NCB_NONDATA Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x3 0x10 UNC_Q_RxL_FLITS_G2.NCS Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x3 0x1 UNC_Q_RxL_FLITS_G2.NDR_AD Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time.
0,1,2,3 0 null 1 QPI LL 0x3 0x2 UNC_Q_RxL_FLITS_G2.NDR_AK Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x8 0x0 UNC_Q_RxL_INSERTS Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. 0,1,2,3 0 null 0 QPI LL 0x9 0x0 UNC_Q_RxL_INSERTS_DRS Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only DRS flits. 0,1,2,3 0 null 1 QPI LL 0xc 0x0 UNC_Q_RxL_INSERTS_HOM Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only HOM flits. 0,1,2,3 0 null 1 QPI LL 0xa 0x0 UNC_Q_RxL_INSERTS_NCB Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NCB flits. 0,1,2,3 0 null 1 QPI LL 0xb 0x0 UNC_Q_RxL_INSERTS_NCS Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.
This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NCS flits. 0,1,2,3 0 null 1 QPI LL 0xe 0x0 UNC_Q_RxL_INSERTS_NDR Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NDR flits. 0,1,2,3 0 null 1 QPI LL 0xd 0x0 UNC_Q_RxL_INSERTS_SNP Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only SNP flits. 0,1,2,3 0 null 1 QPI LL 0xb 0x0 UNC_Q_RxL_OCCUPANCY Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. 0,1,2,3 0 null 0 QPI LL 0x15 0x0 UNC_Q_RxL_OCCUPANCY_DRS Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors DRS flits only. 0,1,2,3 0 null 1 QPI LL 0x18 0x0 UNC_Q_RxL_OCCUPANCY_HOM Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors HOM flits only. 0,1,2,3 0 null 1 QPI LL 0x16 0x0 UNC_Q_RxL_OCCUPANCY_NCB Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NCB flits only. 0,1,2,3 0 null 1
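# As the descriptions above suggest, the occupancy accumulator, the allocations count, and the not-empty cycle count combine by simple Little's-law arithmetic. A minimal sketch, assuming all three counts were collected over the same interval; the function and variable names are illustrative, not event fields.

# avg lifetime (cycles)     = occupancy accumulator / allocations
# avg occupancy while busy  = occupancy accumulator / not-empty cycles
def rx_flit_buffer_metrics(occupancy_acc, inserts, not_empty_cycles):
    avg_lifetime = occupancy_acc / inserts if inserts else 0.0
    avg_occupancy = occupancy_acc / not_empty_cycles if not_empty_cycles else 0.0
    return avg_lifetime, avg_occupancy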
QPI LL 0x17 0x0 UNC_Q_RxL_OCCUPANCY_NCS Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NCS flits only. 0,1,2,3 0 null 1 QPI LL 0x1a 0x0 UNC_Q_RxL_OCCUPANCY_NDR Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NDR flits only. 0,1,2,3 0 null 1 QPI LL 0x19 0x0 UNC_Q_RxL_OCCUPANCY_SNP Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors SNP flits only. 0,1,2,3 0 null 1 QPI LL 0x35 0x1 UNC_Q_RxL_STALLS.BGF_DRS Number of stalls trying to send to R3QPI. 0,1,2,3 0 null 0 QPI LL 0x35 0x8 UNC_Q_RxL_STALLS.BGF_HOM Number of stalls trying to send to R3QPI. 0,1,2,3 0 null 0 QPI LL 0x35 0x2 UNC_Q_RxL_STALLS.BGF_NCB Number of stalls trying to send to R3QPI. 0,1,2,3 0 null 0 QPI LL 0x35 0x4 UNC_Q_RxL_STALLS.BGF_NCS Number of stalls trying to send to R3QPI. 0,1,2,3 0 null 0 QPI LL 0x35 0x20 UNC_Q_RxL_STALLS.BGF_NDR Number of stalls trying to send to R3QPI. 0,1,2,3 0 null 0 QPI LL 0x35 0x10 UNC_Q_RxL_STALLS.BGF_SNP Number of stalls trying to send to R3QPI. 0,1,2,3 0 null 0 QPI LL 0x35 0x40 UNC_Q_RxL_STALLS.EGRESS_CREDITS Number of stalls trying to send to R3QPI. 0,1,2,3 0 null 0 QPI LL 0x35 0x80 UNC_Q_RxL_STALLS.GV Number of stalls trying to send to R3QPI. 0,1,2,3 0 null 0 QPI LL 0xd 0x0 UNC_Q_TxL0P_POWER_CYCLES Number of QPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the QPI lanes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA optimized workloads that largely only utilize QPI for snoops and their responses. Use edge detect to count the number of instances when the QPI link entered L0p. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. 0,1,2,3 0 null 0 QPI LL 0xc 0x0 UNC_Q_TxL0_POWER_CYCLES Number of QPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0.
Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event. 0,1,2,3 0 null 0 QPI LL 0x5 0x0 UNC_Q_TxL_BYPASSED Counts the number of times that an incoming flit was able to bypass the Tx flit buffer and pass directly out the QPI Link. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. 0,1,2,3 0 null 0 QPI LL 0x2 0x2 UNC_Q_TxL_CRC_NO_CREDITS.ALMOST_FULL Number of cycles when the Tx side ran out of Link Layer Retry credits, causing the Tx to stall. 0,1,2,3 0 null 0 QPI LL 0x2 0x1 UNC_Q_TxL_CRC_NO_CREDITS.FULL Number of cycles when the Tx side ran out of Link Layer Retry credits, causing the Tx to stall. 0,1,2,3 0 null 0 QPI LL 0x6 0x0 UNC_Q_TxL_CYCLES_NE Counts the number of cycles when the TxQ is not empty. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. 0,1,2,3 0 null 0 QPI LL 0x0 0x2 UNC_Q_TxL_FLITS_G0.DATA Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p. 0,1,2,3 0 null 0 QPI LL 0x0 0x1 UNC_Q_TxL_FLITS_G0.IDLE Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p. 0,1,2,3 0 null 0
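# A hedged sketch of how the two Tx link-power events above are typically combined: residency is the cycle count divided by total qfclk cycles over the interval. The total qfclk count is assumed to come from the QPI clockticks event elsewhere in this list; with edge detect set, the same events count entries rather than cycles, per the descriptions above.

# Assumed inputs: UNC_Q_TxL0_POWER_CYCLES, UNC_Q_TxL0P_POWER_CYCLES, and a
# total qfclk cycle count for the same interval (an assumption; see above).
def tx_link_power_residency(l0_cycles, l0p_cycles, total_qfclk_cycles):
    return l0_cycles / total_qfclk_cycles, l0p_cycles / total_qfclk_cycles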
QPI LL 0x0 0x4 UNC_Q_TxL_FLITS_G0.NON_DATA Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p. 0,1,2,3 0 null 0 QPI LL 0x0 0x18 UNC_Q_TxL_FLITS_G1.DRS Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x0 0x8 UNC_Q_TxL_FLITS_G1.DRS_DATA Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time.
0,1,2,3 0 null 1 QPI LL 0x0 0x10 UNC_Q_TxL_FLITS_G1.DRS_NONDATA Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x0 0x6 UNC_Q_TxL_FLITS_G1.HOM Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x0 0x4 UNC_Q_TxL_FLITS_G1.HOM_NONREQ Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time.
0,1,2,3 0 null 1 QPI LL 0x0 0x2 UNC_Q_TxL_FLITS_G1.HOM_REQ Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x0 0x1 UNC_Q_TxL_FLITS_G1.SNP Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x1 0xc UNC_Q_TxL_FLITS_G2.NCB Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time.
0,1,2,3 0 null 1 QPI LL 0x1 0x4 UNC_Q_TxL_FLITS_G2.NCB_DATA Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x1 0x8 UNC_Q_TxL_FLITS_G2.NCB_NONDATA Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x1 0x10 UNC_Q_TxL_FLITS_G2.NCS Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time.
0,1,2,3 0 null 1 QPI LL 0x1 0x1 UNC_Q_TxL_FLITS_G2.NDR_AD Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x1 0x2 UNC_Q_TxL_FLITS_G2.NDR_AK Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time. 0,1,2,3 0 null 1 QPI LL 0x4 0x0 UNC_Q_TxL_INSERTS Number of allocations into the QPI Tx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. 0,1,2,3 0 null 0 QPI LL 0x7 0x0 UNC_Q_TxL_OCCUPANCY Accumulates the number of flits in the TxQ. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This can be used with the cycles not empty event to track average occupancy, or the allocations event to track average lifetime in the TxQ. 0,1,2,3 0 null 0 QPI LL 0x1c 0x0 UNC_Q_VNA_CREDIT_RETURNS Number of VNA credits returned. 0,1,2,3 0 null 1 QPI LL 0x1b 0x0 UNC_Q_VNA_CREDIT_RETURN_OCCUPANCY Number of VNA credits on the Rx side that are waiting to be returned back across the link.
0,1,2,3 0 null 1 R3QPI 0x1 0x0 UNC_R3_CLOCKTICKS Counts the number of uclks in the QPI uclk domain. This could be slightly different than the count in the Ubox because of enable/freeze delays. However, because the QPI Agent is close to the Ubox, they generally should not diverge by more than a handful of cycles. 0,1,2 0 null 0 R3QPI 0x20 0x8 UNC_R3_IIO_CREDITS_ACQUIRED.DRS Counts the number of times the NCS/NCB/DRS credit is acquired in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. 0,1 0 null 0 R3QPI 0x20 0x10 UNC_R3_IIO_CREDITS_ACQUIRED.NCB Counts the number of times the NCS/NCB/DRS credit is acquired in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. 0,1 0 null 0 R3QPI 0x20 0x20 UNC_R3_IIO_CREDITS_ACQUIRED.NCS Counts the number of times the NCS/NCB/DRS credit is acquired in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. 0,1 0 null 0 R3QPI 0x21 0x8 UNC_R3_IIO_CREDITS_REJECT.DRS Counts the number of times that a request attempted to acquire an NCS/NCB/DRS credit in the QPI for sending messages on BL to the IIO but was rejected because no credit was available. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. 0,1 0 null 0 R3QPI 0x21 0x10 UNC_R3_IIO_CREDITS_REJECT.NCB Counts the number of times that a request attempted to acquire an NCS/NCB/DRS credit in the QPI for sending messages on BL to the IIO but was rejected because no credit was available. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. 0,1 0 null 0 R3QPI 0x21 0x20 UNC_R3_IIO_CREDITS_REJECT.NCS Counts the number of times that a request attempted to acquire an NCS/NCB/DRS credit in the QPI for sending messages on BL to the IIO but was rejected because no credit was available. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. 0,1 0 null 0
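# Since the acquire and reject events above share a single credit per message class, one simple derived metric is the fraction of credit attempts that were rejected. A sketch, assuming both counts cover the same interval and the same message class; the helper name is illustrative.

# e.g. acquired = UNC_R3_IIO_CREDITS_ACQUIRED.DRS, rejected = UNC_R3_IIO_CREDITS_REJECT.DRS
def iio_credit_reject_ratio(acquired, rejected):
    attempts = acquired + rejected
    return rejected / attempts if attempts else 0.0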
R3QPI 0x22 0x8 UNC_R3_IIO_CREDITS_USED.DRS Counts the number of cycles when the NCS/NCB/DRS credit is in use in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. 0,1 0 null 0 R3QPI 0x22 0x10 UNC_R3_IIO_CREDITS_USED.NCB Counts the number of cycles when the NCS/NCB/DRS credit is in use in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. 0,1 0 null 0 R3QPI 0x22 0x20 UNC_R3_IIO_CREDITS_USED.NCS Counts the number of cycles when the NCS/NCB/DRS credit is in use in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. 0,1 0 null 0 R3QPI 0x7 0x4 UNC_R3_RING_AD_USED.CCW_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2 0 null 0 R3QPI 0x7 0x8 UNC_R3_RING_AD_USED.CCW_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2 0 null 0 R3QPI 0x7 0x1 UNC_R3_RING_AD_USED.CW_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2 0 null 0 R3QPI 0x7 0x2 UNC_R3_RING_AD_USED.CW_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2 0 null 0 R3QPI 0x8 0x4 UNC_R3_RING_AK_USED.CCW_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. 0,1,2 0 null 0 R3QPI 0x8 0x8 UNC_R3_RING_AK_USED.CCW_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. 0,1,2 0 null 0 R3QPI 0x8 0x1 UNC_R3_RING_AK_USED.CW_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. 0,1,2 0 null 0 R3QPI 0x8 0x2 UNC_R3_RING_AK_USED.CW_ODD Counts the number of cycles that the AK ring is being used at this ring stop.
This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. 0,1,2 0 null 0 R3QPI 0x9 0x4 UNC_R3_RING_BL_USED.CCW_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2 0 null 0 R3QPI 0x9 0x8 UNC_R3_RING_BL_USED.CCW_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2 0 null 0 R3QPI 0x9 0x1 UNC_R3_RING_BL_USED.CW_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2 0 null 0 R3QPI 0x9 0x2 UNC_R3_RING_BL_USED.CW_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2 0 null 0 R3QPI 0xa 0xf UNC_R3_RING_IV_USED.ANY Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. The IV ring is unidirectional. Whether UP or DN is used is dependent on the system programming. Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time. 0,1,2 0 null 0 R3QPI 0x12 0x1 UNC_R3_RxR_BYPASSED.AD Counts the number of times when the Ingress was bypassed and an incoming transaction was bypassed directly across the BGF and into the qfclk domain. 0,1 0 null 0 R3QPI 0x10 0x8 UNC_R3_RxR_CYCLES_NE.DRS Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R3QPI 0x10 0x1 UNC_R3_RxR_CYCLES_NE.HOM Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R3QPI 0x10 0x10 UNC_R3_RxR_CYCLES_NE.NCB Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R3QPI 0x10 0x20 UNC_R3_RxR_CYCLES_NE.NCS Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0
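# The R3QPI ring *_USED events above count cycles, so dividing one of them by the R3QPI uclk count (UNC_R3_CLOCKTICKS) over the same interval yields a utilization fraction for a single ring direction/polarity at this stop. A minimal sketch under that assumption:

# used_cycles: one UNC_R3_RING_{AD,AK,BL,IV}_USED.* count
# r3_clockticks: UNC_R3_CLOCKTICKS collected over the same interval
def ring_utilization(used_cycles, r3_clockticks):
    return used_cycles / r3_clockticks if r3_clockticks else 0.0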
R3QPI 0x10 0x4 UNC_R3_RxR_CYCLES_NE.NDR Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R3QPI 0x10 0x2 UNC_R3_RxR_CYCLES_NE.SNP Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R3QPI 0x11 0x8 UNC_R3_RxR_INSERTS.DRS Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R3QPI 0x11 0x1 UNC_R3_RxR_INSERTS.HOM Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R3QPI 0x11 0x10 UNC_R3_RxR_INSERTS.NCB Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R3QPI 0x11 0x20 UNC_R3_RxR_INSERTS.NCS Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R3QPI 0x11 0x4 UNC_R3_RxR_INSERTS.NDR Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R3QPI 0x11 0x2 UNC_R3_RxR_INSERTS.SNP Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R3QPI 0x13 0x8 UNC_R3_RxR_OCCUPANCY.DRS Accumulates the occupancy of a given QPI Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency. 0 0 null 0 R3QPI 0x13 0x1 UNC_R3_RxR_OCCUPANCY.HOM Accumulates the occupancy of a given QPI Ingress queue in each cycle.
This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency. 0 0 null 0 R3QPI 0x13 0x10 UNC_R3_RxR_OCCUPANCY.NCB Accumulates the occupancy of a given QPI Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency. 0 0 null 0 R3QPI 0x13 0x20 UNC_R3_RxR_OCCUPANCY.NCS Accumulates the occupancy of a given QPI Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency. 0 0 null 0 R3QPI 0x13 0x4 UNC_R3_RxR_OCCUPANCY.NDR Accumulates the occupancy of a given QPI Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency. 0 0 null 0 R3QPI 0x13 0x2 UNC_R3_RxR_OCCUPANCY.SNP Accumulates the occupancy of a given QPI Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency. 0 0 null 0 R3QPI 0x37 0x8 UNC_R3_VN0_CREDITS_REJECT.DRS Number of times a request failed to acquire a DRS VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed. This should generally be a rare situation. 0,1 0 null 0 R3QPI 0x37 0x1 UNC_R3_VN0_CREDITS_REJECT.HOM Number of times a request failed to acquire a HOM VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed. This should generally be a rare situation. 0,1 0 null 0 R3QPI 0x37 0x10 UNC_R3_VN0_CREDITS_REJECT.NCB Number of times a request failed to acquire an NCB VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail.
This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed. This should generally be a rare situation. 0,1 0 null 0 R3QPI 0x37 0x20 UNC_R3_VN0_CREDITS_REJECT.NCS Number of times a request failed to acquire an NCS VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed. This should generally be a rare situation. 0,1 0 null 0 R3QPI 0x37 0x4 UNC_R3_VN0_CREDITS_REJECT.NDR Number of times a request failed to acquire an NDR VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed. This should generally be a rare situation. 0,1 0 null 0 R3QPI 0x37 0x2 UNC_R3_VN0_CREDITS_REJECT.SNP Number of times a request failed to acquire a SNP VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed. This should generally be a rare situation. 0,1 0 null 0 R3QPI 0x36 0x8 UNC_R3_VN0_CREDITS_USED.DRS Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. 0,1 0 null 0 R3QPI 0x36 0x1 UNC_R3_VN0_CREDITS_USED.HOM Number of times a VN0 credit was used on the HOM message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock.
Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. 0,1 0 null 0 R3QPI 0x36 0x10 UNC_R3_VN0_CREDITS_USED.NCB Number of times a VN0 credit was used on the NCB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. 0,1 0 null 0 R3QPI 0x36 0x20 UNC_R3_VN0_CREDITS_USED.NCS Number of times a VN0 credit was used on the NCS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. 0,1 0 null 0 R3QPI 0x36 0x4 UNC_R3_VN0_CREDITS_USED.NDR Number of times a VN0 credit was used on the NDR message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. 0,1 0 null 0 R3QPI 0x36 0x2 UNC_R3_VN0_CREDITS_USED.SNP Number of times a VN0 credit was used on the SNP message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used.
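Because a VN0 credit is counted once per packet while a reject indicates that both the VNA and VN0 pools were exhausted, the two VN0 event families above can be summarized per message class. A hedged sketch, assuming the per-umask deltas were collected elsewhere (all names are illustrative):

    # Hedged sketch: per-message-class VN0 fallback summary from the
    # UNC_R3_VN0_CREDITS_USED.* and UNC_R3_VN0_CREDITS_REJECT.* deltas.
    MESSAGE_CLASSES = ("HOM", "SNP", "NDR", "DRS", "NCS", "NCB")

    def vn0_summary(used: dict, rejected: dict) -> None:
        for mc in MESSAGE_CLASSES:
            u, r = used.get(mc, 0), rejected.get(mc, 0)
            # Rejects mean a request found neither a VNA nor a VN0 credit;
            # per the descriptions above this should be rare.
            print(f"{mc}: VN0 credits used={u}, rejects={r}")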
Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. 0,1 0 null 0 R3QPI 0x33 0x0 UNC_R3_VNA_CREDITS_ACQUIRED Number of QPI VNA Credit acquisitions. This event can be used in conjunction with the VNA In-Use Accumulator to calculate the average lifetime of a credit holder. VNA credits are used by all message classes in order to communicate across QPI. If a packet is unable to acquire credits, it will then attempt to use credits from the VN0 pool. Note that a single packet may require multiple flit buffers (i.e. when data is being transferred). Therefore, this event will increment by the number of credits acquired in each cycle. Filtering based on message class is not provided. One can count the number of packets transferred in a given message class using a qfclk event. 0,1 0 null 0 R3QPI 0x34 0x8 UNC_R3_VNA_CREDITS_REJECT.DRS Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough. 0,1 0 null 0 R3QPI 0x34 0x1 UNC_R3_VNA_CREDITS_REJECT.HOM Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough. 0,1 0 null 0 R3QPI 0x34 0x10 UNC_R3_VNA_CREDITS_REJECT.NCB Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough. 0,1 0 null 0 R3QPI 0x34 0x20 UNC_R3_VNA_CREDITS_REJECT.NCS Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits.
Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough. 0,1 0 null 0 R3QPI 0x34 0x4 UNC_R3_VNA_CREDITS_REJECT.NDR Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough. 0,1 0 null 0 R3QPI 0x34 0x2 UNC_R3_VNA_CREDITS_REJECT.SNP Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough. 0,1 0 null 0 R3QPI 0x31 0x0 UNC_R3_VNA_CREDIT_CYCLES_OUT Number of QPI uclk cycles when the transmitter has no VNA credits available and therefore cannot send any requests on this channel. Note that this does not mean that no flits can be transmitted, as those holding VN0 credits will still (potentially) be able to transmit. Generally it is the goal of the uncore that VNA credits should not run out, as this can substantially throttle back useful QPI bandwidth. 0,1 0 null 0 R3QPI 0x32 0x0 UNC_R3_VNA_CREDIT_CYCLES_USED Number of QPI uclk cycles with one or more VNA credits in use. This event can be used in conjunction with the VNA In-Use Accumulator to calculate the average number of used VNA credits. 0,1 0 null 0 R2PCIe 0x1 0x0 UNC_R2_CLOCKTICKS Counts the number of uclks in the R2PCIe uclk domain. This could be slightly different than the count in the Ubox because of enable/freeze delays. However, because the R2PCIe is close to the Ubox, they generally should not diverge by more than a handful of cycles. 0,1,2,3 0 null 0 R2PCIe 0x33 0x8 UNC_R2_IIO_CREDITS_ACQUIRED.DRS Counts the number of credits that are acquired in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly).
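For the IIO credit events in this neighborhood, the acquired and reject counts taken over the same interval can be folded into a single stall ratio; the reject events themselves appear just below. A minimal sketch with illustrative names:

    # Hedged sketch: fraction of BL-ring transactions into the IIO that
    # stalled waiting for an NCB/NCS credit, from deltas of the
    # UNC_R2_IIO_CREDITS_ACQUIRED.* and UNC_R2_IIO_CREDITS_REJECT.* events.
    def iio_credit_reject_ratio(acquired: int, rejected: int) -> float:
        attempts = acquired + rejected
        return rejected / attempts if attempts else 0.0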
0,1 0 null 0 R2PCIe 0x33 0x10 UNC_R2_IIO_CREDITS_ACQUIRED.NCB Counts the number of credits that are acquired in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). 0,1 0 null 0 R2PCIe 0x33 0x20 UNC_R2_IIO_CREDITS_ACQUIRED.NCS Counts the number of credits that are acquired in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). 0,1 0 null 0 R2PCIe 0x34 0x8 UNC_R2_IIO_CREDITS_REJECT.DRS Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). 0,1 0 null 0 R2PCIe 0x34 0x10 UNC_R2_IIO_CREDITS_REJECT.NCB Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). 0,1 0 null 0 R2PCIe 0x34 0x20 UNC_R2_IIO_CREDITS_REJECT.NCS Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). 0,1 0 null 0 R2PCIe 0x32 0x8 UNC_R2_IIO_CREDITS_USED.DRS Counts the number of cycles when one or more credits in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). 0,1 0 null 0 R2PCIe 0x32 0x10 UNC_R2_IIO_CREDITS_USED.NCB Counts the number of cycles when one or more credits in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). 0,1 0 null 0 R2PCIe 0x32 0x20 UNC_R2_IIO_CREDITS_USED.NCS Counts the number of cycles when one or more credits in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.
Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). 0,1 0 null 0 R2PCIe 0x7 0x4 UNC_R2_RING_AD_USED.CCW_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 R2PCIe 0x7 0x8 UNC_R2_RING_AD_USED.CCW_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 R2PCIe 0x7 0x1 UNC_R2_RING_AD_USED.CW_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 R2PCIe 0x7 0x2 UNC_R2_RING_AD_USED.CW_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 R2PCIe 0x8 0x4 UNC_R2_RING_AK_USED.CCW_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 R2PCIe 0x8 0x8 UNC_R2_RING_AK_USED.CCW_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 R2PCIe 0x8 0x1 UNC_R2_RING_AK_USED.CW_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 R2PCIe 0x8 0x2 UNC_R2_RING_AK_USED.CW_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 R2PCIe 0x9 0x4 UNC_R2_RING_BL_USED.CCW_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 R2PCIe 0x9 0x8 UNC_R2_RING_BL_USED.CCW_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 R2PCIe 0x9 0x1 UNC_R2_RING_BL_USED.CW_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 R2PCIe 0x9 0x2 UNC_R2_RING_BL_USED.CW_ODD Counts the number of cycles that the BL ring is being used at this ring stop. 
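Since the ring-used events count cycles, dividing one of them by UNC_R2_CLOCKTICKS over the same interval yields a utilization fraction for that ring stop. A minimal sketch, with illustrative names:

    # Hedged sketch: utilization of one R2PCIe ring stop. ring_used_cycles
    # is a delta of one UNC_R2_RING_{AD,AK,BL}_USED.* umask; r2_clockticks
    # is the UNC_R2_CLOCKTICKS delta for the same interval.
    def ring_stop_utilization(ring_used_cycles: int, r2_clockticks: int) -> float:
        return ring_used_cycles / r2_clockticks if r2_clockticks else 0.0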
This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 R2PCIe 0xa 0xf UNC_R2_RING_IV_USED.ANY Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. The IV ring is unidirectional. Whether UP or DN is used is dependent on the system programming. Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time. 0,1,2,3 0 null 0 R2PCIe 0x12 0x0 UNC_R2_RxR_AK_BOUNCES Counts the number of times when a request destined for the AK ingress bounced. 0 0 null 0 R2PCIe 0x10 0x8 UNC_R2_RxR_CYCLES_NE.DRS Counts the number of cycles when the R2PCIe Ingress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R2PCIe 0x10 0x10 UNC_R2_RxR_CYCLES_NE.NCB Counts the number of cycles when the R2PCIe Ingress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R2PCIe 0x10 0x20 UNC_R2_RxR_CYCLES_NE.NCS Counts the number of cycles when the R2PCIe Ingress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. 0,1 0 null 0 R2PCIe 0x25 0x1 UNC_R2_TxR_CYCLES_FULL.AD Counts the number of cycles when the R2PCIe Egress buffer is full. 0 0 null 0 R2PCIe 0x25 0x2 UNC_R2_TxR_CYCLES_FULL.AK Counts the number of cycles when the R2PCIe Egress buffer is full. 0 0 null 0 R2PCIe 0x25 0x4 UNC_R2_TxR_CYCLES_FULL.BL Counts the number of cycles when the R2PCIe Egress buffer is full. 0 0 null 0 R2PCIe 0x23 0x1 UNC_R2_TxR_CYCLES_NE.AD Counts the number of cycles when the R2PCIe Egress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity. 0 0 null 0 R2PCIe 0x23 0x2 UNC_R2_TxR_CYCLES_NE.AK Counts the number of cycles when the R2PCIe Egress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity. 0 0 null 0 R2PCIe 0x23 0x4 UNC_R2_TxR_CYCLES_NE.BL Counts the number of cycles when the R2PCIe Egress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy.
Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity. 0 0 null 0 R2PCIe 0x26 0x1 UNC_R2_TxR_NACKS.AD Counts the number of times that the Egress received a NACK from the ring and could not issue a transaction. 0,1 0 null 0 R2PCIe 0x26 0x2 UNC_R2_TxR_NACKS.AK Counts the number of times that the Egress received a NACK from the ring and could not issue a transaction. 0,1 0 null 0 R2PCIe 0x26 0x4 UNC_R2_TxR_NACKS.BL Counts the number of times that the Egress received a NACK from the ring and could not issue a transaction. 0,1 0 null 0 HA 0x20 0x3 UNC_H_ADDR_OPC_MATCH.FILT tbd 0,1,2,3 0 HA_AddrMatch0[31:6], HA_AddrMatch1[13:0], HA_OpcodeMatch[5:0] 0 HA 0x14 0x2 UNC_H_BYPASS_IMC.NOT_TAKEN Counts the number of times that an HA bypass of the iMC was attempted. This is a latency optimization for situations when there is light loading on the memory subsystem. This can be filtered by when the bypass was taken and when it was not. 0,1,2,3 0 null 0 HA 0x14 0x1 UNC_H_BYPASS_IMC.TAKEN Counts the number of times that an HA bypass of the iMC was attempted. This is a latency optimization for situations when there is light loading on the memory subsystem. This can be filtered by when the bypass was taken and when it was not. 0,1,2,3 0 null 0 HA 0x0 0x0 UNC_H_CLOCKTICKS Counts the number of uclks in the HA. This will be slightly different than the count in the Ubox because of enable/freeze delays. The HA is on the other side of the die from the fixed Ubox uclk counter, so the drift could be somewhat larger than in units that are closer like the QPI Agent. 0,1,2,3 0 null 0 HA 0xb 0x2 UNC_H_CONFLICT_CYCLES.CONFLICT tbd 0,1,2,3 0 null 0 HA 0xb 0x1 UNC_H_CONFLICT_CYCLES.NO_CONFLICT tbd 0,1,2,3 0 null 0 HA 0x11 0x0 UNC_H_DIRECT2CORE_COUNT Number of Direct2Core messages sent 0,1,2,3 0 null 0 HA 0x12 0x0 UNC_H_DIRECT2CORE_CYCLES_DISABLED Number of cycles in which Direct2Core was disabled 0,1,2,3 0 null 0 HA 0x13 0x0 UNC_H_DIRECT2CORE_TXN_OVERRIDE Number of reads where Direct2Core was overridden 0,1,2,3 0 null 0 HA 0xc 0x2 UNC_H_DIRECTORY_LOOKUP.NO_SNP Counts the number of transactions that looked up the directory. Can be filtered by requests that had to snoop and those that did not have to. 0,1,2,3 0 null 0 HA 0xc 0x1 UNC_H_DIRECTORY_LOOKUP.SNP Counts the number of transactions that looked up the directory. Can be filtered by requests that had to snoop and those that did not have to. 0,1,2,3 0 null 0 HA 0xd 0x3 UNC_H_DIRECTORY_UPDATE.ANY Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears. 0,1,2,3 0 null 0 HA 0xd 0x2 UNC_H_DIRECTORY_UPDATE.CLEAR Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears. 0,1,2,3 0 null 0 HA 0xd 0x1 UNC_H_DIRECTORY_UPDATE.SET Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears. 0,1,2,3 0 null 0 HA 0x22 0x1 UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI0 Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links. 0,1,2,3 0 null 0 HA 0x22 0x2 UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI1 Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent.
This can be filtered by the different credit pools and the different links. 0,1,2,3 0 null 0 HA 0x22 0x4 UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI0 Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links. 0,1,2,3 0 null 0 HA 0x22 0x8 UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI1 Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links. 0,1,2,3 0 null 0 HA 0x1e 0x0 UNC_H_IMC_RETRY tbd 0,1,2,3 0 null 0 HA 0x1a 0xf UNC_H_IMC_WRITES.ALL Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH. 0,1,2,3 0 null 0 HA 0x1a 0x1 UNC_H_IMC_WRITES.FULL Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH. 0,1,2,3 0 null 0 HA 0x1a 0x4 UNC_H_IMC_WRITES.FULL_ISOCH Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH. 0,1,2,3 0 null 0 HA 0x1a 0x2 UNC_H_IMC_WRITES.PARTIAL Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH. 0,1,2,3 0 null 0 HA 0x1a 0x8 UNC_H_IMC_WRITES.PARTIAL_ISOCH Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH. 0,1,2,3 0 null 0 HA 0x1 0x3 UNC_H_REQUESTS.READS Counts the total number of read requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc). 0,1,2,3 0 null 0 HA 0x1 0xc UNC_H_REQUESTS.WRITES Counts the total number of write requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc). 0,1,2,3 0 null 0 HA 0x3e 0x4 UNC_H_RING_AD_USED.CCW_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 HA 0x3e 0x8 UNC_H_RING_AD_USED.CCW_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 HA 0x3e 0x1 UNC_H_RING_AD_USED.CW_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 HA 0x3e 0x2 UNC_H_RING_AD_USED.CW_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 HA 0x3f 0x4 UNC_H_RING_AK_USED.CCW_EVEN Counts the number of cycles that the AK ring is being used at this ring stop.
This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 HA 0x3f 0x8 UNC_H_RING_AK_USED.CCW_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 HA 0x3f 0x1 UNC_H_RING_AK_USED.CW_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 HA 0x3f 0x2 UNC_H_RING_AK_USED.CW_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 HA 0x40 0x4 UNC_H_RING_BL_USED.CCW_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 HA 0x40 0x8 UNC_H_RING_BL_USED.CCW_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 HA 0x40 0x1 UNC_H_RING_BL_USED.CW_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 HA 0x40 0x2 UNC_H_RING_BL_USED.CW_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. 0,1,2,3 0 null 0 HA 0x15 0x1 UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN0 Counts the number of cycles when there are no 'regular' credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and 'special' requests such as ISOCH reads. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x15 0x2 UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN1 Counts the number of cycles when there are no 'regular' credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and 'special' requests such as ISOCH reads. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time.
One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x15 0x4 UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN2 Counts the number of cycles when there are no 'regular' credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and 'special' requests such as ISOCH reads. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x15 0x8 UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN3 Counts the number of cycles when there are no 'regular' credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and 'special' requests such as ISOCH reads. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x16 0x1 UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN0 Counts the number of cycles when there are no 'special' credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and 'special' requests such as ISOCH reads. This count only tracks the 'special' credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x16 0x2 UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN1 Counts the number of cycles when there are no 'special' credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and 'special' requests such as ISOCH reads. This count only tracks the 'special' credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x16 0x4 UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN2 Counts the number of cycles when there are no 'special' credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue).
This queue is broken into regular credits/buffers that are used by general reads, and 'special' requests such as ISOCH reads. This count only tracks the 'special' credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x16 0x8 UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN3 Counts the number of cycles when there are no 'special' credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and 'special' requests such as ISOCH reads. This count only tracks the 'special' credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x1b 0x1 UNC_H_TAD_REQUESTS_G0.REGION0 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for 'Monroe' systems that use the TAD to enable individual channels to enter self-refresh to save power. 0,1,2,3 0 null 0 HA 0x1b 0x2 UNC_H_TAD_REQUESTS_G0.REGION1 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for 'Monroe' systems that use the TAD to enable individual channels to enter self-refresh to save power. 0,1,2,3 0 null 0 HA 0x1b 0x4 UNC_H_TAD_REQUESTS_G0.REGION2 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for 'Monroe' systems that use the TAD to enable individual channels to enter self-refresh to save power. 0,1,2,3 0 null 0 HA 0x1b 0x8 UNC_H_TAD_REQUESTS_G0.REGION3 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. 
This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for 'Monroe' systems that use the TAD to enable individual channels to enter self-refresh to save power. 0,1,2,3 0 null 0 HA 0x1b 0x10 UNC_H_TAD_REQUESTS_G0.REGION4 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for 'Monroe' systems that use the TAD to enable individual channels to enter self-refresh to save power. 0,1,2,3 0 null 0 HA 0x1b 0x20 UNC_H_TAD_REQUESTS_G0.REGION5 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for 'Monroe' systems that use the TAD to enable individual channels to enter self-refresh to save power. 0,1,2,3 0 null 0 HA 0x1b 0x40 UNC_H_TAD_REQUESTS_G0.REGION6 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for 'Monroe' systems that use the TAD to enable individual channels to enter self-refresh to save power. 0,1,2,3 0 null 0 HA 0x1b 0x80 UNC_H_TAD_REQUESTS_G0.REGION7 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for 'Monroe' systems that use the TAD to enable individual channels to enter self-refresh to save power. 0,1,2,3 0 null 0 HA 0x1c 0x4 UNC_H_TAD_REQUESTS_G1.REGION10 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for 'Monroe' systems that use the TAD to enable individual channels to enter self-refresh to save power. 
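To see how traffic spreads across the decode regions that these events describe, the per-region counts can be normalized into shares. A hedged sketch assuming the REGION0-7 deltas (and, from the G1 events below, REGION8-10) were collected elsewhere; names are illustrative:

    # Hedged sketch: share of HA requests landing in each TAD region, from
    # UNC_H_TAD_REQUESTS_G0.REGION* and UNC_H_TAD_REQUESTS_G1.REGION* deltas.
    def tad_region_shares(region_counts: list) -> list:
        total = sum(region_counts)
        return [c / total if total else 0.0 for c in region_counts]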
0,1,2,3 0 null 0 HA 0x1c 0x8 UNC_H_TAD_REQUESTS_G1.REGION11 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for 'Monroe' systems that use the TAD to enable individual channels to enter self-refresh to save power. 0,1,2,3 0 null 0 HA 0x1c 0x1 UNC_H_TAD_REQUESTS_G1.REGION8 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for 'Monroe' systems that use the TAD to enable individual channels to enter self-refresh to save power. 0,1,2,3 0 null 0 HA 0x1c 0x2 UNC_H_TAD_REQUESTS_G1.REGION9 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for 'Monroe' systems that use the TAD to enable individual channels to enter self-refresh to save power. 0,1,2,3 0 null 0 HA 0x6 0x3 UNC_H_TRACKER_INSERTS.ALL Counts the number of allocations into the local HA tracker pool. This can be used in conjunction with the occupancy accumulation event in order to calculate average latency. One cannot filter between reads and writes. HA trackers are allocated as soon as a request enters the HA and are released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring. 0,1,2,3 0 null 0 HA 0xf 0x1 UNC_H_TxR_AD.NDR Counts the number of outbound transactions on the AD ring. This can be filtered by the NDR and SNP message classes. See the filter descriptions for more details. 0,1,2,3 0 null 0 HA 0xf 0x2 UNC_H_TxR_AD.SNP Counts the number of outbound transactions on the AD ring. This can be filtered by the NDR and SNP message classes. See the filter descriptions for more details.
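The tracker description above pairs these allocations with an occupancy accumulation event (not part of this excerpt) to derive latency. A minimal sketch of that ratio, with illustrative names:

    # Hedged sketch: average HA tracker residency in uclk cycles, i.e.
    # tracker occupancy accumulator / UNC_H_TRACKER_INSERTS.ALL.
    def avg_tracker_latency(tracker_occupancy_acc: int, tracker_inserts: int) -> float:
        return tracker_occupancy_acc / tracker_inserts if tracker_inserts else 0.0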
0,1,2,3 0 null 0 HA 0x2a 0x3 UNC_H_TxR_AD_CYCLES_FULL.ALL AD Egress Full 0,1,2,3 0 null 0 HA 0x2a 0x1 UNC_H_TxR_AD_CYCLES_FULL.SCHED0 AD Egress Full 0,1,2,3 0 null 0 HA 0x2a 0x2 UNC_H_TxR_AD_CYCLES_FULL.SCHED1 AD Egress Full 0,1,2,3 0 null 0 HA 0x29 0x3 UNC_H_TxR_AD_CYCLES_NE.ALL AD Egress Not Empty 0,1,2,3 0 null 0 HA 0x29 0x1 UNC_H_TxR_AD_CYCLES_NE.SCHED0 AD Egress Not Empty 0,1,2,3 0 null 0 HA 0x29 0x2 UNC_H_TxR_AD_CYCLES_NE.SCHED1 AD Egress Not Empty 0,1,2,3 0 null 0 HA 0x27 0x3 UNC_H_TxR_AD_INSERTS.ALL AD Egress Allocations 0,1,2,3 0 null 0 HA 0x27 0x1 UNC_H_TxR_AD_INSERTS.SCHED0 AD Egress Allocations 0,1,2,3 0 null 0 HA 0x27 0x2 UNC_H_TxR_AD_INSERTS.SCHED1 AD Egress Allocations 0,1,2,3 0 null 0 HA 0x28 0x3 UNC_H_TxR_AD_OCCUPANCY.ALL AD Egress Occupancy 0,1,2,3 0 null 0 HA 0x28 0x1 UNC_H_TxR_AD_OCCUPANCY.SCHED0 AD Egress Occupancy 0,1,2,3 0 null 0 HA 0x28 0x2 UNC_H_TxR_AD_OCCUPANCY.SCHED1 AD Egress Occupancy 0,1,2,3 0 null 0 HA 0x32 0x3 UNC_H_TxR_AK_CYCLES_FULL.ALL AK Egress Full 0,1,2,3 0 null 0 HA 0x32 0x1 UNC_H_TxR_AK_CYCLES_FULL.SCHED0 AK Egress Full 0,1,2,3 0 null 0 HA 0x32 0x2 UNC_H_TxR_AK_CYCLES_FULL.SCHED1 AK Egress Full 0,1,2,3 0 null 0 HA 0x31 0x3 UNC_H_TxR_AK_CYCLES_NE.ALL AK Egress Not Empty 0,1,2,3 0 null 0 HA 0x31 0x1 UNC_H_TxR_AK_CYCLES_NE.SCHED0 AK Egress Not Empty 0,1,2,3 0 null 0 HA 0x31 0x2 UNC_H_TxR_AK_CYCLES_NE.SCHED1 AK Egress Not Empty 0,1,2,3 0 null 0 HA 0x2f 0x3 UNC_H_TxR_AK_INSERTS.ALL AK Egress Allocations 0,1,2,3 0 null 0 HA 0x2f 0x1 UNC_H_TxR_AK_INSERTS.SCHED0 AK Egress Allocations 0,1,2,3 0 null 0 HA 0x2f 0x2 UNC_H_TxR_AK_INSERTS.SCHED1 AK Egress Allocations 0,1,2,3 0 null 0 HA 0xe 0x0 UNC_H_TxR_AK_NDR Counts the number of outbound NDR transactions sent on the AK ring. NDR stands for 'non-data response' and is generally used for completions that do not include data. AK NDR is used for messages to the local socket. 0,1,2,3 0 null 0 HA 0x30 0x3 UNC_H_TxR_AK_OCCUPANCY.ALL AK Egress Occupancy 0,1,2,3 0 null 0 HA 0x30 0x1 UNC_H_TxR_AK_OCCUPANCY.SCHED0 AK Egress Occupancy 0,1,2,3 0 null 0 HA 0x30 0x2 UNC_H_TxR_AK_OCCUPANCY.SCHED1 AK Egress Occupancy 0,1,2,3 0 null 0 HA 0x10 0x1 UNC_H_TxR_BL.DRS_CACHE Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination. 0,1,2,3 0 null 0 HA 0x10 0x2 UNC_H_TxR_BL.DRS_CORE Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination. 0,1,2,3 0 null 0 HA 0x10 0x4 UNC_H_TxR_BL.DRS_QPI Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination. 
0,1,2,3 0 null 0 HA 0x36 0x3 UNC_H_TxR_BL_CYCLES_FULL.ALL BL Egress Full 0,1,2,3 0 null 0 HA 0x36 0x1 UNC_H_TxR_BL_CYCLES_FULL.SCHED0 BL Egress Full 0,1,2,3 0 null 0 HA 0x36 0x2 UNC_H_TxR_BL_CYCLES_FULL.SCHED1 BL Egress Full 0,1,2,3 0 null 0 HA 0x35 0x3 UNC_H_TxR_BL_CYCLES_NE.ALL BL Egress Not Empty 0,1,2,3 0 null 0 HA 0x35 0x1 UNC_H_TxR_BL_CYCLES_NE.SCHED0 BL Egress Not Empty 0,1,2,3 0 null 0 HA 0x35 0x2 UNC_H_TxR_BL_CYCLES_NE.SCHED1 BL Egress Not Empty 0,1,2,3 0 null 0 HA 0x33 0x3 UNC_H_TxR_BL_INSERTS.ALL BL Egress Allocations 0,1,2,3 0 null 0 HA 0x33 0x1 UNC_H_TxR_BL_INSERTS.SCHED0 BL Egress Allocations 0,1,2,3 0 null 0 HA 0x33 0x2 UNC_H_TxR_BL_INSERTS.SCHED1 BL Egress Allocations 0,1,2,3 0 null 0 HA 0x34 0x3 UNC_H_TxR_BL_OCCUPANCY.ALL BL Egress Occupancy 0,1,2,3 0 null 0 HA 0x34 0x1 UNC_H_TxR_BL_OCCUPANCY.SCHED0 BL Egress Occupancy 0,1,2,3 0 null 0 HA 0x34 0x2 UNC_H_TxR_BL_OCCUPANCY.SCHED1 BL Egress Occupancy 0,1,2,3 0 null 0 HA 0x18 0x1 UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN0 Counts the number of cycles when there are no 'regular' credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and 'special' requests such as ISOCH writes. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x18 0x2 UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN1 Counts the number of cycles when there are no 'regular' credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and 'special' requests such as ISOCH writes. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x18 0x4 UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN2 Counts the number of cycles when there are no 'regular' credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and 'special' requests such as ISOCH writes. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x18 0x8 UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN3 Counts the number of cycles when there are no 'regular' credits available for posting writes from the HA into the iMC.
In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and 'special' requests such as ISOCH writes. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x19 0x1 UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN0 Counts the number of cycles when there are no 'special' credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and 'special' requests such as ISOCH writes. This count only tracks the 'special' credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x19 0x2 UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN1 Counts the number of cycles when there are no 'special' credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and 'special' requests such as ISOCH writes. This count only tracks the 'special' credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x19 0x4 UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN2 Counts the number of cycles when there are no 'special' credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and 'special' requests such as ISOCH writes. This count only tracks the 'special' credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 HA 0x19 0x8 UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN3 Counts the number of cycles when there are no 'special' credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and 'special' requests such as ISOCH writes. This count only tracks the 'special' credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH.
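The RPQ/WPQ no-credit events above count starved cycles, so normalizing by UNC_H_CLOCKTICKS over the same interval gives the fraction of time the HA could not post reads or writes to a channel. A minimal sketch with illustrative names:

    # Hedged sketch: share of HA cycles starved for an iMC queue credit,
    # from a UNC_H_{RPQ,WPQ}_CYCLES_NO_{REG,SPEC}_CREDITS.CHNx delta and the
    # UNC_H_CLOCKTICKS delta for the same interval.
    def credit_starved_share(no_credit_cycles: int, ha_clockticks: int) -> float:
        return no_credit_cycles / ha_clockticks if ha_clockticks else 0.0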
One can filter based on the memory controller channel. One or more channels can be tracked at a given time. 0,1,2,3 0 null 0 iMC 0x1 0x0 UNC_M_ACT_COUNT Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates. 0,1,2,3 0 null 0 iMC 0x4 0xf UNC_M_CAS_COUNT.ALL DRAM RD_CAS and WR_CAS Commands 0,1,2,3 0 null 0 iMC 0x4 0x3 UNC_M_CAS_COUNT.RD DRAM RD_CAS and WR_CAS Commands 0,1,2,3 0 null 0 iMC 0x4 0x1 UNC_M_CAS_COUNT.RD_REG DRAM RD_CAS and WR_CAS Commands 0,1,2,3 0 null 0 iMC 0x4 0x2 UNC_M_CAS_COUNT.RD_UNDERFILL DRAM RD_CAS and WR_CAS Commands 0,1,2,3 0 null 0 iMC 0x4 0xc UNC_M_CAS_COUNT.WR DRAM RD_CAS and WR_CAS Commands 0,1,2,3 0 null 0 iMC 0x4 0x8 UNC_M_CAS_COUNT.WR_RMM DRAM RD_CAS and WR_CAS Commands 0,1,2,3 0 null 0 iMC 0x4 0x4 UNC_M_CAS_COUNT.WR_WMM DRAM RD_CAS and WR_CAS Commands 0,1,2,3 0 null 0 iMC 0x6 0x0 UNC_M_DRAM_PRE_ALL Counts the number of times that the precharge all command was sent. 0,1,2,3 0 null 0 iMC 0x5 0x4 UNC_M_DRAM_REFRESH.HIGH Counts the number of refreshes issued. 0,1,2,3 0 null 0 iMC 0x5 0x2 UNC_M_DRAM_REFRESH.PANIC Counts the number of refreshes issued. 0,1,2,3 0 null 0 iMC 0x9 0x0 UNC_M_ECC_CORRECTABLE_ERRORS Counts the number of ECC errors detected and corrected by the iMC on this channel. This counter is only useful with ECC DRAM devices. This count will increment one time for each correction regardless of the number of bits corrected. The iMC can correct up to 4 bit errors in independent channel mode and 8 bit errors in lockstep mode. 0,1,2,3 0 null 0 iMC 0x7 0x8 UNC_M_MAJOR_MODES.ISOCH Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modes are channel-wide, and not a per-rank (or dimm or bank) mode. 0,1,2,3 0 null 0 iMC 0x7 0x4 UNC_M_MAJOR_MODES.PARTIAL Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modes are channel-wide, and not a per-rank (or dimm or bank) mode. 0,1,2,3 0 null 0 iMC 0x7 0x1 UNC_M_MAJOR_MODES.READ Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modes are channel-wide, and not a per-rank (or dimm or bank) mode. 0,1,2,3 0 null 0 iMC 0x7 0x2 UNC_M_MAJOR_MODES.WRITE Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modes are channel-wide, and not a per-rank (or dimm or bank) mode. 0,1,2,3 0 null 0 iMC 0x84 0x0 UNC_M_POWER_CHANNEL_DLLOFF Number of cycles when all the ranks in the channel are in CKE Slow (DLLOFF) mode. 0,1,2,3 0 null 0 iMC 0x85 0x0 UNC_M_POWER_CHANNEL_PPD Number of cycles when all the ranks in the channel are in PPD mode. If IBT=off is enabled, then this can be used to count those cycles. If it is not enabled, then this can count the number of cycles when that could have been taken advantage of. 0,1,2,3 0 null 0 iMC 0x83 0x1 UNC_M_POWER_CKE_CYCLES.RANK0 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF).
This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). 0,1,2,3 0 null 0 iMC 0x83 0x2 UNC_M_POWER_CKE_CYCLES.RANK1 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). 0,1,2,3 0 null 0 iMC 0x83 0x4 UNC_M_POWER_CKE_CYCLES.RANK2 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). 0,1,2,3 0 null 0 iMC 0x83 0x8 UNC_M_POWER_CKE_CYCLES.RANK3 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). 0,1,2,3 0 null 0 iMC 0x83 0x10 UNC_M_POWER_CKE_CYCLES.RANK4 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). 0,1,2,3 0 null 0 iMC 0x83 0x20 UNC_M_POWER_CKE_CYCLES.RANK5 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. 
There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). 0,1,2,3 0 null 0 iMC 0x83 0x40 UNC_M_POWER_CKE_CYCLES.RANK6 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). 0,1,2,3 0 null 0 iMC 0x83 0x80 UNC_M_POWER_CKE_CYCLES.RANK7 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). 0,1,2,3 0 null 0 iMC 0x86 0x0 UNC_M_POWER_CRITICAL_THROTTLE_CYCLES Counts the number of cycles when the iMC is in critical thermal throttling. When this happens, all traffic is blocked. This should be rare unless something bad is going on in the platform. There is no filtering by rank for this event. 0,1,2,3 0 null 0 iMC 0x43 0x0 UNC_M_POWER_SELF_REFRESH Counts the number of cycles when the iMC is in self-refresh and the iMC still has a clock. This happens in some package C-states. For example, the PCU may ask the iMC to enter self-refresh even though some of the cores are still processing. One use of this is for Monroe technology. Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these cases. 0,1,2,3 0 null 0 iMC 0x41 0x1 UNC_M_POWER_THROTTLE_CYCLES.RANK0 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. 0,1,2,3 0 null 0 iMC 0x41 0x2 UNC_M_POWER_THROTTLE_CYCLES.RANK1 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. 0,1,2,3 0 null 0 iMC 0x41 0x4 UNC_M_POWER_THROTTLE_CYCLES.RANK2 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. 
It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. 0,1,2,3 0 null 0 iMC 0x41 0x8 UNC_M_POWER_THROTTLE_CYCLES.RANK3 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. 0,1,2,3 0 null 0 iMC 0x41 0x10 UNC_M_POWER_THROTTLE_CYCLES.RANK4 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. 0,1,2,3 0 null 0 iMC 0x41 0x20 UNC_M_POWER_THROTTLE_CYCLES.RANK5 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. 0,1,2,3 0 null 0 iMC 0x41 0x40 UNC_M_POWER_THROTTLE_CYCLES.RANK6 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. 0,1,2,3 0 null 0 iMC 0x41 0x80 UNC_M_POWER_THROTTLE_CYCLES.RANK7 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. 0,1,2,3 0 null 0 iMC 0x8 0x1 UNC_M_PREEMPTION.RD_PREEMPT_RD Counts the number of times a read in the iMC preempts another read or write. Generally reads to an open page are issued ahead of requests to closed pages. This improves the page hit rate of the system. However, high priority requests can cause pages of active requests to be closed in order to get them out. This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latency. 0,1,2,3 0 null 0 iMC 0x8 0x2 UNC_M_PREEMPTION.RD_PREEMPT_WR Counts the number of times a read in the iMC preempts another read or write. Generally reads to an open page are issued ahead of requests to closed pages. This improves the page hit rate of the system. However, high priority requests can cause pages of active requests to be closed in order to get them out. This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latency. 0,1,2,3 0 null 0 iMC 0x2 0x2 UNC_M_PRE_COUNT.PAGE_CLOSE Counts the number of DRAM Precharge commands sent on this channel. 0,1,2,3 0 null 0 iMC 0x2 0x1 UNC_M_PRE_COUNT.PAGE_MISS Counts the number of DRAM Precharge commands sent on this channel. 0,1,2,3 0 null 0 iMC 0x12 0x0 UNC_M_RPQ_CYCLES_FULL Counts the number of cycles when the Read Pending Queue is full. When the RPQ is full, the HA will not be able to issue any additional read requests into the iMC. 
This count should be similar to the count in the HA, which tracks the number of cycles that the HA has no RPQ credits, just somewhat smaller to account for the credit return overhead. We generally do not expect to see RPQ become full except for potentially during Write Major Mode or while running with slow DRAM. This event only tracks non-ISOC queue entries. 0,1,2,3 0 null 0 iMC 0x11 0x0 UNC_M_RPQ_CYCLES_NE Counts the number of cycles that the Read Pending Queue is not empty. This can then be used to calculate the average occupancy (in conjunction with the Read Pending Queue Occupancy count). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This filter is to be used in conjunction with the occupancy filter so that one can correctly track the average occupancies for schedulable entries and scheduled requests. 0,1,2,3 0 null 0 iMC 0x10 0x0 UNC_M_RPQ_INSERTS Counts the number of allocations into the Read Pending Queue. This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests. 0,1,2,3 0 null 0 iMC 0x80 0x0 UNC_M_RPQ_OCCUPANCY Accumulates the occupancies of the Read Pending Queue each cycle. This can then be used to calculate both the average occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. 0,1,2,3 0 null 0 iMC 0x22 0x0 UNC_M_WPQ_CYCLES_FULL Counts the number of cycles when the Write Pending Queue is full. When the WPQ is full, the HA will not be able to issue any additional write requests into the iMC. This count should be similar to the count in the HA, which tracks the number of cycles that the HA has no WPQ credits, just somewhat smaller to account for the credit return overhead. 0,1,2,3 0 null 0 iMC 0x21 0x0 UNC_M_WPQ_CYCLES_NE Counts the number of cycles that the Write Pending Queue is not empty. This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count). The WPQ is used to schedule writes out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have 'posted' to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is actually not useful for deconstructing intermediate write latencies. 0,1,2,3 0 null 0 iMC 0x20 0x0 UNC_M_WPQ_INSERTS Counts the number of allocations into the Write Pending Queue.
This can then be used to calculate the average queuing latency (in conjunction with the WPQ occupancy count). The WPQ is used to schedule writes out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have 'posted' to the iMC. 0,1,2,3 0 null 0 iMC 0x81 0x0 UNC_M_WPQ_OCCUPANCY Accumulates the occupancies of the Write Pending Queue each cycle. This can then be used to calculate both the average queue occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The WPQ is used to schedule writes out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have 'posted' to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is actually not useful for deconstructing intermediate write latencies. So, we provide filtering based on whether the request has posted or not. By using the 'not posted' filter, we can track how long writes spent in the iMC before completions were sent to the HA. The 'posted' filter, on the other hand, provides information about how much queueing is actually happening in the iMC for writes before they are actually issued to memory. High average occupancies will generally coincide with high write major mode counts. 0,1,2,3 0 null 0 iMC 0x23 0x0 UNC_M_WPQ_READ_HIT Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections. 0,1,2,3 0 null 0 iMC 0x24 0x0 UNC_M_WPQ_WRITE_HIT Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections. 0,1,2,3 0 null 0 UBOX 0x0 0x0 UNC_U_CLOCKTICKS tbd 0,1 0 null 0 iMC 0x0 0x0 UNC_M_CLOCKTICKS Uncore Fixed Counter - uclks 0,1,2,3 0x0 null 0 IRP 0x17 0x1 UNC_I_ADDRESS_MATCH.STALL_COUNT Counts the number of times when an inbound write (from a device to memory or another device) had an address match with another request in the write cache.
0,1 0 null 0 IRP 0x17 0x2 UNC_I_ADDRESS_MATCH.MERGE_COUNT Counts the number of times when an inbound write (from a device to memory or another device) had an address match with another request in the write cache. 0,1 0 null 0 IRP 0x14 0x1 UNC_I_CACHE_ACK_PENDING_OCCUPANCY.ANY Accumulates the number of writes that have acquired ownership but have not yet returned their data to the uncore. These writes are generally queued up in the switch trying to get to the head of their queues so that they can post their data. The queue occupancy increments when the ACK is received, and decrements when either the data is returned OR a tickle is received and ownership is released. Note that a single tickle can result in multiple decrements. 0,1 0 null 0 IRP 0x14 0x2 UNC_I_CACHE_ACK_PENDING_OCCUPANCY.SOURCE Accumulates the number of writes that have acquired ownership but have not yet returned their data to the uncore. These writes are generally queued up in the switch trying to get to the head of their queues so that they can post their data. The queue occupancy increments when the ACK is received, and decrements when either the data is returned OR a tickle is received and ownership is released. Note that a single tickle can result in multiple decrements. 0,1 0 null 0 IRP 0x13 0x1 UNC_I_CACHE_OWN_OCCUPANCY.ANY Accumulates the number of writes (and write prefetches) that are outstanding in the uncore trying to acquire ownership in each cycle. This can be used with the write transaction count to calculate the average write latency in the uncore. The occupancy increments when a write request is issued, and decrements when the data is returned. 0,1 0 null 0 IRP 0x13 0x2 UNC_I_CACHE_OWN_OCCUPANCY.SOURCE Accumulates the number of writes (and write prefetches) that are outstanding in the uncore trying to acquire ownership in each cycle. This can be used with the write transaction count to calculate the average write latency in the uncore. The occupancy increments when a write request is issued, and decrements when the data is returned. 0,1 0 null 0 IRP 0x10 0x1 UNC_I_CACHE_READ_OCCUPANCY.ANY Accumulates the number of reads that are outstanding in the uncore in each cycle. This can be used with the read transaction count to calculate the average read latency in the uncore. The occupancy increments when a read request is issued, and decrements when the data is returned. 0,1 0 null 0 IRP 0x10 0x2 UNC_I_CACHE_READ_OCCUPANCY.SOURCE Accumulates the number of reads that are outstanding in the uncore in each cycle. This can be used with the read transaction count to calculate the average read latency in the uncore. The occupancy increments when a read request is issued, and decrements when the data is returned. 0,1 0 null 0 IRP 0x12 0x1 UNC_I_CACHE_TOTAL_OCCUPANCY.ANY Accumulates the number of reads and writes that are outstanding in the uncore in each cycle. This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events. 0,1 0 null 0 IRP 0x12 0x2 UNC_I_CACHE_TOTAL_OCCUPANCY.SOURCE Accumulates the number of reads and writes that are outstanding in the uncore in each cycle. This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events. 0,1 0 null 0 IRP 0x11 0x1 UNC_I_CACHE_WRITE_OCCUPANCY.ANY Accumulates the number of writes (and write prefetches) that are outstanding in the uncore in each cycle. This can be used with the transaction count event to calculate the average latency in the uncore.
The occupancy increments when the ownership fetch/prefetch is issued, and decrements when the data is returned to the uncore. 0,1 0 null 0 IRP 0x11 0x2 UNC_I_CACHE_WRITE_OCCUPANCY.SOURCE Accumulates the number of writes (and write prefetches) that are outstanding in the uncore in each cycle. This can be used with the transaction count event to calculate the average latency in the uncore. The occupancy increments when the ownership fetch/prefetch is issued, and decrements when the data is returned to the uncore. 0,1 0 null 0 IRP 0x0 0x0 UNC_I_CLOCKTICKS Number of clocks in the IRP. 0,1 0 null 0 IRP 0xB 0x0 UNC_I_RxR_AK_CYCLES_FULL Counts the number of cycles when the AK Ingress is full. This queue is where the IRP receives responses from R2PCIe (the ring). 0,1 0 null 0 IRP 0xA 0x0 UNC_I_RxR_AK_INSERTS Counts the number of allocations into the AK Ingress. This queue is where the IRP receives responses from R2PCIe (the ring). 0,1 0 null 0 IRP 0xC 0x0 UNC_I_RxR_AK_OCCUPANCY Accumulates the occupancy of the AK Ingress in each cycle. This queue is where the IRP receives responses from R2PCIe (the ring). 0,1 0 null 0 IRP 0x4 0x0 UNC_I_RxR_BL_DRS_CYCLES_FULL Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. 0,1 0 null 0 IRP 0x1 0x0 UNC_I_RxR_BL_DRS_INSERTS Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. 0,1 0 null 0 IRP 0x7 0x0 UNC_I_RxR_BL_DRS_OCCUPANCY Accumulates the occupancy of the BL Ingress in each cycle. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. 0,1 0 null 0 IRP 0x5 0x0 UNC_I_RxR_BL_NCB_CYCLES_FULL Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. 0,1 0 null 0 IRP 0x2 0x0 UNC_I_RxR_BL_NCB_INSERTS Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. 0,1 0 null 0 IRP 0x8 0x0 UNC_I_RxR_BL_NCB_OCCUPANCY Accumulates the occupancy of the BL Ingress in each cycle. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. 0,1 0 null 0 IRP 0x6 0x0 UNC_I_RxR_BL_NCS_CYCLES_FULL Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. 0,1 0 null 0 IRP 0x3 0x0 UNC_I_RxR_BL_NCS_INSERTS Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. 0,1 0 null 0 IRP 0x9 0x0 UNC_I_RxR_BL_NCS_OCCUPANCY Accumulates the occupancy of the BL Ingress in each cycle. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. 0,1 0 null 0 IRP 0x16 0x1 UNC_I_TICKLES.LOST_OWNERSHIP Counts the number of tickles that are received.
This is for both explicit (from Cbo) and implicit (internal conflict) tickles. 0,1 0 null 0 IRP 0x16 0x2 UNC_I_TICKLES.TOP_OF_QUEUE Counts the number of tickles that are received. This is for both explicit (from Cbo) and implicit (internal conflict) tickles. 0,1 0 null 0 IRP 0x15 0x1 UNC_I_TRANSACTIONS.READS Counts the number of 'Inbound' transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID. 0,1 0 null 0 IRP 0x15 0x2 UNC_I_TRANSACTIONS.WRITES Counts the number of 'Inbound' transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID. 0,1 0 null 0 IRP 0x15 0x4 UNC_I_TRANSACTIONS.PD_PREFETCHES Counts the number of 'Inbound' transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID. 0,1 0 null 0 IRP 0x15 0x8 UNC_I_TRANSACTIONS.ORDERINGQ Counts the number of 'Inbound' transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID. 0,1 0 IRPFilter[4:0] 0 IRP 0x18 0x0 UNC_I_TxR_AD_STALL_CREDIT_CYCLES Counts the number of times when it is not possible to issue a request to the R2PCIe because there are no AD Egress Credits available. 0,1 0 null 0 IRP 0x19 0x0 UNC_I_TxR_BL_STALL_CREDIT_CYCLES Counts the number of times when it is not possible to issue data to the R2PCIe because there are no BL Egress Credits available. 0,1 0 null 0 IRP 0xE 0x0 UNC_I_TxR_DATA_INSERTS_NCB Counts the number of requests issued to the switch (towards the devices). 0,1 0 null 0 IRP 0xF 0x0 UNC_I_TxR_DATA_INSERTS_NCS Counts the number of requests issued to the switch (towards the devices). 0,1 0 null 0 IRP 0xD 0x0 UNC_I_TxR_REQUEST_OCCUPANCY Accumulates the number of outstanding outbound requests from the IRP to the switch (towards the devices). This can be used in conjunction with the allocations event in order to calculate average latency of outbound requests. 0,1 0 null 0 IRP 0x1A 0x0 UNC_I_WRITE_ORDERING_STALL_CYCLES Counts the number of cycles when there are pending write ACKs in the switch but the switch->IRP pipeline is not utilized. 0,1 0 null 0
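
# Worked example: deriving DRAM bandwidth from UNC_M_CAS_COUNT. A minimal Python sketch, assuming
# the raw RD/WR CAS counts were already collected over a known interval by external tooling and
# that each CAS command transfers one 64-byte cache line.
def dram_bandwidth_gb_s(rd_cas, wr_cas, interval_s, line_bytes=64):
    """Approximate per-channel DRAM bandwidth in GB/s from CAS command counts."""
    return (rd_cas + wr_cas) * line_bytes / interval_s / 1e9

# Hypothetical counter values, for illustration only.
print(dram_bandwidth_gb_s(rd_cas=150_000_000, wr_cas=50_000_000, interval_s=1.0))  # 12.8 GB/s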
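
# Worked example: the UNC_M_RPQ_OCCUPANCY description pairs the accumulated occupancy with
# UNC_M_RPQ_CYCLES_NE (average occupancy) and UNC_M_RPQ_INSERTS (average latency). A sketch of
# that arithmetic; latency comes out in iMC clock cycles, and the optional dclk_hz argument
# (an assumption about how the memory clock was measured) converts it to nanoseconds.
def rpq_metrics(occupancy_sum, cycles_not_empty, inserts, dclk_hz=None):
    avg_occupancy = occupancy_sum / cycles_not_empty  # average entries while non-empty
    avg_latency_clks = occupancy_sum / inserts        # Little's law: residency / arrivals
    avg_latency_ns = 1e9 * avg_latency_clks / dclk_hz if dclk_hz else None
    return avg_occupancy, avg_latency_clks, avg_latency_ns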
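
# Worked example: the UNC_M_ACT_COUNT description says the number of Page Misses can be calculated
# by subtracting the number of Page Miss precharges (UNC_M_PRE_COUNT.PAGE_MISS) from the number of
# Activates. The sketch below mirrors that formula as written, without further interpretation.
def page_misses(act_count, pre_count_page_miss):
    return act_count - pre_count_page_miss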
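
# Worked example: turning an EventCode/UMask pair from this table into a raw PMON event-select
# value, with the event code in bits [7:0] of the control register and the umask in bits [15:8].
# The perf invocation below is an assumption about tool syntax and PMU naming (uncore_imc_0),
# both of which vary by kernel version and platform.
def pmon_config(event_code, umask):
    """Pack the umask into bits 15:8 and the event code into bits 7:0."""
    return ((umask & 0xFF) << 8) | (event_code & 0xFF)

# UNC_M_CAS_COUNT.RD: EventCode 0x4, UMask 0x3 (from the table above).
print(hex(pmon_config(0x4, 0x3)))                                    # 0x304
print("perf stat -a -e uncore_imc_0/event=0x4,umask=0x3/ sleep 1")   # assumed invocation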
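
# Worked example: the UNC_M_POWER_CKE_CYCLES descriptions suggest programming the event with
# Invert so that it counts cycles in a power-saving mode. Given such an inverted count and
# UNC_M_CLOCKTICKS sampled over the same interval, the rank's power-saving residency is a simple
# ratio. A sketch under that assumption:
def cke_off_residency(inverted_cke_cycles, m_clockticks):
    """Fraction of iMC cycles the selected rank spent outside CKE ON mode."""
    return inverted_cke_cycles / m_clockticks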
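
# Worked example: per the UNC_I_CACHE_READ_OCCUPANCY description, the average uncore latency of
# inbound (IRP) reads is the accumulated occupancy divided by the read transaction count
# (UNC_I_TRANSACTIONS.READS). The result is in IRP clocks; UNC_I_CLOCKTICKS over the same
# interval can be used to convert it to time.
def irp_avg_read_latency_clks(read_occupancy_sum, read_transactions):
    return read_occupancy_sum / read_transactions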