.TH "xdp-monitor" "8" "DECEMBER 12, 2022" "V1.4.0" "A simple XDP monitoring tool" .SH "NAME" XDP-monitor \- a simple BPF-powered XDP monitoring tool .SH "SYNOPSIS" .PP XDP-monitor is a tool that monitors various XDP related statistics and events using BPF tracepoints infrastructure, trying to be as low overhead as possible. .PP Note that by default, statistics for successful XDP redirect events is disabled, as that leads to a per-packet BPF tracing overhead, which while being low overhead, can lead to packet processing degradation. .PP This tool relies on the BPF raw tracepoints infrastructure in the kernel. .PP There is more information on the meaning of the output in both default (terse) and verbose output mode, in the \fIOutput Format Description\fP section. .SS "Running xdp-monitor" .PP The syntax for running xdp-monitor is: .RS .nf \fCxdp-monitor [options] \fP .fi .RE .PP The supported options are: .SS "-i, --interval " .PP Set the polling interval for collecting all statistics and displaying them to the output. The unit of interval is in seconds. .SS "-s, --stats" .PP Enable statistics for successful redirection. This option comes with a per packet tracing overhead, for recording all successful redirections. .SS "-e, --extended" .PP Start xdp-bench in "extended" output mode. If not set, xdp-bench will start in "terse" mode. The output mode can be switched by hitting C-$\ while the program is running. See also the \fBOutput Format Description\fP section below. .SS "-v, --verbose" .PP Enable verbose logging. Supply twice to enable verbose logging from the underlying \fIlibxdp\fP and \fIlibbpf\fP libraries. .SS "--version" .PP Show the application version and exit. .SS "-h, --help" .PP Display a summary of the available options .SH "Output Format Description" .PP By default, redirect success statistics are disabled, use \fI\-\-stats\fP to enable. The terse output mode is default, extended output mode can be activated using the \fI\-\-extended\fP command line option. .PP SIGQUIT (Ctrl + \\) can be used to switch the mode dynamically at runtime. .PP Terse mode displays at most the following fields: .RS .nf \fCrx/s Number of packets received per second redir/s Number of packets successfully redirected per second err,drop/s Aggregated count of errors per second (including dropped packets) xmit/s Number of packets transmitted on the output device per second \fP .fi .RE .PP Verbose output mode displays at most the following fields: .RS .nf \fCFIELD DESCRIPTION receive Displays the number of packets received and errors encountered Whenever an error or packet drop occurs, details of per CPU error and drop statistics will be expanded inline in terse mode. 
.SH "Output Format Description"
.PP
By default, redirect success statistics are disabled; use \fI\-\-stats\fP to
enable them. The terse output mode is the default; extended output mode can be
activated using the \fI\-\-extended\fP command line option.
.PP
SIGQUIT (Ctrl + \\) can be used to switch the mode dynamically at runtime; see
the example at the end of this page.
.PP
Terse mode displays at most the following fields:
.RS
.nf
\fCrx/s        Number of packets received per second
redir/s     Number of packets successfully redirected per second
err,drop/s  Aggregated count of errors per second (including dropped packets)
xmit/s      Number of packets transmitted on the output device per second
\fP
.fi
.RE
.PP
Extended output mode displays at most the following fields:
.RS
.nf
\fCFIELD          DESCRIPTION
receive        Displays the number of packets received and errors encountered

               Whenever an error or packet drop occurs, details of per-CPU
               error and drop statistics will be expanded inline in terse mode.

               pkt/s     - Packets received per second
               drop/s    - Packets dropped per second
               error/s   - Errors encountered per second

redirect       Displays the number of packets successfully redirected
               Errors encountered are expanded under the redirect_err field
               Note that passing -s to enable it adds a per-packet overhead

               redir/s   - Packets redirected successfully per second

redirect_err   Displays the number of packets that failed redirection
               The errno is expanded under this field with a per-CPU count
               The recognized errors are:
                 EINVAL:      Invalid redirection
                 ENETDOWN:    Device being redirected to is down
                 EMSGSIZE:    Packet length too large for device
                 EOPNOTSUPP:  Operation not supported
                 ENOSPC:      No space in ptr_ring of cpumap kthread

               error/s   - Packets that failed redirection per second

enqueue to cpu N
               Displays the number of packets enqueued to the bulk queue of
               CPU N. Expands to cpu:FROM->N to display enqueue stats for each
               CPU enqueuing to CPU N. Received packets can be associated with
               the CPU the redirect program is enqueuing packets to.

               pkt/s     - Packets enqueued per second from other CPU to CPU N
               drop/s    - Packets dropped when trying to enqueue to CPU N
               bulk-avg  - Average number of packets processed for each event

kthread        Displays the number of packets processed in the CPUMAP kthread
               for each CPU. Packets consumed from the ptr_ring in the kthread,
               and its xdp_stats (after calling the CPUMAP BPF program), are
               expanded below this. xdp_stats are expanded as a total and then
               per-CPU to associate them with each CPU's pinned CPUMAP kthread.

               pkt/s     - Packets consumed per second from ptr_ring
               drop/s    - Packets dropped per second in kthread
               sched     - Number of times kthread called schedule()

               xdp_stats (also expands to per-CPU counts)

               pass/s    - XDP_PASS count for CPUMAP program execution
               drop/s    - XDP_DROP count for CPUMAP program execution
               redir/s   - XDP_REDIRECT count for CPUMAP program execution

xdp_exception  Displays xdp_exception tracepoint events
               This can occur due to internal driver errors, unrecognized XDP
               actions, or an explicit user trigger by use of XDP_ABORTED.
               Each action is expanded below this field with its count.

               hit/s     - Number of times the tracepoint was hit per second

devmap_xmit    Displays devmap_xmit tracepoint events
               This tracepoint is invoked for successful transmissions on the
               output device, but these statistics are not available for
               generic XDP mode, so they will be omitted from the output when
               using SKB mode.

               xmit/s    - Number of packets that were transmitted per second
               drop/s    - Number of packets that failed transmission per second
               drv_err/s - Number of internal driver errors per second
               bulk-avg  - Average number of packets processed for each event
\fP
.fi
.RE
.SH "BUGS"
.PP
Please report any bugs on Github: \fIhttps://github.com/xdp-project/xdp-tools/issues\fP
.SH "AUTHOR"
.PP
The original xdp-monitor tool was written by Jesper Dangaard Brouer. It was
then rewritten to support more features by Kumar Kartikeya Dwivedi. This man
page was written by Kumar Kartikeya Dwivedi.
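.SH "Switching output mode at runtime"
.PP
As a usage sketch (assuming a single running instance), the SIGQUIT toggle
described in the \fIOutput Format Description\fP section can also be triggered
from another shell by sending the signal to the xdp-monitor process:
.RS
.nf
\fC# Toggle a running xdp-monitor between terse and extended output
kill -QUIT "$(pidof xdp-monitor)"
\fP
.fi
.RE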