nvidia_gpu_exporter

Nvidia GPU exporter for Prometheus, using the nvidia-smi binary to gather metrics.

Introduction

There are many Nvidia GPU exporters out there; however, they have problems such as not being maintained, not providing pre-built binaries, depending on Linux and/or Docker, targeting enterprise setups (DCGM), and so on.

This is a simple exporter that uses the nvidia-smi(.exe) binary to collect, parse and export metrics. This makes it possible to run it on Windows and get GPU metrics while gaming - no Docker or Linux required.

This project is based on a0s/nvidia-smi-exporter. However, this one is written in Go to produce a single, static binary.

If you are a gamer who's into monitoring, you are in for a treat.

Highlights

  • Works on any system that has the nvidia-smi(.exe) binary - Windows, Linux, macOS... No C bindings required
  • Doesn't even need to run on the monitored machine: it can be configured to execute the nvidia-smi command remotely
  • No need for a Docker or Kubernetes environment
  • Auto-discovery of the metric fields nvidia-smi can expose (future-compatible)
  • Comes with its own Grafana dashboard

Visualization

You can use the official Grafana dashboard to see your GPU metrics in a nicely visualized way.

Here's how it looks: (screenshot of the Grafana dashboard)
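
For the dashboard to show data, Prometheus needs to scrape the exporter. A minimal scrape configuration sketch for prometheus.yml, assuming the exporter runs on the same host on its default port 9835 (the job name is arbitrary):

scrape_configs:
  - job_name: nvidia_gpu_exporter
    static_configs:
      - targets: ["localhost:9835"]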

Installation

By downloading the binaries (macOS/Linux/Windows)

  1. Go to the releases and download the latest release archive for your platform.
  2. Extract the archive.
  3. Move the binary to somewhere in your PATH.

Sample steps for Linux 64-bit:

$ VERSION=0.2.0
$ wget https://github.com/utkuozdemir/nvidia_gpu_exporter/releases/download/v${VERSION}/nvidia_gpu_exporter_${VERSION}_linux_x86_64.tar.gz
$ tar -xvzf nvidia_gpu_exporter_${VERSION}_linux_x86_64.tar.gz
$ mv nvidia_gpu_exporter /usr/local/bin
$ nvidia_gpu_exporter --help
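
To verify the installation, you can start the exporter and query the metrics endpoint from another terminal (assuming the default listen address :9835):

$ nvidia_gpu_exporter
$ curl -s http://localhost:9835/metrics | grep nvidia_smi_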

Installing as a Windows Service

Requirements:

  • Scoop package manager
  • NSSM (used to register the exporter as a Windows service)

Installation steps:

  1. Open a privileged PowerShell prompt (right click - Run as administrator)
  2. Run the following commands:
scoop bucket add nvidia_gpu_exporter https://github.com/utkuozdemir/scoop_nvidia_gpu_exporter.git
scoop install nvidia_gpu_exporter/nvidia_gpu_exporter --global
New-NetFirewallRule -DisplayName "Nvidia GPU Exporter" -Direction Inbound -Action Allow -Protocol TCP -LocalPort 9835
nssm install nvidia_gpu_exporter "C:\ProgramData\scoop\apps\nvidia_gpu_exporter\current\nvidia_gpu_exporter.exe"
Start-Service nvidia_gpu_exporter

Installing as a Linux (Systemd) Service

If your Linux distro uses systemd, you can install the exporter as a service using the unit file provided.

Follow these simple steps:

  1. Download the Linux binary matching your CPU architecture and put it under /usr/local/bin directory.
  2. Drop a copy of the file nvidia_gpu_exporter.service under /etc/systemd/system directory.
  3. Run sudo systemctl daemon-reload
  4. Start and enable the service to run on boot: sudo systemctl enable --now nvidia_gpu_exporter
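
For reference, a minimal unit file sketch is shown below - it assumes the binary was placed under /usr/local/bin as in step 1; the nvidia_gpu_exporter.service file shipped with the repository is the authoritative version and may differ (for example, by running under a dedicated user):

[Unit]
Description=Nvidia GPU Exporter
After=network-online.target

[Service]
ExecStart=/usr/local/bin/nvidia_gpu_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target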

Running in Docker

You can run the exporter in a Docker container.

For it to work, you will need to ensure the following:

  • The nvidia-smi binary is bind-mounted from the host to the container under its PATH
  • The devices /dev/nvidiaX (depending on the number of GPUs you have) and /dev/nvidiactl are mounted into the container
  • The library files libnvidia-ml.so and libnvidia-ml.so.1 are mounted inside the container. They are typically found under /usr/lib/x86_64-linux-gnu/ or /usr/lib/i386-linux-gnu/. Locate them on your host to ensure you are mounting them from the correct path.

A working example with all these combined (tested on Ubuntu 20.04):

docker run -d \
--name nvidia_smi_exporter \
--restart unless-stopped \
--device /dev/nvidiactl:/dev/nvidiactl \
--device /dev/nvidia0:/dev/nvidia0 \
-v /usr/lib/x86_64-linux-gnu/libnvidia-ml.so:/usr/lib/x86_64-linux-gnu/libnvidia-ml.so \
-v /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1:/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1 \
-v /usr/bin/nvidia-smi:/usr/bin/nvidia-smi \
-p 9835:9835 \
utkuozdemir/nvidia_gpu_exporter:0.2.0

Running in Kubernetes

Using the exporter in Kubernetes is pretty similar to running it in Docker.

You can use the official Helm chart to install the exporter.
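
Assuming the chart repository has already been added under the alias utkuozdemir (as in the Helm commands quoted in the comments below), installation looks roughly like this - see the chart's documentation for the repository URL and the available values:

helm install nvidia-gpu-exporter utkuozdemir/nvidia-gpu-exporter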

The chart was tested on the following configuration:

  • Ubuntu Desktop 20.04 with Kernel 5.8.0-55-generic
  • K3s v1.21.1+k3s1
  • Nvidia GeForce RTX 2080 Super
  • Nvidia Driver version 465.27

Note: I didn't have the chance to test it on an enterprise cluster with GPU support. If you have access to one, I would greatly appreciate it if you gave the exporter a try and shared the results.

Command Line Reference

The exporter binary accepts the following arguments:

usage: nvidia_gpu_exporter [<flags>]

Flags:
  -h, --help                Show context-sensitive help (also try --help-long and --help-man).
      --web.config.file=""  [EXPERIMENTAL] Path to configuration file that can enable TLS or authentication.
      --web.listen-address=":9835"
                            Address to listen on for web interface and telemetry.
      --web.telemetry-path="/metrics"
                            Path under which to expose metrics.
      --nvidia-smi-command="nvidia-smi"
                            Path or command to be used for the nvidia-smi executable
      --query-field-names="AUTO"
                            Comma-separated list of the query fields. You can find out possible fields by running `nvidia-smi --help-query-gpu`. The value `AUTO` will
                            automatically detect the fields to query.
      --log.level=info      Only log messages with the given severity or above. One of: [debug, info, warn, error]
      --log.format=logfmt   Output format of log messages. One of: [logfmt, json]
      --version             Show application version.
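
For example, to listen on a different port and query only a fixed set of fields (the names below are standard nvidia-smi query fields, as listed by nvidia-smi --help-query-gpu; the port is just an example):

nvidia_gpu_exporter \
  --web.listen-address=":9836" \
  --query-field-names="uuid,name,driver_version,temperature.gpu,utilization.gpu,memory.used,memory.total,power.draw"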

Remote scraping configuration

The exporter can be configured to scrape metrics from a remote machine.

An example use case is running the exporter in a Raspberry Pi in your home network while scraping the metrics from your PC over SSH.

The exporter supports arbitrary commands with arguments to produce nvidia-smi-like output. Therefore, configuration is pretty straightforward.

Simply override the --nvidia-smi-command command-line argument (replace SSH_USER and SSH_HOST with SSH credentials):

nvidia_gpu_exporter --nvidia-smi-command "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null SSH_USER@SSH_HOST nvidia-smi"
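
For this to work non-interactively, the user running the exporter needs passwordless (key-based) SSH access to the remote machine. A typical setup sketch, assuming OpenSSH on both ends (again, replace SSH_USER and SSH_HOST):

ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub SSH_USER@SSH_HOST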

Metrics

This is a sample, incomplete output of the returned metrics. In AUTO query fields mode, the exporter will discover new fields and expose them on a best-effort basis.

# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 7
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.16.5"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 1.169224e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.169224e+06
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.44498e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 273
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 0
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 4.110176e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 1.169224e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 6.397952e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 2.637824e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 6126
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 6.397952e+07
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 6.6617344e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 0
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 6399
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 9600
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 46240
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 49152
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.473924e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 885044
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 491520
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 491520
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 7.36146e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 8
# HELP nvidia_gpu_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which nvidia_gpu_exporter was built.
# TYPE nvidia_gpu_exporter_build_info gauge
nvidia_gpu_exporter_build_info{branch="",goversion="go1.16.5",revision="",version=""} 1
# HELP nvidia_smi_accounting_buffer_size accounting.buffer_size
# TYPE nvidia_smi_accounting_buffer_size gauge
nvidia_smi_accounting_buffer_size{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 4000
# HELP nvidia_smi_accounting_mode accounting.mode
# TYPE nvidia_smi_accounting_mode gauge
nvidia_smi_accounting_mode{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_clocks_current_graphics_clock_hz clocks.current.graphics [MHz]
# TYPE nvidia_smi_clocks_current_graphics_clock_hz gauge
nvidia_smi_clocks_current_graphics_clock_hz{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 6e+06
# HELP nvidia_smi_clocks_current_memory_clock_hz clocks.current.memory [MHz]
# TYPE nvidia_smi_clocks_current_memory_clock_hz gauge
nvidia_smi_clocks_current_memory_clock_hz{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 1.6e+07
# HELP nvidia_smi_clocks_current_sm_clock_hz clocks.current.sm [MHz]
# TYPE nvidia_smi_clocks_current_sm_clock_hz gauge
nvidia_smi_clocks_current_sm_clock_hz{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 6e+06
# HELP nvidia_smi_clocks_current_video_clock_hz clocks.current.video [MHz]
# TYPE nvidia_smi_clocks_current_video_clock_hz gauge
nvidia_smi_clocks_current_video_clock_hz{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 5.4e+08
# HELP nvidia_smi_clocks_max_graphics_clock_hz clocks.max.graphics [MHz]
# TYPE nvidia_smi_clocks_max_graphics_clock_hz gauge
nvidia_smi_clocks_max_graphics_clock_hz{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 2.28e+09
# HELP nvidia_smi_clocks_max_memory_clock_hz clocks.max.memory [MHz]
# TYPE nvidia_smi_clocks_max_memory_clock_hz gauge
nvidia_smi_clocks_max_memory_clock_hz{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 7.751e+09
# HELP nvidia_smi_clocks_max_sm_clock_hz clocks.max.sm [MHz]
# TYPE nvidia_smi_clocks_max_sm_clock_hz gauge
nvidia_smi_clocks_max_sm_clock_hz{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 2.28e+09
# HELP nvidia_smi_clocks_throttle_reasons_active clocks_throttle_reasons.active
# TYPE nvidia_smi_clocks_throttle_reasons_active gauge
nvidia_smi_clocks_throttle_reasons_active{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 4
# HELP nvidia_smi_clocks_throttle_reasons_applications_clocks_setting clocks_throttle_reasons.applications_clocks_setting
# TYPE nvidia_smi_clocks_throttle_reasons_applications_clocks_setting gauge
nvidia_smi_clocks_throttle_reasons_applications_clocks_setting{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_clocks_throttle_reasons_gpu_idle clocks_throttle_reasons.gpu_idle
# TYPE nvidia_smi_clocks_throttle_reasons_gpu_idle gauge
nvidia_smi_clocks_throttle_reasons_gpu_idle{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_clocks_throttle_reasons_hw_power_brake_slowdown clocks_throttle_reasons.hw_power_brake_slowdown
# TYPE nvidia_smi_clocks_throttle_reasons_hw_power_brake_slowdown gauge
nvidia_smi_clocks_throttle_reasons_hw_power_brake_slowdown{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_clocks_throttle_reasons_hw_slowdown clocks_throttle_reasons.hw_slowdown
# TYPE nvidia_smi_clocks_throttle_reasons_hw_slowdown gauge
nvidia_smi_clocks_throttle_reasons_hw_slowdown{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_clocks_throttle_reasons_hw_thermal_slowdown clocks_throttle_reasons.hw_thermal_slowdown
# TYPE nvidia_smi_clocks_throttle_reasons_hw_thermal_slowdown gauge
nvidia_smi_clocks_throttle_reasons_hw_thermal_slowdown{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_clocks_throttle_reasons_supported clocks_throttle_reasons.supported
# TYPE nvidia_smi_clocks_throttle_reasons_supported gauge
nvidia_smi_clocks_throttle_reasons_supported{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 511
# HELP nvidia_smi_clocks_throttle_reasons_sw_power_cap clocks_throttle_reasons.sw_power_cap
# TYPE nvidia_smi_clocks_throttle_reasons_sw_power_cap gauge
nvidia_smi_clocks_throttle_reasons_sw_power_cap{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 1
# HELP nvidia_smi_clocks_throttle_reasons_sw_thermal_slowdown clocks_throttle_reasons.sw_thermal_slowdown
# TYPE nvidia_smi_clocks_throttle_reasons_sw_thermal_slowdown gauge
nvidia_smi_clocks_throttle_reasons_sw_thermal_slowdown{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_clocks_throttle_reasons_sync_boost clocks_throttle_reasons.sync_boost
# TYPE nvidia_smi_clocks_throttle_reasons_sync_boost gauge
nvidia_smi_clocks_throttle_reasons_sync_boost{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_compute_mode compute_mode
# TYPE nvidia_smi_compute_mode gauge
nvidia_smi_compute_mode{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_count count
# TYPE nvidia_smi_count gauge
nvidia_smi_count{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 1
# HELP nvidia_smi_display_active display_active
# TYPE nvidia_smi_display_active gauge
nvidia_smi_display_active{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_display_mode display_mode
# TYPE nvidia_smi_display_mode gauge
nvidia_smi_display_mode{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 1
# HELP nvidia_smi_driver_version driver_version
# TYPE nvidia_smi_driver_version gauge
nvidia_smi_driver_version{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 471.11
# HELP nvidia_smi_encoder_stats_average_fps encoder.stats.averageFps
# TYPE nvidia_smi_encoder_stats_average_fps gauge
nvidia_smi_encoder_stats_average_fps{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_encoder_stats_average_latency encoder.stats.averageLatency
# TYPE nvidia_smi_encoder_stats_average_latency gauge
nvidia_smi_encoder_stats_average_latency{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_encoder_stats_session_count encoder.stats.sessionCount
# TYPE nvidia_smi_encoder_stats_session_count gauge
nvidia_smi_encoder_stats_session_count{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_enforced_power_limit_watts enforced.power.limit [W]
# TYPE nvidia_smi_enforced_power_limit_watts gauge
nvidia_smi_enforced_power_limit_watts{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 250
# HELP nvidia_smi_fan_speed_ratio fan.speed [%]
# TYPE nvidia_smi_fan_speed_ratio gauge
nvidia_smi_fan_speed_ratio{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0.38
# HELP nvidia_smi_gpu_info A metric with a constant '1' value labeled by gpu uuid, name, driver_model_current, driver_model_pending, vbios_version.
# TYPE nvidia_smi_gpu_info gauge
nvidia_smi_gpu_info{driver_model_current="WDDM",driver_model_pending="WDDM",name="NVIDIA GeForce RTX 2080 SUPER",uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa",vbios_version="90.04.7a.40.73"} 1
# HELP nvidia_smi_index index
# TYPE nvidia_smi_index gauge
nvidia_smi_index{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_inforom_oem inforom.oem
# TYPE nvidia_smi_inforom_oem gauge
nvidia_smi_inforom_oem{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 1.1
# HELP nvidia_smi_memory_free_bytes memory.free [MiB]
# TYPE nvidia_smi_memory_free_bytes gauge
nvidia_smi_memory_free_bytes{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 7.883194368e+09
# HELP nvidia_smi_memory_total_bytes memory.total [MiB]
# TYPE nvidia_smi_memory_total_bytes gauge
nvidia_smi_memory_total_bytes{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 8.589934592e+09
# HELP nvidia_smi_memory_used_bytes memory.used [MiB]
# TYPE nvidia_smi_memory_used_bytes gauge
nvidia_smi_memory_used_bytes{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 7.06740224e+08
# HELP nvidia_smi_name name
# TYPE nvidia_smi_name gauge
nvidia_smi_name{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 2080
# HELP nvidia_smi_pci_bus pci.bus
# TYPE nvidia_smi_pci_bus gauge
nvidia_smi_pci_bus{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 12
# HELP nvidia_smi_pci_device pci.device
# TYPE nvidia_smi_pci_device gauge
nvidia_smi_pci_device{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_pci_device_id pci.device_id
# TYPE nvidia_smi_pci_device_id gauge
nvidia_smi_pci_device_id{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 7809
# HELP nvidia_smi_pci_domain pci.domain
# TYPE nvidia_smi_pci_domain gauge
nvidia_smi_pci_domain{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_pci_sub_device_id pci.sub_device_id
# TYPE nvidia_smi_pci_sub_device_id gauge
nvidia_smi_pci_sub_device_id{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 1.074074712e+09
# HELP nvidia_smi_pcie_link_gen_current pcie.link.gen.current
# TYPE nvidia_smi_pcie_link_gen_current gauge
nvidia_smi_pcie_link_gen_current{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 3
# HELP nvidia_smi_pcie_link_gen_max pcie.link.gen.max
# TYPE nvidia_smi_pcie_link_gen_max gauge
nvidia_smi_pcie_link_gen_max{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 3
# HELP nvidia_smi_pcie_link_width_current pcie.link.width.current
# TYPE nvidia_smi_pcie_link_width_current gauge
nvidia_smi_pcie_link_width_current{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 16
# HELP nvidia_smi_pcie_link_width_max pcie.link.width.max
# TYPE nvidia_smi_pcie_link_width_max gauge
nvidia_smi_pcie_link_width_max{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 16
# HELP nvidia_smi_power_default_limit_watts power.default_limit [W]
# TYPE nvidia_smi_power_default_limit_watts gauge
nvidia_smi_power_default_limit_watts{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 250
# HELP nvidia_smi_power_draw_watts power.draw [W]
# TYPE nvidia_smi_power_draw_watts gauge
nvidia_smi_power_draw_watts{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 28.07
# HELP nvidia_smi_power_limit_watts power.limit [W]
# TYPE nvidia_smi_power_limit_watts gauge
nvidia_smi_power_limit_watts{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 250
# HELP nvidia_smi_power_management power.management
# TYPE nvidia_smi_power_management gauge
nvidia_smi_power_management{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 1
# HELP nvidia_smi_power_max_limit_watts power.max_limit [W]
# TYPE nvidia_smi_power_max_limit_watts gauge
nvidia_smi_power_max_limit_watts{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 350
# HELP nvidia_smi_power_min_limit_watts power.min_limit [W]
# TYPE nvidia_smi_power_min_limit_watts gauge
nvidia_smi_power_min_limit_watts{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 105
# HELP nvidia_smi_pstate pstate
# TYPE nvidia_smi_pstate gauge
nvidia_smi_pstate{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 8
# HELP nvidia_smi_temperature_gpu temperature.gpu
# TYPE nvidia_smi_temperature_gpu gauge
nvidia_smi_temperature_gpu{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 34
# HELP nvidia_smi_utilization_gpu_ratio utilization.gpu [%]
# TYPE nvidia_smi_utilization_gpu_ratio gauge
nvidia_smi_utilization_gpu_ratio{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP nvidia_smi_utilization_memory_ratio utilization.memory [%]
# TYPE nvidia_smi_utilization_memory_ratio gauge
nvidia_smi_utilization_memory_ratio{uuid="df6e7a7c-7314-46f8-abc4-b88b36dcf3aa"} 0
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 0
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0

Contributing

See CONTRIBUTING for details.

Owner

Utku Özdemir - Backend & Cloud Infrastructure Engineer
Comments
  • got errors "couldn't parse number from: [n/a]"

    Describe the bug: Executed command: ./nvidia_gpu_exporter --web.listen-address :20127 --nvidia-smi-command="nvidia-smi" --log.level=debug. After refreshing the nvidia-gpu-metrics dashboard in Grafana, the command console throws errors and the dashboard shows nothing.

    To Reproduce Steps to reproduce the behavior:

    1. Run command './nvidia_gpu_exporter --web.listen-address :20127 --nvidia-smi-command="nvidia-smi" --log.level=debug'
    2. See the error (screenshot)

    Expected behavior: The dashboard shows metrics data normally.

    Model and Version: NVIDIA-SMI 450.57, Driver Version: 450.57, CUDA Version: 11.0

    • GPU Model [e.g. GeForce RTX 2080 TI]
    • App version and architecture [' linux_x86_64']
    • Operating System [e.g. Ubuntu 18.04]
    • Nvidia GPU driver version [e.g. Linux driver nvidia-driver-450]

    root@4d15723e44d8:/home# ./nvidia_gpu_exporter --version
    nvidia_gpu_exporter, version 0.4.0 (branch: HEAD, revision: 76d7496285a4c5f36a4520eaa3fe32fb0c400992)
      build user: goreleaser
      build date: 2022-02-08T00:42:44Z
      go version: go1.17.5
      platform: linux/amd64

    could you give a suggestion? Thx!

  • Exporter not able to recover after first scrape failure

    Describe the bug: We are using this exporter for datacenter monitoring and metrics analysis. Sometimes the exporter fails to gather information and cannot recover from this.

    To Reproduce: Hard to say! It happens from time to time. The service runs fine, then the error appears and it is not able to recover.

    Expected behavior: The service stops / enters a failure state when this happens.

    Console output:
    curl: nvidia_smi_failed_scrapes_total 4718
    systemctl: Mar 10 18:24:48 HOSTNAME prometheus-nvidia-exporter-2[874]: level=error ts=2022-03-10T17:24:48.652Z caller=exporter.go:148 error="command failed. stderr: err: exit status 2"

    Model and Version

    • GPU Model K3100M + GRIDs
    • App version and architecture v0.4.0 - linux_x86_64
    • Installation method: binary download
    • Operating System: Centos7 + Rocky8
    • Nvidia GPU driver version --- Quadro K3100M: Linux 431 --- GRID: Linux 418 --- etc....

    Updated to 0.4.0, problems still occur.


    Updated to 0.5.0

  • Add metric to get `nvidia-smi` command's exit status (error or success)

    In some environments the nvidia-smi command returns with an error code.

    Ex:

    $ nvidia-smi
    Failed to initialize NVML: Driver/library version mismatch
    

    Can you add a simple metric (e.g. nvidia_smi_command_status) to report whether the nvidia-smi command ran successfully?

    I think we need to add a new metric when there is output on stderr:

    // capture stdout and stderr of the nvidia-smi invocation
    cmd.Stdout = &stdout
    cmd.Stderr = &stderr

    // run the command; currently a failure only yields an error, no metric
    err := runCmd(cmd)
    if err != nil {
        return nil, fmt.Errorf("command failed. stderr: %s err: %w", stderr.String(), err)
    }

    // parse the CSV output into a table of the queried fields
    t, err := parseCSVIntoTable(strings.TrimSpace(stdout.String()), qFields)
    if err != nil {
        return nil, err
    }

    return &t, nil


    Thanks

  • Multinode dashboard extension

    Hi, great project, mate, thanks!

    I don't seem to be able to get data over multiple nodes, though. In our company we have an on-premise cluster of PCs with GPUs, on top of which there is a k8s cluster and we are looking for a proper monitoring solution. I use 3 nodes in my test setup -> 1 master and 2 nodes. All of them are to pick up GPU workloads.

    I used the Helm chart to deploy the exporters:

    helm install ozdemir utkuozdemir/nvidia-gpu-exporter -f nvidia-gpu-utku-ozdemir-values.yml
    

    with the following values to allow deploying to master nodes as well

    tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists
    
    service:
      type: NodePort
      nodePort: 30699
    

    NOTE: By the way, as it turns out it's not possible to set a nodePort in the Helm values. Are you planning on adding this feature by any chance? It would be very convenient.

    The exporters seem to be deployed successfully:

    vvcServiceAccount@k8s-master-node0:~$ k get pod -o wide
    NAME                                                      READY   STATUS    RESTARTS   AGE   IP          NODE               NOMINATED NODE   READINESS GATES
    utku-ozdemir-nvidia-gpu-exporter-6gt5r                         1/1     Running   1          19d   10.44.0.1   k8s-worker-node1   <none>           <none>
    utku-ozdemir-nvidia-gpu-exporter-7jqgs                         1/1     Running   1          19d   10.32.0.2   k8s-master-node0   <none>           <none>
    utku-ozdemir-nvidia-gpu-exporter-m9h72                         1/1     Running   0          19d   10.36.0.1   k8s-worker-node2   <none>           <none>
    

    The logs from all 3 exporters are the following:

    level=info ts=2021-08-13T15:08:03.434Z caller=main.go:65 msg="Listening on address" address=:9835
    ts=2021-08-13T15:08:03.435Z caller=log.go:124 level=info msg="TLS is disabled." http2=false
    

    Prometheus specs for scraping:

    prometheus:
      prometheusSpec:
        additionalScrapeConfigs:
        - job_name: nvidia_gpu_exporter
          static_configs:
            - targets: [ '10.0.10.3:31585' ] 
    

    with 31585 being the random nodePort assigned to the service and 10.0.10.3 being the IP address of the master node:

    NAME                               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
    utku-ozdemir-nvidia-gpu-exporter   NodePort   10.106.182.203   <none>        9835:31585/TCP   19d
    

    I used this Grafana dashboard to display the collected data: https://grafana.com/grafana/dashboards/14574

    The results are good (screenshot), but there is only one node in the dropdown (one of the worker nodes). Am I missing something in my setup, or is it not possible to have all nodes in the dropdown list to get an overview per node?

    Ideally, it would be awesome to also have a summary dashboard with averaged metrics to monitor the whole cluster at once. Do you by any chance plan on developing something like that, or maybe know some dashboards that already do that? I tried the official one (nvidia/gpu-operator with https://grafana.com/grafana/dashboards/12239 as a dashboard) but it is not nearly as impressive as yours, and it also had a bunch of empty charts and "no data" type of situations.

    So, to sum up:

    • would be cool to add nodePort option to Helm chart values
    • is it possible to have an overview of multiple nodes?

    thanks!

  • Unable to view encoder stats - "1:87: parse error: unexpected number \"0\""

    Describe the bug: Trying to view encoder stats in a gauge as suggested logs "1:87: parse error: unexpected number \"0\"".

    To Reproduce: Add a panel with the following queries (replacing $gpu with the actual uuid does not work either):

    nvidia_smi_encoder_stats_average_fps{uuid="$gpu"} 0
    nvidia_smi_encoder_stats_average_latency{uuid="$gpu"} 0
    nvidia_smi_encoder_stats_session_count{uuid="$gpu"} 0
    
    2. See the error: 1:87: parse error: unexpected number \"0\"

    Expected behavior: To see 3 gauges with encoder stats.

    Model and Version

    • GTX 950 and Quadro P400
    • v0.3.0 - linux_x86_64
    • Installed via binary download
    • Arch Linux
    • nvidia-driver-470.74
  • Can't visualize metrics on Grafana Dashboard

    Hey, currently I'm facing a problem where I can't visualize the metrics on my Grafana dashboard. I'll describe the steps that I followed:

    1. I installed the .deb package according to what is described in INSTALL.md. My laptop has Ubuntu 20.04 and my GPU is a GeForce 3060.
    2. On CONFIGURE.md I set the command of nvidia-smi as
    nvidia_gpu_exporter --nvidia-smi-command 'nvidia-smi'
    
    3. I imported the ready-to-use dashboard into Grafana, but no metrics appeared.

    After step (3), I was looking in this GitHub repo for similar issues and found this one: #7. However, one of the suggestions is to verify the metrics on Prometheus at http://localhost:9090, but this page throws a 404 Page Not Found error. So I believe there's a Prometheus setup step that I'm missing, and I think that's my problem.

    How could I set up Prometheus properly to be able to visualize the metrics on the dashboard?

  • Monitor can't turn off after Windows goes into power saving mode

    When the monitor goes to sleep, it wakes up immediately. So basically the exporter breaks monitor sleep.

    To Reproduce Steps to reproduce the behavior:

    1. When the exporter is running as a service, the monitor can't fall asleep
    2. When I stop the service, the monitor can turn off when going to sleep

    Expected behavior: The monitor turns off after the delay set in the Windows power mode settings.

    Console output

    Model and Version

    • GPU Model: Gigabyte RTX 3060 Gaming OC
    • App version and architecture: v0.3.0 [x86_64.zip]
    • Installation method: binary download, runs as a service with nssm
    • Operating System: Windows 10 LTSC
    • Nvidia GPU driver version: Windows Studio Driver 472.84

    Additional context: In Grafana I update information from the exporter every 5 seconds, and the monitor turns on every 5 seconds after going to sleep.

  • Running Exporter Causes Stuttering in All Games

    I've been using this amazing exporter for a month or two now, but I noticed in almost all games (and even some videos) that there'd be some pretty constant stuttering. This would occur once every 30 seconds or so and would essentially look like someone pressed pause and then resume really quickly.

    I troubleshooted everything under the sun. I ran DDU, did a full Windows Reinstall, disabled/uninstalled any overlays I had running. It turns out the culprit was this exporter. Disabling the exporter made the problem go away immediately.

    I also run the Prometheus Windows Exporter (https://github.com/prometheus-community/windows_exporter) and it doesn't seem to cause the same issue.

    Unfortunately I don't really have any other info to share with you about this, and maybe there's nothing that can be done, but I thought I'd mention it in case there is a possible solution.

    My Main Specs: AMD Ryzen 3900X, EVGA 3080 Ultra, 1TB NVMe, 64GB DDR4 @ 3600

    Thanks!

  • No metrics for multiple master nodes

    Hey man, me again!

    Having another issue, maybe you could help. I have the same setup as before, but now more master nodes: 3 master nodes and 2 worker nodes. The issue is that Prometheus does not seem to have access to the metrics from the 2 new master nodes for some reason. Consequently, they are not displayed in the Grafana dashboard.
    All nodes have nvidia_gpu_exporter running.

    (screenshots)

    That's how I deploy nvidia_gpu_exporter:

    helm install --version=0.3.1 ozdemir utkuozdemir/nvidia-gpu-exporter -f nvidia-gpu-ozdemir-values.yml
    

    with the following values:

    tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists
    
    hostPort:
      enabled: true
      port: 31585
    

    Might be missing something obvious, will continue the investigation.

  • Driver version is not displayed

    I have a minor issue of the driver version not being displayed in the dashboard (screenshot).

    Prometheus can't get this info either, it seems (screenshots).

    I run a multinode k8s cluster, where I have the nvidia-gpu-exporter Helm chart deployed. Each node has Ubuntu 18.04 installed. When I log in to an ozdemir-nvidia-gpu-exporter pod, I can fetch the driver version easily like this:

    root@ozdemir-nvidia-gpu-exporter-d2j74:/# nvidia-smi --query-gpu=driver_version --format=csv
    driver_version
    460.91.03
    
  • docs: add enterprise kubernetes test, installation and deployment notes

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [x] I have read the CONTRIBUTING document.
    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [x] I have updated the documentation accordingly.
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.

    This PR adds enterprise Kubernetes test, installation, and deployment notes to the documentation.

    I have tested this locally and on our staging AKS cluster. The Prometheus metrics were consumed through Datadog and matched the output in the Metrics.md document.

  • First draft for automatic AUR PKGBUILD generation

    Closes #86.

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] I have read the CONTRIBUTING document.
    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
  • AUR package

    I created an AUR package for this repository to make installing on Arch Linux easier. Maybe this is also helpful for others :-)

    (Feel free to close this issue again)

  • add variable datasource to fix error

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [x] I have read the CONTRIBUTING document.
    • [x] My code follows the code style of this project.
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.

    Hello, I have tried to provision your dashboard to Grafana (https://grafana.com/docs/grafana/latest/administration/provisioning/#dashboards) and got an error: "Error updating options: Datasource ${DS_PROMETHEUS} was not found". I have added a variable datasource to fix this issue. This variable datasource will also be usable when you have multiple Prometheus instances connected to your Grafana.

  • Dependency Dashboard

    This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

    Pending Branch Automerge

    These updates await pending status checks before automerging. Click on a checkbox to abort the branch automerge, and create a PR instead.

    • [ ] chore(deps): update dependency goreleaser/goreleaser to v1.14.1

    Detected dependencies

    dockerfile
    Dockerfile
    • ubuntu 22.04
    github-actions
    .github/workflows/build.yml
    • actions/checkout v3.2.0
    • actions/setup-go v3.5.0
    • golangci/golangci-lint-action v3.3.1
    • codecov/codecov-action v3.1.1
    • goreleaser/goreleaser-action v4.1.0
    .github/workflows/release.yml
    • actions/checkout v3.2.0
    • actions/setup-go v3.5.0
    • docker/login-action v2.1.0
    • goreleaser/goreleaser-action v4.1.0
    gomod
    go.mod
    • go 1.19
    • github.com/go-kit/log v0.2.1
    • github.com/prometheus/client_golang v1.14.0
    • github.com/prometheus/common v0.39.0
    • github.com/prometheus/exporter-toolkit v0.8.2
    • github.com/stretchr/testify v1.8.1
    • golang.org/x/exp v0.0.0-20221230185412-738e83a70c30@738e83a70c30
    • gopkg.in/alecthomas/kingpin.v2 v2.2.6
    regex
    .github/workflows/build.yml
    • golangci/golangci-lint v1.50.1
    • kyoh86/richgo v0.3.11
    • goreleaser/goreleaser v1.14.0
    .github/workflows/release.yml
    • goreleaser/goreleaser v1.14.0
    .github/workflows/build.yml
    .github/workflows/release.yml
    go.mod

    • [ ] Check this box to trigger a request for Renovate to run again on this repository