Prometheus exporter for Chia node metrics

chia_exporter

Prometheus metric collector for Chia nodes, using the local RPC API

Building and Running

With the Go compiler tools installed:

go build

Run ./chia_exporter -h to see the command configuration options:

-cert string
      The full node SSL certificate. (default "$HOME/.chia/mainnet/config/ssl/full_node/private_full_node.crt")
-key string
      The full node SSL key. (default "$HOME/.chia/mainnet/config/ssl/full_node/private_full_node.key")
-listen string
      The address to listen on for HTTP requests. (default ":9133")
-url string
      The base URL for the full node RPC endpoint. (default "https://localhost:8555")
-wallet string
      The base URL for the wallet RPC endpoint. (default "https://localhost:9256")

Metrics

Example of all metrics currently exposed:

# HELP chia_blockchain_difficulty Current difficulty
# TYPE chia_blockchain_difficulty gauge
chia_blockchain_difficulty 112
# HELP chia_blockchain_height Current height
# TYPE chia_blockchain_height gauge
chia_blockchain_height 221609
# HELP chia_blockchain_space_bytes Estimated current netspace
# TYPE chia_blockchain_space_bytes gauge
chia_blockchain_space_bytes 1.8771214186533368e+18
# HELP chia_blockchain_sync_status Sync status, 0=not synced, 1=syncing, 2=synced
# TYPE chia_blockchain_sync_status gauge
chia_blockchain_sync_status 2
# HELP chia_blockchain_total_iters Current total iterations
# TYPE chia_blockchain_total_iters gauge
chia_blockchain_total_iters 7.20695891692e+11
# HELP chia_peers_count Number of peers currently connected.
# TYPE chia_peers_count gauge
chia_peers_count{type="1"} 52
chia_peers_count{type="2"} 0
chia_peers_count{type="3"} 1
chia_peers_count{type="4"} 0
chia_peers_count{type="5"} 0
chia_peers_count{type="6"} 1
# HELP chia_wallet_confirmed_balance_mojo Confirmed wallet balance.
# TYPE chia_wallet_confirmed_balance_mojo gauge
chia_wallet_confirmed_balance_mojo{id="1"} 100
# HELP chia_wallet_max_send_mojo Maximum sendable amount.
# TYPE chia_wallet_max_send_mojo gauge
chia_wallet_max_send_mojo{id="1"} 100
# HELP chia_wallet_pending_change_mojo Pending change amount.
# TYPE chia_wallet_pending_change_mojo gauge
chia_wallet_pending_change_mojo{id="1"} 0
# HELP chia_wallet_spendable_balance_mojo Spendable wallet balance.
# TYPE chia_wallet_spendable_balance_mojo gauge
chia_wallet_spendable_balance_mojo{id="1"} 100
# HELP chia_wallet_unconfirmed_balance_mojo Unconfirmed wallet balance.
# TYPE chia_wallet_unconfirmed_balance_mojo gauge
chia_wallet_unconfirmed_balance_mojo{id="1"} 100

Blockchain

Various node and blockchain metrics are collected from the get_blockchain_state endpoint.
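
A minimal sketch of what such an RPC call looks like in Go, assuming mutual-TLS authentication with the node certificate and a response shape trimmed to the fields behind the metrics above (the exact struct used by the exporter may differ):

package main

import (
	"bytes"
	"crypto/tls"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Trimmed view of the get_blockchain_state response; field names are
// assumptions based on the metrics listed above.
type blockchainState struct {
	BlockchainState struct {
		Difficulty uint64 `json:"difficulty"`
		Space      uint64 `json:"space"`
		Peak       struct {
			Height uint64 `json:"height"`
		} `json:"peak"`
		Sync struct {
			Synced   bool `json:"synced"`
			SyncMode bool `json:"sync_mode"`
		} `json:"sync"`
	} `json:"blockchain_state"`
	Success bool `json:"success"`
}

func main() {
	// Illustrative certificate paths; see the -cert and -key flags above.
	cert, err := tls.LoadX509KeyPair("private_full_node.crt", "private_full_node.key")
	if err != nil {
		log.Fatal(err)
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{
			Certificates:       []tls.Certificate{cert},
			InsecureSkipVerify: true, // the node uses a private CA
		},
	}}
	// Chia RPC endpoints take a POST with a JSON body (empty here).
	resp, err := client.Post("https://localhost:8555/get_blockchain_state",
		"application/json", bytes.NewBufferString("{}"))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var s blockchainState
	if err := json.NewDecoder(resp.Body).Decode(&s); err != nil {
		log.Fatal(err)
	}
	fmt.Println("height:", s.BlockchainState.Peak.Height, "space:", s.BlockchainState.Space)
}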

Connections

The number of connections is collected for each node type from the get_connections endpoint (see the aggregation sketch after the type list below).

Node types (from chia/server/outbound_message.py):

FULL_NODE = 1
HARVESTER = 2
FARMER = 3
TIMELORD = 4
INTRODUCER = 5
WALLET = 6
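
A rough sketch of the per-type aggregation, assuming each entry returned by get_connections carries a numeric type field matching the codes above (the real response has many more fields):

package main

import "fmt"

// connection is a trimmed view of one entry in the get_connections
// response; only the numeric node type is needed for the peer counts.
type connection struct {
	Type int `json:"type"`
}

// countByType aggregates connections into the per-type counts behind
// the chia_peers_count{type="..."} gauge.
func countByType(conns []connection) map[int]int {
	counts := make(map[int]int)
	for _, c := range conns {
		counts[c.Type]++
	}
	return counts
}

func main() {
	conns := []connection{{Type: 1}, {Type: 1}, {Type: 6}}
	for t, n := range countByType(conns) {
		fmt.Printf("chia_peers_count{type=\"%d\"} %d\n", t, n)
	}
}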

Wallet

The wallet balances are collected from the get_wallet_balance endpoint.
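
A sketch of the response fields behind these metrics, assuming JSON field names inferred from the metric names above (the actual get_wallet_balance schema may differ slightly):

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed view of the get_wallet_balance response; the JSON field names
// here are assumptions inferred from the metric names above.
type walletBalance struct {
	WalletBalance struct {
		WalletID           int    `json:"wallet_id"`
		ConfirmedBalance   uint64 `json:"confirmed_wallet_balance"`
		UnconfirmedBalance uint64 `json:"unconfirmed_wallet_balance"`
		SpendableBalance   uint64 `json:"spendable_balance"`
		PendingChange      uint64 `json:"pending_change"`
		MaxSendAmount      uint64 `json:"max_send_amount"`
	} `json:"wallet_balance"`
	Success bool `json:"success"`
}

func main() {
	// Sample payload shaped like a get_wallet_balance reply.
	sample := `{"wallet_balance": {"wallet_id": 1, "confirmed_wallet_balance": 100,
	 "unconfirmed_wallet_balance": 100, "spendable_balance": 100,
	 "pending_change": 0, "max_send_amount": 100}, "success": true}`
	var wb walletBalance
	if err := json.Unmarshal([]byte(sample), &wb); err != nil {
		panic(err)
	}
	fmt.Println(wb.WalletBalance.ConfirmedBalance)
}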

Owner
Kevin Retzke
Supporting large-scale scientific computing at Fermilab and on the @opensciencegrid, with a focus on monitoring.
Comments
  • Get plots

    Added data from the harvester API get_plots:

    # HELP chia_plots Number of plots currently using.
    # TYPE chia_plots gauge
    chia_plots 5
    
    # HELP chia_plots_failed_to_open Number of plots files failed to open.
    # TYPE chia_plots_failed_to_open gauge
    chia_plots_failed_to_open 0
    
    # HELP chia_plots_not_found Number of plots files not found.
    # TYPE chia_plots_not_found gauge
    chia_plots_not_found 0
    
  • Rebuild Dockerfile to run chia_exporter as a shim on Chia

    The latest official Chia image is used as a base, and by default the exporter exports its metrics. Further env configuration is needed to make the underlying Chia setup do what you want, for example to run as a full_node or harvester.

    This is far from ideal as a solution, since running the exporter in the same container instead of as a sidecar violates the separation of concerns. Right now, though, it's more or less necessary: Chia defaults to listening only on 127.0.0.1, doesn't have any config hooks to override that per RPC service, and doesn't even have a configuration switch for changing the local_hostname used for RPC service binding. Thus the next best thing is to bring the exporter into the same container so it can access the default localhost-only RPC endpoints.

  • netspace overflow and sync status gone

    (Sadly) the Chia netspace has gone up a lot in the last 12 hours and no longer fits into an int64:

    2021/05/22 04:51:25 error decoding get_blockchain_state response: json: cannot unmarshal number 9566905519282954240 into Go struct field .blockchain_state.Space of type int64
    

    Also, blockchain_sync_status doesn't get populated, but I don't see any error in the exporter log lines.
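
    One way around the int64 overflow, as a sketch under the assumption that the exporter decodes the state into a Go struct, is to decode the space value as uint64 (or json.Number / big.Int) instead of int64:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Sketch: decoding space as uint64 lets netspace values above 2^63-1
    // (like the one in the error above) unmarshal without error.
    type state struct {
    	Space uint64 `json:"space"`
    }

    func main() {
    	var s state
    	err := json.Unmarshal([]byte(`{"space": 9566905519282954240}`), &s)
    	fmt.Println(s.Space, err)
    }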

  • Wallet fingerprint not added to metric

    When I query the /metrics endpoint I see all my wallet metrics and balances, but the wallet_fingerprint values are missing. I had the issue with Chia 1.2.3 and 1.2.5.

    It also seems to miss all the other wallets (it only finds wallet_id=1, and I have many wallets).

    
    chia_wallet_farmed_amount{wallet_fingerprint="",wallet_id="1"} 6.000006000021e+12
    # HELP chia_wallet_fee_amount Fee amount amount
    # TYPE chia_wallet_fee_amount gauge
    chia_wallet_fee_amount{wallet_fingerprint="",wallet_id="1"} 6.000021e+06
    # HELP chia_wallet_height Wallet synced height.
    # TYPE chia_wallet_height gauge
    chia_wallet_height{wallet_fingerprint="",wallet_id="1"} 825447
    # HELP chia_wallet_last_height_farmed Last height farmed
    # TYPE chia_wallet_last_height_farmed gauge
    chia_wallet_last_height_farmed{wallet_fingerprint="",wallet_id="1"} 816184
    # HELP chia_wallet_pool_reward_amount Pool Reward amount
    # TYPE chia_wallet_pool_reward_amount gauge
    chia_wallet_pool_reward_amount{wallet_fingerprint="",wallet_id="1"} 5.25e+12
    # HELP chia_wallet_reward_amount Reward amount
    # TYPE chia_wallet_reward_amount gauge
    chia_wallet_reward_amount{wallet_fingerprint="",wallet_id="1"} 7.5e+11
    # HELP chia_wallet_sync_status Sync status, 0=not synced, 1=syncing, 2=synced
    # TYPE chia_wallet_sync_status gauge
    chia_wallet_sync_status{wallet_fingerprint="",wallet_id="1"} 2
    # HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
    
    

    Also it is timing out on:

    2021/09/07 12:09:42 error calling get_wallet_balance: Post "https://localhost:9256/get_wallet_balance": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

    In Chia my wallets take some time to answer, and I think that timeout is causing me problems. Can I increase it somewhere?
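
    Assuming the exporter's RPC calls go through a shared http.Client, the timeout could be made configurable along these lines (the -timeout flag here is hypothetical, not an existing option):

    package main

    import (
    	"flag"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Hypothetical flag; the exporter does not currently expose one.
    	timeout := flag.Duration("timeout", 10*time.Second,
    		"Timeout for RPC requests to the Chia services.")
    	flag.Parse()

    	client := &http.Client{Timeout: *timeout}
    	fmt.Println("RPC client timeout:", client.Timeout)
    }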

  • Get plots details

    Added details of the plot files, so we can distinguish OG and portable ones.

    I see that there are other commits in the PR, more than I created in this branch, but the final changes to the files look fine.

  • error decoding get_plots response

    At tip-of-tree commit f8fd48d, when I run the exporter it doesn't like the JSON response from my node (which is running on localhost):

    2021/07/15 13:29:33 chia_exporter version 0.5.0
    2021/07/15 13:29:33 Connected to node at https://localhost:8555 on mainnet
    2021/07/15 13:29:35 error decoding get_plots response: json: cannot unmarshal string into Go struct field PlotFiles.failed_to_open_filenames of type main.PlotData
    2021/07/15 13:29:35 Listening on :9133. Serving metrics on /metrics.
    
    

    Using a debugger, the JSON reply is:

    (dlv) p r
    *net/http.Response {
            Status: "200 OK",
            StatusCode: 200,
            Proto: "HTTP/1.1",
            ProtoMajor: 1,
            ProtoMinor: 1,
            Header: net/http.Header [
                    "Content-Type": [
                            "application/json",
                    ],
                    "Content-Length": ["187912"],
                    "Date": [
                            "Thu, 15 Jul 2021 13:56:40 GMT",
                    ],
                    "Server": [
                            "Python/3.8 aiohttp/3.7.4",
                    ],
            ],
            Body: io.ReadCloser(*net/http.cancelTimerBody) *{
                    stop: net/http.setRequestCancel.func2,
                    rc: io.ReadCloser(*net/http.bodyEOFSignal) ...,
                    reqDidTimeout: net/http.(*atomicBool).isSet-fm,},
            ContentLength: 187912,
            TransferEncoding: []string len: 0, cap: 0, nil,
            Close: false,
            Uncompressed: false,
            Trailer: net/http.Header nil,
            Request: *net/http.Request {
                    Method: "POST",
                    URL: *(*"net/url.URL")(0xc0002c8600),
                    Proto: "HTTP/1.1",
                    ProtoMajor: 1,
                    ProtoMinor: 1,
                    Header: net/http.Header [...],
                    Body: io.ReadCloser(io/ioutil.nopCloser) *(*io.ReadCloser)(0xc0002a8a40),
                    GetBody: net/http.NewRequestWithContext.func3,
                    ContentLength: 7,
                    TransferEncoding: []string len: 0, cap: 0, nil,
                    Close: false,
                    Host: "localhost:8560",
                    Form: net/url.Values nil,
                    PostForm: net/url.Values nil,
                    MultipartForm: *mime/multipart.Form nil,
                    Trailer: net/http.Header nil,
                    RemoteAddr: "",
                    RequestURI: "",
                    TLS: *crypto/tls.ConnectionState nil,
                    Cancel: <-chan struct {} {
                            qcount: 0,
                            dataqsiz: 0,
                            buf: *[0]struct struct {} [],
                            elemsize: 0,
                            closed: 1,
                            elemtype: *runtime._type {
                                    size: 0,
                                    ptrdata: 0,
                                    hash: 670477339,
                                    tflag: tflagExtraStar (2),
                                    align: 1,
                                    fieldalign: 1,
                                    kind: 25,
                                    alg: *(*runtime.typeAlg)(0xd97270),
                                    gcdata: *1,
                                    str: 38913,
                                    ptrToThis: 273984,},
                            sendx: 0,
                            recvx: 0,
                            recvq: waitq<struct {}> {
                                    first: *sudog<struct {}> nil,
                                    last: *sudog<struct {}> nil,},
                            sendq: waitq<struct {}> {
                                    first: *sudog<struct {}> nil,
                                    last: *sudog<struct {}> nil,},
                            lock: runtime.mutex {key: 0},},
                    Response: *net/http.Response nil,
                    ctx: context.Context(*context.emptyCtx) ...,},
            TLS: *crypto/tls.ConnectionState {
                    Version: 772,
                    HandshakeComplete: true,
                    DidResume: false,
                    CipherSuite: 4866,
                    NegotiatedProtocol: "",
                    NegotiatedProtocolIsMutual: true,
                    ServerName: "",
                    PeerCertificates: []*crypto/x509.Certificate len: 2, cap: 2, [
                            *(*"crypto/x509.Certificate")(0xc0001d0b00),
                            *(*"crypto/x509.Certificate")(0xc0001d1080),
                    ],
                    VerifiedChains: [][]*crypto/x509.Certificate len: 0, cap: 0, nil,
                    SignedCertificateTimestamps: [][]uint8 len: 0, cap: 0, nil,
                    OCSPResponse: []uint8 len: 0, cap: 0, nil,
                    ekm: crypto/tls.(*cipherSuiteTLS13).exportKeyingMaterial.func1,
                    TLSUnique: []uint8 len: 0, cap: 0, nil,},}
    

    Basically, r.Body is empty:

       118:         if err != nil {
       119:                 return fmt.Errorf("error calling %s: %w", endpoint, err)
       120:         }
       121:         t := io.TeeReader(r.Body, os.Stdout)
       122:         t = io.TeeReader(r.Body, ioutil.Discard)
    => 123:         if err := json.NewDecoder(t).Decode(result); err != nil {
       124:                 if err != nil {
       125:                         return fmt.Errorf("error decoding %s response: %w", endpoint, err)
       126:                 }
       127:         }
       128:         return nil
    (dlv) p t
    io.Reader (unreadable unknown or unsupported kind: "invalid")
    (dlv) p result
    interface {}(*main.PlotFiles) *{
            FailedToOpen: []main.PlotData len: 0, cap: 0, nil,
            NotFound: []main.PlotData len: 0, cap: 0, nil,
            Plots: []main.PlotData len: 0, cap: 0, nil,
            Success: false,}
    
    

    The server isn't returning any plot file data. I'm running Chia 1.2.0.
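
    The error suggests failed_to_open_filenames (and likely not_found_filenames) come back as plain filename strings rather than plot objects, so a struct shaped roughly as below would decode it; this layout is an assumption based on the error message, not a confirmed fix:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // PlotData stands in for the per-plot object; the real struct has more
    // fields (size, plot id, keys, ...). Placeholder for this sketch.
    type PlotData map[string]interface{}

    // PlotFiles shaped to match the error above: the failed/not-found lists
    // appear to be plain filename strings, not plot objects.
    type PlotFiles struct {
    	FailedToOpen []string   `json:"failed_to_open_filenames"`
    	NotFound     []string   `json:"not_found_filenames"`
    	Plots        []PlotData `json:"plots"`
    	Success      bool       `json:"success"`
    }

    func main() {
    	sample := `{"failed_to_open_filenames": ["/plots/bad.plot"],
    	 "not_found_filenames": [], "plots": [{"file_size": 108000000000}],
    	 "success": true}`
    	var pf PlotFiles
    	fmt.Println(json.Unmarshal([]byte(sample), &pf), len(pf.Plots))
    }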

  • Added Dockerfile

    I've created a Dockerfile for this project for my own use, and thought it best to submit a PR in case it would be wanted as part of the project.

    If accepted, let me know if you would want any documentation on how to build/deploy using this and I can add a separate PR to add a section to the README :smiley:

  • Feature request - monitor multiple harvesters

    Dear @retzkek, thanks for putting together your docker-compose stack chiamon, which includes this chia_exporter.

    I am currently evaluating this setup, which looks promising, but if I am not mistaken there is no option yet to monitor multiple (n) harvesters.

    Any plans to do so in the near future?

    Thanks, Tobias

  • get pool state data

    Some data from the farmer API get_pool_state:

    # HELP chia_pool_current_difficulty Current difficulty on pool.
    # TYPE chia_pool_current_difficulty gauge
    chia_pool_current_difficulty{launcher_id="0x...",pool_url="https://pool.yyy.y"} 1
    
    # HELP chia_pool_current_points Current points on pool.
    # TYPE chia_pool_current_points gauge
    chia_pool_current_points{launcher_id="0x...",pool_url="https://pool.yyy.y"} 12
    
    # HELP chia_pool_points_acknowledged_24h Points acknowledged last 24h on pool.
    # TYPE chia_pool_points_acknowledged_24h gauge
    chia_pool_points_acknowledged_24h{launcher_id="0x...",pool_url="https://pool.yyy.y"} 5
    
    # HELP chia_pool_points_found_24h Points found last 24h on pool.
    # TYPE chia_pool_points_found_24h gauge
    chia_pool_points_found_24h{launcher_id="0x...",pool_url="https://pool.xchpool.org"} 5
    
    
  • add support for get_farmed_amount data

    Added data from the get_farmed_amount API:

    # HELP chia_wallet_farmed_amount Farmed amount
    # TYPE chia_wallet_farmed_amount gauge
    chia_wallet_farmed_amount{wallet_fingerprint="xxxx",wallet_id="1"} 0
    chia_wallet_farmed_amount{wallet_fingerprint="xxxx",wallet_id="2"} 0
    
    # HELP chia_wallet_fee_amount Fee amount amount
    # TYPE chia_wallet_fee_amount gauge
    chia_wallet_fee_amount{wallet_fingerprint="xxxx",wallet_id="1"} 0
    chia_wallet_fee_amount{wallet_fingerprint="xxxx",wallet_id="2"} 0
    
    # HELP chia_wallet_last_height_farmed Last height farmed
    # TYPE chia_wallet_last_height_farmed gauge
    chia_wallet_last_height_farmed{wallet_fingerprint="xxxx",wallet_id="1"} 0
    chia_wallet_last_height_farmed{wallet_fingerprint="xxxx",wallet_id="2"} 0
    
    # HELP chia_wallet_pool_reward_amount Pool Reward amount
    # TYPE chia_wallet_pool_reward_amount gauge
    chia_wallet_pool_reward_amount{wallet_fingerprint="xxxx",wallet_id="1"} 0
    chia_wallet_pool_reward_amount{wallet_fingerprint="xxxx",wallet_id="2"} 0
    
    # HELP chia_wallet_reward_amount Reward amount
    # TYPE chia_wallet_reward_amount gauge
    chia_wallet_reward_amount{wallet_fingerprint="xxxx",wallet_id="1"} 0
    chia_wallet_reward_amount{wallet_fingerprint="xxxx",wallet_id="2"} 0
    
  • error decoding get_blockchain_state response: json: cannot unmarshal into type int64

    error decoding get_blockchain_state response: json: cannot unmarshal number 10555548272510144512 into Go struct field .blockchain_state.Space of type int64

    I am receiving this message in the logs of the chia_exporter Docker image.

    Any ideas how to resolve it?

  • Validate endpoint URLs, rename flag url to full_node

    • Validation checks that endpoints are valid URLs and start with 'https://', since not enabling SSL causes weird EOF errors due to the protocol mismatch (a sketch of this check follows below).
    • A protocol mismatch is a fatal error, but other validation failures result in that endpoint being disabled. This allows selectively disabling unwanted endpoints, e.g. -farmer=disabled. Fixes #15.
    • The old '-url' flag becomes '-full_node' for consistency, but the old flag will continue to work as well.
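
    A sketch of what that validation could look like (illustrative only; the PR's actual implementation may differ):

    package main

    import (
    	"fmt"
    	"log"
    	"net/url"
    )

    // validateEndpoint returns the endpoint if it is a usable https URL,
    // an empty string if it should be treated as disabled, and exits on a
    // protocol mismatch. Illustrative sketch only.
    func validateEndpoint(name, endpoint string) string {
    	if endpoint == "" || endpoint == "disabled" {
    		return "" // endpoint disabled
    	}
    	u, err := url.Parse(endpoint)
    	if err != nil || u.Host == "" {
    		log.Printf("disabling %s: invalid URL %q", name, endpoint)
    		return ""
    	}
    	if u.Scheme != "https" {
    		// Plain HTTP against the node's TLS-only RPC port yields
    		// confusing EOF errors, so treat this as fatal.
    		log.Fatalf("%s endpoint %q must start with https://", name, endpoint)
    	}
    	return endpoint
    }

    func main() {
    	fmt.Println(validateEndpoint("full_node", "https://localhost:8555"))
    	fmt.Println(validateEndpoint("farmer", "disabled"))
    }
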
  • All RPC endpoints are queried regardless of flags

    During testing for #14 I noticed that even if I don't provide a -wallet flag (and so on), the endpoint is still queried, since it has a default localhost value. Even if I set the parameter to an empty value, i.e. chia_exporter -wallet='', the query is still attempted:

    2021/11/11 14:10:26 error calling get_wallets: Post "/get_wallets": unsupported protocol scheme ""
    

    This means that every endpoint the user doesn't actually care about for that exporter produces an error on every run, which fills up the logs quickly and makes it harder to debug any real issues.

    Ergo, there is no way to choose which endpoints are queried. While this is not a problem in a monolithic setup where all the pieces live on the same host, the moment you separate services you'll want to run multiple instances of the exporter with different scopes. For example, my setup right now is one full_node and two harvesters. For the full_node exporter I would want to disable the harvester requests, and on the harvester exporters only query the local harvester.

    Since it's more user-friendly to default to pulling everything from localhost, I understand wanting to set the default values to match that. What I'd recommend is that setting the string to 'false' or empty be interpreted as disabling that portion.

    TL;DR: for any setup more complex than a naive monolithic one, the ability to selectively enable endpoints would be grand.

  • Automate building of Docker images into the GitHub registry.

    There are already third-party images published on Docker Hub that are wildly out of date, so one automatically up-to-date official image is probably better than having people rely on unofficial images.

    • Any push to the master branch updates the :latest tag.
    • Any tag pushed to master also produces an image tagged with the tag name, so your releases will get stable tags.
    • Any push to any other branch is tagged with the branch name and the short hash of the commit.
  • Add collect for get_harvesters

    This collects metrics from the farmer get_harvesters endpoint (mentioned in the chia-blockchain v1.2.0 release notes but not documented in the wiki). Most of the same data as get_plots from the harvester is collected, but this collects once for all remote/connected harvesters. The plot count metrics have labels for pool_public_key, pool_contract_puzzle_hash, and size. These labels can be used to tell whether a plot is portable or solo, whether you plot different plot sizes, and so on.

    • sum(chia_farmer_plots{pool_contract_puzzle_hash!=""}) - total portable plots across all harvesters
    • sum(chia_farmer_plots{pool_public_key!=""}) - total solo plots across all harvesters

    This may obsolete func collectPlots.
