gin-metrics

gin-gonic/gin metrics exporter for Prometheus.

中文 (Chinese documentation)

Introduction

gin-metrics defines a set of metrics for a gin HTTP server and provides an easy way to use them.

Below is a detailed description of every metric.

Metric                  | Type      | Description
gin_request_total       | Counter   | total number of requests received by the server.
gin_request_uv          | Counter   | number of unique client IPs that have sent requests to the server.
gin_uri_request_total   | Counter   | number of requests received by the server, per URI.
gin_request_body_total  | Counter   | total size of request bodies received by the server, in bytes.
gin_response_body_total | Counter   | total size of response bodies sent by the server, in bytes.
gin_request_duration    | Histogram | time the server took to handle a request.
gin_slow_request_total  | Counter   | number of slow requests handled by the server (slower than the configured slow time, t=%d).

Grafana

See the grafana directory for details.

Installation

$ go get github.com/penglongli/gin-metrics

Usage

After running the example below, you can view the metrics at http://localhost:8080/metrics.

package main

import (
	"github.com/gin-gonic/gin"

	"github.com/penglongli/gin-metrics/ginmetrics"
)

func main() {
	r := gin.Default()

	// get global Monitor object
	m := ginmetrics.GetMonitor()

	// +optional set metric path, default /debug/metrics
	m.SetMetricPath("/metrics")
	// +optional set slow time, default 5s
	m.SetSlowTime(10)
	// +optional set request duration, default {0.1, 0.3, 1.2, 5, 10}
	// used to compute p95, p99
	m.SetDuration([]float64{0.1, 0.3, 1.2, 5, 10})

	// set middleware for gin
	m.Use(r)

	r.GET("/product/:id", func(ctx *gin.Context) {
		ctx.JSON(200, map[string]string{
			"productId": ctx.Param("id"),
		})
	})

	_ = r.Run()
}

Custom Metric

gin-metrics provides a way to define your own custom metrics.

Gauge

With a Gauge type metric, you can use three functions to change its value.

First, define a Gauge metric:

gaugeMetric := &ginmetrics.Metric{
    Type:        ginmetrics.Gauge,
    Name:        "example_gauge_metric",
    Description: "an example of gauge type metric",
    Labels:      []string{"label1"},
}

// Add metric to global monitor object
_ = ginmetrics.GetMonitor().AddMetric(gaugeMetric)

SetGaugeValue

SetGaugeValue sets the metric value directly.

_ = ginmetrics.GetMonitor().GetMetric("example_gauge_metric").SetGaugeValue([]string{"label_value1"}, 0.1)

Inc

Inc increases the metric value by 1.

_ = ginmetrics.GetMonitor().GetMetric("example_gauge_metric").Inc([]string{"label_value1"})

Add

Add adds a float64 value to the metric.

_ = ginmetrics.GetMonitor().GetMetric("example_gauge_metric").Add([]string{"label_value1"}, 0.2)

Counter

With a Counter type metric, you can use the Inc and Add functions; SetGaugeValue is not supported.
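
The snippet below reuses the AddMetric pattern from the Gauge example above (a sketch; the metric and label names are only illustrative):

counterMetric := &ginmetrics.Metric{
    Type:        ginmetrics.Counter,
    Name:        "example_counter_metric",
    Description: "an example of counter type metric",
    Labels:      []string{"label1"},
}

// Add metric to the global monitor object
_ = ginmetrics.GetMonitor().AddMetric(counterMetric)

// Inc increases the counter by 1, Add increases it by an arbitrary positive amount.
_ = ginmetrics.GetMonitor().GetMetric("example_counter_metric").Inc([]string{"label_value1"})
_ = ginmetrics.GetMonitor().GetMetric("example_counter_metric").Add([]string{"label_value1"}, 2)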

Histogram and Summary

For Histogram and Summary type metrics, use the Observe function.
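
A minimal sketch of a Histogram metric, assuming the Metric struct's Buckets field and an Observe(labelValues, value) signature that mirrors Add above (names are illustrative):

histogramMetric := &ginmetrics.Metric{
    Type:        ginmetrics.Histogram,
    Name:        "example_histogram_metric",
    Description: "an example of histogram type metric",
    Labels:      []string{"label1"},
    Buckets:     []float64{0.1, 0.3, 1.2, 5, 10},
}

// Add metric to the global monitor object
_ = ginmetrics.GetMonitor().AddMetric(histogramMetric)

// Observe records a single measurement, e.g. a duration in seconds.
_ = ginmetrics.GetMonitor().GetMetric("example_histogram_metric").Observe([]string{"label_value1"}, 0.42)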

Contributing

If you have a problem or a suggestion, feel free to open a new issue or pull request.

Comments
  • geoip metrics / how to add ?

    Hi @penglongli ,

    Hope you are all well !

    I would like to add some custom metrics from the following middleware:

    • https://github.com/cjgiridhar/gin-geo

    How should I do it with gin-metrics ?

    Thanks for any insights or inputs on that question :-)

    Cheers, Luc Michalski

  • How can I add a middleware on the metrics path?

    I have added a middleware to all of my server routes that first checks for an API key. If it's not valid, the server returns 404 unauthorised. I want to apply this middleware to the metricPath /metrics too. The only way I could think of is changing the Use method in the middleware from this:

    func (m *Monitor) Use(r *gin.Engine) {
    	m.initGinMetrics()
    	r.Use(m.monitorInterceptor)
    	r.GET(m.metricPath, func(ctx *gin.Context) {
    		promhttp.Handler().ServeHTTP(ctx.Writer, ctx.Request)
    	})
    }
    

    to

    func (m *Monitor) Use(r *gin.Engine, middleware gin.HandlerFunc) {
    	m.initGinMetrics()
    	r.Use(m.monitorInterceptor)
    	r.GET(m.metricPath, func(ctx *gin.Context) {
    		promhttp.Handler().ServeHTTP(ctx.Writer, ctx.Request)
    	}, middleware)
    }
    

    Is there another way to achieve this? Otherwise, I am happy to update this library.
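
    One approach that may work without changing the library (a sketch; apiKeyAuth stands in for your own API-key middleware): register the middleware on the engine before calling m.Use(r), since routes registered afterwards, including /metrics, inherit the engine's existing middleware.

    package main
    
    import (
    	"net/http"
    
    	"github.com/gin-gonic/gin"
    
    	"github.com/penglongli/gin-metrics/ginmetrics"
    )
    
    // apiKeyAuth is a stand-in for your own API-key check middleware.
    func apiKeyAuth() gin.HandlerFunc {
    	return func(ctx *gin.Context) {
    		if ctx.GetHeader("X-Api-Key") != "secret" {
    			ctx.AbortWithStatus(http.StatusUnauthorized)
    			return
    		}
    		ctx.Next()
    	}
    }
    
    func main() {
    	r := gin.Default()
    
    	// Register the API-key middleware globally before wiring up gin-metrics,
    	// so the /metrics route added by m.Use(r) also runs through it.
    	r.Use(apiKeyAuth())
    
    	m := ginmetrics.GetMonitor()
    	m.SetMetricPath("/metrics")
    	m.Use(r)
    
    	_ = r.Run()
    }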

  • Metric router does not expose all metrics

    When I use an additional router for metrics, it does not expose everything it should.

    After hitting a few paths, the metrics look like this:

    # HELP gin_request_body_total the server received request body size, unit byte
    # TYPE gin_request_body_total counter
    gin_request_body_total 0
    # HELP gin_request_duration the time server took to handle the request.
    # TYPE gin_request_duration histogram
    gin_request_duration_bucket{uri="",le="0.1"} 19
    gin_request_duration_bucket{uri="",le="0.3"} 19
    gin_request_duration_bucket{uri="",le="1.2"} 19
    gin_request_duration_bucket{uri="",le="5"} 19
    gin_request_duration_bucket{uri="",le="10"} 19
    gin_request_duration_bucket{uri="",le="+Inf"} 19
    gin_request_duration_sum{uri=""} 0.0004908500000000001
    gin_request_duration_count{uri=""} 19
    # HELP gin_request_total all the server received request num.
    # TYPE gin_request_total counter
    gin_request_total 19
    # HELP gin_request_uv_total all the server received ip num.
    # TYPE gin_request_uv_total counter
    gin_request_uv_total 1
    # HELP gin_uri_request_total all the server received request num with every uri.
    # TYPE gin_uri_request_total counter
    gin_uri_request_total{code="404",method="GET",uri=""} 19
    # HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
    # TYPE go_gc_duration_seconds summary
    go_gc_duration_seconds{quantile="0"} 0.000103925
    go_gc_duration_seconds{quantile="0.25"} 0.000125436
    go_gc_duration_seconds{quantile="0.5"} 0.000153302
    go_gc_duration_seconds{quantile="0.75"} 0.000233413
    go_gc_duration_seconds{quantile="1"} 0.000303185
    go_gc_duration_seconds_sum 0.000919261
    go_gc_duration_seconds_count 5
    # HELP go_goroutines Number of goroutines that currently exist.
    # TYPE go_goroutines gauge
    go_goroutines 15
    # HELP go_info Information about the Go environment.
    # TYPE go_info gauge
    go_info{version="go1.17.2"} 1
    # HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
    # TYPE go_memstats_alloc_bytes gauge
    go_memstats_alloc_bytes 1.626308e+07
    # HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
    # TYPE go_memstats_alloc_bytes_total counter
    go_memstats_alloc_bytes_total 3.4116704e+07
    # HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
    # TYPE go_memstats_buck_hash_sys_bytes gauge
    go_memstats_buck_hash_sys_bytes 4859
    # HELP go_memstats_frees_total Total number of frees.
    # TYPE go_memstats_frees_total counter
    go_memstats_frees_total 109443
    # HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
    # TYPE go_memstats_gc_cpu_fraction gauge
    go_memstats_gc_cpu_fraction 9.311936974007497e-05
    # HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
    # TYPE go_memstats_gc_sys_bytes gauge
    go_memstats_gc_sys_bytes 5.554864e+06
    # HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
    # TYPE go_memstats_heap_alloc_bytes gauge
    go_memstats_heap_alloc_bytes 1.626308e+07
    # HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
    # TYPE go_memstats_heap_idle_bytes gauge
    go_memstats_heap_idle_bytes 9.66656e+06
    # HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
    # TYPE go_memstats_heap_inuse_bytes gauge
    go_memstats_heap_inuse_bytes 1.867776e+07
    # HELP go_memstats_heap_objects Number of allocated objects.
    # TYPE go_memstats_heap_objects gauge
    go_memstats_heap_objects 23304
    # HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
    # TYPE go_memstats_heap_released_bytes gauge
    go_memstats_heap_released_bytes 2.94912e+06
    # HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
    # TYPE go_memstats_heap_sys_bytes gauge
    go_memstats_heap_sys_bytes 2.834432e+07
    # HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
    # TYPE go_memstats_last_gc_time_seconds gauge
    go_memstats_last_gc_time_seconds 1.6361441565619857e+09
    # HELP go_memstats_lookups_total Total number of pointer lookups.
    # TYPE go_memstats_lookups_total counter
    go_memstats_lookups_total 0
    # HELP go_memstats_mallocs_total Total number of mallocs.
    # TYPE go_memstats_mallocs_total counter
    go_memstats_mallocs_total 132747
    # HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
    # TYPE go_memstats_mcache_inuse_bytes gauge
    go_memstats_mcache_inuse_bytes 57600
    # HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
    # TYPE go_memstats_mcache_sys_bytes gauge
    go_memstats_mcache_sys_bytes 65536
    # HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
    # TYPE go_memstats_mspan_inuse_bytes gauge
    go_memstats_mspan_inuse_bytes 201688
    # HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
    # TYPE go_memstats_mspan_sys_bytes gauge
    go_memstats_mspan_sys_bytes 245760
    # HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
    # TYPE go_memstats_next_gc_bytes gauge
    go_memstats_next_gc_bytes 2.4931168e+07
    # HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
    # TYPE go_memstats_other_sys_bytes gauge
    go_memstats_other_sys_bytes 1.946981e+06
    # HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
    # TYPE go_memstats_stack_inuse_bytes gauge
    go_memstats_stack_inuse_bytes 1.015808e+06
    # HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
    # TYPE go_memstats_stack_sys_bytes gauge
    go_memstats_stack_sys_bytes 1.015808e+06
    # HELP go_memstats_sys_bytes Number of bytes obtained from system.
    # TYPE go_memstats_sys_bytes gauge
    go_memstats_sys_bytes 3.7178128e+07
    # HELP go_threads Number of OS threads created.
    # TYPE go_threads gauge
    go_threads 19
    # HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
    # TYPE process_cpu_seconds_total counter
    process_cpu_seconds_total 0.41
    # HELP process_max_fds Maximum number of open file descriptors.
    # TYPE process_max_fds gauge
    process_max_fds 8192
    # HELP process_open_fds Number of open file descriptors.
    # TYPE process_open_fds gauge
    process_open_fds 38
    # HELP process_resident_memory_bytes Resident memory size in bytes.
    # TYPE process_resident_memory_bytes gauge
    process_resident_memory_bytes 3.270656e+07
    # HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
    # TYPE process_start_time_seconds gauge
    process_start_time_seconds 1.63614414016e+09
    # HELP process_virtual_memory_bytes Virtual memory size in bytes.
    # TYPE process_virtual_memory_bytes gauge
    process_virtual_memory_bytes 2.025807872e+09
    # HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
    # TYPE process_virtual_memory_max_bytes gauge
    process_virtual_memory_max_bytes -1
    # HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
    # TYPE promhttp_metric_handler_requests_in_flight gauge
    promhttp_metric_handler_requests_in_flight 1
    # HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
    # TYPE promhttp_metric_handler_requests_total counter
    promhttp_metric_handler_requests_total{code="200"} 33
    promhttp_metric_handler_requests_total{code="500"} 0
    promhttp_metric_handler_requests_total{code="503"} 0
    

    And they should look like this:

    # HELP gin_request_body_total the server received request body size, unit byte
    # TYPE gin_request_body_total counter
    gin_request_body_total 0
    # HELP gin_request_duration the time server took to handle the request.
    # TYPE gin_request_duration histogram
    gin_request_duration_bucket{uri="",le="0.1"} 17
    gin_request_duration_bucket{uri="",le="0.3"} 17
    gin_request_duration_bucket{uri="",le="1.2"} 17
    gin_request_duration_bucket{uri="",le="5"} 17
    gin_request_duration_bucket{uri="",le="10"} 17
    gin_request_duration_bucket{uri="",le="+Inf"} 17
    gin_request_duration_sum{uri=""} 0.00041688400000000007
    gin_request_duration_count{uri=""} 17
    gin_request_duration_bucket{uri="/api/v1/posts",le="0.1"} 5
    gin_request_duration_bucket{uri="/api/v1/posts",le="0.3"} 5
    gin_request_duration_bucket{uri="/api/v1/posts",le="1.2"} 5
    gin_request_duration_bucket{uri="/api/v1/posts",le="5"} 5
    gin_request_duration_bucket{uri="/api/v1/posts",le="10"} 5
    gin_request_duration_bucket{uri="/api/v1/posts",le="+Inf"} 5
    gin_request_duration_sum{uri="/api/v1/posts"} 0.004988888
    gin_request_duration_count{uri="/api/v1/posts"} 5
    gin_request_duration_bucket{uri="/api/v1/users",le="0.1"} 5
    gin_request_duration_bucket{uri="/api/v1/users",le="0.3"} 5
    gin_request_duration_bucket{uri="/api/v1/users",le="1.2"} 5
    gin_request_duration_bucket{uri="/api/v1/users",le="5"} 5
    gin_request_duration_bucket{uri="/api/v1/users",le="10"} 5
    gin_request_duration_bucket{uri="/api/v1/users",le="+Inf"} 5
    gin_request_duration_sum{uri="/api/v1/users"} 0.007444537000000001
    gin_request_duration_count{uri="/api/v1/users"} 5
    # HELP gin_request_total all the server received request num.
    # TYPE gin_request_total counter
    gin_request_total 27
    # HELP gin_request_uv_total all the server received ip num.
    # TYPE gin_request_uv_total counter
    gin_request_uv_total 1
    # HELP gin_response_body_total the server send response body size, unit byte
    # TYPE gin_response_body_total counter
    gin_response_body_total 100
    # HELP gin_uri_request_total all the server received request num with every uri.
    # TYPE gin_uri_request_total counter
    gin_uri_request_total{code="404",method="GET",uri=""} 17
    gin_uri_request_total{code="404",method="GET",uri="/api/v1/posts"} 5
    gin_uri_request_total{code="404",method="GET",uri="/api/v1/users"} 5
    # HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
    # TYPE go_gc_duration_seconds summary
    go_gc_duration_seconds{quantile="0"} 0.000113773
    go_gc_duration_seconds{quantile="0.25"} 0.000113773
    go_gc_duration_seconds{quantile="0.5"} 0.00024375
    go_gc_duration_seconds{quantile="0.75"} 0.000290123
    go_gc_duration_seconds{quantile="1"} 0.000290123
    go_gc_duration_seconds_sum 0.000647646
    go_gc_duration_seconds_count 3
    # HELP go_goroutines Number of goroutines that currently exist.
    # TYPE go_goroutines gauge
    go_goroutines 12
    # HELP go_info Information about the Go environment.
    # TYPE go_info gauge
    go_info{version="go1.17.2"} 1
    # HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
    # TYPE go_memstats_alloc_bytes gauge
    go_memstats_alloc_bytes 1.5163464e+07
    # HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
    # TYPE go_memstats_alloc_bytes_total counter
    go_memstats_alloc_bytes_total 2.0640728e+07
    # HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
    # TYPE go_memstats_buck_hash_sys_bytes gauge
    go_memstats_buck_hash_sys_bytes 4859
    # HELP go_memstats_frees_total Total number of frees.
    # TYPE go_memstats_frees_total counter
    go_memstats_frees_total 58717
    # HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
    # TYPE go_memstats_gc_cpu_fraction gauge
    go_memstats_gc_cpu_fraction 0.00010777849461476967
    # HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
    # TYPE go_memstats_gc_sys_bytes gauge
    go_memstats_gc_sys_bytes 5.542624e+06
    # HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
    # TYPE go_memstats_heap_alloc_bytes gauge
    go_memstats_heap_alloc_bytes 1.5163464e+07
    # HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
    # TYPE go_memstats_heap_idle_bytes gauge
    go_memstats_heap_idle_bytes 7.192576e+06
    # HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
    # TYPE go_memstats_heap_inuse_bytes gauge
    go_memstats_heap_inuse_bytes 1.7088512e+07
    # HELP go_memstats_heap_objects Number of allocated objects.
    # TYPE go_memstats_heap_objects gauge
    go_memstats_heap_objects 27690
    # HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
    # TYPE go_memstats_heap_released_bytes gauge
    go_memstats_heap_released_bytes 5.28384e+06
    # HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
    # TYPE go_memstats_heap_sys_bytes gauge
    go_memstats_heap_sys_bytes 2.4281088e+07
    # HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
    # TYPE go_memstats_last_gc_time_seconds gauge
    go_memstats_last_gc_time_seconds 1.6361445335930374e+09
    # HELP go_memstats_lookups_total Total number of pointer lookups.
    # TYPE go_memstats_lookups_total counter
    go_memstats_lookups_total 0
    # HELP go_memstats_mallocs_total Total number of mallocs.
    # TYPE go_memstats_mallocs_total counter
    go_memstats_mallocs_total 86407
    # HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
    # TYPE go_memstats_mcache_inuse_bytes gauge
    go_memstats_mcache_inuse_bytes 57600
    # HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
    # TYPE go_memstats_mcache_sys_bytes gauge
    go_memstats_mcache_sys_bytes 65536
    # HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
    # TYPE go_memstats_mspan_inuse_bytes gauge
    go_memstats_mspan_inuse_bytes 152456
    # HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
    # TYPE go_memstats_mspan_sys_bytes gauge
    go_memstats_mspan_sys_bytes 163840
    # HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
    # TYPE go_memstats_next_gc_bytes gauge
    go_memstats_next_gc_bytes 1.631904e+07
    # HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
    # TYPE go_memstats_other_sys_bytes gauge
    go_memstats_other_sys_bytes 2.303285e+06
    # HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
    # TYPE go_memstats_stack_inuse_bytes gauge
    go_memstats_stack_inuse_bytes 884736
    # HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
    # TYPE go_memstats_stack_sys_bytes gauge
    go_memstats_stack_sys_bytes 884736
    # HELP go_memstats_sys_bytes Number of bytes obtained from system.
    # TYPE go_memstats_sys_bytes gauge
    go_memstats_sys_bytes 3.3245968e+07
    # HELP go_threads Number of OS threads created.
    # TYPE go_threads gauge
    go_threads 16
    # HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
    # TYPE process_cpu_seconds_total counter
    process_cpu_seconds_total 0.2
    # HELP process_max_fds Maximum number of open file descriptors.
    # TYPE process_max_fds gauge
    process_max_fds 8192
    # HELP process_open_fds Number of open file descriptors.
    # TYPE process_open_fds gauge
    process_open_fds 36
    # HELP process_resident_memory_bytes Resident memory size in bytes.
    # TYPE process_resident_memory_bytes gauge
    process_resident_memory_bytes 2.766848e+07
    # HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
    # TYPE process_start_time_seconds gauge
    process_start_time_seconds 1.63614452477e+09
    # HELP process_virtual_memory_bytes Virtual memory size in bytes.
    # TYPE process_virtual_memory_bytes gauge
    process_virtual_memory_bytes 1.79955712e+09
    # HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
    # TYPE process_virtual_memory_max_bytes gauge
    process_virtual_memory_max_bytes -1
    # HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
    # TYPE promhttp_metric_handler_requests_in_flight gauge
    promhttp_metric_handler_requests_in_flight 1
    # HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
    # TYPE promhttp_metric_handler_requests_total counter
    promhttp_metric_handler_requests_total{code="200"} 7
    promhttp_metric_handler_requests_total{code="500"} 0
    promhttp_metric_handler_requests_total{code="503"} 0
    

    The code used is from the examples.

  • [cve-2021-3121] prometheus using GoGo Protobuf before 1.3.2

    Hi!

    We are using your great package and it makes life a lot easier! Thank you so much for maintaining it. :1st_place_medal:

    When running a security scan of our code base, we're seeing the following dependency with a known vulnerability:

    Language: Generic
    Severity: HIGH
    Line: 0
    Column: 0
    SecurityTool: Trivy
    Confidence: MEDIUM
    File: /agent/_work/1/s/go.sum
    Code: github.com/gogo/protobuf
    Details: An issue was discovered in GoGo Protobuf before 1.3.2. plugin/unmarshal/unmarshal.go lacks certain index validation, aka the "skippy peanut butter" issue.
    Installed Version: "1.1.1", Update to Version: "v1.3.2" for fix this issue.
    PrimaryURL: https://avd.aquasec.com/nvd/cve-2021-3121.
    Cwe Links: (https://cwe.mitre.org/data/definitions/129.html)
    Type: Vulnerability
    ReferenceHash: 131cf165c82f2b67996ac29097eb5c4a934c7f27bce1572426670a0aff6b8d74
    

    I think you are getting it from the Prometheus packages. Would it be possible to see if you can update it?

    Thank you!

  • Metrics on separate gin server

    This adds UseWithoutExposingEndpoint to enable monitoring on different gin servers/endpoint groups, and Expose to expose the metrics endpoint on a different gin server/endpoint group.
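
    A minimal sketch of that split, assuming UseWithoutExposingEndpoint(r) and Expose(r) behave as described above:

    package main
    
    import (
    	"github.com/gin-gonic/gin"
    
    	"github.com/penglongli/gin-metrics/ginmetrics"
    )
    
    func main() {
    	appRouter := gin.Default()
    	metricsRouter := gin.New()
    
    	m := ginmetrics.GetMonitor()
    	m.SetMetricPath("/metrics")
    
    	// Collect metrics for the application router without serving /metrics on it.
    	m.UseWithoutExposingEndpoint(appRouter)
    	// Serve /metrics from the separate metrics-only router.
    	m.Expose(metricsRouter)
    
    	go func() {
    		_ = metricsRouter.Run(":9090")
    	}()
    	_ = appRouter.Run(":8080")
    }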

  • Is it possible to use with two different routers?

    Hi, I am using two different routers in my project, but when I try to use gin-metrics for both of them I get the error below.

    panic: duplicate metrics collector registration attempted

    My code is similar to this:

     func Router1() *gin.Engine {
    
    	r := gin.New()
    
    	m := ginmetrics.GetMonitor()
    	m.SetMetricPath("/metrics")
    	m.SetSlowTime(10)
    	m.SetDuration([]float64{0.1, 0.3, 1.2, 5, 10})
    	m.Use(r)
    
    	r.GET("/:key", someFunc())
    
    	return r
     }
    
    
     func Router2() *gin.Engine {
    
    	r := gin.New()
    
    
    	m := ginmetrics.GetMonitor()
    	m.SetMetricPath("/metrics")
    	m.SetSlowTime(10)
    	m.SetDuration([]float64{0.1, 0.3, 1.2, 5, 10})
    	m.Use(r)
    
    	r.GET("/:key", someFunc())
    
    	return r
     }
    
    
    func main() {
    
    	g.Go(func() error {
    		return routers.Router1().Run(":8080")
    	})
    
    	g.Go(func() error {
    		return routers.Router2().Run(":8081")
    	})
    
    	if err := g.Wait(); err != nil {
    		log.Fatal(err)
    	}
    }
    
  • PANIC: counter cannot decrease in value

    Hi,

    I'm using your library in an application that spawns 2 different servers (HTTP/HTTPS), and I use

    	if config.EnableMetrics {
    		log.Printf("Metrics enabled\n")
    		// get global Monitor object
    		m := ginmetrics.GetMonitor()
    		m.SetMetricPath("/metrics")
    		m.Use(r)
    	}
    

    to set up the metrics on both gin routers.

    I get this error:

    2021/07/21 09:33:51 [Recovery] 2021/07/21 - 09:33:51 panic recovered:
    POST /v1/key/decrypt/id-test HTTP/2.0
    Host: 127.0.0.1:3000
    Accept-Encoding: gzip
    Content-Type: application/json
    User-Agent: Go-http-client/2.0
    
    
    counter cannot decrease in value
    /home/furio/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/counter.go:109 (0x9905e4)
            (*counter).Add: panic(errors.New("counter cannot decrease in value"))
    /home/furio/go/pkg/mod/github.com/penglongli/[email protected]/ginmetrics/metric.go:67 (0x9bd92f)
            (*Metric).Add: m.vec.(*prometheus.CounterVec).WithLabelValues(labelValues...).Add(value)
    /home/furio/go/pkg/mod/github.com/penglongli/[email protected]/ginmetrics/middleware.go:117 (0x9bea24)
            (*Monitor).ginMetricHandle: _ = m.GetMetric(metricRequestBody).Add(nil, float64(r.ContentLength))
    /home/furio/go/pkg/mod/github.com/penglongli/[email protected]/ginmetrics/middleware.go:97 (0x9be68b)
            (*Monitor).monitorInterceptor: m.ginMetricHandle(ctx, startTime)
    /home/furio/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:165 (0x9424b9)
            (*Context).Next: c.handlers[c.index](c)
    /home/furio/go/pkg/mod/github.com/gin-gonic/[email protected]/recovery.go:99 (0x9424a0)
            CustomRecoveryWithWriter.func1: c.Next()
    /home/furio/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:165 (0x941593)
            (*Context).Next: c.handlers[c.index](c)
    /home/furio/go/pkg/mod/github.com/gin-gonic/[email protected]/logger.go:241 (0x941552)
            LoggerWithConfig.func1: c.Next()
    /home/furio/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:165 (0x937a49)
            (*Context).Next: c.handlers[c.index](c)
    /home/furio/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:489 (0x937a2f)
            (*Engine).handleHTTPRequest: c.Next()
    /home/furio/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:445 (0x93751b)
            (*Engine).ServeHTTP: engine.handleHTTPRequest(c)
    /usr/local/go/src/net/http/server.go:2887 (0x6b5522)
            serverHandler.ServeHTTP: handler.ServeHTTP(rw, req)
    /usr/local/go/src/net/http/server.go:3459 (0x6b814c)
            initALPNRequest.ServeHTTP: h.h.ServeHTTP(rw, req)
    /usr/local/go/src/net/http/h2_bundle.go:5723 (0x69824a)
            (*http2serverConn).runHandler: handler(rw, req)
    /usr/local/go/src/runtime/asm_amd64.s:1371 (0x4716c0)
            goexit: BYTE    $0x90   // NOP
    
    [GIN-debug] [WARNING] Headers were already written. Wanted to override status code 200 with 500
    [GIN] 2021/07/21 - 09:33:51 | 500 |       1.631ms |       127.0.0.1 | POST     "/v1/key/decrypt/id-test"
    

    And I don't get why this is happening.

  • Bloom filter uses heap and seems unused

    I used pprof to investigate memory usage (heap) in my application, and I noticed that the bloomFilter is using approx 4Mb of heap.

    By looking at the code, it seems the bloomFilter is of no use at all.

    It is used in ginmetrics/middleware.go to store IP addresses, but those IPs seem to never be used.

    Unless there is another usage I haven't understood, would it be possible to remove the bloomFilter from this lib? It would free 4Mb of heap on every application that uses this library.

  • Custom prometheus metrics in separated package

    Hi, this is not an issue but more a request for an example/docs on how to use your lib.

    I want to define custom metrics in a separate package and import them later where I initialize the routes. I tried a few ways but without success. Could you help with that?
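
    One pattern that should work (a sketch; the appmetrics package and metric name below are purely illustrative): register the metric on the global monitor from your own package, then call that registration during startup before m.Use(r), and increment it later via ginmetrics.GetMonitor().GetMetric(appmetrics.OrdersCreated).Inc(...).

    // Package appmetrics (hypothetical) registers custom metrics on the global monitor.
    package appmetrics
    
    import "github.com/penglongli/gin-metrics/ginmetrics"
    
    // OrdersCreated is the metric name used by the rest of the application.
    const OrdersCreated = "orders_created_total"
    
    // Register adds the custom metrics; call it once during startup,
    // before wiring the monitor into the gin engine.
    func Register() error {
    	return ginmetrics.GetMonitor().AddMetric(&ginmetrics.Metric{
    		Type:        ginmetrics.Counter,
    		Name:        OrdersCreated,
    		Description: "number of orders created",
    		Labels:      []string{"status"},
    	})
    }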

  • The new version triggers a security error

    If I try to get the new version, I receive a version without the latest fix; if I skip the proxy I get this:

    GOPROXY=direct go get -d github.com/penglongli/[email protected]
    go: downloading github.com/penglongli/gin-metrics v0.1.3
    verifying github.com/penglongli/[email protected]: checksum mismatch
            downloaded: h1:V9UZzIqmsIrYLVQbsScBe9r7UlxbN1SdjS3qFOyP9lE=
            go.sum:     h1:IxbEwCtybuq8weteK0AYpE+afxj2U1+qCYIcumy0eaQ=
    
    SECURITY ERROR
    This download does NOT match an earlier download recorded in go.sum.
    The bits may have been replaced on the origin server, or an attacker may
    have intercepted the download attempt.
    

    Is it possible that v0.1.3 was already tagged somewhere?

    Edit: Doing

    GONOSUMDB=github.com/penglongli GOPROXY=direct go get -d
    

    It works (skipping the security check for v0.1.3).

  • SetMetricPath has no effect

    Current version: github.com/penglongli/gin-metrics v0.1.0

    Symptom: after calling m.SetMetricPath("/metrics"), the metrics endpoint is still served at defaultMetricPath (/debug/metrics).

    Cause: the source reads:

    // SetMetricPath set metricPath property. metricPath is used for Prometheus
    // to get gin server monitoring data.
    func (m *Monitor) SetMetricPath(path string) {
    	m.metricPath = defaultMetricPath
    }

    Shouldn't this be m.metricPath = path?

  • bug: when adding a metric, if the function fails, the error is swallowed and a "metric type '%d' not existed" error returned

    For example:

    package main
    
    import (
    	"fmt"
    
    	"github.com/penglongli/gin-metrics/ginmetrics"
    )
    
    func main() {
    	err := ginmetrics.GetMonitor().AddMetric(&ginmetrics.Metric{
    		Name: "test",
    		Type: ginmetrics.Histogram,
    	})
    	fmt.Println(err)
    }
    

    returns

    metric type '3' not existed.

    when it should return

    metric 'test' is histogram type, cannot lose bucket param.

    https://github.com/penglongli/gin-metrics/blob/b66ef4a3274e50cfc651a5639ee4a66bcbf5d0b8/ginmetrics/types.go#L111

  • invalid pattern on singleton GetMonitor()

    In order to avoid a data race in GetMonitor(), gin-metrics has to use sync.Once or sync.Mutex.

    For example with sync.Once:

    var (
    	defaultDuration = []float64{0.1, 0.3, 1.2, 5, 10}
    	monitor         *Monitor
    	onceMonitor     sync.Once
    
    	promTypeHandler = map[MetricType]func(metric *Metric) error{
    		Counter:   counterHandler,
    		Gauge:     gaugeHandler,
    		Histogram: histogramHandler,
    		Summary:   summaryHandler,
    	}
    )
    
    // GetMonitor used to get global Monitor object,
    // this function returns a singleton object.
    func GetMonitor() *Monitor {
    	onceMonitor.Do(func() {
    		monitor = &Monitor{
    			metricPath:  defaultMetricPath,
    			slowTime:    defaultSlowTime,
    			reqDuration: defaultDuration,
    			metrics:     make(map[string]*Metric),
    		}
    	})
    
    	return monitor
    }
    
    
  • m.ginMetricHandle is not executed if ctx.Next() panics

    If the next middleware panics, then m.ginMetricHandle is not executed. I know that in this case the response cannot be set correctly, but at least we can count it and record the metrics that are still correct (client IP, latency, ...).

    This would fix that case:

    // monitorInterceptor as gin monitor middleware.
    func (m *Monitor) monitorInterceptor(ctx *gin.Context) {
    	if ctx.Request.URL.Path != m.metricPath {
    		startTime := time.Now()
    
    		defer func() {
    			// after request
    			m.ginMetricHandle(ctx, startTime)
    		}()
    	}
    
    	// execute normal process.
    	ctx.Next()
    }
    
  • Add fixes against race conditions

    • Ensure that the monitor is created only once by making its initialization atomic; this also protects against race conditions.
    • Make the BloomFilter thread-safe.
    • In addition, add GitHub Actions to automate testing. Under the current setup, the tests are triggered whenever a PR is created against the master branch or whenever commits are merged into master.