Caddy HTTP Rate Limit Module

This module implements both internal and distributed HTTP rate limiting. Requests can be rejected after a specified rate limit is hit.

WORK IN PROGRESS: Please note that this module is still unfinished and may have bugs. Please try it out and file bug reports - thanks!

Features

  • Multiple rate limit zones
  • Sliding window algorithm
  • Scalable ring buffer implementation
    • Buffer pooling
    • Goroutines: 1 (to clean up old buffers)
    • Memory O(Kn) where:
      • K = events allowed in window (constant, configurable)
      • n = number of rate limits allocated in zone (configured by zone key; constant or dynamic)
  • RL state persisted through config reloads
  • Automatically sets Retry-After header
  • Optional jitter for retry times
  • Configurable memory management
  • Distributed rate limiting across a cluster
  • Caddyfile support

PLANNED:

  • Ability to define matchers in zones with Caddyfile
  • Smoothed estimates of distributed rate limiting
  • RL state persisted in storage for resuming after restarts
  • Admin API endpoints to inspect or modify rate limits

Building

To build Caddy with this module, use xcaddy:

$ xcaddy build --with github.com/mholt/caddy-ratelimit

Overview

The rate_limit HTTP handler module lets you define rate limit zones, which have a unique name of your choosing. A rate limit zone is 1:1 with a rate limit (i.e. events per duration).

A zone also has a key, which is different from its name. Keys associate 1:1 with rate limiters, implemented as ring buffers; i.e. a new key implies allocating a new ring buffer. Keys can be static (no placeholders; same for every request), in which case only one rate limiter will be allocated for the whole zone. Or, keys can contain placeholders which can be different for every request, in which case a zone may contain numerous rate limiters depending on the result of expanding the key.
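
For example, using the Caddyfile syntax shown later in this README (the zone names here are invented for illustration), the first zone allocates a single rate limiter shared by every request, while the second allocates one rate limiter per client IP:

rate_limit {
	zone by_everyone {
		key    static
		events 100
		window 1m
	}
	zone by_client_ip {
		key    {http.request.remote.host}
		events 100
		window 1m
	}
}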

A zone is synonymous with a rate limit, i.e. a number of events per duration. Both window and max_events are required configuration for a zone; for example: 100 events every 1 minute. Because this module uses a sliding window algorithm, it works by looking back window amount of time and checking whether max_events have already occurred in that timeframe. If so, an internal HTTP 429 error is generated and returned, invoking error routes which you have defined (if any). Otherwise, a reservation is made and the event is allowed through.
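
To make the sliding window and ring buffer ideas concrete, here is a small, self-contained Go sketch (a simplification for illustration only, not the module's actual code; all names are made up). It remembers the timestamps of the last max_events allowed events and admits a new event only if the oldest remembered timestamp has fallen out of the window:

package main

import (
	"fmt"
	"time"
)

// ringLimiter is a simplified sliding-window limiter: it remembers the
// timestamps of the last maxEvents allowed events in a ring buffer.
type ringLimiter struct {
	window time.Duration
	ring   []time.Time // len(ring) == maxEvents
	cursor int         // index of the oldest remembered timestamp
}

func newRingLimiter(maxEvents int, window time.Duration) *ringLimiter {
	return &ringLimiter{window: window, ring: make([]time.Time, maxEvents)}
}

// allow reserves an event if fewer than maxEvents occurred in the last window.
func (rl *ringLimiter) allow(now time.Time) bool {
	oldest := rl.ring[rl.cursor]
	// If the oldest remembered event is still inside the window, then
	// maxEvents have already happened in that timeframe: reject.
	if !oldest.IsZero() && now.Sub(oldest) < rl.window {
		return false
	}
	// Otherwise overwrite the oldest slot with this event's timestamp.
	rl.ring[rl.cursor] = now
	rl.cursor = (rl.cursor + 1) % len(rl.ring)
	return true
}

func main() {
	rl := newRingLimiter(2, 5*time.Second) // 2 events per 5s
	now := time.Now()
	fmt.Println(rl.allow(now))                    // true
	fmt.Println(rl.allow(now))                    // true
	fmt.Println(rl.allow(now.Add(1*time.Second))) // false: 2 events already in the last 5s
	fmt.Println(rl.allow(now.Add(6*time.Second))) // true: the window has slid past the first event
}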

Each zone may optionally filter the requests it applies to by specifying request matchers.

Unlike nginx's rate limit module, this one does not require you to set a memory bound. Instead, rate limiters are scanned every so often and expired ones are deleted so their memory can be recovered by the garbage collector: Caddy does not drop rate limiters on the floor and forget events like nginx does.
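
Continuing the sketch above (again illustrative, not the module's implementation), a periodic sweep over a map of limiters could delete any limiter whose newest event has aged out of its window, letting the garbage collector reclaim the ring buffer:

// sweep deletes limiters whose newest remembered event is older than the
// window, so their ring buffers can be garbage-collected.
// Illustrative only; not the module's actual code.
func sweep(limiters map[string]*ringLimiter, now time.Time) {
	for key, rl := range limiters {
		newestIdx := (rl.cursor - 1 + len(rl.ring)) % len(rl.ring)
		newest := rl.ring[newestIdx]
		if newest.IsZero() || now.Sub(newest) >= rl.window {
			delete(limiters, key) // deleting during range is safe in Go
		}
	}
}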

Distributed rate limiting

With a little bit more CPU, I/O, and a teensy bit more memory overhead, this module distributes its rate limit state across a cluster. A cluster is simply defined as other rate limit modules that are configured to use the same storage.

Distributed RL works by periodically writing its internal RL state to storage, while also periodically reading other instances' RL state from storage, then accounting for their states when making allowance decisions. In order for this to work, all instances in the cluster must have the exact same RL zone configurations.

This synchronization algorithm is inherently approximate, but also eventually consistent (and is similar to what other enterprise-only rate limiters do). Its performance depends heavily on parameter tuning (e.g. how often to read and write), configured rate limit windows and event maximums, and performance characteristics of the underlying storage implementation. (It will be fairly heavy on reads, but writes will be lighter, even if more frequent.)
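
Roughly speaking, the allowance decision then adds the most recently read counts from peers to the local count before comparing against the zone's limit. Here is a hedged Go sketch of that accounting idea (names are illustrative; this is not the module's actual algorithm or storage format):

// distributedAllow sketches the accounting idea: count this instance's own
// events in the window, add the counts most recently read from peers (via
// the shared storage), and allow the event only if the total is still under
// the zone's max_events. Illustrative names; not the module's actual code.
func distributedAllow(localEventsInWindow int, peerEventsInWindow []int, maxEvents int) bool {
	total := localEventsInWindow
	for _, peer := range peerEventsInWindow {
		total += peer
	}
	return total < maxEvents
}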

Syntax

This is an HTTP handler module, so it can be used wherever http.handlers modules are accepted.

JSON config

{
	"handler": "rate_limit",
	"rate_limits": {
		"<name>": {
			"match": [],
			"key": "",
			"window": "",
			"max_events": 0
		}
	},
	"distributed": {
		"write_interval": "",
		"read_interval": ""
	},
	"storage": {},
	"jitter": 0.0,
	"sweep_interval": ""
}

All fields are optional, but to be useful, you'll need to define at least one zone, and a zone requires window and max_events to be set. Keys can be static (no placeholders) or dynamic (with placeholders). Matchers can be used to filter requests that apply to a zone. Replace <name> with your RL zone's name.

To enable distributed RL, set distributed to a non-null object. The default read and write intervals are 5s, but you should tune these for your individual deployments.

Storage customizes the storage module that is used. Like normal Caddy convention, all instances with the same storage configuration are considered to be part of a cluster.

Jitter is an optional percentage that adds random variance to the Retry-After time to avoid stampeding herds.

Sweep interval configures how often to scan for expired rate limiters. The default is 1m.
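
Putting those fields together, a handler that tunes these knobs might look something like this (an illustrative sketch; the zone name, intervals, storage path, and values are all made up):

{
	"handler": "rate_limit",
	"rate_limits": {
		"per_client": {
			"key": "{http.request.remote.host}",
			"window": "1m",
			"max_events": 60
		}
	},
	"distributed": {
		"read_interval": "10s",
		"write_interval": "30s"
	},
	"storage": {
		"module": "file_system",
		"root": "/var/lib/caddy/ratelimit"
	},
	"jitter": 0.2,
	"sweep_interval": "1m"
}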

Caddyfile config

As with all non-standard HTTP handler modules, this directive is not known to the Caddyfile adapter and so it must be "ordered" manually using global options unless it only appears within a route block. This ordering usually works well, but you should use discretion:

{
	order rate_limit before basicauth
}

Here is the syntax. See the JSON config section above for explanations about each property:

rate_limit {
	zone <name> {
		key    <key>
		window <duration>
		events <max_events>
	}
	distributed {
		read_interval  <duration>
		write_interval <duration>
	}
	storage <module...>
	jitter  <percent>
	sweep_interval <duration>
}

Like with the JSON config, all subdirectives are optional and have sensible defaults (but you will obviously want to specify at least one zone).

Multiple zones can be defined. Distributed RL can be enabled just by specifying distributed if you want to use its default settings.

Examples

We'll show an equivalent JSON and Caddyfile example that defines two rate limit zones: static_example and dynamic_example.

In the static_example zone, there is precisely one ring buffer allocated because the key is static (no placeholders), and we also demonstrate defining a matcher set to select which requests the rate limit applies to. Only 100 GET requests will be allowed through every minute, across all clients.

In the dynamic_example zone, the key is dynamic (has a placeholder), and in this case we're using the client's IP address ({http.request.remote.host}). We allow only 2 requests per client IP in the last 5 seconds from any given time.

We also enable distributed rate limiting. By deploying this config to two or more instances sharing the same storage module (which we did not define here, so Caddy's global storage config will be used), they will act approximately as one instance when making rate limiting decisions.

JSON example

{
	"apps": {
		"http": {
			"servers": {
				"demo": {
					"listen": [":80"],
					"routes": [
						{
							"handle": [
								{
									"handler": "rate_limit",
									"rate_limits": {
										"static_example": {
											"match": [
												{"method": ["GET"]}
											],
											"key": "static",
											"window": "1m",
											"max_events": 100
										},
										"dynamic_example": {
											"key": "{http.request.remote.host}",
											"window": "5s",
											"max_events": 2
										}
									},
									"distributed": {}
								},
								{
									"handler": "static_response",
									"body": "I'm behind the rate limiter!"
								}
							]
						}
					]
				}
			}
		}
	}
}

Caddyfile example

(The Caddyfile does not yet support defining matchers for RL zones, so that has been omitted from this example.)

{
	order rate_limit before basicauth
}

:80

rate_limit {
	distributed
	zone static_example {
		key    static
		events 100
		window 1m
	}
	zone dynamic_example {
		key    {remote_host}
		events 2
		window 5s
	}
}

respond "I'm behind the rate limiter!"
Comments
  • Dynamic zone key for network block of {http.request.remote.host} with certain prefix

    When used as the key of a dynamic zone, can {http.request.remote.host} be reduced to its network block for a certain prefix?

    Assuming {http.request.remote.host} is 1.2.3.4 and a function reduces it to a /24, requests from any address in the range 1.2.3.0-255 would be grouped (and rate limited together).

    "key": "reduce_to_network_block({http.request.remote.host}, '/24')",
    
  • Issue with having multiple zones for a single key

    So, I'm trying to use this plugin to limit requests to a single user in the following way: 1 req/sec and 10 req/min.

    rate_limit {
      zone header_limiting_min {
        key    {header.authorization}
        events 10
        window 1m
      }

      zone header_limiting_sec {
        key    {header.authorization}
        events 1
        window 1s
      }
    }
    

    The issue I'm having is that the plugin seems to be counting failed requests as well so I hit the rate-limit after 10 requests whether they were successful or not.

    The other plugin works without this issue but it doesn't support the retry-after header.

    Is this a bug, or is there a better way of achieving what I want?

  • How to set Consul storage

    I am trying to get the distributed config working.

    Currently we have our storage on consul via: https://github.com/pteich/caddy-tlsconsul.

    Config looks like this for now:

    {
      "handler": "rate_limit",
      "rate_limits": {
        "msft_scanners": {
          "match": [
            {
              "remote_ip": {
                "ranges": [
                  "10.10.10.1/24"
                ]
              }
            }
          ], 
          "key": "msft",
          "window": "1m",
          "max_events": 2
        } 
      },
      "distributed": {
        "write_interval": "30s",
        "read_interval": "10s"
      }
    }
    

    On start I get an error regarding uuid:

    run: loading initial config: loading new config: loading http app module: provision http: server nzm: setting up route handlers: route 0: loading handler modules: position 0: loading module 'rate_limit': provision http.handlers.rate_limit: open /etc/caddyserver/.local/share/caddy/instance.uuid: no such file or directory
    
  • Rate limit based on path, but doesn't work

    Hi, I want to do rate limiting based on the host + path, but the config below didn't work.

    {
            order rate_limit before basicauth
    }
    
    https://example.com {
            rate_limit {
                    distributed
                    zone dynamic_example {
                            key {remote_host}
                            events 1000
                            window 60s
                    }
                    zone pair_api {
                           match {
                                   path /api/auth/pair
                           }
                           key {remote_host}/api/auth/pair
                           events 1
                           window 10s
                    }
            }
    }
    
    I followed this instruction

    Each zone may optionally filter the requests it applies to by specifying request matchers.

    Is anything wrong with this snippet? Thanks in advance.

  • Cannot build using xcaddy 0.3.1

    Getting an error building with the new xcaddy:

    root@ip:/tmp# xcaddy build --with github.com/mholt/caddy-ratelimit --output ./caddy1
    2022/10/03 11:16:04 [INFO] Temporary folder: /tmp/buildenv_2022-10-03-1116.2667582345
    2022/10/03 11:16:04 [INFO] Writing main module: /tmp/buildenv_2022-10-03-1116.2667582345/main.go
    package main
    
    import (
            caddycmd "github.com/caddyserver/caddy/v2/cmd"
    
            // plug in Caddy modules here
            _ "github.com/caddyserver/caddy/v2/modules/standard"
            _ "github.com/mholt/caddy-ratelimit"
    )
    
    func main() {
            caddycmd.Main()
    }
    2022/10/03 11:16:04 [INFO] Initializing Go module
    2022/10/03 11:16:04 [INFO] exec (timeout=10s): /usr/local/go/bin/go mod init caddy 
    go: creating new go.mod: module caddy
    go: to add module requirements and sums:
            go mod tidy
    2022/10/03 11:16:04 [INFO] Pinning versions
    2022/10/03 11:16:04 [INFO] exec (timeout=0s): /usr/local/go/bin/go get -d -v github.com/caddyserver/caddy/v2 
    go: added github.com/beorn7/perks v1.0.1
    go: added github.com/caddyserver/caddy/v2 v2.6.1
    go: added github.com/caddyserver/certmagic v0.17.1
    go: added github.com/cespare/xxhash/v2 v2.1.2
    go: added github.com/fsnotify/fsnotify v1.5.1
    go: added github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0
    go: added github.com/golang/mock v1.6.0
    go: added github.com/golang/protobuf v1.5.2
    go: added github.com/google/uuid v1.3.0
    go: added github.com/klauspost/cpuid/v2 v2.1.0
    go: added github.com/libdns/libdns v0.2.1
    go: added github.com/lucas-clemente/quic-go v0.28.2-0.20220813150001-9957668d4301
    go: added github.com/marten-seemann/qpack v0.2.1
    go: added github.com/marten-seemann/qtls-go1-18 v0.1.2
    go: added github.com/marten-seemann/qtls-go1-19 v0.1.0
    go: added github.com/matttproud/golang_protobuf_extensions v1.0.1
    go: added github.com/mholt/acmez v1.0.4
    go: added github.com/miekg/dns v1.1.50
    go: added github.com/nxadm/tail v1.4.8
    go: added github.com/onsi/ginkgo v1.16.4
    go: added github.com/prometheus/client_golang v1.12.2
    go: added github.com/prometheus/client_model v0.2.0
    go: added github.com/prometheus/common v0.32.1
    go: added github.com/prometheus/procfs v0.7.3
    go: added go.uber.org/atomic v1.9.0
    go: added go.uber.org/multierr v1.6.0
    go: added go.uber.org/zap v1.21.0
    go: added golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa
    go: added golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e
    go: added golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3
    go: added golang.org/x/net v0.0.0-20220812165438-1d4ff48094d1
    go: added golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10
    go: added golang.org/x/term v0.0.0-20210927222741-03fcf44c2211
    go: added golang.org/x/text v0.3.8-0.20211004125949-5bd84dd9b33b
    go: added golang.org/x/tools v0.1.10
    go: added golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1
    go: added google.golang.org/protobuf v1.28.0
    go: added gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7
    2022/10/03 11:16:06 [INFO] exec (timeout=0s): /usr/local/go/bin/go get -d -v github.com/mholt/caddy-ratelimit github.com/caddyserver/caddy/v2 
    go: github.com/mholt/caddy-ratelimit@upgrade (v0.0.0-20220930195153-598f4b82c131) requires github.com/caddyserver/caddy/[email protected], not github.com/caddyserver/caddy/v2@upgrade (v2.6.1)
    2022/10/03 11:16:06 [FATAL] exit status 1
    
  • Can't build 2.5.0 with caddy-ratelimit due to quic errors

    Made a clean container from the image golang:1.18.1-alpine3.15 and added xcaddy 0.3.0 to it. Then I tried to run the following command:

    xcaddy build \
             --with github.com/mholt/caddy-ratelimit \
             --output caddy
    

    It errors out with the following:

    2022/04/28 04:21:52 [INFO] exec (timeout=0s): /usr/local/go/bin/go build -o /go/caddy -ldflags -w -s -trimpath
    # github.com/caddyserver/caddy/v2
    /go/pkg/mod/github.com/caddyserver/caddy/[email protected]/listeners.go:187:68: undefined: quic.EarlySession
    2022/04/28 04:24:07 [INFO] Cleaning up temporary folder: /tmp/buildenv_2022-04-28-0421.740480886
    2022/04/28 04:24:07 [FATAL] exit status 2
    

    The complete output of this command can be found here.

    One thing I notice looking through it is that it appears getting this package forces an upgrade of github.com/lucas-clemente/quic-go to v0.27.0. And I don't see a downgrade back to v0.26.0 later as this line suggests should happen. I'm not sure why that would be but I know from https://github.com/caddyserver/xcaddy/issues/99 that Caddy v2.5.0 is not compatible with quic v0.27.0 so that seems to be the problem.

  • Error during parsing: rate_limit is not a registered directive

    Hi, I am trying to put together a Caddyfile. I am a complete newbie, literally started using Caddy today, so it's entirely possible that I misunderstood something. The error I am getting is: Error during parsing: rate_limit is not a registered directive. Dockerfile at the end. If I take out the order directive, I get /etc/caddy/Caddyfile:49: unrecognized directive: rate_limit

    {
      admin off

      order rate_limit before basicauth

      log {
        output file /var/log/access.log {
          roll_size 40MiB
          roll_uncompressed
          roll_local_time
        }
      }
    }

    (common) {
      header /* {
        -Server
      }
    }

    http://mtkk.localhost {
      @static_asset {
        path_regexp static \.(webp|svg|css|js|jpg|png|gif|ico|woff|woff2)$
      }

      @hashed_asset {
        path_regexp static \.(css|js)$
      }

      log

      header {
        # # disable FLoC tracking
        # Permissions-Policy interest-cohort=()

        # # enable HSTS
        # Strict-Transport-Security max-age=31536000;

        # disable clients from sniffing the media type
        X-Content-Type-Options nosniff

        # clickjacking protection
        X-Frame-Options DENY

        # keep referrer data off of HTTP connections
        Referrer-Policy no-referrer-when-downgrade
      }

      rate_limit {
        distributed
        zone static_example {
          key    static
          events 100
          window 1m
        }
      }

      root * /var/www/uploads/img/
      file_server @static_asset
      reverse_proxy adonis_app:3333
      encode zstd gzip

      header ?Cache-Control max-age=3600
      header @hashed_asset Cache-Control max-age=31536000

      import common
    }

    I built a custom Dockerfile:

    FROM caddy:2.5.2-builder-alpine AS builder
    
    RUN xcaddy build \
    --with github.com/mholt/caddy-ratelimit
    
    FROM caddy:2.5.2-alpine
    
    COPY --from=builder /usr/bin/caddy /usr/bin/caddy
    

    Edit: removed extra }

  • Plugin does not honor the header directive

    In our Caddyfile we remove the Server header with -Server. However, when the plugin returns a 429 error, it adds the Server header back, not respecting the config. How can we prevent this?

    {
        order rate_limit before basicauth
    }
    :8443 {
        tls /etc/ssl/my.crt /etc/ssl/my.key
        header {
            -Server
            Strict-Transport-Security max-age=31536000;
            X-Content-Type-Options nosniff
            X-Frame-Options DENY
            Referrer-Policy no-referrer-when-downgrade
            X-XSS-Protection "1; mode=block"
        }
        encode gzip
        log {
            output discard
        }
        reverse_proxy http://my-api:8080
        rate_limit {
                distributed
                zone static_example {
                    key    static
                    events 5
                    window 1m
                }
        }
    }
    
  • Need guide

    Hey there,

    How can I enable this just for a specific route?

    route /api/gateway/* {
        # Just wanna rate limit here
        # I wanna allow user to send 100 req per min
        rewrite * /graphql
        reverse_proxy http://tribe-gateway.development.svc.cluster.local
    }

    Thanks!

  • Multiple routes issue

    I have the following Caddyfile:

    
    {
      order rate_limit before basicauth
      admin off
    }
    
    (rate_limit_num_per_min) {
      rate_limit {
        zone register_limit {
          key    {http.request.remote.host}
          events {args.0}
          window {args.1}s
        }
      }
    }

    localhost {
    
      encode zstd gzip
      reverse_proxy /*  https://www.example.com
    
      route  /user/login {
        import rate_limit_num_per_min 5 10
      }
    
      route  /user/register {
       import rate_limit_num_per_min 1 10
      }
    
    }
    

    However on both limited routes the same rate limit is applied (5 requests per 10 minutes) and both routes share the cooldown, which is not the desired outcome.

    Is this maybe supposed to work like that (I'm not very experienced with Caddy)?

    If there is a better way to do this, or even a way to group the rate limit to cover multiple routes with the same parameters without sharing the limit, I would appreciate help.

  • not enough arguments

    #20 125.1 /go/pkg/mod/github.com/mholt/[email protected]/distributed.go:106:87: not enough arguments in call to h.storage.Store
    #20 125.1 	have (string, []byte)
    #20 125.1 	want (context.Context, string, []byte)
    #20 125.1 /go/pkg/mod/github.com/mholt/[email protected]/distributed.go:116:54: not enough arguments in call to h.storage.List
    #20 125.1 	have (string, bool)
    #20 125.1 	want (context.Context, string, bool)
    #20 125.1 /go/pkg/mod/github.com/mholt/[email protected]/distributed.go:132:34: not enough arguments in call to h.storage.Load
    #20 125.1 	have (string)
    #20 125.1 	want (context.Context, string)
    
  • using with trusted_proxies / behind another proxy

    Thanks for your work on this, I am looking to implement this plugin to stop spam at the caddy level and rate limiting seems to be the best thing to do.

    I am having an issue whereby the limits work, but they rate limit all requests together when keyed on the remote_host placeholder. I believe this is because this runs outside of the reverse_proxy handler, and so trusted_proxies does not run before rate_limit.

    Is there a way to accomplish this?

  • Possibility to add exceptions/whitelist

    Hi there! First of all: thank you for creating Caddy and this plugin, I love both! Great work! Here's my question: is it possible to add an exception for specific IP addresses or subnets to bypass the rate limit? Maybe this is already possible with how Caddy works, but I haven't been able to figure it out yet. For example, I'd love to whitelist my internal network so my own servers don't run into the rate limit. My rate limit configuration currently looks like this:

    rate_limit {
            zone my_zone {
                    key    {remote_host}
                    events 10
                    window 20s
            }
    }
    

    Thank you very much! I appreciate any help!

  • Possible to add MAX incoming connection from remote?

    Hi, I'm using this rate limit module and it's fantastic. However, I checked to see if I could find any way to limit the maximum number of active connections from a client/remote and did not succeed.

    Would this be something you would consider adding to this or any other project in caddy?
