Efficient token-bucket-based rate limiter package.

ratelimit

-- import "github.com/juju/ratelimit"

The ratelimit package provides an efficient token bucket implementation. See http://en.wikipedia.org/wiki/Token_bucket.

Usage

func Reader

func Reader(r io.Reader, bucket *Bucket) io.Reader

Reader returns a reader that is rate limited by the given token bucket. Each token in the bucket represents one byte.

func Writer

func Writer(w io.Writer, bucket *Bucket) io.Writer

Writer returns a writer that is rate limited by the given token bucket. Each token in the bucket represents one byte.
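
A minimal sketch of how the two wrappers can be combined (the 256 KiB/s figure and the use of stdin/stdout are arbitrary choices for illustration, not from the package docs):

    package main

    import (
    	"io"
    	"os"

    	"github.com/juju/ratelimit"
    )

    func main() {
    	// Each token is one byte; fill at 256 KiB/s with a burst capacity of 256 KiB.
    	bucket := ratelimit.NewBucketWithRate(256*1024, 256*1024)

    	// Copy stdin to stdout at roughly 256 KiB/s. Writer(w, bucket) would
    	// limit the writing side in the same way.
    	if _, err := io.Copy(os.Stdout, ratelimit.Reader(os.Stdin, bucket)); err != nil {
    		panic(err)
    	}
    }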

type Bucket

type Bucket struct {
}

Bucket represents a token bucket that fills at a predetermined rate. Methods on Bucket may be called concurrently.

func NewBucket

func NewBucket(fillInterval time.Duration, capacity int64) *Bucket

NewBucket returns a new token bucket that fills at the rate of one token every fillInterval, up to the given maximum capacity. Both arguments must be positive. The bucket is initially full.
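
For illustration only (the numbers are arbitrary), a bucket that holds at most 10 tokens and gains one token every 100ms, i.e. roughly 10 tokens per second with bursts of up to 10:

    package main

    import (
    	"fmt"
    	"time"

    	"github.com/juju/ratelimit"
    )

    func main() {
    	bucket := ratelimit.NewBucket(100*time.Millisecond, 10)
    	fmt.Println(bucket.Available()) // 10: the bucket starts full
    	fmt.Println(bucket.Rate())      // 10: tokens per second
    }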

func NewBucketWithQuantum

func NewBucketWithQuantum(fillInterval time.Duration, capacity, quantum int64) *Bucket

NewBucketWithQuantum is similar to NewBucket, but allows the specification of the quantum size - quantum tokens are added every fillInterval.
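
As a sketch with arbitrary values, the following two buckets have the same average rate of 20 tokens per second, but the second adds tokens in coarser chunks of 20 once per second:

    package main

    import (
    	"fmt"
    	"time"

    	"github.com/juju/ratelimit"
    )

    func main() {
    	// 1 token every 50ms: an average of 20 tokens/second, added one at a time.
    	fine := ratelimit.NewBucket(50*time.Millisecond, 100)

    	// 20 tokens every second: the same average rate, added in chunks of 20.
    	coarse := ratelimit.NewBucketWithQuantum(time.Second, 100, 20)

    	fmt.Println(fine.Rate(), coarse.Rate()) // 20 20
    }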

func NewBucketWithRate

func NewBucketWithRate(rate float64, capacity int64) *Bucket

NewBucketWithRate returns a token bucket that fills the bucket at the rate of rate tokens per second up to the given maximum capacity. Because of limited clock resolution, at high rates, the actual rate may be up to 1% different from the specified rate.
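
A small sketch (the rate is an arbitrary choice) of a rate-based bucket, which pairs naturally with the byte-per-token Reader and Writer above:

    package main

    import (
    	"fmt"

    	"github.com/juju/ratelimit"
    )

    func main() {
    	// Roughly 1 MiB/s with bursts of up to 1 MiB when each token is one byte.
    	bucket := ratelimit.NewBucketWithRate(1<<20, 1<<20)
    	fmt.Println(bucket.Rate()) // approximately 1.048576e+06
    }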

func (*Bucket) Available

func (tb *Bucket) Available() int64

Available returns the number of available tokens. It will be negative when there are consumers waiting for tokens. Note that even if this returns a value greater than zero, it does not guarantee that calls that take tokens from the bucket will succeed, as the number of available tokens could have changed in the meantime. This method is intended primarily for metrics reporting and debugging.
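
A sketch of the metrics-reporting use the doc comment has in mind (the logging loop and values are mine):

    package main

    import (
    	"log"
    	"time"

    	"github.com/juju/ratelimit"
    )

    func main() {
    	bucket := ratelimit.NewBucketWithRate(100, 100)

    	// Periodically report how many tokens are currently available.
    	for i := 0; i < 3; i++ {
    		log.Printf("tokens available: %d", bucket.Available())
    		time.Sleep(50 * time.Millisecond)
    	}
    }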

func (*Bucket) Rate

func (tb *Bucket) Rate() float64

Rate returns the fill rate of the bucket, in tokens per second.

func (*Bucket) Take

func (tb *Bucket) Take(count int64) time.Duration

Take takes count tokens from the bucket without blocking. It returns the time that the caller should wait until the tokens are actually available.

Note that the request is irrevocable: there is no way to return tokens to the bucket once this method commits us to taking them.
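
A sketch of the manual pattern Take supports: commit to the tokens up front, then sleep for the returned duration before doing the work (values arbitrary):

    package main

    import (
    	"fmt"
    	"time"

    	"github.com/juju/ratelimit"
    )

    func main() {
    	bucket := ratelimit.NewBucket(100*time.Millisecond, 10)

    	// Commit to taking 20 tokens; the call returns immediately with the
    	// time we are expected to wait before they are actually available.
    	wait := bucket.Take(20)
    	fmt.Println("sleeping for", wait)
    	time.Sleep(wait)
    	// ... proceed with the rate-limited work here.
    }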

func (*Bucket) TakeAvailable

func (tb *Bucket) TakeAvailable(count int64) int64

TakeAvailable takes up to count immediately available tokens from the bucket. It returns the number of tokens removed, or zero if there are no available tokens. It does not block.
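
A sketch of opportunistic, non-blocking use: process however many items there are tokens for right now and defer the rest (the batch framing and numbers are mine):

    package main

    import (
    	"fmt"

    	"github.com/juju/ratelimit"
    )

    func main() {
    	bucket := ratelimit.NewBucketWithRate(10, 5)

    	pending := int64(8)
    	granted := bucket.TakeAvailable(pending)
    	fmt.Printf("processing %d of %d items now, deferring the rest\n", granted, pending)
    }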

func (*Bucket) TakeMaxDuration

func (tb *Bucket) TakeMaxDuration(count int64, maxWait time.Duration) (time.Duration, bool)

TakeMaxDuration is like Take, except that it will only take tokens from the bucket if the wait time for the tokens is no greater than maxWait.

If it would take longer than maxWait for the tokens to become available, it does nothing and reports false, otherwise it returns the time that the caller should wait until the tokens are actually available, and reports true.
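
A sketch of bounded waiting: commit to the tokens only if the wait would be at most one second (the bound and counts are arbitrary):

    package main

    import (
    	"fmt"
    	"time"

    	"github.com/juju/ratelimit"
    )

    func main() {
    	bucket := ratelimit.NewBucket(100*time.Millisecond, 10)

    	if wait, ok := bucket.TakeMaxDuration(15, time.Second); ok {
    		time.Sleep(wait) // at most one second
    		fmt.Println("tokens granted after", wait)
    	} else {
    		fmt.Println("would have to wait too long; nothing was taken")
    	}
    }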

func (*Bucket) Wait

func (tb *Bucket) Wait(count int64)

Wait takes count tokens from the bucket, waiting until they are available.
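
A sketch of the simplest blocking pattern: throttling a loop to the bucket's rate (the loop body is a placeholder):

    package main

    import (
    	"fmt"

    	"github.com/juju/ratelimit"
    )

    func main() {
    	bucket := ratelimit.NewBucketWithRate(5, 5) // ~5 operations per second

    	for i := 0; i < 10; i++ {
    		bucket.Wait(1) // blocks until a token is available
    		fmt.Println("operation", i)
    	}
    }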

func (*Bucket) WaitMaxDuration

func (tb *Bucket) WaitMaxDuration(count int64, maxWait time.Duration) bool

WaitMaxDuration is like Wait except that it will only take tokens from the bucket if it needs to wait for no greater than maxWait. It reports whether any tokens have been removed from the bucket. If no tokens have been removed, it returns immediately.
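
A sketch of shedding work instead of queueing it indefinitely: wait for a token only if the wait stays under a deadline (the 50ms deadline and rate are arbitrary):

    package main

    import (
    	"fmt"
    	"time"

    	"github.com/juju/ratelimit"
    )

    func main() {
    	bucket := ratelimit.NewBucketWithRate(2, 1)

    	for i := 0; i < 5; i++ {
    		if bucket.WaitMaxDuration(1, 50*time.Millisecond) {
    			fmt.Println("served request", i)
    		} else {
    			fmt.Println("dropped request", i) // waiting would exceed 50ms
    		}
    	}
    }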

Comments
  • Bucket: expose Available() and Capacity()

    Hi,

    The current token bucket implementation is great. Nonetheless, we need some of the underlying data to implement more powerful functionality.

    For example, we might want to implement a CanAccept function that only checks the available tokens without taking them, or a function that exports the rate limiter's metrics to better understand the current flow and bottlenecks in our services. Both would benefit from access to the underlying data such as avail and capacity, alongside the already-public Rate(); a sketch of such a helper follows below.

    Let me know if there are questions. Highly appreciate it if anyone can help review this issue. Thanks!
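
    As a hypothetical sketch of the kind of helper being asked for here (CanAccept and the package name are mine, built only on the already-exported Available()):

    package metrics

    import "github.com/juju/ratelimit"

    // CanAccept reports whether count tokens appear to be available right now
    // without taking them. As the Available() docs warn, the answer can be
    // stale by the time the caller acts on it.
    func CanAccept(b *ratelimit.Bucket, count int64) bool {
    	return b.Available() >= count
    }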

  • Add a Clock interface for testing

    This adds an interface that is used internally to allow a caller to intercept calls to time.Now() and time.Sleep(), and do better testing.

    Fixes #17

    @howbazaar this is about as simple as it gets. Unfortunately the factory functions are a bit tedious. Could be changed to "WithDeps" (or something like that) and have a Deps struct, but for now that is going to hold exactly 1 field. This code is simple enough that it probably won't accumulate more. Your call.
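
    For context, a minimal sketch of the kind of Clock interface this change describes (names assumed here; check the package source for the actual definition):

    package ratelimit

    import "time"

    // Clock wraps the time functions the bucket uses, so tests can substitute
    // a fake implementation.
    type Clock interface {
    	Now() time.Time
    	Sleep(d time.Duration)
    }

    // realClock is the default implementation backed by the standard time package.
    type realClock struct{}

    func (realClock) Now() time.Time        { return time.Now() }
    func (realClock) Sleep(d time.Duration) { time.Sleep(d) }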

  • Bonus Token by adjustavailableTokens

    I noticed that inside the function adjustavailableTokens()

    func (tb *Bucket) adjustavailableTokens(tick int64) {
    	if tb.availableTokens >= tb.capacity {
    		return
    	}
    	tb.availableTokens += (tick - tb.latestTick) * tb.quantum
    	if tb.availableTokens > tb.capacity {
    		tb.availableTokens = tb.capacity
    	}
    	tb.latestTick = tick
    	return
    }
    

    tb.latestTick is not updated if tb.availableTokens >= tb.capacity.

    That makes the description of the variable latestTick ("holds the latest tick for which we know the number of tokens in the bucket") not so accurate, IMO.

    I also wrote a snippet of code that produces surprising results:

    package main

    import (
    	"fmt"
    	"time"

    	"github.com/juju/ratelimit"
    )

    func main() {
    	bucket := ratelimit.NewBucketWithQuantum(time.Second*1, 100, 20)
    	fmt.Printf("Avail=%d\n", bucket.Available())
    
    	fmt.Printf("%v\n", time.Now())
    	fmt.Printf("Pause wait for 5s\n")
    	time.Sleep(time.Second * 5)
    	fmt.Printf("%v\n", time.Now())
    	fmt.Printf("Avail=%d\n", bucket.Available())
    
    	fmt.Printf("Request token up to 100\n")
    	cnt := bucket.TakeAvailable(100)
    	fmt.Printf("Token taken=%d\n", cnt)
    
    	// It will surprise you.
    	fmt.Printf("Avail=%d\n", bucket.Available())
    }
    

    Output

    Avail=100
    2019-09-26 01:12:47.9410106 +0800 CST m=+0.003992001
    Pause wait for 5s
    2019-09-26 01:12:52.9614404 +0800 CST m=+5.024421801
    Avail=100
    Request token up to 100
    Token taken=100
    Avail=100
    

    That is, after taking all the tokens out of the bucket, the bucket is still full.

    Is this by design or just an implementation bug?

  • If not enough tokens are available, no tokens should be removed from the bucket.

    func (tb *Bucket) take(now time.Time, count int64, maxWait time.Duration) (time.Duration, bool) {
    	...
    	avail := tb.availableTokens - count
    	if avail >= 0 {
    		tb.availableTokens = avail
    		return 0, true
    	}
    	...
    	tb.availableTokens = avail
    	return waitTime, true
    }
    

    If not enough tokens are available, no tokens should be removed from the bucket (see https://en.wikipedia.org/wiki/Token_bucket#Algorithm). Otherwise availableTokens becomes less than zero and may never be enough if many threads keep trying to take tokens from the bucket in a loop.

  • bug when system clock rollback

    please see this unit test:

    package ratelimit

    import (
    	"sync"
    	"testing"
    	"time"
    )

    type mockClock struct {
    	mutex sync.RWMutex
    	realStartTime time.Time
    	startTime time.Time
    }
    
    func newMockClock() *mockClock {
    	now := time.Now()
    	return &mockClock{
    		realStartTime: now,
    		startTime: now,
    	}
    }
    
    func (mc *mockClock) Now() time.Time {
    	mc.mutex.RLock()
    	defer mc.mutex.RUnlock()
    	return mc.startTime.Add(time.Now().Sub(mc.realStartTime))
    }
    
    func (mc *mockClock) Sleep(d time.Duration) {
    	time.Sleep(d)
    }
    
    func (mc *mockClock) RollBack(d time.Duration) {
    	mc.mutex.Lock()
    	defer mc.mutex.Unlock()
    	mc.startTime = mc.startTime.Add(-d)
    }
    
    func TestSystemClockRollBack(t *testing.T) {
    	mc := newMockClock()
    	bucket := NewBucketWithRateAndClock(10, 1, mc)
    
    	if bucket.TakeAvailable(1) != 1 {
    		t.Fail()
    	}
    	time.Sleep(time.Second / 10 + 1)
    	if bucket.TakeAvailable(1) != 1 {
    		t.Fail()
    	}
    	time.Sleep(time.Second / 10 + 1)
    	mc.RollBack(time.Hour * 8)
    	bucket.TakeAvailable(1) 
    	time.Sleep(time.Second / 10 + 1)
    	if bucket.TakeAvailable(1) != 1 {
    		t.Fail()
    	}
    	time.Sleep(time.Second / 10 + 1)
    	if bucket.TakeAvailable(1) != 1 {
    		t.Fail()
    	}
    }
    
  • testing through ratelimit

    We use ratelimit in Kubernetes. I am working on something that uses it, and I am trying to write a test that doesn't actually take a long time to run. As part of that I inject a fake clock interface. It works really well until it hits the ratelimiter logic.

    ratelimit uses time.Now() and time.Sleep() internally. Would you be opposed to something like NewBucketWithClock(..., clock Clock) where Clock was an interface that wrapped time.Now() and similar functions? Users who don't care will ignore it and get the default (which just calls the real time.Now()) and users who do care could provide a mocked clock. If that is OK, I can try to work up a quick patch.

    Or is there a better way to test through this? I really don't want my test doing a bunch of 100ms sleeps to try to prove that it behaves. I will have a number of cases to test and that adds up fast.

  • Removed deferred mutex unlocks in favor of explicit calls

    I've been playing around with a few potential changes to this package for potential speed improvements and found that deferring the mutex unlocks is a significant slowdown for this package. Once the defers were removed and appropriate adjustments made in the code, parallel calls to Wait() appear to be roughly 3 times faster (a generic sketch of the pattern follows the benchmark output below).

    Before:

    go test -bench . -benchmem -cpu 1,2,4,8
    
    PASS
    BenchmarkWait           10000000               152 ns/op               0 B/op          0 allocs/op
    BenchmarkWait-2         10000000               155 ns/op               0 B/op          0 allocs/op
    BenchmarkWait-4         10000000               153 ns/op               0 B/op          0 allocs/op
    BenchmarkWait-8         10000000               152 ns/op               0 B/op          0 allocs/op
    BenchmarkWaitParallel   10000000               160 ns/op               0 B/op          0 allocs/op
    BenchmarkWaitParallel-2 10000000               164 ns/op               0 B/op          0 allocs/op
    BenchmarkWaitParallel-4 10000000               178 ns/op               0 B/op          0 allocs/op
    BenchmarkWaitParallel-8 10000000               185 ns/op               0 B/op          0 allocs/op
    

    After:

    go test -bench . -benchmem -cpu 1,2,4,8
    
    PASS
    BenchmarkWait           30000000                56.1 ns/op             0 B/op          0 allocs/op
    BenchmarkWait-2         30000000                56.0 ns/op             0 B/op          0 allocs/op
    BenchmarkWait-4         30000000                55.8 ns/op             0 B/op          0 allocs/op
    BenchmarkWait-8         30000000                55.8 ns/op             0 B/op          0 allocs/op
    BenchmarkWaitParallel   30000000                56.9 ns/op             0 B/op          0 allocs/op
    BenchmarkWaitParallel-2 30000000                59.3 ns/op             0 B/op          0 allocs/op
    BenchmarkWaitParallel-4 20000000                68.1 ns/op             0 B/op          0 allocs/op
    BenchmarkWaitParallel-8 20000000                70.8 ns/op             0 B/op          0 allocs/op
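
    For readers unfamiliar with the change, a generic sketch of the pattern being applied (not the package's actual code): the deferred unlock is replaced with explicit unlocks on every return path, which avoids defer's per-call overhead on a very hot path.

    package example

    import "sync"

    type counter struct {
    	mu sync.Mutex
    	n  int64
    }

    // Deferred unlock: simpler, but pays defer overhead on every call.
    func (c *counter) incDeferred() int64 {
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	c.n++
    	return c.n
    }

    // Explicit unlock: every return path must unlock, but the hot path is cheaper.
    func (c *counter) incExplicit() int64 {
    	c.mu.Lock()
    	c.n++
    	n := c.n
    	c.mu.Unlock()
    	return n
    }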
    
  • Does the method Bucket.adjust in ratelimit.go have a bug?

    I'm new to GitHub. I found your package small and more understandable than others.

    But I'm a little confused by tb.avail in Bucket.adjust:

    tb.avail += (currentTick - tb.availTick) * tb.quantum

    Why is just currentTick used, and not currentTick * tb.capacity?

    For example, we have:

    t0----------------t1---------------t2

    dt = t1 - t0 = t2 - t1
    currentTick = int64(t2.Sub(t0) / dt) = 2

    But in such an interval [t0, t2] we must have 2 * tb.capacity tokens in the bucket, not just 2.

  • Fix bonus tokens by function adjustavailableTokens (#31)

    Once adjustavailableTokens() is called, the number of tokens should be updated in accordance with the tick even if the bucket is full.

    Semantically, the extra tokens are discarded, even though in fact we simply return from the function.

  • It seems that `take` reduces the available tokens whether the tokens are insufficient or sufficient?

    For example the code:

    package main

    import (
    	"fmt"
    	"time"

    	"github.com/juju/ratelimit"
    )

    func test1st() {
    	bucket := ratelimit.NewBucket(time.Minute, 1000)
    	bucket.TakeAvailable(2000)
    	fmt.Println("test#1: left", bucket.Available())
    }
    
    func test2nd() {
    	bucket := ratelimit.NewBucket(time.Minute, 1000)
    	bucket.Take(2000)
    	fmt.Println("test#2: left", bucket.Available())
    }
    
    func main() {
    	test1st()
    	test2nd()
    }
    

    It prints the following output:

    test#1: left 0
    test#2: left -1000
    

    So the question is: how can I make it discard the request if the available tokens are insufficient?
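
    One possible way to get that discard-if-insufficient behaviour with the documented API (a sketch, not an official recommendation) is TakeMaxDuration with a zero maxWait, which takes the tokens only when they are immediately available:

    package main

    import (
    	"fmt"

    	"github.com/juju/ratelimit"
    )

    func main() {
    	bucket := ratelimit.NewBucket(time.Minute, 1000)

    	// With maxWait set to zero, the tokens are taken only if they are
    	// immediately available; otherwise nothing is removed from the bucket.
    	if _, ok := bucket.TakeMaxDuration(2000, 0); !ok {
    		fmt.Println("request discarded, left", bucket.Available())
    	}
    }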

  • Unlimited rate after 24h?

    Hi,

    in https://github.com/restic/restic/issues/1760, a user of restic reported that apparently after roughly 24 hours, restic forgets the upload rate limit and uploads without any limit (they used 500KiB). We're using ratelimit.Reader() with a bucket created as follows:

    https://github.com/restic/restic/blob/abdd59ea1b7dc9ee0be1806da711c630fe2d8fab/internal/limiter/static_limiter.go#L24

    Is there anything we're doing wrong? Is this maybe a bug in ratelimit.Reader? Do you have any idea what may cause this issue?

    I'm at a loss as to what's going on here...

  • go get "github.com/juju/ratelimit" error

    The error detail looks like this: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

  • Request for a new release

    Hi @manadart @mitechie @rogpeppe,

    I guess you are the maintainers; thanks for doing that. I wonder if this lib is still being maintained?

    If yes, could you please cut a new release? People need a tagged release to make handling Go modules easier. And before cutting a new release, it'd be better to enable Go modules.

    PS: I can help with it if you are busy; I just want to know whether this is still being maintained.

  • Can you remove Lock

    While using ratelimit, I noticed there is a lock in the source code. Can you replace it with atomic operations?

    func (tb *Bucket) TakeAvailable(count int64) int64 {
    	tb.mu.Lock()
    	defer tb.mu.Unlock()
    	return tb.takeAvailable(tb.clock.Now(), count)
    }

  • invalid pseudo-version: tag (v1.0.1) found on revision 59fac5042749 is already canonical, so should not be replaced with a pseudo-version derived from that tag

    When I use go mod to install this package, this shows up:

    cannot load github.com/juju/ratelimit: github.com/juju/[email protected]: invalid pseudo-version: tag (v1.0.1) found on revision 59fac5042749 is already canonical, so should not be replaced with a pseudo-version derived from that tag
    

    It looks like the wrong tag was built?

  • Concerns about license of the project.

    Since this project is a common utility used by many other open source projects as a vendored dependency, LGPLv3 with a static-linking exception may be controversial in some situations for Go libraries.

    Many open source Go libraries are licensed under MIT/BSD/Apache, so is it possible to change the project's license to a more permissive one (e.g. MIT/BSD/Apache) or to dual-license it (e.g. LGPL + MIT)?

    I found a similar discussion in https://github.com/davidmoreno/onion/issues/56

    @kingsamchen @nammn

    Thanks!

  • Func `Wait` in package ratelimit can get blocked if time moves backwards for any reason

    We import package ratelimit in Kubernetes and hit a scenario; I'm not sure whether the issue should be filed here. The scenario is that when time moves backwards for any reason, the take func that calculates the time to sleep can end up with a huge number here:

    	endTick := currentTick + (-avail+tb.quantum-1)/tb.quantum
    	endTime := tb.startTime.Add(time.Duration(endTick) * tb.fillInterval)
    	waitTime := endTime.Sub(now)
    	if waitTime > maxWait {
    		return 0, false
    	}
    

    A possible way to address this might be to add a protection before calculating waitTime, say a comparison between now and startTime (see the sketch at the end of this comment).

    /cc @thockin
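
    A sketch of the kind of protection suggested above, written against the fields visible in the quoted snippet (illustrative only, not a tested patch):

    // Inside take(), before waitTime is computed (hypothetical; field names
    // follow the quoted snippet):
    if now.Before(tb.startTime) {
    	// The clock appears to have moved backwards past the bucket's start
    	// time; clamp it so the computed waitTime cannot explode.
    	now = tb.startTime
    }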
