Generic, sharded, thread-safe LRU cache in Go.

cache

Cache is a thread-safe, generic, sharded in-memory LRU cache. It achieves this by partitioning values across many smaller LRU (least recently used) caches and interacting with those caches over channels. Each smaller cache maintains access to its own elements and communicates results back to the Cache object, which then responds to the original caller.
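
The partitioning idea itself is small enough to sketch. Below is a minimal, hypothetical illustration of hash-based routing, assuming a shardFor helper that is not part of this package: the key is hashed, and the hash picks which backing LRU owns the key.

package main

import (
	"fmt"
	"hash/maphash"
)

// shardFor is a hypothetical helper: hashing the key and reducing the
// hash modulo the shard count sends each key to a consistent partition.
func shardFor(seed maphash.Seed, key string, n int) int {
	return int(maphash.String(seed, key) % uint64(n))
}

func main() {
	seed := maphash.MakeSeed()
	for _, k := range []string{"alpha", "beta", "gamma"} {
		fmt.Printf("%s -> shard %d\n", k, shardFor(seed, k, 4))
	}
}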

behavior

LRU backing caches

type lruCache[K comparable, V any] struct {
    table map[K]*list.Node[KVPair[K, V]]
    list  *list.List[KVPair[K, V]]
    ...
    client  *client[K, V]
    evictFn func(k K, v V)
}

The LRU backing caches behave exactly like a normal LRU: each is composed of a doubly linked list and a map for key lookups. When an entry is added or accessed, it is moved to the front of the list; when the cache is full, the oldest items (at the back of the list) are evicted to make room for more recently used entries. The list implementation is a port of the Go standard library's list package that uses type parameters instead of runtime interfaces; it can be found in the /list directory.
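
The mechanic is easy to see with the stdlib's non-generic container/list; this toy sketch is for illustration only and is not this package's implementation:

package main

import (
	"container/list"
	"fmt"
)

// pair is the key/value payload stored in each list element.
type pair struct{ k, v string }

// lru is a toy LRU: a recency-ordered doubly linked list plus a map
// from key to list element for O(1) lookups.
type lru struct {
	cap   int
	ll    *list.List
	table map[string]*list.Element
}

func (c *lru) put(k, v string) {
	if e, ok := c.table[k]; ok {
		e.Value = pair{k, v}
		c.ll.MoveToFront(e) // updating refreshes recency
		return
	}
	if c.ll.Len() == c.cap {
		// full: evict the least recently used entry at the back
		back := c.ll.Back()
		delete(c.table, back.Value.(pair).k)
		c.ll.Remove(back)
	}
	c.table[k] = c.ll.PushFront(pair{k, v})
}

func (c *lru) get(k string) (string, bool) {
	e, ok := c.table[k]
	if !ok {
		return "", false
	}
	c.ll.MoveToFront(e) // access refreshes recency
	return e.Value.(pair).v, true
}

func main() {
	c := &lru{cap: 2, ll: list.New(), table: map[string]*list.Element{}}
	c.put("a", "1")
	c.put("b", "2")
	c.get("a")      // "a" is now the most recently used
	c.put("c", "3") // evicts "b", the least recently used
	_, ok := c.get("b")
	fmt.Println(ok) // false
}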

Each *lruCache spawns a single goroutine when *lruCache.serve(ctx) is called.

client

type client[K comparable, V any] struct {
    // GetChannel is a channel for retrieving values from the cache for which this client is associated
    GetChannel *RequestChannel[Request[K], GetResponse[K, V]]
    // PutChannel is a channel for placing values into the cache for which this client is associated
    PutChannel *RequestChannel[Request[KVPair[K, V]], struct{}]
    ...
}

The client abstraction holds the channels through which callers communicate with *lruCache objects. Each *lruCache runs a single goroutine that listens for requests on these channels, processes them, and sends responses, all without any locking.
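
The shape of that loop is a select over the request channels. The sketch below is a simplified stand-in, assuming plain request structs rather than the package's client/RequestChannel types; because a single goroutine owns the table, no mutex is needed:

package main

import (
	"context"
	"fmt"
)

// getReq and putReq are illustrative request types, not this package's.
type getReq struct {
	key  string
	resp chan string
}

type putReq struct {
	key, val string
	resp     chan struct{}
}

func serve(ctx context.Context, gets chan getReq, puts chan putReq) {
	table := map[string]string{} // owned exclusively by this goroutine
	for {
		select {
		case <-ctx.Done():
			return // shut down on cancellation
		case r := <-gets:
			r.resp <- table[r.key] // reply on the caller's channel
		case r := <-puts:
			table[r.key] = r.val
			r.resp <- struct{}{}
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	gets, puts := make(chan getReq), make(chan putReq)
	go serve(ctx, gets, puts)

	done := make(chan struct{})
	puts <- putReq{key: "k", val: "v", resp: done}
	<-done

	resp := make(chan string)
	gets <- getReq{key: "k", resp: resp}
	fmt.Println(<-resp) // v
}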

Cache

type Cache[K comparable, V any] struct {
    caches []*cache[K, V]
}

Cache is the main interface to the underlying caches. This is the object to use when many values are accessed or mutated across different goroutines: it spreads contention across LRU partitions and Go channels instead of relying on mutexes. The underlying client for each cache is exposed for fine-tuning. If the hash distribution happens to pin a few objects to one particular LRU, leaving it full but never evicting while others sit underutilized, you can manually evict from the underutilized caches; you can also resize them. Manually adding elements to individual caches is not recommended while the top-level Cache interface is still in use.
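
As a rough sketch of that kind of tuning, using only calls that appear elsewhere in this README (Meta, Resize); note that treating Resize's argument as a per-partition limit, mirroring New, is an assumption here:

package main

import (
	"context"
	"fmt"

	"github.com/alistanis/cache"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())

	// 4 partitions of 8 entries each: 32 total capacity
	c := cache.New[int, int](ctx, 8, 4)
	defer c.Wait()
	defer cancel()

	for i := 0; i < 64; i++ {
		c.Put(i, i)
	}

	// Meta reports totals across all partitions
	fmt.Println(c.Meta().Len(), c.Meta().Cap())

	// shrink an over-provisioned cache; Resize returns the number of
	// entries evicted (per-partition semantics assumed, as noted above)
	evicted := c.Resize(4)
	fmt.Println(evicted, c.Meta().Cap())
}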

Note

All structures in this package are optimized for alignment.
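
Concretely, that means struct fields are ordered to minimize compiler-inserted padding. A quick illustration with throwaway structs (not types from this package):

package main

import (
	"fmt"
	"unsafe"
)

// padded interleaves small and large fields, forcing alignment padding.
type padded struct {
	a bool  // 1 byte + 7 bytes of padding before b
	b int64 // 8 bytes
	c bool  // 1 byte + 7 bytes of trailing padding
}

// packed orders fields largest to smallest, minimizing padding.
type packed struct {
	b int64 // 8 bytes
	a bool  // 1 byte
	c bool  // 1 byte + 6 bytes of trailing padding
}

func main() {
	fmt.Println(unsafe.Sizeof(padded{})) // 24
	fmt.Println(unsafe.Sizeof(packed{})) // 16
}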

examples

For additional examples, see cache_test.go and cache_benchmark_test.go

Single Cache Partition

package main

import (
	"context"
	"fmt"

	"github.com/alistanis/cache"
)

// A simple example shown with a concurrency of 1 in order to
// illustrate how the smaller LRU caches work.
func main() {
	ctx, cancel := context.WithCancel(context.Background())
	concurrency := 1
	lruCacheLimit := 5
	c := cache.New[int, int](ctx, lruCacheLimit, concurrency)
	defer c.Wait()

	c.Put(42, 42)
	c.Put(1, 1)
	c.Get(42)
	c.Put(0, 0)
	c.Put(2, 2)
	c.Put(3, 3)
	c.Get(42)
	// evict 1
	c.Put(4, 4)

	c.Each(func(key int, val int) {
		fmt.Println(key)
	})

	cancel()
	// Output:
	// 4
	// 42
	// 3
	// 2
	// 0
}

Many Cache Partitions

package main

import (
	"context"
	"fmt"

	"github.com/alistanis/cache"
)

// General example of using the Cache object. Since elements are spread across many partitions,
// order cannot be guaranteed, and items will not be evicted in strict LRU order; some partitions
// may see more traffic than others and evict more heavily, but access patterns generally amortize evenly.
func main() {

	ctx, cancel := context.WithCancel(context.Background())
	concurrency := 10 // runtime.NumCPU() instead of 10 for actual use
	c := cache.New[int, int](ctx, 6, concurrency)
	defer c.Wait()

	fmt.Println(c.Meta().Len())
	fmt.Println(c.Meta().Cap())
	finished := make(chan struct{})
	go func() {
		for i := 0; i < 4*concurrency; i++ {
			c.Put(i, i)
		}
		finished <- struct{}{}
	}()

	go func() {
		for i := 8 * concurrency; i > 3*concurrency; i-- {
			c.Put(i, i)
		}
		finished <- struct{}{}
	}()

	<-finished
	<-finished

	for i := 0; i < 8*concurrency; i++ {
		v, found := c.Get(i)
		if !found {
			// get value from backing store
			// res := db.Query(...)
			// v = getValFromRes(res)
			// put value back into cache
			// c.Put(i, v)
			v = 0
		} else {
			if i != v {
				panic("uh oh")
			}
		}

	}

	// we've put enough values into the cache that 10 partitions are filled with 6 elements each
	fmt.Println(c.Meta().Len())
	fmt.Println(c.Meta().Cap())
	// Output:
	// 0
	// 60
	// 60
	// 60
	cancel()
}

Cache With Eviction Function/Cleanup

package main

import (
	"context"
	"log"
	"os"
	"syscall"

	"github.com/alistanis/cache"
)

func main() {

	ctx, cancel := context.WithCancel(context.Background())
	errC := make(chan error)

	// concurrency/partition count of 1 to guarantee LRU order
	// size of 1 in order to demonstrate eviction
	// The cache's type parameters are inferred from the eviction function's arguments
	c := cache.WithEvictionFunction(ctx, 1, 1, func(s string, f *os.File) {
		fi, err := f.Stat()
		if err != nil {
			errC <- err
			return
		}

		// on darwin/linux, Sys() returns a *syscall.Stat_t carrying the inode
		stat := fi.Sys().(*syscall.Stat_t)
		log.Printf("Closing file at path %s, fd: %d, inode: %d", s, f.Fd(), stat.Ino)

		errC <- f.Close()
	})

	defer c.Wait()
	defer cancel()

	d, err := os.MkdirTemp("", "")
	if err != nil {
		log.Fatal(err)
	}

	// cleanup temp resources after main exits
	defer func(path string) {
		err := os.RemoveAll(path)
		if err != nil {
			log.Fatal(err)
		}

	}(d)

	// exit channel that will block until we're 
	// finished collecting any/all errors
	exit := make(chan struct{})
	go func() {
		for e := range errC {
			if e != nil {
				log.Println(e)
			}
		}
		// signal that we're finished and can exit safely
		exit <- struct{}{}
	}()

	f, err := os.CreateTemp(d, "")
	if err != nil {
		log.Println(err)
		return
	}

	// first entry on the LRU
	c.Put(f.Name(), f)

	f2, err := os.CreateTemp(d, "")
	if err != nil {
		log.Println(err)
		return
	}

	// place f2 in the cache, evicting f and causing the eviction
	// function to fire, closing the file and logging:
	// 2022/04/13 07:31:47 Closing file at path /var/folders/q3/dt78p91s1b562lmq7qstllv00000gn/T/1705161844/1443821512, fd: 6, inode: 49662131

	c.Put(f2.Name(), f2)

	// now forcibly evict f2
	evicted := c.Evict()

	// 2022/04/13 07:31:47 Closing file at path /var/folders/q3/dt78p91s1b562lmq7qstllv00000gn/T/1705161844/767977656, fd: 7, inode: 49662130
	log.Println(evicted) // 1

	f, err = os.CreateTemp(d, "")
	if err != nil {
		log.Println(err)
		return
	}

	c.Put(f.Name(), f)
	// Evict f again by resizing
	log.Println(c.Resize(0)) // 1

	// We're finished so we can close the error channel
	close(errC)
	// Wait until errors are processed and exit
	<-exit
}

Benchmarks

MacBook Air (M1, 2020), 16 GB RAM

go test -v -benchmem ./... -bench . -run=bench

goos: darwin
goarch: arm64
pkg: github.com/alistanis/cache
BenchmarkCache_IntInt_SingleThread
BenchmarkCache_IntInt_SingleThread/Put
BenchmarkCache_IntInt_SingleThread/Put-8         1442264               824.1 ns/op           110 B/op          6 allocs/op
BenchmarkCache_IntInt_SingleThread/Get
BenchmarkCache_IntInt_SingleThread/Get-8         1814316               662.7 ns/op            47 B/op          4 allocs/op
BenchmarkCache_IntInt_ParallelPut
BenchmarkCache_IntInt_ParallelPut-8              6645994               183.5 ns/op           110 B/op          6 allocs/op
BenchmarkCache_IntInt_ParallelGet
BenchmarkCache_IntInt_ParallelGet-8              8311953               138.8 ns/op            48 B/op          5 allocs/op
BenchmarkCache_StringString_SingleThread
BenchmarkCache_StringString_SingleThread/Put
BenchmarkCache_StringString_SingleThread/Put-8           1000000              1073 ns/op             209 B/op          9 allocs/op
BenchmarkCache_StringString_SingleThread/Get
BenchmarkCache_StringString_SingleThread/Get-8           1444695               828.3 ns/op            87 B/op          5 allocs/op
BenchmarkCache_StringString_ParallelPut
BenchmarkCache_StringString_ParallelPut-8                4905408               238.7 ns/op           209 B/op          9 allocs/op
BenchmarkCache_StringString_ParallelGet
BenchmarkCache_StringString_ParallelGet-8                6977521               170.2 ns/op            88 B/op          6 allocs/op
PASS
ok      github.com/alistanis/cache      12.475s
PASS
ok      github.com/alistanis/cache/list 0.093s

MacBook Pro (M1 Pro, 16-inch, 2021), 16 GB RAM

go test -v -benchmem ./... -bench . -run=bench
goos: darwin
goarch: arm64
pkg: github.com/alistanis/cache
BenchmarkCache_IntInt_SingleThread
BenchmarkCache_IntInt_SingleThread/Put
BenchmarkCache_IntInt_SingleThread/Put-10        1431043               825.7 ns/op           110 B/op          6 allocs/op
BenchmarkCache_IntInt_SingleThread/Get
BenchmarkCache_IntInt_SingleThread/Get-10        1772635               673.3 ns/op            47 B/op          4 allocs/op
BenchmarkCache_IntInt_ParallelPut
BenchmarkCache_IntInt_ParallelPut-10             6866359               179.5 ns/op           110 B/op          6 allocs/op
BenchmarkCache_IntInt_ParallelGet
BenchmarkCache_IntInt_ParallelGet-10             8667046               138.0 ns/op            48 B/op          5 allocs/op
BenchmarkCache_StringString_SingleThread
BenchmarkCache_StringString_SingleThread/Put
BenchmarkCache_StringString_SingleThread/Put-10                  1000000              1094 ns/op             209 B/op          9 allocs/op
BenchmarkCache_StringString_SingleThread/Get
BenchmarkCache_StringString_SingleThread/Get-10                  1455924               822.9 ns/op            87 B/op          5 allocs/op
BenchmarkCache_StringString_ParallelPut
BenchmarkCache_StringString_ParallelPut-10                       4883151               280.0 ns/op           209 B/op          9 allocs/op
BenchmarkCache_StringString_ParallelGet
BenchmarkCache_StringString_ParallelGet-10                       6611814               190.0 ns/op            88 B/op          6 allocs/op
PASS
ok      github.com/alistanis/cache      13.023s
PASS
ok      github.com/alistanis/cache/list 0.156s

MacBook Pro (Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz, 16-inch, 2019), 64 GB RAM

go test -v -benchmem ./... -bench . -run=bench
goos: darwin
goarch: amd64
pkg: github.com/alistanis/cache
cpu: Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
BenchmarkCache_IntInt_SingleThread
BenchmarkCache_IntInt_SingleThread/Put
BenchmarkCache_IntInt_SingleThread/Put-16         794200              1452 ns/op             110 B/op          6 allocs/op
BenchmarkCache_IntInt_SingleThread/Get
BenchmarkCache_IntInt_SingleThread/Get-16         976036              1184 ns/op              47 B/op          4 allocs/op
BenchmarkCache_IntInt_ParallelPut
BenchmarkCache_IntInt_ParallelPut-16             6931814               177.8 ns/op           111 B/op          6 allocs/op
BenchmarkCache_IntInt_ParallelGet
BenchmarkCache_IntInt_ParallelGet-16             8706753               138.9 ns/op            48 B/op          5 allocs/op
BenchmarkCache_StringString_SingleThread
BenchmarkCache_StringString_SingleThread/Put
BenchmarkCache_StringString_SingleThread/Put-16                   586318              2015 ns/op             209 B/op          9 allocs/op
BenchmarkCache_StringString_SingleThread/Get
BenchmarkCache_StringString_SingleThread/Get-16                   860658              1434 ns/op              87 B/op          6 allocs/op
BenchmarkCache_StringString_ParallelPut
BenchmarkCache_StringString_ParallelPut-16                       5286390               227.2 ns/op           209 B/op          9 allocs/op
BenchmarkCache_StringString_ParallelGet
BenchmarkCache_StringString_ParallelGet-16                       6639519               162.4 ns/op            88 B/op          6 allocs/op
PASS
ok      github.com/alistanis/cache      10.423s
PASS
ok      github.com/alistanis/cache/list 0.111s

Linux, Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz

go test -v -benchmem ./... -bench . -run=bench
goos: linux
goarch: amd64
pkg: github.com/alistanis/cache
cpu: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
BenchmarkCache_IntInt_SingleThread
BenchmarkCache_IntInt_SingleThread/Put
BenchmarkCache_IntInt_SingleThread/Put-12        1000000              1062 ns/op             110 B/op          6 allocs/op
BenchmarkCache_IntInt_SingleThread/Get
BenchmarkCache_IntInt_SingleThread/Get-12        1349502               886.8 ns/op            47 B/op          4 allocs/op
BenchmarkCache_IntInt_ParallelPut
BenchmarkCache_IntInt_ParallelPut-12             6455076               197.1 ns/op           110 B/op          6 allocs/op
BenchmarkCache_IntInt_ParallelGet
BenchmarkCache_IntInt_ParallelGet-12             6827888               168.3 ns/op            48 B/op          5 allocs/op
BenchmarkCache_StringString_SingleThread
BenchmarkCache_StringString_SingleThread/Put
BenchmarkCache_StringString_SingleThread/Put-12                   842470              1446 ns/op             209 B/op          9 allocs/op
BenchmarkCache_StringString_SingleThread/Get
BenchmarkCache_StringString_SingleThread/Get-12                  1000000              1075 ns/op              87 B/op          6 allocs/op
BenchmarkCache_StringString_ParallelPut
BenchmarkCache_StringString_ParallelPut-12                       4743643               269.0 ns/op           209 B/op          9 allocs/op
BenchmarkCache_StringString_ParallelGet
BenchmarkCache_StringString_ParallelGet-12                       5551136               206.4 ns/op            88 B/op          6 allocs/op
PASS
ok      github.com/alistanis/cache      11.210s
PASS
ok      github.com/alistanis/cache/list 0.002s