An in-memory key:value store/cache (similar to Memcached) library for Go, suitable for single-machine applications.

go-cache

go-cache is an in-memory key:value store/cache similar to memcached that is suitable for applications running on a single machine. Its major advantage is that, being essentially a thread-safe map[string]interface{} with expiration times, it doesn't need to serialize or transmit its contents over the network.

Any object can be stored, for a given duration or forever, and the cache can be safely used by multiple goroutines.

Although go-cache isn't meant to be used as a persistent datastore, the entire cache can be saved to and loaded from a file (using c.Items() to retrieve the items map to serialize, and NewFrom() to create a cache from a deserialized one) to recover from downtime quickly. (See the docs for NewFrom() for caveats.)
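
For example, a save/load round trip might look roughly like the following minimal sketch. The use of encoding/gob, the file name, and the omitted error handling are choices made for this illustration; only Items() and NewFrom() are the library's API, and custom value types stored in the cache would additionally need gob.Register().

import (
	"encoding/gob"
	"fmt"
	"os"
	"time"

	"github.com/patrickmn/go-cache"
)

func main() {
	c := cache.New(5*time.Minute, 10*time.Minute)
	c.Set("foo", "bar", cache.DefaultExpiration)

	// Snapshot: Items() returns a copy of all unexpired items.
	f, _ := os.Create("cache.gob") // error handling omitted for brevity
	gob.NewEncoder(f).Encode(c.Items())
	f.Close()

	// Recover: decode the items map and hand it to NewFrom().
	f, _ = os.Open("cache.gob")
	items := map[string]cache.Item{}
	gob.NewDecoder(f).Decode(&items)
	f.Close()

	restored := cache.NewFrom(5*time.Minute, 10*time.Minute, items)
	fmt.Println(restored.Get("foo")) // bar true
}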

Installation

go get github.com/patrickmn/go-cache

Usage

import (
	"fmt"
	"github.com/patrickmn/go-cache"
	"time"
)

func main() {
	// Create a cache with a default expiration time of 5 minutes, and which
	// purges expired items every 10 minutes
	c := cache.New(5*time.Minute, 10*time.Minute)

	// Set the value of the key "foo" to "bar", with the default expiration time
	c.Set("foo", "bar", cache.DefaultExpiration)

	// Set the value of the key "baz" to 42, with no expiration time
	// (the item won't be removed until it is re-set, or removed using
	// c.Delete("baz"))
	c.Set("baz", 42, cache.NoExpiration)

	// Get the string associated with the key "foo" from the cache
	foo, found := c.Get("foo")
	if found {
		fmt.Println(foo)
	}

	// Since Go is statically typed, and cache values can be anything, type
	// assertion is needed when values are being passed to functions that don't
	// take arbitrary types (i.e. interface{}). The simplest way to do this for
	// values which will only be used once--e.g. for passing to another
	// function--is:
	foo, found := c.Get("foo")
	if found {
		MyFunction(foo.(string))
	}

	// This gets tedious if the value is used several times in the same function.
	// You might do either of the following instead:
	if x, found := c.Get("foo"); found {
		foo := x.(string)
		// ...
	}
	// or
	var foo string
	if x, found := c.Get("foo"); found {
		foo = x.(string)
	}
	// ...
	// foo can then be passed around freely as a string

	// Want performance? Store pointers!
	c.Set("foo", &MyStruct, cache.DefaultExpiration)
	if x, found := c.Get("foo"); found {
		foo := x.(*MyStruct)
		// ...
	}
}

Reference

godoc or http://godoc.org/github.com/patrickmn/go-cache

Owner
Patrick Mylund Nielsen
Comments
  • Cache items gone when re-running script

    Here is the code:

    c := cache.New(0, 10*time.Minute)
    session := &Session{}

    // Not set? Set it
    if _, found := c.Get(fmt.Sprintf("mykey-%d", iteration)); found == false {
    	log.Println("DEBUG: SESSION: NOT FOUND")
    	session = session.Create(somevarshere)
    	c.Set(fmt.Sprintf("mykey-%d", iteration), session, cache.NoExpiration)
    }

    if sess, found := c.Get(fmt.Sprintf("mykey-%d", iteration)); found {
    	log.Println("DEBUG: FOUND!")
    	session = sess.(*Session)
    }

    Results:

    2019/02/08 00:30:29 DEBUG: STARTING ITERATION  1
    2019/02/08 00:30:29 DEBUG: SESSION: NOT FOUND
    

    No matter how many times I run the script, it does not find the cached item. Shouldn't this be stored in memory so that when my cronjob runs the script again it can pull it? Is cache.New() overriding it?

  • Add GetWithExpiration(k) (interface{}, time.Time, bool)

    I ran into a concurrency issue using cache.Items(). The documentation says you need to synchronize access, but I don't think there's a safe way to do that, especially with a cleanup goroutine modifying the map.

    Specifically I was getting nil pointer access panics using it with a lot of concurrent goroutines.

    • https://github.com/ulule/limiter/issues/17#issuecomment-182262296

    https://github.com/dougnukem/limiter/blob/concurrency_issue/store_memory.go#L36

    ...
    item, found := s.Cache.Items()[key]
    
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal 0xb code=0x1 addr=0x0 pc=0x5e304]
    
    goroutine 555 [running]:
    github.com/ulule/limiter.(*MemoryStore).Get(0xc82000bba0, 0xc820866920, 0x18, 0x0, 0x0, 0xa, 0x186a0, 0x0, 0x0, 0x0, ...)
        /Users/ddaniels/dev/src/github.com/ulule/limiter/store_memory.go:36 +0x24c
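
    A small sketch of how such an accessor could be used instead of indexing Items() directly, based only on the signature in the title (c and key are assumed to be a *cache.Cache and a string; this is illustrative, not a quote from the library):

    // Value and expiration are returned together, without the caller ever
    // touching the underlying items map.
    if value, expiration, found := c.GetWithExpiration(key); found {
    	fmt.Printf("%v expires at %s\n", value, expiration)
    }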
    
  • Efficient deletion

    Have you thought about introducing (as an option, maybe) a binary tree into the cache structure, with items arranged in sorted order according to their expiration? This could avoid going through all the items when deleting expired ones. I might give it a try if anyone is interested.
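
    A rough sketch of the idea, using a min-heap keyed by expiration rather than a tree (names and layout are illustrative only): DeleteExpired then only inspects the front of the index instead of scanning every item. A real implementation would also need to skip index entries made stale by a key being re-set.

    package expindex

    import (
    	"container/heap"
    	"time"
    )

    type entry struct {
    	key        string
    	expiration int64 // UnixNano
    }

    type expHeap []entry

    func (h expHeap) Len() int            { return len(h) }
    func (h expHeap) Less(i, j int) bool  { return h[i].expiration < h[j].expiration }
    func (h expHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
    func (h *expHeap) Push(x interface{}) { *h = append(*h, x.(entry)) }
    func (h *expHeap) Pop() interface{} {
    	old := *h
    	x := old[len(old)-1]
    	*h = old[:len(old)-1]
    	return x
    }

    // deleteExpired pops entries whose expiration has passed and deletes the
    // corresponding keys, stopping at the first still-live entry.
    func deleteExpired(h *expHeap, items map[string]interface{}) {
    	now := time.Now().UnixNano()
    	for h.Len() > 0 && (*h)[0].expiration <= now {
    		e := heap.Pop(h).(entry)
    		delete(items, e.key)
    	}
    }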

  • Fix unprotected access to shared state in Save()

    This fixes an issue in Save() where the map was not safely copied. This led to unprotected access to shared state and thus a possible race condition while writing the cached items to the output stream. The original code copied the map with an assignment statement, which does not copy the map's contents but only creates a new reference to the same map.
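
    A small standalone illustration of the underlying problem and the fix (names are illustrative): assigning a map copies only the map header, so the "copy" still aliases the shared data, whereas a safe snapshot copies entry by entry while holding the read lock.

    package main

    import (
    	"fmt"
    	"sync"
    )

    type Item struct{ Object interface{} }

    var (
    	mu    sync.RWMutex
    	items = map[string]Item{"foo": {Object: "bar"}}
    )

    // snapshot copies the map entry by entry under the read lock.
    func snapshot() map[string]Item {
    	mu.RLock()
    	defer mu.RUnlock()
    	out := make(map[string]Item, len(items))
    	for k, v := range items {
    		out[k] = v
    	}
    	return out
    }

    func main() {
    	alias := items       // NOT a copy: both names share one map
    	copied := snapshot() // independent copy
    	items["baz"] = Item{Object: 42}
    	fmt.Println(len(alias), len(copied)) // 2 1
    }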

  • Feature Request: Expose Keys?

    I'd love to be able to expose the list of keys that are currently in my cache.

    This functionality basically already exists in go-cache, it's just unexported/not wrapped in a nice method name.
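
    In the meantime, the same information can be derived from Items(), which returns a copy of all unexpired items. A small helper sketch, assuming the usual github.com/patrickmn/go-cache import (this is not part of the library):

    func keys(c *cache.Cache) []string {
    	items := c.Items()
    	ks := make([]string, 0, len(items))
    	for k := range items {
    		ks = append(ks, k)
    	}
    	return ks
    }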

  •  GetWithExpirationUpdate - atomic implementation

    This PR is a fixed version of #96. The main changes are:

    • The type of cache.items is changed from map[string]Item to map[string]*Item. I needed to do this because, in GetWithExpirationUpdate, it is the only way to modify the Expiration field of an Item; the other way around (re-setting the item) needs a write lock and therefore blocks all reads/writes to items, which is not convenient for cache gets. (A condensed sketch follows below.)
    • Every Item now has its own RWMutex. This way, we don't need a cache-wide write lock in GetWithExpirationUpdate.

    Supersedes #125
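
    A condensed sketch of the approach described above (package, type, and field names are illustrative, not the PR's exact code): items are stored by pointer and carry their own lock, so extending an item's expiration needs only the cache's read lock plus that item's lock.

    package itemlock

    import (
    	"sync"
    	"time"
    )

    type Item struct {
    	mu         sync.RWMutex
    	Object     interface{}
    	Expiration int64
    }

    type Cache struct {
    	mu    sync.RWMutex
    	items map[string]*Item
    }

    // GetWithExpirationUpdate extends the item's lifetime and returns its value.
    // Expiry checks and default durations are omitted from this sketch.
    func (c *Cache) GetWithExpirationUpdate(k string, d time.Duration) (interface{}, bool) {
    	c.mu.RLock()
    	item, ok := c.items[k]
    	c.mu.RUnlock()
    	if !ok {
    		return nil, false
    	}
    	item.mu.Lock()
    	item.Expiration = time.Now().Add(d).UnixNano() // only this item is write-locked
    	v := item.Object
    	item.mu.Unlock()
    	return v, true
    }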

  • Add go.mod file declaring proper semantic import path

    As the latest version is tagged as v2.x.x in git, using the package as a Go module will require importing it as github.com/patrickmn/go-cache/v2; otherwise it will show up as:

    github.com/patrickmn/go-cache v2.1.0+incompatible
    

    in a dependent's go.mod.

    See: https://github.com/golang/go/wiki/Modules#semantic-import-versioning

    I know it's not pretty and introduces more things to maintain manually, but apparently it's what is supposed to be done in this context.
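
    For reference, the proposed file would look roughly like this (the go directive version is an assumption for this example):

    module github.com/patrickmn/go-cache/v2

    go 1.11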

  • less than one purging time not respected

    Summary

    With a cache set up with a cleanup interval of less than 1, I am no longer able to Get an item after it expires.

    Steps to reproduce

    The following is a minimal working example illustrating the problem:

    package main
    
    import (
    	"fmt"
    	"github.com/patrickmn/go-cache"
    	"time"
    )
    
    func main() {
    	duration := time.Duration(-1) * time.Nanosecond
    	fmt.Println(duration < 0)
    	cache := cache.New(5*time.Second, duration)
    	cache.SetDefault("foo", "bar")
    	for {
    		item, found := cache.Get("foo")
    		if found {
    			fmt.Println("Found it!" + item.(string))
    			time.Sleep(time.Second)
    		} else {
    			fmt.Println("Key not found!")
    			return
    		}
    	}
    }
    

    Experienced behaviour

    After the item expires, Get returns a found boolean of false, indicating that the item cannot be found.

    Expected behaviour

    The item should still be found, since the janitor should be disabled when a cleanup interval of less than 1 is specified on creation of the cache.

    Am I doing something wrong, or misunderstanding something?
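
    (For context, one plausible explanation, sketched rather than quoted from the library: Get appears to treat an already-expired item as missing regardless of whether a janitor is running, so the cleanup interval only controls active purging. Roughly:)

    // Illustrative lazy-expiry check inside a Get-like lookup: an expired
    // item is reported as not found even when no janitor goroutine exists.
    if item.Expiration > 0 && time.Now().UnixNano() > item.Expiration {
    	return nil, false
    }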

  • Why are the keys strings?

    Hey there,

    A coworker and I were wondering about the reasoning behind making the cache keys strings. Using strings as keys in maps is slower and often less convenient than using structs (as Ashish Gandhi details towards the end of this talk). Is it because allowing a cache key of interface{} introduces the risk of a runtime type error if a user tried to store, say, a slice?

    Thanks for writing and maintaining the package!

    T

  • Change map[string]interface{} to binary tree

    map[string]interface{} benchmark:

    dtlbox-ubuntu1@~/Work/gosource/src/github.com/pmylund/go-cache$ go test -bench=".*"
    PASS
    BenchmarkCacheGet   50000000            48.3 ns/op
    BenchmarkRWMutexMapGet  50000000            37.5 ns/op
    BenchmarkCacheGetConcurrent 50000000            47.9 ns/op
    BenchmarkRWMutexMapGetConcurrent    50000000            35.4 ns/op
    BenchmarkCacheGetManyConcurrent 50000000            48.5 ns/op
    BenchmarkShardedCacheGetManyConcurrent   5000000           423 ns/op
    BenchmarkCacheSet    5000000           382 ns/op
    BenchmarkRWMutexMapSet  20000000           109 ns/op
    BenchmarkCacheSetDelete  5000000           523 ns/op
    BenchmarkRWMutexMapSetDelete    10000000           240 ns/op
    BenchmarkCacheSetDeleteSingleLock    5000000           449 ns/op
    BenchmarkRWMutexMapSetDeleteSingleLock  10000000           191 ns/op
    ok      github.com/pmylund/go-cache 29.174s
    

    binary tree benchmark

    dtlbox-ubuntu1@~/Work/gosource/src/github.com/t0pep0/go-cache$ go test -bench=".*"
    PASS
    BenchmarkCacheGet   50000000            52.5 ns/op
    BenchmarkRWMutexMapGet  50000000            45.9 ns/op
    BenchmarkCacheGetConcurrent 50000000            45.3 ns/op
    BenchmarkRWMutexMapGetConcurrent    50000000            34.8 ns/op
    BenchmarkCacheGetManyConcurrent 50000000            51.2 ns/op
    BenchmarkShardedCacheGetManyConcurrent   5000000           380 ns/op
    BenchmarkCacheSet   10000000           166 ns/op
    BenchmarkRWMutexMapSet  20000000           114 ns/op
    BenchmarkCacheSetDelete 10000000           279 ns/op
    BenchmarkRWMutexMapSetDelete    10000000           284 ns/op
    BenchmarkCacheSetDeleteSingleLock   10000000           178 ns/op
    BenchmarkRWMutexMapSetDeleteSingleLock  10000000           187 ns/op
    ok      github.com/t0pep0/go-cache  29.163s
    
  • Replace time.Now() by runtime.nanotime()

    • time.Time is 24 bytes; the int64 returned by nanotime() is 8 bytes. (This is not relevant to the code here: item.Expiration is already an int64.)
    • runtime.nanotime() is about 2x faster:

    import _ "unsafe" // required to use //go:linkname

    //go:noescape
    //go:linkname nanotime runtime.nanotime
    func nanotime() int64

    // time.Now() takes roughly 45ns, runtime.nanotime roughly 20ns. An exported
    // symbol cannot be created directly with //go:linkname, so a wrapper is
    // needed. Go does not always inline functions
    // (https://lemire.me/blog/2017/09/05/go-does-not-inline-functions-when-it-should/),
    // and the wrapper costs about 5ns per call.
    func Nanotime() int64 {
    	return nanotime()
    }
    

    Using 1ms resolution we can potentially save 4 bytes more.

  • Float calculation error

    Floats cannot be added or subtracted exactly; because of floating-point precision, calculation errors may occur. For example:

    func TestIncrementFloat64(t *testing.T) {
    	tc := New(DefaultExpiration, 0)
    	tc.Set("float64", float64(0.6), DefaultExpiration)
    	n, err := tc.IncrementFloat64("float64", 0.7)
    	if err != nil {
    		t.Error("Error incrementing:", err)
    	}
    	if n != 1.3 {
    		t.Error("Returned number is not 1.3:", n)
    	}
    	x, found := tc.Get("float64")
    	if !found {
    		t.Error("float64 was not found")
    	}
    	if x.(float64) != 1.3 {
    		t.Error("float64 is not 1.3:", x)
    	}
    }

    the result will be 1.2999999999999998.

    func TestDecrementFloat64(t *testing.T) {
    	tc := New(DefaultExpiration, 0)
    	tc.Set("float64", float64(74.96), DefaultExpiration)
    	n, err := tc.DecrementFloat64("float64", 20.48)
    	if err != nil {
    		t.Error("Error decrementing:", err)
    	}
    	if n != 54.48 {
    		t.Error("Returned number is not 54.48:", n)
    	}
    	x, found := tc.Get("float64")
    	if !found {
    		t.Error("float64 was not found")
    	}
    	if x.(float64) != 54.48 {
    		t.Error("float64 is not 54.48:", x)
    	}
    }

    the result will be 54.47999999999999
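
    One way the tests could account for this is to compare with a small tolerance instead of exact equality (the helper below is illustrative and assumes the math package is imported; the threshold is arbitrary):

    func almostEqual(a, b float64) bool {
    	return math.Abs(a-b) < 1e-9
    }

    // ...inside the test, instead of n != 1.3:
    if !almostEqual(n, 1.3) {
    	t.Error("Returned number is not approximately 1.3:", n)
    }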

  • Feature request: Renaming

    There is no way to just change the key for an item. If I wanted to do this manually, I would have to use Delete and then Add, but upon deletion the OnEvicted function would be called, which I don't want.

    My use case is a webservice where the user can create a data entry using an assistant, which consists of multiple web pages. I use go-cache to store the data already entered on past pages until the assistant is finished. I use an XSRF token (generated with golang.org/x/net/xsrftoken) as the key for the cache item. My issue with this is that I cannot extend the validity of the token after one step of the assistant has been completed, so I would have to generate a new one; but copying the cache item so that it can be found with the new token would be expensive (it contains large files, which are deleted through OnEvicted).

    Ideally I could also set a new duration upon renaming, but maybe this would be better as a separate feature.
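
    A sketch of what such a Rename could look like inside the cache itself (the mu/items field names follow the obvious internal layout and are illustrative; this is not an existing API):

    func (c *Cache) Rename(oldKey, newKey string) bool {
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	item, ok := c.items[oldKey]
    	if !ok {
    		return false
    	}
    	// Re-key without calling onEvicted: the value is moved, not evicted.
    	delete(c.items, oldKey)
    	c.items[newKey] = item
    	return true
    }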

  • Call onEvicted on flush

    The documentation of Flush states that it deletes all items from the cache. When calling Delete, onEvicted is run. Combining these two pieces of knowledge, I expected that onEvicted would be called for all items when calling Flush, but I learned that this assumption is incorrect.

    Was this a deliberate design decision, or would you consider changing the behavior?
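
    A sketch of the expected behavior, if it were implemented (type and field names are illustrative, not the library's current code):

    func (c *Cache) FlushWithEviction() {
    	c.mu.Lock()
    	items := c.items
    	c.items = map[string]Item{}
    	onEvicted := c.onEvicted
    	c.mu.Unlock()
    	if onEvicted == nil {
    		return
    	}
    	for k, v := range items {
    		onEvicted(k, v.Object)
    	}
    }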

Related tags
An in-memory key:value store/cache library written in Go 1.18 generics

go-generics-cache go-generics-cache is an in-memory key:value store/cache that is suitable for applications running on a single machine. This in-memor

Dec 27, 2022
groupcache is a caching and cache-filling library, intended as a replacement for memcached in many cases.

groupcache Summary groupcache is a distributed caching and cache-filling library, intended as a replacement for a pool of memcached nodes in many case

Dec 31, 2022
Gocodecache - An in-memory cache library for code value master in Golang

gocodecache An in-memory cache library for code master in Golang. Installation g

Jun 23, 2022
🧩 Redify is the optimized key-value proxy for quick access and cache of any other database through Redis and/or HTTP protocol.

Redify (Any database as redis) License Apache 2.0 Redify is the optimized key-value proxy for quick access and cache of any other database throught Re

Sep 25, 2022
Fast key-value cache written on pure golang

GoCache Simple in-memory key-value cache with default or specific expiration time. Install go get github.com/DylanMrr/GoCache Features Key-value stor

Nov 16, 2021
🦉owlcache is a lightweight, high-performance, non-centralized, distributed Key/Value memory-cached data sharing application written by Go

🦉owlcache is a lightweight, high-performance, non-centralized, distributed Key/Value memory-cached data sharing application written in Go. Keywords: golang cache, go cache, golang nosql

Nov 5, 2022
Go Memcached client library #golang

About This is a memcache client library for the Go programming language (http://golang.org/). Installing Using go get $ go get github.com/bradfitz/gom

Dec 28, 2022
Cache library for golang. It supports expirable Cache, LFU, LRU and ARC.

GCache Cache library for golang. It supports expirable Cache, LFU, LRU and ARC. Features Supports expirable Cache, LFU, LRU and ARC. Goroutine safe. S

Dec 30, 2022
An in-memory cache library for golang. It supports multiple eviction policies: LRU, LFU, ARC

GCache Cache library for golang. It supports expirable Cache, LFU, LRU and ARC. Features Supports expirable Cache, LFU, LRU and ARC. Goroutine safe. S

May 31, 2021
A zero-dependency cache library for storing data in memory with generics.

Memory Cache A zero-dependency cache library for storing data in memory with generics. Requirements Golang 1.18+ Installation go get -u github.com/rod

May 26, 2022
A memcached binary protocol toolkit for go.

gomemcached This is a memcached binary protocol toolkit in go. It provides client and server functionality as well as a little sample server showing h

Nov 9, 2022
A memcached proxy that manages data chunking and L1 / L2 caches

Rend: Memcached-Compatible Server and Proxy Rend is a proxy whose primary use case is to sit on the same server as both a memcached process and an SSD

Dec 24, 2022
memcached operator

memcached-operator Using the Go language support in the Operator SDK, you can build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. Prerequisites: install Docker Desktop,

Sep 18, 2022
Package cache is a middleware that provides the cache management for Flamego.

cache Package cache is a middleware that provides the cache management for Flamego. Installation The minimum requirement of Go is 1.16. go get github.

Nov 9, 2022
A mem cache base on other populator cache, add following feacture

memcache a mem cache base on other populator cache, add following feacture add lazy load(using expired data, and load it asynchronous) add singlefligh

Oct 28, 2021
Cache - A simple cache implementation

Cache A simple cache implementation LRU Cache An in memory cache implementation

Jan 25, 2022
Gin-cache - Gin cache middleware with golang


Nov 28, 2022
Cachy is a simple and lightweight in-memory cache api.

cachy Table of Contents cachy Table of Contents Description Features Structure Configurability settings.json default values for backup_file_path Run o

Apr 24, 2022