GCache

Cache library for Go. It supports expirable entries and the LFU, LRU and ARC eviction policies.

Features

  • Supports expirable entries and the LFU, LRU and ARC eviction policies.

  • Goroutine safe.

  • Optional event handlers invoked when an entry is added, evicted, or purged.

  • Optional automatic loading of values for keys that are not yet cached.

Install

$ go get github.com/bluele/gcache

Example

Manually set a key-value pair.

package main

import (
  "fmt"

  "github.com/bluele/gcache"
)

func main() {
  gc := gcache.New(20).
    LRU().
    Build()
  gc.Set("key", "ok")
  value, err := gc.Get("key")
  if err != nil {
    panic(err)
  }
  fmt.Println("Get:", value)
}
Get: ok

Manually set a key-value pair, with an expiration time.

package main

import (
  "fmt"
  "time"

  "github.com/bluele/gcache"
)

func main() {
  gc := gcache.New(20).
    LRU().
    Build()
  gc.SetWithExpire("key", "ok", time.Second*10)
  value, err := gc.Get("key")
  if err != nil {
    panic(err)
  }
  fmt.Println("Get:", value)

  // Wait for value to expire
  time.Sleep(time.Second*10)

  value, err = gc.Get("key")
  if err != nil {
    panic(err)
  }
  fmt.Println("Get:", value)
}
Get: ok
// 10 seconds later, new attempt:
panic: ErrKeyNotFound
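
Rather than panicking on a miss, callers can compare the returned error against the library's not-found sentinel. A minimal sketch, assuming the exported value is gcache.KeyNotFoundError as in current releases:

if v, err := gc.Get("key"); err == gcache.KeyNotFoundError {
  fmt.Println("key expired or was never set")
} else if err != nil {
  panic(err)
} else {
  fmt.Println("Get:", v)
}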

Automatically load value

package main

import (
  "fmt"

  "github.com/bluele/gcache"
)

func main() {
  gc := gcache.New(20).
    LRU().
    LoaderFunc(func(key interface{}) (interface{}, error) {
      return "ok", nil
    }).
    Build()
  value, err := gc.Get("key")
  if err != nil {
    panic(err)
  }
  fmt.Println("Get:", value)
}
Get: ok

Automatically load value with expiration

package main

import (
  "fmt"
  "time"

  "github.com/bluele/gcache"
)

func main() {
  var evictCounter, loaderCounter, purgeCounter int
  gc := gcache.New(20).
    LRU().
    LoaderExpireFunc(func(key interface{}) (interface{}, *time.Duration, error) {
      loaderCounter++
      expire := 1 * time.Second
      return "ok", &expire, nil
    }).
    EvictedFunc(func(key, value interface{}) {
      evictCounter++
      fmt.Println("evicted key:", key)
    }).
    PurgeVisitorFunc(func(key, value interface{}) {
      purgeCounter++
      fmt.Println("purged key:", key)
    }).
    Build()
  value, err := gc.Get("key")
  if err != nil {
    panic(err)
  }
  fmt.Println("Get:", value)
  time.Sleep(1 * time.Second)
  value, err = gc.Get("key")
  if err != nil {
    panic(err)
  }
  fmt.Println("Get:", value)
  gc.Purge()
  if loaderCounter != evictCounter+purgeCounter {
    panic("bad")
  }
}
Get: ok
evicted key: key
Get: ok
purged key: key

Cache Algorithm

  • Least-Frequently Used (LFU)

Discards the least frequently used items first.

func main() {
  // size: 10
  gc := gcache.New(10).
    LFU().
    Build()
  gc.Set("key", "value")
}
  • Least Recently Used (LRU)

Discards the least recently used items first.

func main() {
  // size: 10
  gc := gcache.New(10).
    LRU().
    Build()
  gc.Set("key", "value")
}
  • Adaptive Replacement Cache (ARC)

Constantly balances between LRU and LFU, to improve the combined result.

detail: http://en.wikipedia.org/wiki/Adaptive_replacement_cache

func main() {
  // size: 10
  gc := gcache.New(10).
    ARC().
    Build()
  gc.Set("key", "value")
}
  • SimpleCache (Default)

SimpleCache has no explicit eviction priority; which entry is evicted depends on Go's map iteration order.

func main() {
  // size: 10
  gc := gcache.New(10).Build()
  gc.Set("key", "value")
  v, err := gc.Get("key")
  if err != nil {
    panic(err)
  }
  fmt.Println("Get:", v)
}

Loading Cache

If a LoaderFunc is specified, values are loaded automatically by the cache and stored until they are evicted or manually invalidated.

func main() {
  gc := gcache.New(10).
    LRU().
    LoaderFunc(func(key interface{}) (interface{}, error) {
      return "value", nil
    }).
    Build()
  v, _ := gc.Get("key")
  // output: "value"
  fmt.Println(v)
}

GCache coordinates cache fills so that, within a process, only one load per key is in flight at a time; concurrent callers for the same key block and receive the value from that single load.
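
A small sketch of that behavior (not from the README): ten goroutines Get the same missing key concurrently, and the slow loader should run only once, with its result shared by every caller.

package main

import (
  "fmt"
  "sync"
  "sync/atomic"
  "time"

  "github.com/bluele/gcache"
)

func main() {
  var loads int64
  gc := gcache.New(10).
    LRU().
    LoaderFunc(func(key interface{}) (interface{}, error) {
      atomic.AddInt64(&loads, 1)         // count actual loader invocations
      time.Sleep(100 * time.Millisecond) // simulate a slow cache fill
      return "value", nil
    }).
    Build()

  var wg sync.WaitGroup
  for i := 0; i < 10; i++ {
    wg.Add(1)
    go func() {
      defer wg.Done()
      if _, err := gc.Get("key"); err != nil {
        panic(err)
      }
    }()
  }
  wg.Wait()
  fmt.Println("loader calls:", atomic.LoadInt64(&loads)) // expected: 1
}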

Expirable cache

func main() {
  // LRU cache, size: 10, expiration: after an hour
  gc := gcache.New(10).
    LRU().
    Expiration(time.Hour).
    Build()
}
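
The builder above applies a cache-wide TTL to every entry. A short sketch of the effect, using a sub-second expiration so the miss is observable:

package main

import (
  "fmt"
  "time"

  "github.com/bluele/gcache"
)

func main() {
  gc := gcache.New(10).
    LRU().
    Expiration(100 * time.Millisecond).
    Build()
  gc.Set("key", "value")
  time.Sleep(200 * time.Millisecond) // wait past the cache-wide TTL
  if _, err := gc.Get("key"); err != nil {
    fmt.Println("expired:", err) // the entry is gone once the TTL elapses
  }
}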

Event handlers

Evicted handler

An event handler invoked when an entry is evicted.

func main() {
  gc := gcache.New(2).
    EvictedFunc(func(key, value interface{}) {
      fmt.Println("evicted key:", key)
    }).
    Build()
  for i := 0; i < 3; i++ {
    gc.Set(i, i*i)
  }
}
evicted key: 0

Added handler

An event handler invoked when an entry is added.

func main() {
  gc := gcache.New(2).
    AddedFunc(func(key, value interface{}) {
      fmt.Println("added key:", key)
    }).
    Build()
  for i := 0; i < 3; i++ {
    gc.Set(i, i*i)
  }
}
added key: 0
added key: 1
added key: 2

Author

Jun Kimura

Comments
  • Feature Request: Allowing for unbounded cache size

    Being forced to provide a cache size during construction of the cache doesn't work well for our particular use case. The size of our cache is bounded in other more real-world ways and we don't need to worry about the cache growing out of control. To that end, I'd like to request a new function on the cache builder to allow for no max cache size. The New() method could be changed such that it allows for <=0 size that is interpreted as unbounded.

  • ARC cache does not work as expected (when used with expiry)

    Consider this modified test:

    	// Test wrapper added for readability; buildLoadingARCacheWithExpiration
    	// is the helper from gcache's own test suite.
    	func TestLoadingARCacheWithExpiration(t *testing.T) {
    		gc := buildLoadingARCacheWithExpiration(20, 1*time.Second)
    		gc.Get("test1")
    		gc.Get("test2")
    		length := gc.Len()
    		expectedLength := 2
    		if length != expectedLength {
    			t.Errorf("Expected length is %v, not %v", expectedLength, length)
    		}
    		time.Sleep(2 * time.Second)
    		length = gc.Len()
    		expectedLength = 0
    		if length != expectedLength {
    			t.Errorf("Expected length is %v, not %v", expectedLength, length)
    		}
    
    		gc.Get("test1")
    		length = gc.Len()
    		expectedLength = 1
    		if length != expectedLength {
    			t.Errorf("Expected length is %v, not %v", expectedLength, length)
    		}
    		gc.Get("test2")
    		length = gc.Len()
    		expectedLength = 2
    		if length != expectedLength {
    			t.Errorf("Expected length is %v, not %v", expectedLength, length)
    		}
    	}
    

    Adding test1 and test2 back after they have expired doesn't yield a length of 2 as expected. It looks like there is an issue with re-adding an entry that has expired.

  • Improve the efficiency of Keys(), GetALL(), and Len().

    Preallocate the maximum number of available items, including expired items. While this isn't optimal, it is a reasonable guess for the length.

    Even though this approach results in one extra mutex acquisition, in simple go test benchmarking, this improves the test time from ~1.444s to ~1.332s. Presumably this improvement is the result of two things:

    1. reduced pressure on the GC
    2. reduced time that c.mu is locked while data is copied around

    There is still room for improvement in reducing the number of times c.mu is exclusively locked, but this is an easy performance win.
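
    A sketch of the preallocation idea (hypothetical helper and field names, not the actual patch):

    	// keysSnapshot sizes the result slice to the full item count (expired
    	// entries included) so append never reallocates while the lock is held.
    	func keysSnapshot(mu *sync.RWMutex, items map[interface{}]struct{}) []interface{} {
    		mu.RLock()
    		defer mu.RUnlock()
    		keys := make([]interface{}, 0, len(items)) // upper-bound preallocation
    		for k := range items {
    			keys = append(keys, k)
    		}
    		return keys
    	}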

    Fixes: #43 Replaces: #50

  • Memory Leak in LFU Cache Implementation

    Description

    There is a fairly severe memory leak in the implementation of the LFU cache. This leak may also manifest itself in the other algorithms, although I haven't checked yet.

    Reproducing

    Here is what my main function looks like to produce this leak.

    package main

    import (
    	"fmt"

    	"github.com/bluele/gcache"
    	"github.com/pkg/profile" // assumed import for profile.Start below
    )

    func main() {
    	defer profile.Start(profile.MemProfile).Stop()
    	gc := gcache.New(10).
    		LFU().
    		Build()
    	gc.Set("key0", "ok0")
    	gc.Set("key1", "ok1")
    	gc.Set("key2", "ok2")
    	gc.Set("key3", "ok3")

    	for i := 0; i < 400000; i++ {
    		v, err := gc.Get("key0")
    		if err != nil {
    			panic(err)
    		}
    		if i == 399999 {
    			fmt.Println("value:", v)
    		}
    	}
    }
    

    Evidence

    As you can see in the function above, we first initialize a new LFU cache that can hold 10 items and set 4 of them to key:value pairs. Finally we read the first item 400,000 times and print its value on the last iteration.

    Below is the output of the memory profiler looking at the functions making the most memory allocations.

    $ go tool pprof ~/test /tmp/profile958427541/mem.pprof
    File: test
    Type: inuse_space
    Time: Dec 4, 2020 at 12:38pm (EST)
    Entering interactive mode (type "help" for commands, "o" for options)
    (pprof) top5
    Showing nodes accounting for 14MB, 100% of 14MB total
    Showing top 5 nodes out of 8
          flat  flat%   sum%        cum   cum%
       10.58MB 75.53% 75.53%       14MB   100%  github.com/bluele/gcache.(*LFUCache).increment
        3.43MB 24.47%   100%     3.43MB 24.47%  container/list.(*List).insertValue (inline)
             0     0%   100%     3.43MB 24.47%  container/list.(*List).InsertAfter (inline)
             0     0%   100%       14MB   100%  github.com/bluele/gcache.(*LFUCache).Get
             0     0%   100%       14MB   100%  github.com/bluele/gcache.(*LFUCache).get
    

    This immediately indicates that, somehow, the 4 key-value pairs placed into the cache, along with the 400,000 reads performed, required an allocation of 14MB of memory. As we can see, the majority of the allocations came from inside the github.com/bluele/gcache.(*LFUCache).increment function.

    If we take a look at this function we see the following:

    func (c *LFUCache) increment(item *lfuItem) {
    	currentFreqElement := item.freqElement
    	currentFreqEntry := currentFreqElement.Value.(*freqEntry)
    	nextFreq := currentFreqEntry.freq + 1
    	delete(currentFreqEntry.items, item)
    
    	nextFreqElement := currentFreqElement.Next()
    	if nextFreqElement == nil {
    		nextFreqElement = c.freqList.InsertAfter(&freqEntry{
    			freq:  nextFreq,
    			items: make(map[*lfuItem]struct{}),
    		}, currentFreqElement)
    	}
    	nextFreqElement.Value.(*freqEntry).items[item] = struct{}{}
    	item.freqElement = nextFreqElement
    }
    

    At first glance, it becomes obvious that the frequency list inside the cache struct (c.freqList) is growing quite large. If we trace the increment function upwards we can see it is called every time we request an item from the cache, as long as we did not set a timeout on our items (in this case we did not).

    The result is that c.freqList grows by one list entry every time we request an item from the cache. We can see this behavior in the Delve debugger:

    (hits goroutine(1):1 total:1) (PC: 0x4f9225)
       179:
       180: func (c *LFUCache) increment(item *lfuItem) {
       181:         currentFreqElement := item.freqElement
       182:         currentFreqEntry := currentFreqElement.Value.(*freqEntry)
       183:         nextFreq := currentFreqEntry.freq + 1
    => 184:         delete(currentFreqEntry.items, item)
       185:
       186:         nextFreqElement := currentFreqElement.Next()
       187:         if nextFreqElement == nil {
       188:                 nextFreqElement = c.freqList.InsertAfter(&freqEntry{
       189:                         freq:  nextFreq,
    (dlv) continue
    (dlv) continue // We skip ahead now 40 calls to increment()
    (dlv) continue
    .
    .
    .
    .
    (hits goroutine(1):40 total:40) (PC: 0x4f9225)
       181:         currentFreqElement := item.freqElement
       182:         currentFreqEntry := currentFreqElement.Value.(*freqEntry)
       183:         nextFreq := currentFreqEntry.freq + 1
       184:         delete(currentFreqEntry.items, item)
       185:
    => 186:         nextFreqElement := currentFreqElement.Next()
       187:         if nextFreqElement == nil {
       188:                 nextFreqElement = c.freqList.InsertAfter(&freqEntry{
       189:                         freq:  nextFreq,
       190:                         items: make(map[*lfuItem]struct{}),
       191:                 }, currentFreqElement)
    (dlv) print c.freqList
    *container/list.List {
            root: container/list.Element {
                    next: *(*"container/list.Element")(0xc000078750),
                    prev: *(*"container/list.Element")(0xc0000795f0),
                    list: *container/list.List nil,
                    Value: interface {} nil,},
            len: 40,}
    

    The above steps were as follows. Set a breakpoint on line 186, in the middle of the increment function. Run continue to skip to the next time we hit the breakpoint. Do this 39 more times to allow 40 consecutive reads from the cache. At this point, we print the frequency list in the cache with (dlv) print c.freqList. This shows that the list has length 40, even though there are only 4 items in the cache.

    If we follow the debugger further this pattern continues and the list grows linearly. This is obviously unacceptable behavior and ends up consuming large amounts of memory for operations that should consume none.
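
    A plausible remedy, sketched against the quoted function (an illustration, not necessarily the project's actual fix): reuse an existing next bucket when its frequency matches, and unlink the current bucket once it empties.

    	func (c *LFUCache) increment(item *lfuItem) {
    		currentFreqElement := item.freqElement
    		currentFreqEntry := currentFreqElement.Value.(*freqEntry)
    		nextFreq := currentFreqEntry.freq + 1
    		delete(currentFreqEntry.items, item)
    
    		nextFreqElement := currentFreqElement.Next()
    		// Only allocate a new freqEntry when the next bucket doesn't
    		// already carry the target frequency.
    		if nextFreqElement == nil || nextFreqElement.Value.(*freqEntry).freq > nextFreq {
    			nextFreqElement = c.freqList.InsertAfter(&freqEntry{
    				freq:  nextFreq,
    				items: make(map[*lfuItem]struct{}),
    			}, currentFreqElement)
    		}
    		nextFreqElement.Value.(*freqEntry).items[item] = struct{}{}
    		item.freqElement = nextFreqElement
    
    		// Unlink the now-empty bucket (keeping freq 0, the initial one),
    		// so the list stops growing on every read.
    		if currentFreqEntry.freq != 0 && len(currentFreqEntry.items) == 0 {
    			c.freqList.Remove(currentFreqElement)
    		}
    	}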

  • Extend SimpleCache to allow for unbounded growth

    In this PR, we allow users to create a SimpleCache with no artificial upper bound on the number of elements it can host. To achieve that, the user specifies a size <= 0 before building the cache.
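
    A sketch of the resulting usage, per the description above (size <= 0 requests an unbounded cache):

    	gc := gcache.New(0).Build() // SimpleCache with no upper bound on entries
    	gc.Set("key", "value")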

  • examples causing redefine main function error with `go get`

    github.com/bluele/gcache/examples
    # github.com/bluele/gcache/examples
    ..\Go\src\github.com\bluele\gcache\examples\custom_expiration.go:9:6: main redeclared in this block
    	previous declaration at ..\Go\src\github.com\bluele\gcache\examples\autoloading_cache.go:8:6
    ..\Go\src\github.com\bluele\gcache\examples\example.go:8:6: main redeclared in this block
    	previous declaration at ..\Go\src\github.com\bluele\gcache\examples\custom_expiration.go:9:6
    
  • Replace runtime panics with bundled errors

    Issue #40

    This PR introduces breaking changes in the cache building API:

    Previous function signature: func (cb *CacheBuilder) Build() Cache

    New function signature: func (cb *CacheBuilder) Build() (Cache, error)

    @bluele we should probably start proper versioning of the package to not catch users off-guard.
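
    Under the proposed signature, call sites would change along these lines (a sketch of the PR's intent, not the released API):

    	gc, err := gcache.New(10).LRU().Build()
    	if err != nil {
    		log.Fatal(err) // configuration errors are reported instead of panicking
    	}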

  • LoaderFunc define expiry?

    Just discovered this library and seems useful for what I'd like to do.

    I noticed in #25 that functionality was added to allow user-defined expiry per item (the SetWithExpire methods). Can we extend this to LoaderFunc?

    type LoaderFuncWithExpire func(interface{}) (interface{}, time.Duration, error)
    

    Unrelated question: if my LoaderFunc is very slow and there are multiple Gets for the same key, will gcache run the LoaderFunc multiple times, or only once, sending the same output to all callers?

  • Expirable entries

    Hi,

    Do you plan to add, in the near future, support for expirable entries?

    e.g:

    expiration := time.Now().Add(24 * time.Hour) // Now + 24h
    gc := gcache.New(10).LRU().Build()
    gc.Set("key", "value", expiration)
    

    If you don't, would you be interested in a PR that extends the current implementation to support that feature?

  • how to share between multiple processes?

    Hi, gcache developers!!

    I'd like to use gcache across multiple processes, and I'm wondering how to share a cache store among them (there is a single parent process and multiple child processes).

    When I worked in C++, I used shared memory to share data between processes. Now I'm moving a project to Go and am thinking about a good approach.

    Is there an appropriate way to do this with gcache?

  • Update Len() so it just looks at the length of the items

    Currently, the different caches use len(c.GetALL()) to figure out the number of items in the cache. This causes a problem with eviction order because GetALL does an underlying GetIfPresent on all keys in the cache. That GetIfPresent causes the metrics around each item (number of hits, last hit timestamp, etc.) to be updated. Therefore, when calling Len(), the eviction order is changed (in the case of the LRU, it's made random because it becomes map ordered).
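
    A sketch of the described change (hypothetical helper and field names; the real caches differ): count entries directly under a read lock instead of materializing them with GetALL().

    	// lenWithoutSideEffects reports the item count without touching any
    	// per-item hit metadata, so calling it cannot perturb eviction order.
    	func lenWithoutSideEffects(mu *sync.RWMutex, items map[interface{}]*list.Element) int {
    		mu.RLock()
    		defer mu.RUnlock()
    		return len(items)
    	}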

    @bluele Please note: I did not include ARC in here since it didn't pass the unit tests when I made this change. I am unsure as to the expected eviction behavior of ARC so I didn't want to go too far down the rabbit hole without consulting with you first.

  • I made a generic version

    Not sure if there is any plan to support generics; I made one at https://github.com/szyhf/go-gcache by simply swapping out the implementation.

    Really, thanks for this awesome lib. If there is any plan, I'd really like to make a PR.

LFU Redis implements LFU Cache algorithm using Redis as data storage

LFU Redis cache library for Golang LFU Redis implements LFU Cache algorithm using Redis as data storage LFU Redis Package gives you control over Cache

Nov 10, 2022
This provides the lru package which implements a fixed-size thread safe LRU cache

golang-lru This provides the lru package which implements a fixed-size thread safe LRU cache

Dec 22, 2021
Lru - A simple LRU cache using go generics

LRU Cache A simple LRU cache using go generics. Examples Basic usage. func main(

Nov 9, 2022
lru: the most concise and efficient LRU algorithm based on golang

lru This package of lru is the most concise and efficient LRU algorithm based on golang. Example Quick start: package main import ( "fmt" "github.

Dec 27, 2021
Least-recently-used-LRU- - Design CacheEvictionPolicy with 2 strategy LRU(Least recently used)

Least-recently-used-LRU- Design CacheEvictionPolicy with 2 strategy LRU(Least re

Jan 4, 2022
Thread-safe LRU cache with permanency and context-based expiration

go-wlru Thread-safe LRU cache with permanency and context-based expiration Operational Complexity (Time) Operation Best Average Worst Access Θ(1) Θ(1)

Mar 7, 2022
LevelDB style LRU cache for Go, support non GC object.

Go QQ groups: 102319854, 1055927514. The Wa programming language (凹, pronounced "Wa"): https://github.com/wa-lang/wa LRU Cache Install go get github.com/chai2010/c

Jul 5, 2020
LRU-based cache package for Go.

cache is LRU-based cache package written in vanilla Go - with no package dependency. LRU stands for Least Recently Used and it is one of the famous cache replacement algorithm

Sep 8, 2022
Solution for Leetcode problem: 146. LRU Cache

Solution for Leetcode problem: 146. LRU Cache link My solution for the above lee

Jan 30, 2022
It is a cache system that supports the http port.

jarjarbinks This service has two different endpoints that are only used to save cache entry and find the saved entry with the relevant key. The cache

Jan 31, 2022
Gin-cache - Gin cache middleware with golang

Nov 28, 2022
Package cache is a middleware that provides the cache management for Flamego.

cache Package cache is a middleware that provides the cache management for Flamego. Installation The minimum requirement of Go is 1.16. go get github.

Nov 9, 2022
A mem cache based on another populator cache, adding the following features

memcache a mem cache base on other populator cache, add following feacture add lazy load(using expired data, and load it asynchronous) add singlefligh

Oct 28, 2021
Cache - A simple cache implementation

Cache A simple cache implementation LRU Cache An in memory cache implementation

Jan 25, 2022
gdcache is a pure non-intrusive distributed cache library implemented by golang

gdcache is a pure non-intrusive distributed cache library implemented by golang, you can use it to implement your own distributed cache

Sep 26, 2022
Gocodecache - An in-memory cache library for code value master in Golang

gocodecache An in-memory cache library for code master in Golang. Installation g

Jun 23, 2022
groupcache is a caching and cache-filling library, intended as a replacement for memcached in many cases.

groupcache Summary groupcache is a distributed caching and cache-filling library, intended as a replacement for a pool of memcached nodes in many case

Dec 31, 2022