Concurrent data structures for Go

xsync

Concurrent data structures for Go. An extension for the standard sync package.

This library should be considered experimental, so make sure to run tests and benchmarks for your use cases before adding it to your application.

Important note. Only 64-bit builds are officially supported at the moment. If you need a 32-bit build, make sure to test it and open a GitHub issue in case of any problems.

Benchmarks

Benchmark results may be found here.

Counter

A Counter is a striped int64 counter inspired by the j.u.c.a.LongAdder class from the Java standard library.

var c xsync.Counter
// increment and decrement the counter
c.Inc()
c.Dec()
// read the current value 
v := c.Value()

A Counter performs better than a single atomically updated int64 counter in high-contention scenarios.
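
For instance, a minimal sketch of concurrent use (the goroutine and iteration counts here are arbitrary; assumes the standard fmt and sync imports):

var c xsync.Counter
var wg sync.WaitGroup
// each goroutine mostly updates its own stripe, so there is
// little contention on any single cache line
for i := 0; i < 8; i++ {
	wg.Add(1)
	go func() {
		defer wg.Done()
		for j := 0; j < 1000; j++ {
			c.Inc()
		}
	}()
}
wg.Wait()
fmt.Println(c.Value()) // 8000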

Map

A Map is a concurrent hash-table-based map. It follows the interface of sync.Map.

m := xsync.NewMap()
m.Store("foo", "bar")
v, ok := m.Load("foo")

Map uses a modified version of Cache-Line Hash Table (CLHT) data structure: https://github.com/LPD-EPFL/CLHT

CLHT is built around the idea of organizing the hash table in cache-line-sized buckets, so that on all modern CPUs update operations complete with at most one cache-line transfer. Get operations involve no writes to memory, as well as no mutexes or any other sort of locks. Due to this design, Map outperforms sync.Map in all considered scenarios.

One important difference from sync.Map is that only string keys are supported. That's because the Go standard library does not expose the built-in hash functions for interface{} values.
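
Since values are stored as interface{}, reads typically end with a type assertion. A minimal sketch:

m := xsync.NewMap()
m.Store("foo", 42)
if v, ok := m.Load("foo"); ok {
	n := v.(int) // the caller asserts the concrete value type
	fmt.Println(n) // 42
}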

MPMCQueue

An MPMCQueue is a bounded multi-producer multi-consumer concurrent queue.

q := xsync.NewMPMCQueue(1024)
// producer inserts an item into the queue
q.Enqueue("foo")
// optimistic insertion attempt; doesn't block
inserted := q.TryEnqueue("bar")
// consumer obtains an item from the queue
item := q.Dequeue()
// optimistic obtain attempt; doesn't block
item, ok := q.TryDequeue()

Based on the algorithm from the MPMCQueue C++ library, which, in its turn, references D. Vyukov's MPMC queue. According to the following classification, the queue is array-based, fails on overflow, provides causal FIFO, and has blocking producers and consumers.

The idea of the algorithm is to allow parallelism for concurrent producers and consumers by introducing the notion of tickets, i.e. values of two counters, one for producers and one for consumers. An atomic increment of one of those counters is the only noticeable contention point in queue operations. The rest of the operation avoids contention on writes thanks to turn-based read/write access to each queue item.

In essence, MPMCQueue is a specialized queue for scenarios where there are multiple concurrent producers and consumers of a single queue running on a large multicore machine.

To get optimal performance, you may want to set the queue size to be large enough, say, an order of magnitude greater than the number of producers/consumers, to allow producers and consumers to progress with their queue operations in parallel most of the time.
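
As an illustration, here is a minimal multi-producer multi-consumer sketch (queue size and goroutine counts are arbitrary; assumes the standard fmt and sync imports):

q := xsync.NewMPMCQueue(1024)
var wg sync.WaitGroup
// 4 producers enqueue 100 items each
for i := 0; i < 4; i++ {
	wg.Add(1)
	go func(id int) {
		defer wg.Done()
		for j := 0; j < 100; j++ {
			q.Enqueue(fmt.Sprintf("item-%d-%d", id, j))
		}
	}(i)
}
// 4 consumers dequeue 100 items each; Dequeue blocks
// while the queue is empty
for i := 0; i < 4; i++ {
	wg.Add(1)
	go func() {
		defer wg.Done()
		for j := 0; j < 100; j++ {
			_ = q.Dequeue()
		}
	}()
}
wg.Wait()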

RBMutex

An RBMutex is a reader-biased reader/writer mutual exclusion lock. The lock can be held by many readers or a single writer.

var m xsync.RBMutex
// reader lock calls return a token
t := m.RLock()
// the token must be later used to unlock the mutex
m.RUnlock(t)
// writer locks are the same as in sync.RWMutex
m.Lock()
m.Unlock()

RBMutex is based on the BRAVO (Biased Locking for Reader-Writer Locks) algorithm: https://arxiv.org/pdf/1810.01553.pdf

The idea of the algorithm is to build on top of an existing reader-writer mutex and introduce a fast path for readers. On the fast path, reader lock attempts are sharded over an internal array based on the reader identity (a token in the case of Go). This means that readers do not contend over a single atomic counter, as they do in, say, sync.RWMutex, which allows for better scalability in terms of cores.

Hence, by design, RBMutex is a specialized mutex for scenarios, such as caches, where the vast majority of locks are acquired by readers and write lock acquisitions are infrequent. In such scenarios, RBMutex should perform better than sync.RWMutex on large multicore machines.

RBMutex extends sync.RWMutex internally and uses it as the "reader bias disabled" fallback, so the same semantics apply. The only noticeable difference is the reader token returned from RLock, which must be passed to RUnlock.
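
For example, a read-mostly cache guarded by RBMutex might look like the following sketch (the cache type is hypothetical):

type cache struct {
	mu   xsync.RBMutex
	data map[string]string
}

func (c *cache) get(key string) (string, bool) {
	t := c.mu.RLock() // fast path: a sharded reader slot, no shared counter
	v, ok := c.data[key]
	c.mu.RUnlock(t)
	return v, ok
}

func (c *cache) set(key, value string) {
	c.mu.Lock() // slow path: the writer disables the reader bias
	c.data[key] = value
	c.mu.Unlock()
}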

License

Licensed under MIT.

Owner
Andrey Pechkurov
Software engineer. Node.js contributor. Occasional tech blogger and speaker.
Comments
  • Panic while storing

    I was trying to benchmark the map implementation against sync.Map under concurrent load, but could not because of a panic:

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x56b27a]
    
    goroutine 73 [running]:
    github.com/puzpuzpuz/xsync.(*Map).doStore(0xc00009e6c0, {0xc0000b4028, 0x7}, {0x59bb40, 0xc0000d6040}, 0x0)
    	/home/runner/go/pkg/mod/github.com/puzpuzpuz/[email protected]/map.go:190 +0x21a
    github.com/puzpuzpuz/xsync.(*Map).Store(...)
    	/home/runner/go/pkg/mod/github.com/puzpuzpuz/[email protected]/map.go:163
    github.com/veartutop/cachex_test.xSyncMap.make({0x0, 0x0, 0x4800}, 0x0, 0xf4240)
    	/home/runner/work/cache/cache/benchmark_test.go:330 +0x12a
    github.com/veartutop/cachex_test.Benchmark_concurrent.func1(0xc0000cb200)
    

    Reproducer: https://github.com/vearutop/cache/pull/6/checks?check_run_id=3389323336 https://github.com/vearutop/cache/blob/xsync/benchmark_test.go#L31 https://github.com/vearutop/cache/blob/xsync/benchmark_test.go#L312-L364

    The benchmark stores 1e6 entries into the map and then reads "random" keys from multiple goroutines with occasional 10% writes.

    The same benchmark passes for sync.Map; please help me check whether I'm misusing this lib.

  • Specialized versions of Map: CounterMap, ByteSliceMap, etc.

    The current xsync.Map supports string keys and interface{} values. This is sufficient for many use cases, but not all of them, which is why specialized map data structures could be added to the library.

    IMO the most interesting one is CounterMap, which would hold int64s as both keys and values and support Inc/Dec operations. Such a map may be useful in certain niche use cases. It should also outperform any synchronized built-in map, as well as xsync.Map.

    Another option might be a map that would support []byte slices as keys. This is not supported by the built-in maps (both map and sync.Map) since slices are not comparable.

    I'll be collecting feedback to understand the demand for specialized map collections, so leave comments on this issue if you need them.

  • MapOf: Generic Keys

    Support for generic keys, and an accompanying hasher func, as discussed in #17

    This shouldn't be a breaking change for function calls, but since the MapOf type has changed from type MapOf[V any] struct to type MapOf[K, V any] struct, this isn't a fully backwards-compatible change.

    Usage:

    // Existing string-keyed API is unchanged
    m := NewMapOf[Widget]()
    
    // For arbitrary key types
    hasher := func(k User) uint64 { ... }
    m := NewTypedMapOf[User, Widget](hasher)
    m.Store(User{}, Widget{})
    
    // uint64 hasher is a no-op
    hasher := func(k uint64) uint64 { return k }
    m := NewTypedMapOf[uint64, Widget](hasher)
    m.Store(1234, Widget{})
    
    Benchmarks are unchanged
    name                              old time/op  new time/op  delta
    Counter-8                         4.35ns ± 0%  2.35ns ± 0%   ~     (p=1.000 n=1+1)
    AtomicInt64-8                     65.8ns ± 0%  66.9ns ± 0%   ~     (p=1.000 n=1+1)
    Map_NoWarmUp/99%-reads-8          28.8ns ± 0%  28.5ns ± 0%   ~     (p=1.000 n=1+1)
    Map_NoWarmUp/90%-reads-8          42.4ns ± 0%  42.5ns ± 0%   ~     (p=1.000 n=1+1)
    Map_NoWarmUp/75%-reads-8          56.1ns ± 0%  47.5ns ± 0%   ~     (p=1.000 n=1+1)
    Map_NoWarmUp/50%-reads-8          85.3ns ± 0%  57.5ns ± 0%   ~     (p=1.000 n=1+1)
    Map_NoWarmUp/0%-reads-8           83.2ns ± 0%  73.1ns ± 0%   ~     (p=1.000 n=1+1)
    MapStandard_NoWarmUp/99%-reads-8   386ns ± 0%   346ns ± 0%   ~     (p=1.000 n=1+1)
    MapStandard_NoWarmUp/90%-reads-8   452ns ± 0%   430ns ± 0%   ~     (p=1.000 n=1+1)
    MapStandard_NoWarmUp/75%-reads-8   532ns ± 0%   426ns ± 0%   ~     (p=1.000 n=1+1)
    MapStandard_NoWarmUp/50%-reads-8   418ns ± 0%   449ns ± 0%   ~     (p=1.000 n=1+1)
    MapStandard_NoWarmUp/0%-reads-8    496ns ± 0%   464ns ± 0%   ~     (p=1.000 n=1+1)
    Map_WarmUp/100%-reads-8           49.5ns ± 0%  48.6ns ± 0%   ~     (p=1.000 n=1+1)
    Map_WarmUp/99%-reads-8            51.6ns ± 0%  47.8ns ± 0%   ~     (p=1.000 n=1+1)
    Map_WarmUp/90%-reads-8            48.0ns ± 0%  46.8ns ± 0%   ~     (p=1.000 n=1+1)
    Map_WarmUp/75%-reads-8            51.1ns ± 0%  48.3ns ± 0%   ~     (p=1.000 n=1+1)
    Map_WarmUp/50%-reads-8            59.4ns ± 0%  53.6ns ± 0%   ~     (p=1.000 n=1+1)
    Map_WarmUp/0%-reads-8             68.8ns ± 0%  66.5ns ± 0%   ~     (p=1.000 n=1+1)
    MapStandard_WarmUp/100%-reads-8    126ns ± 0%   111ns ± 0%   ~     (p=1.000 n=1+1)
    MapStandard_WarmUp/99%-reads-8     240ns ± 0%   162ns ± 0%   ~     (p=1.000 n=1+1)
    MapStandard_WarmUp/90%-reads-8     214ns ± 0%   199ns ± 0%   ~     (p=1.000 n=1+1)
    MapStandard_WarmUp/75%-reads-8     294ns ± 0%   283ns ± 0%   ~     (p=1.000 n=1+1)
    MapStandard_WarmUp/50%-reads-8     442ns ± 0%   443ns ± 0%   ~     (p=1.000 n=1+1)
    MapStandard_WarmUp/0%-reads-8      616ns ± 0%   614ns ± 0%   ~     (p=1.000 n=1+1)
    MapRange-8                        6.35ms ± 0%  6.34ms ± 0%   ~     (p=1.000 n=1+1)
    MapRangeStandard-8                6.11ms ± 0%  6.06ms ± 0%   ~     (p=1.000 n=1+1)
    MapOf_NoWarmUp/99%-reads-8        28.3ns ± 0%  28.9ns ± 0%   ~     (p=1.000 n=1+1)
    MapOf_NoWarmUp/90%-reads-8        39.1ns ± 0%  41.6ns ± 0%   ~     (p=1.000 n=1+1)
    MapOf_NoWarmUp/75%-reads-8        47.0ns ± 0%  48.2ns ± 0%   ~     (p=1.000 n=1+1)
    MapOf_NoWarmUp/50%-reads-8        63.6ns ± 0%  62.9ns ± 0%   ~     (p=1.000 n=1+1)
    MapOf_NoWarmUp/0%-reads-8         76.5ns ± 0%  71.8ns ± 0%   ~     (p=1.000 n=1+1)
    MapOf_WarmUp/100%-reads-8         50.9ns ± 0%  51.1ns ± 0%   ~     (p=1.000 n=1+1)
    MapOf_WarmUp/99%-reads-8          49.7ns ± 0%  52.9ns ± 0%   ~     (p=1.000 n=1+1)
    MapOf_WarmUp/90%-reads-8          59.6ns ± 0%  51.0ns ± 0%   ~     (p=1.000 n=1+1)
    MapOf_WarmUp/75%-reads-8          55.7ns ± 0%  51.9ns ± 0%   ~     (p=1.000 n=1+1)
    MapOf_WarmUp/50%-reads-8          56.4ns ± 0%  56.2ns ± 0%   ~     (p=1.000 n=1+1)
    MapOf_WarmUp/0%-reads-8           69.2ns ± 0%  67.3ns ± 0%   ~     (p=1.000 n=1+1)
    MapOfRange-8                      7.83ms ± 0%  7.25ms ± 0%   ~     (p=1.000 n=1+1)
    QueueProdCons-8                    164ns ± 0%   172ns ± 0%   ~     (p=1.000 n=1+1)
    QueueProdConsWork100-8             191ns ± 0%   181ns ± 0%   ~     (p=1.000 n=1+1)
    ChanProdCons-8                    74.1ns ± 0%  75.4ns ± 0%   ~     (p=1.000 n=1+1)
    ChanProdConsWork100-8              419ns ± 0%   402ns ± 0%   ~     (p=1.000 n=1+1)
    RBMutexReadOnly-8                 2.45ns ± 0%  2.42ns ± 0%   ~     (p=1.000 n=1+1)
    RBMutexWrite10000-8               8.44ns ± 0%  9.57ns ± 0%   ~     (p=1.000 n=1+1)
    RBMutexWrite1000-8                25.6ns ± 0%  25.3ns ± 0%   ~     (p=1.000 n=1+1)
    RBMutexWrite100-8                 60.1ns ± 0%  58.9ns ± 0%   ~     (p=1.000 n=1+1)
    RBMutexWorkReadOnly-8             21.4ns ± 0%  20.0ns ± 0%   ~     (p=1.000 n=1+1)
    RBMutexWorkWrite10000-8           30.1ns ± 0%  31.3ns ± 0%   ~     (p=1.000 n=1+1)
    RBMutexWorkWrite1000-8            60.5ns ± 0%  58.7ns ± 0%   ~     (p=1.000 n=1+1)
    RBMutexWorkWrite100-8              125ns ± 0%   125ns ± 0%   ~     (p=1.000 n=1+1)
    RWMutexReadOnly-8                  127ns ± 0%   139ns ± 0%   ~     (p=1.000 n=1+1)
    RWMutexWrite10000-8                128ns ± 0%   123ns ± 0%   ~     (p=1.000 n=1+1)
    RWMutexWrite1000-8                84.7ns ± 0%  83.0ns ± 0%   ~     (p=1.000 n=1+1)
    RWMutexWrite100-8                 44.0ns ± 0%  43.1ns ± 0%   ~     (p=1.000 n=1+1)
    RWMutexWorkReadOnly-8              150ns ± 0%   152ns ± 0%   ~     (p=1.000 n=1+1)
    RWMutexWorkWrite10000-8            151ns ± 0%   142ns ± 0%   ~     (p=1.000 n=1+1)
    RWMutexWorkWrite1000-8             132ns ± 0%   123ns ± 0%   ~     (p=1.000 n=1+1)
    RWMutexWorkWrite100-8              103ns ± 0%    97ns ± 0%   ~     (p=1.000 n=1+1)
    
  • Map: add way to provide initial size hints

    Since the difference in performance caused by map growth is noticeable, it would be great if we could initialize the map with a size hint to try to avoid growth on insertion.

    The fact that we spread keys across buckets makes this non-trivial, but maybe it would be enough to assume normally distributed keys and just grow the buckets proportionally to the initial hint?

    I've only skimmed the code, but I could try a PR if there's no obvious blocker I missed and nobody else beats me to it!
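
    For reference, a pre-sizing constructor along the lines of the NewMapPresized function shown later on this page would make the usage look like this sketch:

    // hint that we expect about a million entries, so that the
    // initial table is large enough to avoid most growths
    m := xsync.NewMapPresized(1_000_000)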

  • github.com/puzpuzpuz/xsync@v2.0.0: invalid version: module contains a go.mod file, so module path must match major version ("github.com/puzpuzpuz/xsync/v2")

    Thank you for releasing 2.0.0!!! This library is awesome :-)

    Unfortunately, I can't install 2.0.0 right now:

    $ go get github.com/puzpuzpuz/xsync
    go: added github.com/puzpuzpuz/xsync v1.5.2
    
    $ go get github.com/puzpuzpuz/xsync@v2.0.0
    go: github.com/puzpuzpuz/xsync@v2.0.0: invalid version: module contains a go.mod file, so module path must match major version ("github.com/puzpuzpuz/xsync/v2")
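
    Assuming the v2 tag's go.mod declares the /v2 module path, the fix on the consumer side is to fetch the module with its major-version suffix:

    $ go get github.com/puzpuzpuz/xsync/v2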
    
  • Use hash/maphash instead of go:linkname hacks

    This PR has two commits. The first uses maphash only as an implementation detail, without any API changes. The second makes the usage of maphash part of the API, which would simplify implementing good hash functions for custom types.

  • Map: make size() public

    Hello,

    It would be really nice to have this function in the public API: https://github.com/puzpuzpuz/xsync/blob/d5f0f9d5b79c39bf68bf9771f52f477262972f6f/mapof.go#L325

    Thanks!

  • nilVal is not unique

    The compiler is free to make new(struct{}) == new(struct{}), and indeed does so. This means Map can get confused and return nil if a user tries to store new(struct{}).

    I suggest using the address of a global int instead, for example:

    var nilVal int // use &nilVal
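
    A quick illustration (a sketch): the Go spec allows two distinct zero-size variables to share an address, so this program may print true:

    package main

    import "fmt"

    func main() {
    	a, b := new(struct{}), new(struct{})
    	// both allocations are zero-size, so the runtime is free
    	// to return the same address for both of them
    	fmt.Println(a == b)
    }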
    
  • [Question] CPU cache friendly byte slice

    Thank you for your library, articles, and talks. Could you please help me?

    I have a logger that writes many messages in parallel to stdout. The problem is that messages get written simultaneously and end up shuffled, so I had to add a mutex and lock before printing:

    l.mu.Lock()
    fmt.Fprintf(os.Stdout, format, v...)
    l.mu.Unlock()
    

    I'd like to avoid the locking because I need latency to be as low as possible, but I'm fine with some pauses and I don't care much about the order of messages. On my server I have 24 CPUs, and each has its own cache. I have an idea to keep a per-CPU list of byte slices and then periodically gather all of them and dump them to a log. Can this work in practice? I feel like I'm reinventing some existing structure. Could you please recommend an optimal way to do this?

    I see that state striping is something that could probably help me, and I see that the library has a Counter that uses the same principle.

    I also asked the question on SO https://stackoverflow.com/questions/74954360/cpu-cache-friendly-byte-slice
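
    For what it's worth, here is a minimal sketch of the striping idea mentioned above (all names are hypothetical; each stripe gets its own mutex and buffer, padded apart to reduce false sharing):

    type stripe struct {
    	mu  sync.Mutex
    	buf []byte
    	_   [64]byte // padding to keep stripes on separate cache lines
    }

    type stripedLog struct {
    	stripes [24]stripe // roughly one per CPU; picked by a cheap index
    }

    func (l *stripedLog) write(id int, msg []byte) {
    	s := &l.stripes[id%len(l.stripes)]
    	s.mu.Lock()
    	s.buf = append(s.buf, msg...)
    	s.mu.Unlock()
    }

    // flush runs periodically in a single goroutine and dumps all
    // stripes to the writer, holding each stripe lock only briefly
    func (l *stripedLog) flush(w io.Writer) {
    	for i := range l.stripes {
    		s := &l.stripes[i]
    		s.mu.Lock()
    		w.Write(s.buf)
    		s.buf = s.buf[:0]
    		s.mu.Unlock()
    	}
    }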

  • Update test

    I'm glad the library keeps getting better. My company has used it in many projects; thank you.

    By the way, it would be more meaningful to change t.Error to t.Fatal, so I recommend replacing it.

    If you find this useless, please feel free to close it directly.

  • LoadOrStore: actually return the given value

    The method docs for LoadOrStore mention:

    it stores and returns the given value

    but the current implementation does not return the provided value when !loaded.

    Maybe I'm overlooking something obvious? :thinking:
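
    To make the expectation concrete, a minimal sketch of the documented behavior:

    m := xsync.NewMap()
    v, loaded := m.LoadOrStore("foo", "bar")
    // per the docs, when the key was absent (!loaded), v should be
    // the given value, i.e. "bar"
    fmt.Println(v, loaded) // expected output: bar false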

  • Is it possible to use zero value of Map/MapOf struct?

    hi,

    followed over here from the golang issue.

    I was testing the library as a drop-in replacement for our existing sync.Map and a generic sync.MapOf[K,V] wrapper that we use, but a big issue is that the zero value isn't valid, which would result in a lot of incompatible refactoring for our code. It worked great when I did initialize them.

    Ideally, something like this would not panic:

    func TestMap_ZeroValueValid(t *testing.T) {
    	EnableAssertions()
    	m := new(Map)
    	v := 42
    	m.Store("foo", v)
    	m.Store("foo", v)
    	DisableAssertions()
    }
    

    I have a naive solution that uses a sync.Once; happy to open a PR if this is something you would consider changing.

    All I did was just add a sync.Once:

    type Map struct {
    	totalGrowths int64
    	totalShrinks int64
    	resizing     int64          // resize in progress flag; updated atomically
    	resizeMu     sync.Mutex     // only used along with resizeCond
    	resizeCond   sync.Cond      // used to wake up resize waiters (concurrent modifications)
    	table        unsafe.Pointer // *mapTable
    
    	initialized sync.Once
    }
    

    Then I added an init function:

    func (m *Map) init(sizeHint int) {
    	m.initialized.Do(func() {
    		m.resizeCond = *sync.NewCond(&m.resizeMu)
    		var table *mapTable
    		if sizeHint <= minMapTableCap {
    			table = newMapTable(minMapTableLen)
    		} else {
    			tableLen := nextPowOf2(uint32(sizeHint / entriesPerMapBucket))
    			table = newMapTable(int(tableLen))
    		}
    		atomic.StorePointer(&m.table, unsafe.Pointer(table))
    	})
    }
    

    And changed the NewMapPresized function:

    func NewMapPresized(sizeHint int) *Map {
    	m := &Map{}
    	m.init(sizeHint)
    	return m
    }
    

    Then I added init to these entry points:

    func (m *Map) Load(key string) (value interface{}, ok bool) {
    	m.init(minMapTableCap)
    ...
    }
    func (m *Map) Clear() {
    	m.init(minMapTableCap)
    ...
    }
    func (m *Map) Size() int {
    	m.init(minMapTableCap)
    ...
    }
    func (m *Map) Range(f func(key string, value interface{}) bool) {
    	m.init(minMapTableCap)
    ...
    }
    func (m *Map) doCompute(
    	key string,
    	valueFn func(oldValue interface{}, loaded bool) (interface{}, bool),
    	loadIfExists, computeOnly bool,
    ) (interface{}, bool) {
    	m.init(minMapTableCap)
    }
    

    And the same thing for the generic version.

  • Built-in hash function for comparable types

    Hey @puzpuzpuz, thanks for such a cool concurrent data synchronization package!

    I really like the example of a hash function for a comparable structure, because there is no magic in it, just math: https://github.com/puzpuzpuz/xsync/blob/051117f622763ecb6d970af5d2e8130bcb0e0e96/example_test.go#L16-L32

    And this comment gave me the idea of getting the built-in hashing function from the map, in a slightly tricky and unsafe magical way: https://github.com/puzpuzpuz/xsync/blob/051117f622763ecb6d970af5d2e8130bcb0e0e96/mapof.go#L32-L33

    Here's what I got: https://goplay.tools/snippet/w91su4PCNMY

    func main() {
    	type t struct{ int }
    
    	hash := HashFuncOf[t]()
    	if hash == nil {
    		log.Fatalln("hash func is nil")
    	}
    
    	seed := maphash.MakeSeed()
    	a, b := hash.Sum64(seed, t{1}), hash.Sum64(seed, t{1})
    	fmt.Println(a == b)
    
    	// Output:
    	// true
    }
    

    This is not safe because future versions of Go may change the internal layout of types. On the other hand, it might be OK if you put this idea in a separate package like github.com/puzpuzpuz/xsync/unsafehasher and warn the user about the potential problem in the documentation or in an init function, like this:

    // This init function will prevent the application from starting
    // if the internals of golang types change in such a way that
    // this package causes a panic.
    func init() {
    	type t struct{ bool; int; string; float64 }
    	HashFuncOf[t]().Sum64(maphash.MakeSeed(), t{})
    }
    

    Also, the trick has a downside: the built-in hash function requires passing the value by pointer, which can negatively affect performance, but that can probably be mitigated using sync.Pool.

    What do you think about supporting this way of creating hash functions?

    Also, it seems that the comment about string keys is superfluous here; apparently it was accidentally copied from the Map implementation: https://github.com/puzpuzpuz/xsync/blob/051117f622763ecb6d970af5d2e8130bcb0e0e96/mapof.go#L31-L34

  • Alternative epoch-based design for Maps

    Just like in sync.Map, the Store operation in Map allocates an intermediate interface{} struct for each value (see #1 for more details). Hence, each value pointer stored in map buckets references the following chain: *interface{} -> interface{} -> value. If we could specialize the Map to a concrete value type (say, with the upcoming generics language feature), we could get rid of the intermediate interface{} struct, and the chain could be simplified to: *value -> value. There are multiple ways to achieve this, and this issue describes one of them.

    The idea is to replace atomic snapshots (for which we need *interface{} pointers) with epoch-based reader-writer communication.

    Currently the bucket layout looks like the following:

    | bucket mutex	| keys array		| values array		| pointer to next bucket  |
    | 8 bytes	| 24 bytes (3 pointers)	| 24 bytes (3 pointers)	| 8 bytes		  |
    |<-					one cache line (64 bytes)			->|
    

    The epoch-based design would need to change it to this:

    | bucket mutex	| keys array		| values array		| epoch			  |
    | 8 bytes	| 24 bytes (3 pointers)	| 24 bytes (3 pointers)	| 8 bytes		  |
    |<-					one cache line (64 bytes)			->|
    

    Notice that the linked list (chain of buckets) is now gone, replaced with an epoch counter (uint64). Each epoch counter is per bucket and goes through two phases:

    1. Even value - update finished phase
    2. Odd value - update in progress phase

    Note: a 64B bucket fits only 3 entries, so the table will only be able to hold 3*number_of_buckets entries. That could be improved by using 128B buckets, which is enough to fit 7 entries, although that would have a slight impact on write performance.

    Writers should do the following when updating a bucket:

    1. Lock the bucket
    2. Increment the epoch atomically, so that it's odd (update in progress)
    3. Execute the usual write logic
    4. Increment the epoch atomically again, so that it's even (update finished), and unlock the bucket

    Readers should execute the following seqlock-style logic when reading from a bucket:

    1. Read the epoch atomically. If it's odd, goto 1
    2. Scan the bucket. If the entry is not there, return nil, false
    3. If the entry is found, read the epoch atomically. If the epoch doesn't match the value obtained at step 1, spin over the epoch while re-reading the entry (we could do a goto 1 here, but that's not necessary)

    Just like with atomic snapshots, the above design assumes that readers do not block each other, and writers allow readers to scan the table concurrently (although readers might need to do a few spins until their epoch check succeeds).
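
    A rough Go sketch of this seqlock-style protocol (the bucket is simplified to a single entry, all names are hypothetical, and real code would also access the key/value fields atomically). Note that, matching step 4 above, the writer increments the epoch at both ends of the write so that a reader can detect a write that completed between its two epoch loads:

    type bucket struct {
    	mu    sync.Mutex
    	key   string
    	value unsafe.Pointer // *V, no intermediate interface{}
    	epoch uint64
    }

    func (b *bucket) store(k string, v unsafe.Pointer) {
    	b.mu.Lock()
    	atomic.AddUint64(&b.epoch, 1) // odd: update in progress
    	b.key, b.value = k, v
    	atomic.AddUint64(&b.epoch, 1) // even: update finished
    	b.mu.Unlock()
    }

    func (b *bucket) load(k string) (unsafe.Pointer, bool) {
    	for {
    		e := atomic.LoadUint64(&b.epoch)
    		if e&1 == 1 {
    			continue // update in progress, spin
    		}
    		key, v := b.key, b.value
    		if atomic.LoadUint64(&b.epoch) != e {
    			continue // a write slipped in, re-read
    		}
    		if key == k {
    			return v, true
    		}
    		return nil, false
    	}
    }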
