
Boom Filters


Boom Filters are probabilistic data structures for processing continuous, unbounded streams. This includes Stable Bloom Filters, Scalable Bloom Filters, Counting Bloom Filters, Inverse Bloom Filters, Cuckoo Filters, several variants of traditional Bloom filters, HyperLogLog, Count-Min Sketch, and MinHash.

Classic Bloom filters generally require a priori knowledge of the data set in order to allocate an appropriately sized bit array. This works well for offline processing, but online processing typically involves unbounded data streams. With enough data, a traditional Bloom filter "fills up", after which it has a false-positive probability of 1.

Boom Filters are useful for situations where the size of the data set isn't known ahead of time. For example, a Stable Bloom Filter can be used to deduplicate events from an unbounded event stream with a specified upper bound on false positives and minimal false negatives. Alternatively, an Inverse Bloom Filter is ideal for deduplicating a stream where duplicate events are relatively close together. This results in no false positives and, depending on how close together duplicates are, a small probability of false negatives. Scalable Bloom Filters place a tight upper bound on false positives while avoiding false negatives but require allocating memory proportional to the size of the data set. Counting Bloom Filters and Cuckoo Filters are useful for cases which require adding and removing elements to and from a set.

For large or unbounded data sets, calculating the exact cardinality is impractical. HyperLogLog uses a fraction of the memory while providing an accurate approximation. Similarly, Count-Min Sketch provides an efficient way to estimate event frequency for data streams, while Top-K tracks the top-k most frequent elements.

MinHash is a probabilistic algorithm to approximate the similarity between two sets. This can be used to cluster or compare documents by splitting the corpus into a bag of words.

Installation

$ go get github.com/tylertreat/BoomFilters

Stable Bloom Filter

This is an implementation of Stable Bloom Filters as described by Deng and Rafiei in Approximately Detecting Duplicates for Streaming Data using Stable Bloom Filters.

A Stable Bloom Filter (SBF) continuously evicts stale information so that it has room for more recent elements. Like traditional Bloom filters, an SBF has a non-zero probability of false positives, which is controlled by several parameters. Unlike the classic Bloom filter, an SBF has a tight upper bound on the rate of false positives while introducing a non-zero rate of false negatives. The false-positive rate of a classic Bloom filter eventually reaches 1, after which all queries result in a false positive. The stable-point property of an SBF means the false-positive rate asymptotically approaches a configurable fixed constant. A classic Bloom filter is actually a special case of SBF where the eviction rate is zero and the cell size is one, so this provides support for them as well (in addition to bitset-based Bloom filters).

Stable Bloom Filters are useful for cases where the size of the data set isn't known a priori and memory is bounded. For example, an SBF can be used to deduplicate events from an unbounded event stream with a specified upper bound on false positives and minimal false negatives.
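
For intuition, the Add path can be sketched in a few lines. This is a simplified illustration of the paper's eviction idea, not this package's implementation; the type, field, and parameter names are hypothetical.

package main

import (
    "fmt"
    "hash/fnv"
    "math/rand"
)

// sbfSketch illustrates the stable eviction idea: before setting an
// element's k cells to the maximum value, decrement p randomly chosen
// cells. Stale elements fade out over time, which bounds the
// false-positive rate at the cost of occasional false negatives.
type sbfSketch struct {
    cells []uint8
    k, p  int
    max   uint8 // 2^d - 1 for d-bit cells; 1 gives a classic Bloom filter
}

func (s *sbfSketch) indexes(data []byte) []int {
    h := fnv.New64()
    h.Write(data)
    sum := h.Sum64()
    lower, upper := uint64(uint32(sum)), sum>>32
    idxs := make([]int, s.k)
    for i := range idxs {
        idxs[i] = int((lower + upper*uint64(i)) % uint64(len(s.cells)))
    }
    return idxs
}

func (s *sbfSketch) Add(data []byte) {
    // Evict: decrement p random cells (the "stable" part).
    for i := 0; i < s.p; i++ {
        if j := rand.Intn(len(s.cells)); s.cells[j] > 0 {
            s.cells[j]--
        }
    }
    // Set the element's k cells to the maximum value.
    for _, j := range s.indexes(data) {
        s.cells[j] = s.max
    }
}

func (s *sbfSketch) Test(data []byte) bool {
    for _, j := range s.indexes(data) {
        if s.cells[j] == 0 {
            return false
        }
    }
    return true
}

func main() {
    s := &sbfSketch{cells: make([]uint8, 1024), k: 3, p: 4, max: 3}
    s.Add([]byte(`a`))
    fmt.Println(s.Test([]byte(`a`))) // true: recently added
}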

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    sbf := boom.NewDefaultStableBloomFilter(10000, 0.01)
    fmt.Println("stable point", sbf.StablePoint())
    
    sbf.Add([]byte(`a`))
    if sbf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !sbf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if sbf.Test([]byte(`b`)) {
        fmt.Println("now it contains b!")
    }
    
    // Restore to initial state.
    sbf.Reset()
}

Scalable Bloom Filter

This is an implementation of a Scalable Bloom Filter as described by Almeida, Baquero, Preguica, and Hutchison in Scalable Bloom Filters.

A Scalable Bloom Filter (SBF) dynamically adapts to the size of the data set while enforcing a tight upper bound on the rate of false positives and a false-negative probability of zero. This works by adding Bloom filters with geometrically decreasing false-positive rates as filters become full. A tightening ratio, r, controls the filter growth. The compounded probability over the whole series converges to a target value, even accounting for an infinite series.
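
For intuition: with an initial false-positive rate P0 and tightening ratio r, the i-th filter is created with rate P0*r^i, so the compounded rate is bounded by the geometric series P0*(1 + r + r^2 + ...) = P0/(1 - r). A quick numeric sketch of that bound (the values here are illustrative, not necessarily the package's defaults):

package main

import "fmt"

func main() {
    // Illustrative parameters: initial false-positive rate and tightening ratio.
    p0, r := 0.01, 0.8

    // Partial sums of p0*r^i converge to the closed-form bound p0/(1-r).
    sum, term := 0.0, p0
    for i := 0; i < 64; i++ {
        sum += term
        term *= r
    }
    fmt.Printf("series = %.5f, closed form = %.5f\n", sum, p0/(1-r))
}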

Scalable Bloom Filters are useful for cases where the size of the data set isn't known a priori and memory constraints aren't of particular concern. For situations where memory is bounded, consider using Inverse or Stable Bloom Filters.

The core parts of this implementation were originally written by Jian Zhen as discussed in Benchmarking Bloom Filters and Hash Functions in Go.

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    sbf := boom.NewDefaultScalableBloomFilter(0.01)
    
    sbf.Add([]byte(`a`))
    if sbf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !sbf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if sbf.Test([]byte(`b`)) {
        fmt.Println("now it contains b!")
    }
    
    // Restore to initial state.
    sbf.Reset()
}

Inverse Bloom Filter

An Inverse Bloom Filter, or "the opposite of a Bloom filter", is a concurrent, probabilistic data structure used to test whether an item has been observed or not. This implementation, originally described and written by Jeff Hodges, replaces the use of MD5 hashing with a non-cryptographic FNV-1 function.

The Inverse Bloom Filter may report a false negative but can never report a false positive. That is, it may report that an item has not been seen when it actually has, but it will never report an item as seen which it hasn't come across. This behaves in a similar manner to a fixed-size hashmap which does not handle conflicts.

This structure is particularly well-suited to streams in which duplicates are relatively close together. It uses a CAS-style approach, which makes it thread-safe.
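
The underlying idea can be sketched in a few lines. This is a conceptual illustration (requiring Go 1.19+ for atomic.Pointer), not this package's implementation; the type and method names are hypothetical.

package main

import (
    "fmt"
    "hash/fnv"
    "sync/atomic"
)

// inverseSketch keeps one slot per bucket. Observing an item atomically
// swaps it into its slot and reports whether the slot already held that
// same item. Evicting a different item is what produces false negatives;
// a "seen" answer is always genuine, so there are no false positives.
type inverseSketch struct {
    slots []atomic.Pointer[string]
}

func newInverseSketch(n int) *inverseSketch {
    return &inverseSketch{slots: make([]atomic.Pointer[string], n)}
}

func (f *inverseSketch) Observe(item string) bool {
    h := fnv.New64()
    h.Write([]byte(item))
    idx := h.Sum64() % uint64(len(f.slots))
    old := f.slots[idx].Swap(&item)
    return old != nil && *old == item
}

func main() {
    f := newInverseSketch(10000)
    fmt.Println(f.Observe("a")) // false: first sighting
    fmt.Println(f.Observe("a")) // true: seen recently
}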

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    ibf := boom.NewInverseBloomFilter(10000)
    
    ibf.Add([]byte(`a`))
    if ibf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !ibf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if ibf.Test([]byte(`b`)) {
        fmt.Println("now it contains b!")
    }
}

Counting Bloom Filter

This is an implementation of a Counting Bloom Filter as described by Fan, Cao, Almeida, and Broder in Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol.

A Counting Bloom Filter (CBF) provides a way to remove elements by using an array of n-bit buckets. When an element is added, the respective buckets are incremented. To remove an element, the respective buckets are decremented. A query checks that each of the respective buckets are non-zero. Because CBFs allow elements to be removed, they introduce a non-zero probability of false negatives in addition to the possibility of false positives.
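
The bucket arithmetic can be sketched as follows. This is a conceptual illustration using 8-bit buckets, not this package's implementation; the names are hypothetical.

package main

import (
    "fmt"
    "hash/fnv"
)

// countingSketch uses one byte per bucket: Add increments an element's k
// buckets, Remove decrements them, and Test checks that all are non-zero.
type countingSketch struct {
    buckets []uint8
    k       uint64
}

func (c *countingSketch) indexes(data []byte) []uint64 {
    h := fnv.New64()
    h.Write(data)
    sum := h.Sum64()
    lower, upper := uint64(uint32(sum)), sum>>32
    idxs := make([]uint64, c.k)
    for i := uint64(0); i < c.k; i++ {
        idxs[i] = (lower + upper*i) % uint64(len(c.buckets))
    }
    return idxs
}

func (c *countingSketch) Add(data []byte) {
    for _, i := range c.indexes(data) {
        if c.buckets[i] < 255 { // saturate rather than overflow
            c.buckets[i]++
        }
    }
}

func (c *countingSketch) Test(data []byte) bool {
    for _, i := range c.indexes(data) {
        if c.buckets[i] == 0 {
            return false
        }
    }
    return true
}

// Remove only decrements if the element tests positive; decrementing for
// an element that was never added would corrupt other elements' counts.
func (c *countingSketch) Remove(data []byte) {
    if !c.Test(data) {
        return
    }
    for _, i := range c.indexes(data) {
        c.buckets[i]--
    }
}

func main() {
    c := &countingSketch{buckets: make([]uint8, 1024), k: 4}
    c.Add([]byte(`a`))
    c.Remove([]byte(`a`))
    fmt.Println(c.Test([]byte(`a`))) // false: removed
}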

Counting Bloom Filters are useful for cases where elements are both added and removed from the data set. Since they use n-bit buckets, CBFs use roughly n-times more memory than traditional Bloom filters.

See Deletable Bloom Filter for an alternative which avoids false negatives.

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    bf := boom.NewDefaultCountingBloomFilter(1000, 0.01)
    
    bf.Add([]byte(`a`))
    if bf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !bf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if bf.TestAndRemove([]byte(`b`)) {
        fmt.Println("removed b")
    }
    
    // Restore to initial state.
    bf.Reset()
}

Cuckoo Filter

This is an implementation of a Cuckoo Filter as described by Fan, Andersen, Kaminsky, and Mitzenmacher in Cuckoo Filter: Practically Better Than Bloom. The Cuckoo Filter is similar to the Counting Bloom Filter in that it supports adding and removing elements, but it does so in a way that doesn't significantly degrade space and performance.

It works by using a cuckoo hashing scheme for inserting items. Instead of storing the elements themselves, it stores their fingerprints which also allows for item removal without false negatives (if you don't attempt to remove an item not contained in the filter).
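
The partial-key trick can be sketched as follows: with a power-of-two table, the alternate bucket is the current bucket XORed with a hash of the fingerprint, so a stored fingerprint can be relocated without the original item. This is a conceptual illustration, not this package's implementation.

package main

import (
    "fmt"
    "hash/fnv"
)

const numBuckets = 1 << 10 // power of two, so the modulo acts as a bit mask

func fingerprint(data []byte) uint16 {
    h := fnv.New32()
    h.Write(data)
    fp := uint16(h.Sum32())
    if fp == 0 {
        fp = 1 // reserve zero to mean "empty slot"
    }
    return fp
}

func hashFp(fp uint16) uint64 {
    h := fnv.New64()
    h.Write([]byte{byte(fp), byte(fp >> 8)})
    return h.Sum64()
}

// indexes derives an item's two candidate buckets. Because i2 depends only
// on i1 and the fingerprint, either bucket is recoverable from the other,
// which is what allows relocation without storing the original item.
func indexes(data []byte) (i1, i2 uint64, fp uint16) {
    fp = fingerprint(data)
    h := fnv.New64()
    h.Write(data)
    i1 = h.Sum64() % numBuckets
    i2 = (i1 ^ hashFp(fp)) % numBuckets
    return i1, i2, fp
}

func main() {
    i1, i2, fp := indexes([]byte(`a`))
    // The alternate of the alternate is the original bucket.
    fmt.Println(i1, i2, i1 == (i2^hashFp(fp))%numBuckets) // prints "... true"
}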

For applications that store many items and target moderately low false-positive rates, cuckoo filters have lower space overhead than space-optimized Bloom filters.

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    cf := boom.NewCuckooFilter(1000, 0.01)
    
    cf.Add([]byte(`a`))
    if cf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if contains, _ := cf.TestAndAdd([]byte(`b`)); !contains {
        fmt.Println("doesn't contain b")
    }
    
    if cf.TestAndRemove([]byte(`b`)) {
        fmt.Println("removed b")
    }
    
    // Restore to initial state.
    cf.Reset()
}

Classic Bloom Filter

A classic Bloom filter is a special case of a Stable Bloom Filter whose eviction rate is zero and cell size is one. We call this special case an Unstable Bloom Filter. Because cells require more memory overhead, this package also provides two bitset-based Bloom filter variations. The first variation is the traditional implementation consisting of a single bit array. The second implementation is a partitioned approach which uniformly distributes the probability of false positives across all elements.

Bloom filters have a limited capacity, depending on the configured size. Once all bits are set, the probability of a false positive is 1. However, traditional Bloom filters cannot return a false negative.

A Bloom filter is ideal for cases where the data set is known a priori because the false-positive rate can be configured by the size and number of hash functions.
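
For a known data set, the standard sizing formulas give the optimal bit count m and hash count k for n elements at a target false-positive rate p: m = -n*ln(p)/ln(2)^2 and k = (m/n)*ln(2). A quick sketch of that arithmetic (these are the textbook formulas; that this package uses exactly these internally is an assumption):

package main

import (
    "fmt"
    "math"
)

// optimalParams returns the classic Bloom filter sizing for n elements
// at target false-positive rate p.
func optimalParams(n uint, p float64) (m, k uint) {
    m = uint(math.Ceil(float64(n) * math.Log(p) / (-math.Ln2 * math.Ln2)))
    k = uint(math.Ceil(float64(m) / float64(n) * math.Ln2))
    return m, k
}

func main() {
    m, k := optimalParams(1000, 0.01)
    fmt.Printf("n=1000, p=0.01 -> m=%d bits, k=%d hashes\n", m, k)
}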

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    // We could also use boom.NewUnstableBloomFilter or boom.NewPartitionedBloomFilter.
    bf := boom.NewBloomFilter(1000, 0.01)
    
    bf.Add([]byte(`a`))
    if bf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !bf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if bf.Test([]byte(`b`)) {
        fmt.Println("now it contains b!")
    }
    
    // Restore to initial state.
    bf.Reset()
}

Count-Min Sketch

This is an implementation of a Count-Min Sketch as described by Cormode and Muthukrishnan in An Improved Data Stream Summary: The Count-Min Sketch and its Applications.

A Count-Min Sketch (CMS) is a probabilistic data structure which approximates the frequency of events in a data stream. Unlike a hash map, a CMS uses sub-linear space at the expense of a configurable error factor. Similarly to Counting Bloom filters, items are hashed to a series of buckets, each of which increments a counter. The frequency of an item is estimated by taking the minimum of the item's respective counter values.

Count-Min Sketches are useful for counting the frequency of events in massive data sets or unbounded streams online. In these situations, storing the entire data set or allocating counters for every event in memory is impractical. It may be possible for offline processing, but real-time processing requires fast, space-efficient solutions like the CMS. For approximating set cardinality, refer to the HyperLogLog.
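
For intuition, the standard sizing maps an error bound epsilon and a confidence level to the sketch's dimensions: w = ceil(e/epsilon) counters per row and d = ceil(ln(1/(1-confidence))) rows, with an item's estimate taken as the minimum over its d counters. A sketch of that arithmetic (textbook formulas; treating NewCountMinSketch's two arguments as epsilon and confidence is an assumption based on the usage below):

package main

import (
    "fmt"
    "math"
)

func main() {
    // Target error bound and confidence, matching the usage example below.
    epsilon, confidence := 0.001, 0.99

    // Standard CMS sizing: the width bounds the error, the depth bounds
    // the probability of exceeding it.
    w := uint(math.Ceil(math.E / epsilon))
    d := uint(math.Ceil(math.Log(1 / (1 - confidence))))
    fmt.Printf("width=%d, depth=%d, counters=%d\n", w, d, w*d)
}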

Usage

package main

import (
    "bytes"
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    cms := boom.NewCountMinSketch(0.001, 0.99)
    
    cms.Add([]byte(`alice`)).Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`frank`))
    fmt.Println("frequency of alice", cms.Count([]byte(`alice`)))
    fmt.Println("frequency of bob", cms.Count([]byte(`bob`)))
    fmt.Println("frequency of frank", cms.Count([]byte(`frank`)))
    

    // Serialization example
    buf := new(bytes.Buffer)
    n, err := cms.WriteDataTo(buf)
    if err != nil {
       fmt.Println(err, n)
    }

    // Restore to initial state.
    cms.Reset()

    newCMS := boom.NewCountMinSketch(0.001, 0.99)
    n, err = newCMS.ReadDataFrom(buf)
    if err != nil {
       fmt.Println(err, n)
    }

    fmt.Println("frequency of frank", newCMS.Count([]byte(`frank`)))

   
}

Top-K

Top-K uses a Count-Min Sketch and a min-heap to track the top-k most frequent elements in a stream.
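
The mechanism can be sketched as follows: on each Add, look up the element's estimated frequency (here a plain map stands in for the CMS) and keep it in a size-k min-heap, evicting the heap's minimum when a more frequent element arrives. A conceptual illustration, not this package's implementation:

package main

import (
    "container/heap"
    "fmt"
)

// entry pairs an element with its estimated frequency.
type entry struct {
    data string
    freq uint64
}

// minHeap keeps the k entries with the highest frequencies; the minimum
// sits at the root, so it is cheap to compare against and evict.
type minHeap []entry

func (h minHeap) Len() int            { return len(h) }
func (h minHeap) Less(i, j int) bool  { return h[i].freq < h[j].freq }
func (h minHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x interface{}) { *h = append(*h, x.(entry)) }
func (h *minHeap) Pop() interface{} {
    old := *h
    e := old[len(old)-1]
    *h = old[:len(old)-1]
    return e
}

func main() {
    const k = 2
    counts := map[string]uint64{} // stand-in for the CMS estimate
    h := &minHeap{}

    for _, w := range []string{"bob", "bob", "tyler", "tyler", "tyler", "fred"} {
        counts[w]++
        // Drop a stale entry for w, if present, before re-inserting.
        for i, e := range *h {
            if e.data == w {
                heap.Remove(h, i)
                break
            }
        }
        if h.Len() < k {
            heap.Push(h, entry{w, counts[w]})
        } else if counts[w] > (*h)[0].freq {
            heap.Pop(h)
            heap.Push(h, entry{w, counts[w]})
        }
    }
    fmt.Println(*h) // the k most frequent elements (unordered)
}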

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
	topk := boom.NewTopK(0.001, 0.99, 5)

	topk.Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`bob`))
	topk.Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`))
	topk.Add([]byte(`fred`))
	topk.Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`))
	topk.Add([]byte(`james`))
	topk.Add([]byte(`fred`))
	topk.Add([]byte(`sara`)).Add([]byte(`sara`))
	topk.Add([]byte(`bill`))

	for i, element := range topk.Elements() {
		fmt.Println(i, string(element.Data), element.Freq)
	}
	
	// Restore to initial state.
	topk.Reset()
}

HyperLogLog

This is an implementation of HyperLogLog as described by Flajolet, Fusy, Gandouet, and Meunier in HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm.

HyperLogLog is a probabilistic algorithm which approximates the number of distinct elements in a multiset. It works by hashing values and calculating the maximum number of leading zeros in the binary representation of each hash. If the maximum number of leading zeros is n, the estimated number of distinct elements in the set is 2^n. To minimize variance, the multiset is split into a configurable number of registers, the maximum number of leading zeros is tracked within each register, and a harmonic mean is used to combine the estimates.
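
The core observation can be sketched in a few lines: the first p bits of a hash select a register, and each register keeps the maximum number of leading zeros seen in the remaining bits. A conceptual illustration, not this package's implementation:

package main

import (
    "fmt"
    "hash/fnv"
    "math/bits"
)

const p = 4 // 2^p = 16 registers

func main() {
    registers := make([]uint8, 1<<p)

    for _, item := range []string{"alice", "bob", "bob", "frank"} {
        h := fnv.New64()
        h.Write([]byte(item))
        sum := h.Sum64()

        // The first p bits pick the register; the rank is one plus the
        // number of leading zeros in the remaining bits (the |1 guards
        // against an all-zero remainder).
        reg := sum >> (64 - p)
        rank := uint8(bits.LeadingZeros64(sum<<p|1)) + 1
        if rank > registers[reg] {
            registers[reg] = rank
        }
    }
    fmt.Println(registers) // the estimator combines these with a harmonic mean
}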

For large or unbounded data sets, calculating the exact cardinality is impractical. HyperLogLog uses a fraction of the memory while providing an accurate approximation.

This implementation was originally written by Eric Lesh. Some small changes and additions have been made, including a way to construct a HyperLogLog optimized for a particular relative accuracy and adding FNV hashing. For counting element frequency, refer to the Count-Min Sketch.

Usage

package main

import (
    "bytes"
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    hll, err := boom.NewDefaultHyperLogLog(0.1)
    if err != nil {
        panic(err)
    }
    
    hll.Add([]byte(`alice`)).Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`frank`))
    fmt.Println("count", hll.Count())

    // Serialization example
    buf := new(bytes.Buffer)
    _, err = hll.WriteDataTo(buf)
    if err != nil {
        fmt.Println(err)
    }

    // Restore to initial state.
    hll.Reset()

    newHll, err := boom.NewDefaultHyperLogLog(0.1)
    if err != nil {
        fmt.Println(err)
    }

    _, err = newHll.ReadDataFrom(buf)
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println("count", newHll.Count())
}

MinHash

This is a variation of the technique for estimating similarity between two sets as presented by Broder in On the resemblance and containment of documents.

MinHash is a probabilistic algorithm which can be used to cluster or compare documents by splitting the corpus into a bag of words. MinHash returns the approximated similarity ratio of the two bags. The similarity is less accurate for very small bags of words.
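
Conceptually, each bag is reduced to a signature of per-hash minima, and the similarity estimate is the fraction of positions where the two signatures agree. A conceptual illustration, not this package's implementation:

package main

import (
    "fmt"
    "hash/fnv"
)

// signature computes, for each of k seeded hash positions, the minimum
// hash value over the bag. Two similar bags share many of these minima,
// and the share of matching positions estimates the Jaccard similarity.
func signature(bag []string, k int) []uint64 {
    sig := make([]uint64, k)
    for i := range sig {
        sig[i] = ^uint64(0)
        for _, w := range bag {
            h := fnv.New64()
            h.Write([]byte{byte(i)}) // seed the hash per position
            h.Write([]byte(w))
            if v := h.Sum64(); v < sig[i] {
                sig[i] = v
            }
        }
    }
    return sig
}

func main() {
    bag1 := []string{"bill", "alice", "frank", "bob", "sara", "tyler", "james"}
    bag2 := []string{"bill", "alice", "frank", "bob", "sara"}

    s1, s2 := signature(bag1, 128), signature(bag2, 128)
    matches := 0
    for i := range s1 {
        if s1[i] == s2[i] {
            matches++
        }
    }
    fmt.Println("similarity =", float64(matches)/float64(len(s1))) // near 5/7
}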

Usage

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    bag1 := []string{"bill", "alice", "frank", "bob", "sara", "tyler", "james"}
    bag2 := []string{"bill", "alice", "frank", "bob", "sara"}

    fmt.Println("similarity", boom.MinHash(bag1, bag2))
}

Owner
Tyler Treat
Interested in messaging middleware, distributed systems, and cloud infrastructure.
Comments
  • CountMinSketch remove?

    Shouldn't there be a Remove / TestAndRemove API in CountMinSketch?

    I'm willing to submit a PR, just want to make sure first that there's no hole in my reasoning.

    Pseudo code:

    Compute k hashes h1, h2, ... hk
    let count1, count2, ... countk = the cells h1...hk point to
    min := Minimum(count1 ... countk)
    if min > 0
        conclude element exists
        decrement count1 ... countk by min
    else
        conclude element doesn't exist
    
  • Reduced memory allocs hashKernel, countMinSketch.Reset()

    hashKernel

    This method extracts the upper and lower portions of the hash using only bitwise operations, instead of using a slice.

    This reduces the method's allocations to zero.

    before:

    $ go test -bench=. -benchmem boom.go boom_test.go
    goos: darwin
    goarch: amd64
    cpu: Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz
    BenchmarkHashKernel-4           25947562                38.74 ns/op            8 B/op          1 allocs/op
    PASS
    

    after:

    $ go test -bench=. -benchmem boom.go boom_test.go
    goos: darwin
    goarch: amd64
    cpu: Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz
    BenchmarkHashKernel-4          80436614                12.77 ns/op            0 B/op          0 allocs/op
    PASS
    

    This should improve performance of the methods that use it, e.g. countMinSketch Add/Count.

    before:

    BenchmarkCMSAdd-4       19589024                69.54 ns/op            8 B/op          1 allocs/op
    BenchmarkCMSCount-4     19352895                67.81 ns/op            8 B/op          1 allocs/op
    

    after:

    BenchmarkCMSAdd-4       41909880                31.24 ns/op            0 B/op          0 allocs/op
    BenchmarkCMSCount-4     34261183                37.24 ns/op            0 B/op          0 allocs/op
    

    countMinSketch.Reset()

    Instead of creating a new matrix (which causes allocations), this reuses the existing matrix and resets the counters to 0.

    before:

    $ go test -bench=BenchmarkCMSReset -benchmem countmin.go countmin_test.go boom.go
    goos: darwin
    goarch: amd64
    cpu: Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz
    BenchmarkCMSReset-4         9706            118053 ns/op          663632 B/op          4 allocs/op
    PASS
    

    after:

    $ go test -bench=BenchmarkCMSReset -benchmem countmin.go countmin_test.go boom.go
    goos: darwin
    goarch: amd64
    cpu: Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz
    BenchmarkCMSReset-4        14294             82448 ns/op               0 B/op          0 allocs/op
    PASS
    
  • Calculated regionSize for DeletableBloomFilter is wrong, causes out-of-bounds panic

    The way regionSize is calculated for DeletableBloomFilter does not mesh with the size of collisions. If a value hashes to a high index (m - 1, for example) and there is a collision at that index, the index into collisions is out of bounds (equal to r). It seems like integer truncation isn't factored in during the calculation of regionSize or during idx/d.regionSize. If not that, then collisions should have r + 1 buckets.

    This was discovered in cayleygraph/cayley with this panic:

    panic: runtime error: index out of range
    
    goroutine 2752066 [running]:
    github.com/tylertreat/BoomFilters.(*Buckets).setBits(0xc42016e4e0, 0x100000078, 0xc400000001)
            /home/kevin/go/src/github.com/tylertreat/BoomFilters/buckets.go:101 +0x128
    github.com/tylertreat/BoomFilters.(*Buckets).Set(0xc42016e4e0, 0x78, 0x1, 0xc42016e4b0)
            /home/kevin/go/src/github.com/tylertreat/BoomFilters/buckets.go:62 +0x53
    github.com/tylertreat/BoomFilters.(*DeletableBloomFilter).Add(0xc42001e240, 0xc4200243c0, 0x18, 0x18, 0x8, 0x0)
            /home/kevin/go/src/github.com/tylertreat/BoomFilters/deletable.go:96 +0x106
    github.com/cayleygraph/cayley/graph/kv.(*QuadStore).bloomAdd(0xc4200dc0a0, 0xc44090cea0)
            /home/kevin/go/src/github.com/cayleygraph/cayley/graph/kv/indexing.go:943 +0xca
    github.com/cayleygraph/cayley/graph/kv.(*QuadStore).indexLink(0xc4200dc0a0, 0x8d1e20, 0xc429580d60, 0xc44090cea0, 0x0, 0x0)
            /home/kevin/go/src/github.com/cayleygraph/cayley/graph/kv/indexing.go:527 +0x22d
    github.com/cayleygraph/cayley/graph/kv.(*QuadStore).indexLinks(0xc4200dc0a0, 0x8d1b60, 0xc420022138, 0x8d1e20, 0xc429580d60, 0xc6558cc000, 0x66d63, 0x66d63, 0x0, 0xc48977d8e7086225)
            /home/kevin/go/src/github.com/cayleygraph/cayley/graph/kv/indexing.go:510 +0x116
    github.com/cayleygraph/cayley/graph/kv.(*QuadStore).ApplyDeltas(0xc4200dc0a0, 0xc608642000, 0x66d63, 0x75b8e, 0x8c0001, 0x0, 0x0)
            /home/kevin/go/src/github.com/cayleygraph/cayley/graph/kv/indexing.go:400 +0xa54
    github.com/cayleygraph/cayley/writer.(*Single).ApplyTransaction(0xc4262440a0, 0xc48c656020, 0xc50ef3d4c0, 0x8cfc00)
            /home/kevin/go/src/github.com/cayleygraph/cayley/writer/single.go:121 +0x5b
    
  • CMS serialization of matrix data

    Added simple CMS serialization, tests, and benchmarks, and updated the readme. Could you review the PR? If it's OK, I can add serialization to the other filters.

    cms := boom.NewCountMinSketch(0.001, 0.99)

    cms.Add([]byte(`alice`)).Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`frank`))
    fmt.Println("frequency of alice", cms.Count([]byte(`alice`)))
    fmt.Println("frequency of bob", cms.Count([]byte(`bob`)))
    fmt.Println("frequency of frank", cms.Count([]byte(`frank`)))

    // Serialization example
    buf := new(bytes.Buffer)
    n, err := cms.WriteDataTo(buf)
    if err != nil {
        fmt.Println(err, n)
    }

    newCMS := boom.NewCountMinSketch(0.001, 0.99)
    n, err = newCMS.ReadDataFrom(buf)
    if err != nil {
        fmt.Println(err, n)
    }

    fmt.Println("frequency of frank", newCMS.Count([]byte(`frank`)))
    
  • topk, support to get most frequent elements along with its frequency & fixed panic error in topk

    topk usage

    package main
    
    import (
        "fmt"
        "github.com/tylertreat/BoomFilters"
    )
    
    const NUM_TOPK_ELEMENTS = 10
    
    func main() {
        topk := boom.NewTopK(0.001, 0.99, NUM_TOPK_ELEMENTS)
    
        topk.Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`bob`))
        topk.Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`))
        topk.Add([]byte(`fred`))
        topk.Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`))
        topk.Add([]byte(`james`))
        topk.Add([]byte(`fred`))
        topk.Add([]byte(`sara`)).Add([]byte(`sara`))
        topk.Add([]byte(`bill`))
    
        for _, element := range topk.Elements() {
            fmt.Println(string(element.Data), element.Freq)
        }   
    
        // Restore to initial state.
        topk.Reset()
    }
    
  • topk panics if added elements count is less than total elements defined.

    I defined a top-k with a declared total element count of 10 but added only 7 distinct elements. TopK panics when retrieving the elements.

    package main
    
    import (
        "fmt"
        "github.com/tylertreat/BoomFilters"
    )
    
    const NUM_TOPK_ELEMENTS = 10
    
    func main() {
        topk := boom.NewTopK(0.001, 0.99, NUM_TOPK_ELEMENTS) 
    
        topk.Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`bob`))
        topk.Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`))
        topk.Add([]byte(`fred`))
        topk.Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`))
        topk.Add([]byte(`james`))
        topk.Add([]byte(`fred`))
        topk.Add([]byte(`sara`)).Add([]byte(`sara`))
        topk.Add([]byte(`bill`))
    
        for i, element := range topk.Elements() {
            fmt.Println(i, string(element))
        }   
    
        // Restore to initial state.
        topk.Reset()
    }
    
    root@server ~/go # go run /tmp/test_tk.go 
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal 0xb code=0x1 addr=0x18 pc=0x4507d1]
    
    goroutine 1 [running]:
    github.com/tylertreat/BoomFilters.(*elementHeap).Less(0xc20801e180, 0x9, 0x4, 0xa)
            <autogenerated>:8 +0xb1
    container/heap.down(0x7f1db00a6910, 0xc20801e180, 0x4, 0xa)
            /root/.gvm/gos/go1.4/src/container/heap/heap.go:111 +0xb4
    container/heap.Init(0x7f1db00a6910, 0xc20801e180)
            /root/.gvm/gos/go1.4/src/container/heap/heap.go:45 +0x81
    github.com/tylertreat/BoomFilters.(*TopK).Elements(0xc20801e080, 0x0, 0x0, 0x0)
            /root/.gvm/pkgsets/go1.4/global/src/github.com/tylertreat/BoomFilters/topk.go:76 +0x211
    main.main()
            /tmp/test_tk.go:20 +0x4c0
    
    goroutine 2 [runnable]:
    runtime.forcegchelper()
            /root/.gvm/gos/go1.4/src/runtime/proc.go:90
    runtime.goexit()
            /root/.gvm/gos/go1.4/src/runtime/asm_amd64.s:2232 +0x1
    
    goroutine 3 [runnable]:
    runtime.bgsweep()
            /root/.gvm/gos/go1.4/src/runtime/mgc0.go:82
    runtime.goexit()
            /root/.gvm/gos/go1.4/src/runtime/asm_amd64.s:2232 +0x1
    
    goroutine 4 [runnable]:
    runtime.runfinq()
            /root/.gvm/gos/go1.4/src/runtime/malloc.go:712
    runtime.goexit()
            /root/.gvm/gos/go1.4/src/runtime/asm_amd64.s:2232 +0x1
    exit status 2
    
  • Allow the user to set the hash

    FNV is a nice and fast hash, but in my experience it can result in a different false-positive rate than expected. Below is the result of a test run I did to see whether the expected false-positive rate matches the actual rate (see code at the bottom of this pull request).

    FNV:

    $ time ./test
    NewBloomFilter(      1000,  0.1%) =  0.0%
    NewBloomFilter(     10000,  0.1%) =  0.4%
    NewBloomFilter(  10000000,  0.1%) =  0.1%
    NewBloomFilter(1000000000,  0.1%) =  1.6%
    NewBloomFilter(      1000,  1.0%) =  0.0%
    NewBloomFilter(     10000,  1.0%) =  0.4%
    NewBloomFilter(  10000000,  1.0%) =  1.6%
    NewBloomFilter(1000000000,  1.0%) =  1.4%
    NewBloomFilter(      1000,  2.0%) =  0.0%
    NewBloomFilter(     10000,  2.0%) =  3.2%
    NewBloomFilter(  10000000,  2.0%) =  2.1%
    NewBloomFilter(1000000000,  2.0%) =  1.2%
    NewBloomFilter(      1000, 10.0%) = 28.5%
    NewBloomFilter(     10000, 10.0%) = 11.2%
    NewBloomFilter(  10000000, 10.0%) = 10.7%
    NewBloomFilter(1000000000, 10.0%) =  8.5%
    
    real    32m10.216s
    user    25m2.260s
    sys     0m8.997s
    

    Then I used this patch to set the hash to the first 8 bytes of a SHA-256 hash (see code below). As you can see, this brings the actual false-positive rate much closer to the expected one. Of course, if you look at the runtime, it took 6.5 times as long because SHA-256 is much slower. This patch gives the user the option to make this trade-off themselves.

    I only added the SetHash function to the classic Bloom filter since I wanted to see if you're interested in this pull request first. If you are, I can also add it to the other filters and add some documentation about the trade-off between different hashes.

    Sha256:

    $ time ./test
    NewBloomFilter(      1000,  0.1%) =  0.2%
    NewBloomFilter(     10000,  0.1%) =  0.1%
    NewBloomFilter(  10000000,  0.1%) =  0.1%
    NewBloomFilter(1000000000,  0.1%) =  1.9%
    NewBloomFilter(      1000,  1.0%) =  1.2%
    NewBloomFilter(     10000,  1.0%) =  0.9%
    NewBloomFilter(  10000000,  1.0%) =  1.0%
    NewBloomFilter(1000000000,  1.0%) =  1.7%
    NewBloomFilter(      1000,  2.0%) =  1.3%
    NewBloomFilter(     10000,  2.0%) =  2.2%
    NewBloomFilter(  10000000,  2.0%) =  2.0%
    NewBloomFilter(1000000000,  2.0%) =  1.9%
    NewBloomFilter(      1000, 10.0%) = 10.3%
    NewBloomFilter(     10000, 10.0%) = 10.2%
    NewBloomFilter(  10000000, 10.0%) = 10.2%
    NewBloomFilter(1000000000, 10.0%) =  9.0%
    
    real    219m5.628s
    user    164m24.160s
    sys     1m6.653s
    

    The program used to do this benchmark (at the moment using sha256):

    package main
    
    import (
            "crypto/sha256"
            "encoding/binary"
            "fmt"
            "hash"
            "runtime"
    
            "github.com/erikdubbelboer/BoomFilters"
    )
    
    type hashwrap struct {
            hash.Hash
    }
    
    func (h hashwrap) Sum64() uint64 {
            b := h.Sum(nil)
            return binary.LittleEndian.Uint64(b[:8])
    }
    
    func main() {
            buf := make([]byte, 4)
            for _, f := range []float64{0.001, 0.01, 0.02, 0.1} {
                    for _, n := range []uint{1000, 10000, 10000000, 1000000000} {
                            runtime.GC()
    
                            bf := boom.NewBloomFilter(n, f)
                            bf.SetHash(hashwrap{Hash: sha256.New()})
    
                            for i := uint(0); i < n; i++ {
                                    binary.BigEndian.PutUint32(buf, uint32(i))
                                    bf.Add(buf)
                            }
    
                            fp := 0
                            for i := uint(0); i < n; i++ {
                                    binary.BigEndian.PutUint32(buf, uint32(i+n+1))
                                    if bf.Test(buf) {
                                            fp++
                                    }
                            }
    
                            fmt.Printf("NewBloomFilter(%10d, %4.1f%%) = %4.1f%%\n", n, f*100, (float64(fp)/float64(n))*100)
                    }
            }
    }
    
  • Fix GobDecode not setting a hash kernel for classic Bloom filters

    The previous assumption, that decoding overwrites an existing and populated struct, falls apart when a struct with its default (empty) values is given. In practice this broke decoding into slices and maps, such as var all []*T; gob.NewDecoder(r).Decode(all).

  • Remove errors from example code

    Hi, I've found some syntax errors in your examples which are easy to fix but may be hard to spot for people new to Go. Thanks for providing this library. Cheers

  • Add method ImportElementsFrom on inverse bloom filter

    The ReadFrom and GobDecode methods allow a Bloom filter to be exported and then recreated as part of a new array. The new ImportElementsFrom method uses the filter's Add method to add the elements written out by one filter to the array of another filter.

    This allows for use cases such as converting from one filter size to another. For example, rolling a size 1,000 filter into a size 1,000,000 filter.

  • Adds serialization methods for ScalableBloomFilter, PartitionedBloomFilter, and Buckets.

    Hey,

    I added serialization methods for ScalableBloomFilter, PartitionedBloomFilter, and Buckets. Let me know if you want different function names, since they're slightly different from the CMS WriteDataTo and ReadDataFrom.

    This is related to #6.

    Thanks!

  • Is the Test function safe for concurrent use?

    In the Test function, the hash state is mutated by hash.Write(data). Is this safe for concurrent use?

    func (b *BloomFilter) Test(data []byte) bool {
    	lower, upper := hashKernel(data, b.hash)
    
    	// If any of the K bits are not set, then it's not a member.
    	for i := uint(0); i < b.k; i++ {
    		if b.buckets.Get((uint(lower)+uint(upper)*i)%b.m) == 0 {
    			return false
    		}
    	}
    
    	return true
    }
    
    func hashKernel(data []byte, hash hash.Hash64) (uint32, uint32) {
    	hash.Write(data)
    	sum := hash.Sum(nil)
    	hash.Reset()
    	return binary.BigEndian.Uint32(sum[4:8]), binary.BigEndian.Uint32(sum[0:4])
    }
    
  • Unit tests depend on presence of file outside of repo

    The hyperloglog unit tests depend on the presence of /usr/share/dict/words (see https://github.com/tylertreat/BoomFilters/blob/611b3dbe80e85df3a0a10a43997d4d5784e86245/hyperloglog_test.go#L34). It would be helpful to include the test data in the repo to ensure the tests work without modification on systems that don't follow that dictionary path convention or don't load a dictionary package.

  • Scalable Bloom Filter panics after ~31 million insertions

    Hey @tylertreat! I noticed that NewDefaultScalableBloomFilter(0.01) panics after exactly 31,536,466 insertions. Here is a test case:

    func TestScalableBloomLarge(t *testing.T) {
    	total := uint32(40000000) // 40 million
    	f := NewDefaultScalableBloomFilter(0.01)
    
    	i := uint32(0)
    
    	defer func() {
    		if err := recover(); err != nil {
    			fmt.Printf("panicked after %d iterations: %s\n", i, err)
    			panic(err)
    		}
    	}()
    
    	for ; i < total; i++ {
    		if i%1000000 == 0 {
    			fmt.Printf("  i: %d\n", i)
    		}
    		key := make([]byte, 8)
    		binary.BigEndian.PutUint32(key, i)
    		f.Add(key)
    	}
    }
    

    and here is the test output:

    === RUN   TestScalableBloomLarge
      i: 0
      i: 1000000
      i: 2000000
    # ...
      i: 29000000
      i: 30000000
      i: 31000000
    panicked after 31536466 iterations: runtime error: makeslice: len out of range
    --- FAIL: TestScalableBloomLarge (215.81s)
    panic: runtime error: makeslice: len out of range [recovered]
    	panic: runtime error: makeslice: len out of range [recovered]
    	panic: runtime error: makeslice: len out of range
    
    goroutine 132 [running]:
    testing.tRunner.func1(0xc000100f00)
    	/usr/local/Cellar/go/1.12.1/libexec/src/testing/testing.go:830 +0x392
    panic(0x1174160, 0x11d3170)
    	/usr/local/Cellar/go/1.12.1/libexec/src/runtime/panic.go:522 +0x1b5
    github.com/tylertreat/BoomFilters.TestScalableBloomLarge.func1(0xc00007cf7c)
    	/Users/jameshageman/stripe/BoomFilters/scalable_test.go:179 +0x118
    panic(0x1174160, 0x11d3170)
    	/usr/local/Cellar/go/1.12.1/libexec/src/runtime/panic.go:522 +0x1b5
    github.com/tylertreat/BoomFilters.NewPartitionedBloomFilter(0x2710, 0x357fb461c091b, 0x54e5e2762f38eb)
    	/Users/jameshageman/stripe/BoomFilters/partitioned.go:52 +0xb1
    github.com/tylertreat/BoomFilters.(*ScalableBloomFilter).addFilter(0xc00028a000)
    	/Users/jameshageman/stripe/BoomFilters/scalable.go:148 +0x6a
    github.com/tylertreat/BoomFilters.(*ScalableBloomFilter).Add(0xc00028a000, 0xc0d18e4858, 0x8, 0x8, 0x11d8120, 0xc00028a000)
    	/Users/jameshageman/stripe/BoomFilters/scalable.go:120 +0x112
    github.com/tylertreat/BoomFilters.TestScalableBloomLarge(0xc000100f00)
    	/Users/jameshageman/stripe/BoomFilters/scalable_test.go:189 +0xc6
    testing.tRunner(0xc000100f00, 0x11ad908)
    	/usr/local/Cellar/go/1.12.1/libexec/src/testing/testing.go:865 +0xc0
    created by testing.(*T).Run
    	/usr/local/Cellar/go/1.12.1/libexec/src/testing/testing.go:916 +0x35a
    FAIL	github.com/tylertreat/BoomFilters	218.857s
    

    The culprit appears to be this line: https://github.com/tylertreat/BoomFilters/blob/611b3dbe80e85df3a0a10a43997d4d5784e86245/partitioned.go#L52

    which confuses me because k should be a totally reasonable size, and is only a function of fpr and not n.

  • Inverse Bloom Filter: wrong result after ReadFrom gzipped i/o stream

    Hi, thanks for writing this useful package, I like it very much.

    However, I found a problem, as the title says, when using InverseBloomFilter.

    I edited the test code (attached below) to write and read a gzipped I/O stream to save disk space, but the result said

    Expected both filters to Test the same for 0
    

    When switching off gzip, i.e., using no compression, everything is OK.

    I also tested the ScalableBloomFilter; there's no such problem.

    Please help, thanks for your time.


    Test code:

    https://gist.github.com/shenwei356/659cffae84f9cb2365ba70ac3866af49

  • About PartitionedBloomFilter.GobEncode With bytes.Buffer

    PartitionedBloomFilter.GobEncode collects the data from its partitions, but bytes.Buffer doubles its capacity whenever it needs to grow, so the program's memory can grow quickly. You could instead compute the total data length first, grow the buffer once, and avoid the unnecessary memory.
