Surprisingly space-efficient trie in Golang (11 bits/key; 100 ns/get).

Slim - surprisingly space-efficient data types in Golang

Slim is a collection of surprisingly space-efficient data types, with corresponding serialization APIs for persisting them on disk or transporting them over the network.

Why slim

As the amount of data on the internet keeps growing exponentially, the capacity gap between memory and disk keeps widening.

Most of the time, the data itself does not need to be loaded into expensive main memory. Only the far more important information, WHERE the data is, deserves a seat in main memory.

This is what slim does: it keeps as little information as possible in main memory, as a minimized index of a huge amount of external data.

  • SlimIndex: a common index structure, built on top of SlimTrie.

  • SlimTrie: the underlying index data structure, evolved from the trie.

    Features:

    • Minimized: 11 bits per key (far less than a 64-bit pointer!).

    • Stable: memory consumption is stable across various scenarios; the worst case converges tightly to the average. See the benchmark.

    • Loooong keys: you can have VERY long keys (16 KB) without wasting memory (or money). Don't waste your life writing yet another prefix compression :) (AWS S3 limits key length to 1024 bytes). Memory consumption depends only on the key count, not on the key length.

    • Ordered: like a btree, keys are stored in order. Range scan will be ready in 0.6.0.

    • Fast: ~150 ns per Get(). Time complexity of a get is O(log(n) + k), where n is the key count and k is the key length.

    • Ready for transport: a single proto.Marshal() is all it takes to serialize, transport, or persist on disk (see the sketch below).
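
    A minimal sketch of the "one proto.Marshal()" workflow, assuming SlimTrie satisfies proto.Message as this feature list states; the keys, values and file name are illustrative only:

    package main

    import (
    	"io/ioutil"

    	"github.com/golang/protobuf/proto"

    	"github.com/openacid/slim/encode"
    	"github.com/openacid/slim/trie"
    )

    func main() {
    	keys := []string{"ab", "cd", "efg"}
    	values := []int32{1, 2, 3}

    	st, err := trie.NewSlimTrie(encode.I32{}, keys, values, trie.Opt{
    		Complete: trie.Bool(true),
    	})
    	if err != nil {
    		panic(err)
    	}

    	// One proto.Marshal() call produces bytes ready for disk or network,
    	// assuming SlimTrie implements proto.Message.
    	buf, err := proto.Marshal(st)
    	if err != nil {
    		panic(err)
    	}

    	if err := ioutil.WriteFile("slimtrie.bin", buf, 0644); err != nil {
    		panic(err)
    	}
    }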

Memory overhead

  • Random strings, fixed length, default mode (no label is stored when possible):

    Bits/key: the average memory or disk space, in bits, that one key consumes. It does not grow when the key length (k) increases!

  • 1 million variable-length strings, 10 to 20 bytes each, indexed by SlimTrie in different modes:

    Mode        Size    Gzip size
    Original    15.0M   14.0M
    Complete    14.0M   10.0M
    InnerLabel   1.3M    0.9M
    NoLabel      1.3M    0.8M

    The raw string list and the serialized slim data are stored at: https://github.com/openacid/testkeys/tree/master/assets

    • Original: raw string lines in a text file.

    • Complete: NewSlimTrie(..., Opt{Complete: Bool(true)}): a lossless SlimTrie that stores the complete information of every string. This mode gives accurate queries.

    • InnerLabel: NewSlimTrie(..., Opt{InnerPrefix: Bool(true)}): the SlimTrie stores only the label strings of inner nodes (not the labels leading to leaves). There are false positives in this mode.

    • NoLabel: no label info is stored at all. The false positive rate is higher. A construction sketch for all three modes follows below.
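
    A minimal sketch of building a SlimTrie in each of the three modes above, using the Opt fields shown in this README (Complete, InnerPrefix); the keys and values are illustrative, and an empty Opt is assumed to give the default NoLabel behavior:

    package main

    import (
    	"github.com/openacid/slim/encode"
    	"github.com/openacid/slim/trie"
    )

    func main() {
    	keys := []string{"abc", "abd", "bc"}
    	values := []int32{0, 1, 2}

    	// Complete: lossless, accurate queries, largest of the three.
    	complete, err := trie.NewSlimTrie(encode.I32{}, keys, values, trie.Opt{
    		Complete: trie.Bool(true),
    	})
    	if err != nil {
    		panic(err)
    	}

    	// InnerLabel: keeps labels of inner nodes only; false positives possible.
    	innerLabel, err := trie.NewSlimTrie(encode.I32{}, keys, values, trie.Opt{
    		InnerPrefix: trie.Bool(true),
    	})
    	if err != nil {
    		panic(err)
    	}

    	// NoLabel: no label info at all; smallest, highest false positive rate.
    	noLabel, err := trie.NewSlimTrie(encode.I32{}, keys, values, trie.Opt{})
    	if err != nil {
    		panic(err)
    	}

    	_, _, _ = complete, innerLabel, noLabel
    }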

Performance

Time (in nanoseconds) spent on a Get() with a golang map, SlimTrie, an array, and Google's btree:

  • 3.3 times faster than the btree.
  • 2.3 times faster than binary search.

Time (in nanoseconds) spent on a Get() with different key counts (n) and key lengths (k):

False Positive Rate

A Bloom filter requires about 9 bits/key to achieve a false positive rate of less than 1%.
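
For reference, the standard Bloom filter sizing formula (a textbook estimate, not taken from this repository's report) gives roughly that figure:

    \frac{m}{n} = \frac{-\ln p}{(\ln 2)^2},\qquad
    p = 0.01 \;\Rightarrow\; \frac{m}{n} = \frac{\ln 100}{(\ln 2)^2} \approx 9.6\ \text{bits/key}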

See: trie/report/

Status

  • SlimTrie APIs are stable and have been used in a production environment.

    Meanwhile, we are focusing on optimizing memory usage and query performance.

  • The internal data structures are promised to stay backward compatible forever. No data migration issues!

Roadmap

  • 2021-01-15 v0.5.11 Query by range
  • 2019-09-18 v0.5.10 Reduce false positive rate to less than 0.05%
  • 2019-06-03 v0.5.9 Large key set benchmark
  • 2019-05-29 v0.5.6 Support up to 2 billion keys
  • 2019-05-18 v0.5.4 Reduce memory usage from 40 to 14 bits/key
  • 2019-04-20 v0.4.3 Range index: many keys share one index item
  • 2019-04-18 v0.4.1 Marshaling support
  • 2019-03-08 v0.1.0 SlimIndex SlimTrie

Change-log

Synopsis

Index keys, get by key

package index_test

import (
	"fmt"
	"strings"

	"github.com/openacid/slim/index"
)

type Data string

func (d Data) Read(offset int64, key string) (string, bool) {
	kv := strings.Split(string(d)[offset:], ",")[0:2]
	if kv[0] == key {
		return kv[1], true
	}
	return "", false
}

func Example() {

	// Accelerate external data accessing (in memory or on disk) by indexing
	// them with a SlimTrie:

	// `data` is a sample of some unindexed data. In our example it is a comma
	// separated key value series.
	//
	// In order to let SlimTrie be able to read data, `data` should have
	// a `Read` method:
	//     Read(offset int64, key string) (string, bool)
	data := Data("Aaron,1,Agatha,1,Al,2,Albert,3,Alexander,5,Alison,8")

	// keyOffsets is a prebuilt index that stores key and its offset in data accordingly.
	keyOffsets := []index.OffsetIndexItem{
		{Key: "Aaron", Offset: 0},
		{Key: "Agatha", Offset: 8},
		{Key: "Al", Offset: 17},
		{Key: "Albert", Offset: 22},
		{Key: "Alexander", Offset: 31},
		{Key: "Alison", Offset: 43},
	}

	// `SlimIndex` is simply a container of SlimTrie and its data.
	st, err := index.NewSlimIndex(keyOffsets, data)
	if err != nil {
		fmt.Println(err)
	}

	// Lookup
	v, found := st.Get("Alison")
	fmt.Printf("key: %q\n  found: %t\n  value: %q\n", "Alison", found, v)

	v, found = st.Get("foo")
	fmt.Printf("key: %q\n  found: %t\n  value: %q\n", "foo", found, v)

	// Output:
	// key: "Alison"
	//   found: true
	//   value: "8"
	// key: "foo"
	//   found: false
	//   value: ""
}

Index key ranges, get by key

Create an index item for every 4 (or more, as you wish) keys.

Letting several adjacent keys share one index item greatly reduces memory cost when there is a huge number of keys in the external data, e.g. indexing billions of 4KB objects on a 4TB disk (one disk IO costs about 20 ms whether it reads 4KB or 1MB).

package index_test

import (
	"fmt"
	"strings"

	"github.com/openacid/slim/index"
)

type RangeData string

func (d RangeData) Read(offset int64, key string) (string, bool) {
	for i := 0; i < 4; i++ {
		if int(offset) >= len(d) {
			break
		}

		kv := strings.Split(string(d)[offset:], ",")[0:2]
		if kv[0] == key {
			return kv[1], true
		}
		offset += int64(len(kv[0]) + len(kv[1]) + 2)

	}
	return "", false
}

func Example_indexRanges() {

	// Index ranges instead of keys:
	// In this example, at most 4 keys share one index item.

	data := RangeData("Aaron,1,Agatha,1,Al,2,Albert,3,Alexander,5,Alison,8")

	// keyOffsets is a prebuilt index that stores range start, range end and its offset.
	keyOffsets := []index.OffsetIndexItem{
		// Aaron  +--> 0
		// Agatha |
		// Al     |
		// Albert |

		// Alexander +--> 31
		// Alison    |

		{Key: "Aaron", Offset: 0},
		{Key: "Agatha", Offset: 0},
		{Key: "Al", Offset: 0},
		{Key: "Albert", Offset: 0},

		{Key: "Alexander", Offset: 31},
		{Key: "Alison", Offset: 31},
	}

	st, err := index.NewSlimIndex(keyOffsets, data)
	if err != nil {
		panic(err)
	}

	v, found := st.RangeGet("Aaron")
	fmt.Printf("key: %q\n  found: %t\n  value: %q\n", "Aaron", found, v)

	v, found = st.RangeGet("Al")
	fmt.Printf("key: %q\n  found: %t\n  value: %q\n", "Al", found, v)

	v, found = st.RangeGet("foo")
	fmt.Printf("key: %q\n  found: %t\n  value: %q\n", "foo", found, v)

	// Output:
	// key: "Aaron"
	//   found: true
	//   value: "1"
	// key: "Al"
	//   found: true
	//   value: "2"
	// key: "foo"
	//   found: false
	//   value: ""
}

Scan

package trie

import (
	"fmt"

	"github.com/openacid/slim/encode"
)

func ExampleSlimTrie_ScanFrom() {
	var keys = []string{
		"",
		"`",
		"a",
		"ab",
		"abc",
		"abca",
		"abcd",
		"abcd1",
		"abce",
		"be",
		"c",
		"cde0",
		"d",
	}
	// makeI32s is a test helper in this package that builds []int32{0, 1, ..., n-1}.
	values := makeI32s(len(keys))

	codec := encode.I32{}
	st, _ := NewSlimTrie(codec, keys, values, Opt{
		Complete: Bool(true),
	})

	// untilD stops when encountering "d".
	untilD := func(k, v []byte) bool {
		if string(k) == "d" {
			return false
		}

		_, i32 := codec.Decode(v)
		fmt.Println(string(k), i32)
		return true
	}

	fmt.Println("scan (ab, +∞):")
	st.ScanFrom("ab", false, true, untilD)

	fmt.Println()
	fmt.Println("scan [be, +∞):")
	st.ScanFrom("be", true, true, untilD)

	fmt.Println()
	fmt.Println("scan (ab, be):")
	st.ScanFromTo(
		"ab", false,
		"be", false,
		true, untilD)

	// Output:
	//
	// scan (ab, +∞):
	// abc 4
	// abca 5
	// abcd 6
	// abcd1 7
	// abce 8
	// be 9
	// c 10
	// cde0 11
	//
	// scan [be, +∞):
	// be 9
	// c 10
	// cde0 11
	//
	// scan (ab, be):
	// abc 4
	// abca 5
	// abcd 6
	// abcd1 7
	// abce 8
}

Getting started

Install

go get github.com/openacid/slim/trie
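
A minimal usage sketch (the keys and values are illustrative; Get is assumed to return the decoded value plus a found flag, as in the examples above):

package main

import (
	"fmt"

	"github.com/openacid/slim/encode"
	"github.com/openacid/slim/trie"
)

func main() {
	// Keys are assumed to be required in sorted order, as in the examples above.
	keys := []string{"Aaron", "Agatha", "Al", "Albert"}
	values := []int32{1, 2, 3, 4}

	st, err := trie.NewSlimTrie(encode.I32{}, keys, values, trie.Opt{
		Complete: trie.Bool(true), // lossless mode: no false positives
	})
	if err != nil {
		panic(err)
	}

	v, found := st.Get("Al")
	fmt.Println(v, found)
}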

Who is using slim

baishancloud

Feedback and contributions

Feedback and contributions are greatly appreciated.

At this stage, the maintainers are most interested in feedback centered on:

  • Do you have a real life scenario that slim supports well, or doesn't support at all?
  • Do any of the APIs fulfill your needs well?

Let us know by filing an issue describing what you did or wanted to do, what you expected to happen, and what actually happened, or file any other type of issue.

Authors

See also the list of contributors who participated in this project.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Comments
  • Organising the Key Offsets

    Can the trie lookup happen without organizing the key offsets?

    I want to have an implementation similar to a redis cache: load all the keys into the trie, and then any other application should be able to look keys up as well.

    In the existing implementation, looking up a key requires loading the key offsets.

    Example: App1 loads the keys and stores them in a trie object. App2 should only look up a key and should not have to load all the key offsets again.

  • Explain the computation of space complexity

    What document to add: In the pruned SlimTrie, every node has more than one child node, except LeafParent nodes. The size of a SlimTrie is independent of the key length; that is, a set of keys of arbitrary length can be indexed with a SlimTrie of bounded size. A SlimTrie has at most 2n-1 nodes; the maximum is reached when all inner nodes are binary (space complexity O(n)).

    Why is the maximum node count of a SlimTrie 2n-1? Could you please add more comments?
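
    One way to see the 2n-1 bound, assuming only the properties stated above (every inner node of the pruned trie has at least two children, and each leaf corresponds to at least one of the n keys):

    \text{leaves} \le n,\qquad \text{inner nodes} \le \text{leaves} - 1 \le n - 1
    \;\Rightarrow\; \text{total nodes} \le n + (n - 1) = 2n - 1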

    Additional context

  • Probabilistic structure (bloom-filter-like) or a guaranteed structure?

    What document to add

    I could not find any information explaining whether this "slim" structure is a probabilistic one or not. A trie is not probabilistic and thus provides guarantees. But in the readme of "slim" there is a mention of FPR (false positive rate), so I wonder whether this is actually a trie-like structure (i.e. everything is guaranteed) or rather a bloom-filter-like structure (i.e. no guarantees).

    Additional context

    N/A

  • Documentation: how to update/delete keys

    What document to add

    This is a very good index design; thanks for open-sourcing it.

    From the introduction (https://blog.openacid.com/tech/algorithm/slimtrie-impl/) I learned that SlimTrie only indexes static data: once the data set is generated, it is not modified during use.

    Questions: 1. In real production use, is a temporary data structure (similar to the SSTable in an LSM tree) needed to hold N files before writing them into a SlimTrie? Is there any example code?

    2. In real production use, how do you update or delete a key that already exists in a SlimTrie? Does the SlimTrie have to be rebuilt? Is there any example code?

    Thanks again.

  • Trie is missing keys

    I created a trie with ~70k unique keys and corresponding integer values with trie.NewSlimTrie(encode.Int{}, keys, values). I then measured how many of my keys were in the trie; roughly 14% were missing.

    To reproduce (this does not report the same number of missing keys every run, but it shows the problem):

    package slim_test

    import (
    	"math/rand"
    	"sort"
    	"testing"

    	"github.com/openacid/slim/encode"
    	"github.com/openacid/slim/trie"
    )

    // randStringRunes is not shown in the original report; this is a minimal
    // stand-in that returns a random lowercase string of length n.
    func randStringRunes(n int) string {
    	letters := []rune("abcdefghijklmnopqrstuvwxyz")
    	b := make([]rune, n)
    	for i := range b {
    		b[i] = letters[rand.Intn(len(letters))]
    	}
    	return string(b)
    }

    func TestSlimTrie(t *testing.T) {
    	var keys []string
    	var indexes []int
    	for i := 0; i < 70000; i++ {
    		keys = append(keys, randStringRunes(10))
    		indexes = append(indexes, i)
    	}
    	sort.Strings(keys)
    	tr, err := trie.NewSlimTrie(encode.Int{}, keys, indexes)
    	if err != nil {
    		t.Error(err)
    		return
    	}
    	count := 0
    	for _, key := range keys {
    		_, found := tr.Get(key)
    		if !found {
    			count++
    		}
    	}
    	if count > 0 {
    		t.Errorf("Couldn't find %f%% of the keys entered. Missing %d keys", float64(count)/float64(len(keys))*100, count)
    	}
    }


    Environment:

    • MacOS
    • openacid/slim v0.5.5

    Language:

    • go 1.12
  • error struct proposal

    Define a data format for describing errors, used to propagate errors at runtime in golang. Goals:

    • Assumptions behind the data structure design:

      • 90% of a module's users only check whether there is an err, not what the err is.
      • Each module defines its own errors for its own layer, but provides an interface that lets users get the lowest-level error (e.g. an errno) when needed.
      • Avoid passing low-level errors straight up (a low-level error is of little use to an upper module; e.g. when creating a block, getting only a permission-denied errno does not effectively tell which file/directory went wrong).
      • The design goal of error is to simplify troubleshooting: complete logs, easy statistical analysis, and inspection of the whole call chain when something fails.
    • Can be used to describe errors at the API layer. An API handler typically receives an error and needs to return it to the client, which requires the error to have:

      • a specific, uniquely determined error code, so the client can branch on it
      • a human-readable error message.
      • what resource the error is related to.
    • Message is generated from Code and Resource. Only a Message interface is provided to output the message.

    • Hopefully this module can be a drop-in replacement for go's errors package and for https://github.com/pkg/errors.

    • Provide a convenient interface for users to get the lowest-level error directly.

    • Record which underlying error caused this error. For example, in a storage service a low-level IO error causes an API failure; if the causing error can be recorded in the log, locating the problem becomes much easier.

      Some upper-level errors are caused by several lower-level errors. For example, in a majority write with n, w = 5, 3, a write failure on 3 backends fails the whole API call; in that case multiple lower-level errors need to be recorded.

    • The error struct optionally carries stacktrace info, which is handy for logging (inspired by https://github.com/pkg/errors).

    type Error struct {
      Code       string  // error code
      Cause    []error   // 0 or several errors that caused this error.
      Resource   string  // optional
      *stack             // optional traceback info of this error
    }


    // Implements the standard error interface.
    func (e *Error) Error() string // returns the Code string directly

    // Interfaces compatible with https://github.com/pkg/errors:
    func (e *Error) Cause() string // returns the first cause
    // The other interfaces are unchanged and not listed here.


    // Extended interfaces:
    func (e *Error) AllCause() string // returns a slice of all causes
    func (e *Error) RootCause() string // returns the original cause
    func (e *Error) AllRootCause() string // returns all the lowest-level causes, i.e. the leaf nodes of the cause tree.
    func (e *Error) Message() string  // human-readable, assembled from Code and Resource.
    
    

    Example 🌰: a possible error message for "group not found" in s2

    Taking a scenario in s2 as an example, here is how a concrete error could be represented. Suppose the error is that a group could not be read, caused by a failure to read the group info through dbproxy; the dbproxy read was retried twice and failed both times, once because mysql was set to readonly and once because of a socket read timeout:

    Code: GroupNotFound
    Resource: "group_id://123"
    Cause: 
        - Code: DBProxyReadError
          Resource: "dbproxy://127.0.0.1:3303"
          Cause:
            - Code: MysqlReadonly
              Resource: "mysql://192.168.2.2:3306"
            - Code: InvalidResponse
              Resource: "mysql://192.168.9.9:3306"
              Cause:
                  - Code: SocketReadTimeout
                    Resource: "tcp4://192.168.9.9:3306"
    

    Example 🌰: a possible error message for a block build error in ec

    Code: BlockBuildError
    Resource: "block://aabbccddeeff"
    Cause:
        - Code: NeedleWriteError
          Resource: "needle://3cp/foo/bar.jpg"
          Cause:
              - Code: FSIsReadonly
                Resource: "file:///ecdrives/bbb" # schema of local file url in browser
                Cause:
                    - <a native fs error> # may be an error with errno.
    
  • build(deps): bump github.com/kr/pretty from 0.2.1 to 0.3.0

    Bumps github.com/kr/pretty from 0.2.1 to 0.3.0.

    Commits
    • a883a84 go.mod: upgrade to github.com/kr/text v0.2.0 (#71)
    • 59b4212 Add GoStringer support (#68)
    • 3630c7d Use github.com/rogpeppe/go-internal/fmtsort for stable map output (#67)
    • 814ac30 .github: add a workflow to build and test pushes and pull requests (#63)
    • 4403578 diff: detect cycles in the values being compared. (#64)
    • See full diff in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • build(deps): bump github.com/golang/protobuf from 1.3.1 to 1.4.2

    Bumps github.com/golang/protobuf from 1.3.1 to 1.4.2.

    Release notes

    Sourced from github.com/golang/protobuf's releases.

    v1.4.2

    Notable changes:

    v1.4.1

    Notable changes:

    v1.4.0

    Overview

    This release of the github.com/golang/protobuf module introduces a number of significant changes relative to the previous minor release. In particular, this module is now implemented in terms of the new google.golang.org/protobuf module, which is the next major revision of Go bindings for protocol buffers. From this point onwards, most of the development effort for Go protobufs will be dedicated to the new module, with minimal changes being made to this module.

    See the release notes for the new module for specific implementation details that may affect this release.

    Backwards compatibility

    This release maintains backwards compatibility with previous releases of this module. Any observable changes in behavior are to fix bugs, change unspecified behavior, or to make behavior more compliant with the protobuf specification. The compatibility document provides us the freedom to make changes in these areas.

    Notable changes

    Wire serialization

    Wire serialization is now implemented in terms of the new proto package by calling out to the relevant functionality in that package (e.g., proto.Marshal and proto.Unmarshal). There should be no observable changes in behavior other what is mentioned elsewhere in the release notes (e.g., behavior around errors or nil values).

    JSON and text serialization

    The JSON and text format implementations have been ported to use protobuf reflection under the hood instead of relying on Go reflection. This provides flexibility as they can operate on any concrete message type that properly implements the new proto.Message interface.

    The implementations do not use the new protojson or prototext packages in order to maintain a higher degree of backwards compatibility. Our analysis unfortunately showed us that too many tests rely on their output being stable by performing byte-for-byte comparisons. Even though the compatibility promise gives us the freedom to change the output, we have chosen not to do so for pragmatic reasons. The implementations are now functionally frozen (bugs and all) and will not receive future improvements. Users are encouraged to migrate to the protojson or prototext packages instead.

    ... (truncated)

    Commits

  • build(deps): bump jinja2 from 2.10.1 to 2.11.3 in /scripts

    Bumps jinja2 from 2.10.1 to 2.11.3.

    Release notes

    Sourced from jinja2's releases.

    2.11.3

    This contains a fix for a speed issue with the urlize filter. urlize is likely to be called on untrusted user input. For certain inputs some of the regular expressions used to parse the text could take a very long time due to backtracking. As part of the fix, the email matching became slightly stricter. The various speedups apply to urlize in general, not just the specific input cases.

    2.11.2

    2.11.1

    This fixes an issue in async environment when indexing the result of an attribute lookup, like {{ data.items[1:] }}.

    2.11.0

    This is the last version to support Python 2.7 and 3.5. The next version will be Jinja 3.0 and will support Python 3.6 and newer.

    2.10.3

    2.10.2

    Changelog

    Sourced from jinja2's changelog.

    Version 2.11.3

    Released 2021-01-31

    • Improve the speed of the urlize filter by reducing regex backtracking. Email matching requires a word character at the start of the domain part, and only word characters in the TLD. :pr:1343

    Version 2.11.2

    Released 2020-04-13

    • Fix a bug that caused callable objects with __getattr__, like :class:~unittest.mock.Mock to be treated as a :func:contextfunction. :issue:1145
    • Update wordcount filter to trigger :class:Undefined methods by wrapping the input in :func:soft_str. :pr:1160
    • Fix a hang when displaying tracebacks on Python 32-bit. :issue:1162
    • Showing an undefined error for an object that raises AttributeError on access doesn't cause a recursion error. :issue:1177
    • Revert changes to :class:~loaders.PackageLoader from 2.10 which removed the dependency on setuptools and pkg_resources, and added limited support for namespace packages. The changes caused issues when using Pytest. Due to the difficulty in supporting Python 2 and :pep:451 simultaneously, the changes are reverted until 3.0. :pr:1182
    • Fix line numbers in error messages when newlines are stripped. :pr:1178
    • The special namespace() assignment object in templates works in async environments. :issue:1180
    • Fix whitespace being removed before tags in the middle of lines when lstrip_blocks is enabled. :issue:1138
    • :class:~nativetypes.NativeEnvironment doesn't evaluate intermediate strings during rendering. This prevents early evaluation which could change the value of an expression. :issue:1186

    Version 2.11.1

    Released 2020-01-30

    • Fix a bug that prevented looking up a key after an attribute ({{ data.items[1:] }}) in an async template. :issue:1141

    ... (truncated)

    Commits


  • build(deps): bump github.com/google/btree from 1.0.0 to 1.0.1

    ⚠️ Dependabot Preview has been deactivated ⚠️

    This pull request was created by Dependabot Preview, and you've upgraded to Dependabot. This means it won't respond to dependabot commands nor will it be automatically closed if a new version is found.

    If you close this pull request, Dependabot will re-create it the next time it checks for updates and everything will work as expected.


    Bumps github.com/google/btree from 1.0.0 to 1.0.1.

    Commits
    • 479b5e8 Minor documentation fix, DescendGreaterThan starts with the last item in the ...
    • be84af9 add lock for trees in cloneTest
    • 2023616 seperate copyright notice from go.mod body (#31)
    • bfb3895 add mod file (#30)
    • See full diff in compare view

  • build(deps): bump github.com/golang/protobuf from 1.3.1 to 1.4.3

    Bumps github.com/golang/protobuf from 1.3.1 to 1.4.3.

    Release notes

    Sourced from github.com/golang/protobuf's releases.

    v1.4.3

    Notable changes:

    (#1221) jsonpb: Fix marshaling of Duration (#1210) proto: convert integer to rune before converting to string

    v1.4.2

    Notable changes:

    v1.4.1

    Notable changes:

    v1.4.0

    Overview

    This release of the github.com/golang/protobuf module introduces a number of significant changes relative to the previous minor release. In particular, this module is now implemented in terms of the new google.golang.org/protobuf module, which is the next major revision of Go bindings for protocol buffers. From this point onwards, most of the development effort for Go protobufs will be dedicated to the new module, with minimal changes being made to this module.

    See the release notes for the new module for specific implementation details that may affect this release.

    Backwards compatibility

    This release maintains backwards compatibility with previous releases of this module. Any observable changes in behavior are to fix bugs, change unspecified behavior, or to make behavior more compliant with the protobuf specification. The compatibility document provides us the freedom to make changes in these areas.

    Notable changes

    Wire serialization

    Wire serialization is now implemented in terms of the new proto package by calling out to the relevant functionality in that package (e.g., proto.Marshal and proto.Unmarshal). There should be no observable changes in behavior other what is mentioned elsewhere in the release notes (e.g., behavior around errors or nil values).

    Commits

  • build(deps): bump github.com/openacid/testkeys from 0.1.6 to 0.1.7

    Bumps github.com/openacid/testkeys from 0.1.6 to 0.1.7.

    Commits

  • build(deps): bump github.com/golang/protobuf from 1.3.1 to 1.5.2

    Bumps github.com/golang/protobuf from 1.3.1 to 1.5.2.

    Release notes

    Sourced from github.com/golang/protobuf's releases.

    v1.5.2

    Notable changes:

    • (#1306) all: deprecate the module
    • (#1300) jsonpb: restore previous behavior for handling nulls and JSONPBUnmarshaler

    v1.5.1

    Notable changes:

    v1.5.0

    Overview

    This marks the ptypes package as deprecated and upgrades the dependency on google.golang.org/protobuf to a pre-release version of v1.26.0. A subsequent patch release will update the dependency to v1.26.0 proper.

    Notable changes

    • (#1217) ptypes: deprecate the package
    • (#1214) rely on protodesc.ToFileDescriptorProto

    v1.4.3

    Notable changes:

    • (#1221) jsonpb: Fix marshaling of Duration
    • (#1210) proto: convert integer to rune before converting to string

    v1.4.2

    Notable changes:

    v1.4.1

    Notable changes:

    v1.4.0

    ... (truncated)

    Commits

  • add "contribution.md"

    To describe:

    • [ ] How to contribute code.
    • [ ] code style
    • [ ] document style
    • [ ] how to test
    • [x] purpose of this project, what to do and what not to do.