LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.

Authors: Sanjay Ghemawat ([email protected]) and Jeff Dean ([email protected])

Features

  • Keys and values are arbitrary byte arrays.
  • Data is stored sorted by key.
  • Callers can provide a custom comparison function to override the sort order.
  • The basic operations are Put(key,value), Get(key), Delete(key) (see the usage sketch after this list).
  • Multiple changes can be made in one atomic batch.
  • Users can create a transient snapshot to get a consistent view of data.
  • Forward and backward iteration is supported over the data.
  • Data is automatically compressed using the Snappy compression library.
  • External activity (file system operations etc.) is relayed through a virtual interface so users can customize the operating system interactions.
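
The following is a minimal usage sketch of the operations listed above, based on the public headers in include/leveldb; the database path and key names are just examples.

#include <cassert>
#include <string>

#include "leveldb/db.h"
#include "leveldb/write_batch.h"

int main() {
  leveldb::DB* db;
  leveldb::Options options;
  options.create_if_missing = true;  // create the database if it does not exist yet
  leveldb::Status status = leveldb::DB::Open(options, "/tmp/testdb", &db);
  assert(status.ok());

  // Basic operations: Put, Get, Delete.
  status = db->Put(leveldb::WriteOptions(), "name", "leveldb");
  std::string value;
  status = db->Get(leveldb::ReadOptions(), "name", &value);
  status = db->Delete(leveldb::WriteOptions(), "name");

  // Multiple changes applied in one atomic batch.
  leveldb::WriteBatch batch;
  batch.Put("k1", "v1");
  batch.Put("k2", "v2");
  status = db->Write(leveldb::WriteOptions(), &batch);

  // A transient snapshot gives a consistent view for subsequent reads.
  leveldb::ReadOptions snapshot_options;
  snapshot_options.snapshot = db->GetSnapshot();
  // ... reads using snapshot_options see the state as of GetSnapshot() ...
  db->ReleaseSnapshot(snapshot_options.snapshot);

  delete db;  // closes the database
  return 0;
}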

Documentation

LevelDB library documentation is online and bundled with the source code.

Limitations

  • This is not a SQL database. It does not have a relational data model, it does not support SQL queries, and it has no support for indexes.
  • Only a single process (possibly multi-threaded) can access a particular database at a time.
  • There is no client-server support built in to the library. An application that needs such support will have to wrap its own server around the library.

Getting the Source

git clone --recurse-submodules https://github.com/google/leveldb.git

Building

This project supports CMake out of the box.

Build for POSIX

Quick start:

mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release .. && cmake --build .

Building for Windows

First generate the Visual Studio 2017 project/solution files:

mkdir build
cd build
cmake -G "Visual Studio 15" ..

The default configuration will build for x86. For 64-bit, run:

cmake -G "Visual Studio 15 Win64" ..

To compile the Windows solution from the command-line:

devenv /build Debug leveldb.sln

or open leveldb.sln in Visual Studio and build from within.

Please see the CMake documentation and CMakeLists.txt for more advanced usage.

Contributing to the leveldb Project

The leveldb project welcomes contributions. leveldb's primary goal is to be a reliable and fast key/value store. Changes that are in line with the features/limitations outlined above, and meet the requirements below, will be considered.

Contribution requirements:

  1. Tested platforms only. We generally will only accept changes for platforms that are compiled and tested. This means POSIX (for Linux and macOS) or Windows. Very small changes will sometimes be accepted, but consider that more of an exception than the rule.

  2. Stable API. We strive very hard to maintain a stable API. Changes that require changes for projects using leveldb may be rejected unless they provide sufficient benefit to the project.

  3. Tests: All changes must be accompanied by a new (or changed) test, or a sufficient explanation as to why a new (or changed) test is not required.

  4. Consistent Style: This project conforms to the Google C++ Style Guide. To ensure your changes are properly formatted please run:

    clang-format -i --style=file <file>
    

Submitting a Pull Request

Before any pull request will be accepted the author must first sign a Contributor License Agreement (CLA) at https://cla.developers.google.com/.

In order to keep the commit timeline linear, squash your changes down to a single commit and rebase on google/leveldb/master. This keeps the history easier to sync with the internal repository at Google. More information is available on GitHub's About Git rebase page.
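
For example, one possible command sequence (assuming origin points at your fork, upstream at google/leveldb, and my-feature-branch is an illustrative branch name):

git fetch upstream
git rebase -i upstream/master            # squash your commits into a single commit
git push --force-with-lease origin my-feature-branch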

Performance

Here is a performance report (with explanations) from the run of the included db_bench program. The results are somewhat noisy, but should be enough to get a ballpark performance estimate.

Setup

We use a database with a million entries. Each entry has a 16 byte key, and a 100 byte value. Values used by the benchmark compress to about half their original size.

LevelDB:    version 1.1
Date:       Sun May  1 12:11:26 2011
CPU:        4 x Intel(R) Core(TM)2 Quad CPU    Q6600  @ 2.40GHz
CPUCache:   4096 KB
Keys:       16 bytes each
Values:     100 bytes each (50 bytes after compression)
Entries:    1000000
Raw Size:   110.6 MB (estimated)
File Size:  62.9 MB (estimated)

Write performance

The "fill" benchmarks create a brand new database, in either sequential, or random order. The "fillsync" benchmark flushes data from the operating system to the disk after every operation; the other write operations leave the data sitting in the operating system buffer cache for a while. The "overwrite" benchmark does random writes that update existing keys in the database.

fillseq      :       1.765 micros/op;   62.7 MB/s
fillsync     :     268.409 micros/op;    0.4 MB/s (10000 ops)
fillrandom   :       2.460 micros/op;   45.0 MB/s
overwrite    :       2.380 micros/op;   46.5 MB/s

Each "op" above corresponds to a write of a single key/value pair. I.e., a random write benchmark goes at approximately 400,000 writes per second.

Each "fillsync" operation costs much less (0.3 millisecond) than a disk seek (typically 10 milliseconds). We suspect that this is because the hard disk itself is buffering the update in its memory and responding before the data has been written to the platter. This may or may not be safe based on whether or not the hard disk has enough power to save its memory in the event of a power failure.

Read performance

We list the performance of reading sequentially in both the forward and reverse direction, and also the performance of a random lookup. Note that the database created by the benchmark is quite small. Therefore the report characterizes the performance of leveldb when the working set fits in memory. The cost of reading a piece of data that is not present in the operating system buffer cache will be dominated by the one or two disk seeks needed to fetch the data from disk. Write performance will be mostly unaffected by whether or not the working set fits in memory.

readrandom  : 16.677 micros/op;  (approximately 60,000 reads per second)
readseq     :  0.476 micros/op;  232.3 MB/s
readreverse :  0.724 micros/op;  152.9 MB/s

LevelDB compacts its underlying storage data in the background to improve read performance. The results listed above were done immediately after a lot of random writes. The results after compactions (which are usually triggered automatically) are better.

readrandom  : 11.602 micros/op;  (approximately 85,000 reads per second)
readseq     :  0.423 micros/op;  261.8 MB/s
readreverse :  0.663 micros/op;  166.9 MB/s

Some of the high cost of reads comes from repeated decompression of blocks read from disk. If we supply enough cache to leveldb so that it can hold the uncompressed blocks in memory, read performance improves again:

readrandom  : 9.775 micros/op;  (approximately 100,000 reads per second before compaction)
readrandom  : 5.215 micros/op;  (approximately 190,000 reads per second after compaction)
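
A hedged sketch of supplying such a cache through the public Cache interface (the 100 MB figure and database path below are illustrative, not the values used in the benchmark):

#include "leveldb/cache.h"
#include "leveldb/db.h"

int main() {
  leveldb::Options options;
  // Hold uncompressed blocks in an application-managed LRU cache.
  options.block_cache = leveldb::NewLRUCache(100 * 1048576);  // 100 MB
  leveldb::DB* db = nullptr;
  leveldb::Status s = leveldb::DB::Open(options, "/tmp/testdb", &db);
  // ... reads now consult the block cache before touching disk ...
  delete db;
  delete options.block_cache;  // the cache is owned by the caller, freed after the DB
  return 0;
}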

Repository contents

See doc/index.md for more explanation. See doc/impl.md for a brief overview of the implementation.

The public interface is in include/leveldb/*.h. Callers should not include or rely on the details of any other header files in this package. Those internal APIs may be changed without warning.

Guide to header files:

  • include/leveldb/db.h: Main interface to the DB: Start here.

  • include/leveldb/options.h: Control over the behavior of an entire database, and also control over the behavior of individual reads and writes.

  • include/leveldb/comparator.h: Abstraction for user-specified comparison function. If you want just bytewise comparison of keys, you can use the default comparator, but clients can write their own comparator implementations if they want custom ordering (e.g. to handle different character encodings, etc.). See the sketch after this list.

  • include/leveldb/iterator.h: Interface for iterating over data. You can get an iterator from a DB object.

  • include/leveldb/write_batch.h: Interface for atomically applying multiple updates to a database.

  • include/leveldb/slice.h: A simple module for maintaining a pointer and a length into some other byte array.

  • include/leveldb/status.h: Status is returned from many of the public interfaces and is used to report success and various kinds of errors.

  • include/leveldb/env.h: Abstraction of the OS environment. A posix implementation of this interface is in util/env_posix.cc.

  • include/leveldb/table.h, include/leveldb/table_builder.h: Lower-level modules that most clients probably won't use directly.
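
The sketch below shows a custom comparator and forward iteration using the comparator.h and iterator.h interfaces; the LengthFirstComparator class and ScanAll helper are illustrative names, not part of the library.

#include <string>

#include "leveldb/comparator.h"
#include "leveldb/db.h"
#include "leveldb/iterator.h"
#include "leveldb/slice.h"

// Illustrative comparator: order keys by length first, then bytewise.
class LengthFirstComparator : public leveldb::Comparator {
 public:
  int Compare(const leveldb::Slice& a, const leveldb::Slice& b) const override {
    if (a.size() != b.size()) return a.size() < b.size() ? -1 : 1;
    return a.compare(b);
  }
  const char* Name() const override { return "LengthFirstComparator"; }
  // Minimal no-op implementations; these only affect internal key compression.
  void FindShortestSeparator(std::string*, const leveldb::Slice&) const override {}
  void FindShortSuccessor(std::string*) const override {}
};

// Forward iteration over every key/value pair in the database.
void ScanAll(leveldb::DB* db) {
  leveldb::Iterator* it = db->NewIterator(leveldb::ReadOptions());
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    leveldb::Slice key = it->key();
    leveldb::Slice value = it->value();
    // ... use key and value ...
  }
  delete it;
}

To use the comparator, set Options::comparator to a LengthFirstComparator instance before opening the database; the comparator must outlive the DB, and a database must always be reopened with the same comparator it was created with.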

Comments
  • Comprehensive, Native Windows Support

    Now, before you tell me this is a lot of work: I know, and am working on it (and almost done). Ideally, I would like to have my changes merged here, so I have a few questions and concerns for my current port.

    Questions

    Should I target a specific C++ standard?

    Currently, my code depends on a few C++11 features, which can easily be removed with a few macros. This makes the code less readable; however, if C++03 support is desired, I will gladly change my implementation to conform to an older standard.

    How to handle Unicode filesystem support?

    Currently, LevelDB uses char-based (narrow) strings for all filesystem operations, which does not translate well to Windows systems (since narrow strings use the ANSI or OEM legacy codepages, not UTF-8, for backwards compatibility). This means paths using international characters or emojis are not supported by a simple port, which I consider an undesirable outcome for a modern library. None of the current forks of levelDB solve this fundamental issue, which led me to create my own implementation. Possible solutions include:

    1. A narrow (UTF-8) API on *Nix, and a wide (UTF-16) API on Windows, using a typedef to determine the proper path type.
    2. Converting all narrow strings from UTF-8 to UTF-16 before calling WinAPI functions.
    3. Providing both a narrow (ANSI) and wide (UTF-16) API on Windows.

    The 2nd option, although the least amount of work, is the least appealing to me, since the expected encoding for paths from levelDB would then conflict with the entirety of the WinAPI. The 3rd option, however, duplicates code to support both the narrow and wide WinAPI, which would increase the amount of work required to maintain levelDB. The first option is a happy medium: it minimizes redundancy and is consistent with expectations about *Nix and Windows paths. I am, however, amenable to any suggestions the levelDB authors may have.
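
    A hypothetical sketch of option 1, where path_string is a name invented here for illustration and not part of leveldb today:

    #include <string>

    // Platform-dependent path type: wide (UTF-16) on Windows, narrow (UTF-8) on *nix.
    #if defined(_WIN32)
    typedef std::wstring path_string;
    #else
    typedef std::string path_string;
    #endif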

    Intellectual Property

    To emulate the behavior of mmap on Windows, I used a very lightweight library (<250 lines of code) from Steven Lee, mman-win32. However, looking over your contributor license agreement, it seems that my port would not satisfy Google's CLA until I remove this code from my implementation. If this is the case, I could easily use the raw WinAPI functions rather than the emulated mmap in my Windows port. Please notify me if I should remove this code prior to submitting a pull request.

    Other Changes

    CMake Build System

    I introduced a CMake build system, which retains most of the same logic as the existing Makefile. The existing Makefile has not been deprecated.

    AppVeyor Continuous Integration

    To ensure changes do not break the Windows build, I am planning to add an AppVeyor configuration, which allows continuous integration on Windows using MSVC.

    Summary

    If there is still interest for native Windows support, and the proposed changes are amenable to the levelDB authors, I would gladly submit a pull request.

  • Provide a shared library

    Original issue 27 created by quadrispro on 2011-08-09T12:57:55.000Z:

    Please add a target into the Makefile to compile a shared library object.

    Thanks in advance for any reply.

  • CMake Support

    Hi, @cmumford

    Does it make sense to add CMake support to leveldb? If the answer is yes, I will try to do it.

    There are some useful LLVM tools like clang-tidy and woboq that need CMake support. With a CMakeLists.txt we would get automatic code formatting, static checks, and an online code browser.

    Any comments are appreciated. Thanks.

  • Compaction error: IO error: .../xxxxx.ldb: Too many open files

    I have also read issue 181:

    LevelDBs above a certain size (about 40 GB) seem to cause leveldb to open every single file in the database without closing anything in between.

    Also, it seems it opens every file twice, for some reason.

    My problem is almost the same.

    OS: FreeBSD 10.1-RELEASE amd64
    LevelDB: master branch (also tested 1.18, 1.17, ... 1.14)
    Dataset: 99G with snappy compression, 58612 *.sst files
    ulimit -n: 706995
    kern.maxfiles: 785557
    kern.maxfilesperproc: 706995

    The dataset was generated by leveldb 1.8.0, which had been running for several months. Last week I restarted the server, and then the issue occurred.

    It seems to open every *.sst file twice and not close them.

    $ fstat -m|grep leveldb|wc
      117223 1055007 8668825
    

    58612 * 2 ~= 117223 < 706995 (system limit)

    $ fstat -m|grep leveldb
    USER     CMD          PID   FD MOUNT      INUM MODE         SZ|DV R/W
    root     leveldb-tools 67098   67 /         92326 -rw-r--r--  1594319  r
    root     leveldb-tools 67098   68 /         92326 -rw-r--r--  1594319  r
    root     leveldb-tools 67098   69 /         45578 -rw-r--r--  2124846  r
    root     leveldb-tools 67098   70 /         45578 -rw-r--r--  2124846  r
    root     leveldb-tools 67098   71 /         45579 -rw-r--r--  2123789  r
    root     leveldb-tools 67098   72 /         45579 -rw-r--r--  2123789  r
    root     leveldb-tools 67098   73 /         45580 -rw-r--r--  2125455  r
    root     leveldb-tools 67098   74 /         45580 -rw-r--r--  2125455  r
    root     leveldb-tools 67098   75 /         45581 -rw-r--r--  2123795  r
    root     leveldb-tools 67098   76 /         45581 -rw-r--r--  2123795  r
    root     leveldb-tools 67098   77 /         45582 -rw-r--r--  2122645  r
    root     leveldb-tools 67098   78 /         45582 -rw-r--r--  2122645  r
    root     leveldb-tools 67098   79 /         45583 -rw-r--r--  2119487  r
    root     leveldb-tools 67098   80 /         45583 -rw-r--r--  2119487  r
    root     leveldb-tools 67098   81 /         45584 -rw-r--r--  2117737  r
    root     leveldb-tools 67098   82 /         45584 -rw-r--r--  2117737  r
    ... more ....
    

    As shown above, each file is opened twice (the same inode numbers: 92326, 92326, 45578, 45578, ...).

    $ tail -f LOG
    2016/08/10-11:17:48.121149 802006400 Recovering log #18223888
    2016/08/10-11:17:48.329778 802006400 Delete type=2 #18223889
    2016/08/10-11:17:48.333491 802006400 Delete type=3 #18223887
    2016/08/10-11:17:48.333993 802006400 Delete type=0 #18223888
    2016/08/10-11:17:48.388989 802007400 Compacting 58608@0 + 0@1 files
    2016/08/10-11:20:14.324576 802007400 compacted to: files[ 58608 0 0 0 0 0 0 ]
    2016/08/10-11:20:14.325108 802007400 Compaction error: IO error: ..../leveldb/18223891.ldb: Too many open files
    

    After the IO error, the number of open files drops to 87580:

    fstat -m | grep leveldb | wc
       87580  788220 6476498
    

    And the program uses 100% CPU:

      PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
    67098 root          6  35    0  5293M  3607M uwait   4  48:55 100.00% leveldb-tools
    

    But there is no disk I/O at all:

      PID USERNAME     VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
    67098 root            0    216      0      0      0      0   0.00% leveldb-tools
    

    After that, it can't seek, can't get, can't put...

    I've tried changing leveldb_options_set_max_open_files() to 100, 1024, and 400000, but it did not help.
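
    For reference, the C++ equivalent of that C-API call is the max_open_files field in Options; a hedged sketch (the value and path shown are illustrative):

    #include "leveldb/db.h"

    int main() {
      leveldb::Options options;
      // Caps how many table files leveldb keeps open at once; it should stay
      // comfortably below the process ulimit -n to leave room for other fds.
      options.max_open_files = 1000;
      leveldb::DB* db = nullptr;
      leveldb::Status s = leveldb::DB::Open(options, "/path/to/db", &db);
      // ...
      delete db;
      return 0;
    }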

  • Concurrency support for multiple processes (1 exclusive initializer / n readers)

    Original issue 176 created by shri314 on 2013-06-10T20:03:38.000Z:

    Can the designers of leveldb explain the rationale behind the design decision of not supporting multiple processes in the leveldb implementation?

    The documentation clearly says, under Concurrency section that: "A database may only be opened by one process at a time. The leveldb implementation acquires a lock from the operating system to prevent misuse."

    Currently I can see that when one process opens leveldb, it uses fcntl with an RW (exclusive) lock. However, this is severely limiting, as no other process can ever open the same database, even if it only wants to inspect the database contents for read-only purposes.

    The use case for example is - one process exclusively opens leveldb database and fills up the database, then closes it. Then n different processes start reading that database.

  • There is a static initializer generated in util/comparator.cc

    Original issue 75 created by [email protected] on 2012-03-13T10:14:38.000Z:

    Static initializers are totally fine in 99% of the projects. However in Chrome we are trying to remove them as they significantly slow down startup due to disk seeks.

    There is only one static initializer generated by leveldb:

    $ nm libleveldb.a | grep _GLOBAL__I
    0000000000000050 t _GLOBAL__I__ZN7leveldb10ComparatorD2Ev

    A global instance of BytewiseComparatorImpl is created at static initialization time in util/comparator.cc:

    // Intentionally not destroyed to prevent destructor racing
    // with background threads.
    static const Comparator* bytewise = new BytewiseComparatorImpl;

    const Comparator* BytewiseComparator() { return bytewise; }

    I tried to turn BytewiseComparator() into CreateBytewiseComparator(), so that it returns a new instance every time it is called. But then I ran into ownership issues when it is used in the Options class. I initially made Options call CreateBytewiseComparator() in its constructor and delete it in its destructor (I also provided the correct implementations of the copy constructor/assignment operator). The problem is that the comparator must live longer than the Options instance which owns it, since the client seems to still use the pointer after Options goes out of scope.

    Therefore I was also thinking about a totally different approach and wanted to add atomicops and CallOnce (GoogleOnceInit) from V8 to leveldb. That way we can keep BytewiseComparator() as it is and initialize the global instance the first time it is used. Adding all these dependencies might seem overkill. This is why I'm not directly sending a CL to you. They might serve you later though.
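
    A minimal sketch of the lazy-initialization idea using a C++11 function-local static instead of the V8 primitives (BytewiseComparatorImpl is the internal class in util/comparator.cc; whether C++11 is acceptable here is an open question):

    const Comparator* BytewiseComparator() {
      // C++11 guarantees thread-safe, on-first-use initialization of
      // function-local statics, so no static initializer runs at startup.
      // Intentionally never destroyed, as in the original code.
      static const Comparator* bytewise = new BytewiseComparatorImpl;
      return bytewise;
    }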

    What do you think?

  • Add DB::SuspendCompactions() and DB:: ResumeCompactions() methods

    Original issue 184 created by chirino on 2013-07-01T13:33:37.000Z:

    If an application wants to take a consistent backup of the leveldb data files, it needs to ensure that the background compaction threads are not modifying those files.

  • Xcode 9 / Swift 4 warnings

    There are 3 warnings when building a project in Xcode 9 with Swift 4.

    Two warnings are the same, for lines 274 and 275: Possible misuse of comma operator here - Cast expression to void to silence warning

    and on line 1350: Code will never be executed

  • Add O_CLOEXEC to open calls.

    This prevents file descriptors from leaking to child processes.

    When compiled for older (pre-2.6.23) kernels which lack support for O_CLOEXEC, there is no change in behavior. With newer kernels, child processes will no longer inherit leveldb's file handles, which reduces the chances of accidentally corrupting the database.
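
    A minimal sketch of the pattern (the helper name is illustrative):

    #include <fcntl.h>

    // Request close-on-exec at open() time; on kernels without O_CLOEXEC the
    // flag is simply absent and behavior is unchanged.
    int OpenReadOnly(const char* fname) {
    #if defined(O_CLOEXEC)
      return ::open(fname, O_RDONLY | O_CLOEXEC);
    #else
      return ::open(fname, O_RDONLY);
    #endif
    }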

    Fixes #623

  • 'string' file not found

    I am getting this error while compiling an iOS project. It looks like there is some C++ code in the project which is not being compiled properly.

    /Users/cvi/Desktop/Ritesh/quintessence-learning/iOSApp/Pods/leveldb-library/include/leveldb/slice.h:21:10: error: 'string' file not found
    #include <string>
             ^
    :0: error: could not build Objective-C module 'CoreFoundation'

  • LevelDB on Windows

    Hi, I used MSYS2 and the MinGW compiler. If this pull request is interesting for you, please merge it; if not, simply reject it. I tested the code on Windows/Linux/macOS, and it is used in the FastoNoSQL application.

  • [BUG] LevelDB data loss after a crash when deployed on GlusterFS

    Description

    We run a simple workload on LevelDB that inserts two key-value pairs. The two inserts end up going to different log files, and the first insert is set as asynchronous.

    The file system trace we observed is shown below:

    1 append("3.log") # first insert
    2 create("4.log")
    3 close("3.log")
    4 append("4.log") # second insert
    5 fdatasync("4.log")
    

    When deployed on GlusterFS, the first append (line 1) may return successfully, but the data fails to persist to disk. This is due to a common write-optimization approach in distributed file systems, which delays write submission to the server and lies to the application that the write has finished without error.

    When any failure happens during write submission, GlusterFS makes close (line 3) return -1 to propagate the error. However, since LevelDB doesn't check the error returned by close, it is not aware of any error that happened during the first insert.

    In GlusterFS, fdatasync("4.log") will only persist data for 4.log, not 3.log; therefore, if a crash happens after the fdatasync (line 5), LevelDB will not recover the first insert after reboot.

    As a consequence, there is data loss for the first insert but not the second, which violates the ordering guarantee provided by LevelDB.

    Fix

    To fix the problem, we could add error-handling logic for the close operation. Basically, when an error happens, we should consider the previous append to have failed and either redo it or call fsync on that specific log file to force the file system to persist the write.
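
    A hedged sketch of that idea (the helper name is illustrative, not LevelDB's actual env code):

    #include <unistd.h>

    #include <cerrno>
    #include <cstring>
    #include <string>

    #include "leveldb/status.h"

    // If close() fails, the earlier appends cannot be trusted; surface the
    // error so the caller can redo them or fsync the log before relying on it.
    leveldb::Status CloseLogFile(int fd, const std::string& fname) {
      if (::close(fd) != 0) {
        return leveldb::Status::IOError(fname, std::strerror(errno));
      }
      return leveldb::Status::OK();
    }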

  • Unused warn in `third_party/benchmark/src/complexity.cc`

    I tried to build the source shortly after I cloned the repo:

    [ 71%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/complexity.cc.o
    leveldb/third_party/benchmark/src/complexity.cc:85:10: error: variable 'sigma_gn' set but not used [-Werror,-Wunused-but-set-variable]
      double sigma_gn = 0.0;
             ^
    1 error generated.
    make[2]: *** [third_party/benchmark/src/CMakeFiles/benchmark.dir/complexity.cc.o] Error 1
    make[1]: *** [third_party/benchmark/src/CMakeFiles/benchmark.dir/all] Error 2
    make: *** [all] Error 2
    

    It seems that sigma_gn is not used anywhere.

    The solution is to simply remove these two lines:

    LeastSq MinimalLeastSq(const std::vector<int64_t>& n,
                           const std::vector<double>& time,
                           BigOFunc* fitting_curve) {
    -  double sigma_gn = 0.0;
    + //  double sigma_gn = 0.0;
      double sigma_gn_squared = 0.0;
      double sigma_time = 0.0;
      double sigma_time_gn = 0.0;
    
      // Calculate least square fitting parameter
      for (size_t i = 0; i < n.size(); ++i) {
        double gn_i = fitting_curve(n[i]);
    -     sigma_gn += gn_i;
    + //    sigma_gn += gn_i;
        sigma_gn_squared += gn_i * gn_i;
        sigma_time += time[i];
        sigma_time_gn += time[i] * gn_i;
      }
    
  • Throw specific exception instead of assert

    I noticed that there are many asserts in the program which may cause levelDB to crash. May I know whether there are specific reasons we only assert instead of throwing specific exceptions?

    For example, in write_batch.cc

    void WriteBatchInternal::SetContents(WriteBatch* b, const Slice& contents) {
      assert(contents.size() >= kHeader);
      b->rep_.assign(contents.data(), contents.size());
    }
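
    A hedged sketch of the alternative being asked about, assuming it lives inside WriteBatchInternal so that rep_ and kHeader are accessible (this is not LevelDB's current code):

    Status WriteBatchInternal::SetContentsChecked(WriteBatch* b,
                                                  const Slice& contents) {
      if (contents.size() < kHeader) {
        // Report malformed input through Status instead of aborting the process.
        return Status::Corruption("WriteBatch contents too small");
      }
      b->rep_.assign(contents.data(), contents.size());
      return Status::OK();
    }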
    

    Thank you.
