ArcticDB - an embeddable columnar database written in Go



This project is still in its infancy; consider it not production-ready. It probably has various consistency and correctness problems, and all APIs will change!

ArcticDB is an embeddable columnar database written in Go. It features semi-structured schemas (which could also be described as typed wide columns), uses Apache Parquet for storage, and uses Apache Arrow at query time. Building on top of Apache Arrow, ArcticDB provides a query builder and various optimizers (reminiscent of DataFrame-like APIs).

ArcticDB is optimized for use cases where the majority of interactions are writes, and where, when data is queried, a lot of data is queried at once (our use case at Polar Signals can be broadly described as Observability, and specifically Parca). It could also be described as a wide-column columnar database.

Read the announcement blog post to learn about what made us create it: https://www.polarsignals.com/blog/posts/2022/05/04/introducing-arcticdb/

Design choices

ArcticDB was specifically built for Observability workloads. This has resulted in several characteristics that, in combination, make it unique.

Table Of Contents:

  • Columnar layout
  • Dynamic Columns
  • Immutable & Sorted
  • Snapshot isolation
  • Roadmap
  • Acknowledgments

Columnar layout

Observability data is most useful when it is highly dimensional and those dimensions can be searched and aggregated efficiently. Contrary to many relational databases (MySQL, PostgreSQL, CockroachDB, TiDB, etc.) that store all data belonging to a single row together, in a columnar layout all data of the same column in a table is stored in one contiguous chunk of data, making it very efficient to scan and, more importantly, meaning only the data truly necessary for a query is loaded in the first place. ArcticDB uses Apache Parquet for storage and Apache Arrow at query time. Apache Parquet is used for storage to make use of its efficient encodings to save on memory and disk space. Apache Arrow is used at query time as a foundation to vectorize the query execution.
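
To make the difference concrete, here is a minimal, self-contained sketch (plain Go, not ArcticDB code) of why scanning a single column is cheaper in a columnar layout: the aggregation only touches the contiguous slice for that column, while a row-oriented scan walks every field of every row.

package main

import "fmt"

// Row-oriented: every row carries all columns, so scanning "value"
// still walks structs that also contain labels and timestamps.
type row struct {
	labels    map[string]string
	timestamp int64
	value     float64
}

// Column-oriented: each column is its own contiguous slice.
type columns struct {
	timestamps []int64
	values     []float64
}

func sumRows(rows []row) (s float64) {
	for _, r := range rows {
		s += r.value
	}
	return
}

func sumColumn(c columns) (s float64) {
	for _, v := range c.values { // only the "value" column is read
		s += v
	}
	return
}

func main() {
	rows := []row{
		{map[string]string{"code": "200"}, 1, 12},
		{map[string]string{"code": "500"}, 2, 3},
	}
	cols := columns{timestamps: []int64{1, 2}, values: []float64{12, 3}}
	fmt.Println(sumRows(rows), sumColumn(cols))
}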

Dynamic Columns

While columnar databases already exist, most require a static schema. Observability workloads differ in that their schemas are not static, meaning not all columns are pre-defined. Wide-column databases also already exist, but they are typically not strictly typed, and most wide-column databases are row-based, not columnar.

Take a Prometheus time-series for example. Prometheus time-series are uniquely identified by the combination of their label-sets:

http_requests_total{path="/api/v1/users", code="200"} 12

This model does not map well onto a static schema, as label names cannot be known upfront. The most suitable data type some columnar databases have to offer is a map; however, maps have the same problem as row-based databases: all values of a map in a row are stored together, unable to exploit the advantages of a columnar layout. An ArcticDB schema can define a column to be dynamic, causing a column to be created on the fly when a new label name is seen.

An ArcticDB schema for Prometheus could look like this:

package arcticprometheus

import (
	"github.com/polarsignals/arcticdb/dynparquet"
	"github.com/segmentio/parquet-go"
)

func Schema() *dynparquet.Schema {
	return dynparquet.NewSchema(
		"prometheus",
		[]dynparquet.ColumnDefinition{{
			Name:          "labels",
			StorageLayout: parquet.Encoded(parquet.Optional(parquet.String()), &parquet.RLEDictionary),
			Dynamic:       true,
		}, {
			Name:          "timestamp",
			StorageLayout: parquet.Int(64),
			Dynamic:       false,
		}, {
			Name:          "value",
			StorageLayout: parquet.Leaf(parquet.DoubleType),
			Dynamic:       false,
		}},
		[]dynparquet.SortingColumn{
			dynparquet.NullsFirst(dynparquet.Ascending("labels")),
			dynparquet.Ascending("timestamp"),
		},
	)
}

Note: We are aware that Prometheus uses double-delta encoding for timestamps and XOR encoding for values. This schema is purely an example to highlight the dynamic columns feature.

With this schema, all rows are expected to have a timestamp and a value, but can vary in their columns prefixed with "labels.". In this schema, all dynamically created columns are still dictionary- and run-length-encoded and must be of type string.
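
As an illustration (the column names and the row representation below are assumptions made for the example, not ArcticDB API), the Prometheus sample from above could materialize as the following concrete columns once the dynamic labels column has seen these label names:

package main

import "fmt"

func main() {
	// Illustrative logical row: "__name__", "code" and "path" become dynamic
	// columns under the "labels." prefix as they are first seen, while
	// "timestamp" and "value" are the static columns of the schema.
	logicalRow := map[string]interface{}{
		"labels.__name__": "http_requests_total",
		"labels.code":     "200",
		"labels.path":     "/api/v1/users",
		"timestamp":       int64(1650000000000),
		"value":           float64(12),
	}
	fmt.Println(logicalRow)
}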

Immutable & Sorted

There are only writes and reads. All data is immutable and sorted. Having all data sorted allows ArcticDB to avoid maintaining an index per column, and still serve queries with low latency.

To maintain global sorting, ArcticDB requires all inserts to be sorted if they contain multiple rows. Combined with immutability, global sorting of all data can be maintained at a reasonable cost. To optimize throughput, it is preferable to perform inserts in as large batches as possible. ArcticDB maintains inserted data in batches of a configurable number of rows (by default 8192), called a Granule. To jump directly to the data needed for a query, ArcticDB maintains a sparse index of Granules. The sparse index is small enough to fully reside in memory; it is currently implemented as a b-tree of Granules.

Sparse index of Granules
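
A minimal sketch of the idea (the granule type and its field below are illustrative assumptions, and a sorted slice with binary search stands in for the actual b-tree): the sparse index only needs each Granule's lower bound to jump straight to the Granules that can contain a query range.

package main

import (
	"fmt"
	"sort"
)

// granule carries only what the sparse index needs: the lower bound ("least"
// row) of the rows it contains, here simplified to an int64 sort key.
type granule struct {
	least int64
}

// granulesForRange returns the granules that may contain rows in [from, to],
// assuming the slice is sorted by lower bound.
func granulesForRange(granules []granule, from, to int64) []granule {
	// First granule whose lower bound is greater than "from"; the granule
	// before it may still contain "from", so step back by one.
	i := sort.Search(len(granules), func(i int) bool { return granules[i].least > from })
	if i > 0 {
		i--
	}
	j := sort.Search(len(granules), func(j int) bool { return granules[j].least > to })
	return granules[i:j]
}

func main() {
	gs := []granule{{0}, {100}, {200}, {300}}
	fmt.Println(granulesForRange(gs, 150, 250)) // [{100} {200}]
}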

At insert time, ArcticDB splits the inserted rows into the appropriate Granules according to their lower and upper bounds, to maintain global sorting. Once a Granule exceeds the configured number of rows, it is split into N new Granules, depending on the amount of data.

Split of Granule

Under the hood, Granules are a list of sorted Parts, and only if a query requires it are all Parts merged into a single sorted stream using a direct k-way merge with a min-heap. An example of an operation that requires the whole Granule to be read as a single sorted stream is the aforementioned Granule split.

A Granule is organized in Parts
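
A minimal sketch of such a k-way merge using Go's container/heap (the Part contents here are plain int64s for illustration; ArcticDB merges sorted Parquet rows):

package main

import (
	"container/heap"
	"fmt"
)

// cursor points at the next unread row of one sorted Part.
type cursor struct {
	part []int64 // one sorted Part (rows simplified to int64)
	idx  int
}

// minHeap orders cursors by the row they currently point at.
type minHeap []cursor

func (h minHeap) Len() int           { return len(h) }
func (h minHeap) Less(i, j int) bool { return h[i].part[h[i].idx] < h[j].part[h[j].idx] }
func (h minHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }

func (h *minHeap) Push(x interface{}) { *h = append(*h, x.(cursor)) }

func (h *minHeap) Pop() interface{} {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

// mergeParts streams the rows of all sorted Parts in global sort order.
func mergeParts(parts [][]int64) []int64 {
	h := &minHeap{}
	for _, p := range parts {
		if len(p) > 0 {
			heap.Push(h, cursor{part: p})
		}
	}
	var out []int64
	for h.Len() > 0 {
		c := heap.Pop(h).(cursor)
		out = append(out, c.part[c.idx])
		if c.idx+1 < len(c.part) {
			c.idx++
			heap.Push(h, c)
		}
	}
	return out
}

func main() {
	fmt.Println(mergeParts([][]int64{{1, 4, 9}, {2, 3, 10}, {5}})) // [1 2 3 4 5 9 10]
}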

Snapshot isolation

ArcticDB has snapshot isolation; however, it comes with a few caveats that should be well understood. It does not have read-after-write consistency, as the intended use is for the entities reading data to be different from the entity writing it. To see new data, the user re-runs a query. Choosing to trade off read-after-write consistency allows for mechanisms that increase throughput significantly. ArcticDB releases write transactions in batches. It essentially only ensures write atomicity and that writes are not torn when reading. Since data is immutable, those characteristics together result in snapshot isolation.

More concretely, ArcticDB maintains a watermark indicating that all transactions at or below the watermark are safe to read. Only write transactions obtain a new transaction ID, while reads use the transaction ID of the watermark to identify data that is safe to read. The watermark is only increased when strictly monotonic, consecutive transactions have finished. This means that a low write transaction can block higher write transactions from becoming readable. To ensure progress is made, write transactions have a timeout.

This mechanism is inspired by a mix of Google Spanner, Google Percolator and Highly Available Transactions.

Transactions are released in batches indicated by the watermark
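
A minimal sketch of the watermark mechanism (an assumption-laden illustration, not ArcticDB's actual implementation): write transactions receive monotonically increasing IDs, and the watermark only advances over strictly consecutive finished transactions, which is exactly why one slow transaction holds back the visibility of later ones.

package main

import (
	"fmt"
	"sync"
)

// txnTracker hands out write transaction IDs and advances the watermark
// only across strictly consecutive finished transactions.
type txnTracker struct {
	mu        sync.Mutex
	nextTxn   uint64
	watermark uint64
	finished  map[uint64]bool
}

func newTxnTracker() *txnTracker {
	return &txnTracker{nextTxn: 1, finished: map[uint64]bool{}}
}

// begin hands out a new, monotonically increasing write transaction ID.
func (t *txnTracker) begin() uint64 {
	t.mu.Lock()
	defer t.mu.Unlock()
	id := t.nextTxn
	t.nextTxn++
	return id
}

// complete marks a write transaction as finished and advances the watermark
// over every strictly consecutive finished transaction.
func (t *txnTracker) complete(id uint64) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.finished[id] = true
	for t.finished[t.watermark+1] {
		delete(t.finished, t.watermark+1)
		t.watermark++
	}
}

// snapshot returns the transaction ID that reads are allowed to observe.
func (t *txnTracker) snapshot() uint64 {
	t.mu.Lock()
	defer t.mu.Unlock()
	return t.watermark
}

func main() {
	tr := newTxnTracker()
	tx1, tx2, tx3 := tr.begin(), tr.begin(), tr.begin()
	tr.complete(tx2) // finished out of order: watermark stays at 0
	fmt.Println(tr.snapshot())
	tr.complete(tx1) // 1 and 2 are now consecutive and finished
	fmt.Println(tr.snapshot())
	tr.complete(tx3)
	fmt.Println(tr.snapshot())
}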

Roadmap

  • Persistence: ArcticDB is currently fully in-memory.

Acknowledgments

ArcticDB stands on the shoulders of giants. Shout out to Segment for creating the incredible parquet-go library, as well as to InfluxData for starting Go support for Apache Arrow and to the various contributors who have worked on it since.

Comments
  • Persist data

    Persist data

    Currently, once the configured size of data is reached, the active-append table-block is swapped out for a new one and the old one is thrown away. We of course want to persist data in some way. Since we already keep the data in Parquet format in memory, it would be great to write that out and memory-map it.

  • Persist data

    Persist data

    Hi, all! I open this as a draft PR because some points need to be refined and improved after discussion, since this is a feature involving several changes.

    The current implementation uses a block file for each table, where blocks are appended in a log-like fashion. This should be better than storing each table block in a separate file, because we avoid the cost of creating/opening a file each time.

    There is another point I had to address: when syncing a block to disk, there could be some parts in the block which have not yet been committed (by increasing the watermark). It would have been natural to wait for all the txns of the block to be aligned with the watermark before starting to write the block to disk, but I decided to follow another approach. I used a pendingWritersWg WaitGroup inside the TableBlock object to track the number of goroutines trying to perform an Insert() operation on the block. Before writing the block to disk we wait on the following wait groups:

    block.pendingWritersWg.Wait()
    block.wg.Wait()
    

    The first wait ensures that all goroutines have finished executing the Insert() method, so that we can then wait for all compactions to finish on the second WaitGroup, because we are sure no one will trigger other compaction operations for that block. At this point, the block will no longer be modified and, even if there are pending write transactions, all of them should be committed successfully (it is just a matter of waiting for the watermark to be incremented). So, instead of waiting, I implemented the possibility to iterate over all parts, even those which are not yet aligned with the watermark, in order to speed up disk writing.
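
    A minimal sketch of the waiting pattern described above (the block type and its fields are simplified stand-ins for the PR's TableBlock, not its actual code):

    package main

    import (
        "fmt"
        "sync"
    )

    // block mimics the TableBlock described above: pendingWritersWg counts
    // goroutines currently inside insert, wg tracks background compactions.
    type block struct {
        mu               sync.Mutex
        rows             []int
        pendingWritersWg sync.WaitGroup
        wg               sync.WaitGroup
    }

    func (b *block) insert(v int) {
        b.pendingWritersWg.Add(1)
        defer b.pendingWritersWg.Done()
        b.mu.Lock()
        defer b.mu.Unlock()
        b.rows = append(b.rows, v)
    }

    // persist waits for in-flight writers first, then for compactions, and
    // only then snapshots the block for writing to disk.
    func (b *block) persist() []int {
        b.pendingWritersWg.Wait()
        b.wg.Wait()
        b.mu.Lock()
        defer b.mu.Unlock()
        return append([]int(nil), b.rows...)
    }

    func main() {
        b := &block{}
        var writers sync.WaitGroup
        for i := 0; i < 4; i++ {
            writers.Add(1)
            go func(i int) { defer writers.Done(); b.insert(i) }(i)
        }
        writers.Wait() // in the PR, new Inserts would be blocked at this point
        fmt.Println(len(b.persist()))
    }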

    Additional points I think should be addressed:

    • I think we should add the possibility to enable/disable persistence when creating a TableConfig, and maybe, we could decide to rotate the BlockFile when a configurable max size is reached for a file.

    • Because we now store data on disk, the user may also want to configure a root directory in which to place all block files.

    • We should add a Close() method to the Table object, in order to do all the necessary clean-up operations when closing a table (wait for all pending blocks to be correctly written to disk and close the current active block file).

    • Moreover, when testing, we should also allow the ActiveMemorySize to be modified. Currently the limit is too high (512MB), so no disk operation will be triggered.

  • Move operator tests to logic tests

    Move operator tests to logic tests

    A datadriven logic testing framework was recently added in https://github.com/polarsignals/frostdb/pull/211. Slowly but surely, it would be nice to move our operator tests to this logic testing framework to promote readability and conciseness. #255 does this for the distinct operator. This issue can be closed once the following tests have been moved over:

    • [ ] aggregate_test.go
    • [ ] filter_test.go
  • Parallelize query execution

    Parallelize query execution

    Currently, ArcticDB's query execution is not parallelized, but we want to do that. There are well-known techniques such as the Volcano model to generalize the parallelization of steps within a query. I think Volcano is a promising direction for ArcticDB, but I'd be happy for us to explore other possibilities as well.
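
    A minimal sketch of Volcano-style parallelism via an exchange operator (plain Go channels and goroutines; the operator signature is an assumption made for illustration, not ArcticDB's physical plan API):

    package main

    import (
        "fmt"
        "sync"
    )

    // operator is a simplified stand-in for a physical-plan step: it consumes
    // a batch of rows and produces a transformed batch.
    type operator func(batch []int64) []int64

    // exchange is the Volcano-style parallelization primitive: it runs
    // `workers` copies of op over incoming batches and funnels their outputs
    // back into a single stream.
    func exchange(op operator, workers int, in <-chan []int64) <-chan []int64 {
        out := make(chan []int64)
        var wg sync.WaitGroup
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for batch := range in {
                    out <- op(batch)
                }
            }()
        }
        go func() {
            wg.Wait()
            close(out)
        }()
        return out
    }

    func main() {
        double := func(b []int64) []int64 {
            r := make([]int64, len(b))
            for i, v := range b {
                r[i] = v * 2
            }
            return r
        }
        in := make(chan []int64, 3)
        in <- []int64{1, 2}
        in <- []int64{3}
        in <- []int64{4, 5}
        close(in)
        for batch := range exchange(double, 2, in) {
            fmt.Println(batch)
        }
    }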

  • frostdb: add leveled compaction

    frostdb: add leveled compaction

    This PR implements simple leveled compaction of parts into level 1 parts, as described in #223, which are guaranteed to never overlap. This will hopefully improve our memory usage during compactions and avoid the spikes we've seen in our clusters. Please refer to the individual commits for details. I've tried to split it up into 1) the addition of leveled compaction, and 2) handling out-of-order inserts, which should only happen very rarely but are an edge case we need to handle.

  • pqarrow: use ColumnIndex to answer distinct queries at the scan level

    pqarrow: use ColumnIndex to answer distinct queries at the scan level

    This commit uses a column chunk's column index to optimize distinct queries where there is a single value in the chunk (across all pages) of a dictionary-encoded column. In this case, a dictionary can be constructed with the value stored in the index, avoiding an expensive page decompression step.
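
    A library-free sketch of the idea (the columnIndex type below is an assumption made for illustration, not parquet-go's API): when the index shows the chunk's minimum equals its maximum, the single distinct value can be taken straight from the index without decoding any pages.

    package main

    import "fmt"

    // columnIndex is a simplified stand-in for the per-chunk min/max
    // statistics stored in a Parquet column index.
    type columnIndex struct {
        min, max string
    }

    // distinctFromIndex returns the chunk's only value and true when the index
    // proves the chunk holds exactly one distinct value, so no page needs to
    // be decompressed to answer a distinct query over it.
    func distinctFromIndex(idx columnIndex) (string, bool) {
        if idx.min == idx.max {
            return idx.min, true
        }
        return "", false
    }

    func main() {
        fmt.Println(distinctFromIndex(columnIndex{min: "grpc", max: "grpc"}))
        fmt.Println(distinctFromIndex(columnIndex{min: "grpc", max: "http"}))
    }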

    name          old time/op    new time/op    delta
    QueryTypes-8     169ms ±11%     133ms ±18%  -21.14%  (p=0.049 n=10+3)
    
    name          old alloc/op   new alloc/op   delta
    QueryTypes-8     443MB ± 0%     436MB ± 0%   -1.54%  (p=0.007 n=10+3)
    
    name          old allocs/op  new allocs/op  delta
    QueryTypes-8     1.13M ± 0%     1.05M ± 0%   -6.78%  (p=0.007 n=10+3)
    

    Closes #158

    cc @metalmatze @thorfour

  • Sync doesn't guarantee to see the last write

    Sync doesn't guarantee to see the last write

    While trying to add back the write benchmark I realized Sync doesn't guarantee to see the last write even if it has already returned.

    see https://github.com/polarsignals/frostdb/pull/111

    I'm guessing this is a bug, but maybe that's not a given guarantee.

  • Fixes write benchmarks

    Fixes write benchmarks

    This fixes the write benchmarks after recent changes in the code, and corrects some wrong expectations.

    However, running it seems to show that we are missing data when running inserts in parallel, but not when we run sequentially.

  • table: handle non existent columns

    table: handle non existent columns

    Co-authored-by: @metalmatze

    While testing system-wide (https://github.com/javierhonduco/parca-agent/tree/system-wide-v2), trying to select a cgroup_id in Parca's UI resulted in the following Panic:

    panic: arrow/array: number of columns/fields mismatch
    
    goroutine 16824 [running]:
    github.com/apache/arrow/go/v8/arrow/array.NewRecord(0xc007a8e960, {0xc00e9cee40, 0x0, 0xc00e9ceee0?}, 0x0)
    	/home/javierhonduco/go/pkg/mod/github.com/apache/arrow/go/[email protected]/arrow/array/record.go:149 +0x173
    github.com/polarsignals/frostdb/pqarrow.contiguousParquetRowGroupToArrowRecord({0x4209b08, 0xc0089ced20}, {0x41fc6f8, 0x5e4c638}, {0x7f6bb853a458, 0xc023f5c228}, 0xc007a8e960, {0x0, 0x0?}, {0xc012cd1400, ...})
    	/home/javierhonduco/go/pkg/mod/github.com/polarsignals/[email protected]/pqarrow/arrow.go:204 +0x60a
    github.com/polarsignals/frostdb/pqarrow.ParquetRowGroupToArrowRecord({0x4209b08?, 0xc0089ced20?}, {0x41fc6f8?, 0x5e4c638?}, {0x7f6bb853a458?, 0xc023f5c228?}, 0x0?, {0x0?, 0x0?}, {0xc012cd1400, ...})
    	/home/javierhonduco/go/pkg/mod/github.com/polarsignals/[email protected]/pqarrow/arrow.go:97 +0xa6
    github.com/polarsignals/frostdb.(*Table).Iterator(0xc0000ac280, {0x4209b08, 0xc0089ced20}, 0xc006d3a730?, {0x41fc6f8, 0x5e4c638}, 0xc007a8e960, {0xc012cd13c0, 0x1, 0x1}, ...)
    	/home/javierhonduco/go/pkg/mod/github.com/polarsignals/[email protected]/table.go:360 +0x61b
    github.com/polarsignals/frostdb/query/physicalplan.(*TableScan).Execute.func1(0xc00e9cf101?)
    	/home/javierhonduco/go/pkg/mod/github.com/polarsignals/[email protected]/query/physicalplan/physicalplan.go:86 +0x225
    github.com/polarsignals/frostdb.(*Table).View(0xc0134e88c0?, 0x374e4d5?)
    	/home/javierhonduco/go/pkg/mod/github.com/polarsignals/[email protected]/table.go:284 +0x29
    github.com/polarsignals/frostdb/query/physicalplan.(*TableScan).Execute(0xc00f40a5a0, {0x4209b08?, 0xc0089ced20}, {0x41fc6f8?, 0x5e4c638})
    	/home/javierhonduco/go/pkg/mod/github.com/polarsignals/[email protected]/query/physicalplan/physicalplan.go:73 +0x131
    github.com/polarsignals/frostdb/query/physicalplan.(*OutputPlan).Execute(...)
    	/home/javierhonduco/go/pkg/mod/github.com/polarsignals/[email protected]/query/physicalplan/physicalplan.go:58
    github.com/polarsignals/frostdb/query.LocalQueryBuilder.Execute({{0x41fc6f8?, 0x5e4c638?}, {0xc0134e8840?}}, {0x4209b08, 0xc0089ced20}, 0xc012cd1310)
    	/home/javierhonduco/go/pkg/mod/github.com/polarsignals/[email protected]/query/engine.go:111 +0x115
    github.com/parca-dev/parca/pkg/query.(*ColumnQueryAPI).Values(0xc0000ac4b0, {0x4209b08, 0xc0089ced20}, 0xc012cd1160?)
    	/home/javierhonduco/code/parca/pkg/query/columnquery.go:127 +0x1a8
    github.com/parca-dev/parca/gen/proto/go/parca/query/v1alpha1._QueryService_Values_Handler.func1({0x4209b08, 0xc0089ced20}, {0x357c360?, 0xc00115ce40})
    	/home/javierhonduco/code/parca/gen/proto/go/parca/query/v1alpha1/query_vtproto.pb.go:271 +0x78
    github.com/grpc-ecosystem/go-grpc-middleware/v2/interceptors.UnaryServerInterceptor.func1({0x4209b08, 0xc0089ced20}, {0x357c360, 0xc00115ce40}, 0x0?, 0xc01e14c300)
    	/home/javierhonduco/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware/[email protected]/interceptors/server.go:22 +0x21e
    github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1({0x4209b08?, 0xc0089ced20?}, {0x357c360?, 0xc00115ce40?})
    	/home/javierhonduco/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25 +0x3a
    github.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).UnaryServerInterceptor.func1({0x4209b08, 0xc0089ced20}, {0x357c360, 0xc00115ce40}, 0x0?, 0xc00f40a4a0)
    	/home/javierhonduco/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/server_metrics.go:107 +0x87
    github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1({0x4209b08?, 0xc0089ced20?}, {0x357c360?, 0xc00115ce40?})
    	/home/javierhonduco/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25 +0x3a
    go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryServerInterceptor.func1({0x4209b08, 0xc0089ce810}, {0x357c360, 0xc00115ce40}, 0xc00f40a440, 0xc00f40a4c0)
    	/home/javierhonduco/go/pkg/mod/go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/[email protected]/interceptor.go:325 +0x664
    github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1({0x4209b08?, 0xc0089ce810?}, {0x357c360?, 0xc00115ce40?})
    	/home/javierhonduco/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25 +0x3a
    github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1({0x4209b08, 0xc0089ce810}, {0x357c360, 0xc00115ce40}, 0xc0089a2af0?, 0x30ac120?)
    	/home/javierhonduco/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:34 +0xbf
    github.com/parca-dev/parca/gen/proto/go/parca/query/v1alpha1._QueryService_Values_Handler({0x358c700?, 0xc0000ac4b0}, {0x4209b08, 0xc0089ce810}, 0xc00115ccc0, 0xc0000d6f90)
    	/home/javierhonduco/code/parca/gen/proto/go/parca/query/v1alpha1/query_vtproto.pb.go:273 +0x138
    google.golang.org/grpc.(*Server).processUnaryRPC(0xc001466fc0, {0x42181c0, 0xc01385e1b0}, 0xc0200c5320, 0xc0002c39b0, 0x5dec3f8, 0x0)
    	/home/javierhonduco/go/pkg/mod/google.golang.org/[email protected]/server.go:1283 +0xcfd
    google.golang.org/grpc.(*Server).handleStream(0xc001466fc0, {0x42181c0, 0xc01385e1b0}, 0xc0200c5320, 0x0)
    	/home/javierhonduco/go/pkg/mod/google.golang.org/[email protected]/server.go:1620 +0xa1b
    google.golang.org/grpc.(*Server).serveStreams.func1.2()
    	/home/javierhonduco/go/pkg/mod/google.golang.org/[email protected]/server.go:922 +0x98
    created by google.golang.org/grpc.(*Server).serveStreams.func1
    	/home/javierhonduco/go/pkg/mod/google.golang.org/[email protected]/server.go:920 +0x28a
    

    We believe that this is due to the filter not being able to find any column with this name.

    Test plan

    [javierhonduco@computer frostdb]$ go test .
    ok  	github.com/polarsignals/frostdb	(cached)
    

    With Parca Agent rebased off system-wide:

    (screenshot)
  • Test_Table_Concurrency and Test_Table_GranuleSplit fail with a low probability

    Test_Table_Concurrency and Test_Table_GranuleSplit fail with a low probability

    OS

    $ sw_vers
    ProductName:	macOS
    ProductVersion:	11.6.4
    BuildVersion:	20G417
    

    Go version

    $ go version
    go version go1.18 darwin/amd64
    

    revision

    latest(ccf34f7bbb98fa1b6af19a0198b81f7b4cd1441e)

    Output

    $ go test -count 300 -run Test_Table_Concurrency -timeout 20h
    --- FAIL: Test_Table_Concurrency (27.61s)
        --- FAIL: Test_Table_Concurrency/8192 (6.71s)
            table_test.go:516:
                	Error Trace:	table_test.go:516
                	Error:      	Not equal:
                	            	expected: 8000
                	            	actual  : 7990
                	Test:       	Test_Table_Concurrency/8192
    --- FAIL: Test_Table_Concurrency (26.70s)
        --- FAIL: Test_Table_Concurrency/8192 (6.65s)
            table_test.go:516:
                	Error Trace:	table_test.go:516
                	Error:      	Not equal:
                	            	expected: 8000
                	            	actual  : 7990
                	Test:       	Test_Table_Concurrency/8192
    FAIL
    exit status 1
    FAIL	github.com/polarsignals/arcticdb	8061.410s
    
  • logicalplan: add max aggregate function

    logicalplan: add max aggregate function

    This commit adds an aggregate function to find the maximum value of an int64 column. This will be specifically useful to find the latest timestamp to query over a range in a historical dataset in benchmarks.

  • physicalplan: add partial ordering support to the OrderedAggregate

    physicalplan: add partial ordering support to the OrderedAggregate

    This PR adds the missing partial ordering support to the OrderedAggregate. Instead of returning a record on each call to Callback as was previously done, the OrderedAggregate now buffers the aggregation results and groups for each ordered set and then merges the results on Finish.

    A bunch of incidental bugfixes and test flake fixes have also been included. The first couple of commits also introduce some helper code used by the last (main) commit.

    Closes #287

    The ordered aggregate is still not ready for production usage since it doesn't handle columns appearing/disappearing from the input. However, I want to take a different approach (starting down the static schema route I mentioned) to solve this, so would rather merge a complete working version (at least from the unit test perspective) of the OrderedAggregate before working on these edge cases.

    In terms of performance difference, as expected, the OrderedAggregate doesn't perform much better than the HashAggregate for QueryMerge (it provides a small 3% perf improvement) given the large number of groups and the new requirement that all data must be buffered in order to merge partially ordered sets. This is expected, and there are some performance angles that can be worked on to improve this (e.g. plan time settings, reducing unnecessary allocations).

  • Arrow record ingestion support

    Arrow record ingestion support

    This adds support to ingest Arrow records directly into FrostDB. It does not deprecate the previous method of ingesting parquet buffers directly; deprecating that write path may be a future PR.

    It stores arrow records inside the Part object, where a part may now hold either a parquet buffer or an arrow record.

    Things that are not implemented in this PR that should be implemented in follow-on PRs:

    • WAL support (writing the wal with arrow records is not supported at this time)
    • Nested schema support
    • Dictionary arrays. (We want this to reduce the memory overhead of storing arrow records)
  • Reuse aggregation arrays for each underlying column

    Reuse aggregation arrays for each underlying column

    No, that arrays can stay as an Aggregation field, just that you can always set it up so that two aggregation struct array field indices point to the same builder.

    Originally posted by @asubiotto in https://github.com/polarsignals/frostdb/pull/282#discussion_r1048793519

    Right now we copy the data into multiple arrays if there are aggregations on the same column. For example, sum(value) and count(value) are going to write the same data into two different arrays. It's not necessary; both aggregations could read the underlying array and only write into different aggregated arrays after aggregating.
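
    A minimal sketch of the idea (plain Go, not frostdb code): sum(value) and count(value) both read the same underlying column once instead of each copying the data into its own array first.

    package main

    import "fmt"

    func main() {
        values := []float64{3, 1, 4, 1, 5} // the shared underlying "value" column

        // One pass over the shared column feeds both aggregations; only the
        // aggregated outputs end up in separate result arrays.
        var sum float64
        var count int64
        for _, v := range values {
            sum += v
            count++
        }
        fmt.Println(sum, count)
    }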

  • Test flake

    Test flake

    === RUN   TestOrderedAggregate/MultiGroupCol
        ordered_aggregate_test.go:130: 
            	Error Trace:	ordered_aggregate_test.go:130
            	            				physicalplan.go:68
            	            				ordered_aggregate.go:272
            	            				ordered_aggregate.go:226
            	            				ordered_aggregate_test.go:188
            	Error:      	Not equal: 
            	            	expected: "a"
            	            	actual  : "b"
            	            	
            	            	Diff:
            	            	--- Expected
            	            	+++ Actual
            	            	@@ -1 +1 @@
            	            	-a
            	            	+b
            	Test:       	TestOrderedAggregate/MultiGroupCol
    

    @asubiotto

  • Write level0 -> level1 compaction tests/benchmarks with real testdata files

    Write level0 -> level1 compaction tests/benchmarks with real testdata files

    It would be nice to isolate our compaction cycles with some real data in tests/benchmarks. This would also help us further analyze the memory spikes we see, and test improvements like the ones @thorfour introduced recently.

  • logictests: exec query results with null values returned as whitespace

    logictests: exec query results with null values returned as whitespace

        Hmm, I guess it's probably an artifact of the aggregator that doesn't correctly set nulls in the output (and instead sets an empty string). This is probably something to open an issue about as well although it's not too high priority. We can probably merge this as is with at least a comment pointing to the issue.
    

    Originally posted by @asubiotto in https://github.com/polarsignals/frostdb/pull/257#discussion_r1035904418
