A minimal, single-table NoSQL database.

SimpleDB

SimpleDB is a very basic NoSQL database format for long-term data storage in Golang. It is a work in progress, has a LOT of drawbacks, and is definitely not production-grade:

  • not very performant
  • not scalable, and not safe for multiple concurrent connections
  • limited to one table per DB
  • simplistic query support

Depending on your use-case, the benefits may be worthwhile:

  • very easy to reimplement and maintain
  • zero dependencies
  • helps keep large amounts of data out of memory when not needed
  • very minimal, no daemons to spin up or configuration to learn

Example usage:

type Car struct {
  Year  uint16
  Color string
  Make  string
  Model string
}

// create a backing file for the database
tempFile, _ := os.CreateTemp(os.TempDir(), "simpledb-")

// open a single-table DB using Car as the schema
db, err := simpledb.NewDB(tempFile, Car{})
if err != nil {
  // ...
}

// insert a row; keep the returned ID to look the row up later
carID, err := db.Insert(&Car{
  Year:  2008,
  Color: "brown",
  Make:  "Mazda",
  Model: "Miata",
})
if err != nil {
  // ...
}

// retrieve the row by its ID, decoding into car
var car Car
err = db.Find(carID, &car)
if err != nil {
  // ...
}

fmt.Printf("car.Year: %d\n", car.Year)
fmt.Printf("car.Color: %s\n", car.Color)
fmt.Printf("car.Make: %s\n", car.Make)
fmt.Printf("car.Model: %s\n", car.Model)

To create a SimpleDB, you must provide a data source which satisfies the simpledb.Source interface:

type Source interface {
  io.Reader
  io.Writer
  io.Seeker
  io.Closer
  Truncate(size int64) error
}

simpledb.Source is an interface for the long-term storage used by DB. Usually this is an *os.File, but you could also design a source which reads and writes through some other means. Read and Write calls should both move the same cursor as Seek, and Seek calls should support all three whence values.
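For illustration, here is a minimal sketch of a purely in-memory Source. The memSource type is hypothetical and not part of the library; it exists only to show that any type with these methods will do:

import (
  "errors"
  "io"
)

// memSource keeps the whole database in a byte slice. Read and Write
// share one cursor, as the Source interface requires.
type memSource struct {
  buf []byte
  pos int64
}

func (m *memSource) Read(p []byte) (int, error) {
  if m.pos >= int64(len(m.buf)) {
    return 0, io.EOF
  }
  n := copy(p, m.buf[m.pos:])
  m.pos += int64(n)
  return n, nil
}

func (m *memSource) Write(p []byte) (int, error) {
  // grow the buffer if the write extends past the current end
  if end := m.pos + int64(len(p)); end > int64(len(m.buf)) {
    grown := make([]byte, end)
    copy(grown, m.buf)
    m.buf = grown
  }
  n := copy(m.buf[m.pos:], p)
  m.pos += int64(n)
  return n, nil
}

// Seek supports all three whence values.
func (m *memSource) Seek(offset int64, whence int) (int64, error) {
  var base int64
  switch whence {
  case io.SeekStart:
    base = 0
  case io.SeekCurrent:
    base = m.pos
  case io.SeekEnd:
    base = int64(len(m.buf))
  default:
    return 0, errors.New("memSource: invalid whence")
  }
  if base+offset < 0 {
    return 0, errors.New("memSource: negative position")
  }
  m.pos = base + offset
  return m.pos, nil
}

func (m *memSource) Close() error { return nil }

// Truncate changes the buffer length without moving the cursor,
// mirroring the behavior of os.File.Truncate.
func (m *memSource) Truncate(size int64) error {
  if size < 0 {
    return errors.New("memSource: negative size")
  }
  if size <= int64(len(m.buf)) {
    m.buf = m.buf[:size]
    return nil
  }
  grown := make([]byte, size)
  copy(grown, m.buf)
  m.buf = grown
  return nil
}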

You must also pass a zero-value struct instance, whose exported fields will define the table schema. SimpleDB is, for the moment, a single-table database.

Guidelines for struct types which can define valid SimpleDB tables:

  • The struct type must export only fields whose types are fixed-size, or slices which boil down to such types.
  • simpledb.PrimitiveFixedSizeKinds defines the set of usable fixed-size types.
  • string is also allowed.
  • Arrays of fixed-size types are themselves considered fixed-size and can be used.
  • The order in which struct fields are declared does not matter; fields are sorted alphabetically to decide encoding order.
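
As a sketch of these guidelines, the following hypothetical struct would define a valid schema:

// Reading is an illustrative schema struct following the guidelines.
type Reading struct {
  SensorID [16]byte  // array of a fixed-size type: allowed
  TakenAt  int64     // fixed-size primitive: allowed
  Values   []float64 // slice of a fixed-size type: allowed
  Label    string    // strings are allowed

  // A map field such as `Tags map[string]string` would not (yet)
  // be a valid schema field.
}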

Additional type support (e.g. for maps and structs) is forthcoming.

How does it work?

When first opened on a new file, the database will not write any data, because an empty SimpleDB has zero size. As values are inserted into the table, SimpleDB encodes and writes them directly to the Source. First it writes the 'row header', consisting of a random uint64 ID and the size of the row encoded as an unsigned varint. The index of a row is its offset from the start of the file: zero for the first row; for the second row, the size of the first row; and so on.

Slices are encoded by first writing their length as an unsigned varint, then writing each element in turn. All values are encoded with binary.BigEndian.
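
As a rough sketch of that layout, the helpers below write a row header and a float64 slice in the manner described. Both functions are hypothetical illustrations, not the library's API, and they assume the ID is written big-endian like the values:

import (
  "encoding/binary"
  "io"
)

// writeRowHeader sketches the row header layout: a uint64 ID followed
// by the row size as an unsigned varint. (Assumes the ID is written
// big-endian, like the values.)
func writeRowHeader(w io.Writer, id, rowSize uint64) error {
  var buf [8 + binary.MaxVarintLen64]byte
  binary.BigEndian.PutUint64(buf[:8], id)
  n := binary.PutUvarint(buf[8:], rowSize)
  _, err := w.Write(buf[:8+n])
  return err
}

// writeFloat64Slice sketches the slice encoding: the length as an
// unsigned varint, then each element encoded big-endian.
func writeFloat64Slice(w io.Writer, vals []float64) error {
  var lenBuf [binary.MaxVarintLen64]byte
  n := binary.PutUvarint(lenBuf[:], uint64(len(vals)))
  if _, err := w.Write(lenBuf[:n]); err != nil {
    return err
  }
  for _, v := range vals {
    if err := binary.Write(w, binary.BigEndian, v); err != nil {
      return err
    }
  }
  return nil
}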

As each row is inserted, its index is cached in memory, keyed by its ID number. A caller who retains the ID can thus quickly look up and decode the stored value. However, perhaps you don't have the ID number, or you want to find multiple rows...
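
Before turning to that, here is a conceptual sketch of the lookup-by-ID path. The findOffset helper and the indices map are hypothetical; the library maintains them internally:

import (
  "errors"
  "io"
)

// findOffset sketches the ID-to-offset lookup described above: a map
// access followed by a seek, so the row can be decoded next.
func findOffset(src io.ReadSeeker, indices map[uint64]int64, id uint64) (int64, error) {
  off, ok := indices[id]
  if !ok {
    return 0, errors.New("row ID not found")
  }
  if _, err := src.Seek(off, io.SeekStart); err != nil {
    return 0, err
  }
  return off, nil
}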

Filtering

You can use the db.Filter method to return all rows which match a certain query. Currently this is limited to deep-equality-based checks, but in the future I plan to extend the query functionality quite a bit.

rows, err := usersDB.Filter(map[string]interface{}{
  "UserName": "josh89",
})
if err != nil {
  // ...
} else if len(rows) == 0 {
  // username not found
}

id := rows[0].ID              // the matching row's uint64 ID
user := rows[0].Value.(*User) // the decoded value, typed as the schema struct

Indexing

If you need to look up rows by a certain field frequently, you can add an index to that field.

type User struct {
  UserName string `simpledb:"indexed"`
}

Adding the tag simpledb:"indexed" to a struct field used to define a SimpleDB schema will add an in-memory cache for that field to the database. The cache maps each row's ID number to the value of that field, recorded upon insertion or when reading from disk.

When calling db.Filter, SimpleDB will compare the cached value with the queried value using reflect.DeepEqual.
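
Conceptually, that comparison amounts to scanning the cache, as in this hypothetical sketch:

import "reflect"

// matchIndexed returns the IDs of rows whose cached field value
// deep-equals the queried value. Illustrative only; the cache layout
// here is an assumption, not the library's actual structure.
func matchIndexed(cache map[uint64]interface{}, query interface{}) []uint64 {
  var ids []uint64
  for id, cached := range cache {
    if reflect.DeepEqual(cached, query) {
      ids = append(ids, id)
    }
  }
  return ids
}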

Dropping

You can drop rows using db.Drop(id), but this alone does not reduce the on-disk size of the database; it only zeros the given row on disk. Dropped rows appear on disk as large runs of zeros, which are skipped when reading the database back.
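
Zeroing a row is conceptually just an overwrite at the row's offset, as in this hypothetical sketch (callers only ever use db.Drop):

import "io"

// zeroRow overwrites size bytes at offset off with zeros, leaving a
// gap that readers skip. A hypothetical helper for illustration.
func zeroRow(src io.WriteSeeker, off, size int64) error {
  if _, err := src.Seek(off, io.SeekStart); err != nil {
    return err
  }
  _, err := src.Write(make([]byte, size))
  return err
}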

Defragging

To compact the database file back down to its optimal size, call db.Defrag(). This operation removes all zeroed rows from the file on disk and thus reduces its size. Best practice is to call db.Defrag() before closing an application which uses a SimpleDB.
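
A minimal shutdown sketch, reusing db and tempFile from the example above:

// compact the file, then close the underlying source
if err := db.Defrag(); err != nil {
  // ...
}
if err := tempFile.Close(); err != nil {
  // ...
}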
