Simple and powerful toolkit for BoltDB



Storm is a simple and powerful toolkit for BoltDB. Basically, Storm provides indexes, a wide range of methods to store and fetch data, an advanced query system, and much more.

In addition to the examples below, see also the examples in the GoDoc.

For extended queries and support for Badger, see also Genji.


Getting Started

GO111MODULE=on go get -u

Import Storm

import ""

Open a database

Quick way of opening a database

db, err := storm.Open("my.db")

defer db.Close()

Open can receive multiple options to customize the way it behaves. See Options below

Simple CRUD system

Declare your structures

type User struct {
  ID int // primary key
  Group string `storm:"index"` // this field will be indexed
  Email string `storm:"unique"` // this field will be indexed with a unique constraint
  Name string // this field will not be indexed
  Age int `storm:"index"`
}

The primary key can be of any type as long as it is not a zero value. Storm looks for a field with the id tag; if it is not present, Storm looks for a field named ID.

type User struct {
  ThePrimaryKey string `storm:"id"` // primary key
  Group string `storm:"index"` // this field will be indexed
  Email string `storm:"unique"` // this field will be indexed with a unique constraint
  Name string // this field will not be indexed
}

Storm handles tags in nested structures with the inline tag

type Base struct {
  Ident bson.ObjectId `storm:"id"`
}

type User struct {
  Base      `storm:"inline"`
  Group     string `storm:"index"`
  Email     string `storm:"unique"`
  Name      string
  CreatedAt time.Time `storm:"index"`
}

Save your object

user := User{
  ID: 10,
  Group: "staff",
  Email: "john@provider.com",
  Name: "John",
  Age: 21,
  CreatedAt: time.Now(),
}

err := db.Save(&user)
// err == nil

user.ID++
err = db.Save(&user)
// err == storm.ErrAlreadyExists

That's it.

Save creates or updates all the required indexes and buckets, checks the unique constraints and saves the object to the store.

Auto Increment

Storm can auto increment integer values so you don't have to worry about that when saving your objects. The new value is automatically set on the field.

type Product struct {
  Pk                  int `storm:"id,increment"` // primary key with auto increment
  Name                string
  IntegerField        uint64 `storm:"increment"`
  IndexedIntegerField uint32 `storm:"index,increment"`
  UniqueIntegerField  int16  `storm:"unique,increment=100"` // the starting value can be set
}

p := Product{Name: "Vacuum Cleaner"}

// Before saving:
// p.Pk == 0
// p.IntegerField == 0
// p.IndexedIntegerField == 0
// p.UniqueIntegerField == 0

_ = db.Save(&p)

// After saving:
// p.Pk == 1
// p.IntegerField == 1
// p.IndexedIntegerField == 1
// p.UniqueIntegerField == 100

Simple queries

Any object can be fetched, indexed or not. Storm uses indexes when available, otherwise it falls back to the query system.

Fetch one object

var user User
err := db.One("Email", "john@provider.com", &user)
// err == nil

err = db.One("Name", "John", &user)
// err == nil

err = db.One("Name", "Jack", &user)
// err == storm.ErrNotFound

Fetch multiple objects

var users []User
err := db.Find("Group", "staff", &users)

Fetch all objects

var users []User
err := db.All(&users)

Fetch all objects sorted by index

var users []User
err := db.AllByIndex("CreatedAt", &users)

Fetch a range of objects

var users []User
err := db.Range("Age", 10, 21, &users)

Fetch objects by prefix

var users []User
err := db.Prefix("Name", "Jo", &users)

Skip, Limit and Reverse

var users []User
err := db.Find("Group", "staff", &users, storm.Skip(10))
err = db.Find("Group", "staff", &users, storm.Limit(10))
err = db.Find("Group", "staff", &users, storm.Reverse())
err = db.Find("Group", "staff", &users, storm.Limit(10), storm.Skip(10), storm.Reverse())

err = db.All(&users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err = db.AllByIndex("CreatedAt", &users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err = db.Range("Age", 10, 21, &users, storm.Limit(10), storm.Skip(10), storm.Reverse())

Delete an object

err := db.DeleteStruct(&user)

Update an object

// Update multiple fields
// Only works for non zero-value fields (e.g. Name can not be "", Age can not be 0)
err := db.Update(&User{ID: 10, Name: "Jack", Age: 45})

// Update a single field
// Also works for zero-value fields (0, false, "", ...)
err := db.UpdateField(&User{ID: 10}, "Age", 0)

Initialize buckets and indexes before saving an object

err := db.Init(&User{})

Useful when starting your application

Drop a bucket

Using the struct

err := db.Drop(&User{})

Using the bucket name

err := db.Drop("User")

Re-index a bucket

err := db.ReIndex(&User{})

Useful when the structure has changed

Advanced queries

For more complex queries, you can use the Select method. Select takes any number of Matcher from the q package.

Here are some common Matchers:

// Equality
q.Eq("Name", "John")

// Strictly greater than
q.Gt("Age", 7)

// Less than or equal to
q.Lte("Age", 77)

// Regex with name that starts with the letter D
q.Re("Name", "^D")

// In the given slice of values
q.In("Group", []string{"Staff", "Admin"})

// Comparing fields
q.EqF("FieldName", "SecondFieldName")
q.LtF("FieldName", "SecondFieldName")
q.GtF("FieldName", "SecondFieldName")
q.LteF("FieldName", "SecondFieldName")
q.GteF("FieldName", "SecondFieldName")

Matchers can also be combined with And, Or and Not:

// Match if all match
q.And(
  q.Gt("Age", 7),
  q.Re("Name", "^D")
)

// Match if one matches
q.Or(
  q.Re("Name", "^A"),
  q.Not(
    q.Re("Name", "^B")
  ),
  q.Re("Name", "^C"),
  q.In("Group", []string{"Staff", "Admin"}),
  q.And(
    q.StrictEq("Password", []byte(password)),
    q.Eq("Registered", true)
  )
)

You can find the complete list in the documentation.

Select takes any number of matchers and wraps them into a q.And() so it's not necessary to specify it. It returns a Query type.

query := db.Select(q.Gte("Age", 7), q.Lte("Age", 77))

The Query type contains methods to filter and order the records.

// Limit
query = query.Limit(10)

// Skip
query = query.Skip(20)

// Calls can also be chained
query = query.Limit(10).Skip(20).OrderBy("Age").Reverse()

But also to specify how to fetch them.

var users []User
err = query.Find(&users)

var user User
err = query.First(&user)

Examples with Select:

// Find all users with an ID between 10 and 100
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Find(&users)

// Nested matchers
err = db.Select(q.Or(
  q.Gt("ID", 50),
  q.Lt("Age", 21),
  q.And(
    q.Eq("Group", "admin"),
    q.Gte("Age", 21),
  ),
)).Find(&users)

query := db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name")

// Find multiple records
err = query.Find(&users)
// or
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name").Find(&users)

// Find first record
err = query.First(&user)
// or
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name").First(&user)

// Delete all matching records
err = query.Delete(new(User))

// Fetching records one by one (useful when the bucket contains a lot of records)
query = db.Select(q.Gte("ID", 10),q.Lte("ID", 100)).OrderBy("Age", "Name")

err = query.Each(new(User), func(record interface{}) error {
  u := record.(*User)
  // ... do something with u ...
  return nil
})
See the documentation for a complete list of methods.


Transactions

tx, err := db.Begin(true)
if err != nil {
  return err
}
defer tx.Rollback()

accountA.Amount -= 100
accountB.Amount += 100

err = tx.Save(accountA)
if err != nil {
  return err
}

err = tx.Save(accountB)
if err != nil {
  return err
}

return tx.Commit()


Options

Storm options are functions that can be passed when constructing your Storm instance. You can pass any number of options.


BoltOptions

By default, Storm opens a database with mode 0600 and a one-second timeout. You can change this behavior by using BoltOptions:

db, err := storm.Open("my.db", storm.BoltOptions(0600, &bolt.Options{Timeout: 1 * time.Second}))


MarshalUnmarshaler

To store the data in BoltDB, Storm marshals it in JSON by default. If you wish to change this behavior, you can pass a codec that implements codec.MarshalUnmarshaler via the storm.Codec option:

db, err := storm.Open("my.db", storm.Codec(myCodec))

Provided Codecs

You can easily implement your own MarshalUnmarshaler, but Storm comes with built-in support for JSON (default), GOB, Sereal, Protocol Buffers and MessagePack.

These can be used by importing the relevant package and using that codec to configure Storm. The example below shows all variants (without proper error handling):

import (

var gobDb, _ = storm.Open("gob.db", storm.Codec(gob.Codec))
var jsonDb, _ = storm.Open("json.db", storm.Codec(json.Codec))
var serealDb, _ = storm.Open("sereal.db", storm.Codec(sereal.Codec))
var protobufDb, _ = storm.Open("protobuf.db", storm.Codec(protobuf.Codec))
var msgpackDb, _ = storm.Open("msgpack.db", storm.Codec(msgpack.Codec))

Tip: Adding Storm tags to generated Protobuf files can be tricky. A good solution is to use this tool to inject the tags during the compilation.

Use existing Bolt connection

You can use an existing connection and pass it to Storm

bDB, _ := bolt.Open(filepath.Join(dir, "bolt.db"), 0600, &bolt.Options{Timeout: 10 * time.Second})
db, err := storm.Open("my.db", storm.UseDB(bDB))

Batch mode

Batch mode can be enabled to speed up concurrent writes (see Batch read-write transactions)

db, err := storm.Open("my.db", storm.Batch())

Nodes and nested buckets

Storm takes advantage of BoltDB's nested buckets feature by using storm.Node. A storm.Node is the underlying object used by storm.DB to manipulate a bucket. To create a nested bucket and use the same API as storm.DB, you can use the DB.From method.

repo := db.From("repo")

err := repo.Save(&Issue{
  Title: "I want more features",
  Author: user.ID,
})

err = repo.Save(newRelease("0.10"))

var issues []Issue
err = repo.Find("Author", user.ID, &issues)

var release Release
err = repo.One("Tag", "0.10", &release)

You can also chain the nodes to create a hierarchy

chars := db.From("characters")
heroes := chars.From("heroes")
enemies := chars.From("enemies")

items := db.From("items")
potions := items.From("consumables").From("medicine").From("potions")

You can even pass the entire hierarchy as arguments to From:

privateNotes := db.From("notes", "private")
workNotes := db.From("notes", "work")

Node options

A Node can also be configured. Activating an option on a Node creates a copy, so a Node is always thread-safe.

n := db.From("my-node")

Give a bolt.Tx transaction to the Node

n = n.WithTransaction(tx)

Enable batch mode

n = n.WithBatch(true)

Use a Codec

n = n.WithCodec(gob.Codec)

Simple Key/Value store

Storm can be used as a simple, robust, key/value store that can store anything. The key and the value can be of any type as long as the key is not a zero value.

Saving data:

db.Set("logs", time.Now(), "I'm eating my breakfast man")
db.Set("sessions", bson.NewObjectId(), &someUser)
db.Set("weird storage", "754-3010", map[string]interface{}{
  "hair": "blonde",
  "likes": []string{"cheese", "star wars"},
})
Fetching data:

user := User{}
db.Get("sessions", someObjectId, &user)

var details map[string]interface{}
db.Get("weird storage", "754-3010", &details)

db.Get("sessions", someObjectId, &details)

Deleting data:

db.Delete("sessions", someObjectId)
db.Delete("weird storage", "754-3010")

You can find other useful methods in the documentation.


BoltDB is still easily accessible and can be used as usual

db.Bolt.View(func(tx *bolt.Tx) error {
  bucket := tx.Bucket([]byte("my bucket"))
  val := bucket.Get([]byte("any id"))
  _ = val // use val as needed
  return nil
})
A transaction can also be passed to Storm:

db.Bolt.Update(func(tx *bolt.Tx) error {
  dbx := db.WithTransaction(tx)
  err := dbx.Save(&user)
  return err
})



  • Storm v3


    I have finally found some time to work actively on the next version of Storm (:tada: :tada: :tada:). The idea is quite old, but I never really found the time and energy to redesign something this complicated (Storm is basically a database now). Now that I do, let's bootstrap the next version!

    A new direction

    Storm v2 works fine, but it suffers from design decisions made at a time when Storm was a simple wrapper that simplified some redundant tasks done with BoltDB. Since then, a lot of awesome features were added thanks to dozens of issues and contributions, turning Storm into a very cool database that can perform complicated requests, take advantage of indexes, and much more. The original design is now reaching its limits and requires too much energy for Storm to evolve properly.

    That's why I think it should be rewritten. But the goal must remain the same: something simple to use, so even beginners can use it; powerful, with a lot of features available out of the box; flexible, so anyone can customize the behaviour of the various components. This might sound complicated, but the current version of Storm is already decent on all of these criteria. Decent, not great though.

    Wanted features

    Here is an ambitious, non-exhaustive list of the features we could have in the next version:

    • Centralized, index aware query system
    • Support for dynamic data (i.e. maps)
    • Typed indexes
    • Custom indexes
    • Aggregation
    • Expiration
    • Geo indexes
    • Code generation
    • Better high level API
    • Low level API
    • Better documentation

    Other features

    This is a list of features we could have, but I'm not sure about:

    • Abstracting the low level storage (aka Support for BuntDB, Badger, etc)
    • Search (Bleve)

    Design evolution

    I will keep updating this post as the design evolves

    Help wanted

    Any comment, feature request, remark, contribution is welcome!

  • Simple Query Engine


    There should be a low level engine that executes the queries to BoltDB. This engine would be able to fetch one or more values based on some options:

    • greater than, greater than or equal
    • less than, less than or equal
    • in, not in
    • skip, limit
    • count only (also for special queries)
    • etc.

    This engine would also be used by indexes to fetch indexed values and could also be exported for those who want to make custom queries.

    When possible, every call to BoltDB would be made via this engine.

    Example with One:

    • db.One
    • Index
    • Engine fetches the id from the index
    • Engine fetches the data

    This engine would be able to find records without indexes (see #42)
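The option set described above could be sketched roughly like this. Every name here is hypothetical (not an actual Storm API), and a plain sorted int slice stands in for a BoltDB cursor walk:

```go
package main

import "fmt"

// Options is a hypothetical sketch of the low-level engine's query options.
type Options struct {
	Gte, Lte  *int // inclusive bounds; nil means unbounded
	Skip      int  // number of matching keys to skip
	Limit     int  // max results; <= 0 means no limit
	CountOnly bool // count matches without collecting them
}

// scan applies the options to an ordered key set, standing in for a cursor walk.
func scan(keys []int, o Options) (out []int, count int) {
	skipped := 0
	for _, k := range keys {
		if o.Gte != nil && k < *o.Gte {
			continue
		}
		if o.Lte != nil && k > *o.Lte {
			continue
		}
		if skipped < o.Skip {
			skipped++
			continue
		}
		count++
		if !o.CountOnly {
			out = append(out, k)
		}
		if o.Limit > 0 && count == o.Limit {
			break
		}
	}
	return out, count
}

func main() {
	gte, lte := 3, 9
	keys := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
	out, n := scan(keys, Options{Gte: &gte, Lte: &lte, Skip: 1, Limit: 3})
	fmt.Println(out, n) // [4 5 6] 3
}
```

Indexes would then feed the engine key ranges instead of walking buckets themselves.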

  • Using SemVer


    I have some ideas that will potentially break some things, so I think I should start using SemVer to avoid causing a mess in other people's work. I don't know what version to start at, though. I don't think Storm is near v1, since there are still a few things to add to make it good enough (orderBy, expiration, map support, etc.).

  • Performance, decomposing requests and code generation


    Storm's primary goal is to be simple to use.

    It has never been about achieving good performance, which is why reflection is heavily used. I am not a fan of reflection; it just happened naturally when designing the API. I sacrificed raw speed for ease of use, and I am glad I did.

    But I believe we can do something to avoid reflection when it is not needed, and without a lot of work. Basically, getting one or several records goes like this:

    • Using reflection on the given structure, extracting all the relevant information
    • Fetching the selected field and value in the indexes to get the matching IDs
    • Querying BoltDB for those IDs

    The reflection boilerplate is essentially done in the first step; it is necessary so we can collect the following information:

    • The name of the bucket, which is the name of the struct
    • What field is the ID and is it a zero value?
    • What fields are indexed and what kind of index is used for the field

    I think that if we provide a set of methods that lets users supply this information manually, we could achieve excellent performance.

    These methods could also be used internally to simplify some parts of the code.

    But the most interesting part is that we may be able to transform current struct declarations that use reflection into using the new methods described above, at compile time, using go generate or similar.
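As a rough illustration of what such methods might look like (every name below is invented for the sketch, not an actual Storm API), a struct could supply its own metadata through an interface instead of being reflected over:

```go
package main

import "fmt"

// Metadater is a hypothetical interface: a type provides the information
// that Storm currently extracts via reflection.
type Metadater interface {
	BucketName() string                  // name of the bucket (the struct name)
	PrimaryKey() (key []byte, zero bool) // the ID value and whether it is the zero value
	Indexes() map[string]string          // indexed fields -> kind of index
}

type User struct {
	ID    int
	Email string
}

func (u *User) BucketName() string { return "User" }

func (u *User) PrimaryKey() ([]byte, bool) {
	return []byte(fmt.Sprint(u.ID)), u.ID == 0
}

func (u *User) Indexes() map[string]string {
	return map[string]string{"Email": "unique"}
}

func main() {
	var m Metadater = &User{ID: 10, Email: "john@provider.com"}
	key, zero := m.PrimaryKey()
	fmt.Println(m.BucketName(), string(key), zero) // User 10 false
}
```

Code generation would then emit these methods from the existing struct tags.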

  • Consider using a DSL that generates code instead of reflection


    For better performance, generated code would work better than reflection.

    Something like a goa/gorma DSL could be a potential approach.

  • Add support for nested buckets


    BoltDB supports nested buckets, but it's not implemented in Storm.

    As to "why do we need this?":

    Many database applications have natural partitions, where typically one partition is in use at a time.

    With GitHub as an example: A user has many repositories, but looks mostly at a single repository at a time, with its:

    • Code (Git repo)
    • Issues
    • Pull requests
    • Settings
    • ...

    In the world of relational databases this is solved with where clauses and joins.

    Translated to the BoltDB world we could:

    1. Filter all your entities by some repositoryID, which gets messy, fast ... and how do we delete a repository?
    2. Store all in one big object graph, which makes filtering easier, but doesn't scale.
    3. Partition the application into buckets.

    Storm currently stores everything below the root bucket with a bucket name derived from the struct name and package. This is perfectly fine for many applications.

    Suggestion: New bucket tag that points to the parent bucket:

    type Issue struct {
        ParentBucket [][]string `storm:"container"`
    }
    • This would work fine with Save operations.
    • But would not work with Find and One etc.

    This could be solved by either

    • Adding a variadic (optional) parentBucket ...string to these methods:
    func (s *DB)  Find(fieldName string, value interface{}, to interface{}, parentBucket ...string) error {
    • Adding a new concept of a container/bucket and a way to switch between them, so only the method receiver changes:
    func (c *Container)  Find(fieldName string, value interface{}, to interface{}) error {
    • ...?

    Not sure what is best/simplest, but it should be good enough as a foundation for discussion.

    Some additional integration/helpers would be nice, but the above is a start.

  • "unique" tag gets ignored, if I add a json tag

    Hey guys, I have a User struct, that I want to store in a storm DB and also send as a response to API queries. I use the JSON Marshaler on storm (per default) and also send the user struct to my clients via JSON. Therefore, I wanted to add some struct fields like the following:

    type User struct {
    	ID        int    `storm:"id,increment"`
    	FirstName string `json:"firstName"`
    	LastName  string `json:"lastName"`
    	Email     string `storm:"unique" json:"email"`
    	Password  string `json:"password"`
    	AvatarURL string `json:"avatarURL"`
    }

    However, when I add the json:"email" tag to the Email field, the storm:"unique" tag gets ignored, and I can create multiple users with the same email (they essentially overwrite each other; every user created this way has the same ID).

    Thanks for your help on this! 😉

  • storm discussions/questions (e.g. on a slack channel, etc.)


    Would you be open to creating a support/discussion channel on Slack for example where we can discuss and ask questions about storm and perhaps other projects you created?

    BoltDB for example has a Slack channel here:

    Thank you.

  • OrderBy and Reverse for 2 different columns


    In my program I need to sort by two columns, each with a different direction, like:

    db.Select().OrderBy("Priority", "Timestamp").Reverse().Find(&servers)

    Servers with priority should come first in the list, and then all other servers ordered by timestamp. But I can only reverse both of these columns or neither, so in one query I can sort only by priority or only by time...

    db.Select().OrderBy("Priority", "Timestamp").Reverse().Find(&servers)

    localhost | 19728.75/12045.97 mbit/s | 0.46 ms | 03-04-2018 10:02:31 +
              |     0.00/    0.00 mbit/s | 0.00 ms | 03-04-2018 10:03:44 -
              | 23943.74/14376.69 mbit/s | 0.21 ms | 03-04-2018 10:02:30 -

    db.Select().OrderBy("Priority", "Timestamp").Find(&servers)

              | 23943.74/14376.69 mbit/s | 0.21 ms | 03-04-2018 10:02:30 -
              |     0.00/    0.00 mbit/s | 0.00 ms | 03-04-2018 10:03:44 -
    localhost | 19728.75/12045.97 mbit/s | 0.46 ms | 03-04-2018 10:02:31 +

    But I need priority servers first, and then everything else sorted by timestamp:

    localhost | 19728.75/12045.97 mbit/s | 0.46 ms | 03-04-2018 10:02:31 +
              | 23943.74/14376.69 mbit/s | 0.21 ms | 03-04-2018 10:02:30 -
              |     0.00/    0.00 mbit/s | 0.00 ms | 03-04-2018 10:03:44 -

    Is it possible?
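One workaround, since Reverse() applies to the whole OrderBy list, is to fetch the records and apply the mixed-direction sort in Go with sort.Slice. The Server struct and its fields below are made up for the sketch:

```go
package main

import (
	"fmt"
	"sort"
)

// Server is a stand-in for the poster's struct.
type Server struct {
	Name      string
	Priority  bool
	Timestamp int64
}

// sortServers orders priority servers first, then the rest by ascending timestamp.
func sortServers(servers []Server) {
	sort.Slice(servers, func(i, j int) bool {
		if servers[i].Priority != servers[j].Priority {
			return servers[i].Priority // priority servers come first
		}
		return servers[i].Timestamp < servers[j].Timestamp
	})
}

func main() {
	servers := []Server{
		{"a", false, 300},
		{"b", true, 100},
		{"c", false, 200},
	}
	sortServers(servers)
	for _, s := range servers {
		fmt.Println(s.Name) // prints b, then c, then a
	}
}
```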

  • Segfault on DB.Save()


    We use BoltDB as an index and metadata store for a file cache, used as part of GitBook's hosting system.

    One of the edge nodes crashed with a SEGFAULT after a few million requests. I've attached stacktraces below.

    It fails with a SEGFAULT in Tx.Commit() -> Bucket.spill() -> Node.write()


    • Go: 1.7rc2
    • BoltDB: v1.2.1-5-g05e441d - 05e441d7b3ded9164c5b912521504e7711dd0ba2
    • Storm: 97b157d5b760af7ddc878aca8b833de6fec335e8


    Pretty stacktrace
    1: running [Created by edge.(*Refresher).loop @ .:0]
        runtime    panic.go:566        throw(0xe94a1f, 0x5)
        runtime    sigpanic_unix.go:27 sigpanic()
        bolt       node.go:205         (*node).write(0xc427ed4c40, 0xc42c5ffff0)
        bolt       bucket.go:598       (*Bucket).write(0xc427c3a700, 0xc425f01101, 0xc428b43158, 0x80)
        bolt       bucket.go:506       (*Bucket).spill(0xc427c3a640, 0xc425f01000, 0xc428b433c8)
        bolt       bucket.go:508       (*Bucket).spill(0xc427c3a600, 0xc425f00f00, 0xc428b43638)
        bolt       bucket.go:508       (*Bucket).spill(0xc42a6dba58, 0x99a82de, 0x147ce80)
        bolt       tx.go:163           (*Tx).Commit(0xc42a6dba40, 0, 0)
        bolt       db.go:602           (*DB).Update(0xc4200de3c0, 0xc428b438e0, 0, 0)
        storm      save.go:51          (*Node).Save(0xc4201a3620, 0xcda920, #9, 0x1, #9)
        storm      save.go:113         (*DB).Save(#4, 0xcda920, #9, 0x3, #8)
        macrophage index.go:108        thunderbolt.Set(0x134e4e0, #4, #8, 0x71, 0, 0, 0x60f, #2, #2, 0, ...)
        macrophage macro.go:132        (*Macrophage).MetaSet(#3, #7, 0x1f, #10, 0x51, 0x60f, #2, #2, #6, 0x24, ...)
        cache      write.go:67         CacheWriter.WriteMeta(#7, 0x1f, #10, 0x51, 0x134d760, #3, 0xc42c5ffd80, 0x60f, #1, #1, ...)
        edge       refresher.go:109    (*Refresher).refresh(#5, #7, 0x1f, #10, 0x51, 0x60f, #1, #1, #6, 0x24, ...)
        edge       refresher.go:179    (*Refresher).loop.func1(0xc4205fafc0, 0xc4205fafd0, 0xc42840e880, 0x71, #5, 0xc420054320)
        runtime    asm_amd64.s:2086    goexit()

    Raw stacktrace

    unexpected fault address 0xc42c600000
    fatal error: fault
    [signal SIGSEGV: segmentation violation code=0x1 addr=0xc42c600000 pc=0x877d46]
    goroutine 57198547 [running]:
    runtime.throw(0xe94a1f, 0x5)
        /usr/local/go/src/runtime/panic.go:566 +0x95 fp=0xc428b42d60 sp=0xc428b42d40
  • Do not mix value and pointer receivers

    Do not mix value and pointer receivers

    See transaction.go line 4: Begin() takes a value receiver, but all other methods take pointer receivers.

    The solution is simple: once you have a pointer receiver, make all methods on the type pointer receivers.

    Not sure whether there's some other code in storm that does the same thing, Node.Begin() is what I ran into so far.

  • [WIP] Storm v4

    [WIP] Storm v4

    After v3, I realized Storm was becoming hard to maintain. It was relying too much on reflection, felt bloated, and was performing poorly. There were a lot of ongoing discussions on the subject that led me to realize what the actual problem was: Storm's design was good enough for a simple toolkit, but not for an actual database. While some of the performance issues could be fixed by tweaking a loop here and there, any other addition was a real pain, and my motivation wasn't high enough to handle that level of plumbing. Instead, I decided to write an actual database as a separate project, Genji, with the goal of, someday, building Storm on top of it. Genji's API is not yet stable and requires a bit more work before it can be relied upon, but as a first step, here is a draft of what it would look like.

    Another aspect I wanted to fix was the Storm API. I believe the current API is error-prone and requires too much magic, as we can see in these issues. I think this is the opportunity to improve that design as well.

    Storm now uses Genji. What does it mean?

    Genji is a document-oriented, embedded SQL database written in Go.

    • It uses BoltDB as a storage engine by default: Nothing changes for Storm, it's still a toolkit for BoltDB. What changes though, is how that data is encoded, stored, and queried.

    • It supports other backends: This means that Storm can also be used to store data in memory or in Badger. The Badger engine is still experimental, but this will be possible in the future.

    • It supports SQL: This means that if users need advanced queries, they can use SQL directly.

    • It's a document database: This means that we can now query nested fields as well

    • It supports schemas: This means that we can now apply constraints on some of the fields of the table/bucket

    • It is designed for performance: Even though Genji is still in an early phase and lacks optimizations, it was designed to scale and it's much easier to apply changes there.

    What are the big changes in the API?

    Take a look at the README file.

    How can I contribute?

    Input is welcome! Don't hesitate to comment on the PR or to create an issue.

    Next steps

    • [ ] Iterate on this PR until having a working version
    • [ ] Merge in a dedicated branch
    • [ ] Release one or several betas
    • [ ] Release version 4
  • idea: add a "sliceindex" tag

    As it is, one can add an index tag to a slice field, and the whole slice will be used as a key in the index - which is expected, but not very useful. It would be nice to have a sliceindex tag which builds an index where each value of the slice is inserted into the index.

    type Post struct {
        ID   int      `storm:"increment"`
        Tags []string `storm:"sliceindex"`
        Text string
    }
    // db contains several entries with various tags ...
    // returns posts which contain the tag "foo"
    var foo []Post
    db.Find("Tags", "foo", &foo)
    // returns posts which contain the tags "foo" and "bar"
    var fooBar []Post
    db.Find("Tags", []string{"foo", "bar"}, &fooBar)

    Well, I guess it is already possible to do just that with a separate bucket associating Tags and post IDs, but I think this would make what I see as a very common use case much easier to do.
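The separate-bucket workaround mentioned above boils down to maintaining an inverted index from tag to post IDs. Stripped of the Bolt specifics, in plain Go it looks like this (function names are made up for the sketch):

```go
package main

import "fmt"

// indexPost records a post ID under each of its tags, mimicking a
// "tag -> post IDs" bucket kept alongside the Post bucket.
func indexPost(index map[string][]int, id int, tags []string) {
	for _, t := range tags {
		index[t] = append(index[t], id)
	}
}

// withAllTags returns the IDs present under every given tag
// (the "foo" and "bar" query). It assumes tags are unique per post.
func withAllTags(index map[string][]int, tags []string) []int {
	seen := map[int]int{}
	for _, t := range tags {
		for _, id := range index[t] {
			seen[id]++
		}
	}
	var out []int
	for id, n := range seen {
		if n == len(tags) {
			out = append(out, id)
		}
	}
	return out
}

func main() {
	index := map[string][]int{}
	indexPost(index, 1, []string{"foo"})
	indexPost(index, 2, []string{"foo", "bar"})
	fmt.Println(index["foo"])                               // [1 2]
	fmt.Println(withAllTags(index, []string{"foo", "bar"})) // [2]
}
```

A sliceindex tag would essentially maintain this structure automatically on Save.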

  • Is there a way to get the next sequence before saving an object?


    I have a struct as follows

    type App struct {
        ID  int `storm:"increment"`
        Uid int `storm:"index"`
        Key string
    }

    and I want to give a value to Key before saving each instance; the value is calculated from ID and Uid. So is there a way to get the next sequence before saving an object, like there is when using BoltDB directly?

    Yes, I can get this done by updating Key after saving the object, but that's several more lines of code, and I'm worried about doing two IO operations for every single creation of an object.

  • [Question] Fetch certain fields only


    Hey there, I wonder if storm supports fetching only certain fields, like gorm does:

    type User struct {
      ID     uint
      Name   string
      Age    int
      Gender string // hundreds of fields
    }

    type APIUser struct {
      ID   uint
      Name string
    }
    // Select `id`, `name` automatically when querying
    // SELECT `id`, `name` FROM `users` LIMIT 10

    I've seen you had answered at #216 that this will be able to do with the v3, but I've not found any documents or samples about that. Could you please help on this? Thanks a lot!

  • storm.ErrNotFound vs index.ErrNotFound


    Hi, I am wondering if these two errors are supposed to be considered equivalent? I notice they are documented the same, and obviously have the same error message.

    At my workplace we were just caught out by an index.ErrNotFound error in the following code:

    // ListCustomers returns a slice of Customers.
    func ListCustomers(ctx context.Context, tx storm.Node) ([]Customer, error) {
    	var c []Customer
    	err := tx.AllByIndex("Username", &c)
    	if errors.Is(err, storm.ErrNotFound) { // note: errors.Is() does not match index.ErrNotFound
    		return c, nil
    	}
    	return c, err
    }

    I was just hoping for some guidance as to whether I should be handling both errors explicitly.

    Thanks 😄

    /cc @cwx-iggy

    PS: If these errors are supposed to be equivalent, but have been duplicated to avoid a cyclic package dependency, perhaps the construction via errors.New() could be moved into an internal package and referenced from both the storm and index packages?
