Golang in-memory database built on immutable radix trees

go-memdb

Provides the memdb package that implements a simple in-memory database built on immutable radix trees. The database provides Atomicity, Consistency and Isolation from ACID. Because it is in-memory, it does not provide durability. The database is instantiated with a schema that specifies the tables and indices that exist and allows transactions to be executed.

The database provides the following:

  • Multi-Version Concurrency Control (MVCC) - By leveraging immutable radix trees the database is able to support any number of concurrent readers without locking, and allows a writer to make progress.

  • Transaction Support - The database allows for rich transactions, in which multiple objects are inserted, updated or deleted. The transactions can span multiple tables, and are applied atomically. The database provides atomicity and isolation in ACID terminology, such that until commit the updates are not visible.

  • Rich Indexing - Tables can support any number of indexes, which can be simple like a single field index, or more advanced compound field indexes. Certain types like UUID can be efficiently compressed from strings into byte indexes for reduced storage requirements.

  • Watches - Callers can populate a watch set as part of a query, which can be used to detect when a modification has been made to the database which affects the query results. This lets callers easily watch for changes in the database in a very general way.

For the underlying immutable radix trees, see go-immutable-radix.

Documentation

The full documentation is available on Godoc.

Example

Below is a simple example of usage

// Create a sample struct
type Person struct {
	Email string
	Name  string
	Age   int
}

// Create the DB schema
schema := &memdb.DBSchema{
	Tables: map[string]*memdb.TableSchema{
		"person": &memdb.TableSchema{
			Name: "person",
			Indexes: map[string]*memdb.IndexSchema{
				"id": &memdb.IndexSchema{
					Name:    "id",
					Unique:  true,
					Indexer: &memdb.StringFieldIndex{Field: "Email"},
				},
				"age": &memdb.IndexSchema{
					Name:    "age",
					Unique:  false,
					Indexer: &memdb.IntFieldIndex{Field: "Age"},
				},
			},
		},
	},
}

// Create a new database
db, err := memdb.NewMemDB(schema)
if err != nil {
	panic(err)
}

// Create a write transaction
txn := db.Txn(true)

// Insert some people
people := []*Person{
	&Person{"[email protected]", "Joe", 30},
	&Person{"[email protected]", "Lucy", 35},
	&Person{"[email protected]", "Tariq", 21},
	&Person{"[email protected]", "Dorothy", 53},
}
for _, p := range people {
	if err := txn.Insert("person", p); err != nil {
		panic(err)
	}
}

// Commit the transaction
txn.Commit()

// Create read-only transaction
txn = db.Txn(false)
defer txn.Abort()

// Lookup by email
raw, err := txn.First("person", "id", "joe@aol.com")
if err != nil {
	panic(err)
}

// Say hi!
fmt.Printf("Hello %s!\n", raw.(*Person).Name)

// List all the people
it, err := txn.Get("person", "id")
if err != nil {
	panic(err)
}

fmt.Println("All the people:")
for obj := it.Next(); obj != nil; obj = it.Next() {
	p := obj.(*Person)
	fmt.Printf("  %s\n", p.Name)
}

// Range scan over people with ages between 25 and 35 inclusive
it, err = txn.LowerBound("person", "age", 25)
if err != nil {
	panic(err)
}

fmt.Println("People aged 25 - 35:")
for obj := it.Next(); obj != nil; obj = it.Next() {
	p := obj.(*Person)
	if p.Age > 35 {
		break
	}
	fmt.Printf("  %s is aged %d\n", p.Name, p.Age)
}
// Output:
// Hello Joe!
// All the people:
//   Dorothy
//   Joe
//   Lucy
//   Tariq
// People aged 25 - 35:
//   Joe is aged 30
//   Lucy is aged 35
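
To go with the Watches feature listed above, here is a minimal, hedged sketch of watching the same query for changes. It reuses the schema and data from the example, assumes the context and time packages are imported, and only uses FirstWatch, NewWatchSet, Add and WatchCtx; see the Godoc for the full WatchSet API.

// Create a read-only transaction and a watch set
wtxn := db.Txn(false)
ws := memdb.NewWatchSet()

// FirstWatch returns a watch channel alongside the result
watchCh, raw, err := wtxn.FirstWatch("person", "id", "joe@aol.com")
if err != nil {
	panic(err)
}
ws.Add(watchCh)
wtxn.Abort()
fmt.Printf("Watching %s\n", raw.(*Person).Name)

// Block until a write affecting the watched data is committed,
// or the context times out
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
if err := ws.WatchCtx(ctx); err != nil {
	// the context was cancelled or timed out before any change
	fmt.Println("no change observed:", err)
}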
Owner

HashiCorp
Consistent workflows to provision, secure, connect, and run any infrastructure for any application.

Comments
  • fix: nil strings should be allowed only when AllowMissing is set

    This corrects a bug introduced in 97e94de6e70b3bcdd50849897f5e601e37055947.

    If string pointers are being indexed, a nil pointer should only be accepted without an error when AllowMissing is set to true for the index; otherwise it should rightfully error.

  • Binary representation of UInt indexes should sort properly.

    We're using memdb with an id field that's supposed to be monotonically increasing. When the value of the field crosses certain binary-relevant thresholds the ordering gets out of whack. For example on an iteration, 256 shows up before 255.

    This PR makes the byte representation of all types of uints sortable.

  • fix: allow nil string pointers

    If a string pointer field is nil, do not return an error; return an empty value instead.

    This helps for the cases when AllowMissing for a field is okay but we would still like it to be indexed.
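
    For context, AllowMissing is configured per index in the schema. A hedged fragment to drop into the Indexes map of a table schema, as in the README example above; the nickname index and Nickname field are made up:

    "nickname": &memdb.IndexSchema{
    	Name:         "nickname",
    	AllowMissing: true, // objects with no value here are simply left out of this index
    	Unique:       false,
    	Indexer:      &memdb.StringFieldIndex{Field: "Nickname"},
    },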

  • Multi index

    It would be nice to be able to create multiple index entries for one object (e.g. by returning multiple values from FromObject()).

    Example:

    type Object struct {
        tags []string
    }
    

    If you want to have an index on the tags, that's not currently possible.

    Would you be interested in a pull request for this?
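
    For reference, a hedged sketch of what such an index can look like with the StringSliceFieldIndex that appears elsewhere on this page (it emits one index entry per element of a string slice). The exported ID and Tags field names are assumptions:

    schema := &memdb.DBSchema{
    	Tables: map[string]*memdb.TableSchema{
    		"object": &memdb.TableSchema{
    			Name: "object",
    			Indexes: map[string]*memdb.IndexSchema{
    				"id": &memdb.IndexSchema{
    					Name:    "id",
    					Unique:  true,
    					Indexer: &memdb.StringFieldIndex{Field: "ID"},
    				},
    				"tags": &memdb.IndexSchema{
    					Name:    "tags",
    					Unique:  false,
    					Indexer: &memdb.StringSliceFieldIndex{Field: "Tags"},
    				},
    			},
    		},
    	},
    }

    With that in place, txn.Get("object", "tags", "some-tag") returns every object whose Tags slice contains that tag.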

  • Index out of order for `IntFieldIndex`

    With the following code (a slight modification of the example code), I found that the results were returned out of order.

    Based on the example I would assume that the results would be in order.

    Code:

    package main
    
    import (
    	"fmt"
    	"time"
    
    	"github.com/hashicorp/go-memdb"
    	"github.com/segmentio/ksuid"
    )
    
    func main() {
    	// Create a sample struct
    	type Person struct {
    		Email string
    		Name  string
    		Age   int
    	}
    
    	// Create the DB schema
    	schema := &memdb.DBSchema{
    		Tables: map[string]*memdb.TableSchema{
    			"person": {
    				Name: "person",
    				Indexes: map[string]*memdb.IndexSchema{
    					"id": {
    						Name:    "id",
    						Unique:  true,
    						Indexer: &memdb.StringFieldIndex{Field: "Email"},
    					},
    					"age": {
    						Name:    "age",
    						Unique:  false,
    						Indexer: &memdb.IntFieldIndex{Field: "Age"},
    					},
    				},
    			},
    		},
    	}
    
    	// Create a new data base
    	db, err := memdb.NewMemDB(schema)
    	if err != nil {
    		panic(err)
    	}
    
    	// Create a write transaction
    	txn := db.Txn(true)
    
    	// Insert very many people
    	for i := 0; i < 1000; i++ {
    		p := &Person{
    			Email: ksuid.New().String(),
    			Age:   i,
    		}
    		if err := txn.Insert("person", p); err != nil {
    			panic(err)
    		}
    		// fmt.Println("INserted", i)
    	}
    	fmt.Println("inserted")
    
    	// Commit the transaction
    	txn.Commit()
    
    	// Create read-only transaction
    	txn = db.Txn(false)
    	defer txn.Abort()
    
    	// Range scan over people with ages between 25 and 35 inclusive
    	start := time.Now()
    	it, err := txn.LowerBound("person", "age", 1)
    	if err != nil {
    		panic(err)
    	}
    
    	fmt.Println("People aged 25 - 35:")
    	i := 0
    	for obj := it.Next(); obj != nil; obj = it.Next() {
    		p := obj.(*Person)
    		fmt.Printf("  %s is aged %d\n", p.Email, p.Age)
    		// if p.Age > 200 {
    		// 	break
    		// }
    		i++
    		if i == 100 {
    			break
    		}
    	}
    	fmt.Println(time.Since(start))
    }
    

    Output:

    inserted
    People aged 25 - 35:
      21ZfMMKi3eIPU3bZOXN2MueNVFC is aged 1
      21ZfMLObHa1548szSMA4t4stgMh is aged 2
      21ZfMG9Wn7AQttLuhv54A3wcjdq is aged 3
      21ZfMMoydQX1MocpnLR9koVsbOO is aged 4
      21ZfMLbElyEiw8oqSoLoP6CcTtU is aged 5
      21ZfMIHZfEqLar18hb6MyVTptSz is aged 6
      21ZfMNNDLrsrUp4CsQvVyzE9hO1 is aged 7
      21ZfMGGz9D84sx0lGNZNM4KefFU is aged 8
      21ZfMLX1HPc9luyo3qEA4D92mh4 is aged 9
      21ZfMMkFCvTqD3h0eLKyytVyL9l is aged 10
      21ZfMFkBm7dwi3EPln2VLHnwob5 is aged 11
      21ZfMFgIMjvt8m3Kt7OcGcrFSNi is aged 12
      21ZfMJw348zQ0zzdtfB7wuAnZ0n is aged 13
      21ZfMJEKCmOyyDpYWYsPsB4TYT4 is aged 14
      21ZfMFhfwNq5DsB7b8yTwL8e242 is aged 15
      21ZfMIiklNhWrfZKSw85J5lB88n is aged 16
      21ZfMIEjiSS5OF0jL45zo4uw4cN is aged 17
      21ZfMKR0phJOhwcQw1uVgwUw58z is aged 18
      21ZfMJn8F8ix9vQ3eXz9TmeP8Jr is aged 19
      21ZfMMuJdacG1Ns3XRzx1fiWyio is aged 20
      21ZfMJsT309ghr2UJdgWNxfYif6 is aged 21
      21ZfMGMNrW7a1fpLnjQYK5goMLK is aged 22
      21ZfMMBC5d7fRGRZHBygY8p0RVH is aged 23
      21ZfMFyUVofmkDOXXjsT0yf2NaC is aged 24
      21ZfMLeEhUXKeI1LdCDOUung68u is aged 25
      21ZfML3QnDhUkNLDP59rma1lS7z is aged 26
      21ZfMNBtH47fZLVjyrZekP2d7ju is aged 27
      21ZfMJTWv9u2VbqR4F5B7wXIpIx is aged 28
      21ZfMJO8NbfsEF7Xu6PzbyP3lw3 is aged 29
      21ZfMJ56lOXMV8WdWnBPgzW9kYO is aged 30
      21ZfMFvSItxvMvuJh8Z5el0x0vC is aged 31
      21ZfMKc7cTdDKxFaigQ6csXlT9f is aged 32
      21ZfMI5njSIOtmeZPYEaw0VUKVk is aged 33
      21ZfMIKLK4BH9k2j8oJGnnLgDDl is aged 34
      21ZfMJKu6LaogaZzvS57AIOTWa3 is aged 35
      21ZfMGOd9lcMDdRVWlFR9yyvMHz is aged 36
      21ZfMFwONDqAfh5UuSKQyFyo6mV is aged 37
      21ZfMK5TXeFVLJVjsANXidatBic is aged 38
      21ZfMHSkeVxEETr3b04mPSVjUQl is aged 39
      21ZfMIvFosiaEHQXEV3IqLWpHNl is aged 40
      21ZfMJjYKiHHUeqTcWKw1oWKIE2 is aged 41
      21ZfMKMujeeCzJziK1IltGV9ZrQ is aged 42
      21ZfMLPQLmwcbCpntzxQLl9AY2e is aged 43
      21ZfMJOsp4XWqQ5GxL6449nbPq0 is aged 44
      21ZfMKarrJ8pJqlRELJV2ztyzMb is aged 45
      21ZfMIKgiiMJGcxFhQKndcOO9QX is aged 46
      21ZfMKYxGCJxHvjtjJcEB9AHrxA is aged 47
      21ZfMMlBDe4CiAc2ncX0TQYWEX6 is aged 48
      21ZfMHIL1Z9dALZStOJIXPss1ie is aged 49
      21ZfMMsh7dvzqXTV5j0YchSXQ0p is aged 50
      21ZfMIjHK4KTWpRnOYkopppzfIF is aged 51
      21ZfMLkemfP2f7dCDk8k2XcvZHY is aged 52
      21ZfMIlLYbF5fesjH71qUm8L1Tu is aged 53
      21ZfMJaAhu5RXB31xJaWmy0rYSu is aged 54
      21ZfMGT8a1WsoXvaVy7aCni8WLk is aged 55
      21ZfMNJLw41lx23iLOvM3vlabjF is aged 56
      21ZfMMi8MYqnrvCa4EWUTKQNAGC is aged 57
      21ZfMJPplhdf1TbVtymnB6a0AHd is aged 58
      21ZfMLJtnT5jnRKYljfFN7gAIIC is aged 59
      21ZfMHbs0SDoDCSxbku7toPGPhR is aged 60
      21ZfMIB9QE49c3AtVg7qvN8yDZ7 is aged 61
      21ZfMMeGTeuUU6KAJ8h8LRb5XNM is aged 62
      21ZfMIE9od85kp89U4y7vB2hAVl is aged 63
      21ZfMMxNoT5kyDNB7OVq2BL2foB is aged 64
      21ZfMGa0JbhDYTS2N4HoqB1C6p0 is aged 128
      21ZfMK4v8dJ7h1M3xDuoabZU6QD is aged 192
      21ZfMGDbQjaFYpJ9LJ89kQIb5tp is aged 256
      21ZfMIujwXJU8xfjkRWTScnNTHp is aged 320
      21ZfMICYjUJ3kz5X1cSONVwtx0g is aged 384
      21ZfMMVcMMpLrnaapM4VxMx1fSF is aged 448
      21ZfMGbTsmgYg0MHea4fsmV5XCn is aged 512
      21ZfMKC9iUdS7fXNHCGrgiENnCu is aged 576
      21ZfMHsvapaO4Ho97oMQrTaktw1 is aged 640
      21ZfMICIqVxOcp4OfEea915Q46r is aged 704
      21ZfMGUDI2fyVdYcm0WwdgdfUQV is aged 768
      21ZfMImVF18NB7GXncR0gec7haO is aged 832
      21ZfMKDCnGRv1BnDqad14rzmwpb is aged 896
      21ZfMNNtiBWhwOn2gDpqAc7EZSb is aged 960
      21ZfMKnGTiF8YRMycIFpq0tRzxa is aged 65
      21ZfMJjxf82foQmcCom66hXoEaC is aged 129
      21ZfMJk0PVaUlns8z00OHsczZz5 is aged 193
      21ZfMM1PKTQaktWBzc7ahOKlLzT is aged 257
      21ZfMJ2yhE6qKVJJvL65NeCdnjz is aged 321
      21ZfMHsCDeOcVgn6RINlxjbeIMi is aged 385
      21ZfMMLYCd2kBKMXCMLYqGV7y69 is aged 449
      21ZfMHxl2gIZIE1KRSnEAck7WCm is aged 513
      21ZfMNEBTld9vkQIhteZVoOuYeh is aged 577
      21ZfMLVNVbHOsf8lKiCw6Aw5ria is aged 641
      21ZfMMthuNgtsUQgU0gGea4UMqq is aged 705
      21ZfMGbBbTLAOCIFxZGwS5qsS0N is aged 769
      21ZfMIlfnhTPrheY3PqiUoZKWpE is aged 833
      21ZfMHD4e4yLZEQ2N3j5DJ2KU8x is aged 897
      21ZfMFeW56O1TAh9LeMZdCWEMSw is aged 961
      21ZfMKZwIouXKp8N4TvVPdGe1ga is aged 66
      21ZfMKx1NyfYTI8z4HFplydPO1C is aged 130
      21ZfMKrhq9yyi76X10tTjBIUzDz is aged 194
      21ZfMLlD8q6MHLh42HLcIK4OemN is aged 258
      21ZfMNJjsfcA3Y5WLItkLSN1Uyt is aged 322
      21ZfMJTy6LNDMhhB22VZ1P2YW1i is aged 386
      21ZfMI5oGGhoFF33enXVvBTCL3f is aged 450
    
  • How to perform searching in database by multiple fields if one of them is optional?

    Hello there. Currently, I'm implementing a lookup mechanism to search data in an in-memory database, but I ran into a problem performing searches by several fields. Let's say I have the following structure stored in the database:

    type Person struct {
    	FirstName  string
    	SecondName string
    	LastName   string
    }
    

    And I would like to search people by either first name, second name, or last name, or use them together. Compound indexes don't work for this case since I might search by, say, only first name while the other criteria are empty. Also, I can't create a separate index for each field since go-memdb doesn't support searching by several indexes at once. Do you guys know any workarounds, or how I can perform searching by several fields?
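
    One possible workaround (a hedged sketch, not an official answer): keep a separate single-field index per name field, query whichever index actually has a value, and filter the remaining optional criteria while iterating. Assuming StringFieldIndex indexes named first_name and last_name exist on the person table:

    it, err := txn.Get("person", "first_name", firstName)
    if err != nil {
    	panic(err)
    }
    for obj := it.Next(); obj != nil; obj = it.Next() {
    	p := obj.(*Person)
    	// apply the optional criteria in application code
    	if lastName != "" && p.LastName != lastName {
    		continue
    	}
    	fmt.Println(p.FirstName, p.SecondName, p.LastName)
    }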

  • Using nested fields for indexes.

    Hi there. I see that PR https://github.com/hashicorp/go-memdb/pull/62 hasn't been merged yet. Do you guys know of other workarounds to add indexes for nested fields? Let's say I have approximately the following structure:

    type MyStruct struct {
       ID int
       External ExternalDependency
       YetAnotherDependency []AnotherDependency 
    }
    
    type ExternalDependency struct {
       ExternalID string
    }
    
    type AnotherDependency struct {
      CustomID string
    }
    

    How to add indexes for ExternalID and CustomID?
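
    Not an official answer, but one workaround is a small custom indexer that reaches into the nested struct; memdb indexers only need to satisfy FromObject and FromArgs. A hedged sketch for ExternalID (the ExternalIDIndexer name is made up):

    type ExternalIDIndexer struct{}

    func (ExternalIDIndexer) FromObject(raw interface{}) (bool, []byte, error) {
    	obj, ok := raw.(*MyStruct)
    	if !ok {
    		return false, nil, fmt.Errorf("unexpected type %T", raw)
    	}
    	if obj.External.ExternalID == "" {
    		return false, nil, nil
    	}
    	// null-terminate, matching the convention of the built-in string indexers
    	return true, []byte(obj.External.ExternalID + "\x00"), nil
    }

    func (ExternalIDIndexer) FromArgs(args ...interface{}) ([]byte, error) {
    	if len(args) != 1 {
    		return nil, fmt.Errorf("must provide exactly one argument")
    	}
    	s, ok := args[0].(string)
    	if !ok {
    		return nil, fmt.Errorf("argument must be a string: %#v", args[0])
    	}
    	return []byte(s + "\x00"), nil
    }

    For the []AnotherDependency case the same idea applies, but using the MultiIndexer form of FromObject that returns one []byte entry per element.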

  • Watch improvements for many nodes

    This is a follow-up of https://github.com/hashicorp/consul/issues/4984

    Recap

    memdb provides a WatchSet.AddWithLimit(limit, nodeCh, fallbackNodeCh) feature that falls back to a higher (in the tree hierarchy) node if > limit nodes are watched. This mechanism bounds the number of goroutines used to watch a set of nodes. Consul currently sets this limit to 2048 on master. When watching service nodes, for example, the fallback is the root of all nodes. As described in the linked issue, switching to this fallback greatly decreases performance as the root node constantly changes. We'd like to increase this limit while still keeping the number of goroutines under control.

    Proposition 1

    aFew is currently set to 32. We could selectively use a bigger select{} based on the number of channels to watch. My very early benchmarks show that performance is decent with aFew up to 128, and it decreases rapidly after that. This allows the number to be cut by ~4x; however, it doesn't scale well.

    Proposition 2

    This is untested/unbenchmarked yet. Currently each radix node has a ch that gets closed when a change occurs. WatchSet.Watch() blocks on these chs. We could reverse this mechanism by having a slice of chs in each node. WatchSet would have a single ch, add it to each watched node on Add(), and then block on this single ch in Watch(). On change, each of the node's chs would be closed. While intrusive, this would allow watching very many nodes with no additional goroutines. Note that the two mechanisms could coexist during the transition, with the nodes holding both their current WatchCh and a slice, and two WatchSet implementations.

    What do you think?

    /cc @banks @pierresouchay

  • How to use Get() and the ResultIterator?

    Hi all,

    I'm kinda stuck on using the Get() function.

    My query should return three DB entries, which I try to retrieve like this: result, err := txn.Get("foobar", "RefType", refType)

    How do I go over the results one by one?

    Thanks!
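
    For reference, the ResultIterator is consumed with the same Next() loop shown in the README example above; Next() returns nil once the results are exhausted. A minimal sketch, assuming the stored type is *Foobar:

    result, err := txn.Get("foobar", "RefType", refType)
    if err != nil {
    	panic(err)
    }
    for obj := result.Next(); obj != nil; obj = result.Next() {
    	entry := obj.(*Foobar) // assert back to the concrete stored type
    	fmt.Println(entry)
    }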

  • Adds longest prefix matching for custom indexes.

    This adds support for longest prefix matching as needed by https://github.com/hashicorp/consul/pull/1764 (it's not yet integrated over there).

    One unfortunate thing is that the null suffixes we add with the StringFieldIndex don't work with the algorithm as it's currently written; it would need to see if there's a null edge and use that in order to terminate properly. The LongestPrefix algorithm is down in the immutable radix tree, so there's no super clean way to get it to understand this.

    For my application, I need a custom indexer, similar to the one in the unit test here that can index an empty string, and that will only get added for query templates, so this isn't a big deal. I commented LongestPrefix with details about this limitation, and I also made it look for common misconfigurations that will make it not work as expected.

    I think we could make under-the-hood improvements to remove this limitation with the StringFieldIndex in the future, but this interface would still be useful (it's the LongestPrefix form of First).

  • Bug in walkVals function expanding the possible values in a compoundMultiIndex

    When inserting an entry into a table with a compound multi index composed of a few multi indexers, the walkVals function tries to expand all the possible values for a given entry which has multiple string slices. There is a bug in the recursive algorithm. In the example there are 36 possible values for the criterias index, with 12 each ending in the last values 301, 302 and 303, but this is not the case when the values are expanded.

    package main
    
    import (
    	"fmt"
    	"github.com/hashicorp/go-memdb"
    )
    
    func main() {
    // Create a sample struct
    type LineItem struct {
    	Id            int
    	CreativeType  string
    	DeviceType    []string
    	InventoryType []string
    	Partners      []string
    	Dayparts      []string
    }
    
    // Create the DB schema
    schema := &memdb.DBSchema{
    	Tables: map[string]*memdb.TableSchema{
    		"lineItem": &memdb.TableSchema{
    			Name: "lineItem",
    			Indexes: map[string]*memdb.IndexSchema{
    				"id": &memdb.IndexSchema{
    					Name:    "id",
    					Unique:  true,
    					Indexer: &memdb.IntFieldIndex{Field: "Id"},
    				},
    				"criterias": &memdb.IndexSchema{
    					Name:   "criterias",
    					Unique: false,
    					Indexer: &memdb.CompoundMultiIndex{
    						Indexes: []memdb.Indexer{
    							&memdb.StringFieldIndex{Field: "CreativeType"},
    							&memdb.StringSliceFieldIndex{Field: "DeviceType"},
    							&memdb.StringSliceFieldIndex{Field: "InventoryType"},
    							&memdb.StringSliceFieldIndex{Field: "Partners"},
    							&memdb.StringSliceFieldIndex{Field: "Dayparts"},
    						},
    					},
    				},
    			},
    		},
    	},
    }
    
    err := schema.Validate()
    if err != nil {
    	panic(err)
    }
    
    // Create a new data base
    db, err := memdb.NewMemDB(schema)
    if err != nil {
    	panic(err)
    }
    
    // Create a write transaction
    txn := db.Txn(true)
    
    // Insert some people
    li := []*LineItem{
    	{
    		Id:            1045,
    		CreativeType:  "video",
    		DeviceType:    []string{"tv", "desktop", "mobile"},
    		InventoryType: []string{"web", "app"},
    		Partners:      []string{"microsoft", "google"},
    		Dayparts:      []string{"301", "302", "303"},
    	},
    }
    for _, p := range li {
    	fmt.Printf("%+v\n", p)
    	if err := txn.Insert("lineItem", p); err != nil {
    		panic(err)
    	}
    }
    
    // Commit the transaction
    txn.Commit()
    
    // Create read-only transaction
    txn = db.Txn(false)
    defer txn.Abort()
    
    // Lookup by email
    raw, err := txn.First("lineItem", "id", 1045)
    if err != nil {
    	panic(err)
    }
    
    // Say hi!
    fmt.Printf("Hello %d!\n", raw.(*LineItem).Id)
    
    // List by criterias index
    it, err := txn.Get("lineItem", "criterias", "video", "tv", "app", "microsoft", "303")
    if err != nil {
    	panic(err)
    }
    
    fmt.Print("Found:")
    for obj := it.Next(); obj != nil; obj = it.Next() {
    	p := obj.(*LineItem)
    	fmt.Printf("  %v\n", p)
    }
    
    // What? Can't find it?
    it, err = txn.Get("lineItem", "criterias", "video", "tv", "app", "microsoft", "301")
    if err != nil {
    	panic(err)
    }
    
    fmt.Print("Found:")
    for obj := it.Next(); obj != nil; obj = it.Next() {
    	p := obj.(*LineItem)
    	fmt.Printf("  %v\n", p)
    }
    
    }
    
  • Improve performance of FromObject and parseUUID

    FromObject had a bug where it was attempting to preallocate a slice with enough capacity to avoid growth by calculating length^3. But ^ is XOR, so it was allocating an empty slice (or at least one much smaller than expected).

    parseUUID is a bit of an odd function, parsing partial UUIDs as well as correct ones. Simplified it to improve perf by 60%.
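
    For readers unfamiliar with the pitfall, a quick illustration (not from the PR itself): in Go, ^ is bitwise XOR rather than exponentiation.

    fmt.Println(3 ^ 3)     // prints 0: XOR, not a power
    fmt.Println(3 * 3 * 3) // prints 27: what length^3 would be if ^ meant exponentiation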

  • Create index method

    Hi,

    While starting our server we consume a lot of data and store it in go-memdb. While consuming we have a lot of updates, which take really long because we have a multi-value index in our data.

    With this PR we can remove the expensive index from the initial schema, store all the data, and create the index afterwards. This saves us a lot of startup time.

    Maybe you can take a look at this PR. If there are any questions feel free to ask.

    Thx a lot!! Stephan

  • How can I store nested Data?

    id: 1 company: xyz employees:

    I have the above structure of incoming data. I have 2 structs to store company info and employee info. So basically I have the below field in struct Company:

    type Company struct {
    	Id        string
    	Employees []Employee
    }

    For the above structure, I wonder how I insert employee information.
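
    Not an official answer, but since memdb stores whole Go objects, the nested slice travels with the value you insert. A hedged sketch, assuming an Employee type with a Name field and a "company" table whose id index is a StringFieldIndex over Id (all hypothetical here):

    c := &Company{
    	Id: "1",
    	Employees: []Employee{
    		{Name: "alice"},
    		{Name: "bob"},
    	},
    }
    txn := db.Txn(true)
    if err := txn.Insert("company", c); err != nil {
    	panic(err)
    }
    txn.Commit()

    To read the employees back, look up the company by its id index and read Employees from the returned object.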

  • watchCtx() non-blocking after being triggered once

    I have noticed that once a watch set was triggered, its WatchCtx() method is not blocking anymore. I am not sure if this is intentional or an actual issue. If it is intentional, ignore my request. The following code snippet is (a bit simplified) the workaround to achieve the desired behavior: every time the watch set is triggered, a new watch set is initialized.

    func main() {
    	// ..
    	go handle(ctx)
    }

    func handle(ctx context.Context) {
    	if ws, al, err := Watch(); err != nil {
    		// log
    	} else {
    		for {
    			// blocking call according to documentation
    			if err = ws.WatchCtx(ctx); err != nil {
    				fmt.Println("received cancel, exit loop")
    				break
    			} else {
    				fmt.Println("received update")

    				// reinit watcher to prevent being retriggered by the same event
    				if ws, al, err = Watch(); err != nil {
    					// log
    					break
    				} else {
    					fmt.Println(al)
    				}
    			}
    		}
    	}
    }

    func Watch() (ws memdb.WatchSet, al AccessList, err error) {
    	txn := db.Txn(false)

    	if wc, v, e := txn.FirstWatch(accessTable, idIndex, id); e != nil {
    		err = e
    	} else if v == nil {
    		err = memdb.ErrNotFound
    	} else {
    		ws = memdb.NewWatchSet()
    		ws.Add(wc)
    		al = v.(AccessList)
    	}

    	return
    }

  • Add TimeFieldIndex

    My team is using memdb with a time.Time field. In my view, time.Time is a type that is generally useful to support.

    I don't know if this is a good way to index the time.Time field, but this PR adds a TimeFieldIndex to make it easy to index time.Time.

    Notes for reviewer:

    Adding a Time field to TestObject causes panic from quick.Check. So I added a separate object, TestObjectWithTime. https://github.com/golang/go/issues/27017
