expressive DynamoDB library for Go


import "github.com/guregu/dynamo"

dynamo is an expressive DynamoDB client for Go, with an easy but powerful API. dynamo integrates with the official AWS SDK.

This library is stable and versioned with Go modules.

Example

package main

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/guregu/dynamo"
)

// Use struct tags much like the standard JSON library,
// you can embed anonymous structs too!
type widget struct {
	UserID int       // Hash key, a.k.a. partition key
	Time   time.Time // Range key, a.k.a. sort key

	Msg       string              `dynamo:"Message"`    // Change name in the database
	Count     int                 `dynamo:",omitempty"` // Omits if zero value
	Children  []widget            // Lists
	Friends   []string            `dynamo:",set"` // Sets
	Set       map[string]struct{} `dynamo:",set"` // Map sets, too!
	SecretKey string              `dynamo:"-"`    // Ignored
}


func main() {
	db := dynamo.New(session.New(), &aws.Config{Region: aws.String("us-west-2")})
	table := db.Table("Widgets")

	// put item
	w := widget{UserID: 613, Time: time.Now(), Msg: "hello"}
	err := table.Put(w).Run()

	// get the same item
	var result widget
	err = table.Get("UserID", w.UserID).
		Range("Time", dynamo.Equal, w.Time).
		One(&result)

	// get all items
	var results []widget
	err = table.Scan().All(&results)

	// use placeholders in filter expressions (see Expressions section below)
	var filtered []widget
	err = table.Scan().Filter("'Count' > ?", 10).All(&filtered)
	if err != nil {
		panic(err)
	}
}

Expressions

dynamo will help you write expressions used to filter results in queries and scans, and add conditions to puts and deletes.

Attribute names may be written as-is if they are not reserved words, or escaped with single quotes (''). You may also use dollar signs ($) as placeholders for attribute names and list indexes. DynamoDB has a very large number of reserved words, so it may be a good idea to just escape everything.

Question marks (?) are used as placeholders for attribute values. DynamoDB doesn't have value literals, so you need to substitute everything.

Please see the DynamoDB reference on expressions for more information. The Comparison Operator and Function Reference is also handy.

// Using single quotes to escape a reserved word, and a question mark as a value placeholder.
// Finds all items whose date is greater than or equal to lastUpdate.
table.Scan().Filter("'Date' >= ?", lastUpdate).All(&results)

// Using dollar signs as a placeholder for attribute names.
// Deletes the item with an ID of 42 if its score is at or below the cutoff, and its name starts with G.
table.Delete("ID", 42).If("Score <= ? AND begins_with($, ?)", cutoff, "Name", "G").Run()

// Put a new item, only if it doesn't already exist.
table.Put(item{ID: 42}).If("attribute_not_exists(ID)").Run()

Encoding support

dynamo automatically handles several marshaling interfaces, such as encoding.TextMarshaler and encoding.TextUnmarshaler, as well as the AWS SDK's dynamodbattribute marshaler interfaces.

This allows you to define custom encodings and provides built-in support for types such as time.Time.

Struct tags and fields

dynamo handles struct tags similarly to the standard library encoding/json package. It uses dynamo for the struct tag's name, taking the form of: dynamo:"attributeName,option1,option2,etc". You can omit the attribute name to use the default: dynamo:",option1,etc".

Renaming

By default, dynamo will use the name of your field as the name of the DynamoDB attribute it corresponds to. You can specify a different name with the dynamo struct tag like so: dynamo:"other_name_goes_here". If two fields have the same name, dynamo will prioritize the higher-level field.

Omission

If you set a field's name to "-" (as in dynamo:"-"), that field will be ignored: it will be omitted when marshaling and ignored when unmarshaling. Fields that start with a lowercase letter (i.e. unexported fields) will also be ignored. However, embedding a struct whose type name starts with a lowercase letter but which contains uppercase fields is OK.

Sets

By default, slices will be marshaled as DynamoDB lists. To marshal a field as a set instead, use the dynamo:",set" option. Empty sets will be automatically omitted.

You can use maps as sets too. The following types are supported:

  • []T
  • map[T]struct{}
  • map[T]bool

where T represents any type that marshals into a DynamoDB string, number, or binary value.

Note that the order of objects within a set is undefined.

Omitting empty values (omitempty)

Using the omitempty option (as in dynamo:",omitempty") will omit the field if it has a zero value (e.g. an empty string, 0, or a nil pointer). Structs are supported.

It also supports the isZeroer interface below:

type isZeroer interface {
	IsZero() bool
}

If IsZero() returns true, the field will be omitted. This gives us built-in support for time.Time.

You can also use the dynamo:",omitemptyelem" option to omit empty values inside of slices.

Automatic omission

Some values will be automatically omitted.

  • Empty strings
  • Empty sets
  • Empty structs
  • Nil pointers and interfaces
  • Types that implement encoding.TextMarshaler and whose MarshalText method returns a zero-length or nil slice
  • Zero-length binary (byte slices)

To override this behavior, use the dynamo:",allowempty" option. Note that not all empty types can be stored by DynamoDB; for example, empty sets will still be omitted.

To override auto-omit behavior for children of a map, for example map[string]string, use the dynamo:",allowemptyelem" option.

Using the NULL type

DynamoDB has a special NULL type to represent null values. In general, this library avoids marshaling things as NULL and prefers to omit those values instead. If you want empty/nil values to marshal to NULL, use the dynamo:",null" option.

Unix time

By default, time.Time will marshal to a string because it implements encoding.TextMarshaler.

If you want time.Time to marshal as a Unix time value (number of seconds since the Unix epoch), you can use the dynamo:",unixtime" option. This is useful for TTL fields, which must be Unix time.

Creating tables

You can use struct tags to specify hash keys, range keys, and indexes when creating a table.

For example:

type UserAction struct {
	UserID string    `dynamo:"ID,hash" index:"Seq-ID-index,range"`
	Time   time.Time `dynamo:",range"`
	Seq    int64     `localIndex:"ID-Seq-index,range" index:"Seq-ID-index,hash"`
	UUID   string    `index:"UUID-index,hash"`
}

This creates a table with the primary hash key ID and range key Time. It creates two global secondary indices called UUID-index and Seq-ID-index, and a local secondary index called ID-Seq-index.

Compatibility with the official AWS library

dynamo has been in development since before the official AWS libraries were stable. It uses a different encoder and decoder than the dynamodbattribute package: dynamo uses the dynamo struct tag instead of the dynamodbav struct tag, and it prefers to automatically omit invalid values such as empty strings, whereas the dynamodbattribute package substitutes null values for them. Items that satisfy the dynamodbattribute.(Un)marshaler interfaces are compatible with both libraries.

In order to use dynamodbattribute's encoding facilities, you must wrap objects passed to dynamo with dynamo.AWSEncoding. Here is a quick example:

// Notice the use of the dynamodbav struct tag
type book struct {
	ID    int    `dynamodbav:"id"`
	Title string `dynamodbav:"title"`
}
// Putting an item
err := db.Table("Books").Put(dynamo.AWSEncoding(book{
	ID:    42,
	Title: "Principia Discordia",
})).Run()
// When getting an item you MUST pass a pointer to AWSEncoding!
var someBook book
err = db.Table("Books").Get("ID", 555).One(dynamo.AWSEncoding(&someBook))

Integration tests

By default, tests are run in offline mode. To run integration tests, create a table called TestDB, with a Number partition key called UserID and a String sort key called Time. You can change the table name with the environment variable DYNAMO_TEST_TABLE. You must also specify DYNAMO_TEST_REGION, setting it to the AWS region where your test table is.

DYNAMO_TEST_REGION=us-west-2 go test github.com/guregu/dynamo/... -cover

License

BSD 2-Clause

Comments
  • BatchGet with two separate keys returns duplicate


    @guregu So I'm still trying to track down the cause of this, but I have the following code:

    t := db.db.Table("Location")
    k := make([]dynamo.Keyed, len(keys))
    for i, key := range keys {
    	k[i] = dynamo.Keys{key, nil}
    }
    var ls []Location
    err := t.Batch("Id").Get(k...).All(&ls)
    retVal := make([]LocationDataResult, len(keys))
    for i := range keys {
    	if i < len(ls) {
    		retVal[i] = LocationDataResult{&ls[i], err}
    	} else {
    		retVal[i] = LocationDataResult{Error: err}
    		// TODO: if error is nil, we say the object wasn't found
    	}
    }
    return retVal
    

    And when I pass in 2 keys, where one can be "found" and the other can't (as in, there's a single object in my DB table, and only one of those keys corresponds to it), I receive back 2 copies of the one "valid" object. I would expect to get back only one result?

    I do see that the Keys are created correctly, and are getting appended onto the BatchGet correctly. That's where I'm at right now, and will continue to investigate further.

  • SerializationExceptions when updating an attribute to an empty custom string


    I have a server that makes frequent use of your excellent dynamo package to save various data. When an error is encountered, I print it out with a statement such as log.Println(err). Recently, I have been getting the error

    SerializationException: status code: 400, request id: O6MMFM7MPC2NI4DV407EI50J0JVV4KQNSO5AEMVJF66Q9ASUAAJG
    

    The last time I got a SerializationError, it was due to #137 which was a bug with this package.

    This time, the code throwing the error was performing an Update operation. I suspect that the problem was update.Set-ing a string to an empty value. I will continue to look into the cause of the issue, but I am of the opinion that your library should catch this issue before it hits DynamoDB and either throw a preemptive and descriptive error, panic, or change the Update to a Delete (in my order of preference).

        update := db.userTable.Update(userIndexID, userID)
        update.Set("color", user.Color) // TextMarshaler
        update.Set("experience", user.Experience) // int
        update.Set("money", user.Money) // int
        if len(user.Achievements) > 0 {
            update.SetSet("achievements", user.Achievements) // map[TextMarshaler]Empty (Empty is struct{})
        }
        update.Set("statistics", user.Statistics) // map[TextMarshaler]float64
        if len(user.CanonicalName) > 0 { // added this check today, I'm now seeing if the error still appears
            update.Set("canonicalName", string(user.CanonicalName)) // UserName (string)
        }
        if len(user.Name) > 0 { // added this check today, I'm now seeing if the error still appears
            update.Set("name", user.Name) // UserName (string)
        }
        if user.Updated > 0 {
            update.Set("updated", user.Updated) // int64
        }
        return update.Run()
    

    Further research reveals that empty strings are now supported by dynamodb for non-key attributes (see https://aws.amazon.com/about-aws/whats-new/2020/05/amazon-dynamodb-now-supports-empty-values-for-non-key-string-and-binary-attributes-in-dynamodb-tables/). I am continuing to investigate the true cause of the error (it might be that the version of the AWS SDK used is not high enough to support empty strings).

    Do you know of a better way to diagnose SerializationExceptions than blindly guessing what caused it? The problem I just described has gone unnoticed in my server's logs for the last 3+ days, possibly causing data loss for a subset of users, and I would like to avoid this type of problem in the future.

  • LastEvaluatedKey may point unexpected position in case Limit is set to Query


    I'm getting paginated query results 10 items at a time by calling Query.Limit(10) and passing the returned LastEvaluatedKey to the next query. It seems to work correctly in usual cases, of course. But when DynamoDB splits the result due to the 1MB limitation, the LastEvaluatedKey returned by dynamo points to an item ahead of the last item in the result.

    To my understanding, in such a case dynamo iterates the query internally to fill the result with 10 items, then returns it to the client. But the LastEvaluatedKey returned by DynamoDB in the last query will not always match the last item returned to the client. This seems to be because dynamo passes Limit = 10 (the value I passed) to DynamoDB on every internal query, and LastEvaluatedKey points to the last item of the list DynamoDB returned, not the last item of the list returned to the client.

    Is it an expected behavior? Is there any way to get correct LastEvaluatedKey even in above case?

  • SerializationException when inserting list with empty strings


    to reproduce:

    err = table.Put(struct {
    	ID   string
    	Test []string
    }{"test", []string{"hi", ""}}).Run()
    if err != nil {
    	panic(err)
    }

    results in:

    panic: SerializationException: status code: 400, request id:

    By dumping the marshalled output, it looks like it's being turned into:

    { M: { Test: { L: [{ S: "hi" },<invalid value>] }, ID: { S: "test" } } }

  • Context Deadline Exceeded


    func (h Historical) BatchUpdateCustomerData(customerID string) (int, error) {
    	fmt.Printf("Updating customer: %s", customerID)
    	items := []map[string]*dynamodb.AttributeValue{}
    	tbl := h.gDyn.Table(h.table)
    	qry := tbl.Get("customer_id", customerID).Range("date", gdynamo.Greater, 0).Index(h.index)
    	ct, _ := qry.Count()
    	fmt.Sprintf("updating %d itmes", ct)
    	qry.All(&items)
    	btch := tbl.Batch().Write()
    	for _, item := range items {
    		btch.Put(item)
    	}
    
    	res, err := btch.Run()
    
    	fmt.Sprintf("updated %d itmes", ct)
    	if err != nil {
    		return res, err
    	}
    
    	return res, nil
    }
    

    The above code executes fine for small data sets. For 4000+ items, I'm getting an error that looks like this:

    Error: RequestCanceled: request context canceled
    caused by: context deadline exceeded
    
  • The provided key element does not match the schema


    I am trying to do a batch put, and no matter what I try, I get the following error: Error: ValidationException: The provided key element does not match the schema

    The hash key in my table is id The range key in my table is date

    This is my code:

    func (h Historical) BatchUpdateCustomerData(customerID string) (int, error) {
    	fmt.Printf("Updating customer: %s", customerID)
    	items := []map[string]*dynamodb.AttributeValue{}
    	tbl := h.gDyn.Table(h.table)
    	qry := tbl.Get("customer_id", customerID).Range("date", gdynamo.Greater, 0).Index(h.index)
    	x, _ := qry.Count()
    	print(x)
    	qry.All(&items)
    	btch := tbl.Batch().Write().Put(gdynamo.AWSEncoding(items))
    
    	res, err := btch.Run()
    
    	if err != nil {
    		return res, err
    	}
    
    	return res, nil
    }
    

    I am trying to retrieve data from a table, and put it back, in an unchanged form.

    I tried something more basic, rather than items := []map[string]*dynamodb.AttributeValue{} I tried items := []map[string]interface{}{} and that didn't work either.

    I also tried a simple struct:

    type Updatable struct {
    	ID         string  `dynamo:"id" json:"id"`
    	Date       float64 `dynamo:"date" json:"date"`
    	CustomerID string  `dynamo:"customer_id" json:"customer_id"`
    }
    

    And that also didn't work.

    I keep getting the same error.

    Even if I specify in the Batch() method my hash and range key specifically, such that Batch("id", "date"), I keep getting the same error.

  • Multiple Types from Same Table?


    If I have a table in DynamoDB, say Animals, where I insert both cats and dogs, is there a best practice way of fetching both types, at once, into an Animal list?

    For Example in a batched query, I just want to hydrate all of the animals, but ensure all their properties are initialized. I know the following code won't work (how could it, without knowing what animal type you're trying to populate into), but am unsure of the best-practice way of doing this.

    var ls []Animal
    err := t.Batch("Id").Get(k...).All(&ls)
    

    For reference, there is a property on every animal, called AnimalType that was meant to be used as a differentiator. Do I first have to batch query to get the types, and then somehow populate my animals from there?

  • Can't differentiate between read and write in ConsumedCapacity


    The ConsumedCapacity type in guregu is basically the same as the ConsumedCapacity type in the AWS SDK (but "with less pointers"), except that where the Table, GlobalSecondaryIndexes, and LocalSecondaryIndexes fields in the AWS SDK further break down reads and writes...

    // some fields omitted
    type ConsumedCapacity struct {
    	// The amount of throughput consumed on each global index affected by the operation.
    	GlobalSecondaryIndexes map[string]*Capacity `type:"map"`
    
    	// The amount of throughput consumed on each local index affected by the operation.
    	LocalSecondaryIndexes map[string]*Capacity `type:"map"`
    
    	// The amount of throughput consumed on the table affected by the operation.
    	Table *Capacity `type:"structure"`
    }
    
    type Capacity struct {
    	// The total number of capacity units consumed on a table or an index.
    	CapacityUnits *float64 `type:"double"`
    
    	// The total number of read capacity units consumed on a table or an index.
    	ReadCapacityUnits *float64 `type:"double"`
    
    	// The total number of write capacity units consumed on a table or an index.
    	WriteCapacityUnits *float64 `type:"double"`
    }
    

    ...guregu has opted to throw away the read/write breakdown and just report the total (corresponding to CapacityUnits above):

    // some fields omitted
    type ConsumedCapacity struct {
    	// GSI is a map of Global Secondary Index names to consumed capacity units.
    	GSI map[string]float64
    	// GSI is a map of Local Secondary Index names to consumed capacity units.
    	LSI map[string]float64
    	// Table is the amount of throughput consumed by the table.
    	Table float64
    }
    

    This makes it impossible to accurately break down consumed capacity for complex queries, and for no reason I can discern. If the goal is to have fewer pointers, you can still have a Capacity struct that breaks down read and write according to the information returned by Dynamo.

    I can see a couple ways forward to fix this:

    1. Change the custom ConsumedCapacity type to include all three pieces of information present in the AWS response. This would unfortunately have to be a breaking change, or would require new fields to be introduced.
    2. Expose the raw AWS SDK ConsumedCapacity type, pointers and all. This would not be breaking.
  • virtual PagingIter.LastEvaluatedKey


    Fixes #186. Previously, PagingIter.LastEvaluatedKey always returned the LastEvaluatedKey given to us by AWS, regardless of whether we were at the end of the iterator. Limit (not SearchLimit) also triggered this behavior.

    This patch improves PagingIter, having it automatically determine LastEvaluatedKey based on the actual last value that we have locally evaluated.

    Usually we can determine this without any extra effort, but rarely we will need to call DescribeTable to fetch the necessary key schemas. In this case, the table's description is cached inside of dynamo.Table via atomics.

    This won't affect you in the following use cases, which already had correct behavior:

    • you always iterated to the end of an iterator before calling LastEvaluatedKey()
    • you didn't use Limit (SearchLimit is OK)
  • Documentation on struct tags


    I was getting a cryptic error message when trying to create a table; after looking at the source code I realized it was because I was missing the ID, which, by the way, I thought was determined by convention. The error message was index out of range [0] with length 0.

    After looking through the issues I found one that used the struct tag dynamo:"id,hash" in code showing how to specify the ID. Then I looked in the documentation to see if this was covered, but couldn't find anything. Is there any place where all of the possible struct tags are documented? I'm still trying to figure out how to specify the sort key.

  • :sparkles: Adding ability to specify idempotency token in write transaction


    Currently, you cannot specify a client token for DynamoDB transactions. This PR adds that functionality through a new method on the WriteTx struct:

    IdempotentWithToken(enabled bool, token string) *WriteTx
    

    This method uses the provided token for requests only if enabled is true and the token is not blank.

  • Migrate to AWS SDK v2


    Migration to AWS SDK v2.

    This PR is based on @niltonkummer's work; I made modifications to pass the tests and finished it as a PR for upstream. This version is working fine with my own application (a DB performance test tool).

    some notes:

    • TestTx has a high chance of failing because of https://github.com/aws/aws-sdk/issues/413. It only passes about once in 70 tries in my experiments.
    • I removed most of the retry logic since the SDK already has its own.
  • customize tag name


    Problem: in encode.go,

    tags := strings.Split(field.Tag.Get("dynamo"), ",")
    

    this is not flexible.

    Advice: users should be able to customize the tag name they want, like the json package allows.

  • Provide public MarshalStruct


    Can we provide a public function MarshalStruct for external calls, in case the caller just needs to add some extra fields to the marshal process but doesn't want to actually handle the marshal process himself?

    For example:

    // dynamodb table
    type Item struct {
        PK string `dynamo:",hash"`
    }
    
    type User struct {
        Username string
        Email string
    }
    
    type UserItem struct {
        *Item
        *User
    }
    
    // We want to add PK automatically in the marshal process
    func (u *User) MarshalDynamoItem() (map[string]*dynamodb.AttributeValue, error) {
        return dynamo.MarshalStruct(&UserItem{&Item{PK:"U#"+u.Username}, u})
    }
    
  • unixtime format time.Time is always omitempty


    I have noticed that when storing time.Time values with the unixtime option, zero values are always omitted. With the following struct, MyField is never stored if its value is 0.

    type MyStruct struct {
    	MyField     time.Time `dynamo:",unixtime,allowempty"`
    }
    

    The top-level README is also slightly misleading. It talks about TextMarshaler defaulting to omitempty and time.Time triggering this behaviour, but it doesn't mention that when using the unixtime encoding, TextMarshaler is ignored and there is special code in the library to always omit empty values.

    Without this field present my objects don't appear in my global secondary index which does make it a touch difficult for me to query them. There may be smarter things I can do with AWS, but I'd prefer to just store 0 in that field.

  • Incrementing a value


    Hello,

    How would you recommend I do something like this with this library:

    dynamoDB.updateItem({
      TableName: "Users",
      Key: { "UserId": { S: "c6af9ac6-7b61" } },
      ExpressionAttributeValues: { ":inc": {N: "1"} },
      UpdateExpression: "ADD loginCount :inc"
    })
    

    I am currently fetching the row, reading the value, and adding 1 to it, but I am worried about race conditions.
