SpiceDB

SpiceDB is a Zanzibar-inspired database that stores, computes, and validates application permissions.

Developers create a schema that models their permissions requirements and use a client library to apply the schema to the database, insert data into the database, and query the data to efficiently check permissions in their applications.
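
As a minimal sketch of that flow, a permission check with the authzed-go client library might look like the following (the endpoint, token, object types, IDs, and permission name are illustrative and assume a matching schema):

```go
package main

import (
	"context"
	"log"

	v1 "github.com/authzed/authzed-go/proto/authzed/api/v1"
	"github.com/authzed/authzed-go/v1"
	"github.com/authzed/grpcutil"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Connect to a local SpiceDB started with --grpc-no-tls.
	client, err := authzed.NewClient(
		"localhost:50051",
		grpcutil.WithInsecureBearerToken("somerandomkeyhere"),
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatalf("unable to connect to SpiceDB: %s", err)
	}

	// Ask: may user:emilia view document:readme?
	resp, err := client.CheckPermission(context.Background(), &v1.CheckPermissionRequest{
		Resource:   &v1.ObjectReference{ObjectType: "document", ObjectId: "readme"},
		Permission: "view",
		Subject: &v1.SubjectReference{
			Object: &v1.ObjectReference{ObjectType: "user", ObjectId: "emilia"},
		},
	})
	if err != nil {
		log.Fatalf("check failed: %s", err)
	}
	log.Println("allowed:", resp.Permissionship == v1.CheckPermissionResponse_PERMISSIONSHIP_HAS_PERMISSION)
}
```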

The features that distinguish SpiceDB from other systems are described under "Why SpiceDB?" below.

See CONTRIBUTING.md for instructions on how to contribute and perform common tasks like building the project and running tests.

Why SpiceDB?

Verifiable Correctness

The data used to calculate permissions has the most critical correctness requirements of any part of a software system. Despite that, developers continue to build their own ad-hoc solutions coupled to the internal code of each new project. By developing a SpiceDB schema, you can iterate far more quickly and exhaustively test designs before altering any application code. This becomes especially important as you introduce backwards-compatible changes to the schema and want to ensure that the system remains secure.

Optimal Flexibility

The SpiceDB schema language is built on top of the concept of a graph of relationships between objects. This ReBAC design is capable of efficiently supporting all popular access control models (such as RBAC and ABAC) as well as custom models that contain hybrid behavior.
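
To make the graph concrete, here is a hedged sketch: each relationship written through the v1 API is one edge from a resource to a subject (the object types, IDs, and relation name are illustrative and must be declared in your schema):

```go
package main

import (
	"context"

	v1 "github.com/authzed/authzed-go/proto/authzed/api/v1"
	"github.com/authzed/authzed-go/v1"
)

// writeEdge records one edge in the permissions graph:
// document:readme --writer--> user:emilia.
func writeEdge(ctx context.Context, client *authzed.Client) error {
	_, err := client.WriteRelationships(ctx, &v1.WriteRelationshipsRequest{
		Updates: []*v1.RelationshipUpdate{{
			Operation: v1.RelationshipUpdate_OPERATION_CREATE,
			Relationship: &v1.Relationship{
				Resource: &v1.ObjectReference{ObjectType: "document", ObjectId: "readme"},
				Relation: "writer",
				Subject: &v1.SubjectReference{
					Object: &v1.ObjectReference{ObjectType: "user", ObjectId: "emilia"},
				},
			},
		}},
	})
	return err
}
```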

Modern solutions to developing permission systems all have a similar goal: to decouple policy from the application. Using a dedicated database like SpiceDB not only accomplishes this, but takes the idea a step further by also decoupling the data that policies operate on. SpiceDB is designed to share a single unified view of permissions across as many applications as your organization has. This strategy has become an industry best practice and is used to great success at companies large (Google, GitHub, Airbnb) and small (Carta, Authzed).

Getting Started

Installing SpiceDB

SpiceDB is currently packaged by Homebrew for both macOS and Linux. Individual releases and other formats are also available on the releases page.

brew install authzed/tap/spicedb

SpiceDB is also available as a container image:

docker pull quay.io/authzed/spicedb:latest

For production usage, we highly recommend using a tag that corresponds to the latest release, rather than latest.

Running SpiceDB locally

spicedb serve --grpc-preshared-key "somerandomkeyhere" --grpc-no-tls

Visit http://localhost:8080 to see next steps, including loading the schema.

Developing your own schema

Integrating with your application

Comments
  • Add "public" keyword/type

    The Zanzibar implementation at Google uses a special-case userset to represent the set of all users, aka "public".

    As per one of their public presentations (screenshot attachment omitted).

    Because SpiceDB's schema language is more expressive, we have some better options than introducing this concept as a special-cased tuple:

    • A keyword could be used to embellish relations/permissions that are public.
    • We could introduce a type to represent public, but it might be surprising if a user unintentionally unions a relation/permission with public.
  • ZedToken increasing latency instead of reducing it

    We've been using SpiceDB in production for a few months now at https://www.veed.io/ and have gradually been migrating our authorization data to it. We've attached a Postgres datasource, which now has a size of about 7GB and we've run into a bit of an unexpected problem.

    The easiest way to explain it is by showing some metrics (screenshot omitted).

    As our data size grew from about 2-3 million data points to >10 million, we started noticing significant slowdowns on some user accounts, but not on others. Following the documentation, we had implemented ZedToken caching and used atLeastAsFresh consistency for a number of our users, and these turned out to be exactly the users experiencing the slowdowns. Our investigation revealed that making use of ZedTokens actually slowed down requests by a factor of 100. We've now set everything to be fully consistent, and the results speak for themselves (screenshot omitted).

    Slowdowns have completely stopped and latency has dropped to 300ms, but this isn't the behaviour I'd expect. Considering we expect our data set to grow significantly over the coming months, I believe we are going to need to rely on ZedTokens to keep our latency from growing again, especially since the fix won't be as straightforward as it was this time once fully consistent calls become the problem.
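
    For reference, a sketch of the two consistency modes being compared, using the authzed-go v1 types (wiring into a full request omitted):

    ```go
    package consistency

    import v1 "github.com/authzed/authzed-go/proto/authzed/api/v1"

    // atLeastAsFresh answers with data at least as fresh as the supplied
    // ZedToken, which permits (but does not force) cache hits.
    func atLeastAsFresh(token *v1.ZedToken) *v1.Consistency {
    	return &v1.Consistency{
    		Requirement: &v1.Consistency_AtLeastAsFresh{AtLeastAsFresh: token},
    	}
    }

    // fullyConsistent forces evaluation against the latest committed
    // data, bypassing ZedToken-based caching entirely.
    func fullyConsistent() *v1.Consistency {
    	return &v1.Consistency{
    		Requirement: &v1.Consistency_FullyConsistent{FullyConsistent: true},
    	}
    }
    ```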

  • Fix revive lint warnings

    This is related to issue https://github.com/authzed/spicedb/issues/36

    All issues involve renaming functions to drop a prefix corresponding to the package name. The fix was done automatically with a refactoring tool.

    This creates a change in the public API, as namespace.NamespaceWithComment is renamed to namespace.WithComment.

  • Better Caching Cost & Density

    Better Caching Cost & Density

    Improve Cache Density and Cost Estimate

    Hi Authzed folks - apologies in advance for this wall of text. 🙂

    I noticed a few weeks ago that the cache cost functions are not accurate if the cost represents bytes (which I believe it does). For example, the cost of a checkResultEntry is set to just 8 bytes, the cost of that struct when empty. But that cost doesn't include the memory pointed to by checkResultEntry.response, which could be much more.
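
    A sketch of the problem (checkResultEntry's real definition lives in the SpiceDB dispatch code; the concrete response type and the helper functions here are stand-ins):

    ```go
    package costsketch

    import (
    	"unsafe"

    	v1 "github.com/authzed/authzed-go/proto/authzed/api/v1"
    	"google.golang.org/protobuf/proto"
    )

    // checkResultEntry mirrors the shape described above: a struct
    // holding only a pointer to the cached response.
    type checkResultEntry struct {
    	response *v1.CheckPermissionResponse
    }

    // naiveCost is what the old cost function effectively charged:
    // the size of the struct itself, i.e. one 8-byte pointer.
    func naiveCost(e checkResultEntry) int64 {
    	return int64(unsafe.Sizeof(e))
    }

    // betterCost also charges for the message behind the pointer.
    // proto.Size is the wire size, a lower bound on in-memory footprint.
    func betterCost(e checkResultEntry) int64 {
    	return int64(unsafe.Sizeof(e)) + int64(proto.Size(e.response))
    }
    ```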

    As I worked to improve the cache cost functions, I found a way to fit 2x more cache items into the same amount of memory: instead of caching the Go structs, cache the protobuf-marshaled bytes.

    The improved cache cost functions help keep the physical memory used by the cache much closer to the configured max cost.

    I'd be happy to open some PRs for these changes, but wanted to post my findings here and see which of the changes you'd like (if any).

    Cache Density

    I experimented with storing the marshaled bytes of protobuf messages rather than the Go objects directly.

    There are two main advantages to this:

    • Calculating the cost of a []byte is quite simple. Most importantly, the cost function does not need to change as the protobuf message changes: protobuf takes care of those details.
    • Second, the cache can store more items per MB of space used. In one test (below), the cache fit 212% as many items per MB! However, later tests with more accurate cost functions improved cache density by a more modest 50-70%. All tests were on a single local instance of spicedb, so a load test at scale is warranted.

    Below are the results for two tests run on a single spicedb instance serving check requests. Total profiled space is for the whole application, while cache profiled space includes just the stacks related to caching. In this test, the cost function was still poor, but it does show that using marshaled bytes significantly improves cache density.

    | test | total profiled space | cache profiled space | cache calculated cost | key count | keys / cache profiled MB |
    | --- | --- | --- | --- | --- | --- |
    | protobuf structs | 69.16 MB | 54.85 MB | 32 MB | 142,857 | 2,605 |
    | marshaled []byte | 77.02 MB | 61.0 MB | 30.1 MB | 337,311 | 5,529 |

    Of course, marshaling isn't free. However, existing code already calls proto.Clone() on every cache write, and as that is replaced with the call to proto.Marshal(), the relative cost may not be significant. Still, a test to check impact on CPU during a load test is warranted.
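
    A minimal sketch of the write path under this scheme, assuming ristretto's Set(key, value, cost) API (the helper name is hypothetical):

    ```go
    package cachesketch

    import (
    	"github.com/dgraph-io/ristretto"
    	"google.golang.org/protobuf/proto"
    )

    // writeToCache stores the marshaled bytes of resp instead of the
    // struct itself; proto.Marshal replaces the per-write proto.Clone.
    func writeToCache(cache *ristretto.Cache, key string, resp proto.Message) error {
    	data, err := proto.Marshal(resp)
    	if err != nil {
    		return err
    	}
    	// The cost of a []byte is just its length (plus the key's), and
    	// it stays accurate as the message definition evolves.
    	cache.Set(key, data, int64(len(key)+len(data)))
    	return nil
    }
    ```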

    Cache Cost Function

    Now, the long story.

    Background

    As stated above, the cache was using more memory than the 'max cost' setting because the cost of each cached item was being set to the size of a pointer (8 bytes) rather than the size of the memory referenced by a pointer.

    The first attempt at improving the cost function made the situation better, but there was still a substantial difference between the configured cache size and the total memory used. Below are flamegraphs for in-use space for a local spicedb instance, taken after running a 15 minute load test of check requests. Between 0 and 32 MB cache, the memory increased 59MB, 184% of the increase in cache size. Between 32 and 64 MB cache, the memory increased 70MB, 219% of the increase in cache size.

    Flamegraphs for the 1 byte, 32 MB, and 64 MB caches (single instance, local; images omitted).

    Aside on Profiling

    In the flamegraphs above, the in-use bytes within ristretto.(*Cache).processItems are very close to the allocated cache size. Also, the bytes allocated within caching.(*Dispatcher).DispatchCheck grow proportionally with the cache size.

    Initially I thought this meant the DispatchCheck() function was responsible for leaking memory. However, I no longer think that is the case.

    Heap profiles work by sampling allocations. When a sample is taken, the stack responsible for the allocation is added to the profile. So, seeing DispatchCheck() in the flamegraph doesn't mean that DispatchCheck() is responsible for keeping bytes from GC, only that it was responsible for originally allocating those bytes.

    Reviewing the SpiceDB code, this makes sense: DispatchCheck() creates the object that is stored in the cache (via proto.Clone()), but then it is the cache that keeps that object from GC. When ristretto stores an item, it allocates a wrapper struct, which explains why it is also in the profile.

    Given this, the best way to measure memory used by the cache is to sum ristretto.(*Cache).processItems and proto.Clone. Doing so for the examples above gives 113MB for the 64MB cache (176% larger) and 59MB for the 32MB cache (184% larger).

    Size Classes

    One of the main breakthroughs I had was learning about size classes in Go. Size classes are predefined allocation sizes (8, 16, 24, 32, 48, etc). When allocating a 'small' object, Go takes the number of required bytes and then allocates the next size class at or above what is required. This is done to make GC tracking more efficient for small objects. See the 'One more thing' section.

    So, a cost function that returns only the bytes required for an object will systematically under-report the actual cost in memory!

    This article indicates that append() is aware of size classes and can be used to find them at run time. This code demonstrates: https://go.dev/play/p/lRaSqzunZ73

    After accounting for class sizes, I was able to write a cost function that exactly matched the allocated bytes, as reported by memstats.TotalAlloc.
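
    A runnable version of that trick, assuming (per the article above) that append's growth path rounds fresh allocations up to the runtime's size classes:

    ```go
    package main

    import "fmt"

    // roundUpToSizeClass reports how many bytes the runtime actually
    // allocates for a request of n bytes, by reading the capacity that
    // append chooses when it allocates a fresh backing array.
    func roundUpToSizeClass(n int) int {
    	return cap(append([]byte(nil), make([]byte, n)...))
    }

    func main() {
    	for _, n := range []int{1, 9, 25, 33, 100} {
    		fmt.Printf("request %3d bytes -> allocated %3d bytes\n", n, roundUpToSizeClass(n))
    	}
    }
    ```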

    Keys Count Too

    Still, even accounting for size classes, the cost function was not controlling memory like I wanted. How could my tests show a perfect match to the reported allocated memory, but still allow the cache to grow beyond max cost? The answer is fairly simple: cache keys are stored too, and take up memory. After including keys in the cost function, I got the following results (caching []byte):

    | test | total profiled space | cache profiled space | cache computed space | key count | keys / cache profiled MB |
    | --- | --- | --- | --- | --- | --- |
    | 8MB cache | 33.1 MB | 16.2 MB | 8 MB | 42,094 | 2,598 |
    | 16MB cache | 40.4 MB | 24.3 MB | 16 MB | 84,097 | 3,460 |
    | 32MB cache | 63.8 MB | 44.4 MB | 32 MB | 168,152 | 3,787 |

    The difference in cache size between 8MB and 16MB max cost was 8.1MB! Between 16MB and 32MB, 20.1 MB, which is off by about 26%.
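
    Putting the two fixes together, the per-entry cost might be charged like this (reusing the roundUpToSizeClass helper from the sketch above):

    ```go
    // entryCost charges for both the key and the value, each rounded up
    // to the allocator's size class, so the configured max cost tracks
    // real memory more closely.
    func entryCost(key string, value []byte) int64 {
    	return int64(roundUpToSizeClass(len(key)) + roundUpToSizeClass(len(value)))
    }
    ```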

    Final Cost Function (protobuf structs, not bytes)

    This test was run with a cost function that accounted for keys and size classes. No changes were made to the objects stored in the cache for this test.

    | test | total profiled space | cache profiled space | cache computed space | key count | keys / cache profiled MB |
    | --- | --- | --- | --- | --- | --- |
    | no cache (1 byte) | 15.6 MB | 0 MB | 0 MB | 0 | 0 |
    | 16MB cache | 34.8 MB | 21.5 MB | 16 MB | 46,916 | 2,182 |
    | 32MB cache | 55.2 MB | 37.8 MB | 32 MB | 93,825 | 2,482 |

    This shows there is still some overhead for the cache, since going from a cache with only 1 byte max cost (effectively, no cache) to 16 MB cost added 21.5 MB to memory used by the cache. But, going from 16MB to 32MB added 16.3MB, off by ~2%.

    Compared to the test which used a similar cost function but stored bytes instead, this also shows that storing bytes is still more efficient, although less so than in the original test. This makes sense: now that the key is included in the cost function, the space saved on the items themselves is a smaller proportion of the total cost per entry.

    Misc Learnings

    • Are there memory leaks?
      • I don't think so. Once the cache reaches capacity and begins to evict items, memory use is stable.
    • Are protocol buffers increasing the memory footprint?
      • The items stored in the cache are protobuf-generated types and have some fields specific to protobuf (protoimpl.MessageState, protoimpl.SizeCache, protoimpl.UnknownFields). It is possible these fields are getting populated after the cost function runs, increasing the memory footprint beyond what the cost function calculates. Running spicedb locally, I did see that this was the case: sending a message from the cache caused its size to increase significantly. However, subsequent sends shared the memory added by the first send. To further test whether the protobuf fields were increasing cost, I ran tests where the cached object was never returned to callers, only deep copies. Memory use was similar enough that I don't think the protobuf fields have a significant impact.
      • Flamegraphs: 32 MB cache on main vs. with clone-on-return (images omitted)
  • service-discovery: Added ZooKeeper based service discovery

    I have implemented an alternative service discovery mechanism that can be used without Kubernetes. It uses Apache ZooKeeper. It also contains the code necessary to work inside AWS ECS containers (it can get the IP from the task and instance metadata endpoints), but it falls back to the IP of the first public network interface. The address defined in dispatch-cluster-addr takes precedence in any case.

    I will use this in our deployment on ECS. The SRV record method was not reliable, so I made a custom resolver that uses ZooKeeper to discover the peers, since we were already using ZooKeeper for some of our existing services.

    This is the first time I'm coding in Go, so I hope I didn't mess up anything.

  • Add quickstart examples

    Closes https://github.com/authzed/spicedb/issues/469

    This creates a collection of quickstart Docker Compose files to get newcomers quickly running with the datastore of their choosing. ~I also moved k8s/example.yaml under the examples/ directory, since it seemed to fit well there. Though, I'm not sure if this breaks documentation links.~ I reverted this change; things broke when that file moved.

    Most datastores were straightforward, but Cockroach and Spanner (especially Spanner) required some extra plumbing to get them operational.

  • introduce validate command

    Closes https://github.com/authzed/spicedb/issues/290

    What

    The purpose of this command is to take a playground file and run the assertions and validations defined.

    The rationale is that schema development happens in the playground, but once the YAML is downloaded, there is nothing developers can do with it other than load it with the testserve command or upload it back to the playground. This attempts to reuse and run the assertions and validations as a test suite outside of the playground, in a programmatic way rather than only interactively. Rather than duplicating the same tests in the client application, the playground tests become the canonical representation of the business rules defined in the schema.

    Example:

    1. developers introduce changes in schema via the playground
    2. YAML file is downloaded and persisted in git repository
    3. changes are pushed, PR is opened, CI runs spicedb validate, demonstrating changes are sound.

    Assumptions

    • Introducing a new CLI command is cool; exposing new API in the Go code requires more consideration
    • Version 2 of the playground file is not really API, so instead of updating the public structures, I parsed the file in two phases: once with the public structures, and once with the v2 fields
    • I'm not sure I got the versioning strategy y'all have with the API right. It sounds like v0 is "it's public, but may be broken anytime". I assumed it's OK to expose methods reusing v0 types, but would definitely appreciate some guidance here

    Features

    • accepts multiple playground files as input
    • process returns 0 if valid, non-zero if invalid
    • errors by line and message are logged (e.g. can be surfaced in the GitHub PR)

    TODO

    • Planning to add tests if the design seems sound
  • LookupSubjects API

    The Lookup Watch API Proposal includes the addition of the "reachability" APIs, which allow a caller to query the data-driven shape of the permissions graph.

    One of the APIs proposed is LookupSubjects which would act as a filtered, streaming form of ExpandPermission, but across an entire object type:

    message LookupSubjectsRequest {
        Consistency consistency = 1;
    
        ObjectReference resource = 2;
        string optional_permission = 3;
    
        string optional_subject_type = 4;
        string optional_subject_relation = 5;
    }
    
    message LookupSubjectsResponse {
        Relationship found_relationship = 1;
        ZedToken found_at = 2;
    }
    

    All fields on the request besides consistency and resource would be optional; if omitted, all subjects (of all kinds) would be found for the specified resource.
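
    Since this would be a server-streaming RPC, consumption would follow the usual gRPC pattern. A hypothetical sketch (no generated client exists yet, so the types and interfaces below are stand-ins for the proposal):

    ```go
    package lookupsketch

    import (
    	"context"
    	"io"
    	"log"
    )

    // These types mirror the proposed messages; the client interface is
    // hypothetical.
    type LookupSubjectsRequest struct{ /* fields as proposed above */ }

    type LookupSubjectsResponse struct{ /* found_relationship, found_at */ }

    type lookupStream interface {
    	Recv() (*LookupSubjectsResponse, error)
    }

    type permissionsClient interface {
    	LookupSubjects(ctx context.Context, req *LookupSubjectsRequest) (lookupStream, error)
    }

    // drainSubjects shows the expected consumption pattern for the
    // streaming response: read until io.EOF.
    func drainSubjects(ctx context.Context, c permissionsClient, req *LookupSubjectsRequest) error {
    	stream, err := c.LookupSubjects(ctx, req)
    	if err != nil {
    		return err
    	}
    	for {
    		resp, err := stream.Recv()
    		if err == io.EOF {
    			return nil
    		}
    		if err != nil {
    			return err
    		}
    		log.Printf("found subject via relationship: %+v", resp)
    	}
    }
    ```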

    Open Questions

    1. Should the LookupSubjectsResponse contain the path of all relations/permissions that were traversed to reach a subject? This could be very useful in building permissions panels or auditing systems.
    2. Should optional_subject_type (and relation) be repeated, to allow filtering to a set of allowed types, instead of a single type?
    3. Should optional_permission be repeated, to allow filtering to a set of allowed permissions/relations?
  • Dashboard example zed usage references HEAD formula & `login` command

    Brew installation of zed fails with an Errno::ENOENT error:

    ibazulic@cyberdyne:~$ brew install --HEAD authzed/tap/zed
    ==> Tapping authzed/tap
    Cloning into '/home/linuxbrew/.linuxbrew/Homebrew/Library/Taps/authzed/homebrew-tap'...
    remote: Enumerating objects: 34, done.
    remote: Counting objects: 100% (34/34), done.
    remote: Compressing objects: 100% (25/25), done.
    remote: Total 34 (delta 15), reused 10 (delta 3), pack-reused 0
    Receiving objects: 100% (34/34), 8.73 KiB | 1.75 MiB/s, done.
    Resolving deltas: 100% (15/15), done.
    Tapped 2 formulae (16 files, 92.0KB).
    ==> Downloading https://ghcr.io/v2/linuxbrew/core/go/manifests/1.17.1
    ######################################################################## 100.0%
    ==> Downloading https://ghcr.io/v2/linuxbrew/core/go/blobs/sha256:65e57b46322ebb9957754293cc66012579d93a7795b286bd2f267758f8006d7b
    ==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sha256:65e57b46322ebb9957754293cc66012579d93a7795b286bd2f267758f8006d7b?se=2021-09-30T17%3A50%3A00Z&sig=hB1Y%2FHG%2FMPADkzMm6M92
    ######################################################################## 100.0%
    ==> Cloning https://github.com/authzed/zed.git
    Cloning into '/home/ibazulic/.cache/Homebrew/zed--git'...
    ==> Checking out branch main
    Already on 'main'
    Your branch is up to date with 'origin/main'.
    ==> Installing zed from authzed/tap
    ==> Installing dependencies for authzed/tap/zed: go
    ==> Installing authzed/tap/zed dependency: go
    ==> Pouring go--1.17.1.x86_64_linux.bottle.tar.gz
     /home/linuxbrew/.linuxbrew/Cellar/go/1.17.1: 10,810 files, 537.4MB
    ==> Installing authzed/tap/zed --HEAD
    Error: An exception occurred within a child process:
      Errno::ENOENT: No such file or directory - zed
    

    Pulling zed normally via brew install authzed/tap/zed works, but this binary does not have the login command needed to log into SpiceDB according to the instructions.

  • Support OpenTelemetry collectors

    Everything is instrumented using OpenTelemetry, but Jaeger is the only format exposed by command-line flags. If it can be made generic enough, this could be upstreamed into cobrautil.
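
    For illustration, a minimal sketch of wiring a generic OTLP exporter with the OpenTelemetry Go SDK (the endpoint resolves from the standard OTEL_EXPORTER_OTLP_ENDPOINT variable; this is not SpiceDB's actual flag plumbing):

    ```go
    package main

    import (
    	"context"

    	"go.opentelemetry.io/otel"
    	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
    	sdktrace "go.opentelemetry.io/otel/sdk/trace"
    )

    // initTracer exports spans to any OpenTelemetry collector over gRPC,
    // rather than being tied to a Jaeger-specific exporter.
    func initTracer(ctx context.Context) (*sdktrace.TracerProvider, error) {
    	exp, err := otlptracegrpc.New(ctx)
    	if err != nil {
    		return nil, err
    	}
    	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
    	otel.SetTracerProvider(tp)
    	return tp, nil
    }
    ```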

  • Add support for Application Default Credentials for Cloud Spanner datastore

    The Cloud Spanner driver currently requires a service account JSON file.

    Another way to authenticate with Google services is using Application Default Credentials, which allows one to authenticate with cloud APIs without manually supplying a service account file (one use case is when running on a GCE instance with a linked service account).

    The Go client libraries support Application Default Credentials out of the box -- they're used by default if no credentials are supplied: https://pkg.go.dev/cloud.google.com/go?utm_source=godoc#hdr-Authentication_and_Authorization

    For instance, for this line: https://github.com/authzed/spicedb/blob/42f730ab06c8b9ec90b74f2d390454083a925627/internal/datastore/spanner/spanner.go#L74

    The equivalent code using Application Default Credentials would be:

     client, err := spanner.NewClient(context.Background(), database) 
    

    Is it possible to add support for using Application Default Credentials to connect to Cloud Spanner? I'm willing to write the PR.
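
    A sketch of the two modes side by side (option.WithCredentialsFile is the explicit path; omitting credentials falls back to ADC):

    ```go
    package main

    import (
    	"context"

    	"cloud.google.com/go/spanner"
    	"google.golang.org/api/option"
    )

    func newClients(ctx context.Context, database string) error {
    	// Current approach: explicit service account JSON file.
    	withFile, err := spanner.NewClient(ctx, database,
    		option.WithCredentialsFile("service-account.json"))
    	if err != nil {
    		return err
    	}
    	defer withFile.Close()

    	// Proposed approach: omit credentials and let the client library
    	// resolve Application Default Credentials (env var, gcloud, or
    	// the attached service account on GCE/GKE).
    	withADC, err := spanner.NewClient(ctx, database)
    	if err != nil {
    		return err
    	}
    	defer withADC.Close()
    	return nil
    }
    ```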

  • Bump github.com/cespare/xxhash/v2 from 2.1.2 to 2.2.0

    Bumps github.com/cespare/xxhash/v2 from 2.1.2 to 2.2.0.

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Bump github.com/rs/cors from 1.8.2 to 1.8.3

    Bumps github.com/rs/cors from 1.8.2 to 1.8.3.

  • Bump golang.org/x/tools from 0.3.0 to 0.4.0

    Bumps golang.org/x/tools from 0.3.0 to 0.4.0.

    Release notes

    Sourced from golang.org/x/tools's releases.

    gopls/v0.4.0

    • Improved support for working with modules (@​ridersofrohan). A detailed walk-through of the new features can be found here. A quick summary:
      • Use the -modfile flag to suggest which modules should be added/removed from the go.mod file, rather than editing it automatically.
      • Suggest dependency upgrades in-editor and provide additional language features, such as formatting, for the go.mod file.
    • Inverse implementations (@​muirdm). "Go to implementations" on a concrete type will show the interfaces it implements.
    • Completion improvements (@​muirdm). Specifically, improved completion for keywords. Also, offer if err != nil { return err } as a completion item.
    • Jumping to definition on an import statement returns all files as definition locations (@​danishprakash).
    • Support for running go generate through the editor, via a code lens (@​marwan-at-work).
    • Command-line support for workspace symbols (@​daisuzu).

    Opt-in:

    • Code actions suggesting gofmt -s-style simplifications (@​ridersofrohan). To get these on-save, add the following setting:
    "[go]": {
    	"editor.codeActionsOnSave": {
    		"source.fixAll": true,
    	}
    }
    
    • Code actions suggesting fixes for type errors, such as missing return values (goreturns-style), undeclared names, unused parameters, and assignment statements that should be converted from := to = (@​ridersofrohan). Add the following to your gopls settings to opt-in to these analyzers. In the future, they will be on by default and high-confidence suggested fixes may be applied on save. See additional documentation on analyzers here.
    "gopls": {
    	"analyses": {
    		"fillreturns": true,
                    "undeclaredname": true,
                    "unusedparams": true,
                    "nonewvars": true,
    	}
    }
    
    • Further improvements in the support for multiple concurrent clients (@​findleyr). See #34111 for all details.

    For a complete list of the issues resolved, see the gopls/v0.4.0 milestone.

    gopls/v0.3.4

    gopls/v0.3.3

    • Support for workspace symbols. (@​daisuzu)
    • Various completion improvements, including fixes for completion in code that doesn't parse. (@​muirdm)
    • Limit diagnostic concurrency, preventing huge spikes in memory usage that some users encountered. (@​heschik)
    • Improved handling for URIs containing escaped characters. (@​heschik)
    • Module versions from "go list" in pkg.go.dev links. (@​ridersofrohan)

    ... (truncated)

    Commits
    • aee3994 gopls/internal/lsp/fake: in (*Workdir).RenameFile, fall back to read + write
    • fe60148 go.mod: update golang.org/x dependencies
    • c9ea9a7 gopls/internal/regtest: add a test for the case when the renaming package's p...
    • bf5db81 gopls/internal/lsp/cache: improve ad-hoc warning for nested modules
    • aa9f4b2 go/analysis: document that facts are gob encoded in one gulp
    • bdcd082 internal/gcimporter: skip tests earlier when 'go build' is not available
    • 2ad6325 gopls/internal/lsp/cache: expand ImportPath!=PackagePath comment
    • 52c7b88 gopls/internal/robustio: only define ERROR_SHARING_VIOLATION on Windows
    • 4f69bf3 gopls/internal/lsp/cache: narrow reloadOrphanedFiles to open files
    • 6002d6e gopls/internal/regtest/misc: test Implementations + vendor
    • Additional commits viewable in compare view

  • Bump cloud.google.com/go/spanner from 1.39.0 to 1.42.0

    Bumps cloud.google.com/go/spanner from 1.39.0 to 1.42.0.

    Release notes

    Sourced from cloud.google.com/go/spanner's releases.

    spanner: v1.42.0

    1.42.0 (2022-12-14)

    Features

    • spanner: Add database roles (#5701) (6bb95ef)
    • spanner: Rewrite signatures and type in terms of new location (620e6d8)

    Bug Fixes

    • spanner: Fallback to check grpc error message if ResourceType is nil for checking sessionNotFound errors (#7163) (2552e09)

    spanner: v1.41.0

    1.41.0 (2022-12-01)

    Features

    spanner: v1.40.0

    1.40.0 (2022-11-03)

    Features

    • spanner/spansql: Add support for interval arg of some date/timestamp functions (#6950) (1ce0f7d)
    • spanner: Configurable logger (#6958) (bd85442), refs #6957
    • spanner: PG JSONB support (#6874) (5b14658)
    • spanner: Update result_set.proto to return undeclared parameters in ExecuteSql API (de4e16a)
    • spanner: Update transaction.proto to include different lock modes (caf4afa)
    Commits
    • 22e90d9 chore(main): release spanner 1.42.0 (#7130)
    • 2552e09 fix(spanner): fallback to check grpc error message if ResourceType is nil for...
    • 6bb95ef feat(spanner): add database roles (#5701)
    • f2b1f1b chore(bigquery/storage/managedwriter): internal refactor (flow controller, id...
    • bcc9fcd test(bigtable): expand integration tests for read stats (#7143)
    • ab332ce fix(internal/gapicgen): disable rest for non-rest APIs (#7157)
    • dc89409 chore(main): release pubsublite 1.6.0 (#7129)
    • 5fa8555 feat(pubsublite): create/update export subscriptions (#6885)
    • 176f533 feat(pubsublite): unload idle partition publishers (#7105)
    • 28f3572 feat(all): enable REGAPIC and REST numeric enums (#6999)
    • Additional commits viewable in compare view

  • Bump github.com/envoyproxy/protoc-gen-validate from 0.6.13 to 0.9.1

    Bumps github.com/envoyproxy/protoc-gen-validate from 0.6.13 to 0.9.1.

    Release notes

    Sourced from github.com/envoyproxy/protoc-gen-validate's releases.

    v0.9.1

    What's Changed

    New Contributors

    Full Changelog: https://github.com/bufbuild/protoc-gen-validate/compare/v0.9.0...v0.9.1

    v0.9.0

    What's Changed

    Full Changelog: https://github.com/bufbuild/protoc-gen-validate/compare/v0.8.0...v0.9.0

    v0.7.0

    What's Changed

    ... (truncated)

    Commits
    • 8ed4f9c Bump proto-google-common-protos from 2.10.0 to 2.11.0 in /java (#748) #patch
    • f154818 Bump google.protobuf.version from 3.21.9 to 3.21.10 in /java (#747) #patch
    • 0c04917 Bump golang.org/x/tools from 0.2.0 to 0.3.0 (#734) #patch
    • 7d84560 Bump grpc-bom from 1.50.2 to 1.51.0 in /java (#742) #patch
    • 967d85d Bump golang.org/x/net from 0.1.0 to 0.2.0 (#732) #patch
    • 31388c3 Bump os-maven-plugin from 1.7.0 to 1.7.1 in /java (#731) #patch
    • 774e011 Removing more from the no-op proto-gen-validate build (#738)
    • 2682ad0 GH-728 Fix typo in readme (#729) #patch
    • 5e042b7 attach linux arm64 artifact (#725)
    • ae855fa Bump proto-google-common-protos from 2.9.6 to 2.10.0 in /java (#722) #patch
    • Additional commits viewable in compare view

  • Make Playground available on Docker also

    The only thing lacking compared to OpenFGA is the capability to have the nice Playground UI on top of your own instance of the SpiceDB Docker image. Please build https://github.com/authzed/spicedb/tree/main/pkg/development/wasm and include it in the image, served on an HTTP port.

Related Projects

This is a simple graph database in SQLite, inspired by "SQLite as a document database".
Jan 3, 2023

Backgen: a simple Golang API generator that stores struct fields in key/value based databases. It does not provide the database itself, only an interface to access it.
Feb 4, 2022

Owl is a DB manager platform committed to standardizing data, indexes, and operations on the database to avoid risks and failures. Capabilities include process approval, SQL audit, SQL execution (including as crontab), and data backup and recovery.
Nov 9, 2022

Beerus-DB: a database operation framework that currently only supports MySQL, using [go-sql-driver/mysql] for database connections and basic operations.
Oct 29, 2022

Hard Disk Database based on a former database.
Nov 1, 2021

KValDB: a simple key/value database that uses JSON files to store the database, the key, and the respective value.
Nov 13, 2021

A simple Golang application that executes SQL commands to clean up a mirror node's database.
Jan 24, 2022

Nipo: a powerful, fast, multi-threaded, clustered, in-memory key-value database, with the ability to configure tokens and ACLs on commands and key-regexes, written in Go.
Dec 28, 2022

BuntDB: a low-level, embeddable, in-memory key/value store in pure Go with custom indexing and geospatial support. It persists to disk, is ACID compliant, and uses locking for multiple readers and a single writer.
Dec 30, 2022

The Prometheus monitoring system and time series database. Prometheus is a Cloud Native Computing Foundation project; visit prometheus.io for the full documentation, examples, and guides.
Dec 31, 2022

unitdb: a blazing fast, specialized time-series database for microservices, IoT, real-time internet-connected devices, and AI analytics.
Jan 1, 2023

VictoriaMetrics: a fast, cost-effective, and scalable monitoring solution and time series database, available in binary releases.
Jan 8, 2023

LinDB: an open-source Time Series Database providing high performance, high availability, and horizontal scalability. LinDB stores all monitoring data of ELEME Inc., with 88TB of incremental writes per day and 2.7PB of total raw data.
Jan 1, 2023

Gormat: a cross-platform gopher tool; a convenient Golang converter supporting Database to Struct, SQL to Struct, and JSON to Struct.
Dec 20, 2022

TalariaDB: a distributed, highly available, low-latency time-series database for Presto that stores real-time data. It's built on top of Badger DB.
Nov 16, 2022

Dolt: a SQL database that you can fork, clone, branch, merge, push, and pull just like a git repository. Connect to Dolt just like any MySQL database to run queries or update the data using SQL commands. Use the command line interface to import CSV files, commit your changes, push them to a remote, or merge your teammate's changes.
Dec 31, 2022

rosedb: an embedded, fast k-v database based on LSM + WAL; a simple k-v database in pure Golang supporting string, list, hash, set, and sorted set.
Dec 30, 2022

DonutDB: a SQL database implemented on DynamoDB and SQLite.
Dec 21, 2022

pgstat2ilp: a command-line program for exporting output from pg_stat_activity and pg_stat_statements (if the extension is installed/enabled) from Postgres into a time-series database that supports the Influx Line Protocol (ILP).
Dec 15, 2021