A new way of working with Protocol Buffers.

Buf


All documentation is hosted at https://buf.build. Please head over there for more details.

Goal


Buf’s long-term goal is to enable schema-driven development: a future where APIs are defined consistently, in a way that service owners and clients can depend on.

Defining APIs using an IDL provides a number of benefits over simply exposing JSON/REST services, and today, Protobuf is the most stable, widely-adopted IDL in the industry.

However, as it stands, using Protobuf is much more difficult than using JSON as your data transfer format.

Enter Buf: We’re building tooling to make Protobuf reliable and easy to use for service owners and clients, while keeping it the obvious choice on the technical merits.

Your organization should not have to reinvent the wheel to create, maintain, and consume Protobuf APIs efficiently and effectively. We'll handle your Protobuf management strategy for you, so you can focus on what matters.

We’re working quickly to build a modern Protobuf ecosystem. Our first tool is the Buf CLI, built to help you create consistent Protobuf APIs that preserve compatibility and comply with design best-practices. The tool is currently available on an open-source basis.

Our second tool, the Buf Schema Registry (“BSR”), will be the hub of our ecosystem. The BSR is a platform that serves as the source of truth for your organization's Protobuf files, enabling you to centrally maintain compatibility and manage dependencies, while enabling your clients to consume APIs reliably and efficiently. The BSR will be available for a limited, free private beta shortly.

Quick Links

That said, we recommend reading the introduction below first!

The problems we aim to solve

Traditionally, adopting Protobuf presents a number of challenges across the API lifecycle. These are the problems we aim to solve.

Creating consistent Protobuf APIs

  • API designs are often inconsistent: Writing maintainable, consistent Protobuf APIs isn't as widely understood as writing maintainable JSON/REST-based APIs. With no standards enforcement, inconsistency can arise across an organization's Protobuf APIs, and design decisions can inadvertently limit how your API can evolve in the future.

Maintaining compatible, accessible Protobuf APIs

  • Dependency management is usually an afterthought: Protobuf files are vendored manually, with an error-prone copy-and-paste process from GitHub repositories. There is no centralized way to track and manage cross-file dependencies.

  • Forwards and backwards compatibility is not enforced: While forwards and backwards compatibility is a promise of Protobuf, actually maintaining backwards-compatible Protobuf APIs isn't widely practiced, and is hard to enforce.

Consuming Protobuf APIs efficiently and reliably

  • Stub distribution is a difficult, unsolved process: Organizations have to choose between centralizing the protoc workflow and distributing generated code, or requiring every service client to run protoc independently. Because there is a steep learning curve to using protoc and its associated plugins reliably, either choice leaves organizations struggling with the distribution of Protobuf files and stubs. This creates substantial overhead, and often requires a dedicated team to manage the process. Even when using a build system like Bazel, exposing APIs to external customers remains problematic.

  • The tooling ecosystem is limited: Lots of easy-to-use tooling exists today for JSON/REST APIs. Mock server generation, fuzz testing, documentation, and other daily API concerns are not widely standardized or easy to use for Protobuf APIs, requiring teams to regularly reinvent the wheel and build custom tooling to replicate the JSON ecosystem.

Buf is building a modern Protobuf ecosystem

Our tools will address many of the problems above, ultimately allowing you to redirect much of your time and energy from managing Protobuf files to implementing your core features and infrastructure.

The Buf CLI

The Buf CLI is designed to be extremely simple to use, while providing functionality for advanced use cases. It helps you create consistent Protobuf APIs, and its features include:

  • Automatic file discovery: By default, Buf will build your .proto files by walking your file tree and building them per your build configuration. This means you no longer need to manually specify your --proto_paths and files every time you run the tool. However, Buf does allow manual file specification through command-line flags if you want no file discovery to occur, for example in Bazel setups.

  • Selectable configuration: While we recommend the defaults, Buf allows you to easily understand and select the exact set of lint and breaking change rules your organization needs.

    Buf provides 40 lint rules and 54 breaking change rules to cover most needs. We believe our breaking change detection truly covers every scenario for your APIs.

  • Selectable error output: By default, Buf outputs file:line:col:message information for every lint error and every breaking change, with the file path carefully output to match the input location, even when absolute paths are used and, for breaking change detection, even when types move across files. JSON output that includes the end line and end column of the lint error is also available, and JUnit output is coming soon.

  • Editor integration: The default error output is easily parseable by any editor, making the feedback loop for issues very short. Currently, we only provide Vim and Visual Studio Code integration for linting, but we will extend this in the future to include other editors such as Emacs and IntelliJ IDEs.

  • Check anything from anywhere: Buf can check not only a Protobuf schema stored locally as .proto files, but many other types of Inputs (see the example commands after this list):

    • Tar or zip archives containing .proto files, both local and remote.
    • Git repository branches or tags containing .proto files, both local and remote.
    • Pre-built Images or FileDescriptorSets from protoc, from both local and remote (http/https) locations.
  • Speed: Buf's internal Protobuf compiler utilizes all available cores to compile your Protobuf schema, while still maintaining deterministic output. Additionally, files are copied into memory before processing. As an unscientific example, Buf can compile all 2,311 .proto files in googleapis in about 0.8s on a four-core machine, as opposed to about 4.3s for protoc on the same machine. While both are very fast, this allows for instantaneous feedback, which is especially useful with editor integration. Buf's speed is directly proportional to the input size, so checking a single file only takes a few milliseconds.
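
As a quick illustration of the Input flexibility above, here is a sketch using the v0.x-era commands that appear elsewhere in this document (buf check lint and buf check breaking); exact flags and archive options may differ in your installed version:

    # Lint the .proto files discovered in the current directory.
    buf check lint

    # Lint the schema as committed on a local git branch.
    buf check lint --input ".git#branch=master"

    # Check for breaking changes against a remote archive of googleapis
    # (strip_components peels off the archive's top-level directory).
    buf check breaking \
      --against-input "https://github.com/googleapis/googleapis/archive/master.tar.gz#strip_components=1"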

The Buf Schema Registry

The Buf Schema Registry will be a powerful hosted SaaS platform to serve as your organization’s source of truth for your Protobuf APIs, built around the primitive of Protobuf Modules. We’re introducing the concept of Protobuf Modules to enable the BSR to manage a group of Protobuf files together, similar to a Go Module.

Initially, the BSR will offer the following key features:

  • Centrally managed dependencies: Resolve diamond dependency issues caused by haphazard versioning, even with external repository dependents.

  • Automatically enforce forwards and backwards compatibility: Ensure API clients never break, without wasteful team-to-team communication or custom SLAs.

  • Generated libraries produced by a managed compiler: Language-specific stub generation using Buf’s high-performance, drop-in protoc replacement.

Over time, our goal is to make the BSR the only tool you need to manage your Protobuf workflow from end to end. To that end, there's a lot we are planning with the Buf Schema Registry. For a quick overview, see our roadmap.

Where to go from here

To install Buf, proceed to installation. This includes links to an example repository for Travis CI and GitHub Actions integration.

Next, we recommend completing the tour. It should only take about 10 minutes, and will give you an overview of most of Buf's existing functionality.

After completing the tour, check out the remainder of the docs for your specific areas of interest. We've aimed to provide as much documentation as we can for the various components of Buf to give you a full understanding of Buf's surface area.

Finally, follow the project on GitHub, and contact us if you'd like to get involved.

Comments
  • Evaluate adding symlink support

    Hi, another question for you.

    Is it possible to have buf follow symlinks? The way the build folder is set up with Bazel (by default) means the original repo is symlinked. We need to use the build folder, as Bazel pulls in all the external proto files we need to compile against.

    Thanks, Adam

  • Add functionality to convert Image formats

    I am dealing with a set of proto files from a couple of legacy repos that I cannot change. I can successfully build an image from these protos, but I cannot generate stubs from them with protoc. To generate stubs correctly I would have to write some nasty commands to automatically fix up the proto files before building the image. However, if I could build a JSON image, modify that really easily with jq, and then convert that to a binary image with buf to use as input to protoc, I wouldn't have to resort to hackily sed-ing the proto files. Is this possible?

    P.S. Converting binary images to JSON would also be useful for introspection, but that is not necessary for me right now.

  • Add configuration option to enable inline comment-driven ignores

    Documentation says

    Note that buf does not allow comment-driven ignores.

    While I understand the reasoning, I still suggest reconsidering. For example, I just finished integrating buf checks in a code base which is a large monorepo. Most protos conform to almost all checks, but for most checks there are a few proto files that don't. Of course I could fix everything (actually sometimes that's impossible, but okay), but that would involve a ton of work. The YAML solution (sketched below) is far from perfect, as every time the config changes the whole repository needs to be re-linted rather than just the one proto file that has changed. Not to mention that a huge config with all the checks and files needs to be created and maintained. I would prefer to silence those places and create issues for maintainers to fix whatever is possible at their own pace. Instead I have to enable only a couple of very basic checks for the whole repo. Which means most new bad code will not be rejected, so while existing issues are being fixed, new ones will pop up.

    As you can see, while principled, the no-comments approach is far from practical as it makes it impossible to implement buf check in a large repository in a meaningful way.
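
    For reference, the YAML workaround mentioned above looks roughly like this. A sketch only, assuming the ignore_only lint option; the rule ID and file path here are purely illustrative:

    lint:
      use:
        - DEFAULT
        - COMMENTS
      ignore_only:
        COMMENT_FIELD:
          # illustrative path; one entry per file or directory to exempt
          - legacy/api/v1/old_service.proto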

  • Disable linting for certain directories

    No matter what I do, buf check lint checks all directories. I can't get it to actually ignore lint errors from a particular directory.

    Is there a way to make this work?

    Thank you!
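
    In case it helps, a sketch of the configuration intended for this, assuming the lint ignore option; the directory path is illustrative and is resolved relative to a configured root:

    lint:
      use:
        - DEFAULT
      ignore:
        # illustrative directory to exclude from linting
        - vendor/protos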

  • Buf compiled with protobuf v1.4.0 drops Custom Options

    mkdir foo
    cd foo
    go mod init foo.bar
    go get google.golang.org/protobuf/proto
    go get github.com/bufbuild/buf/cmd/buf
    

    The binary installed will drop any custom options defined in your proto files when generating the Image file.

  • Breaking change detector does not ignore deleting files

    Is this expected? If so, what is the recommended way to delete a service?

    Config:

    build:
        roots:
            - proto/
    lint:
        use:
            - DEFAULT
            - FILE_LOWER_SNAKE_CASE
    breaking:
        use:
            - FILE
        ignore:
            - company/proto/iam/v1test
    

    Steps:

    • Delete: company/proto/iam/v1test.
    • Run: buf check breaking --against-input .git#branch=master

    Result:

    :1:1:Previously present file "company/proto/iam/v1test/service.proto" was deleted.

    Expected:

    Breaking change detector to pass.

  • add a .pre-commit-hooks.yaml

    For easy implementation/integration with pre-commit, could a .pre-commit-hooks.yaml be added to the repository?

    For a local repository containing some *.proto files, we should be able to add something like the following pre-commit hook to the .pre-commit-config.yaml:

    - repo: https://github.com/bufbuild/buf
      rev: v0.41.0
      hooks:
        - id: buf
          name: buf linter
          description: Validates proto files
          entry: buf
          language: golang
          language_version: 1.15.8
          pass_filenames: false
          args: [lint, --path, ./api]
    
  • Zsh/Fish completion failing

    Hi there,

    zsh completion is not working, I'm getting _arguments:comparguments:325: invalid option definition: --log-format[The log format [text,color,json].]:

    Generated completion looks like:

    function _buf {
      local -a commands
    
      _arguments -C \
        '--log-format[The log format [text,color,json].]:' \
        '--log-level[The log level [debug,info,warn,error].]:' \
        '--timeout[The duration until timing out.]:' \
        "1: :->cmnds" \
        "*::arg:->args"
    

    It seems cobra doesn't escape flag descriptions properly: https://github.com/spf13/cobra/blob/v1.0.0/zsh_completions.go#L334 and the square brackets in flag descriptions that list permitted values break it.

    I'm not sure whether it's done on purpose for some flexibility or not.

    % zsh --version
    zsh 5.8 (x86_64-pc-linux-gnu)
    
  • Changing an enum field to an identical type is considered breaking WIRE change

    Before:

    message Bar {
      enum Foo {
        FOO_UNSPECIFIED = 0;
        FOO_A = 1;
      }
      Foo field = 1;
    }
    

    After:

    enum Foo {
      FOO_UNSPECIFIED = 0;
      FOO_A = 1;
    }
    
    message Bar {
      Foo field = 1;
    }
    

    This is currently detected as a breaking change with a message like:

    Field "1" on message "Bar" changed type from "Bar.Foo" to "Foo".
    

    This is not actually a breaking change in the wire format, as the enum values are the same even though the type name has changed.

  • Add support for arbitrary git ref in check breaking

    The buf check breaking command has support for branches and tags when checking against a git repository input. I want it to accept an arbitrary git reference, like refs/remotes/origin/master or refs/pull/3/head.

    In my humble opinion, if you're not afraid of breaking backwards compatibility, it makes sense to remove branch and tag flags, since they're just a strict subset of providing the reference directly (and removing them would simplify some of the flag logic).
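
    For illustration only, the requested behavior might look something like the following. This is hypothetical syntax; the documented options for git inputs here are branch and tag, not ref:

    # Hypothetical: a ref option is what this issue requests, not an existing flag.
    buf check breaking --against-input ".git#ref=refs/pull/3/head"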

  • Windows support

    Hello, it would be great if you could provide also Windows binaries.

    I'm requesting this because we currently use protoc (via protobuf-gradle-plugin), so it would be wonderful to use the protoc-gen-buf-check-lint protoc plugin for linting.

    If I could help with this, just let me know.

  • Bump github.com/rs/cors from 1.8.2 to 1.8.3

    Bumps github.com/rs/cors from 1.8.2 to 1.8.3.

  • Remove auto value from reflect protocol and make unset the default

    This changes the definition of buf curl --reflect-protocol slightly to make it such that this flag no longer has an auto value. Instead, this flag is defined as "use a specific reflection protocol. if no protocol is set, we will choose". This means there is no special value needed to be understood by our users - the default value for --reflect-protocol is no value.

    As part of this change, this changes the ReflectProtocol type to be an enum in line with the rest of the codebase and our code standards (although we should make this clearer in our code standards; it's a little obscure, to be fair). We prefer int enums over strings because they do not allow people to do things such as cast user-provided flag values to their type directly :-) enums should be just that, and parsing should be commonly validated in the package that provides them. bufanalysis is a good example of this style (which bufcurl is now largely updated to reflect).
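
    To illustrate the pattern described above, here is a minimal sketch. It is not Buf's actual code; the type, constant, and value strings are assumptions used only to show the int-enum-plus-central-parsing idea:

    // reflectprotocol.go (illustrative sketch only)
    package bufcurl

    import "fmt"

    // ReflectProtocol is an int enum rather than a string type, so user-provided
    // flag values cannot be cast to it directly; they must go through
    // ParseReflectProtocol.
    type ReflectProtocol int

    const (
        // ReflectProtocolUnset means no protocol was set; the caller chooses.
        ReflectProtocolUnset ReflectProtocol = iota
        ReflectProtocolGRPCV1
        ReflectProtocolGRPCV1Alpha
    )

    // ParseReflectProtocol is the single place where user-provided values are
    // validated and converted (the accepted strings here are assumptions).
    func ParseReflectProtocol(s string) (ReflectProtocol, error) {
        switch s {
        case "":
            return ReflectProtocolUnset, nil
        case "grpc-v1":
            return ReflectProtocolGRPCV1, nil
        case "grpc-v1alpha":
            return ReflectProtocolGRPCV1Alpha, nil
        default:
            return 0, fmt.Errorf("unknown reflect protocol: %q", s)
        }
    }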

  • Use protoencoding for marshal/unmarshal in bufcurl and turn off pretty printing

    I was working through examples for buf curl, and noticed that it pretty-prints the output JSON. Whether we should do this or not is worthy of a quick conversation, but the buf tool's overall stance (including for --error-format=json) has been to print compact JSON and leave formatting to the user, sticking with the Unix philosophy of doing one thing well. In other words, we give the user the most compact output possible, and they can use jq to format it if they want:

    $ buf curl https://demo.connect.build/buf.connect.demo.eliza.v1.ElizaService/Say -d '{"sentence":"Hello."}'
    {"sentence":"Hello...I'm glad you could drop by today."}
    
    $ buf curl https://demo.connect.build/buf.connect.demo.eliza.v1.ElizaService/Say -d '{"sentence":"Hello."}' | jq
    {
      "sentence": "Hello, how are you feeling today?"
    }
    

    In going through the code to put up the proposal to make this change, I noticed that bufcurl is using proto.Unmarshal/Marshal and protojson directly, which we want to avoid - we've strived to use protoencoding everywhere so we have the same behavior for Protobuf encoding everywhere, and can easily change it later. One example: we always want to use deterministic wire marshaling, and we want to call json.Compact on the result for JSON marshaling due to protojson's purposefully non-deterministic JSON marshaling (and if this were ever fixed, we could easily change the usage of json.Compact in one place). This updates the code to use protoencoding everywhere in bufcurl, and I'll have a follow-up to do this in bufstudioagent (the only other place in bufbuild/buf that doesn't use protoencoding) as well.

    Let me know your thoughts especially re: pretty-printing, we should come to agreement on that 100%.
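
    For context, a minimal sketch of the marshal-then-compact approach described above. This is illustrative only, not the protoencoding code itself; the structpb stand-in message is an assumption:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"

        "google.golang.org/protobuf/encoding/protojson"
        "google.golang.org/protobuf/types/known/structpb"
    )

    func main() {
        // Stand-in message; any proto.Message works the same way.
        msg, err := structpb.NewStruct(map[string]interface{}{"sentence": "Hello."})
        if err != nil {
            panic(err)
        }
        data, err := protojson.Marshal(msg)
        if err != nil {
            panic(err)
        }
        // protojson output is purposefully unstable in its whitespace;
        // json.Compact normalizes it to the most compact form.
        var out bytes.Buffer
        if err := json.Compact(&out, data); err != nil {
            panic(err)
        }
        fmt.Println(out.String()) // {"sentence":"Hello."}
    }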

  • buf lint comments not working when directories set

    When specifying directories in buf.work.yaml, buf lint will no longer report COMMENTS errors.

    Setup

    # buf.yaml
    
    version: v1
    
    lint:
      use:
        - DEFAULT
        - COMMENTS
        - PACKAGE_NO_IMPORT_CYCLE
    
    // service.proto
    
    syntax = "proto3";
    
    package test.activity.v1alpha;
    
    service Service {
      rpc Test(test.activity.v1alpha.TestRequest) returns (test.activity.v1alpha.TestResponse);
    }
    
    message TestRequest {
      string value = 1;
    }
    
    message TestResponse {
      string value = 1;
    }
    

    Working example

    When you only have buf.yaml in the top level, linting works as intended:

    $ tree
    .
    ├── buf.yaml
    └── test
        └── activity
            └── v1alpha
                └── service.proto
    
    $ buf lint
    test/activity/v1alpha/service.proto:7:1:Service "Service" should have a non-empty comment for documentation.
    test/activity/v1alpha/service.proto:8:3:RPC "Test" should have a non-empty comment for documentation.
    test/activity/v1alpha/service.proto:11:1:Message "TestRequest" should have a non-empty comment for documentation.
    test/activity/v1alpha/service.proto:12:3:Field "value" should have a non-empty comment for documentation.
    test/activity/v1alpha/service.proto:15:1:Message "TestResponse" should have a non-empty comment for documentation.
    test/activity/v1alpha/service.proto:16:3:Field "value" should have a non-empty comment for documentation.
    

    Issue example

    When you put the protobuf files in the protos directory and add a buf.work.yaml file, buf lint will no longer output any errors.

    # buf.work.yaml
    
    version: v1
    
    directories:
      - protos
    
    $ tree
    .
    ├── buf.work.yaml
    ├── buf.yaml
    └── protos
        └── test
            └── activity
                └── v1alpha
                    └── service.proto
    
    $ buf lint
    # empty response
    

    The interesting thing is, it seems to affect only the COMMENTS lint rules. If I rename the service to Test, I get this error when linting:

    $ buf lint
    protos/test/activity/v1alpha/service.proto:7:9:Service name "Test" should be suffixed with "Service".
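
    One possible explanation, offered as an assumption rather than a confirmed diagnosis: with a buf.work.yaml, each listed directory is expected to carry its own buf.yaml, so the top-level buf.yaml may be ignored and the default DEFAULT rules (which include SERVICE_SUFFIX but not COMMENTS) would apply. Moving buf.yaml under protos/ would then look like:

    $ tree
    .
    ├── buf.work.yaml
    └── protos
        ├── buf.yaml
        └── test
            └── activity
                └── v1alpha
                    └── service.proto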
    
  • Bump github.com/docker/docker from 20.10.21+incompatible to 20.10.22+incompatible

    Bumps github.com/docker/docker from 20.10.21+incompatible to 20.10.22+incompatible.

    Release notes

    Sourced from github.com/docker/docker's releases.

    v20.10.22

    Bug fixes and enhancements

    • Improve error message when attempting to pull an unsupported image format or OCI artifact (moby/moby#44413, moby/moby#44569).
    • Fix an issue where the host's ephemeral port-range was ignored when selecting random ports for containers (moby/moby#44476).
    • Fix ssh: parse error in message type 27 errors during docker build on hosts using OpenSSH 8.9 or above (moby/moby#3862).
    • seccomp: block socket calls to AF_VSOCK in default profile (moby/moby#44564).

    Commits
    • 42c8b31 Merge pull request #44656 from thaJeztah/20.10_containerd_binary_1.6.13
    • ff29c40 update containerd binary to v1.6.13
    • 0234322 Merge pull request #44488 from thaJeztah/20.10_backport_update_gotestsum
    • edca413 [20.10] update gotestsum to v1.8.2
    • 6112b23 Merge pull request #44476 from sbuckfelder/20.10_UPDATE
    • 194e73f Merge pull request #44607 from thaJeztah/20.10_containerd_binary_1.6.12
    • a9fdcd5 [20.10] update containerd binary to v1.6.12 (addresses CVE-2022-23471)
    • 48f955d Merge pull request #44597 from thaJeztah/20.10_containerd_1.6.11
    • 50d4d98 Merge pull request #44569 from thaJeztah/20.10_backport_relax_checkSupportedM...
    • 17451d2 Merge pull request #44593 from thaJeztah/20.10_update_go_1.18.9
    • Additional commits viewable in compare view
