grpc-tools

A suite of tools for gRPC debugging and development. Like Fiddler/Charles but for gRPC!

The main tool is grpc-dump, which transparently intercepts network traffic and logs all gRPC and gRPC-Web requests with full metadata as a JSON stream. The stream is easily readable as-is, or you can pipe it through tools like jq for more complex visualisation.

[demo animation]

This repository currently includes:

  • grpc-dump: a small gRPC proxy that dumps RPC details to a file for debugging, and later analysis/replay.
  • grpc-replay: takes the output from grpc-dump and replays requests to the server.
  • grpc-fixture: a proxy that takes the output from grpc-dump and replays saved responses to client requests.
  • grpc-proxy: a library for writing gRPC intercepting proxies. grpc-dump and grpc-fixture are both built on top of this library (a rough sketch of the underlying hook follows this list).
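
For orientation, the hook such intercepting proxies are built around is a standard grpc-go server option that receives every RPC for services the server does not implement. The sketch below is only a generic illustration of that hook; it is not the grpc-proxy API itself, and the port is simply the one used in the examples further down.

// Generic illustration of the interception hook (not the grpc-proxy API):
// an UnknownServiceHandler sees every RPC, which is where a proxy would
// open a matching stream to the real backend and copy messages both ways.
package main

import (
    "log"
    "net"

    "google.golang.org/grpc"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

func main() {
    lis, err := net.Listen("tcp", "localhost:12345")
    if err != nil {
        log.Fatal(err)
    }

    srv := grpc.NewServer(
        grpc.UnknownServiceHandler(func(_ interface{}, stream grpc.ServerStream) error {
            method, _ := grpc.MethodFromServerStream(stream)
            log.Printf("intercepted %s", method)
            // A real proxy forwards the raw messages; this sketch just rejects them.
            return status.Errorf(codes.Unimplemented, "proxying not implemented in this sketch")
        }),
    )
    log.Fatal(srv.Serve(lis))
}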

These tools are in alpha so expect breaking changes between releases. See the changelog for full details.

Installation

The recommended way to install these tools is via Homebrew using:

brew install bradleyjkemp/formulae/grpc-tools

Alternatively, binaries can be downloaded from the GitHub releases page.

Or you can build the tools from source using:

go install github.com/bradleyjkemp/grpc-tools/...

grpc-dump

grpc-dump lets you see all of the gRPC requests being made by applications on your machine without requiring any changes to application or server code.

Simply start grpc-dump and configure your system/application to use it as an HTTP(S) proxy. You'll soon see requests logged in full as a JSON stream, complete with service and method names.

Even if you don't have the original .proto files, grpc-dump will attempt to deserialise messages heuristically to give a human-readable form.
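
As a rough sketch of what that heuristic deserialisation involves (this is not grpc-dump's actual implementation), the protobuf wire format can be walked field by field with google.golang.org/protobuf/encoding/protowire even when no .proto definition is available:

// A minimal sketch of heuristic decoding: walk the protobuf wire format
// without a .proto file and print whatever can be recovered.
package main

import (
    "fmt"

    "google.golang.org/protobuf/encoding/protowire"
)

func dumpFields(b []byte) error {
    for len(b) > 0 {
        num, typ, n := protowire.ConsumeTag(b)
        if n < 0 {
            return protowire.ParseError(n)
        }
        b = b[n:]

        switch typ {
        case protowire.VarintType:
            v, m := protowire.ConsumeVarint(b)
            if m < 0 {
                return protowire.ParseError(m)
            }
            fmt.Printf("field %d: varint %d\n", num, v)
            b = b[m:]
        case protowire.BytesType:
            // Could be a string, raw bytes, or a nested message; a real tool
            // would recurse and guess which interpretation looks most plausible.
            v, m := protowire.ConsumeBytes(b)
            if m < 0 {
                return protowire.ParseError(m)
            }
            fmt.Printf("field %d: %d bytes %q\n", num, len(v), v)
            b = b[m:]
        default:
            // fixed32/fixed64/groups: just skip over them in this sketch.
            m := protowire.ConsumeFieldValue(num, typ, b)
            if m < 0 {
                return protowire.ParseError(m)
            }
            fmt.Printf("field %d: wire type %d (skipped)\n", num, typ)
            b = b[m:]
        }
    }
    return nil
}

func main() {
    // Hand-encode the equivalent of: field 1 = "hello", field 2 = 42.
    var msg []byte
    msg = protowire.AppendTag(msg, 1, protowire.BytesType)
    msg = protowire.AppendString(msg, "hello")
    msg = protowire.AppendTag(msg, 2, protowire.VarintType)
    msg = protowire.AppendVarint(msg, 42)
    if err := dumpFields(msg); err != nil {
        fmt.Println("decode error:", err)
    }
}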

# start the proxy (leave out the --port flag to automatically pick one)
grpc-dump --port=12345

# in another terminal, run your application pointing it at the proxy
# Warning: if your application connects to a localhost/127.0.0.1 address then proxy settings
# are usually ignored. To fix this you can use a service like https://readme.localtest.me
http_proxy=http://localhost:12345 my-app

# all the requests made by the application will be logged to standard output in the grpc-dump window e.g.
# {"service": "echo", "method": "Hi", "messages": ["....."] }
# JSON will be logged to STDOUT and any info or warning messages will be logged to STDERR
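
As an illustration of consuming that stream programmatically, the sketch below reads a saved dump file line by line. The struct fields are guessed from the example line above (the my-app.dump file name matches the grpc-fixture example further down); treat the JSON output specification linked below as authoritative.

// A sketch of consuming grpc-dump's JSON stream from a saved dump file.
// The field names here are inferred from the example output line above.
package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "os"
)

type rpc struct {
    Service  string            `json:"service"`
    Method   string            `json:"method"`
    Messages []json.RawMessage `json:"messages"`
}

func main() {
    f, err := os.Open("my-app.dump") // output of: grpc-dump --port=12345 > my-app.dump
    if err != nil {
        panic(err)
    }
    defer f.Close()

    scanner := bufio.NewScanner(f)
    scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow large lines
    for scanner.Scan() {
        var r rpc
        if err := json.Unmarshal(scanner.Bytes(), &r); err != nil {
            continue // skip anything that isn't a JSON RPC record
        }
        fmt.Printf("%s/%s: %d message(s)\n", r.Service, r.Method, len(r.Messages))
    }
}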

Many applications expect to talk to a gRPC server over TLS. For this you need to use the --key and --cert flags to point grpc-dump to certificates valid for the domains your application connects to.

The recommended way to generate these files is via the excellent mkcert tool. grpc-dump will automatically use any mkcert generated certificates in the current directory.

# Configure your system to trust mkcert certificates
mkcert -install

# Generate certificates for domains you want to intercept connections to
mkcert mydomain.com *.mydomain.com

# Start grpc-dump using the key and certificate created by mkcert
# Or start grpc-dump from the same directory and it will
# detect them automatically
grpc-dump --key=mydomain.com-key.pem --cert=mydomain.com.pem

More details for using grpc-dump (including the specification for the JSON output) can be found here.

grpc-fixture

# save the (stdout) output of grpc-dump to a file
grpc-dump --port=12345 > my-app.dump

# in another terminal, run your application pointing it at the proxy
http_proxy=http://localhost:12345 my-app

# now run grpc-fixture from the previously saved output
grpc-fixture --port=12345 --dump=my-app.dump

# when running the application again, all requests will
# be intercepted and answered with saved responses,
# no requests will be made to the real gRPC server.
http_proxy=http://localhost:12345 my-app

For applications that expect a TLS server, the same --key and --cert flags can be used as described above for grpc-dump.

More details for using grpc-fixture can be found here.

Comments
  • grpc-proxy

    Hi,

    I would like to get more information about the grpc-proxy you built. Can it also be used for TensorFlow Serving (it uses gRPC requests/responses)? I want to read the header, rewrite it, and forward it to TensorFlow Serving, then read the response and forward it to the gRPC client.

    grpc client <-> grpc proxy <-> TensorFlow Serving

    Thanks.

  • where to set HTTP_PROXY in python app?

    Thanks for this library; it really makes gRPC easier to use.

    I am trying the example with my Python server and have no idea what the http_proxy is for. If I point grpc-dump at the port my Python server is listening on, the requests are shown, but no responses.

    My Python server is similar to:

    https://github.com/grpc/grpc/blob/master/examples/python/helloworld/greeter_server_with_reflection.py

    Clearly I am missing something. I am trying to find the demo app you are using. Could you include the echo app code as an example? Thanks.

  • error: Decompressor is not installed for grpc-encoding "gzip"

    For compressed gRPC messages, I get the following error in grpc-dump (0.1.2, macOS):

    "error":{"code":12,"message":"grpc: Decompressor is not installed for grpc-encoding \"gzip\""}
    

    The request seems not to be forwarded to the server.

    I installed grpc-dump via homebrew. Is there anything I can do to use compression?

  • Update deps. Fix HTTP/2 support. Add master key file input.

    Hi, great project!

    I made the following changes for myself:

    • Fixed a bug where HTTP/2 was not intercepted; added a test.
    • Added dumping of response metadata (headers and trailers).
    • Added support for writing TLS master secrets to a file, which Wireshark can use to decrypt the traffic (a rough Go illustration follows at the end of this comment).
    • Updated dependencies.
    • Fixed some linter errors.

    Let me know if any changes are necessary.
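
    As a rough illustration of the TLS master-secret logging mentioned in the list above (this is generic Go, not this PR's code), crypto/tls can write session secrets in the NSS key log format that Wireshark understands:

    // Generic illustration: any TLS connection made with this config logs its
    // session secrets to tls-keys.log, which Wireshark can use for decryption.
    package main

    import (
        "crypto/tls"
        "log"
        "os"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials"
    )

    func main() {
        keyLog, err := os.OpenFile("tls-keys.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o600)
        if err != nil {
            log.Fatal(err)
        }
        defer keyLog.Close()

        creds := credentials.NewTLS(&tls.Config{KeyLogWriter: keyLog})

        // mydomain.com:443 is just the placeholder domain from the mkcert example.
        conn, err := grpc.Dial("mydomain.com:443", grpc.WithTransportCredentials(creds))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        log.Println("connection state:", conn.GetState())
    }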

  • [Issue #96] this is a PoC to solve Issue #96.

    This should fix issue #96.

    Resolving messages using proto.MessageType and then registering them via proto.RegisterType is probably a bad idea and should only be done for Go types generated by protoc. After some debugging and testing I arrived at a solution where dumpInterceptor wraps a message event in a type that implements the json.Marshaler interface and configures an AnyResolver for the jsonpb.Marshaler that knows all FileDescriptors the proto_descriptor.LoadProtoDirectories function sees while loading the proto files provided by the -proto_roots argument.

    This is implemented in this PR and works for the use case mentioned at the beginning of this issue.

    dump_001.json is a successful run of grpc-dump with this PR applied.

    @bradleyjkemp I opened this as a draft PR as my implementation probably does not fit cleanly with the general way grpc-tools is designed. I also do not know the protoreflect library well enough. Please let me know of any other way the described issue can be fixed, or how you would integrate this functionality into grpc-dump properly.
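
    A minimal sketch of the AnyResolver wiring described above; the resolver and helper names here are hypothetical, and only the jsonpb and protoreflect calls are real library API:

    // Hypothetical sketch: resolve Any type URLs against a set of parsed
    // FileDescriptors and marshal messages with jsonpb using that resolver.
    package dumpjson

    import (
        "fmt"
        "strings"

        "github.com/golang/protobuf/jsonpb"
        "github.com/golang/protobuf/proto"
        "github.com/jhump/protoreflect/desc"
        "github.com/jhump/protoreflect/dynamic"
    )

    // descriptorResolver (hypothetical name) looks up message types by name.
    type descriptorResolver struct {
        files []*desc.FileDescriptor
    }

    func (r descriptorResolver) Resolve(typeURL string) (proto.Message, error) {
        name := typeURL[strings.LastIndex(typeURL, "/")+1:]
        for _, fd := range r.files {
            if md := fd.FindMessage(name); md != nil {
                return dynamic.NewMessage(md), nil
            }
        }
        return nil, fmt.Errorf("unknown message type %q", name)
    }

    // MarshalMessage renders a message as JSON, resolving Any fields via the descriptors.
    func MarshalMessage(msg proto.Message, files []*desc.FileDescriptor) (string, error) {
        m := jsonpb.Marshaler{AnyResolver: descriptorResolver{files: files}}
        return m.MarshalToString(msg)
    }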

  • mTLS support and grpc-cache

    I've created a "grpc-cache" tool using mwitkow's code as a base, but having it be part of this ecosystem would be better. I'd like to add it, but I'm not seeing a way to support mTLS. It seems that this is the right place to add a third alternative. Am I reading this right?

  • Replaying requests without proto definitions

    Is there any way to replay requests dumped by grpc-dump when the description files / definitions are not known? Won't the raw_message field in the dump be enough to resend it (without modifications)?
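
    For what it's worth, re-sending already-encoded bytes is possible in principle with a pass-through codec; the sketch below is a generic grpc-go illustration of that idea, not an existing grpc-tools feature:

    // Generic sketch: a pass-through codec lets raw, already-encoded protobuf
    // bytes be sent on an RPC without having the .proto definitions.
    package replay

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
    )

    // rawCodec sends and receives byte slices without (de)serialising them.
    type rawCodec struct{}

    func (rawCodec) Marshal(v interface{}) ([]byte, error) {
        b, ok := v.(*[]byte)
        if !ok {
            return nil, fmt.Errorf("rawCodec: expected *[]byte, got %T", v)
        }
        return *b, nil
    }

    func (rawCodec) Unmarshal(data []byte, v interface{}) error {
        b, ok := v.(*[]byte)
        if !ok {
            return fmt.Errorf("rawCodec: expected *[]byte, got %T", v)
        }
        *b = data
        return nil
    }

    func (rawCodec) Name() string { return "proto" } // claim proto on the wire

    // Replay sends one saved request (for example, decoded bytes from a dump) to fullMethod.
    func Replay(conn *grpc.ClientConn, fullMethod string, rawRequest []byte) ([]byte, error) {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        var rawResponse []byte
        err := conn.Invoke(ctx, fullMethod, &rawRequest, &rawResponse, grpc.ForceCodec(rawCodec{}))
        return rawResponse, err
    }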

  • Bump github.com/jhump/protoreflect from 1.4.4 to 1.5.0

    Bumps github.com/jhump/protoreflect from 1.4.4 to 1.5.0.

    Release notes

    Sourced from github.com/jhump/protoreflect's releases.

    v1.5.0

    This release contains bug fixes and new features/APIs.

    This release also fixes the repo's go.mod file to be compatible with Go 1.13.

    "github.com/jhump/protoreflect/codec"

    Additions:

    • This is a new package. It provides a Buffer type that provides API for interacting with the protobuf binary format. This makes it easy to write programs that can dynamically emit or consume a stream of protobuf-encoded data.

    "github.com/jhump/protoreflect/desc"

    Additions:

    • Added a new CreateFileDescriptorsFromSet function. This is a convenience method around CreateFileDescriptors when the source of descriptors is a *FileDescriptorSet.

    "github.com/jhump/protoreflect/desc/builder"

    Changes/fixes:

    • When custom options were used, BuilderOptions had to be used with a custom extension registry, or else those custom options would not make it into the built descriptor. Now, the custom option can be defined in the file being built or in any of its dependencies, and they will successfully be interpreted and retained. This means that BuilderOptions.Extensions should no longer be needed in most cases.
    • Added new methods to FileBuilder: AddDependency and AddImportedDependency, which allow explicitly adding imports to the file. These can be used during building to resolve custom options.
    • When using FromFile to convert an existing file descriptor into a builder and then building the result, the output descriptor would strip unused imports from the file. This is no longer the case, so that some imports that are only used to define custom options can be retained and custom options correctly interpreted.

    "github.com/jhump/protoreflect/desc/protoparse"

    Additions:

    • A new ErrorReporter type and an eponymously named field on the Parser struct have been added. When the field is unset, the behavior matches previous versions. When set, the reporter will be called for each error encountered, and parsing may continue, allowing a single parse invocation to report many errors instead of failing after the first one.
    • A new ErrorWithPos interface is provided to represent an error that includes source position. The type of errors returned by the parser were previously unspecified, but were often instances of the concrete exported type *ErrorWithSourcePos. Code that inspected errors and attempted type assertions to this type should instead type assert to this new interface as errors returned by the parser in the future may not be of this concrete type, but may still contain source position information.

    Changes/fixes:

    • When generating source code info for a descriptor, no entry was created for allow_alias options for enums. This is fixed.
    • When generating source code info, an option with a boolean value could be missing its trailing comment. This is fixed.
    • If multiple goroutines were invoking protoparse and the files being compiled used the package's copy of standard imports (such as google/protobuf/timestamp.proto et al), when the race detector was enabled, it was possible for it to be tripped. The race should be harmless in practice since the writes that trigger the race are actually no-ops, so it is mainly an issue when running tests with the race detector enabled. This has been fixed: concurrent invocations of protoparse no longer share a copy of these standard imports. Instead, they are cloned for each invocation to use safely.
    • When generating source code info, the output is now significantly closer to that of protoc. Here are the most significant variances that were addressed:
      • The order of source code location entries in the resulting descriptor now matches protoc's order.
      • Updated the way source info was generated for group and map fields to address small discrepancies with protoc.
      • This package now correctly handles extension blocks and preserves comments for them, the same way that protoc generates source info for extension blocks.
      • Fixed generation of source locations for "intermediate" paths to match behavior of protoc. For example, protoc emits multiple entries for the path option (without an index), one for each actual option declared in the file. This package now does the same.
      • Updated computation of "column" in source locations to account for tab stops. This mirrors protoc's logic, which assumes that tab stops are 8 characters apart.
    • This package now accepts an empty file, just as protoc does. Instead of returning an error, it will return an empty descriptor.
    • The UninterpretedOptions fields in options messages of descriptors produced by this package will now be nil when all options are interpreted. Previously, the field was set to an empty slice, which caused confusing/noisy output if the descriptor were then marshaled to JSON via the golang.org/protobuf/jsonpb package.

    "github.com/jhump/protoreflect/desc/protoprint"

    Changes/fixes:

    • The default ordering of a descriptor, when no source info is available, has changed slightly. Previously, options would appear before package and import statements. Now options appear after package and import statements. If source info is available, elements will still retain their ordering per the source info.
    • It was previously possible, when not explicitly sorting elements and when a file descriptor had source info present, for elements to be printed in non-deterministic order. In this mode, the printer uses relative order based on source info to sort elements. But, if source info had ambiguous entries (e.g. more than one entry that matched a printed element), the entry in source info that was used relied on map iteration order. This has been fixed and output is deterministic, even in cases such as this.
    • Previously, trying to print a descriptor that had an extension whose type was a group (vs. a message) would result in a type conversion panic. This has been fixed.
    • Previously, if a descriptor included source code info and it indicated that a set of extensions for the same extendee were defined in multiple extension blocks, this would not be preserved. Instead, all extensions for the same extendee were grouped into a single extension block when the descriptor was printed. Also, comments (leading and trailing) for the extension block were not included in the printed output. This has been corrected. Printing such a descriptor will group extensions correctly into blocks as well as include any comments.

    "github.com/jhump/protoreflect/dynamic"

    ... (truncated)
    Commits
    • c67d679 more explicit AST node types to distinguish between terminals and productions...
    • 009ea28 protoparse: lexer and grammar updates to better match protoc (#262)
    • b1090fc Change ErrorWithPos.GetUnderlying to ErrorWithPos.Unwrap (#261)
    • 8771ced Disable GOPROXY for gotip (#259)
    • 0a6fd46 Add GetUnderlying function to ErrorWithPos (#260)
    • e9bc1d6 Fix go.mod for Go 1.13 (#258)
    • bd35b72 protoparse: add error productions to the grammar and update validation to ret...
    • 5aeb119 add codec package, with exported API for reading/writing protobuf binary form...
    • 09dc841 protoparse: uninterpreted options should use nil instead of empty slice (#253)
    • 900b929 protoparse: make lexer do fewer allocations (#251)
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
    • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

    Additionally, you can set the following in your Dependabot dashboard:

    • Update frequency (including time of day and day of week)
    • Pull request limits (per update run and/or open at any time)
    • Out-of-range updates (receive only lockfile updates, if desired)
    • Security updates (receive only security updates, if desired)

    Finally, you can contact us by mentioning @dependabot.

  • how to use with docker

    I tried setting an environment variable host.docker.internal:grpcdumpPORT in my docker-compose file and the calls just appear as failing to dial host.docker.internal. Are there any examples of how to use this with Docker?

  • Listen to different interfaces capability

    Hello !

    I would like the grpc-dump tool to give users the ability to make the proxy listen on a different interface or IP address.

    I found my way by monkey-patching the source code and it worked pretty well; I'm guessing you could implement this easily by just adding a CLI argument and injecting the value into the right function call :)

    Thanks !

  • Bump google.golang.org/grpc from 1.22.0 to 1.22.1

    Bumps google.golang.org/grpc from 1.22.0 to 1.22.1.

    Release notes

    Sourced from google.golang.org/grpc's releases.

    Release 1.22.1

    • server: populate WireLength on stats.InPayload for unary RPCs
    Commits


  • Add a `force_tls_destination` flag for forcing secure connection to destination

    The use case

    • I'm developing an app using gRPC and I'm connecting to a stage/prod service running remotely
    • I want to sniff the traffic and forward it to the remote service

    The problem

    I know that I can import a certificate that my system trusts and establish two TLS sessions (app <> grpc-dump and grpc-dump <> remote server), but that's additional hassle. I'd much rather connect my app insecurely to grpc-dump, and then have grpc-dump connect securely to the remote server.

    This, of course, assumes I have the power to change my app's code and allow it to create plaintext connections to a server. While I agree that this is not the most correct, black-box way to do it, I think it's much more convenient for a lot of people.

    Solution

    Add a flag to the CLI that forces grpc-dump to establish a TLS connection even if the original connection it is relaying is plaintext. The default value is false; when false at runtime, TLS is only used if the original request was also TLS-secured.

    Any input is appreciated, and I'm sorry if the code is not perfect - this is my first Go code :)

    I hope you'll find this feature useful or interesting. If not, I'll keep it to myself :)

  • Bump github.com/jhump/protoreflect from 1.7.0 to 1.9.0

    Bumps github.com/jhump/protoreflect from 1.7.0 to 1.9.0.

    Release notes

    Sourced from github.com/jhump/protoreflect's releases.

    v1.9.0

    This release contains numerous improvements to the protoparse package, to more closely match protoc in terms of proto source files that are acceptable. It also contains some fixes in other packages.

    "github.com/jhump/protoreflect/desc/builder"

    Changes/fixes:

    • When adding a message to another (to make a nested/enclosed type), the target enclosing message could be incorrectly detached from its parent element. This was the result of a typo in the implementation code and has been fixed.

    "github.com/jhump/protoreflect/desc/protoparse"

    Additions:

    • The protoparse package now issues warnings when it detects that a source file has unused imports. This mirrors the warnings that protoc issues in the same cases. This feature requires the use of a WarningReporter with a parse operation. The concrete type of value provided to the warning reporter will be a protoparse.ErrorUnusedImport.

    Changes/fixes:

    • The protoc compiler was more strict than protoparse when it comes to resolving relative (vs. full qualified) names. This led to conditions where protoparse would accept a proto source file that protoc would reject. The issue is when the first component of an identifier could match multiple lexical scopes. In such a case protoc only matches the most enclosing scope. But protoparse would fallback to other enclosing scopes if the most enclosing scope could not be used to resolve a symbol. (Hard to describe succinctly, so see the example in this bug report.) This issue is now fixed and protoparse resolves names in the same manner as protoc.
    • The protoc compiler uses "C++ enum scoping rules" for protobuf enums. This means that enum values are declared in the namespace of the enclosing enum (as siblings of the enum itself). But protoparse incorrectly treated the enum as the parent scope/namespace. This led to source files that protoparse would accept but that protoc would reject. This issue is now fixed.
    • The use of custom options in oneof statements could incorrectly result in error messages about failing to resolve the custom option name, even if the source file and the option reference were valid. This has been fixed.

    "github.com/jhump/protoreflect/dynamic/msgregistry"

    Additions:

    • A new error type, ErrUnexpectedType, was introduced. When a call to FindMessageTypeByUrl or FindEnumTypeByUrl fails because of a type mismatch (expecting a message, got an enum, or vice versa), this can now be determined programmatically by type-asserting the error to the new error type. This provides a proper/robust way to detect this kind of error (previously, callers would have to examine the error text, which is quite brittle).

    v1.8.2

    This release contains numerous improvements to the protoparse package, to more closely match protoc in terms of proto source files that are acceptable.

    "github.com/jhump/protoreflect/desc/protoparse"

    Changes/fixes:

    • Extensions in a syntax = "proto3" source file were not allowed to have an optional keyword. However, as of the addition of "proto3 optional" support, this is now allowed by protoc. So protoparse now accepts such declarations, to match protoc functionality.
      • Extensions that have an explicit optional keyword are marked in the descriptor with the proto3_optional option. But, unlike normal fields with the proto3_optional option set, they are not (and, in fact, cannot be) included in implicit single-field oneofs.
    • The official compiler, protoc, rejects proto source files for the following reasons. However, protoparse would accept such invalid source files. This has been remedied and protoparse now also rejects such programs:
      • An enum cannot contain a value named option or reserved. This is not an explicit check but is instead a limitation of how the protoc parser works: it assumes these keywords indicate options or reserved ranges, not the start of values with these names.
      • A message cannot begin a field declaration with the keyword reserved, for example in a proto3 file where a type (message or enum) named reserved is also defined. Similar to above, the protoc parser will never recognize such a statement as a field, but protoparse would.
      • A oneof cannot contain a field whose name matches a label keyword (optional, repeated, or required). Unlike the above two, this is not related to limits of the parser but is instead an explicit check to prevent common errors: since oneof blocks do not contain labels, a field thusly named is more likely to be a typo even if otherwise syntactically correct.
      • An enum can only allow aliases (via option allow_alias = true;) if it actually contains values that are aliases. Put another way: if there are no aliases, this option must not be set.
      • A message cannot use message-set wire format (via option message_set_wire_format = true;) if it has any normal fields. Message sets must have only extension fields. Similarly, a message cannot use message-set wire format if it has no extension ranges.
      • An extension for a message that uses message-set wire format must be a message type; scalar extensions are not allowed for messages that use message-set wire format.

    v1.8.1

    This release contains some small bug fixes to the protoparse package.

    "github.com/jhump/protoreflect/desc/protoparse"

    Changes/fixes:

    ... (truncated)

    Commits
    • d3608fa protoparse: oops, we weren't ever linking options for oneofs (#408)
    • 6cc1efa protoparse: report warnings when a file has unused imports (#403)
    • ac729f7 protoparse: c++ scoping rules for enum values (#401)
    • e5cc6ba protoparse: take 3... still getting scoping rules right (#399)
    • c34b9b1 pin ci badge to master branch
    • bc94b04 switch from travis to circleci (#398)
    • a6abd35 protoparse: take 2 on fixing symbol resolution to properly match protoc's C++...
    • 05026f3 protoparse: fix symbol resolution to correctly mimic protoc behavior, which i...
    • 2837af4 desc/builder: fix typo that resulted in wrong message being removed from its ...
    • 8255811 dynamic/msgregistry: Add typed errors for lookups (#386)
    • Additional commits viewable in compare view

  • Bump google.golang.org/grpc from 1.26.0 to 1.39.0

    Bumps google.golang.org/grpc from 1.26.0 to 1.39.0.

    Release notes

    Sourced from google.golang.org/grpc's releases.

    Release 1.39.0

    Behavior Changes

    • csds: return empty response if xds client is not set (#4505)
    • metadata: convert keys to lowercase in FromContext() (#4416)

    New Features

    • xds: add GetServiceInfo to GRPCServer (#4507)
    • xds: add test-only injection of xds config to client and server (#4476)
    • server: allow PreparedMsgs to work for server streams (#3480)

    Performance Improvements

    • transport: remove decodeState from client & server to reduce allocations (#4423)

    Bug Fixes

    • server: return UNIMPLEMENTED on receipt of malformed method name (#4464)
    • xds/rds: use 100 as default weighted cluster totalWeight instead of 0 (#4439)
    • transport: unblock read throttling when controlbuf exits (#4447)
    • client: fix status code to return Unavailable for servers shutting down instead of Unknown (#4561)

    Documentation

    • doc: fix broken benchmark dashboard link in README.md (#4503)
    • example: improve hello world server with starting msg (#4468)
    • client: Clarify that WaitForReady will block for CONNECTING channels (#4477)

    Release 1.38.1

    internal/transport: do not mask ConnectionError (#4561)

    Release 1.38.0

    API Changes

    • reflection: accept interface instead of grpc.Server struct in Register() (#4340)
    • resolver: add error return value from ClientConn.UpdateState (#4270)

    Behavior Changes

    • client: do not poll name resolver when errors or bad updates are reported (#4270)
    • transport: InTapHandle may return RPC status errors; no longer RST_STREAMs (#4365)

    ... (truncated)

    Commits
    • ebf6a4b Change version to 1.39.0 (#4541)
    • 20551e1 internal/transport: do not mask ConnectionError (#4561) (#4569)
    • 22c5358 xds: add HashPolicy fields to RDS update (#4521)
    • 4554924 internal: fix deadlock during switch_balancer and NewSubConn() (#4536)
    • 2d3b1f9 grpc: prevent deadlock in Test/ClientUpdatesParamsAfterGoAway on failure (#4534)
    • 6351a55 xds: remove env var protetion of advanced routing features (#4529)
    • 95e48a8 Add GetServiceInfo to xds.GRPCServer (#4507)
    • aa1169a vet: remove support for non-module-aware Go versions (#4530)
    • b1418a6 xds: export XDSClient interface and use it in balancer tests (#4510)
    • 7301a31 c2p: add random number to xDS node ID in google-c2p resolver (#4519)
    • Additional commits viewable in compare view

  • Upgrade to GitHub-native Dependabot

    Dependabot Preview will be shut down on August 3rd, 2021. In order to keep getting Dependabot updates, please merge this PR and migrate to GitHub-native Dependabot before then.

    Dependabot has been fully integrated into GitHub, so you no longer have to install and manage a separate app. This pull request migrates your configuration from Dependabot.com to a config file, using the new syntax. When merged, we'll swap out dependabot-preview (me) for a new dependabot app, and you'll be all set!

    With this change, you'll now use the Dependabot page in GitHub, rather than the Dependabot dashboard, to monitor your version updates, and you'll configure Dependabot through the new config file rather than a UI.

    If you've got any questions or feedback for us, please let us know by creating an issue in the dependabot/dependabot-core repository.

    Learn more about migrating to GitHub-native Dependabot

    Please note that regular @dependabot commands do not work on this pull request.

  • Bump github.com/golang/protobuf from 1.4.2 to 1.5.2

    Bumps github.com/golang/protobuf from 1.4.2 to 1.5.2.

    Release notes

    Sourced from github.com/golang/protobuf's releases.

    v1.5.2

    Notable changes:

    • (#1306) all: deprecate the module
    • (#1300) jsonpb: restore previous behavior for handling nulls and JSONPBUnmarshaler

    v1.5.1

    Notable changes:

    v1.5.0

    Overview

    This marks the ptypes package as deprecated and upgrades the dependency on google.golang.org/protobuf to a pre-release version of v1.26.0. A subsequent patch release will update the dependency to v1.26.0 proper.

    Notable changes

    • (#1217) ptypes: deprecate the package
    • (#1214) rely on protodesc.ToFileDescriptorProto

    v1.4.3

    Notable changes:

    • (#1221) jsonpb: Fix marshaling of Duration
    • (#1210) proto: convert integer to rune before converting to string
    Commits

  • Bump github.com/sirupsen/logrus from 1.7.0 to 1.8.1

    Bumps github.com/sirupsen/logrus from 1.7.0 to 1.8.1.

    Release notes

    Sourced from github.com/sirupsen/logrus's releases.

    v1.8.1

    No release notes provided.

    v1.8.0

    Correct versioning number replacing v1.7.1

    v1.7.1

    Code quality:

    • use go 1.15 in travis
    • use magefile as task runner

    Fixes:

    • small fixes about new go 1.13 error formatting system
    • Fix for long time race condiction with mutating data hooks

    Features:

    • build support for zos
    Changelog

    Sourced from github.com/sirupsen/logrus's changelog.

    1.8.1

    Code quality:

    • move magefile in its own subdir/submodule to remove magefile dependency on logrus consumer
    • improve timestamp format documentation

    Fixes:

    • fix race condition on logger hooks

    1.8.0

    Correct versioning number replacing v1.7.1.

    1.7.1

    Beware this release has introduced a new public API and its semver is therefore incorrect.

    Code quality:

    • use go 1.15 in travis
    • use magefile as task runner

    Fixes:

    • small fixes about new go 1.13 error formatting system
    • Fix for long time race condiction with mutating data hooks

    Features:

    • build support for zos
    Commits
    • bdc0db8 Merge pull request #1244 from sirupsen/dbd-release
    • 1bfef4b update changelog
    • 7a997b9 improve documentation about timestamp format
    • f104497 Merge pull request #1238 from thaJeztah/move_mage
    • 1d8091a move "mage" to a separate module
    • feebf74 travis: run mage with -v to not discard output
    • 6cff360 Merge pull request #1234 from sirupsen/dbd-cleanup
    • d172886 fix race condition AddHook and traces
    • d59e561 Merge pull request #1231 from sirupsen/dbd-cleanup
    • 35ab8d8 update changelog
    • Additional commits viewable in compare view
