reRPC

A reduced, refined gRPC implementation in Go

reRPC is a small framework for building HTTP APIs. You write a short API definition file and implement your application logic, and reRPC generates code to handle marshaling, routing, error handling, and content-type negotiation. It also generates an idiomatic, type-safe client.

reRPC is wire-compatible with both the gRPC and Twirp protocols, including full support for gRPC streaming. reRPC servers interoperate seamlessly with generated clients in more than a dozen languages, command-line tools like grpcurl, and proxies like Envoy and gRPC-Gateway. Thanks to Twirp's simple, human-readable JSON protocol, reRPC servers are also easy to debug with cURL.

Under the hood, reRPC is just protocol buffers and the standard library: no custom HTTP implementation, no new name resolution or load balancing APIs, and no surprises. Everything you already know about net/http still applies, and any package that works with an http.Server, http.Client, or http.Handler also works with reRPC.

For more on reRPC, including a walkthrough and a comparison to alternatives, see the docs.

A Small Example

Curious what all this looks like in practice? Here's a small h2c server:

package main

import (
  "net/http"

  "golang.org/x/net/http2"
  "golang.org/x/net/http2/h2c"

  "github.com/rerpc/rerpc"
  pingpb "github.com/rerpc/rerpc/internal/ping/v1test" // generated
)

type PingServer struct {
  pingpb.UnimplementedPingServiceReRPC // returns errors from all methods
}

func main() {
  ping := &PingServer{}
  mux := rerpc.NewServeMux(
    pingpb.NewPingHandlerReRPC(ping),
    rerpc.NewBadRouteHandler(),
  )
  handler := h2c.NewHandler(mux, &http2.Server{})
  http.ListenAndServe(":8081", handler)
}

With that server running, you can make requests with any gRPC client or with cURL:

$ curl --request POST \
  --header "Content-Type: application/json" \
  http://localhost:8081/internal.ping.v1test.PingService/Ping

{"code":"unimplemented","msg":"internal.ping.v1test.PingService.Ping isn't implemented"}

You can find production-ready examples of servers and clients in the API documentation.

Status

This is the earliest of early alphas: APIs will break before the first stable release.

Support and Versioning

reRPC supports:

Within those parameters, reRPC follows semantic versioning.

Legal

Offered under the MIT license. This is a personal project developed in my spare time - it's not endorsed by, supported by, or (as far as I know) used by my current or former employers.

Comments
  • Unify Request and Response into Message

    Originally, we expected the generic Request and Response types to diverge quite a bit. In practice, they've ended up nearly identical. The methods we anticipate adding (primarily DisableCompression()) apply equally to both.

    The code for the two types is so similar that we're often making near-identical changes to their code. (For example, supporting trailers required verbatim copies across the two types.)

    This commit unifies the two types into connect.Message. We can then unify AnyRequest/AnyResponse and ReceiveRequest/ReceiveResponse. Since Request.Msg was never @bufdev's favorite and Message.Msg is even worse, I've renamed to Message.Body - but I'm totally open to suggestions for a better field name.

    After this PR, we've slimmed down connect's exported API quite a bit. On my monitor, the GoDoc table of contents now fits (barely) on one screen.
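
    For context, a rough sketch of the unified shape (the package, field, and method names below are illustrative stand-ins, not the final connect API):

    package messagesketch

    import "net/http"

    // Spec stands in for connect's per-RPC metadata.
    type Spec struct {
      Procedure string
    }

    // Message is the single generic envelope used for both directions.
    type Message[T any] struct {
      Body *T // the generated protobuf struct (the old Request.Msg / Response.Msg)

      spec    Spec
      header  http.Header
      trailer http.Header
    }

    func (m *Message[T]) Spec() Spec           { return m.spec }
    func (m *Message[T]) Header() http.Header  { return m.header }
    func (m *Message[T]) Trailer() http.Header { return m.trailer }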

  • Implement gRPC's standard interop tests

    The first-party gRPC implementations have a standardized battery of interoperability tests. Many of them test particular flags or option schemes that may not apply to us, but let's see if we can get some useful test coverage from them.

    https://github.com/grpc/grpc/blob/master/doc/interop-test-descriptions.md

  • Random Compression Method Selection

    Say you register 4 compression methods on both the handler and the client.

    It seems like the server picks the first compression method it recognizes, unless a specific method was used when sending the request.

    However, on the client side, the order in which the method names are sent is random, since they're collected from a map.

    As far as I can tell, this makes it impossible to specify an order of preference from the client side. Furthermore, it appears that gzip cannot be removed from the pool, only replaced.
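
    For what it's worth, the randomness falls straight out of Go's map semantics; here's a tiny, self-contained illustration (the registry below is a made-up stand-in, not connect's internals):

    package main

    import (
      "fmt"
      "strings"
    )

    func main() {
      // Stand-in for a registry of compression methods keyed by name.
      registered := map[string]struct{}{
        "gzip": {}, "br": {}, "zstd": {}, "snappy": {},
      }
      for i := 0; i < 3; i++ {
        names := make([]string, 0, len(registered))
        for name := range registered { // map iteration order is randomized
          names = append(names, name)
        }
        // The advertised order changes from run to run (and often from
        // iteration to iteration), so clients can't express a preference.
        fmt.Println(strings.Join(names, ", "))
      }
    }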

  • More request information in Interceptor.WrapStreamContext()

    Currently there is no way to enrich a stream context with information about the request. The WrapStreamContext() method only accepts a parent context and does not give us any information about the request.

    In particular, we need access to the endpoint spec and request headers.

    Our primary use case for this is authentication: we would like to apply generic token parsing and validation logic to all requests and enrich the context with authentication info.
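
    A hedged sketch of the difference, using hypothetical interface and type names rather than connect's actual API:

    package authsketch

    import (
      "context"
      "net/http"
    )

    // Spec stands in for connect's per-RPC metadata (procedure name, etc.).
    type Spec struct {
      Procedure string
    }

    // Today: the wrapper sees nothing but the parent context.
    type StreamContextWrapper interface {
      WrapStreamContext(ctx context.Context) context.Context
    }

    // Requested: pass the endpoint spec and request headers as well, so an
    // authenticating interceptor can validate a token and stash the caller's
    // identity in the context before the stream handler runs.
    type RequestAwareStreamContextWrapper interface {
      WrapStreamContext(ctx context.Context, spec Spec, header http.Header) context.Context
    }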

  • Rename protoc-gen-go-connect to protoc-gen-goconnect/protoc-gen-connectgo?

    The go-.* paradigm is really only used by protoc-gen-go-grpc, which is relatively new, and I'd argue that it's not productive. We generally want people to get into the habit of generating to a sub-directory named after their plugin, so we might want to have e.g. internal/gen/proto/goconnect or internal/gen/proto/connectgo, and then name the plugin accordingly.

    Note that we did the same go-.* style with our internal plugins for bufbuild/buf (this is on me), so we should probably change that too once we agree on the naming scheme.

  • Possible to retrieve peer info?

    With grpc-go I can get peer info like so:

    p, ok := peer.FromContext(ctx)
    if !ok {
        return nil, errors.New("could not get peer from context")
    }
    addr := p.Addr.String()
    

    Is it possible to do this with connect-go?

    I am trying to get the IP that the request is originating from.

    Thanks
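
    Not an authoritative answer, but because connect handlers are plain http.Handlers, one workaround is ordinary net/http middleware that copies the caller's address into the context before the RPC method runs (all names below are made up for the sketch):

    package peerinfo

    import (
      "context"
      "net/http"
    )

    type peerAddrKey struct{}

    // WithPeerAddr stores r.RemoteAddr in the request context so RPC
    // implementations further down the chain can read it.
    func WithPeerAddr(next http.Handler) http.Handler {
      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := context.WithValue(r.Context(), peerAddrKey{}, r.RemoteAddr)
        next.ServeHTTP(w, r.WithContext(ctx))
      })
    }

    // PeerAddr retrieves the originating address inside an RPC method.
    func PeerAddr(ctx context.Context) (string, bool) {
      addr, ok := ctx.Value(peerAddrKey{}).(string)
      return addr, ok
    }

    Wrapping the mux before handing it to the server (e.g. WithPeerAddr(mux)) makes the address available to every handler.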

  • Can't detect MethodOptions

    I am currently in the process of migrating an existing gRPC API server to connect-go. It has gone smoothly so far, great work!

    I am stuck at one point, though: in the current implementation, I detect the specified MethodOptions per service and construct ACL rules from them.

    Example Proto

    service HealthService {
      rpc Get(HealthServiceGetRequest) returns (HealthServiceGetResponse) {
        option (visibility) = VISIBILITY_PUBLIC;
      }
    }
    

    Then I use @jhump's "github.com/jhump/protoreflect/grpcreflect" to load the ServiceDescriptors and iterate through them. This requires a grpc.Server instance, which is no longer available.

    Any hints on how to get access to the MethodOptions with a connect-go implementation are welcome.
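
    One possible approach, sketched with placeholder package and service names: the descriptors emitted by protoc-gen-go are registered in protoregistry.GlobalFiles once the generated package is imported, so MethodOptions can be read without a grpc.Server:

    package main

    import (
      "fmt"

      "google.golang.org/protobuf/reflect/protoreflect"
      "google.golang.org/protobuf/reflect/protoregistry"
      "google.golang.org/protobuf/types/descriptorpb"
      // _ "example.com/gen/health/v1" // hypothetical: blank-import the generated package
    )

    func main() {
      // "acme.health.v1.HealthService" is a placeholder for the real service name.
      desc, err := protoregistry.GlobalFiles.FindDescriptorByName("acme.health.v1.HealthService")
      if err != nil {
        panic(err)
      }
      svc := desc.(protoreflect.ServiceDescriptor)
      methods := svc.Methods()
      for i := 0; i < methods.Len(); i++ {
        m := methods.Get(i)
        opts, ok := m.Options().(*descriptorpb.MethodOptions)
        if !ok || opts == nil {
          continue
        }
        // With the generated extension type in hand (hypothetical name):
        //   visibility := proto.GetExtension(opts, healthv1.E_Visibility)
        fmt.Printf("%s options: %v\n", m.FullName(), opts)
      }
    }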

  • Support dynamic input type and output type

    Is your feature request related to a problem? Please describe.

    I'm trying to build a connect gateway that supports both plain HTTP POST and gRPC.

    The proto definitions are dynamic, generated from gRPC reflection, but the connect handler can't use the dynamic types because of handler code like this:

    request = &Request[Req]{
      Msg:    new(Req),
      spec:   receiver.Spec(),
      header: receiver.Header(),
    }
    

    I think connect-go could support dynamic protobuf types: the input and output types would be based on a reflected MessageType, perhaps passed in via a HandlerOption, so that NewUnaryHandler[any, any]() can work as expected.

    Additional context

    The connect protocol makes gRPC easier, but for migration we need a gateway to translate the protocol for other languages that don't speak the connect protocol.
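
    For reference, the standard protobuf runtime can already construct messages from a descriptor alone via dynamicpb; the missing piece is letting the connect handler accept such messages. A hedged sketch (the registry lookup is just one way to get a descriptor; a gateway would typically build one from the reflection API instead):

    package dynamicgateway

    import (
      "fmt"

      "google.golang.org/protobuf/reflect/protoreflect"
      "google.golang.org/protobuf/reflect/protoregistry"
      "google.golang.org/protobuf/types/dynamicpb"
    )

    // newDynamicMessage builds an empty message for any registered type, with
    // no generated Go struct involved.
    func newDynamicMessage(fullName protoreflect.FullName) (*dynamicpb.Message, error) {
      desc, err := protoregistry.GlobalFiles.FindDescriptorByName(fullName)
      if err != nil {
        return nil, err
      }
      msgDesc, ok := desc.(protoreflect.MessageDescriptor)
      if !ok {
        return nil, fmt.Errorf("%s is not a message", fullName)
      }
      return dynamicpb.NewMessage(msgDesc), nil
    }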

  • How to implement a "full-stream" interceptor?

    Is your feature request related to a problem? Please describe. I'm struggling to understand how to implement a logging/tracing interceptor for the full Client/Server side streams. Something similar to how it was done with grpc-go, example for Elasticsearch's APM tracing: https://github.com/elastic/apm-agent-go/blob/main/module/apmgrpc/server.go#L111

    If I embed the tracing information using WrapStreamContext (which is also missing something like Spec, to be able to identify which stream is being called), how would I go about closing the transaction at the end?

    Describe the solution you'd like I'd like a solution that allows for tracking the duration and result of streams, similar to how it's possible for unary calls.
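
    For what it's worth, here's a hedged sketch of the kind of whole-stream hook this is asking for. The type names mirror connect's streaming handler plumbing but should be treated as illustrative; the point is simply that wrapping the entire call gives a natural place to close the transaction:

    package tracing

    import (
      "context"
      "log"
      "time"

      "github.com/bufbuild/connect-go"
    )

    // wrapStreamingHandler records the duration and final error of a full
    // handler-side stream, analogous to wrapping a unary call.
    func wrapStreamingHandler(next connect.StreamingHandlerFunc) connect.StreamingHandlerFunc {
      return func(ctx context.Context, conn connect.StreamingHandlerConn) error {
        start := time.Now()
        err := next(ctx, conn)
        // The stream is finished here, so the span/transaction can be closed
        // with both the procedure name and the outcome.
        log.Printf("%s finished in %s (err=%v)", conn.Spec().Procedure, time.Since(start), err)
        return err
      }
    }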

  • Fixed README examples; Set autogeneration of them using mdox tool.

    Hey!

    Huge fan here, this project is amazing! 💪🏽 It has lots of features (three protocols in one), so I think the examples have to be clear. I found the existing ones obsolete, so I recreated them and committed tooling that auto-generates them in the README whenever they change.

    make test also checks if the examples are buildable, which will keep them up-to-date!

    Hopefully that will help make this project more accessible, cheers. Keep the good work!

    I hope you don't mind my adding an auto-formatter for the README. It ensures consistency and, for example, puts everything on a single line (all IDEs handle that just fine, so there's no point in manually adjusting the text width).

    Signed-off-by: Bartlomiej Plotka [email protected]

  • Evaluate Style Guide

    Package Structure

    • [x] The codec and compress packages are split out so that we can more easily add a separate package for the Protocol abstraction (e.g. gRPC, gRPC-Web, Connect) without introducing an import cycle.
    • [x] The clientstream and handlerstream packages are split out so that we can tidy up names for these types. Otherwise, you'd have names like NewClientClientStream (the client's view of a client streaming endpoint).
    • [x] We might want it to be compress.Gzip instead of compress.GzipCompressor.
      • Edit: We'll move this to compress/gzip.Compressor.

    Method Naming

    • [x] The stream has a ReceivedHeader method to distinguish itself from the request headers. Should we instead just name this ResponseHeader for clarity?
    • [x] Similarly, let's make it explicit to be RequestHeader and ResponseHeader.
    • [x] As discussed, we need to decide what we're doing with Simple/Full.
      • Client-side: WrappedPingClient and UnwrappedPingClient interfaces. PingClient is reserved for the type that users interact with.
      • Server-side: PingService (acts upon generic types), and NewPingService (acts upon any). Comments are left in-line to describe how to implement the simple, non-generic method signatures.

    Future Proofing

    • [x] Top-level abstractions (e.g. Codec and Compressor) are propagated through the clientCfg and into the protocol abstraction via a catch-all protocolClientParams. If we eventually plan to export the protocol abstraction and include it as a ClientOption, the relationship here is fuzzy.
      • What happens if we ever have a protocol that doesn't interact with all of the protocolClientParams types - is it a no-op, an error, or a silent failure?
      • We could tie these options to each protocol individually to clear these relationships up, but we end up with some more repetition (i.e. we need to repeat the same options for similar protocols like gRPC and gRPC-Web). For example, each of the gRPC and gRPC-Web protocols would have a Codec option.
      • In the connect-api branch, we were able to get around this because the protocols were separate client implementations, and they could each individually own what options they exposed (e.g. here).
    • [x] We still need to figure out error details. I know the gRPC protocol requires the proto.Any, and Akshay had some ideas around this - we could rename the methods to include Any so it leaves room for us to add other types later (e.g. AddAnyDetail). The abstraction violation between the pluggable Codec and the proto.Any sucks, but I know we're at the mercy of the gRPC protocol here.

    1.0 Features

    • [x] The gRPC health and reflection packages are valuable, but they aren't really necessary to begin with. We should consider whether or not these packages should exist in a separate repository (similar to gRPC middleware repositories).
      • I know we need to be mindful of this w.r.t. including the RegistrationName in the generated code. If we were to drop this support to begin with, we'd need to reintroduce this as a HandlerOption later, and that's tied to the connect library itself. It's not immediately obvious how this would work.
      • Decision: health and reflection are staying where they are. We need these features for easy gRPC user adoption. To be clear, health is non-optional. reflection is a huge quality-of-life improvement and it's (nearly) part of the gRPC specification at this point.
    • [x] connect.MaxHeaderBytes is kinda nice, but doesn't feel necessary and is prone to change across different protocols.
    • [x] Should connect.ReceiveResponse be in an internal package? It's only used twice and otherwise distracts from the API that users ought to interact with. This might already be your plan based on the conversations we had earlier about the user-facing API and connect internals.
      • It looks like ReceiveRequest needs to be exported for the generated code, so I can see an argument to export it for symmetry.
    • [x] Drop IsValidHeaderKey and IsValidHeaderValue.

    Implementation Details

    • [x] In the connect-api branch, I left a note for myself about whether or not the discard helper function can hang forever (e.g. discard(cs.response.Body)). This might have happened when I introduced the gRPC testing suite, but I can't recall. We need to make sure this can't happen.
      • Nothing to do here - this is a consequence of needing to read the http.Response.Body to completion to reuse the connection (see the drain sketch after this list). This is also just an implementation detail, so it's not blocking regardless.
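
    For reference, the drain-and-close pattern at issue looks roughly like this; the helper name is hypothetical, but the requirement comes from the standard library: the body must be read to completion and closed before the http.Client can reuse the connection.

    package drainsketch

    import "io"

    // drain reads an http.Response.Body to completion and closes it, which is
    // what lets the client reuse the underlying connection. io.Copy returns
    // only when Read reports an error (io.EOF on success), so it blocks until
    // the server stops sending rather than hanging on its own.
    func drain(body io.ReadCloser) error {
      _, err := io.Copy(io.Discard, body)
      if closeErr := body.Close(); err == nil {
        err = closeErr
      }
      return err
    }
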
  • pkg.go.dev example doesn't compile

    Describe the bug

    The example doesn't work here: https://pkg.go.dev/github.com/bufbuild/connect-go#example-package-Handler

    Fails with the error

    Output:
    
    go: downloading github.com/bufbuild/connect-go v1.4.1
    go: downloading google.golang.org/protobuf v1.28.1
    package play.ground
    	prog.go:8:2: use of internal package github.com/bufbuild/connect-go/internal/gen/connect/ping/v1 not allowed
    package play.ground
    	prog.go:9:2: use of internal package github.com/bufbuild/connect-go/internal/gen/connect/ping/v1/pingv1connect not allowed
    

    The solution here would be one of the following:

    • Move the protos from internal to a directory that isn't import-restricted, like private
    • Create a BSR module and remove internal/gen completely
    • Create another example in https://github.com/bufbuild/connect-go/blob/main/handler_example_test.go so that it is not rendered as a runnable example

    From the docs here

    To achieve this we can use a “whole file example.” A whole file example is a file that ends in _test.go and contains exactly one example function, no test or benchmark functions, and at least one other package-level declaration. When displaying such examples godoc will show the entire file.

  • client side of server streaming call does not always drain HTTP response body

    When invoking a server or bidi stream using the Connect or gRPC-Web protocols, the response stream is only read to the end of the "end of stream" frame. It is never verified that the body contains nothing more. The standard gRPC protocol does not exhibit this issue, since it must drain the body fully and read trailers in order to get the RPC status.

    This is an issue when trying to use HTTP middleware that wraps the body reader. The middleware will never detect that the response is finished, because the underlying reader is never fully drained (i.e. read until it returns a non-nil error, typically io.EOF). This also means that it is possible for the server to write additional content, and thus send back a corrupt/invalid response body, and the RPC client will not notice. (It's unclear what action should be taken in this case, such as whether it should result in an RPC wire error, especially if the call otherwise succeeded.)

  • Expose *http.Request to server Peer

    In some cases, we need access to the underlying *http.Request in the handler.

    For example, we need access to the underlying request.TLS to identify peer identity.

  • expose an API for constructing a wire error

    When using connect.NewError, it is not possible to create an error such that err.IsWireError() will return true.

    This capability is particularly useful for dealing with bidi streaming APIs that represent a conversation, where each response message correlates with a request message. It is common for these kinds of APIs to support partial success by having a response message indicate a failure just for that one request message. Often such APIs use google.rpc.Status in the response message to indicate a possible failure. Sometimes they do not.

    For this sort of use, it is commonplace for the calling Go code to translate that status into an error. Since the error was received over the wire during an RPC call, it should technically be a wire error, so that any other handling/propagation logic higher up the stack can react to it correctly.

    This could be addressed as easily as introducing a new NewWireError function with the same signature and semantics as NewError except that the returned error will return true when err.IsWireError() is called.
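
    A self-contained sketch of the proposal, with stand-in types so it compiles on its own; the wireErr field is hypothetical and only illustrates the intended semantics:

    package wiresketch

    // Code and Error stand in for connect.Code and connect.Error.
    type Code int

    type Error struct {
      code    Code
      err     error
      wireErr bool
    }

    func (e *Error) Error() string     { return e.err.Error() }
    func (e *Error) Unwrap() error     { return e.err }
    func (e *Error) Code() Code        { return e.code }
    func (e *Error) IsWireError() bool { return e.wireErr }

    // NewError mirrors connect.NewError: the result is not a wire error.
    func NewError(code Code, underlying error) *Error {
      return &Error{code: code, err: underlying}
    }

    // NewWireError is the proposed addition: same signature and semantics,
    // except that IsWireError reports true.
    func NewWireError(code Code, underlying error) *Error {
      e := NewError(code, underlying)
      e.wireErr = true
      return e
    }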

  • Distinguish between "the server ended the stream" and "the connection dropped unexpectedly"

    Is your feature request related to a problem? Please describe.

    We're using the connect protocol to stream messages from a server to a client. I seem to have misconfigured the reverse proxy in between them, so the connection drops after 30 seconds.

    What surprised me, though, was that in this case (*ServerStreamForClient).Err() returns nil, even though the client never received the end-of-stream message.

    It seems like when the connection drops unexpectedly, the client receives an io.EOF, which is suppressed.

    If the server closed the stream and the client received the end-of-stream message, then the error is errSpecialEnvelope, which is also suppressed since it wraps io.EOF.

    It seems like it's not possible to distinguish between "the server ended the stream" and "the connection dropped unexpectedly".

    Describe the solution you'd like

    I would like (*ServerStreamForClient).Err() to only return nil if the client received the end-of-stream message from the server. If the client received io.EOF before receiving the end-of-stream message, then I would like it to return the error.
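
    To make the ambiguity concrete, the client-side loop looks roughly like this (the generated packages and the Watch RPC are hypothetical); today it exits the same way whether the end-of-stream message arrived or the proxy cut the connection:

    package watchsketch

    import (
      "context"

      "github.com/bufbuild/connect-go"

      watchv1 "example.com/gen/watch/v1"        // hypothetical generated messages
      "example.com/gen/watch/v1/watchv1connect" // hypothetical generated client
    )

    func watch(ctx context.Context, client watchv1connect.WatchServiceClient) error {
      stream, err := client.Watch(ctx, connect.NewRequest(&watchv1.WatchRequest{}))
      if err != nil {
        return err
      }
      defer stream.Close()
      for stream.Receive() {
        _ = stream.Msg() // handle each message
      }
      // Today this returns nil even if the connection dropped before the
      // end-of-stream message; the request is for it to return an error then.
      return stream.Err()
    }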

  • document in the FAQ how client and server might access authenticated identity of the remote

    The Peer object only provides an address. For clients, it just echoes back the host that was used in the HTTP request, without returning anything about the remote server's identity. It could at least return the resolved IP, to provide more information when multiple A records are available.

    But it also provides no way to get the authenticated identity, when using TLS. This is easily available in the response object returned from the http.Client or the request object provided to the http.Handler.

    The gRPC version of this type has a generic AuthInfo field with an interface type, and users can then try to type-assert to a specific implementation. The idea is that different authn mechanisms might be used to authenticate parties (like JWTs or other custom auth cookies for client authn), so the representation needs to be flexible enough that an interceptor could provide the identity (instead of hard-coding the representation used in mutually-authenticated TLS). Speaking of which, there is no way for an authenticating interceptor to override the peer, since there is no exported setter (or constructor that allows setting it).
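
    Until the FAQ covers it, one workaround on the server side is the same kind of net/http middleware trick used for peer addresses: the handler's *http.Request already carries the verified TLS state, so it can be copied into the context for RPC methods to inspect (names below are made up for the sketch):

    package tlsidentity

    import (
      "context"
      "crypto/tls"
      "net/http"
    )

    type tlsStateKey struct{}

    // WithTLSState copies the request's TLS connection state (including any
    // verified peer certificates) into the context.
    func WithTLSState(next http.Handler) http.Handler {
      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.TLS != nil {
          r = r.WithContext(context.WithValue(r.Context(), tlsStateKey{}, r.TLS))
        }
        next.ServeHTTP(w, r)
      })
    }

    // TLSState retrieves the state inside an RPC implementation; callers can
    // inspect state.PeerCertificates for the client's identity.
    func TLSState(ctx context.Context) (*tls.ConnectionState, bool) {
      state, ok := ctx.Value(tlsStateKey{}).(*tls.ConnectionState)
      return state, ok
    }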
