The Go language implementation of gRPC. HTTP/2 based RPC

gRPC-Go

The Go implementation of gRPC: A high performance, open source, general RPC framework that puts mobile and HTTP/2 first. For more information see the Go gRPC docs, or jump directly into the quick start.

Prerequisites

Installation

With Go module support (Go 1.11+), simply add the following import

import "google.golang.org/grpc"

to your code, and then go [build|run|test] will automatically fetch the necessary dependencies.
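
As a minimal sketch (not the official quick start), a client built on the imported package might look like the following; the address, the insecure credentials, and the generated client constructor mentioned in the comment are illustrative placeholders:

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial a local server without TLS; replace the address and credentials
	// with whatever your deployment actually uses.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("failed to dial: %v", err)
	}
	defer conn.Close()
	// Pass conn to a generated client constructor, e.g. pb.NewGreeterClient(conn).
}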

Otherwise, to install the grpc-go package, run the following command:

$ go get -u google.golang.org/grpc

Note: If you are trying to access grpc-go from China, see the FAQ below.

Learn more

FAQ

I/O Timeout Errors

The golang.org domain may be blocked in some countries. When this happens, go get usually produces an error like the following:

$ go get -u google.golang.org/grpc
package google.golang.org/grpc: unrecognized import path "google.golang.org/grpc" (https fetch: Get https://google.golang.org/grpc?go-get=1: dial tcp 216.239.37.1:443: i/o timeout)

To build Go code, there are several options:

  • Set up a VPN and access google.golang.org through that.

  • Without Go module support: git clone the repo manually:

    git clone https://github.com/grpc/grpc-go.git $GOPATH/src/google.golang.org/grpc

    You will need to do the same for all of grpc's dependencies in golang.org, e.g. golang.org/x/net.

  • With Go module support: it is possible to use the replace feature of go mod to create aliases for golang.org packages. In your project's directory:

    go mod edit -replace=google.golang.org/grpc=github.com/grpc/grpc-go@latest
    go mod tidy
    go mod vendor
    go build -mod=vendor

    Again, this will need to be done for all transitive dependencies hosted on golang.org as well. For details, refer to golang/go issue #28652.

Compiling error, undefined: grpc.SupportPackageIsVersion

If you are using Go modules:

Ensure your gRPC-Go version is required at the appropriate version in the same module containing the generated .pb.go files. For example, SupportPackageIsVersion6 needs v1.27.0, so in your go.mod file:

module <your module name>

require (
    google.golang.org/grpc v1.27.0
)

If you are not using Go modules:

Update the proto package, gRPC package, and rebuild the .proto files:

go get -u github.com/golang/protobuf/{proto,protoc-gen-go}
go get -u google.golang.org/grpc
protoc --go_out=plugins=grpc:. *.proto

How to turn on logging

The default logger is controlled by environment variables. Turn everything on like this:

$ export GRPC_GO_LOG_VERBOSITY_LEVEL=99
$ export GRPC_GO_LOG_SEVERITY_LEVEL=info

The RPC failed with error "code = Unavailable desc = transport is closing"

This error means the connection the RPC is using was closed, and there are many possible reasons, including:

  1. mis-configured transport credentials: the connection failed during the handshake
  2. bytes disrupted, possibly by a proxy in between
  3. server shutdown
  4. Keepalive parameters caused the connection to shut down, for example if you have configured your server to terminate connections regularly to trigger DNS lookups. If this is the case, you may want to increase your MaxConnectionAgeGrace to allow longer RPC calls to finish (see the sketch below).

It can be tricky to debug this because the error happens on the client side but the root cause of the connection being closed is on the server side. Turn on logging on both client and server, and see if there are any transport errors.
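
If keepalive-driven connection recycling (case 4 above) is the suspected cause, a hedged sketch of the relevant server options looks like this; the durations are illustrative, not recommendations:

package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

// newServer recycles connections periodically (for example to trigger fresh DNS
// lookups) while giving in-flight RPCs a grace period to finish.
func newServer() *grpc.Server {
	return grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
		MaxConnectionAge:      30 * time.Minute, // illustrative value
		MaxConnectionAgeGrace: 5 * time.Minute,  // illustrative value
	}))
}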

Comments
  • protoc-gen-go-grpc: API for service registration

    There were some changes in #3657 that make it harder to develop gRPC services and harder to find new unimplemented methods - I wanted to start a discussion around the new default and figure out why the change was made. I do understand this is in an unreleased version, so I figured a discussion would be better than a bug report or feature request.

    From my perspective, this is a number of steps backwards for reasons I will outline below.

    When implementing a gRPC service in Go, I often start with a blank slate - the service has been defined in proto files, the go and gRPC protobuf definitions have been generated, all that's left to do is write the code. I often use something like the following so the compiler will help me along, telling me about missing methods, incorrect signatures, things like that.

    package chat
    func init() {
    	// Ensure that Server implements the ChatIngestServer interface
    	var server *Server = nil
    	var _ pb.ChatIngestServer = server
    }
    

    This can alternatively be done with var _ pb.ChatIngestServer = &Server{} but that theoretically leaves a little bit more memory around at runtime.

    After this, I add all the missing methods myself (returning the unimplemented status) and start adding implementations to them until I have a concrete implementation for all the methods.
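
    For illustration, one of those hand-written stubs might look like the following; the SendMessage method, its request/response types, and the import path of the generated pb package are hypothetical stand-ins:

    package chat

    import (
    	"context"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"

    	pb "path/to/chat/pb" // hypothetical import path of the generated package
    )

    // SendMessage is a hypothetical ChatIngestServer method, stubbed out until a
    // real implementation exists; the compiler still checks its signature.
    func (s *Server) SendMessage(ctx context.Context, req *pb.SendMessageRequest) (*pb.SendMessageResponse, error) {
    	return nil, status.Error(codes.Unimplemented, "SendMessage is not implemented yet")
    }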

    Problems with the new approach

    • As soon as you embed the Unimplemented implementation, the Go compiler gets a lot less useful - it will no longer tell you about missing methods, and because Go interfaces are implicit, if you make a mistake implementing a method (like misspelling its name), you will not find out until runtime. Additionally, because these are public methods, if they're attached to a public struct (like the commonly used Server), they may not be detected as unused methods.
    • If protos and generated files are updated, you will not know about any missing methods until runtime. Personally, I would prefer to know if I have not fully implemented a service at compile time rather than waiting until clients are running against it.
    • IDE hinting is worse with the new changes - IDEs will generally not recommend a method stub for an unimplemented method if it's provided by an embedded struct because even though it's technically implemented, it does not have a working implementation.

    I generally prefer compile time guarantees that all methods are implemented over runtime checks.

    Benefits of the new approach

    • Protos and generated files can be updated without requiring updates to server implementations.

    Proposal

    The requireUnimplementedServers option should default to false. The option is most valuable when dealing with external protobufs that are not versioned (maybe there should be a recommendation to embed the unimplemented struct in that case), but it makes mistakes harder to catch when you are developing a canonical implementation of a service that should implement all the available methods.

    At least for me, the problems with the new approach vastly outweigh the benefits I've seen so far.

  • Support serving web content from the same port

    https://github.com/grpc/grpc-common/blob/master/PROTOCOL-HTTP2.md#appendix-a---grpc-for-protobuf says grpc protobuf mapping uses service names as paths. Would it be possible to serve web content from other urls?

    I see TLS side does alpn, so that's an easy place to hook up.

    Any thoughts about non-TLS? e.g. a service running on localhost. Of course that would mean needing to do a http/1 upgrade negotiation, as then the port could not default to http/2.

    Use case 1: host a web application and its api on the same port.

    Use case 2: serve a health check response.

    Use case 3: serve a "nothing to see here" html page.

    Use case 4: serve a /robots.txt file.
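
    For what it's worth, a common workaround is to multiplex on the Content-Type header and hand gRPC traffic to grpc.Server's ServeHTTP; this is only a hedged sketch of that pattern (plain-text HTTP/2 would additionally require h2c handling), not a built-in feature:

    package main

    import (
    	"net/http"
    	"strings"

    	"google.golang.org/grpc"
    )

    // mixedHandler sends gRPC requests to the gRPC server and everything else
    // (robots.txt, health pages, a web app) to an ordinary HTTP mux on the same port.
    func mixedHandler(grpcServer *grpc.Server, httpMux *http.ServeMux) http.Handler {
    	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		if r.ProtoMajor == 2 && strings.HasPrefix(r.Header.Get("Content-Type"), "application/grpc") {
    			grpcServer.ServeHTTP(w, r)
    			return
    		}
    		httpMux.ServeHTTP(w, r)
    	})
    }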

  • Go 1.7 uses import "context"

    If you're using the new "context" library in your server implementation, it no longer satisfies the generated service interface, because the methods have the wrong type of context.

    Compiling bin/linux.x86_64/wlserver

    asci/cs/tools/whitelist/wlserver

    ./wlserver.go:767: cannot use wls (type *Server) as type WhitelistProto.GoogleWhitelistServer in argument to WhitelistProto.RegisterGoogleWhitelistServer:
    	*Server does not implement WhitelistProto.GoogleWhitelistServer (wrong type for Delete method)
    		have Delete(context.Context, *WhitelistProto.DeleteRequest) (*WhitelistProto.DeleteReply, error)
    		want Delete(context.Context, *WhitelistProto.DeleteRequest) (*WhitelistProto.DeleteReply, error)
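
    The usual fix at the time, sketched below on the assumption that the generated code imports golang.org/x/net/context, was to use that same context package in the handler so the signatures match; the import path of the generated WhitelistProto package is hypothetical:

    package main // or whichever package defines Server

    import (
    	"golang.org/x/net/context" // the same context package the generated .pb.go files import

    	WhitelistProto "path/to/WhitelistProto" // hypothetical import path for the generated package
    )

    // Delete now has exactly the signature GoogleWhitelistServer expects.
    func (s *Server) Delete(ctx context.Context, req *WhitelistProto.DeleteRequest) (*WhitelistProto.DeleteReply, error) {
    	return &WhitelistProto.DeleteReply{}, nil
    }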

  • Connection latency significantly affects throughput

    I am working on a system that uses GRPC to send 1MB blobs between clients and servers and have observed some poor throughput when connection latency is high (180ms round trip is typical between Australia and the USA).

    The same throughput issues are not present when I take GRPC out of the equation. I have prepared a self-contained program that reproduces the problem on a local machine by simulating latency at the net.Listener level. It can run either using GRPC or just plain HTTP/2 POST requests. In each case the payload and latency are the same, but—as you can see from the data below—GRPC becomes several orders of magnitude slower as latency increases.

    The program and related files: https://gist.github.com/adg/641d04ef335782648502cb32a03b2b07

    The output of a typical run:

    $ ./run.sh 
    Duration	Latency	Proto
    
    6.977221ms	0s	GRPC
    4.833989ms	0s	GRPC
    4.714891ms	0s	GRPC
    3.884165ms	0s	GRPC
    5.254322ms	0s	GRPC
    
    8.507352ms	0s	HTTP/2.0
    936.436µs	0s	HTTP/2.0
    453.471µs	0s	HTTP/2.0
    252.786µs	0s	HTTP/2.0
    265.955µs	0s	HTTP/2.0
    
    107.32663ms	1ms	GRPC
    102.51629ms	1ms	GRPC
    100.235617ms	1ms	GRPC
    100.444982ms	1ms	GRPC
    100.881221ms	1ms	GRPC
    
    12.423725ms	1ms	HTTP/2.0
    3.02918ms	1ms	HTTP/2.0
    2.775928ms	1ms	HTTP/2.0
    4.161895ms	1ms	HTTP/2.0
    2.951534ms	1ms	HTTP/2.0
    
    195.731175ms	2ms	GRPC
    190.571784ms	2ms	GRPC
    188.810298ms	2ms	GRPC
    190.593822ms	2ms	GRPC
    190.015888ms	2ms	GRPC
    
    19.18046ms	2ms	HTTP/2.0
    4.663983ms	2ms	HTTP/2.0
    5.45113ms	2ms	HTTP/2.0
    5.56255ms	2ms	HTTP/2.0
    5.582744ms	2ms	HTTP/2.0
    
    378.653747ms	4ms	GRPC
    362.14625ms	4ms	GRPC
    357.95833ms	4ms	GRPC
    364.525646ms	4ms	GRPC
    364.954077ms	4ms	GRPC
    
    33.666184ms	4ms	HTTP/2.0
    8.68926ms	4ms	HTTP/2.0
    10.658349ms	4ms	HTTP/2.0
    10.741361ms	4ms	HTTP/2.0
    10.188444ms	4ms	HTTP/2.0
    
    719.696194ms	8ms	GRPC
    699.807568ms	8ms	GRPC
    703.794127ms	8ms	GRPC
    702.610461ms	8ms	GRPC
    710.592955ms	8ms	GRPC
    
    55.66933ms	8ms	HTTP/2.0
    18.449093ms	8ms	HTTP/2.0
    17.080567ms	8ms	HTTP/2.0
    20.597944ms	8ms	HTTP/2.0
    17.318133ms	8ms	HTTP/2.0
    
    1.415272339s	16ms	GRPC
    1.350923577s	16ms	GRPC
    1.355653965s	16ms	GRPC
    1.338834603s	16ms	GRPC
    1.358419144s	16ms	GRPC
    
    102.133898ms	16ms	HTTP/2.0
    39.144638ms	16ms	HTTP/2.0
    40.82348ms	16ms	HTTP/2.0
    35.133498ms	16ms	HTTP/2.0
    39.516466ms	16ms	HTTP/2.0
    
    2.630821843s	32ms	GRPC
    2.46741086s	32ms	GRPC
    2.507019279s	32ms	GRPC
    2.476177935s	32ms	GRPC
    2.49266693s	32ms	GRPC
    
    179.271675ms	32ms	HTTP/2.0
    72.575954ms	32ms	HTTP/2.0
    67.23265ms	32ms	HTTP/2.0
    70.651455ms	32ms	HTTP/2.0
    67.579124ms	32ms	HTTP/2.0
    

    I theorize that there is something wrong with GRPC's flow control mechanism, but that's just a guess.
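
    (For anyone hitting this today, one hedged mitigation to experiment with is enlarging the client's HTTP/2 flow-control windows; the 1 MB values below are illustrative, not tuned recommendations.)

    package main

    import "google.golang.org/grpc"

    // dialOpts raises the per-stream and per-connection HTTP/2 flow-control
    // windows, which can help keep a high-latency link full.
    func dialOpts() []grpc.DialOption {
    	return []grpc.DialOption{
    		grpc.WithInitialWindowSize(1 << 20),     // per-stream window, in bytes
    		grpc.WithInitialConnWindowSize(1 << 20), // per-connection window, in bytes
    	}
    }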

  • Failed HTTP/2 Parsing StatusCode.Unavailable when calling Streaming RPCs from Golang Server

    Please answer these questions before submitting your issue.

    This is a continuation of https://github.com/grpc/grpc/issues/11586 which I am opening here for better visibility from the grpc-go devs.

    What version of gRPC are you using?

    We are using python grpcio==1.3.5 and grpc-go==v1.4.x. We've also reproduced this on python grpcio==1.4.0

    What version of Go are you using (go version)?

    We're using go version 1.8.1

    What operating system (Linux, Windows, …) and version?

    Ubuntu 14.04

    What did you do?

    If possible, provide a recipe for reproducing the error.

    It happens inconsistently; every so often a streaming RPC will fail with the following error: <_Rendezvous of RPC that terminated with (StatusCode.UNAVAILABLE, Failed parsing HTTP/2)>

    Some grpc logs: E0629 13:45:52.222804121 27606 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.222827355 27606 completion_queue.c:226] Operation failed: tag=0x7f10bbd9ca60, error={"created":"@1498769152.222798356","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.222838571 27606 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.222846339 27606 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cad0, error={"created":"@1498769152.222799406","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.223925299 27603 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.223942312 27603 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c9f0, error={"created":"@1498769152.223918465","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.223949262 27603 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.223979616 27603 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c980, error={"created":"@1498769152.223919439","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.224009309 27603 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.224017226 27603 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c830, error={"created":"@1498769152.223920475","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.224391810 27609 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.224403941 27609 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cc20, error={"created":"@1498769152.224387963","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.556768181 28157 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.556831045 28157 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cb40, error={"created":"@1498769557.556750425","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.557441154 28161 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.557504078 28161 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c830, error={"created":"@1498769557.557416763","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.557563746 28161 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.557608834 28161 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cc20, error={"created":"@1498769557.557420283","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.557649360 28161 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.557694897 28161 completion_queue.c:226] 
Operation failed: tag=0x7f10bbd9c980, error={"created":"@1498769557.557423433","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.558510258 28166 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.558572634 28166 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cd70, error={"created":"@1498769557.558490789","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.558610179 28166 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.558644492 28166 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cec0, error={"created":"@1498769557.558494483","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.559833158 28167 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.559901218 28167 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cad0, error={"created":"@1498769557.559815450","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.635698278 29153 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.635812439 29153 completion_queue.c:226] Operation failed: tag=0x7f108afcb1a0, error={"created":"@1498770706.635668871","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.635887056 29153 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.635944586 29153 completion_queue.c:226] Operation failed: tag=0x7f108afcb210, error={"created":"@1498770706.635675260","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.636461489 29155 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.636525366 29155 completion_queue.c:226] Operation failed: tag=0x7f108afcb130, error={"created":"@1498770706.636440110","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.636556141 29155 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.636585820 29155 completion_queue.c:226] Operation failed: tag=0x7f108afcb360, error={"created":"@1498770706.636443702","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.637721291 29163 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.637791752 29163 completion_queue.c:226] Operation failed: tag=0x7f108afcb0c0, error={"created":"@1498770706.637702529","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.637836300 29163 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.637872014 29163 completion_queue.c:226] Operation failed: tag=0x7f108afcb2f0, error={"created":"@1498770706.637706809","description":"Timed out waiting for connection state 
change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.641194536 29163 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.641241298 29163 completion_queue.c:226] Operation failed: tag=0x7f108afcb050, error={"created":"@1498770706.641178364","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.539497986 29251 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.539555939 29251 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cc90, error={"created":"@1498771717.539483236","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.540536617 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.540601626 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c910, error={"created":"@1498771717.540517372","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.540647559 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.540679773 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cd70, error={"created":"@1498771717.540521809","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.541893786 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.541943420 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9ce50, error={"created":"@1498771717.541871189","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.541982533 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.542009741 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c830, error={"created":"@1498771717.541874944","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.542044730 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.542080406 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c980, error={"created":"@1498771717.541878692","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.543488271 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.543534201 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cad0, error={"created":"@1498771717.543473445","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} <_Rendezvous of RPC that terminated with (StatusCode.UNAVAILABLE, Failed parsing HTTP/2)>

    What did you expect to see?

    The streaming RPC to succeed.

    What did you see instead?

    The streaming RPC failed with the error above.

  • ClientConn is inflexible for client-side LB

    Client side LB was being discussed here: https://groups.google.com/forum/#!searchin/grpc-io/loadbalancing/grpc-io/yqB8sNNHeoo/0Mfu4b2cdaUJ

    We've been considering using GRPC for our new MicroService stack. We are using Etcd+SkyDNS for DNS SRV based service discovery and would like to leverage that for RR-like RPC load balancing between backends.

    However, it seems that the current ClientConn is fairly "single-homed". I thought about implementing an LbClientConn that would aggregate multiple ClientConn, but all the auto-generated code takes ClientConn structure and not a swappable interface.

    Are you planning on doing client-side LB anytime soon? Or maybe ideas or hints on how to make an early stab at it?
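
    (Client-side LB has since been added to grpc-go; purely as a hedged sketch, round-robin over DNS-discovered backends can be enabled like this, with the target address being a placeholder.)

    package main

    import (
    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials"
    )

    // dialRoundRobin spreads RPCs across every address returned by the dns resolver.
    func dialRoundRobin(creds credentials.TransportCredentials) (*grpc.ClientConn, error) {
    	return grpc.Dial(
    		"dns:///my-service.example.com:443", // hypothetical target resolved via DNS
    		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
    		grpc.WithTransportCredentials(creds),
    	)
    }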

  • Instrumentation hooks

    We're currently experimenting with GRPC and wondering how we'll monitor the client code/server code dispatch using Prometheus metrics (should look familiar ;)

    I've been looking for a place in grpc-go to hook up the gathering of ServiceName, MethodName, bytes, and latency data, and found none.

    Reading through the thread in #131 about RPC interceptors, the suggestion is to add the instrumentation in our Application Code (a.k.a. the code implementing the auto-generated Proto interfaces). I see the point about not cluttering the grpc-go implementation and staying implementation agnostic.

    However, adding instrumentation into Application Code means we would need to either:

    a) add a lot of repetitive code inside Application Code to handle instrumentation
    b) use the callFoo pattern proposed in #131 [only applicable to the Client]
    c) add a thin implementation of each Proto-generated interface that wraps the "real" Proto-generated method calls with metrics [only applicable to the Client]

    There are downsides to each solution though:

    a) leads to a lot of clutter and copy-paste errors, some of which will be omitted or badly done
    b) means that we lose the best (IMHO) feature of Proto-generated interfaces: the "natural" syntax that allows for easy mocking in unit tests (through injection of the Proto-generated interface); it is also only applicable on the Client side
    c) is very tedious, because each time we re-generate the Proto (adding a method or a service) we need to manually copy-paste some boilerplate; this would be a huge drag on our coding workflow, since we really want to rely on Proto-generated code as much as possible, and it too is only applicable on the Client side

    I think the cleanest solution would be a pluggable set of callbacks on pre-call/post-call on the client and server that would grant access to the ServiceName, the MethodName, and the RpcContext (provided the latter exposes stats about bytes transferred and the start time of the call). This would allow people to plug in an instrumentation mechanism of their choice (statsd, grafana, Prometheus), and it shouldn't have the performance impact that the interceptors described in #131 could have had (the double serialization/deserialization).

    Having seen how amazingly useful RPC instrumentation was inside Google, I'm sure you've been thinking about solving this in gRPC, and I'm curious to know what you're planning :)
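
    (Interceptors and stats handlers were added to grpc-go after this discussion; purely as a hedged illustration, a unary server interceptor can capture the method-name and latency data described above.)

    package main

    import (
    	"context"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    )

    // metricsInterceptor records the full method name, latency, and error of every
    // unary RPC; swap the log call for the metrics library of your choice.
    func metricsInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
    	start := time.Now()
    	resp, err := handler(ctx, req)
    	log.Printf("method=%s latency=%s err=%v", info.FullMethod, time.Since(start), err)
    	return resp, err
    }

    // Usage: grpc.NewServer(grpc.UnaryInterceptor(metricsInterceptor))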

  • Access to TLS client certificate

    I can't see any way for an RPC method to authenticate a client based on a TLS certificate.

    An example program where an RPC method echoes the client TLS certificate would be great.
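
    (For reference, a hedged sketch of how a handler can read the client certificate via the peer and credentials packages; error handling is deliberately minimal.)

    package main

    import (
    	"context"
    	"crypto/x509"

    	"google.golang.org/grpc/credentials"
    	"google.golang.org/grpc/peer"
    )

    // clientCert returns the verified client certificate for the current RPC, if any.
    func clientCert(ctx context.Context) *x509.Certificate {
    	p, ok := peer.FromContext(ctx)
    	if !ok {
    		return nil
    	}
    	tlsInfo, ok := p.AuthInfo.(credentials.TLSInfo)
    	if !ok || len(tlsInfo.State.PeerCertificates) == 0 {
    		return nil
    	}
    	return tlsInfo.State.PeerCertificates[0]
    }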

  • Document how to use ServeHTTP

    Now that #75 is fixed (via #514), let's add examples on how to use ServeHTTP. The examples were removed from earlier versions of #514 to reduce the size of that change.

    First, though, I'd like to get #545 submitted to clean up the testdata files, since fixing this issue before that would otherwise make things worse.

    /cc @iamqizhao
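
    (Until such examples exist, a hedged sketch of the basic ServeHTTP route: serve the grpc.Server as an http.Handler over TLS; the listen address and certificate paths are placeholders.)

    package main

    import (
    	"net/http"

    	"google.golang.org/grpc"
    )

    // serve exposes the gRPC server through net/http's HTTP/2 stack; gRPC requires
    // TLS (or h2c) for this to work.
    func serve(grpcServer *grpc.Server) error {
    	return http.ListenAndServeTLS(":8443", "server.crt", "server.key", grpcServer)
    }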

  • Unexpected transport closing: too many pings from client

    What version of gRPC are you using?

    583a6303969ea5075e9bd1dc4b75805dfe66989a

    What version of Go are you using (go version)?

    1.10

    What operating system (Linux, Windows, …) and version?

    Linux AMD64, Kernel 4.10

    What did you do?

    When I have the server configured with GZIP compression like so:

    gzcp := grpc.NewGZIPCompressor()
    grpcServer := grpc.NewServer(grpc.RPCCompressor(gzcp))
    

    Then when serving thousands of concurrent requests a second, clients will occasionally be disconnected with

    rpc error: code = Unavailable desc = transport is closing
    

    I see no errors from the server, and both the client and server are far from overloaded (<10% CPU usage, etc). Not all clients are affected at once; it will just be one connection that gets this error.

    While trying to debug this, I disabled GZIP compression so I could more easily look at packet captures. I am unable to reproduce this error once the GZIP compressor is no longer in use.

    This issue is mostly to ask what the best way to proceed with diagnosing the problem is, or if there are any reasons why having a compressor would change the behavior of the system (aside from CPU usage which I don't think is a problem).
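
    (If the GOAWAY reason turns out to be "too many pings", one hedged knob to inspect is the server's keepalive enforcement policy; the values below are illustrative only.)

    package main

    import (
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/keepalive"
    )

    // newServer relaxes the keepalive enforcement policy so that well-behaved but
    // chatty clients are not disconnected with a "too many pings" GOAWAY.
    func newServer() *grpc.Server {
    	return grpc.NewServer(grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
    		MinTime:             10 * time.Second, // minimum ping interval clients may use
    		PermitWithoutStream: true,             // allow pings even when no RPCs are active
    	}))
    }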

  • cmd/protoc-gen-go-grpc: add code generator

    Add a code generator for gRPC services.

    No tests included here. A followup PR updates code generation to use this generator, which acts as a test of the generator as well.

  • Cleanup error formatting and logging

    Mostly add missing colons, but also address a few other minor issues.

    It's confusing to read a nested error message without a colon separating the levels, so that was the main motivation for the change. It happened to me a couple of times: I couldn't find the whole string anywhere in the code and wasted some time on it.

    RELEASE NOTES: none

  • How to check if server implements bidirectional stream method before sending data

    We (Dapr) have a gRPC API that uses a bi-directional stream:

    service ServiceInvocation {
      rpc CallLocalStream (stream InternalInvokeRequestStream) returns (stream InternalInvokeResponseStream) {}
    }
    

    This is a new API that we're adding, and we need to maintain backwards-compatibility so new clients (which implement CallLocalStream) can invoke old servers (which do not implement CallLocalStream), falling back to a unary RPC in that case.

    The problem we have is that when we create the stream on the client, with:

    stream, err := clientV1.CallLocalStream(ctx, opts...)
    if err != nil {
    	return nil, err
    }
    

    Even when the server does not implement CallLocalStream, err is always nil. We get an error (with code "Unimplemented") only after we try to receive a message from the stream.

    This is a problem for us because by the time we (try to) receive a message from the stream, we have already sent data to the server, and that works without errors. That data comes from a readable stream, which we end up consuming along the way, so the data doesn't exist anymore if we then need to fall back to a unary RPC.

    Is there a way to determine earlier if the server implements a bidirectional stream method? Ideally, without having to make a "ping" call which would add latency.

  • transport: send a trailers only-response if no messages or headers are sent

    This PR adds a check for whether a stream was closed with an error. If so, and no messages or headers have been sent, a trailers-only response is sent.

    Fixes: https://github.com/grpc/grpc-go/issues/3125

    RELEASE NOTES:

    • transport: send a trailers only-response if no messages or headers are sent

  • Improve observability of hitting maxConcurrentStream limit

    Use case(s) - what problem will this feature solve?

    In our setup we are seeing latency spikes caused by hitting the maxConcurrentStream limit enforced by the server. The problem is that we had to change the grpc code and rebuild the binary with custom logging to understand that this was the case. I propose improving observability for this, either by exposing a metric or by emitting a log entry.

    Proposed Solution

    The naive solution here would be:

    • Measure the latency of the for loop at https://github.com/grpc/grpc-go/blob/12b8fb52a18c8a1667dde7a4f8087ecdd2abbeaf/internal/transport/http2_client.go#L818-L842
    • If the latency exceeds a hardcoded threshold (let's say 50ms), log it at a higher verbosity level.
      • A similar pattern is widely used, e.g. in kubernetes/client-go for a different waiting time: https://github.com/kubernetes/client-go/blob/e7cd4ba474b5efc2882e377362c9aa8b407428d9/rest/request.go#L613-L615

    This will make debugging easier in some testing environments.

    We can also expose a metric with a histogram of the waiting time, to be able to track this in production environments where we do not enable higher verbosity by default.
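
    (As a stop-gap while observability is improved, the limit itself is configurable on the server; a hedged sketch with an arbitrary value:)

    package main

    import "google.golang.org/grpc"

    // newServer raises the per-connection concurrent-stream limit; once the limit
    // is reached, additional RPCs wait client-side for a free stream, which is the
    // latency source described above.
    func newServer() *grpc.Server {
    	return grpc.NewServer(grpc.MaxConcurrentStreams(1024)) // illustrative value
    }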

    Alternatives Considered

    Additional Context

  • random balancer

    Please see the FAQ in our main README.md before submitting your issue.

    Use case(s) - what problem will this feature solve?

    I have a lot of clients that connect to a small number of servers to long-poll some data periodically, so it's important that each server carries as close to an equal number of connections as possible.

    Proposed Solution

    In this scenario, a random balancer is an easy solution to think about. Although grpc-go supports registering customized balancers, a random balancer is common enough that I created this issue to discuss whether it's possible to add a grpc-go/balancer/random, just like roundrobin, etc.

    p.s. I also found that the pickfirst balancer lives in grpc-go/pickfirst.go rather than in grpc-go/balancer. I'd like to know why all balancer implementations are not in the same directory, and whether pickfirst.go could be moved to grpc-go/balancer so that grpc-go's balancers are organized together.
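
    (Purely as a hedged sketch of what such a balancer could look like on top of the balancer/base helpers; the policy name and structure are illustrative, not an official implementation.)

    package random

    import (
    	"math/rand"

    	"google.golang.org/grpc/balancer"
    	"google.golang.org/grpc/balancer/base"
    )

    // Name is a hypothetical policy name used to select this balancer via service config.
    const Name = "random_example"

    func init() {
    	balancer.Register(base.NewBalancerBuilder(Name, &pickerBuilder{}, base.Config{HealthCheck: true}))
    }

    type pickerBuilder struct{}

    // Build snapshots the READY sub-connections into a picker.
    func (*pickerBuilder) Build(info base.PickerBuildInfo) balancer.Picker {
    	if len(info.ReadySCs) == 0 {
    		return base.NewErrPicker(balancer.ErrNoSubConnAvailable)
    	}
    	scs := make([]balancer.SubConn, 0, len(info.ReadySCs))
    	for sc := range info.ReadySCs {
    		scs = append(scs, sc)
    	}
    	return &picker{subConns: scs}
    }

    type picker struct {
    	subConns []balancer.SubConn
    }

    // Pick chooses a READY sub-connection uniformly at random for each RPC.
    func (p *picker) Pick(balancer.PickInfo) (balancer.PickResult, error) {
    	return balancer.PickResult{SubConn: p.subConns[rand.Intn(len(p.subConns))]}, nil
    }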

    Alternatives Considered

    NA

    Additional Context

    NA

  • Fix header limit exceeded

    This PR fixes https://github.com/grpc/grpc-go/issues/4265 by returning a small hardcoded error message to the client in cases where the header size limit is exceeded, instead of closing the connection without passing any error. This implements the solution proposed in the issue description.

    In https://github.com/grpc/grpc-go/issues/4265, @easwars also suggested a different approach: ignore the header size limit on the server and let the client handle it. Please let me know if I should implement that suggestion instead.
