gRPC-Go

The Go implementation of gRPC: A high performance, open source, general RPC framework that puts mobile and HTTP/2 first. For more information see the Go gRPC docs, or jump directly into the quick start.

Prerequisites

Installation

With Go module support (Go 1.11+), simply add the following import

import "google.golang.org/grpc"

to your code, and then go [build|run|test] will automatically fetch the necessary dependencies.

Otherwise, to install the grpc-go package, run the following command:

$ go get -u google.golang.org/grpc

Note: If you are trying to access grpc-go from China, see the FAQ below.

Learn more

FAQ

I/O Timeout Errors

The golang.org domain may be blocked in some countries. go get usually produces an error like the following when this happens:

$ go get -u google.golang.org/grpc
package google.golang.org/grpc: unrecognized import path "google.golang.org/grpc" (https fetch: Get https://google.golang.org/grpc?go-get=1: dial tcp 216.239.37.1:443: i/o timeout)

To build Go code, there are several options:

  • Set up a VPN and access google.golang.org through that.

  • Without Go module support: git clone the repo manually:

    git clone https://github.com/grpc/grpc-go.git $GOPATH/src/google.golang.org/grpc

    You will need to do the same for all of grpc's dependencies in golang.org, e.g. golang.org/x/net.

  • With Go module support: it is possible to use the replace feature of go mod to create aliases for golang.org packages. In your project's directory:

    go mod edit -replace=google.golang.org/grpc=github.com/grpc/grpc-go@latest
    go mod tidy
    go mod vendor
    go build -mod=vendor

    Again, this will need to be done for all transitive dependencies hosted on golang.org as well. For details, refer to golang/go issue #28652.

Compiling error, undefined: grpc.SupportPackageIsVersion

If you are using Go modules:

Ensure your gRPC-Go version is required at the appropriate version in the same module containing the generated .pb.go files. For example, SupportPackageIsVersion6 needs v1.27.0, so in your go.mod file:

module <your module name>

require (
    google.golang.org/grpc v1.27.0
)

If you are not using Go modules:

Update the proto package, gRPC package, and rebuild the .proto files:

go get -u github.com/golang/protobuf/{proto,protoc-gen-go}
go get -u google.golang.org/grpc
protoc --go_out=plugins=grpc:. *.proto

How to turn on logging

The default logger is controlled by environment variables. Turn everything on like this:

$ export GRPC_GO_LOG_VERBOSITY_LEVEL=99
$ export GRPC_GO_LOG_SEVERITY_LEVEL=info

The RPC failed with error "code = Unavailable desc = transport is closing"

This error means the connection the RPC is using was closed, and there are many possible reasons, including:

  1. misconfigured transport credentials: the connection failed during the handshake
  2. bytes disrupted, possibly by a proxy in between
  3. server shutdown
  4. keepalive parameters caused the connection to shut down, for example if you have configured your server to terminate connections regularly to trigger DNS lookups. If this is the case, you may want to increase your MaxConnectionAgeGrace to allow longer RPC calls to finish (see the sketch after this list).
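
A minimal sketch of the server-side keepalive knobs mentioned in item 4, using the keepalive package; the durations are placeholders, not recommendations:

    import (
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/keepalive"
    )

    func newServer() *grpc.Server {
        return grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
            MaxConnectionAge:      5 * time.Minute, // recycle connections periodically, e.g. to pick up DNS changes
            MaxConnectionAgeGrace: 1 * time.Minute, // give in-flight RPCs this long to finish before the connection closes
        }))
    }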

It can be tricky to debug this because the error happens on the client side but the root cause of the connection being closed is on the server side. Turn on logging on both client and server, and see if there are any transport errors.

Comments
  • protoc-gen-go-grpc: API for service registration

    There were some changes in #3657 that make it harder to develop gRPC services and harder to find new unimplemented methods - I wanted to start a discussion around the new default and figure out why the change was made. I do understand this is in an unreleased version, so I figured a discussion would be better than a bug report or feature request.

    From my perspective, this is a number of steps backwards for reasons I will outline below.

    When implementing a gRPC service in Go, I often start with a blank slate - the service has been defined in proto files, the go and gRPC protobuf definitions have been generated, all that's left to do is write the code. I often use something like the following so the compiler will help me along, telling me about missing methods, incorrect signatures, things like that.

    package chat
    func init() {
    	// Ensure that Server implements the ChatIngestServer interface
    	var server *Server = nil
    	var _ pb.ChatIngestServer = server
    }
    

    This can alternatively be done with var _ pb.ChatIngestServer = &Server{} but that theoretically leaves a little bit more memory around at runtime.

    After this, I add all the missing methods myself (returning the unimplemented status) and start adding implementations to them until I have a concrete implementation for all the methods.
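
    For contrast, a rough sketch of what the generator's embedding approach looks like; the UnimplementedChatIngestServer name is assumed from the usual Unimplemented<ServiceName>Server convention, and pb is the generated package from the snippet above:

    package chat

    type Server struct {
    	// Embedding the generated stub provides default "unimplemented"
    	// implementations for every method, so the compiler no longer reports
    	// missing or misspelled methods on *Server.
    	pb.UnimplementedChatIngestServer
    }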

    Problems with the new approach

    • As soon as you embed the Unimplemented implementation, the Go compiler becomes much less useful: it no longer tells you about missing methods, and because Go interfaces are implicit, a mistake in implementing a method (like misspelling a method name) will not surface until runtime. Additionally, because these are public methods attached to a public struct (like the commonly used Server), they may not be detected as unused methods.
    • If protos and generated files are updated, you will not know about any missing methods until runtime. Personally, I would prefer to know if I have not fully implemented a service at compile time rather than waiting until clients are running against it.
    • IDE hinting is worse with the new changes - IDEs will generally not recommend a method stub for an unimplemented method if it's provided by an embedded struct because even though it's technically implemented, it does not have a working implementation.

    I generally prefer compile time guarantees that all methods are implemented over runtime checks.

    Benefits of the new approach

    • Protos and generated files can be updated without requiring updates to server implementations.

    Proposal

    The requireUnimplementedServers option should default to false. Requiring the embedded struct is more valuable when dealing with external protobufs which are not versioned (maybe there should be a recommendation to embed the unimplemented struct in that case), and it makes it harder to catch mistakes if you are developing a canonical implementation of a service that should implement all the available methods.

    At least for me, the problems with the new approach vastly outweigh the benefits I've seen so far.

  • Support serving web content from the same port

    https://github.com/grpc/grpc-common/blob/master/PROTOCOL-HTTP2.md#appendix-a---grpc-for-protobuf says grpc protobuf mapping uses service names as paths. Would it be possible to serve web content from other urls?

    I see the TLS side does ALPN, so that's an easy place to hook in.

    Any thoughts about non-TLS? e.g. a service running on localhost. Of course that would mean doing an HTTP/1 upgrade negotiation, since the port could not then default to HTTP/2.

    Use case 1: host a web application and its api on the same port.

    Use case 2: serve a health check response.

    Use case 3: serve a "nothing to see here" html page.

    Use case 4: serve a /robots.txt file.
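
    One common way to do this today is to let the gRPC server act as an http.Handler and route on the request's content type; a rough sketch (mixedHandler and webMux are illustrative names, not part of any API), which over TLS relies on ALPN negotiating HTTP/2 for both kinds of traffic:

    import (
        "net/http"
        "strings"

        "google.golang.org/grpc"
    )

    func mixedHandler(grpcServer *grpc.Server, webMux http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.ProtoMajor == 2 && strings.HasPrefix(r.Header.Get("Content-Type"), "application/grpc") {
                grpcServer.ServeHTTP(w, r) // gRPC traffic
                return
            }
            webMux.ServeHTTP(w, r) // health checks, robots.txt, the web app, ...
        })
    }

    Over plain TCP this only helps if non-gRPC clients can also speak HTTP/2 (e.g. h2c); otherwise a separate port, or a listener-level multiplexer such as cmux, is the usual workaround.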

  • Go 1.7 uses import "context"

    If you're using the new "context" package, the server implementation no longer matches the generated interface, because its methods take the wrong type of context.

    Compiling bin/linux.x86_64/wlserver

    asci/cs/tools/whitelist/wlserver

    ./wlserver.go:767: cannot use wls (type *Server) as type WhitelistProto.GoogleWhitelistServer in argument to WhitelistProto.RegisterGoogleWhitelistServer:
        *Server does not implement WhitelistProto.GoogleWhitelistServer (wrong type for Delete method)
            have Delete(context.Context, *WhitelistProto.DeleteRequest) (*WhitelistProto.DeleteReply, error)
            want Delete(context.Context, *WhitelistProto.DeleteRequest) (*WhitelistProto.DeleteReply, error)
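
    A side note for anyone hitting this today: since Go 1.9, golang.org/x/net/context.Context is a type alias for the standard library's context.Context, so either import satisfies the generated interface. On older toolchains the implementation has to import the same context package the generated code uses; a minimal sketch, with the WhitelistProto names taken from the error above:

    import (
        "golang.org/x/net/context" // match the context type used by the generated code
    )

    func (s *Server) Delete(ctx context.Context, req *WhitelistProto.DeleteRequest) (*WhitelistProto.DeleteReply, error) {
        // ... actual deletion logic ...
        return &WhitelistProto.DeleteReply{}, nil
    }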

  • Connection latency significantly affects throughput

    I am working on a system that uses GRPC to send 1MB blobs between clients and servers and have observed some poor throughput when connection latency is high (180ms round trip is typical between Australia and the USA).

    The same throughput issues are not present when I take GRPC out of the equation. I have prepared a self-contained program that reproduces the problem on a local machine by simulating latency at the net.Listener level. It can run either using GRPC or just plain HTTP/2 POST requests. In each case the payload and latency are the same, but—as you can see from the data below—GRPC becomes several orders of magnitude slower as latency increases.

    The program and related files: https://gist.github.com/adg/641d04ef335782648502cb32a03b2b07

    The output of a typical run:

    $ ./run.sh 
    Duration	Latency	Proto
    
    6.977221ms	0s	GRPC
    4.833989ms	0s	GRPC
    4.714891ms	0s	GRPC
    3.884165ms	0s	GRPC
    5.254322ms	0s	GRPC
    
    8.507352ms	0s	HTTP/2.0
    936.436µs	0s	HTTP/2.0
    453.471µs	0s	HTTP/2.0
    252.786µs	0s	HTTP/2.0
    265.955µs	0s	HTTP/2.0
    
    107.32663ms	1ms	GRPC
    102.51629ms	1ms	GRPC
    100.235617ms	1ms	GRPC
    100.444982ms	1ms	GRPC
    100.881221ms	1ms	GRPC
    
    12.423725ms	1ms	HTTP/2.0
    3.02918ms	1ms	HTTP/2.0
    2.775928ms	1ms	HTTP/2.0
    4.161895ms	1ms	HTTP/2.0
    2.951534ms	1ms	HTTP/2.0
    
    195.731175ms	2ms	GRPC
    190.571784ms	2ms	GRPC
    188.810298ms	2ms	GRPC
    190.593822ms	2ms	GRPC
    190.015888ms	2ms	GRPC
    
    19.18046ms	2ms	HTTP/2.0
    4.663983ms	2ms	HTTP/2.0
    5.45113ms	2ms	HTTP/2.0
    5.56255ms	2ms	HTTP/2.0
    5.582744ms	2ms	HTTP/2.0
    
    378.653747ms	4ms	GRPC
    362.14625ms	4ms	GRPC
    357.95833ms	4ms	GRPC
    364.525646ms	4ms	GRPC
    364.954077ms	4ms	GRPC
    
    33.666184ms	4ms	HTTP/2.0
    8.68926ms	4ms	HTTP/2.0
    10.658349ms	4ms	HTTP/2.0
    10.741361ms	4ms	HTTP/2.0
    10.188444ms	4ms	HTTP/2.0
    
    719.696194ms	8ms	GRPC
    699.807568ms	8ms	GRPC
    703.794127ms	8ms	GRPC
    702.610461ms	8ms	GRPC
    710.592955ms	8ms	GRPC
    
    55.66933ms	8ms	HTTP/2.0
    18.449093ms	8ms	HTTP/2.0
    17.080567ms	8ms	HTTP/2.0
    20.597944ms	8ms	HTTP/2.0
    17.318133ms	8ms	HTTP/2.0
    
    1.415272339s	16ms	GRPC
    1.350923577s	16ms	GRPC
    1.355653965s	16ms	GRPC
    1.338834603s	16ms	GRPC
    1.358419144s	16ms	GRPC
    
    102.133898ms	16ms	HTTP/2.0
    39.144638ms	16ms	HTTP/2.0
    40.82348ms	16ms	HTTP/2.0
    35.133498ms	16ms	HTTP/2.0
    39.516466ms	16ms	HTTP/2.0
    
    2.630821843s	32ms	GRPC
    2.46741086s	32ms	GRPC
    2.507019279s	32ms	GRPC
    2.476177935s	32ms	GRPC
    2.49266693s	32ms	GRPC
    
    179.271675ms	32ms	HTTP/2.0
    72.575954ms	32ms	HTTP/2.0
    67.23265ms	32ms	HTTP/2.0
    70.651455ms	32ms	HTTP/2.0
    67.579124ms	32ms	HTTP/2.0
    

    I theorize that there is something wrong with GRPC's flow control mechanism, but that's just a guess.
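
    If flow control is the limiting factor, grpc-go exposes dial options to enlarge the per-stream and per-connection windows; a minimal sketch with placeholder sizes (newer releases also grow windows dynamically via BDP estimation, so results should be measured rather than assumed):

    import "google.golang.org/grpc"

    func dialWithLargerWindows(addr string) (*grpc.ClientConn, error) {
        return grpc.Dial(addr,
            grpc.WithInsecure(),                   // plaintext, matching the local benchmark setup
            grpc.WithInitialWindowSize(1<<20),     // 1 MiB per-stream flow-control window
            grpc.WithInitialConnWindowSize(1<<20), // 1 MiB per-connection flow-control window
        )
    }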

  • Failed HTTP/2 Parsing StatusCode.Unavailable when calling Streaming RPCs from Golang Server

    Please answer these questions before submitting your issue.

    This is a continuation of https://github.com/grpc/grpc/issues/11586 which I am opening here for better visibility from the grpc-go devs.

    What version of gRPC are you using?

    We are using python grpcio==1.3.5 and grpc-go==v1.4.x. We've also reproduced this on python grpcio==1.4.0

    What version of Go are you using (go version)?

    We're using go version 1.8.1

    What operating system (Linux, Windows, …) and version?

    Ubuntu 14.04

    What did you do?

    If possible, provide a recipe for reproducing the error. Happens inconsistently, every so often a streaming RPC will fail with the following error: <_Rendezvous of RPC that terminated with (StatusCode.UNAVAILABLE, Failed parsing HTTP/2)>

    Some grpc logs: E0629 13:45:52.222804121 27606 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.222827355 27606 completion_queue.c:226] Operation failed: tag=0x7f10bbd9ca60, error={"created":"@1498769152.222798356","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.222838571 27606 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.222846339 27606 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cad0, error={"created":"@1498769152.222799406","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.223925299 27603 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.223942312 27603 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c9f0, error={"created":"@1498769152.223918465","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.223949262 27603 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.223979616 27603 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c980, error={"created":"@1498769152.223919439","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.224009309 27603 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.224017226 27603 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c830, error={"created":"@1498769152.223920475","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.224391810 27609 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.224403941 27609 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cc20, error={"created":"@1498769152.224387963","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.556768181 28157 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.556831045 28157 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cb40, error={"created":"@1498769557.556750425","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.557441154 28161 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.557504078 28161 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c830, error={"created":"@1498769557.557416763","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.557563746 28161 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.557608834 28161 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cc20, error={"created":"@1498769557.557420283","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.557649360 28161 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.557694897 28161 completion_queue.c:226] 
Operation failed: tag=0x7f10bbd9c980, error={"created":"@1498769557.557423433","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.558510258 28166 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.558572634 28166 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cd70, error={"created":"@1498769557.558490789","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.558610179 28166 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.558644492 28166 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cec0, error={"created":"@1498769557.558494483","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.559833158 28167 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.559901218 28167 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cad0, error={"created":"@1498769557.559815450","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.635698278 29153 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.635812439 29153 completion_queue.c:226] Operation failed: tag=0x7f108afcb1a0, error={"created":"@1498770706.635668871","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.635887056 29153 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.635944586 29153 completion_queue.c:226] Operation failed: tag=0x7f108afcb210, error={"created":"@1498770706.635675260","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.636461489 29155 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.636525366 29155 completion_queue.c:226] Operation failed: tag=0x7f108afcb130, error={"created":"@1498770706.636440110","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.636556141 29155 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.636585820 29155 completion_queue.c:226] Operation failed: tag=0x7f108afcb360, error={"created":"@1498770706.636443702","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.637721291 29163 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.637791752 29163 completion_queue.c:226] Operation failed: tag=0x7f108afcb0c0, error={"created":"@1498770706.637702529","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.637836300 29163 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.637872014 29163 completion_queue.c:226] Operation failed: tag=0x7f108afcb2f0, error={"created":"@1498770706.637706809","description":"Timed out waiting for connection state 
change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.641194536 29163 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.641241298 29163 completion_queue.c:226] Operation failed: tag=0x7f108afcb050, error={"created":"@1498770706.641178364","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.539497986 29251 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.539555939 29251 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cc90, error={"created":"@1498771717.539483236","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.540536617 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.540601626 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c910, error={"created":"@1498771717.540517372","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.540647559 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.540679773 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cd70, error={"created":"@1498771717.540521809","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.541893786 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.541943420 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9ce50, error={"created":"@1498771717.541871189","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.541982533 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.542009741 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c830, error={"created":"@1498771717.541874944","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.542044730 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.542080406 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c980, error={"created":"@1498771717.541878692","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.543488271 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.543534201 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cad0, error={"created":"@1498771717.543473445","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} <_Rendezvous of RPC that terminated with (StatusCode.UNAVAILABLE, Failed parsing HTTP/2)>

    What did you expect to see?

    The streaming RPC to succeed.

    What did you see instead?

    The streaming RPC failed.

  • ClientConn is inflexible for client-side LB

    Client side LB was being discussed here: https://groups.google.com/forum/#!searchin/grpc-io/loadbalancing/grpc-io/yqB8sNNHeoo/0Mfu4b2cdaUJ

    We've been considering using GRPC for our new MicroService stack. We are using Etcd+SkyDNS for DNS SRV based service discovery and would like to leverage that for RR-like RPC load balancing between backends.

    However, it seems that the current ClientConn is fairly "single-homed". I thought about implementing an LbClientConn that would aggregate multiple ClientConn, but all the auto-generated code takes ClientConn structure and not a swappable interface.

    Are you planning on doing client-side LB anytime soon? Or maybe ideas or hints on how to make an early stab at it?
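
    For reference, a minimal sketch of how round-robin balancing over DNS-discovered backends can be requested with the resolver/balancer API that grpc-go later gained; the target below is a placeholder:

    import "google.golang.org/grpc"

    func dialRoundRobin() (*grpc.ClientConn, error) {
        return grpc.Dial(
            "dns:///my-service.example.com:50051", // the dns resolver returns all A/AAAA records for the name
            grpc.WithInsecure(),
            grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
        )
    }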

  • Instrumentation hooks

    We're currently experimenting with GRPC and wondering how we'll monitor the client code/server code dispatch using Prometheus metrics (should look familiar ;)

    I've been looking for a place in the grpc-go to be able to hook up gathering of ServiceName, MethodName, bytes, latency data, and found none.

    Reading through the thread in #131 about RPC interceptors, it is suggested to add the instrumentation in our Application Code (a.k.a. the code implementing the auto-generated Proto interfaces). I see the point about not cluttering the grpc-go implementation and being implementation agnostic.

    However, adding instrumentation into Application Code means that we need to either: a) add a lot of repetitive code inside Application Code to handle instrumentation, b) use the callFoo pattern proposed in #131 [only applicable to the Client], or c) add a thin implementation of each Proto-generated interface that wraps the "real" Proto-generated method calls with metrics [only applicable to the Client].

    There are downsides to each solution though: a) leads to a lot of clutter and copy-paste errors, and some of the instrumentation will be omitted or badly done; b) means that we lose the best (IMHO) feature of Proto-generated interfaces, the "natural" syntax that allows for easy mocking in unit tests (through injection of the Proto-generated Interface), and is only applicable on the Client side; c) is very tedious because each time we regenerate the Proto (add a method or a service) we need to go and manually copy-paste some boilerplate. This would be a huge drag on our coding workflow, since we really want to rely on Proto-generated code as much as possible. And it is also only applicable on the Client side.

    I think the cleanest solution would be a pluggable set of callbacks on pre-call/post-call on client/server that would grant access to ServiceName, MethodName and RpcContext (provided the latter exposes stats about bytes transferred and the start time of the call). This would allow people to plug in an instrumentation mechanism of their choice (statsd, grafana, Prometheus), and shouldn't have the performance impact that the interceptors described in #131 could have had (the double serialization/deserialization).

    Having seen how amazingly useful RPC instrumentation was inside Google, I'm sure you've been thinking about solving this in gRPC, and I'm curious to know what you're planning :)
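
    For reference, a minimal sketch of the kind of hook that grpc-go's later interceptor API makes possible on the server side; record is a stand-in for whatever metrics backend is in use:

    import (
        "context"
        "time"

        "google.golang.org/grpc"
    )

    func metricsUnaryInterceptor(record func(method string, err error, d time.Duration)) grpc.UnaryServerInterceptor {
        return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
            start := time.Now()
            resp, err := handler(ctx, req)                  // call the actual service method
            record(info.FullMethod, err, time.Since(start)) // info.FullMethod is e.g. "/pkg.Service/Method"
            return resp, err
        }
    }

    It is installed with grpc.NewServer(grpc.UnaryInterceptor(metricsUnaryInterceptor(record))); a stats.Handler can additionally capture byte counts.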

  • Access to TLS client certificate

    I can't see any way for an RPC method to authenticate a client based on a TLS certificate.

    An example program where an RPC method echoes the client TLS certificate would be great.
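
    A minimal sketch of how a handler can read the client's certificate via the peer and credentials packages; it assumes the server's TLS config requests and verifies client certificates (tls.Config.ClientAuth):

    import (
        "context"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/credentials"
        "google.golang.org/grpc/peer"
        "google.golang.org/grpc/status"
    )

    func clientCommonName(ctx context.Context) (string, error) {
        p, ok := peer.FromContext(ctx)
        if !ok {
            return "", status.Error(codes.Unauthenticated, "no peer information")
        }
        tlsInfo, ok := p.AuthInfo.(credentials.TLSInfo)
        if !ok || len(tlsInfo.State.PeerCertificates) == 0 {
            return "", status.Error(codes.Unauthenticated, "no client certificate presented")
        }
        return tlsInfo.State.PeerCertificates[0].Subject.CommonName, nil // the client's leaf certificate
    }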

  • Document how to use ServeHTTP

    Now that #75 is fixed (via #514), let's add examples on how to use ServeHTTP. The examples were removed from earlier versions of #514 to reduce the size of that change.

    First, I'd like to get #545 submitted to clean up the testdata files, since fixing this issue before that would otherwise make the testdata situation worse.

    /cc @iamqizhao

  • Unexpected transport closing: too many pings from client

    What version of gRPC are you using?

    583a6303969ea5075e9bd1dc4b75805dfe66989a

    What version of Go are you using (go version)?

    1.10

    What operating system (Linux, Windows, …) and version?

    Linux AMD64, Kernel 4.10

    What did you do?

    When I have the server configured with GZIP compression like so:

    gzcp := grpc.NewGZIPCompressor()
    grpcServer := grpc.NewServer(grpc.RPCCompressor(gzcp))
    

    Then when serving thousands of concurrent requests a second, clients will occasionally be disconnected with

    rpc error: code = Unavailable desc = transport is closing
    

    I see no errors from the server, and both the client and server are far from overloaded (<10% CPU usage etc). Not all clients are affected at once; it will just be one connection which gets this error.

    While trying to debug this, I disabled GZIP compression so I could more easily look at packet captures. I am unable to reproduce this error once the GZIP compressor is no longer in use.

    This issue is mostly to ask what the best way to proceed with diagnosing the problem is, or if there are any reasons why having a compressor would change the behavior of the system (aside from CPU usage which I don't think is a problem).

  • cmd/protoc-gen-go-grpc: add code generator

    Add a code generator for gRPC services.

    No tests included here. A followup PR updates code generation to use this generator, which acts as a test of the generator as well.

  • Deprecate use of `ioutil` package

    Use of all ioutil functions was deprecated in Go versions 1.16 and 1.17. Some functions were moved to package io while others were moved to package os.

    SUMMARY OF CHANGES:

    • Replace ioutil.ReadFile with os.ReadFile
    • Replace ioutil.Discard with io.Discard
    • Replace ioutil.ReadAll with io.ReadAll
    • Replace ioutil.WriteFile with os.WriteFile
    • Replace ioutil.TempDir with os.MkdirTemp
    • Replace ioutil.TempFile with os.CreateTemp
    • Replace ioutil.NopCloser with io.NopCloser
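
    As a small illustration of how mechanical these substitutions are (the path is a placeholder):

    import (
        "bytes"
        "io"
        "os"
    )

    func readConfig(path string) (io.ReadCloser, error) {
        data, err := os.ReadFile(path) // was ioutil.ReadFile
        if err != nil {
            return nil, err
        }
        return io.NopCloser(bytes.NewReader(data)), nil // was ioutil.NopCloser
    }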

    Resolves #5897

    RELEASE NOTES: N/A

  • Fix header limit exceeded

    This PR fixes https://github.com/grpc/grpc-go/issues/4265 by returning a small hardcoded error message to the client in cases where the header size limit is exceeded, instead of closing the connection without passing any error. This implements the proposed solution from the issue description.

    In https://github.com/grpc/grpc-go/issues/4265, @easwars also suggested a different approach: ignore the header size limit on the server and let the client handle it. Please let me know if I should implement this suggestion instead.

  • transport: fix severity of log when receiving a GOAWAY with error code ENHANCE_YOUR_CALM

    According to A8

    When a client receives a GOAWAY with error code ENHANCE_YOUR_CALM and debug
    data equal to ASCII "too_many_pings", it should log the occurrence at a log level
    that is enabled by default and double the configured KEEPALIVE_TIME used for
    new connections on that channel.
    

    But we are logging this at INFO and verbosity level 2. https://github.com/grpc/grpc-go/blob/12b8fb52a18c8a1667dde7a4f8087ecdd2abbeaf/internal/transport/http2_client.go#L1258
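
    For context, too_many_pings GOAWAYs are usually the result of the client pinging more often than the server's enforcement policy allows; a minimal sketch of the two sides of that negotiation, with placeholder durations:

    import (
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/keepalive"
    )

    // Client side: how aggressively keepalive pings are sent.
    var dialOpt = grpc.WithKeepaliveParams(keepalive.ClientParameters{
        Time:                30 * time.Second, // ping after 30s without activity
        Timeout:             10 * time.Second,
        PermitWithoutStream: true,
    })

    // Server side: the ping rate tolerated before ENHANCE_YOUR_CALM is sent.
    var serverOpt = grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
        MinTime:             20 * time.Second, // reject pings that arrive more often than this
        PermitWithoutStream: true,
    })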

  • priority: improve and reduce verbosity of logs

    Minor improvements to priority LB policy's logging:

    • Start log lines with upper case wherever possible
    • Downgrade commonly occurring log lines from Info to Debug. The latter is nothing but Info at verbosity 2.
    • Change some Warning lines to Debug. These log lines were not pointing to any problem, but were occurring so frequently that they were causing log spam.

    RELEASE NOTES: none

  • Add an example to illustrate the use of `authz` package

    We have an authz implementation which is split into the API and the engine.

    The API supports two ways of specifying the authorization policy: as a static string, or as a file to watch. The second method supports online updates to the policy.

    We should have examples which illustrate the use of both.

    Existing tests can serve as good starting point to understand the usage of the API.
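
    A rough sketch of what the static-string example could look like; the authz.NewStatic constructor, the interceptor method names, and the policy schema below are recalled from the authz API and should be treated as assumptions to verify against the package docs:

    import (
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/authz"
    )

    const policy = `{
      "name": "example_policy",
      "allow_rules": [
        { "name": "allow_all", "request": { "paths": ["*"] } }
      ]
    }`

    func newAuthorizedServer() *grpc.Server {
        interceptor, err := authz.NewStatic(policy)
        if err != nil {
            log.Fatalf("failed to create authz interceptor: %v", err)
        }
        return grpc.NewServer(
            grpc.ChainUnaryInterceptor(interceptor.UnaryInterceptor),
            grpc.ChainStreamInterceptor(interceptor.StreamInterceptor),
        )
    }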
