Best microservices framework in Go, like Alibaba Dubbo, but with more features. Scales easily.

  • stable branch: v1.6.x
  • development branch: master

Official site: http://rpcx.io


Notice: etcd

The etcd plugin has been moved to rpcx-etcd.

Announce

A tcpdump-like tool has been added: rpcxdump. You can use it to debug communication between rpcx services and clients.

Cross-Languages

You can use programming languages other than Go to access rpcx services.

  • rpcx-gateway: you can write clients in any programming language to call rpcx services via rpcx-gateway
  • http invoke: you can use plain HTTP requests to access the rpcx gateway
  • Java Services/Clients: you can use rpcx-java to implement/access rpcx services via the raw protocol

If you can write Go methods, you can also write RPC services. Writing RPC applications with rpcx is that easy.

Installation

Install the basic features:

go get -v github.com/smallnest/rpcx/...

If you want to use the quic or kcp transport, pass these tags to go get, go build or go run. For example, to enable all features:

go get -v -tags "quic kcp" github.com/smallnest/rpcx/...

tags:

  • quic: support quic transport
  • kcp: support kcp transport
  • ping: support network quality load balancing
  • utp: support utp transport

Which companies are using rpcx?

Features

rpcx is an RPC framework like Alibaba Dubbo and Weibo Motan.

rpcx was created with these goals:

  1. Simple: easy to learn, easy to develop, easy to integrate and easy to deploy
  2. Performance: high performance (>= grpc-go)
  3. Cross-platform: supports raw byte slices, JSON, Protobuf and MessagePack. Theoretically it can be used with Java, PHP, Python, C/C++, Node.js, C# and other platforms
  4. Service discovery and service governance: supports zookeeper, etcd and consul

It provides the following features:

  • Support raw Go functions. There's no need to define proto files.
  • Pluggable. Features can be extended such as service discovery, tracing.
  • Support TCP, HTTP, QUIC and KCP transports.
  • Support multiple codecs such as JSON, Protobuf, MessagePack and raw bytes.
  • Service discovery. Support peer2peer, configured peers, zookeeper, etcd, consul and mDNS.
  • Fault tolerance: Failover, Failfast, Failtry.
  • Load balancing: supports Random, RoundRobin, Consistent hashing, Weighted, network quality and Geography.
  • Support Compression.
  • Support passing metadata.
  • Support Authorization.
  • Support heartbeat and one-way request.
  • Other features: metrics, log, timeout, alias, circuit breaker.
  • Support bidirectional communication.
  • Support access via HTTP so you can write clients in any programming languages.
  • Support API gateway.
  • Support backup request, forking and broadcast.

rpcx uses a binary, platform-independent protocol, which means you can develop services in other languages such as Java, Python or Node.js, and you can use other programming languages to invoke services developed in Go.

There is a UI manager: rpcx-ui.

Performance

Test results show rpcx performs better than the other RPC frameworks, except the standard library's rpc package.

The benchmark code is at rpcx-benchmark.

Listen to others, but test by yourself.

Test Environment

  • CPU: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 32 cores
  • Memory: 32G
  • Go: 1.9.0
  • OS: CentOS 7 / 3.10.0-229.el7.x86_64

Use

  • protobuf
  • the client and the server on the same server
  • 581 bytes payload
  • 500/2000/5000 concurrent clients
  • mock processing time: 0ms, 10ms and 30ms

Test Result

mock 0ms process time

(charts: Throughput, Mean Latency, P99 Latency)

mock 10ms process time

(charts: Throughput, Mean Latency, P99 Latency)

mock 30ms process time

(charts: Throughput, Mean Latency, P99 Latency)

Examples

You can find all examples at rpcxio/rpcx-examples.

Below is a simple example.

Server

    // define example.Arith
    ……

    s := server.NewServer()
    s.RegisterName("Arith", new(example.Arith), "")
    s.Serve("tcp", addr)

Client

    // prepare requests
    ……

    d := client.NewPeer2PeerDiscovery("tcp@"+addr, "")
    xclient := client.NewXClient("Arith", client.Failtry, client.RandomSelect, d, client.DefaultOption)
    defer xclient.Close()
    err := xclient.Call(context.Background(), "Mul", args, reply, nil)

Contribute

See contributors.

Welcome to contribute:

  • submit issues or requirements
  • send PRs
  • write projects to use rpcx
  • write tutorials or articles to introduce rpcx

License

Apache License, Version 2.0

Owner

smallnest, author of 《Scala Collections Cookbook》

Comments
  • Found a memory-overwrite bug

    Concurrent calls through the Go interface reveal a memory-overwrite bug. Scenario: function A calls Go, and function B receives the results from a channel:

    tempChan := make(chan *client.Call, 10)
    func A(){
      for{
        oneClient.Go(context.Background(), "A", "B", a, r, tempChan)
      }
    }
    func B(){
      for{
        temp := <-tempChan
      }
    }
    
    type Call struct {
    	ServicePath   string            // The name of the service and method to call.
    	ServiceMethod string            // The name of the service and method to call.
    	Metadata      map[string]string //metadata
    	ResMetadata   map[string]string
    	Args          interface{} // The argument to the function (*struct).
    	Reply         interface{} // The reply from the function (*struct).
    	Error         error       // After completion, the error status.
    	Done          chan *Call  // Strobes when call is complete.
    	Raw           bool        // raw message or not
    }
    

If function B does not receive messages promptly and the client's input loop receives several messages in a row, the content of the second message overwrites the content of the first.

Root cause: the strings in Call reference the bytes' memory directly via SliceByteToString. In func (m *Message) Decode(r io.Reader), if the existing data buffer is reused instead of allocating a new one with make, the second message's content overwrites the previous message's:

    if cap(m.data) >= totalL { // reuse data
    	m.data = m.data[:totalL]
    } else {
    	m.data = make([]byte, totalL)
    }
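The aliasing described above can be reproduced with a few lines of standard-library Go. This is only an illustration of the failure mode, not rpcx's actual code: a string produced by a zero-copy byte-to-string conversion silently changes when the underlying buffer is reused, while a normally converted string does not.

```go
package main

import (
	"fmt"
	"unsafe"
)

// sliceByteToString aliases the slice's backing memory instead of copying it,
// mimicking the zero-copy conversion mentioned in the report above.
func sliceByteToString(b []byte) string {
	return *(*string)(unsafe.Pointer(&b))
}

func main() {
	buf := []byte("first ")           // buffer holding message #1
	aliased := sliceByteToString(buf) // shares buf's memory
	copied := string(buf)             // owns its own memory

	copy(buf, "second") // the buffer is reused for message #2

	fmt.Println(aliased) // "second": the aliased string was overwritten
	fmt.Println(copied)  // "first ": the copy is unaffected
}
```

This is why reusing m.data without copying the strings out corrupts earlier Call results when a second message arrives before the first is consumed.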

  • Can XClient.Call take args and reply directly as []byte?

    I ran into this problem using rpcx: the data flow is client -> agent -> rpc (protobuf). The agent receives a client request and parses out the service name, the method name and the marshaled args ([]byte). To forward the request to the rpc service, I have to unmarshal those bytes back into an args object before calling XClient.Call: Call(ctx context.Context, serviceMethod string, args interface{}, reply interface{}) error. The problem this creates: the agent has to unmarshal the args object by reflection based on the service and method names, and whenever a new rpc service is added the agent must be updated too, which is a pain!

    My question: could an API be added to XClient, similar to Call, but with args and reply typed as []byte?

    BCall(ctx context.Context, serviceMethod string, args []byte, reply []byte, marshalType int32) error

    That way the rpc client would pass the binary data straight through to the rpc server, and the server-side framework would restore args into the struct the controller needs according to marshalType before running the business logic…

    Please advise!

  • The client stalls after about 10000 requests

    https://github.com/smallnest/rpcx/issues/5

    But I don't know how to fix it. I'm using rpcx the way the example does:

    d := client.NewPeer2PeerDiscovery("tcp@"+serverAddress+":"+cast.ToString(port), "")
    oneClient := client.NewOneClient(client.Failtry, client.RandomSelect, d, client.DefaultOption)

    Only one client is used globally.

    I saw you mention a connection pool, but I couldn't find an example of it.

    My server is started like this:

    server := server.NewServer()
    this.server = server
    go server.Serve("tcp", fmt.Sprintf("%s:%d", this.serverAddress, this.port))

It seems Serve must be started with go to run asynchronously…

Could you take a look and tell me how to do this?

  • Is this the right way to access a service over HTTP?

    func ArithClientHttp() {
    	args := &Args{
    		A: 10,
    		B: 20,
    	}

    argsBytes, err := json.Marshal(args)
    if err != nil {
    	fmt.Printf("Marshal args error:%v\n", err)
    	return
    }
    
    req, err := http.NewRequest("POST", "http://127.0.0.1:9528", bytes.NewReader(argsBytes))
    if err != nil {
    	fmt.Printf("http.NewRequest error:%v\n", err)
    	return
    }
    
    // request headers that must be set
    h := req.Header
    h.Set(client.XMessageID,"10000")
    h.Set(client.XMessageType,"0")
    h.Set(client.XSerializeType,"2")
    h.Set(client.XServicePath,"Arith")
    h.Set(client.XServiceMethod,"Mul")
    
    // send the HTTP request
    // HTTP request ===> rpcx request ===> rpcx service ===> rpcx result ===> converted to an HTTP response ===> returned to the client
    res, err := http.DefaultClient.Do(req)
    if err != nil{
    	fmt.Printf("failed to call server, err:%v\n", err)
    	return
    }
    defer res.Body.Close()
    
    
    fmt.Printf("res:%v\n", res)
    // read the result
    replyData, err := ioutil.ReadAll(res.Body)
    if err != nil{
    	fmt.Printf("failed to read response, err:%v\n", err)
    	return
    }
    
    fmt.Printf("replyData:%v\n", replyData)
    
    // unmarshal
    reply := &Reply{}
    err = json.Unmarshal(replyData, reply)
    if err != nil {
    	fmt.Printf("Unmarshal error:%v\n", err)
    	return
    }
    
    fmt.Printf("ArithClientHttp reply:%+v\n", reply)
    

    }

The request fails with: rpcx: failed to handle gateway request: *part1.Args is not a proto.Unmarshaler
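For reference, a gateway call like the one above is just a plain HTTP POST with rpcx metadata carried in headers, and the error at the end points at the serialize type: the snippet marshals JSON but sets XSerializeType to "2", which, as far as I recall rpcx's SerializeType values, selects protobuf; "1" selects JSON. A hedged stdlib sketch of building such a request follows; the literal header names are assumptions mirroring the client.X* constants used above.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// newGatewayRequest builds an rpcx-gateway style request. The header names
// below are assumptions mirroring the client.X* constants in the snippet above.
func newGatewayRequest(url, servicePath, serviceMethod string, jsonPayload []byte) (*http.Request, error) {
	req, err := http.NewRequest("POST", url, bytes.NewReader(jsonPayload))
	if err != nil {
		return nil, err
	}
	h := req.Header
	h.Set("X-RPCX-MessageID", "10000")
	h.Set("X-RPCX-MessageType", "0")   // 0 = request
	h.Set("X-RPCX-SerializeType", "1") // 1 = JSON; "2" would tell the server to expect protobuf
	h.Set("X-RPCX-ServicePath", servicePath)
	h.Set("X-RPCX-ServiceMethod", serviceMethod)
	return req, nil
}

func main() {
	req, err := newGatewayRequest("http://127.0.0.1:9528", "Arith", "Mul", []byte(`{"A":10,"B":20}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.Header.Get("X-RPCX-SerializeType"))
}
```

If the gateway rejects a JSON payload with the proto.Unmarshaler error shown above, checking the serialize-type header is a reasonable first step before suspecting the service itself.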

  • Errors using rpcx with etcdv3

    https://github.com/rpcxio/rpcx-examples/tree/master/registry/etcdv3

    Running this example produces errors. etcd: Version 3.4.9, Git SHA: Not provided (use ./build instead of go build), Go Version: go1.14.3, Go OS/Arch: darwin/amd64

Error 1, on the etcd side (repeated many times): WARNING: 2020/07/14 15:41:25 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"

Error on the client side (repeated many times): 2020/07/14 15:41:25 etcdv3_discovery.go:244: WARN : chan is closed and will rewatch

Error 3, on the server, after it connects, is shut down and started again: 2020/07/14 15:42:56 etcdv3.go:66: ERROR: cannot create etcd path /rpcx_test: rpc error: code = Canceled desc = grpc: the client connection is closing; exit status 1

How do I fix this?

  • rpcx SendRaw response error

    env: go 1.12.1, rpcx: github.com/smallnest/rpcx v0.0.0-20191008054500-6a4c1b1de0fa. Using client.SendRaw with SerializeType: protobuf. The response protobuf definition:

    import "google/protobuf/any.proto";
    // response
    message Resp {
    	int32 code = 1;      // return code
    	uint64 req_time = 2; // request time
    	uint64 time = 3;     // current server time
    	string msg = 4;
    	google.protobuf.Any data = 5;
    }

    Usage:

    go func() { client.SendRaw(....) }()
    go func() { client.SendRaw(....) }()

    When both goroutines call the same servicePath and serviceMethod at the same time, the payload of the SendRaw response is occasionally wrong (memory appears to be overwritten). In client/client.go, when input receives the data:

    if call.Raw {
    	call.Metadata, call.Reply, _ = convertRes2Raw(res)
    	log.Info("call payload: ", call.Reply, "args=", call.Args)
    }

    Here call.Reply is correct. Then call.done() sends the call object into its internal channel, and after receiving from that channel:

    select {
    .....
    case call := <-done:
    	err = call.Error
    	m = call.Metadata
    	if call.Reply != nil {
    		// call.Reply here (the protobuf-marshaled bytes) is occasionally wrong,
    		// inconsistent with what client.input received above
    		log.Info("chan payload: ", call.Reply, "args=", call.Args)
    		payload = call.Reply.([]byte)
    	}
    }

    Any guidance appreciated.

  • Server shutdown waits for all in-flight client requests

    fix #696

Closing ln forcibly closes its connections, so this changes when ln and the conns are closed: checkProcessMsg() ensures all in-flight processing finishes, with a timeout added.

time.Sleep(shutdownPollInterval) ensures that after ln is closed, clients can finish their shutdown sequence normally; otherwise a read could be interrupted by a force close.

  • With etcd service discovery, when the server is not running the client leaks goroutines, and Close() does not help.

    goroutine profile: total 75
    56 @ 0x3c2a5 0xs086af 0x82eb 0xb5d0 0x47da1 0xbb25cf
    github.com/rpcxio/rpcx-etcd/store/etcdv3.New.func1+0x4f /root/xxx/src/github.com/rpcxio/rpcx-etcd/store/etcdv3/etcdv3.go:71

  • client hangs after sync call to server (one in 5000 times)

    As I'm stress testing my server using rpcx on the server, and rpcx on the client, I notice that, for about 1 in 5000 calls, the client will hang in the spot shown below (stack trace from kill -QUIT). The server has received the synchronous request, and replied. However the client never thinks the call is complete, and just hangs forever.

    I wonder if you could advise on how to go about solving this?

    It does seem to be a bug somewhere (probably a race, since it happens infrequently), in the rpcx client implementation.

    I am on ubuntu 18.04 on this commit:

    commit d969a5f620f8383be39d17007b29dfd93983a819 (HEAD -> master, origin/master, origin/HEAD)
    Merge: a54ce65 54101b2
    Author: smallnest <[email protected]>
    Date:   Sat Mar 13 11:21:52 2021 +0800
    
        Merge pull request #562 from fly512/master
    

    the stack trace is always the same:

    SIGQUIT: quit
    PC=0x4732c1 m=0 sigcode=0
    
    goroutine 0 [idle]:
    runtime.futex(0x102b508, 0x80, 0x0, 0x0, 0x0, 0xc00003a800, 0xc000128008, 0x113920e2022c03, 0x7fffcb1fb128, 0x40db5f, ...)
    	/usr/local/go1.15.7/src/runtime/sys_linux_amd64.s:587 +0x21
    runtime.futexsleep(0x102b508, 0x0, 0xffffffffffffffff)
    	/usr/local/go1.15.7/src/runtime/os_linux.go:45 +0x46
    runtime.notesleep(0x102b508)
    	/usr/local/go1.15.7/src/runtime/lock_futex.go:159 +0x9f
    runtime.stopm()
    	/usr/local/go1.15.7/src/runtime/proc.go:1924 +0xc5
    runtime.findrunnable(0xc00003a800, 0x0)
    	/usr/local/go1.15.7/src/runtime/proc.go:2485 +0xa7f
    runtime.schedule()
    	/usr/local/go1.15.7/src/runtime/proc.go:2683 +0x2d7
    runtime.park_m(0xc000000300)
    	/usr/local/go1.15.7/src/runtime/proc.go:2851 +0x9d
    runtime.mcall(0x94fa00)
    	/usr/local/go1.15.7/src/runtime/asm_amd64.s:318 +0x5b
    
    goroutine 1 [select]:
    github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client.(*Client).call(0xc0002be750, 0xaff840, 0xc0002bc900, 0xa558a0, 0x11, 0xa4cadc, 0x5, 0x981d80, 0xc0002bc8a0, 0x981e80, ...)
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client/client.go:240 +0x23c
    github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client.(*Client).Call(...)
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client/client.go:231
    main.(*ClientRpcx).DoSyncCallWithContext(0xc0002be680, 0xaff7c0, 0xc000026170, 0xc0002dc000, 0x0, 0xc0002e4900, 0x2f1, 0x2f1, 0x0, 0x0)
    	/home/jaten/go/src/github.com/glycerine/goq/xc.go:169 +0x219
    main.(*ClientRpcx).DoSyncCall(...)
    	/home/jaten/go/src/github.com/glycerine/goq/xc.go:137
    main.(*Submitter).SubmitJobGetReply(0xc00012e6c0, 0xc0002dc000, 0x1, 0xc00013d860, 0xc0002dc000, 0x0, 0x400, 0x7fc18e1d1700)
    	/home/jaten/go/src/github.com/glycerine/goq/sub.go:86 +0xbb
    main.main()
    	/home/jaten/go/src/github.com/glycerine/goq/main.go:169 +0x221f
    
    goroutine 21 [IO wait]:
    internal/poll.runtime_pollWait(0x7fc18c3d0d48, 0x72, 0xaf7180)
    	/usr/local/go1.15.7/src/runtime/netpoll.go:222 +0x55
    internal/poll.(*pollDesc).wait(0xc000123f18, 0x72, 0xaf7100, 0xfe5680, 0x0)
    	/usr/local/go1.15.7/src/internal/poll/fd_poll_runtime.go:87 +0x45
    internal/poll.(*pollDesc).waitRead(...)
    	/usr/local/go1.15.7/src/internal/poll/fd_poll_runtime.go:92
    internal/poll.(*FD).Read(0xc000123f00, 0xc0002d8000, 0x4000, 0x4000, 0x0, 0x0, 0x0)
    	/usr/local/go1.15.7/src/internal/poll/fd_unix.go:159 +0x1a5
    net.(*netFD).Read(0xc000123f00, 0xc0002d8000, 0x4000, 0x4000, 0x7fc18c1ea878, 0x7fc18c1ea878, 0x60)
    	/usr/local/go1.15.7/src/net/fd_posix.go:55 +0x4f
    net.(*conn).Read(0xc00011c218, 0xc0002d8000, 0x4000, 0x4000, 0x0, 0x0, 0x0)
    	/usr/local/go1.15.7/src/net/net.go:182 +0x8e
    bufio.(*Reader).Read(0xc00010f1a0, 0xc00030a004, 0x1, 0xc, 0x203000, 0x203000, 0x203000)
    	/usr/local/go1.15.7/src/bufio/bufio.go:227 +0x222
    io.ReadAtLeast(0xaf5c00, 0xc00010f1a0, 0xc00030a004, 0x1, 0xc, 0x1, 0x201, 0x1000000000000, 0x7fc18c1ea878)
    	/usr/local/go1.15.7/src/io/io.go:314 +0x87
    io.ReadFull(...)
    	/usr/local/go1.15.7/src/io/io.go:333
    github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/protocol.(*Message).Decode(0xc000318000, 0xaf5c00, 0xc00010f1a0, 0xc000200060, 0x0)
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/protocol/message.go:401 +0x7a
    github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client.(*Client).input(0xc0002be750)
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client/client.go:502 +0xd0
    created by github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client.(*Client).Connect
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client/connection.go:56 +0x1ff
    
    rax    0xca
    rbx    0x102b3c0
    rcx    0x4732c3
    rdx    0x0
    rdi    0x102b508
    rsi    0x80
    rbp    0x7fffcb1fb0f0
    rsp    0x7fffcb1fb0a8
    r8     0x0
    r9     0x0
    r10    0x0
    r11    0x286
    r12    0x3
    r13    0x102ae80
    r14    0x4
    r15    0x11
    rip    0x4732c1
    rflags 0x286
    cs     0x33
    fs     0x0
    gs     0x0
    

    This is where the client is making the call. The source code is open source:

    https://github.com/glycerine/goq/blob/master/xc.go#L140

  • A connection-refused problem when deploying several servers and one client with docker-compose

    I wrote a demo. Without Docker, running backend and service (the client and the server) directly works fine. But running one backend and two services with docker-compose fails.

    The architecture:

    • backend combines an rpcx client with gin: gin exposes an HTTP API to the outside, and the rpcx client looks up the service's RPC address from etcd and then makes the RPC call.
    • service runs an rpcx server that exposes RPC endpoints to the backend. etcd is the registry; the server's address is passed in via a flag and registered in etcd.

This is the backend's error:

    [GIN-debug] Listening and serving HTTP on :3010
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8973: connect: connection refused
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8973: connect: connection refused
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8972: connect: connection refused
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8972: connect: connection refused
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8972: connect: connection refused
    [GIN] 2020/05/29 - 06:26:33 | 500 |     33.5876ms |     192.168.0.1 | GET      "/api/v1/match?page=1&size=10&key=format.keyword&val=doc&index=testtable1"
    
    

This is the docker-compose.yml that starts the services:

    version: '2'
    services:
      server1:
        image: yz_classic_server:1.0
        container_name: yz_classic_server1
        entrypoint: /rpcx-service/server -addr='10.0.0.201:8972'
        ports:
          - "8972:8972"
      server2:
        image: yz_classic_server:1.0
        container_name: yz_classic_server2
    # the -addr flag passes in 10.0.0.201:8973
        entrypoint: /rpcx-service/server -addr='10.0.0.201:8973'
        ports:
          - "8973:8972"
      backend:
        image: yz_classic_backend:1.0
        container_name: yz_classic_backend
        ports:
          - "3010:3010"
    

This is the backend's main.go:

    
    var (
    	defaultAddr = ":3010"
    	addr  = flag.String("addr", defaultAddr, "http address")
    
    	//etcd
    	basePath = flag.String("base", "/rpcx_yz_classic", "prefix path")
    	etcdAddr = flag.String("etcdAddr", "10.0.0.201:2379", "etcd address")
    )
    
    
    
    func main() {
    	// gin + rpcx client
    	dEtcd := client.NewEtcdV3Discovery(*basePath, "search", []string{*etcdAddr}, nil)
    	v1.Xclient = client.NewXClient("search", client.Failover, client.RandomSelect, dEtcd, client.DefaultOption)
    	defer v1.Xclient.Close()
    
    	r := v1.InitRouter()
    	r.Run(defaultAddr) // listen and serve on 0.0.0.0:3010
    }
    
    

This is the service's main.go:

    var (
    	addr = flag.String("addr", "localhost:8972", "server address")
    
    	//etcd
    	basePath = flag.String("base", "/rpcx_yz_classic", "prefix path")
    	etcdAddr = flag.String("etcdAddr", "10.0.0.201:2379", "etcd address")
    )
    
    func main() {
    	flag.Parse()
    	// init es dao
    	dao.InitEs()
    
    	// rpcx service
    	s := server.NewServer()
    
    	addRegistryPlugin(s) //etcd
    
	s.RegisterName("search", search.New(), "") // search.New() returns a service object; every method of it with the right signature becomes callable via RPC
    	//err := s.Serve("tcp", *addr)
    	err := s.Serve("tcp", "localhost:8972")
    	if err != nil {
    		panic(err)
    	}
    }
    
    
    func addRegistryPlugin(s *server.Server) {
    	fmt.Println("*addr:", *addr)
    	r := &serverplugin.EtcdV3RegisterPlugin{
    		ServiceAddress: "tcp@" + *addr,
    		EtcdServers:    []string{*etcdAddr},
    		BasePath:       *basePath,
    		UpdateInterval: time.Minute,
    	}
    	err := r.Start()
    	if err != nil {
    		log.Fatal(err)
    	}
    	s.Plugins.Add(r)
    }
    

Note: 10.0.0.201 is one machine, which runs etcd; localhost (internal IP 10.0.0.222) is another machine, where docker-compose up -d was run.

I have been debugging this for a long time but still cannot pinpoint the problem. Any pointers appreciated. Thanks.

  • Support passing metadata via jsonrpc 2.0

    1. The client uses ReqMetaDataKey to attach extra metadata to a request.
    2. jsonrpc has no such metadata.

    I have not decided how to change this. Should ReqMetaDataKey be read from params?

    1. https://github.com/smallnest/rpcx/issues/391
    2. https://github.com/smallnest/rpcx/blob/master/server/jsonrpc2.go#L77
  • getCachedClient has a concurrency bug

    https://github.com/smallnest/rpcx/blob/6d27bc7e40cf7ffdec0d9b2b4dac715b3dc1bdbd/client/xclient.go#L282-L338 This code has a concurrency problem: the locking is wrong, so several goroutines may all get a nil client. c.slGroup.Do only seems to guarantee that one connection is created at a time, but after c.slGroup.Forget several connections can be created, leaking connections and producing connections that are never used.

  • An uninitialized selector hangs instead of panicking

    When initializing the xclient I used SelectByUser but never called xClient.SetSelector() to set a custom selector; a later call through the xclient then hangs. Shouldn't an uninitialized selector panic here instead? OS: mac; rpcx version: 1.7.12

  • Update xclient.go

    In watch(), many goroutines sort the same slice at the same time, which is a data race. There is also no need to sort the pairs, so the sort call should be deleted.
