Netpoll is a high-performance, non-blocking I/O networking framework focused on RPC scenarios, developed by ByteDance.

Introduction

RPC is usually heavy on processing logic, so I/O cannot be handled serially. But Go's standard library net provides a blocking I/O API, which forces RPC frameworks into the One Conn One Goroutine design; under high concurrency, the resulting flood of goroutines wastes considerable time on context switching. Besides, net.Conn offers no API to check whether a connection is alive, so it is difficult to build an efficient connection pool for an RPC framework: the pool may accumulate a large number of dead connections.
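
A liveness check of this kind becomes straightforward with Netpoll's IsActive (listed under Features below). Here is a minimal pool sketch; the slice-backed pool type is illustrative, not a netpoll API, and it assumes netpoll.DialConnection as the dialing entry point:

    package connpool

    import (
    	"time"

    	"github.com/cloudwego/netpoll"
    )

    // pool is an illustrative (non-concurrent) connection pool.
    type pool struct {
    	idle []netpoll.Connection
    }

    // get pops idle connections, discarding any that died while pooled,
    // and dials a fresh one when none are usable.
    func (p *pool) get(network, address string) (netpoll.Connection, error) {
    	for len(p.idle) > 0 {
    		conn := p.idle[len(p.idle)-1]
    		p.idle = p.idle[:len(p.idle)-1]
    		if conn.IsActive() { // cheap liveness check; net.Conn has no equivalent
    			return conn, nil
    		}
    		conn.Close() // drop dead connections instead of handing them out
    	}
    	return netpoll.DialConnection(network, address, time.Second)
    }

    // put returns a healthy connection to the pool for reuse.
    func (p *pool) put(conn netpoll.Connection) {
    	if conn.IsActive() {
    		p.idle = append(p.idle, conn)
    	}
    }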

On the other hand, the open-source community currently lacks Go network libraries focused on RPC scenarios. Similar projects such as evio and gnet all target scenarios like Redis or HAProxy.

Netpoll was built to solve the problems above. It draws inspiration from the designs of evio and netty, offers excellent performance (see Performance below), and is better suited to microservice architecture. Netpoll also provides a number of features (see Features below), and it is recommended as a replacement for net in RPC scenarios.

We developed the RPC framework KiteX and the HTTP framework Hertz (to be open sourced) on top of Netpoll, both with industry-leading performance.

The Examples show how to build an RPC client and server using Netpoll.

For more information, please refer to Document.

Features

  • Already

    • LinkBuffer provides nocopy API for streaming reading and writing
    • gopool provides high-performance goroutine pool
    • mcache provides efficient memory reuse
    • multisyscall supports batch system calls
    • IsActive supports checking whether the connection is alive
    • Dialer supports building clients (see the client sketch after this list)
    • EventLoop supports building a server
    • TCP, Unix Domain Socket
    • Linux, Mac OS (operating system)
  • Future

    • io_uring
    • Shared Memory IPC
    • Serial scheduling I/O, suitable for pure computing
    • TLS
    • UDP
  • Unsupported

    • Windows (operating system)
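
As a quick illustration of the Dialer and nocopy Reader/Writer features above, here is a minimal echo-client sketch. It assumes netpoll.DialConnection as the dialing entry point and an echo server already listening on 127.0.0.1:8080:

    package main

    import (
    	"fmt"
    	"time"

    	"github.com/cloudwego/netpoll"
    )

    func main() {
    	conn, err := netpoll.DialConnection("tcp", "127.0.0.1:8080", time.Second)
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// write through the nocopy Writer: Malloc a region, fill it, Flush
    	msg := []byte("hello")
    	writer := conn.Writer()
    	buf, _ := writer.Malloc(len(msg))
    	copy(buf, msg)
    	if err := writer.Flush(); err != nil {
    		panic(err)
    	}

    	// read the echo back through the nocopy Reader
    	reader := conn.Reader()
    	echo, err := reader.Next(len(msg))
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(echo))
    	reader.Release()
    }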

Performance

Benchmarking is not a numbers game; it should first meet the requirements of industrial use. In the RPC scenario, support for concurrent calls and wait timeouts is mandatory.

Therefore, we require that the benchmark meet the following conditions:

  1. Support concurrent calls, with a timeout of 1s
  2. Use a simple protocol: a 4-byte header that carries the total length of the payload (see the framing sketch below)
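
A minimal sketch of such a length-prefixed frame, assuming the 4-byte header is a big-endian uint32 (the byte order is not specified above):

    package main

    import (
    	"encoding/binary"
    	"fmt"
    )

    // encode prepends the 4-byte length header to the payload.
    func encode(payload []byte) []byte {
    	buf := make([]byte, 4+len(payload))
    	binary.BigEndian.PutUint32(buf[:4], uint32(len(payload)))
    	copy(buf[4:], payload)
    	return buf
    }

    // decode extracts one frame from buf, returning the payload, the
    // unconsumed remainder, and whether a complete frame was present.
    func decode(buf []byte) (payload, rest []byte, ok bool) {
    	if len(buf) < 4 {
    		return nil, buf, false
    	}
    	n := int(binary.BigEndian.Uint32(buf[:4]))
    	if len(buf) < 4+n {
    		return nil, buf, false
    	}
    	return buf[4 : 4+n], buf[4+n:], true
    }

    func main() {
    	frame := encode([]byte("hello"))
    	payload, _, _ := decode(frame)
    	fmt.Printf("%q\n", payload) // "hello"
    }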

We compared Netpoll's performance with similar projects (net, evio, gnet) through benchmarks, as shown below.

For more benchmarks, see Netpoll-Benchmark, KiteX-Benchmark and Hertz-Benchmark.

Environment

  • CPU: Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz, 4 cores
  • Memory: 8GB
  • OS: Debian 5.4.56.bsk.1-amd64 x86_64 GNU/Linux
  • Go: 1.15.4

Concurrent Performance (Echo 1KB)

(benchmark charts omitted)

Transport Performance (concurrent=100)

(benchmark charts omitted)

Benchmark Conclusion

Compared with net, Netpoll cuts latency by about 34% and raises QPS by about 110% (under continued load, net's latency becomes too high to serve as a reference).

Document

Comments
  • Possible thundering herd with long-lived connections

    Describe the bug: With long-lived connections there appears to be a thundering-herd problem. When SetNumLoops is greater than 1, the CPU is fully occupied; when it is set to 1, everything works normally.

    To Reproduce Steps to reproduce the behavior:

    1. Write a minimal long-connection server that caches all client conn objects in a map, and call SetNumLoops(4).
    2. Write a minimal TCP client and establish 2000 connections. Do not disconnect; keep the connections alive with a heartbeat every 60s.
    3. Observe the server CPU: it is fully occupied.
    4. Set the server back to SetNumLoops(1) and retest: CPU usage drops back to normal.

    Screenshots: server parameters; the client holding 2,000 keep-alive connections; server-side CPU usage at that point (screenshots omitted).

    Desktop (please complete the following information):

    • OS: MacBook Pro
    • Version: (screenshot omitted)
  • connection.Reader().Next(n) blocks when n exceeds the currently buffered data; after ReadTimeout the framework does not close the connection; and while connection.Reader() holds data the application has not read, OnRequest keeps firing

    server

    package main
    
    import (
    	"context"
    	"fmt"
    	"log"
    	"time"
    
    	"github.com/cloudwego/netpoll"
    )
    
    func main() {
    	network, address := "tcp", "127.0.0.1:8888"
    
    	// create the listener
    	listener, err := netpoll.CreateListener(network, address)
    	if err != nil {
    		panic("create netpoll listener fail")
    	}
    
    	// handler: connection read and processing logic
    	var onRequest netpoll.OnRequest = handler
    
    	// options: custom configuration for EventLoop initialization
    	var opts = []netpoll.Option{
    		netpoll.WithReadTimeout(5 * time.Second),
    		netpoll.WithIdleTimeout(10 * time.Minute),
    		netpoll.WithOnPrepare(nil),
    	}
    
    	// create the EventLoop
    	eventLoop, err := netpoll.NewEventLoop(onRequest, opts...)
    	if err != nil {
    		panic("create netpoll event-loop fail")
    	}
    
    	// run the server
    	err = eventLoop.Serve(listener)
    	if err != nil {
    		panic("netpoll server exit")
    	}
    }
    
    	// read-event handler
    func handler(ctx context.Context, connection netpoll.Connection) error {
    	total := 2
    	t := time.Now()
    	reader := connection.Reader()
    	data, err := reader.Peek(reader.Len())
    	log.Printf("before Next, len: %v, data: %v", reader.Len(), string(data))
    	data, err = reader.Next(total)
    	if err != nil {
    		log.Printf("Next failed, total: %v, reader.Len: %v, block time: %v, error: %v", total, reader.Len(), int(time.Since(t)), err)
    		return err
    	}
    
    	log.Printf("after  Next, len: %v, data: %v, timeused: %v", len(data), string(data), int(time.Since(t).Seconds()))
    
    	n, err := connection.Write(data)
    	if err != nil {
    		return err
    	}
    	if n != len(data) {
    		return fmt.Errorf("write failed: %v < %v", n, len(data))
    	}
    
    	return nil
    }
    

    client

    package main
    
    import (
    	"log"
    	"net"
    	"time"
    )
    
    func main() {
    	conn, err := net.Dial("tcp", "127.0.0.1:8888")
    	if err != nil {
    		log.Fatal("dial failed:", err)
    	}
    
    	// For testing: a complete protocol packet is 2 bytes; the server reads it with connection.Reader.Next(2)
    	// the server-side read timeout is set to 5s
    
    	// Group 1: sent within the timeout; the server reads a complete packet, but connection.Reader.Next(2) blocks for 2s
    	conn.Write([]byte("a"))
    	time.Sleep(time.Second * 2)
    	conn.Write([]byte("a"))
    
    	time.Sleep(time.Second * 1)
    
    	// Group 2: the complete packet is sent in pieces spanning more than the timeout; server-side connection.Reader.Next(2) blocks for 5s (the timeout) and then errors, but the connection is not closed
    	// during the 30s before the client sends the remaining data, the server triggers OnRequest repeatedly, and each time connection.Reader.Next(2) blocks 5s (the timeout) and errors, yet the connection stays open
    	// after 30s the client sends the rest of the packet, and the server's connection.Reader.Next(2) reads the complete packet
    	conn.Write([]byte("b"))
    	time.Sleep(time.Second * 30)
    	conn.Write([]byte("b"))
    
    	time.Sleep(time.Second * 1)
    
    	// Group 3: only half a packet is sent, and the client takes no further action
    	// the server triggers OnRequest repeatedly; each time connection.Reader.Next(2) blocks 5s (the timeout) and errors, but the connection is not closed
    	// in practice the server may never receive a TCP FIN (e.g. the client loses power), so it cannot release the connection promptly; a flood of such connections is a denial-of-service risk
    	conn.Write([]byte("c"))
    
    	<-make(chan int)
    }
    

    Log

    go run ./netpoll.go 
    2021/07/18 07:25:22 before Next, len: 1, data: a
    2021/07/18 07:25:24 after  Next, len: 2, data: aa, timeused: 2
    2021/07/18 07:25:25 before Next, len: 1, data: b
    2021/07/18 07:25:30 Next failed, total: 2, reader.Len: 1, block time: 5005315692, error: connection read timeout 5s
    2021/07/18 07:25:30 before Next, len: 1, data: b
    2021/07/18 07:25:35 Next failed, total: 2, reader.Len: 1, block time: 5017124559, error: connection read timeout 5s
    2021/07/18 07:25:35 before Next, len: 1, data: b
    2021/07/18 07:25:40 Next failed, total: 2, reader.Len: 1, block time: 5009562038, error: connection read timeout 5s
    2021/07/18 07:25:40 before Next, len: 1, data: b
    2021/07/18 07:25:45 Next failed, total: 2, reader.Len: 1, block time: 5008370180, error: connection read timeout 5s
    2021/07/18 07:25:45 before Next, len: 1, data: b
    2021/07/18 07:25:50 Next failed, total: 2, reader.Len: 1, block time: 5011104792, error: connection read timeout 5s
    2021/07/18 07:25:50 before Next, len: 1, data: b
    2021/07/18 07:25:55 after  Next, len: 2, data: bb, timeused: 4
    2021/07/18 07:25:56 before Next, len: 1, data: c
    2021/07/18 07:26:01 Next failed, total: 2, reader.Len: 1, block time: 5009599769, error: connection read timeout 5s
    2021/07/18 07:26:01 before Next, len: 1, data: c
    2021/07/18 07:26:06 Next failed, total: 2, reader.Len: 1, block time: 5017649436, error: connection read timeout 5s
    2021/07/18 07:26:06 before Next, len: 1, data: c
    2021/07/18 07:26:11 Next failed, total: 2, reader.Len: 1, block time: 5015780369, error: connection read timeout 5s
    2021/07/18 07:26:11 before Next, len: 1, data: c
    2021/07/18 07:26:16 Next failed, total: 2, reader.Len: 1, block time: 5013565228, error: connection read timeout 5s
    2021/07/18 07:26:16 before Next, len: 1, data: c
    2021/07/18 07:26:21 Next failed, total: 2, reader.Len: 1, block time: 5004234323, error: connection read timeout 5s
    2021/07/18 07:26:21 before Next, len: 1, data: c
    2021/07/18 07:26:26 Next failed, total: 2, reader.Len: 1, block time: 5014860948, error: connection read timeout 5s
    2021/07/18 07:26:26 before Next, len: 1, data: c
    2021/07/18 07:26:31 Next failed, total: 2, reader.Len: 1, block time: 5009890510, error: connection read timeout 5s
    2021/07/18 07:26:31 before Next, len: 1, data: c
    2021/07/18 07:26:36 Next failed, total: 2, reader.Len: 1, block time: 5009386524, error: connection read timeout 5s
    2021/07/18 07:26:36 before Next, len: 1, data: c
    2021/07/18 07:26:41 Next failed, total: 2, reader.Len: 1, block time: 5009694923, error: connection read timeout 5s
    2021/07/18 07:26:41 before Next, len: 1, data: c
    2021/07/18 07:26:46 Next failed, total: 2, reader.Len: 1, block time: 5006999390, error: connection read timeout 5s
    2021/07/18 07:26:46 before Next, len: 1, data: c
    2021/07/18 07:26:51 Next failed, total: 2, reader.Len: 1, block time: 5016639111, error: connection read timeout 5s
    2021/07/18 07:26:51 before Next, len: 1, data: c
    2021/07/18 07:26:56 Next failed, total: 2, reader.Len: 1, block time: 5004699154, error: connection read timeout 5s
    2021/07/18 07:26:56 before Next, len: 1, data: c
    2021/07/18 07:27:01 Next failed, total: 2, reader.Len: 1, block time: 5003720648, error: connection read timeout 5s
    2021/07/18 07:27:01 before Next, len: 1, data: c
    2021/07/18 07:27:06 Next failed, total: 2, reader.Len: 1, block time: 5013684114, error: connection read timeout 5s
    2021/07/18 07:27:06 before Next, len: 1, data: c
    2021/07/18 07:27:11 Next failed, total: 2, reader.Len: 1, block time: 5008594864, error: connection read timeout 5s
    2021/07/18 07:27:11 before Next, len: 1, data: c
    2021/07/18 07:27:16 Next failed, total: 2, reader.Len: 1, block time: 5016949058, error: connection read timeout 5s
    2021/07/18 07:27:16 before Next, len: 1, data: c
    (the same message keeps repeating)
    
  • seems it can't work on Mac OSX

    I tried to write a simple test on Mac OSX (below), but it doesn't work. I connect with telnet 127.0.0.1 8080 and type hello world, but never get the response.

    package main
    
    import (
    	"context"
    	"flag"
    	"fmt"
    	"time"
    
    	"github.com/cloudwego/netpoll"
    )
    
    var (
    	addr = flag.String("addr", "127.0.0.1:8080", "server address")
    )
    
    func onRequest(_ context.Context, conn netpoll.Connection) error {
    	println("hello world")
    
    	var reader, writer = conn.Reader(), conn.Writer()
    
    	defer reader.Release()
    	// client send "Hello World", size is 11
    	buf, err := reader.Next(11)
    	if err != nil {
    		fmt.Printf("error %v\n", err)
    		return err
    	}
    
    	alloc, _ := writer.Malloc(len(buf))
    	copy(alloc, buf)
    	err = writer.Flush()
    	if err != nil {
    		fmt.Printf("flush error %v\n", err)
    		return err
    	}
    
    	return nil
    }
    
    func onPrepare(conn netpoll.Connection) context.Context {
    	println("hello prepare")
    	return context.TODO()
    }
    
    func main() {
    	flag.Parse()
    	l, err := netpoll.CreateListener("tcp", *addr)
    	if err != nil {
    		panic("create netpoll listener failed")
    	}
    	defer l.Close()
    
    	println("hello event loop")
    
    	loop, err1 := netpoll.NewEventLoop(
    		onRequest,
    		netpoll.WithReadTimeout(time.Second),
    		netpoll.WithOnPrepare(onPrepare),
    	)
    	if err1 != nil {
    		panic("create event loop failed")
    	}
    
    	println("begin to serve")
    	loop.Serve(l)
    }
    
  • When the client actively closes the connection, the server does not receive the complete data

    Describe the bug: When the client actively closes the connection, the server cannot read the complete data.

    To Reproduce

    1. The client sends a message and actively closes the connection.
    2. Count the length of the message received by the server.
    3. The actual message length is 27260.

    Expected behavior A clear and concise description of what you expected to happen.

    Screenshots: client code, partial server code, and the resulting counts (screenshots omitted).

    Desktop (please complete the following information):

    • OS: macOS
    • Version: 13.0.1
  • Problems running a basic server-side demo

    As shown in the screenshot (omitted): after I started listening on port 12345, I tried to send requests to that port to observe netpoll's I/O event handling, but the listener never seems to start, and there is no error output at all. It did briefly start successfully once before. The code is below:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"github.com/cloudwego/netpoll"
    )

    func main() {
    	l, err := netpoll.CreateListener("tcp", ":12345")
    	if err != nil {
    		panic("listen error")
    	}
    	eventLoop, _ := netpoll.NewEventLoop(
    		// this is the handler that runs once an I/O event is received
    		func(ctx context.Context, connection netpoll.Connection) error {
    			fmt.Printf("received a request")
    			return nil
    		},
    		netpoll.WithReadTimeout(time.Second))
    	fmt.Printf("about to listen for requests")
    	eventLoop.Serve(l)
    }

    My machine configuration: (screenshot omitted)

  • C++ server restarts cause client-side panics

    Whenever the server restarts, the client occasionally panics. The server is written in C++; its gRPC dependency version is 1.43.0.

    Error details

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x902720]
    
    goroutine 89 [running]:
    github.com/cloudwego/netpoll.(*linkBufferNode).Malloc(...)
           github.com/cloudwego/netpoll/nocopy_linkbuffer.go:730
    github.com/cloudwego/netpoll.(*LinkBuffer).book(0xc86d608e60, 0x40000, 0x7f0b1)
            github.com/cloudwego/netpoll/nocopy_linkbuffer.go:587 +0xa0
    github.com/cloudwego/netpoll.(*connection).inputs(0xcdee844e80?, {0xc000440300, 0x20, 0x20})
            github.com/cloudwego/netpoll/connection_reactor.go:77 +0x4a
    github.com/cloudwego/netpoll.(*defaultPoll).handler(0xc000532280, {0xc000544000, 0x1, 0xc000544000?})
            github.com/cloudwego/netpoll/poll_default_linux.go:139 +0x10a
    github.com/cloudwego/netpoll.(*defaultPoll).Wait(0xc000532280)
            github.com/cloudwego/netpoll/poll_default_linux.go:103 +0x111
    created by github.com/cloudwego/netpoll.(*manager).Run
            github.com/cloudwego/netpoll/poll_manager.go:106 +0x31
    
    

    Environment: OS: linux; Go version: 1.18.8; netpoll version: github.com/cloudwego/netpoll v0.3.1; kitex version: github.com/cloudwego/kitex v0.4.3

  • graceful shutdown failed

    When the connection is actively closed inside EventLoop's OnRequest, the connection deadlocks (in processing) and can never be shut down.

    OnRequest

    func (s *server) onHandle(_ context.Context, conn netpoll.Connection) error {
    	time.Sleep(time.Second * 3)
    	conn.Close()
    	return nil
    }
    

    connection_onevent.go:115 (screenshots omitted)

  • When I run the example code, netpoll reports the symbols below as undefined; upgrading my Go version didn't help. What should I do? Thanks

    ....\go1.17.1\pkg\mod\github.com\cloudwego\[email protected]\connection.go:59:18: undefined: OnRequest
    ....\go1.17.1\pkg\mod\github.com\cloudwego\[email protected]\connection_impl.go:30:2: undefined: netFD
    ....\go1.17.1\pkg\mod\github.com\cloudwego\[email protected]\connection_impl.go:40:19: undefined: barrier
    ....\go1.17.1\pkg\mod\github.com\cloudwego\[email protected]\connection_impl.go:41:19: undefined: barrier
    ....\go1.17.1\pkg\mod\github.com\cloudwego\[email protected]\connection_impl.go:252:32: undefined: Conn
    ....\go1.17.1\pkg\mod\github.com\cloudwego\[email protected]\connection_impl.go:252:46: undefined: OnPrepare
    ....\go1.17.1\pkg\mod\github.com\cloudwego\[email protected]\connection_impl.go:272:38: undefined: Conn
    ....\go1.17.1\pkg\mod\github.com\cloudwego\[email protected]\connection_onevent.go:48:39: undefined: OnRequest
    ....\go1.17.1\pkg\mod\github.com\cloudwego\[email protected]\connection_onevent.go:71:40: undefined: OnPrepare
    ....\go1.17.1\pkg\mod\github.com\cloudwego\[email protected]\net_sock.go:41:116: undefined: netFD
    ....\go1.17.1\pkg\mod\github.com\cloudwego\[email protected]\net_sock.go:41:116: too many errors

  • Why does the onRequest task need to re-acquire the lock?

    https://github.com/cloudwego/netpoll/blob/5607dcbce465c6c75e53fa346da356df96bfb38b/connection_onevent.go#L111 Would it be enough to unlock once, only when c.Reader().Len() <= 0? That would save one lock/unlock round trip. What is the reason for re-locking inside the task?

  • About the benchmark code, more parameters, and more metrics

    The link to the benchmark code is currently dead; please publish the complete benchmark code as soon as possible.

    The benchmark charts in the documentation only cover 100 connections (payload unspecified) and a 1 KB payload (connection count unspecified); that is too little coverage to draw general conclusions.

    Please provide comparisons against other network libraries and the standard library at different concurrency levels, e.g. 1k, 10k, 100k and 1000k connections, with various payloads.

    Also, when I stress-tested an echo server written from the documentation, netpoll's memory usage was very high, and memory footprint is a key metric at massive concurrency. Please also publish comparisons covering memory usage and other metrics.

  • What does the "No Copy" in No Copy Buffer refer to?

    // readBinary cannot use mcache, because the memory allocated by readBinary will not be recycled.
    func (b *LinkBuffer) readBinary(n int) (p []byte) {
    	b.recalLen(-n) // re-cal length
    
    	// single node
    	p = make([]byte, n)
    	if b.isSingleNode(n) {
    		copy(p, b.read.Next(n))
    		return p
    	}
    	// multiple nodes
    	var pIdx int
    	var l int
    	for ack := n; ack > 0; ack = ack - l {
    		l = b.read.Len()
    		if l >= ack {
    			pIdx += copy(p[pIdx:], b.read.Next(ack))
    			break
    		} else if l > 0 {
    			pIdx += copy(p[pIdx:], b.read.Next(l))
    		}
    		b.read = b.read.next
    	}
    	_ = pIdx
    	return p
    }
    
    

    The Reader interface seems to return copies of the underlying array. Does "No Copy" mean that growing and shrinking the buffer avoid copies? (See the sketch below.)
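
    For context, here is a minimal sketch of the nocopy read pattern on the Reader side, under the assumption (not confirmed by the snippet above) that Next/Peek hand out slices referencing the underlying link buffer, valid until Release, whereas readBinary above always allocates and copies:

    package main

    import (
    	"fmt"

    	"github.com/cloudwego/netpoll"
    )

    // handle shows the nocopy read pattern on a netpoll.Connection.
    func handle(conn netpoll.Connection) error {
    	reader := conn.Reader()
    	// Next hands out a slice into the link buffer rather than a
    	// fresh copy (readBinary, by contrast, allocates and copies)
    	buf, err := reader.Next(4)
    	if err != nil {
    		return err
    	}
    	fmt.Printf("%x\n", buf) // buf must not be retained past Release
    	return reader.Release() // hands the consumed nodes back for reuse
    }

    func main() {} // placeholder so the sketch compiles standalone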

  • WSL2: running go build reports netpoll errors

    root@JERRY test $ go build

    github.com/cloudwego/netpoll

    ../../pkg/mod/github.com/cloudwego/[email protected]/sys_sendmsg_linux.go:41:11: cannot use uint64(iovLen) (value of type uint64) as type uint32 in struct literal
    ../../pkg/mod/github.com/cloudwego/[email protected]/sys_sendmsg_linux.go:47:40: undefined: syscall.SYS_SENDMSG
    ../../pkg/mod/github.com/cloudwego/[email protected]/sys_zerocopy_linux.go:33:9: cannot use sec (variable of type int64) as type int32 in struct literal
    ../../pkg/mod/github.com/cloudwego/[email protected]/sys_zerocopy_linux.go:34:9: cannot use usec (variable of type int64) as type int32 in struct literal

    github.com/henrylee2cn/ameda

    ../../pkg/mod/github.com/henrylee2cn/[email protected]/int.go:13:9: cannot use math.MaxInt64 (untyped int constant 9223372036854775807) as int value in return statement (overflows)
    ../../pkg/mod/github.com/henrylee2cn/[email protected]/uint.go:13:9: cannot use math.MaxUint64 (untyped int constant 18446744073709551615) as uint value in return statement (overflows)

    any suggestions?

  • fix: poller read all data before connection close

    What type of PR is this?

    chore

    What this PR does / why we need it (en: English/zh: Chinese):

    en: fix: poller reads all data before the connection closes / zh: before closing the connection, the poller tries to read all remaining data

    Which issue(s) this PR fixes:

  • func OnRequest: context gets zero value under high concurrency

    Describe the bug: I set a value on the context in OnConnect, e.g. context.WithValue(ctx, "key", val). With up to ten threads I can read the correct value from OnRequest's context, but under high concurrency, e.g. 40 threads, the context returns a zero value and the value set in OnConnect cannot be retrieved (see the sketch below).

    To Reproduce test with sysbench, --threads=40

    Expected behavior Get context values correct.

    Panic logger.go:190: [Error] GOPOOL: panic in pool: gopool.DefaultPool: interface conversion: interface {} is nil, not *server.ClientConn: goroutine 1858 [running]

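    A minimal sketch of the reported setup, assuming netpoll.WithOnConnect is available (the report itself relies on OnConnect); ctxKey and connState are illustrative names, not netpoll APIs:

    package main

    import (
    	"context"
    	"log"
    	"time"

    	"github.com/cloudwego/netpoll"
    )

    type ctxKey struct{}

    type connState struct{ created time.Time }

    func main() {
    	listener, err := netpoll.CreateListener("tcp", "127.0.0.1:8888")
    	if err != nil {
    		panic(err)
    	}

    	// OnConnect stores a per-connection value in the context...
    	var onConnect netpoll.OnConnect = func(ctx context.Context, conn netpoll.Connection) context.Context {
    		return context.WithValue(ctx, ctxKey{}, &connState{created: time.Now()})
    	}

    	// ...and OnRequest expects to read it back; per the report, the
    	// value comes back nil under roughly 40 concurrent threads
    	var onRequest netpoll.OnRequest = func(ctx context.Context, conn netpoll.Connection) error {
    		if _, ok := ctx.Value(ctxKey{}).(*connState); !ok {
    			log.Println("context value missing (the reported bug)")
    		}
    		return nil
    	}

    	eventLoop, err := netpoll.NewEventLoop(onRequest, netpoll.WithOnConnect(onConnect))
    	if err != nil {
    		panic(err)
    	}
    	_ = eventLoop.Serve(listener)
    }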

  • WIP: perf: replace mcache with pcache

    What type of PR is this?

    perf

    What this PR does / why we need it (en: English/zh: Chinese):

    en: replace mcache with pcache / zh: use pcache in place of mcache

    Which issue(s) this PR fixes:

  • Are you using Netpoll?

    The purpose of this issue

    We are always interested in finding out who is using Netpoll, what attracted you to it, and how we can serve your needs; and, if you are interested, we can help promote your organization.

    • People reach out to us asking who uses Netpoll in production.
    • We'd like to hear what you would like to see in Netpoll, and about your scenarios.
    • We'd like to help promote your organization and work with you.

    What we would like from you

    Submit a comment in this issue to include the following information

    • Your organization or company
    • Link to your website
    • Your country
    • Your contact info to reach out to you: blog, email or Twitter (at least one).
    • What is your scenario for using Netpoll?
    • Are you running your application in Testing or Production?

    Organization/Company: ByteDance
    Website: https://bytedance.com
    Country: China
    Contact: [email protected]
    Usage scenario: Using Netpoll as the default net lib in Kitex & Hertz to build large-scale Cloud Native applications
    Status: Production