🚀 gnet is a high-performance, lightweight, non-blocking, event-driven networking framework written in pure Go.

gnet

English | 🇨🇳 中文

📖 Introduction

gnet is an event-driven networking framework that is fast and lightweight. It makes direct epoll and kqueue syscalls rather than using the standard Go net package, and works in a manner similar to netty and libuv, which allows gnet to achieve much higher performance than Go net.

gnet is not designed to displace the standard Go net package, but to create a networking server framework for Go that performs on par with Redis and HAProxy for network packet handling.

gnet sells itself as a high-performance, lightweight, non-blocking, event-driven networking framework written in pure Go. It works on the transport layer with the TCP/UDP protocols and Unix Domain Socket, allowing developers to implement their own application-layer protocols (HTTP, RPC, WebSocket, Redis, etc.) on top of gnet to build diversified network applications. For instance, you get an HTTP server or web framework if you implement the HTTP protocol upon gnet, a Redis server if you implement the Redis protocol upon gnet, and so on.
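
As an illustration of the kind of application-layer framing you would build on top of gnet's TCP stream, here is a minimal length-prefixed codec in plain Go. This is a standalone sketch of the technique (the names `Encode`/`Decode` are illustrative), not gnet's built-in codec API:

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

var errIncomplete = errors.New("incomplete frame")

// Encode prefixes the payload with its length as a 4-byte big-endian header,
// mirroring the idea behind a length-field-based frame codec.
func Encode(payload []byte) []byte {
	frame := make([]byte, 4+len(payload))
	binary.BigEndian.PutUint32(frame, uint32(len(payload)))
	copy(frame[4:], payload)
	return frame
}

// Decode extracts one complete frame from buf, returning the payload and the
// remaining bytes; it returns errIncomplete when more data must be read first.
func Decode(buf []byte) (payload, rest []byte, err error) {
	if len(buf) < 4 {
		return nil, buf, errIncomplete
	}
	n := binary.BigEndian.Uint32(buf)
	if len(buf) < 4+int(n) {
		return nil, buf, errIncomplete
	}
	return buf[4 : 4+int(n)], buf[4+int(n):], nil
}

func main() {
	// Two logical messages arriving back-to-back in one TCP read.
	stream := append(Encode([]byte("ping")), Encode([]byte("pong"))...)
	for len(stream) > 0 {
		msg, rest, err := Decode(stream)
		if err != nil {
			break // wait for more bytes from the socket
		}
		fmt.Println(string(msg))
		stream = rest
	}
}
```

Because TCP is a byte stream with no message boundaries, a loop like the one in `main` is what a protocol handler runs on every read: consume as many complete frames as the buffer holds, then keep the remainder for the next read.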

gnet derives from the project evio, but with much higher performance and more features.

🚀 Features

  • High-performance event loop based on a networking model of multiple threads/goroutines
  • Built-in goroutine pool powered by the library ants
  • Built-in memory pool for bytes powered by the library bytebufferpool
  • Lock-free during the entire runtime
  • Concise and easy-to-use APIs
  • Efficient and reusable memory buffer: Ring-Buffer
  • Supporting multiple protocols/IPC mechanisms: TCP, UDP and Unix Domain Socket
  • Supporting multiple load-balancing algorithms: Round-Robin, Source-Addr-Hash and Least-Connections
  • Supporting two event-driven mechanisms: epoll on Linux and kqueue on FreeBSD/DragonFly/Darwin
  • Supporting asynchronous write operation
  • Flexible ticker event
  • SO_REUSEPORT socket option
  • Multiple built-in codecs to encode/decode network frames into/from a TCP stream: LineBasedFrameCodec, DelimiterBasedFrameCodec, FixedLengthFrameCodec and LengthFieldBasedFrameCodec, referencing the netty codecs; customized codecs are also supported
  • Supporting the Windows platform, backed by the Go standard library net
  • Implementation of gnet Client
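
To make the load-balancing feature above concrete, here is a sketch of two of the named algorithms, Round-Robin and Least-Connections, in plain Go. The `eventLoop` type and its `connCount` field are illustrative stand-ins, not gnet's internal types:

```go
package main

import "fmt"

// eventLoop is a stand-in for one of gnet's event loops; connCount tracks how
// many connections are currently assigned to it.
type eventLoop struct {
	id        int
	connCount int
}

// roundRobin hands out event loops in a fixed rotation.
type roundRobin struct {
	loops []*eventLoop
	next  int
}

func (r *roundRobin) pick() *eventLoop {
	l := r.loops[r.next]
	r.next = (r.next + 1) % len(r.loops)
	return l
}

// leastConnections picks the event loop with the fewest assigned connections,
// so new connections gravitate toward the least-loaded loop.
func leastConnections(loops []*eventLoop) *eventLoop {
	best := loops[0]
	for _, l := range loops[1:] {
		if l.connCount < best.connCount {
			best = l
		}
	}
	return best
}

func main() {
	loops := []*eventLoop{{id: 0}, {id: 1}, {id: 2}}

	rr := &roundRobin{loops: loops}
	for i := 0; i < 4; i++ {
		fmt.Print(rr.pick().id, " ") // cycles 0, 1, 2, then wraps to 0
	}
	fmt.Println()

	loops[0].connCount, loops[1].connCount, loops[2].connCount = 5, 2, 7
	fmt.Println(leastConnections(loops).id) // loop 1 has the fewest connections
}
```

Round-Robin is cheapest and spreads connections evenly when they are short-lived and uniform; Least-Connections adapts better when connection lifetimes vary widely, at the cost of scanning the loops on each accept.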

📊 Performance

Benchmarks on TechEmpower

# Hardware Environment
CPU: 28 HT Cores Intel(R) Xeon(R) Gold 5120 CPU @ 2.20GHz
Mem: 32GB RAM
OS : Ubuntu 18.04.3 4.15.0-88-generic #88-Ubuntu
Net: Switched 10-gigabit ethernet
Go : go1.14.x linux/amd64

All languages

This is the top 50 of the framework rankings across all programming languages, consisting of a total of 422 frameworks from all over the world, in which gnet is the runner-up.

Golang

This is the full framework ranking for Go, where gnet tops all the other frameworks, making gnet the fastest networking framework in Go.

To see the full ranking list, visit TechEmpower Plaintext Benchmark.

Comparison with similar networking libraries

On Linux (epoll)

Test Environment

# Machine information
        OS : Ubuntu 20.04/x86_64
       CPU : 8 processors, AMD EPYC 7K62 48-Core Processor
    Memory : 16.0 GiB

# Go version and settings
Go Version : go1.15.7 linux/amd64
GOMAXPROCS : 8

# Network settings
TCP connections : 300
Test duration   : 30s

Echo Server

HTTP Server

On FreeBSD (kqueue)

Test Environment

# Machine information
        OS : macOS Catalina 10.15.7/x86_64
       CPU : 6-Core Intel Core i7
    Memory : 16.0 GiB

# Go version and configurations
Go Version : go1.15.7 darwin/amd64
GOMAXPROCS : 12

# Network settings
TCP connections : 100
Test duration   : 20s

Echo Server

HTTP Server

🏛 Website

Please visit the official website for more details about architecture, usage and other information of gnet.

⚠️ License

Source code in gnet is available under the MIT License.

👏 Contributors

Please read the Contributing Guidelines before opening a PR, and thank you to all the developers who have already made contributions to gnet!

Relevant Articles

🎡 User cases

The following companies/organizations use gnet as the underlying network service in production.

      

If your project is also using gnet, feel free to open a pull request to refresh this list of user cases.

💰 Backers

Support us with a monthly donation and help us continue our activities.

💎 Sponsors

Become a bronze sponsor with a monthly donation of $10 and get your logo on our README on GitHub.

☕️ Buy me a coffee

Please be sure to leave your name, GitHub account, or other social media accounts when you donate by the following means so that I can add you to the list of donors as a token of my appreciation.

        

💴 Donors

Patrick Othmer Jimmy ChenZhen Mai Yang 王开帅 Unger Alejandro

💵 Paid Support

If you need a tailored version of gnet, or want the author to help with development, bug fixes, fast issue resolution, or consultation that takes significant effort, you can request paid support here.

🔑 JetBrains OS licenses

gnet has been developed with the GoLand IDE under the free JetBrains Open Source licenses granted by JetBrains s.r.o., and I would like to express my thanks for that here.

🔋 Sponsorship

This project is supported by:
