High-performance, non-blocking, event-driven, easy-to-use networking framework written in Go, supporting TLS, HTTP 1.x, and WebSocket.

NBIO - NON-BLOCKING IO

Mentioned in Awesome Go



Features

  • Linux: epoll
  • macOS (BSD): kqueue
  • Windows: Go std net
  • nbio.Conn implements a non-blocking net.Conn (except on Windows)
  • writev supported
  • minimal dependencies
  • TLS supported
  • HTTP/HTTPS 1.x
  • WebSocket, passes the Autobahn Test Suite
  • HTTP 2.0

Installation

  1. Get and install nbio:
$ go get -u github.com/lesismal/nbio
  2. Import in your code:
import "github.com/lesismal/nbio"

Quick Start

  • Start a server:
import "github.com/lesismal/nbio"

g := nbio.NewGopher(nbio.Config{
    Network: "tcp",
    Addrs:   []string{"localhost:8888"},
})

// echo
g.OnData(func(c *nbio.Conn, data []byte) {
    c.Write(append([]byte{}, data...))
})

err := g.Start()
if err != nil {
    panic(err)
}
// ...
  • Start a client:
import "github.com/lesismal/nbio"

g := nbio.NewGopher(nbio.Config{})

g.OnData(func(c *nbio.Conn, data []byte) {
    // ...
})

err := g.Start()
if err != nil {
    fmt.Printf("Start failed: %v\n", err)
}
defer g.Stop()

c, err := nbio.Dial("tcp", addr)
if err != nil {
    fmt.Printf("Dial failed: %v\n", err)
}
g.AddConn(c)

buf := make([]byte, 1024)
c.Write(buf)
// ...

API Examples

New Gopher For Server-Side

g := nbio.NewGopher(nbio.Config{
    Network: "tcp",
    Addrs:   []string{"localhost:8888"},
})

New Gopher For Client-Side

g := nbio.NewGopher(nbio.Config{})

Start Gopher

err := g.Start()
if err != nil {
    fmt.Printf("Start failed: %v\n", err)
}
defer g.Stop()

Custom Other Config For Gopher

conf := nbio.Config{
    // Name describes your gopher's name for logging; it defaults to "NB".
    Name: "NB",

    // MaxLoad represents the max number of online connections; it defaults to 10k.
    MaxLoad: 1024 * 10,

    // NListener represents the number of listener goroutines on *nix; it defaults to 1.
    NListener: 1,

    // NPoller represents the number of poller goroutines; it defaults to runtime.NumCPU().
    NPoller: runtime.NumCPU(),

    // ReadBufferSize represents the buffer size used for reading; it defaults to 16k.
    ReadBufferSize: 1024 * 16,

    // MaxWriteBufferSize represents the max write buffer size for a Conn; it defaults to 1m.
    // If the connection's Send-Q is full and the data cached by nbio exceeds
    // MaxWriteBufferSize, the connection is closed by nbio.
    MaxWriteBufferSize: 1024 * 1024,

    // LockListener determines whether the listener goroutine locks its OS thread; it defaults to false.
    LockListener: false,

    // LockPoller determines whether the poller goroutines lock their OS threads; it defaults to false.
    LockPoller: false,
}
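
The conf above is then passed to NewGopher, as in the earlier examples:

g := nbio.NewGopher(conf)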

SetDeadline/SetReadDeadline/SetWriteDeadline

var c *nbio.Conn = ...
c.SetDeadline(time.Now().Add(time.Second * 10))
c.SetReadDeadline(time.Now().Add(time.Second * 10))
c.SetWriteDeadline(time.Now().Add(time.Second * 10))

Bind User Session With Conn

var c *nbio.Conn = ...
var session *YourSessionType = ... 
c.SetSession(session)
var c *nbio.Conn = ...
session := c.Session().(*YourSessionType)
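
Putting these together, a minimal sketch of the typical session lifecycle (YourSessionType is a placeholder for your own type; the OnClose error parameter follows recent nbio versions):

g.OnOpen(func(c *nbio.Conn) {
    c.SetSession(&YourSessionType{}) // bind a fresh session when the connection opens
})
g.OnData(func(c *nbio.Conn, data []byte) {
    session := c.Session().(*YourSessionType) // retrieve it on every read
    _ = session
})
g.OnClose(func(c *nbio.Conn, err error) {
    // the session is released together with the connection here
})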

Writev / Batch Write

var c *nbio.Conn = ...
var data [][]byte = ...
c.Writev(data)
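
For example, a frame header and its payload can be flushed together without concatenating them first (the 2-byte length prefix is illustrative, not an nbio requirement):

header := []byte{0x00, 0x05}
payload := []byte("hello")
c.Writev([][]byte{header, payload}) // both slices are written in one batch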

Handle New Connection

g.OnOpen(func(c *Conn) {
    // ...
    c.SetReadDeadline(time.Now().Add(time.Second*30))
})

Handle Disconnected

g.OnClose(func(c *Conn) {
    // clear sessions from user layer
})

Handle Data

g.OnData(func(c *Conn, data []byte) {
    // decode data
    // ...
})

Handle Memory Allocation/Free For Reading

import "sync"

var memPool = sync.Pool{
    New: func() interface{} {
        return make([]byte, yourSize)
    },
}

g.OnReadBufferAlloc(func(c *Conn) []byte {
    return memPool.Get().([]byte)
})
g.OnReadBufferFree(func(c *Conn, b []byte) {
    memPool.Put(b)
})

Handle Memory Free For Writing

g.OnWriteBufferFree(func(c *Conn, b []byte) {
    // ...
})

Handle Conn Before Read

// BeforeRead registers a callback before syscall.Read;
// the handler is called only on Windows.
g.BeforeRead(func(c *Conn) {
    c.SetReadDeadline(time.Now().Add(time.Second * 30))
})

Handle Conn After Read

// AfterRead registers a callback after syscall.Read;
// the handler is called only on *nix.
g.AfterRead(func(c *Conn) {
    c.SetReadDeadline(time.Now().Add(time.Second * 30))
})

Handle Conn Before Write

g.BeforeWrite(func(c *Conn) {
    c.SetWriteDeadline(time.Now().Add(time.Second * 5))
})

Std Net Examples

Echo Examples

TLS Examples

HTTP Examples

HTTPS Examples

HTTP 1M Connections Examples

Websocket Examples

Websocket TLS Examples

Websocket 1M Connections Examples

Bench Examples

Refer to this test, or write your own test cases.

Use With Other STD-Based Frameworks
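
For instance, a standard http.ServeMux can be served through nbio's nbhttp server; a minimal sketch, following the nbhttp usage that appears in the comments below (treat the exact Config fields as assumptions for your nbio version):

import (
    "net/http"

    "github.com/lesismal/nbio/nbhttp"
)

mux := http.NewServeMux()
mux.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("hello"))
})

svr := nbhttp.NewServer(nbhttp.Config{
    Network: "tcp",
    Addrs:   []string{"localhost:8080"},
    Handler: mux,
})
if err := svr.Start(); err != nil {
    panic(err)
}
defer svr.Stop()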

Owner

lesismal ("less is more.")
Comments
  • When using the websocket server, clients disconnect right after connecting

    Version: v1.2.21. Environment: the golang:1.19 image in k8s. Problem: a client disconnects right after the websocket upgrade completes, and no error shows up in Close. Logs: the call stack in the logs doesn't reveal anything either. [image] Code: when a connection comes in:

    ctx := r.Context()
    snPassword := mux.Vars(r)["charge_station"]
    // extract the SN
    if snPassword == "" {
        w.WriteHeader(http.StatusBadRequest)
        _, _ = w.Write(responseErr(ctx, protocol.ErrGenericError, "sn empty"))
        return
    }

    var password string
    spilt := strings.Split(snPassword, ":")
    if len(spilt) > 1 {
        password = spilt[1]
    }

    sn := spilt[0]
    var certSerialNumber string
    if r.TLS != nil {
        if len(r.TLS.PeerCertificates) > 0 {
            log.Logger().Debug("SN in certificate:", zap.String("sn", r.TLS.PeerCertificates[0].Subject.OrganizationalUnit[0]))
            log.Logger().Debug("actual SN:", zap.String("sn", sn))
            certInfo := r.TLS.PeerCertificates[0]
            if certInfo.Subject.OrganizationalUnit[0] != sn {
                w.WriteHeader(http.StatusBadRequest)
                _, _ = w.Write(responseErr(ctx, protocol.ErrGenericError, "sn param does not match the SN in the certificate"))
                return
            }
            certSerialNumber = certInfo.SerialNumber.String()
        }
    }
    port := strings.Split(r.Host, ":")[1]
    remoteAddress := readUserIP(r)
    accVerReq := &api.AccessVerifyRequest{
        DeviceSerialNumber:    sn,
        DeviceProtocol:        Hub.Protocol,
        DeviceProtocolVersion: Hub.ProtocolVersion,
        RemoteAddress:         remoteAddress,
        RequestPort:           port,
    }

    if certSerialNumber != "" {
        accVerReq.CertSerialNumber = certSerialNumber
    }
    if password != "" {
        accVerReq.Password = password
    }
    // verify that access is allowed
    response, err := api.AccessVerify(accVerReq)
    if err != nil {
        w.WriteHeader(http.StatusBadRequest)
        _, _ = w.Write(responseErr(ctx, protocol.ErrGenericError, "access verify failed"+err.Error()))
        log.Logger().Error(err.Error(), zap.String("sn", sn))
        return
    }

    // upgrade to the websocket protocol
    ws, err := s.upgrader.Upgrade(w, r, nil)
    if err != nil {
        responseErr(ctx, protocol.ErrGenericError, err.Error())
        return
    }
    log.Logger().Debug("connection established", zap.String("sn", sn))
    conn := ws.(*websocket.Conn)
    if response != nil {
        // parse the platform ID
        coreId, _ := strconv.ParseUint(response.CoreID, 10, 64)
        if _client, ok := Hub.Clients.Load(coreId); ok {
            _client.(lib.ClientInterface).Close(errors.New("old connection still exists, closing it first"))
        }
        // create the new client
        client := NewClient(interfaces.NewDefaultChargeStation(sn, response.Registered, coreId), Hub, conn, response.KeepAlive, remoteAddress, log.Logger(), true)
        // set the base route
        client.SetBaseURL(response.BaseURL)
        client.SetClientOfflineFunc(func(err error) {
            // if already registered, the platform must be notified
            if client.ChargeStation().Registered() {
                reason := "websocket connection closed"
                _ = ClientOfflineNotify(client.ChargeStation(), reason)
            }
            Hub.RegClients.Delete(client.ChargeStation().CoreID())
        })
        // read the key used to encrypt/decrypt data
        encryptKey := readEncryptKey(r)
        if len(encryptKey) != 0 {
            client.SetEncryptKey(encryptKey)
        }
        // if registered, subscribe to command messages; otherwise subscribe to registration messages
        if client.ChargeStation().Registered() {
            go client.SubMQTT()
        } else {
            go client.SubRegMQTT()
        }
        conn.SetSession(client)
    }

    // when the connection closes
    c.once.Do(func() {
        if err != nil {
            c.log.Error(err.Error(), zap.String("sn", c.chargeStation.SN()))
        }
        c.hub.Clients.Delete(c.chargeStation.CoreID())
        c.hub.RegClients.Delete(c.chargeStation.CoreID())
        c.log.Error("closing connection", zap.String("sn", c.chargeStation.SN()))
        close(c.close)
        c.clientOfflineNotifyFunc(err)
    })
    return nil

    As you can see, the log prints "connection established", which means the Upgrade succeeded, but it immediately goes into Close, and no error occurred.

  • Should I receive one OnClose for every connection created and closed?

    Hi. I am using nbio hard and it is performing very well. But here might be a small issue: I depend on OnClose for properly reverting things to the initial state. For example, I create a bunch of connections (say, 20000) using a synthetic test tool, and then I close all of these connections at once. In that case I often end up with a small percentage of connections never receiving an OnClose event. The problem seems to occur especially when I just kill the client tool, ending all connections without properly closing them down. The vast majority of the connections still receives its OnClose event on the server side, but some (say, 279 out of 20000) do not. This could very well be an oddity in my app, but I can't find the issue.

    NOTE: I have print statements in OnClose. As a result it often takes a significant amount of time to go through all OnClose events. Could this be causing the issue?

    Same context: What do the different OnClose error messages mean: 1. no error, 2. "EOF", 3. "broken pipe", and 4. "connection timed out"? What can I derive from these error messages? Do any of them demand special (different) action on my part? Currently I am NOT closing any connection for which I receive an OnClose event (assuming they are closed already). Is this the correct handling of it?
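
    A small counting harness can help verify the OnOpen/OnClose pairing described above; this is a sketch using the engine callbacks documented earlier on this page (the OnClose error parameter follows recent nbio versions), not a fix:

    import "sync/atomic"

    var opened, closed int64
    g.OnOpen(func(c *nbio.Conn) {
        atomic.AddInt64(&opened, 1)
    })
    g.OnClose(func(c *nbio.Conn, err error) {
        // count instead of printing; heavy printing inside OnClose can slow the pollers
        atomic.AddInt64(&closed, 1)
    })
    // compare the two counters after killing the client tool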

  • About duplicate connections

    I use a route parameter to determine a connection's ID. On every websocket upgrade I first check the connection pool to see whether the connection already exists, close the old connection, block until its close channel is closed, then open a new connection and set up its keepalive. But when a large number of duplicate upgrade requests arrives, I get lots of read timeouts, which causes the new connections to be deleted from the pool. I don't know whether my strategy is unsuitable or whether this is a bug. May I ask what a good way to handle this situation would be?

  • How to read messages from *websocket.Conn?

    Hi,

    I am looking at this example https://github.com/lesismal/nbio-examples/blob/master/websocket_1m/server/server.go and I have a few questions.

    1. I see that in the 1m websocket server you are using mux, but in the readme I see this:

    g := nbio.NewGopher(nbio.Config{
        Network: "tcp",
        Addrs:   []string{"localhost:8888"},
    })

    Which one should I use for a large number of websocket connections, say 1m?

    2. Why doesn't the connection object have OnMessage? Why does only the upgrader have it? What is the way to read messages without allocating more buffers in the application, especially since my goal is to scale to 1m websockets?

    3. I want to send a stream of notifications over 1m socket connections, so what is the best way to approach that using this library? This is the only library I found that can use epoll as well as reuse buffers for a low memory footprint. Please advise.
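
    For reference, in nbio the websocket message callback is registered on the upgrader before Upgrade returns the connection, as the examples further down this page show; a minimal sketch:

    upgrader := websocket.NewUpgrader()
    upgrader.OnMessage(func(c *websocket.Conn, messageType websocket.MessageType, data []byte) {
        // copy data if it must outlive this callback; nbio may reuse the buffer
        fmt.Println("OnMessage:", messageType, string(data))
    })
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        panic(err)
    }
    _ = conn.(*websocket.Conn)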

  • Concurrency issues with NewServerTLS()

    In some cases my server may want to disconnect an existing websocket connection. It is a case that does not show up in any of your examples. Imagine you would like to do the following here:

    if qps == 1000 {
        wsConn.Close()
    }

    If I do something like this, the connection will get closed and the client will receive an OnClose event. But wsConn.Close() then hangs. Any idea what could be causing this?

  • Keeping many idle clients alive

    In another thread you said:

    which would cost lots of goroutines when there are lots of connections, problems about mem/gc/schedule/stw comes with the huge number of goroutines.

    My current keep-alive implementation uses 1 goroutine per client to send PingMessages at the perfect frequency. It uses case <-time.After():, which is simple (almost elegant) and works really well. But it creates a lot of goroutines.

    The alternative "1 goroutine for all clients" implementation that I can think of would be way more complex, and I am not certain it would really be so much more efficient. Any thoughts you can share on this?
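
    For comparison, a minimal sketch of the "1 goroutine for all clients" variant: one ticker sweeps a shared registry and pings every client whose interval has elapsed (the conns registry and pingInterval are assumptions, not nbio API; only WriteMessage/PingMessage come from nbio's websocket package):

    import (
        "sync"
        "time"

        "github.com/lesismal/nbio/nbhttp/websocket"
    )

    var conns sync.Map // *websocket.Conn -> time.Time of last ping; hypothetical registry

    // register connections elsewhere, e.g.:
    //   upgrader.OnOpen(func(c *websocket.Conn) { conns.Store(c, time.Now()) })
    //   upgrader.OnClose(func(c *websocket.Conn, err error) { conns.Delete(c) })

    func pingLoop(pingInterval time.Duration) {
        ticker := time.NewTicker(time.Second) // one sweep per second for all clients
        defer ticker.Stop()
        for now := range ticker.C {
            conns.Range(func(key, value interface{}) bool {
                c := key.(*websocket.Conn)
                if now.Sub(value.(time.Time)) >= pingInterval {
                    c.WriteMessage(websocket.PingMessage, nil) // best-effort ping
                    conns.Store(c, now)
                }
                return true
            })
        }
    }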

  • Auto-closing WS in 2 min

    Hi.

    Just hit an issue where the websocket automatically closes itself 2 minutes after the connection was established. The error message tells me: websocket conn is closed [::1]:33776 read timeout. So it looks like a net.Conn.SetReadDeadline(2min) call is made somewhere inside and never made again after a message is received.

    Maybe it's my responsibility to call this method after a message is received, but I think it should be clearer whether I should call it or not. Moreover, my thinking is simple: "If I didn't make the first call, why should I make the next ones?".

    The example is simple. Just open the connection and wait. You will see your connection closed in 2 minutes.

    Example
    ```golang
    package main
    
    import (
    	"context"
    	"fmt"
    	"log"
    	"net/http"
    	"os"
    	"os/signal"
    	"sync"
    	"syscall"
    	"time"
    
    	"github.com/lesismal/nbio/nbhttp"
    	"github.com/lesismal/nbio/nbhttp/websocket"
    )
    
    func OnWebsocket(w http.ResponseWriter, r *http.Request) {
    	upgrader := websocket.NewUpgrader()
    	upgrader.CheckOrigin = func(_ *http.Request) bool {
    		return true
    	}
    	upgrader.OnMessage(func(c *websocket.Conn, messageType websocket.MessageType, data []byte) {
    		fmt.Println(time.Now().String(), "websocket:", messageType, string(data), len(data))
    		c.WriteMessage(messageType, data)
    	})
    	upgrader.OnOpen(func(c *websocket.Conn) {
    		fmt.Println(time.Now().String(), "websocket conn is open", c.RemoteAddr().String())
    	})
    	upgrader.OnClose(func(c *websocket.Conn, err error) {
    		fmt.Println(time.Now().String(), "websocket conn is closed", c.RemoteAddr().String(), err)
    	})
    
    	conn, err := upgrader.Upgrade(w, r, nil)
    	if err != nil {
    		panic(err)
    	}
    	wsConn := conn.(*websocket.Conn)
    	_ = wsConn
    }
    
    func main() {
    
    	ctx, cancelFunc := context.WithCancel(context.Background())
    	wg := new(sync.WaitGroup)
    
    	sig := make(chan os.Signal, 1)
    	defer close(sig)
    	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM, syscall.SIGHUP)
    
    	srv1 := nbhttp.NewServer(nbhttp.Config{
    		Addrs: []string{":13000"},
    		Handler: http.HandlerFunc(OnWebsocket),
    	})
    
    	if err := srv1.Start(); err != nil {
    		log.Fatalln("Failed to start server 1", err)
    	}
    
    	wg.Add(1)
    	go func() {
    		defer wg.Done()
    		<-ctx.Done()
    
    		ctx, cancelFunc := context.WithTimeout(context.Background(), 5 * time.Second)
    		defer cancelFunc()
    
    		if err := srv1.Shutdown(ctx); err != nil {
    			log.Println("Failed to shutdown server 1", err)
    		}
    	}()
    
    	<-sig
    	cancelFunc()
    
    	wg.Wait()
    }
    ```
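
    One workaround that appears in a later comment on this page: adjust the read deadline right after the upgrade (a sketch; whether a zero deadline fully disables the idle timeout is an assumption):

    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        panic(err)
    }
    wsConn := conn.(*websocket.Conn)
    wsConn.SetReadDeadline(time.Time{}) // zero time: no read deadline, so idle connections are not reaped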
    
  • Cannot reconnect on the TLS websocket example

    When I have the server running in another shell:

    $ go run .
    2021/05/13 17:11:43 connecting to wss://localhost:8888/wss
    2021/05/13 17:11:43 write: hello world
    2021/05/13 17:11:43 read : hello world
    2021/05/13 17:11:44 write: hello world
    2021/05/13 17:11:44 read : hello world
    2021/05/13 17:11:45 write: hello world
    2021/05/13 17:11:45 read : hello world
    2021/05/13 17:11:46 write: hello world
    2021/05/13 17:11:46 read : hello world
    ^Csignal: interrupt
    $ go run .
    2021/05/13 17:11:49 connecting to wss://localhost:8888/wss
    2021/05/13 17:11:49 dial:EOF
    exit status 1
    $ go run .
    2021/05/13 17:11:51 connecting to wss://localhost:8888/wss
    2021/05/13 17:11:51 dial:EOF
    exit status 1
    

    the server is still running in the other shell

    I am on a Mac, if that helps

  • Webtransport

    Hi, I've been using nbio for a while now and I love the package. Thank you for building it :)

    I've been looking into Webtransport as a replacement for websockets. I understand that, with nbio, your goal was to implement a truly "non-blocking io", and I think the library perfectly reaches this goal.

    I was wondering if you were following the development of Webtransport (over http/3 - quic), as that seems a likely replacement for websocket moving forward, and what your thoughts were on implementing the protocol itself within nbio (or an nbio version that uses webtransport in place of websockets).

  • After upgrader.Upgrade(w, r, nil), if the client sends a message right away and the wsConn.OnMessage handler is set with a delay, the message will be missed; the handler should be set when calling upgrader.Upgrade

    func onWebsocket(w http.ResponseWriter, r *http.Request) {
        upgrader := websocket.NewUpgrader()
        upgrader.EnableCompression = true
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            panic(err)
        }
        wsConn := conn.(*websocket.Conn)
        wsConn.EnableWriteCompression(true)
        wsConn.SetReadDeadline(time.Time{})
        wsConn.OnMessage(func(c *websocket.Conn, messageType websocket.MessageType, data []byte) {
            // echo
            if *print {
                switch messageType {
                case websocket.TextMessage:
                    fmt.Println("OnMessage:", messageType, string(data), len(data))
                case websocket.BinaryMessage:
                    fmt.Println("OnMessage:", messageType, data, len(data))
                }
            }
            c.WriteMessage(messageType, data)
        })
        wsConn.OnClose(func(c *websocket.Conn, err error) {
            if *print {
                fmt.Println("OnClose:", c.RemoteAddr().String(), err)
            }
        })
        if *print {
            fmt.Println("OnOpen:", wsConn.RemoteAddr().String())
        }
    }

  • Support TLS and non-TLS servers and clients with the same engine/mux

    Not sure if all of these together are supported at the same time, or if it's better to create multiple servers/engines.

    Currently my server is using nbio for the TLS connection, the stock Go http server for the non-TLS connection, and gorilla for the outbound websocket connections.

    I'm wondering if it is a good idea/possible to use a single common nbio engine for all these scenarios and share the mux for the TLS and non-TLS servers.
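
    A sketch of what that sharing could look like: one mux behind one nbhttp server with both non-TLS and TLS listeners (AddrsTLS and TLSConfig are assumptions about nbhttp.Config for your nbio version; cert.pem/key.pem are placeholders):

    import (
        "crypto/tls"
        "net/http"

        "github.com/lesismal/nbio/nbhttp"
    )

    mux := http.NewServeMux()
    mux.HandleFunc("/ws", onWebsocket) // the same handler tree serves both listeners

    cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
    if err != nil {
        panic(err)
    }

    svr := nbhttp.NewServer(nbhttp.Config{
        Network:   "tcp",
        Addrs:     []string{":8080"}, // non-TLS listener
        AddrsTLS:  []string{":8443"}, // TLS listener on the same engine (assumed field)
        TLSConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
        Handler:   mux,
    })
    if err := svr.Start(); err != nil {
        panic(err)
    }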

  • Benchmark against FastHTTP

    I think it would be interesting to see how NBIO compares to FastHTTP (https://github.com/valyala/fasthttp) in terms of memory usage and performance. Maybe we should make an effort to test that out?

  • Crash on exit when using IOModMixed

    My code is using the NBIO server, and I recently started playing with IOModMixed. Now when I try to stop my main process, I see the following:

    2023/01/03 03:36:16 Starting router 
    2023/01/03 03:36:16 Added realm: realm1
    2023/01/03 03:36:16.841 [INF] NBIO[oriconn] start
    2023/01/03 03:36:16.841 [INF] Serve  NonTLS On: [tcp@[::]:8080]
    ^Cpanic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x6c4087]
    
    goroutine 85 [running]:
    github.com/lesismal/nbio/nbhttp.(*Engine).AddConnNonTLSNonBlocking(0xc0001a6090, {0x0, 0x0}, 0x0?, 0x0?)
    	/home/om26er/go/pkg/mod/github.com/lesismal/[email protected]/nbhttp/engine.go:566 +0x347
    github.com/lesismal/nbio/nbhttp.(*Engine).listen.func1()
    	/home/om26er/go/pkg/mod/github.com/lesismal/[email protected]/nbhttp/engine.go:273 +0x185
    created by github.com/lesismal/nbio/nbhttp.(*Engine).listen
    	/home/om26er/go/pkg/mod/github.com/lesismal/[email protected]/nbhttp/engine.go:265 +0x105
    