Fast HTTP package for Go. Tuned for high performance. Zero memory allocations in hot paths. Up to 10x faster than net/http


FastHTTP – Fast and reliable HTTP implementation in Go

Fast HTTP implementation for Go.

Currently fasthttp is successfully used by VertaMedia in production, serving up to 200K rps from more than 1.5M concurrent keep-alive connections per physical server.

TechEmpower Benchmark round 19 results

Server Benchmarks

Client Benchmarks

Install

Documentation

Examples from docs

Code examples

Awesome fasthttp tools

Switching from net/http to fasthttp

Fasthttp best practices

Tricks with byte buffers

Related projects

FAQ

HTTP server performance comparison with net/http

In short, fasthttp server is up to 10 times faster than net/http. Below are benchmark results.

GOMAXPROCS=1

net/http server:

$ GOMAXPROCS=1 go test -bench=NetHTTPServerGet -benchmem -benchtime=10s
BenchmarkNetHTTPServerGet1ReqPerConn                	 1000000	     12052 ns/op	    2297 B/op	      29 allocs/op
BenchmarkNetHTTPServerGet2ReqPerConn                	 1000000	     12278 ns/op	    2327 B/op	      24 allocs/op
BenchmarkNetHTTPServerGet10ReqPerConn               	 2000000	      8903 ns/op	    2112 B/op	      19 allocs/op
BenchmarkNetHTTPServerGet10KReqPerConn              	 2000000	      8451 ns/op	    2058 B/op	      18 allocs/op
BenchmarkNetHTTPServerGet1ReqPerConn10KClients      	  500000	     26733 ns/op	    3229 B/op	      29 allocs/op
BenchmarkNetHTTPServerGet2ReqPerConn10KClients      	 1000000	     23351 ns/op	    3211 B/op	      24 allocs/op
BenchmarkNetHTTPServerGet10ReqPerConn10KClients     	 1000000	     13390 ns/op	    2483 B/op	      19 allocs/op
BenchmarkNetHTTPServerGet100ReqPerConn10KClients    	 1000000	     13484 ns/op	    2171 B/op	      18 allocs/op

fasthttp server:

$ GOMAXPROCS=1 go test -bench=kServerGet -benchmem -benchtime=10s
BenchmarkServerGet1ReqPerConn                       	10000000	      1559 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet2ReqPerConn                       	10000000	      1248 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet10ReqPerConn                      	20000000	       797 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet10KReqPerConn                     	20000000	       716 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet1ReqPerConn10KClients             	10000000	      1974 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet2ReqPerConn10KClients             	10000000	      1352 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet10ReqPerConn10KClients            	20000000	       789 ns/op	       2 B/op	       0 allocs/op
BenchmarkServerGet100ReqPerConn10KClients           	20000000	       604 ns/op	       0 B/op	       0 allocs/op

GOMAXPROCS=4

net/http server:

$ GOMAXPROCS=4 go test -bench=NetHTTPServerGet -benchmem -benchtime=10s
BenchmarkNetHTTPServerGet1ReqPerConn-4                  	 3000000	      4529 ns/op	    2389 B/op	      29 allocs/op
BenchmarkNetHTTPServerGet2ReqPerConn-4                  	 5000000	      3896 ns/op	    2418 B/op	      24 allocs/op
BenchmarkNetHTTPServerGet10ReqPerConn-4                 	 5000000	      3145 ns/op	    2160 B/op	      19 allocs/op
BenchmarkNetHTTPServerGet10KReqPerConn-4                	 5000000	      3054 ns/op	    2065 B/op	      18 allocs/op
BenchmarkNetHTTPServerGet1ReqPerConn10KClients-4        	 1000000	     10321 ns/op	    3710 B/op	      30 allocs/op
BenchmarkNetHTTPServerGet2ReqPerConn10KClients-4        	 2000000	      7556 ns/op	    3296 B/op	      24 allocs/op
BenchmarkNetHTTPServerGet10ReqPerConn10KClients-4       	 5000000	      3905 ns/op	    2349 B/op	      19 allocs/op
BenchmarkNetHTTPServerGet100ReqPerConn10KClients-4      	 5000000	      3435 ns/op	    2130 B/op	      18 allocs/op

fasthttp server:

$ GOMAXPROCS=4 go test -bench=kServerGet -benchmem -benchtime=10s
BenchmarkServerGet1ReqPerConn-4                         	10000000	      1141 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet2ReqPerConn-4                         	20000000	       707 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet10ReqPerConn-4                        	30000000	       341 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet10KReqPerConn-4                       	50000000	       310 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet1ReqPerConn10KClients-4               	10000000	      1119 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet2ReqPerConn10KClients-4               	20000000	       644 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet10ReqPerConn10KClients-4              	30000000	       346 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet100ReqPerConn10KClients-4             	50000000	       282 ns/op	       0 B/op	       0 allocs/op

HTTP client comparison with net/http

In short, fasthttp client is up to 10 times faster than net/http. Below are benchmark results.

GOMAXPROCS=1

net/http client:

$ GOMAXPROCS=1 go test -bench='HTTPClient(Do|GetEndToEnd)' -benchmem -benchtime=10s
BenchmarkNetHTTPClientDoFastServer                  	 1000000	     12567 ns/op	    2616 B/op	      35 allocs/op
BenchmarkNetHTTPClientGetEndToEnd1TCP               	  200000	     67030 ns/op	    5028 B/op	      56 allocs/op
BenchmarkNetHTTPClientGetEndToEnd10TCP              	  300000	     51098 ns/op	    5031 B/op	      56 allocs/op
BenchmarkNetHTTPClientGetEndToEnd100TCP             	  300000	     45096 ns/op	    5026 B/op	      55 allocs/op
BenchmarkNetHTTPClientGetEndToEnd1Inmemory          	  500000	     24779 ns/op	    5035 B/op	      57 allocs/op
BenchmarkNetHTTPClientGetEndToEnd10Inmemory         	 1000000	     26425 ns/op	    5035 B/op	      57 allocs/op
BenchmarkNetHTTPClientGetEndToEnd100Inmemory        	  500000	     28515 ns/op	    5045 B/op	      57 allocs/op
BenchmarkNetHTTPClientGetEndToEnd1000Inmemory       	  500000	     39511 ns/op	    5096 B/op	      56 allocs/op

fasthttp client:

$ GOMAXPROCS=1 go test -bench='kClient(Do|GetEndToEnd)' -benchmem -benchtime=10s
BenchmarkClientDoFastServer                         	20000000	       865 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd1TCP                      	 1000000	     18711 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd10TCP                     	 1000000	     14664 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd100TCP                    	 1000000	     14043 ns/op	       1 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd1Inmemory                 	 5000000	      3965 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd10Inmemory                	 3000000	      4060 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd100Inmemory               	 5000000	      3396 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd1000Inmemory              	 5000000	      3306 ns/op	       2 B/op	       0 allocs/op

GOMAXPROCS=4

net/http client:

$ GOMAXPROCS=4 go test -bench='HTTPClient(Do|GetEndToEnd)' -benchmem -benchtime=10s
BenchmarkNetHTTPClientDoFastServer-4                    	 2000000	      8774 ns/op	    2619 B/op	      35 allocs/op
BenchmarkNetHTTPClientGetEndToEnd1TCP-4                 	  500000	     22951 ns/op	    5047 B/op	      56 allocs/op
BenchmarkNetHTTPClientGetEndToEnd10TCP-4                	 1000000	     19182 ns/op	    5037 B/op	      55 allocs/op
BenchmarkNetHTTPClientGetEndToEnd100TCP-4               	 1000000	     16535 ns/op	    5031 B/op	      55 allocs/op
BenchmarkNetHTTPClientGetEndToEnd1Inmemory-4            	 1000000	     14495 ns/op	    5038 B/op	      56 allocs/op
BenchmarkNetHTTPClientGetEndToEnd10Inmemory-4           	 1000000	     10237 ns/op	    5034 B/op	      56 allocs/op
BenchmarkNetHTTPClientGetEndToEnd100Inmemory-4          	 1000000	     10125 ns/op	    5045 B/op	      56 allocs/op
BenchmarkNetHTTPClientGetEndToEnd1000Inmemory-4         	 1000000	     11132 ns/op	    5136 B/op	      56 allocs/op

fasthttp client:

$ GOMAXPROCS=4 go test -bench='kClient(Do|GetEndToEnd)' -benchmem -benchtime=10s
BenchmarkClientDoFastServer-4                           	50000000	       397 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd1TCP-4                        	 2000000	      7388 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd10TCP-4                       	 2000000	      6689 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd100TCP-4                      	 3000000	      4927 ns/op	       1 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd1Inmemory-4                   	10000000	      1604 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd10Inmemory-4                  	10000000	      1458 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd100Inmemory-4                 	10000000	      1329 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd1000Inmemory-4                	10000000	      1316 ns/op	       5 B/op	       0 allocs/op

Install

go get -u github.com/valyala/fasthttp

Switching from net/http to fasthttp

Unfortunately, fasthttp doesn't provide an API identical to net/http. See the FAQ for details. There is a net/http -> fasthttp handler converter, but it is better to write fasthttp request handlers by hand in order to use all of the fasthttp advantages (especially high performance :) ).

Important points:

  • Fasthttp works with RequestHandler functions instead of objects implementing the Handler interface. Fortunately, it is easy to pass bound struct methods to fasthttp:

    type MyHandler struct {
    	foobar string
    }
    
    // request handler in net/http style, i.e. method bound to MyHandler struct.
    func (h *MyHandler) HandleFastHTTP(ctx *fasthttp.RequestCtx) {
    	// notice that we may access MyHandler properties here - see h.foobar.
    	fmt.Fprintf(ctx, "Hello, world! Requested path is %q. Foobar is %q",
    		ctx.Path(), h.foobar)
    }
    
    // request handler in fasthttp style, i.e. just plain function.
    func fastHTTPHandler(ctx *fasthttp.RequestCtx) {
    	fmt.Fprintf(ctx, "Hi there! RequestURI is %q", ctx.RequestURI())
    }
    
    // pass bound struct method to fasthttp
    myHandler := &MyHandler{
    	foobar: "foobar",
    }
    fasthttp.ListenAndServe(":8080", myHandler.HandleFastHTTP)
    
    // pass plain function to fasthttp
    fasthttp.ListenAndServe(":8081", fastHTTPHandler)
  • The RequestHandler accepts only one argument - RequestCtx. It contains all the functionality required for http request processing and response writing. Below is an example of a simple request handler conversion from net/http to fasthttp.

    // net/http request handler
    requestHandler := func(w http.ResponseWriter, r *http.Request) {
    	switch r.URL.Path {
    	case "/foo":
    		fooHandler(w, r)
    	case "/bar":
    		barHandler(w, r)
    	default:
    		http.Error(w, "Unsupported path", http.StatusNotFound)
    	}
    }
    // the corresponding fasthttp request handler
    requestHandler := func(ctx *fasthttp.RequestCtx) {
    	switch string(ctx.Path()) {
    	case "/foo":
    		fooHandler(ctx)
    	case "/bar":
    		barHandler(ctx)
    	default:
    		ctx.Error("Unsupported path", fasthttp.StatusNotFound)
    	}
    }
  • Fasthttp allows setting response headers and writing response body in an arbitrary order. There is no 'headers first, then body' restriction like in net/http. The following code is valid for fasthttp:

    requestHandler := func(ctx *fasthttp.RequestCtx) {
    	// set some headers and status code first
    	ctx.SetContentType("foo/bar")
    	ctx.SetStatusCode(fasthttp.StatusOK)
    
    	// then write the first part of body
    	fmt.Fprintf(ctx, "this is the first part of body\n")
    
    	// then set more headers
    	ctx.Response.Header.Set("Foo-Bar", "baz")
    
    	// then write more body
    	fmt.Fprintf(ctx, "this is the second part of body\n")
    
    	// then override already written body
    	ctx.SetBody([]byte("this is completely new body contents"))
    
    	// then update status code
    	ctx.SetStatusCode(fasthttp.StatusNotFound)
    
    	// basically, anything may be updated many times before
    	// returning from RequestHandler.
    	//
    	// Unlike net/http fasthttp doesn't put response to the wire until
    	// returning from RequestHandler.
    }
  • Fasthttp doesn't provide ServeMux, but there are more powerful third-party routers and web frameworks with fasthttp support:

    net/http code with a simple ServeMux is trivially converted to fasthttp code:

    // net/http code
    
    m := &http.ServeMux{}
    m.HandleFunc("/foo", fooHandlerFunc)
    m.HandleFunc("/bar", barHandlerFunc)
    m.Handle("/baz", bazHandler)
    
    http.ListenAndServe(":80", m)
    // the corresponding fasthttp code
    m := func(ctx *fasthttp.RequestCtx) {
    	switch string(ctx.Path()) {
    	case "/foo":
    		fooHandlerFunc(ctx)
    	case "/bar":
    		barHandlerFunc(ctx)
    	case "/baz":
    		bazHandler.HandlerFunc(ctx)
    	default:
    		ctx.Error("not found", fasthttp.StatusNotFound)
    	}
    }
    
    fasthttp.ListenAndServe(":80", m)
  • net/http -> fasthttp conversion table:

    • All the pseudocode below assumes w, r and ctx have these types:
      var (
      	w http.ResponseWriter
      	r *http.Request
      	ctx *fasthttp.RequestCtx
      )
  • VERY IMPORTANT! Fasthttp disallows holding references to RequestCtx or to its members after returning from RequestHandler. Otherwise data races are inevitable. Carefully inspect all the net/http request handlers converted to fasthttp to check whether they retain references to RequestCtx or to its members after returning. RequestCtx provides the following band-aids for this case:

    • Wrap RequestHandler into TimeoutHandler.
    • Call TimeoutError before returning from RequestHandler if there are references to RequestCtx or to its members. See the example for more details.

Use this brilliant tool - the race detector - for detecting and eliminating data races in your program. If you detect a data race related to fasthttp in your program, there is a high probability that you forgot to call TimeoutError before returning from RequestHandler.
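The recommendations above boil down to one idiom: copy any bytes you need out of RequestCtx before the handler returns. A minimal stdlib-only sketch of that copy idiom (the buffer names are illustrative, not fasthttp internals):

```go
package main

import "fmt"

// detach returns a copy of b that does not alias the original backing
// array. Slices returned by ctx.Path(), ctx.PostBody() etc. point into
// buffers that fasthttp reuses for the next request, so only a copy is
// safe to retain after RequestHandler returns.
func detach(b []byte) []byte {
	return append([]byte(nil), b...)
}

func main() {
	internal := []byte("/foo") // stands in for a RequestCtx-owned buffer
	saved := detach(internal)
	internal[1] = 'X'          // simulate fasthttp reusing the buffer
	fmt.Println(string(saved)) // the copy still reads "/foo"
}
```

The same pattern applies before handing data to a goroutine that may outlive the handler.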

Performance optimization tips for multi-core systems

  • Use reuseport listener.
  • Run a separate server instance per CPU core with GOMAXPROCS=1.
  • Pin each server instance to a separate CPU core using taskset.
  • Ensure the interrupts of multiqueue network card are evenly distributed between CPU cores. See this article for details.
  • Use Go 1.13 as it provides some considerable performance improvements.
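The per-core recipe above can be sketched as a small launcher script (the ./server binary, its -port flag, and the 4-core count are assumptions for illustration):

```shell
#!/bin/sh
# Launch one server instance per CPU core, each with GOMAXPROCS=1
# and pinned to its own core with taskset.
for core in 0 1 2 3; do
    GOMAXPROCS=1 taskset -c "$core" ./server -port "$((8080 + core))" &
done
wait
```

A reuseport listener (or an external load balancer) is still needed so the instances can share a single public port.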

Fasthttp best practices

  • Do not allocate objects and []byte buffers - just reuse them as much as possible. The fasthttp API design encourages this.
  • sync.Pool is your best friend.
  • Profile your program in production. go tool pprof --alloc_objects your-program mem.pprof usually gives better insights for optimization opportunities than go tool pprof your-program cpu.pprof.
  • Write tests and benchmarks for hot paths.
  • Avoid conversion between []byte and string, since this may result in memory allocation+copy. Fasthttp API provides functions for both []byte and string - use these functions instead of converting manually between []byte and string. There are some exceptions - see this wiki page for more details.
  • Verify your tests and production code under race detector on a regular basis.
  • Prefer quicktemplate instead of html/template in your webserver.
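The sync.Pool advice above can be sketched like this (stdlib only; the pooled bytes.Buffer is an arbitrary example of a reusable object):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool reuses bytes.Buffer objects across calls instead of
// allocating a fresh buffer for every request.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer bufPool.Put(buf)
	buf.Reset() // a pooled buffer may hold data from a previous use
	fmt.Fprintf(buf, "Hello, %s!", name)
	return buf.String()
}

func main() {
	fmt.Println(render("world"))
}
```

Forgetting the Reset call is the classic bug with this pattern: pooled objects come back dirty.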

Tricks with []byte buffers

The following tricks are used by fasthttp. Use them in your code too.

  • Standard Go functions accept nil buffers
var (
	// both buffers are uninitialized
	dst []byte
	src []byte
)
dst = append(dst, src...)  // is legal if dst is nil and/or src is nil
copy(dst, src)  // is legal if dst is nil and/or src is nil
(string(src) == "")  // is true if src is nil
(len(src) == 0)  // is true if src is nil
src = src[:0]  // works like a charm with nil src

// this for loop doesn't panic if src is nil
for i, ch := range src {
	doSomething(i, ch)
}

So throw away nil checks for []byte buffers from your code. For example,

srcLen := 0
if src != nil {
	srcLen = len(src)
}

becomes

srcLen := len(src)
  • String may be appended to []byte buffer with append
dst = append(dst, "foobar"...)
  • []byte buffer may be extended to its capacity.
buf := make([]byte, 100)
a := buf[:10]  // len(a) == 10, cap(a) == 100.
b := a[:100]  // is valid, since cap(a) == 100.
  • All fasthttp functions accept nil []byte buffer
statusCode, body, err := fasthttp.Get(nil, "http://google.com/")
uintBuf := fasthttp.AppendUint(nil, 1234)
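The same nil-friendly append style carries over to the standard library's strconv.Append* helpers, which write into an existing (possibly nil) buffer instead of allocating a string (the helper name below is illustrative):

```go
package main

import (
	"fmt"
	"strconv"
)

// appendStatusLine writes into dst (which may be nil) using the
// allocation-free strconv.Append* helpers from the standard library.
func appendStatusLine(dst []byte, code int, dur float64) []byte {
	dst = append(dst, "status="...)
	dst = strconv.AppendInt(dst, int64(code), 10)
	dst = append(dst, " dur="...)
	dst = strconv.AppendFloat(dst, dur, 'f', 1, 64)
	return dst
}

func main() {
	fmt.Println(string(appendStatusLine(nil, 200, 1.5))) // status=200 dur=1.5
}
```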

Related projects

  • fasthttp - various useful helpers for projects based on fasthttp.
  • fasthttp-routing - fast and powerful routing package for fasthttp servers.
  • router - a high performance fasthttp request router that scales well.
  • fastws - Bloatless WebSocket package made for fasthttp to handle Read/Write operations concurrently.
  • gramework - a web framework made by one of the fasthttp maintainers.
  • lu - a high performance go middleware web framework which is based on fasthttp.
  • websocket - Gorilla-based websocket implementation for fasthttp.
  • fasthttpsession - a fast and powerful session package for fasthttp servers.
  • atreugo - High performance and extensible micro web framework with zero memory allocations in hot paths.
  • kratgo - Simple, lightweight and ultra-fast HTTP Cache to speed up your websites.
  • kit-plugins - go-kit transport implementation for fasthttp.
  • Fiber - An Expressjs inspired web framework running on Fasthttp
  • Gearbox - ⚙️ gearbox is a web framework written in Go with a focus on high performance and memory optimization

FAQ

  • Why create yet another HTTP package instead of optimizing net/http?

    Because the net/http API limits many optimization opportunities. For example:

    • net/http Request object lifetime isn't limited by request handler execution time. So the server must create a new request object for each request instead of reusing existing objects like fasthttp does.
    • net/http headers are stored in a map[string][]string. So the server must parse all the headers, convert them from []byte to string and put them into the map before calling the user-provided request handler. All this requires unnecessary memory allocations, which fasthttp avoids.
    • net/http client API requires creating a new response object for each request.
  • Why is the fasthttp API incompatible with net/http?

    Because the net/http API limits many optimization opportunities. See the answer above for more details. Also, certain parts of the net/http API are suboptimal for use.

  • Why doesn't fasthttp support HTTP/2.0 and WebSockets?

    HTTP/2.0 support is in progress. WebSocket support is already done. Third parties may also use RequestCtx.Hijack for implementing these goodies.

  • Are there known net/http advantages compared to fasthttp?

    Yes:

    • net/http supports HTTP/2.0 starting from go1.6.
    • net/http API is stable, while fasthttp API constantly evolves.
    • net/http handles more HTTP corner cases.
    • net/http should contain fewer bugs, since it is used and tested by a much wider audience.
    • net/http works on Go older than 1.5.
  • Why does the fasthttp API prefer returning []byte instead of string?

    Because []byte to string conversion isn't free - it requires a memory allocation and copy. Feel free to wrap a returned []byte result in string() if you prefer working with strings instead of byte slices. But be aware that this has non-zero overhead.
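For example, a path returned as []byte can often be inspected directly with the bytes package, with no string conversion at all (stdlib-only sketch; the prefix is an arbitrary example):

```go
package main

import (
	"bytes"
	"fmt"
)

// hasAPIPrefix inspects a []byte (e.g. the result of ctx.Path())
// directly with the bytes package, avoiding a []byte -> string
// conversion entirely.
func hasAPIPrefix(path []byte) bool {
	return bytes.HasPrefix(path, []byte("/api/"))
}

func main() {
	fmt.Println(hasAPIPrefix([]byte("/api/users")))  // true
	fmt.Println(hasAPIPrefix([]byte("/static/app"))) // false
}
```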

  • Which Go versions are supported by fasthttp?

    Go 1.5+. Older versions won't be supported, since their standard packages miss useful functions.

    NOTE: Go 1.9.7 is the oldest tested version. We recommend updating as soon as you can. As of 1.11.3 we will drop 1.9.x support.

  • Please provide real benchmark data and server information

    See this issue.

  • Are there plans to add request routing to fasthttp?

    There are no plans to add request routing into fasthttp. Use third-party routers and web frameworks with fasthttp support:

    See also this issue for more info.

  • I detected data race in fasthttp!

    Cool! File a bug. But before doing so, check whether your code retains references to RequestCtx or to its members after returning from RequestHandler (see the important points above).

  • I didn't find an answer for my question here

    Try exploring these questions.

Owner
Aliaksandr Valialkin
Working on @VictoriaMetrics
Comments
  • Fasthttp behind AWS load balancer. Keepalive connections are causing trouble

    Hi!

    We're using a light/fast fasthttp server as a proxy in our services infrastructure. However, we've been experiencing some issues when we use an Amazon Load Balancer (ALB). Sometimes (and this is random) the ALB returns 502 because the request can't reach the fasthttp service. Note that the ALB uses keepalive connections by default and that can't be changed.

    After a while doing some research, we suspected that fasthttp was closing the keepalive connections at some point, and the ALB couldn't re-use them, so it would return a 502.

    If we set Server.DisableKeepalive = true, everything works as expected (with a lot more load, of course).

    We reduced our implementation to the minimum to test:

    s := &fasthttp.Server{
    	Handler:     OurHandler,
    	Concurrency: fasthttp.DefaultConcurrency,
    }
    s.DisableKeepalive = true // If this is false, we see the error randomly.

    log.Fatal(s.ListenAndServe(":" + strconv.Itoa(port)))


    The handler basically does this:

        // h is an instance of *fasthttp.HostClient configured with some parameters
        if err := h.proxy.Do(req, resp); err != nil {
        	log.Error("error when proxying the request: ", err)
        }
    

    Is there any chance someone has experienced this? I'm not sure how we should proceed with the keepalive connections in the fasthttp.Server, as we are using pretty much all the default parameters.

    Thanks in advance!

  • CORS: allow every origin with credentials

    I have a simple question. I want to set "Access-Control-Allow-Credentials" to "true" and "Access-Control-Allow-Origin" to the current request origin, so that every origin is allowed to access my API. Which method should I use on the RequestCtx to retrieve the current request origin that is eligible to show up in the CORS header? I tried different ones like ctx.RemoteAddr().String() or ctx.Request.URI().Host(), but none of them worked.

    I want this because I'm implementing JWT authentication using the Authorization header, which is only sent if I allow credentials.

    Greetings

  • Response body io.Reader

    Hey guys. How do I set a custom response body writer? I've tried to use SetBodyStream, but I don't see any body reader on the response that I could pass as an io.Reader. P.S. I'm trying to implement a throttler.

  • Too many timeouts when working with high concurrency

    I use the fasthttp client in an application that collects information about millions of sites on the network. To do this really quickly and in parallel, I create a bunch of goroutines in which I execute c.httpClient.DoTimeout(...) requests.

    If I run no more than ~100 goroutines per core, I successfully receive answers for all requests. If I run more than 100 goroutines per core, some of the requests are interrupted by timeout and I get errors.

    The problem is definitely not in the sites themselves.

    I think there are some restrictions on the number of open connections or something like that, but I don’t know where to dig.

    Any hints would be much appreciated.

  • Is the master branch's code OK?

    1. Today I ran go get to fetch the latest fasthttp code, and I found that when the browser sends a request, it sometimes receives the error message "Error when parsing request".

    2. Why, when I use exec.Command(...).Output() in my handler, does the binary output "child exited"?

  • Error 7 at 1-3 Open Connections

    Hi there,

    I'm currently at a loss as to what is bottlenecking. As soon as I hit fasthttp with 1000+ reqs/s (same origin), it starts to fail. Curl throws: curl: (7) Couldn't connect to server.

    I'm using fasthttp with fasthttp-router and their default configurations. Seeing that open connections always stay below 5 out of 262144, I don't see what could be causing this and doubt it is caused by the app.

    Could this be a limit from the OS itself? Are there any Unix settings we need to change? In terms of connections we are using net.core.somaxconn = 16384.

    Thanks for your help

  • Response.ContentLength Not correct

    Code:

    if len(resp.Body()) > 0 {
    	logs.DebugLog("i got %d %s", resp.Header.ContentLength(), resp.String())
    }
    

    The log reads: [[DEBUG]]11:18:38 task.go:105: i got 90 HTTP/1.1 206 Partial Content Date: Mon, 25 May 2020 08:18:38 GMT Content-Type: text/plain; charset=utf-8 Content-Length: 89

    What??? resp.Header.ContentLength() = 90, but resp.String() thinks differently (Content-Length: 89).

    (I found this paradox when I wrote this code:

    if resp.Header.ContentLength() > 0 {
    	logs.StatusLog(resp.Body())
    }
    

    and got many empty rows in the log! Reading the body of an empty response doesn't do performance any favors :-) )

    Go 1.14, github.com/valyala/fasthttp v1.9.0

  • Panics on context close, v1.33.0

    Hello, unfortunately I could not find any more details beyond this message.

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x200 pc=0x901667]
    
    goroutine 19473 [running]:
    github.com/valyala/fasthttp.(*RequestCtx).Done(0x8c4639)
            /go/src/dsp/vendor/github.com/valyala/fasthttp/server.go:2691 +0x7
    context.propagateCancel.func1()
            /usr/local/go/src/context/context.go:280 +0x50
    created by context.propagateCancel
            /usr/local/go/src/context/context.go:278 +0x1d0
    
  • Suggestion: Continuous Fuzzing

    Hi, I'm Yevgeny Pats, founder of Fuzzit - a continuous-fuzzing-as-a-service platform.

    We have a free plan for OSS and I would be happy to contribute a PR if that's interesting. The PR will include the following:

    • go-fuzz fuzzers (this is a generic step, not connected to Fuzzit)
    • Continuous fuzzing of the master branch which will generate a new corpus and look for new crashes
    • Regression on every PR that will run the fuzzers through all the generated corpus and fixed crashes from the previous step. This will prevent new or old bugs from creeping into master.

    You can see our basic example here and you can see an example of "in the wild" integration here.

    Let me know if this is something worth working on.

    This might be related to https://github.com/valyala/fasthttp/issues/33

    Cheers, Yevgeny

  • Reverse Proxy?

    The golang httputil package has a ReverseProxy that will serve from an http.Request.

    Is there any comparable reverse proxy for fasthttp that will serve from a fasthttp.Request?

  • I wrote a simple HTTP proxy with fasthttp, but when I tested its speed I found the QPS is quite bad

    wrk report for the fasthttp proxy:

      ./wrk -c 1000 -t 1000 -d 30s http://127.0.0.1:8080/
    Running 30s test @ http://127.0.0.1:8080/
      1000 threads and 1000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   123.28ms  283.75ms   1.99s    88.85%
        Req/Sec    49.95     49.44   653.00     68.67%
      103307 requests in 30.12s, 12.31MB read
      Socket errors: connect 0, read 0, write 0, timeout 2795
      Non-2xx or 3xx responses: 2136
    Requests/sec:   3429.97
    Transfer/sec:    418.45KB
    

    wrk report for the nginx proxy:

     ./wrk -c 1000 -t 1000 -d 30s http://127.0.0.1:5555/
    Running 30s test @ http://127.0.0.1:5555/
      1000 threads and 1000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   245.24ms  329.09ms   1.25s    79.48%
        Req/Sec    42.25     35.39     1.97k    60.26%
      434163 requests in 30.11s, 100.60MB read
      Socket errors: connect 0, read 0, write 0, timeout 604
    Requests/sec:  14421.42
    Transfer/sec:      3.34MB
    

    The fasthttp proxy code:

    package main

    import (
    	"time"

    	"github.com/valyala/fasthttp"
    )

    func test(ctx *fasthttp.RequestCtx) {
    	time.Sleep(100)
    	ctx.WriteString("ok ")
    }

    func proxytest(ctx *fasthttp.RequestCtx) {
    	s := &fasthttp.HostClient{Addr: "127.0.0.1:666"}
    	req := fasthttp.AcquireRequest()
    	resp := fasthttp.AcquireResponse()
    	defer fasthttp.ReleaseRequest(req)
    	defer fasthttp.ReleaseResponse(resp)
    	ctx.Request.CopyTo(req)
    	if err := s.Do(req, resp); err != nil {
    		ctx.Error(err.Error(), 504)
    	}
    	resp.Header.Add("Server", "waf")
    	resp.WriteTo(ctx.Conn())
    }

    func main() {
    	web := &fasthttp.Server{Handler: proxytest}
    	test := &fasthttp.Server{TCPKeepalive: true, TCPKeepalivePeriod: 30 * time.Second, Handler: test}
    	go test.ListenAndServe(":666")
    	web.ListenAndServe(":8080")
    }
    

    The nginx config:

    location / {
    	proxy_pass http://127.0.0.1:666;
    	proxy_set_header Host $host;
    	proxy_set_header X-Real-IP $remote_addr;
    	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    	proxy_set_header REMOTE-HOST $remote_addr;

    	add_header X-Cache $upstream_cache_status;
    }
    

    Why does nginx reach 14421.42 requests/sec while the fasthttp proxy only reaches 3429.97? How can I improve my code?
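Not a definitive answer, but a few likely culprits stand out in the handler above: the HostClient is created per request (so there is no connection pooling at all), the handler keeps going after a proxy error, and the response is written to ctx.Conn() manually even though fasthttp also writes ctx.Response. A hedged sketch of the usual shape (untested against this exact setup; note also that time.Sleep(100) in the backend sleeps 100 nanoseconds, not 100 milliseconds):

```go
package main

import (
	"github.com/valyala/fasthttp"
)

// One shared client, so keep-alive connections to the backend are
// pooled across requests instead of being re-dialed every time.
var backend = &fasthttp.HostClient{Addr: "127.0.0.1:666"}

func proxy(ctx *fasthttp.RequestCtx) {
	if err := backend.Do(&ctx.Request, &ctx.Response); err != nil {
		ctx.Error(err.Error(), fasthttp.StatusBadGateway)
		return // don't touch the response after reporting the error
	}
	ctx.Response.Header.Add("Server", "waf")
	// No manual WriteTo: fasthttp sends ctx.Response when the handler returns.
}

func main() {
	server := &fasthttp.Server{Handler: proxy}
	server.ListenAndServe(":8080")
}
```

A production reverse proxy would also need hop-by-hop header handling, which this sketch omits.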

  • http client throttling

    I'm using fasthttp for an OpenRTB proxy that receives JSON requests from SSPs, changes the incoming JSON and forwards it to other DSPs. Under heavy load I see in the CPU profile that almost all CPU is spent on connecting to DSPs: Dial, then Syscall6 (Write). Load average is multiple times higher than the CPU count. The http client returns a lot of net.OpError errors. MaxConnsPerHost is 20480 and MaxIdleConnDuration is 1 hour, so keep-alive should work fine.

    As far as I understood, this happens because we have a load of 10000 QPS but each request is processed in 100ms, i.e. we need 1000 parallel connections to a DSP. When we reach the connection limit of the DSP itself, our request fails with a net.OpError. But the http client keeps trying to establish new connections, and this eats all the CPU.

    I tried to implement simple throttling that adds a 400ms delay when a connection error occurs. It looks like this:

    var hostClient fasthttp.HostClient
    var throttlingEnabled atomic.Bool
    var throttlingStarted time.Time
    var throttlingDelay = 400 * time.Millisecond
    
    func performRequest(req *fasthttp.Request, res *fasthttp.Response) {
    	if throttlingEnabled.Load() {
    		if time.Now().Sub(throttlingStarted) > throttlingDelay {
    			throttlingEnabled.Store(false)
    			log.Printf("throttling: disable\n")
    		} else {
    			log.Printf("throttling: skip request\n")
    		}
    		return
    	}
    	requestTimeout := 200 * time.Millisecond
    	connErr := hostClient.DoTimeout(req, res, requestTimeout)
    	errName := reflect.TypeOf(connErr).String()
    	if errName == "*net.OpError" {
    		throttlingEnabled.Store(true)
    		throttlingStarted = time.Now()
    	}
    }
    

    With this in place, the processed QPS has dropped by at least half, but there are no more load spikes. Is there anything better than this solution? Maybe I can reuse rate.Limiter from the golang.org/x/time/rate package. I see that PipelineClient has some throttling, so maybe something similar could be added to HostClient? We need something that works smarter and recovers from heavy load.

    Another question: what happens if the connection and TLS handshake take longer than the DoTimeout() timeout? For example, if establishing the connection takes 200ms but the request timeout is 100ms, it looks like no connection will ever be established.

    Do we have any article or documentation on configuring a server for heavy load, e.g. increasing the open-file limit in the systemd unit to DefaultLimitNOFILE=524288 and so on? Can anyone recommend something to read?

  • mustDiscard() panics and the whole server crashes

    mustDiscard() panics and the whole server crashes

    In fasthttp, in file header.go, this function:

    func mustDiscard(r *bufio.Reader, n int) {
    	if _, err := r.Discard(n); err != nil {
    		panic(fmt.Sprintf("bufio.Reader.Discard(%d) failed: %v", n, err))
    	}
    }
    

    sometimes, under high concurrency, panics with panic: bufio.Reader.Discard(517) failed: read tcp4 ADDR-1:8081->ADDR-2:30116: i/o timeout, and the whole server crashes.

    Why is this not returned as an error? Because the panic occurs in a worker goroutine, it cannot be recovered by the RouterPanic or Recovery middleware. What should be done here?

  • fasthttp.SaveMultipartFile() generates tmp files without cleanup

    fasthttp.SaveMultipartFile() generates tmp files without cleanup

    After calling fasthttp.SaveMultipartFile(file, filePath) for several files, I found that the tmp files under /tmp were not cleaned up. Is this designed behavior? How can I get the tmp file path for each SaveMultipartFile call and remove it after the file has been received?

    $ ls /tmp
    multipart-1116084598  multipart-1806005773  multipart-2359895161  multipart-2868180902  multipart-3505259536  multipart-4200780900  multipart-957202837
    multipart-1432350850  multipart-2089011025  multipart-2449044315  multipart-2881117185  multipart-3587317179  multipart-423553165
    multipart-147667864   multipart-2165895272  multipart-2650738835  multipart-3031107618  multipart-371009225   multipart-520911278
    multipart-1499565166  multipart-2181355762  multipart-2700390650  multipart-3223448677  multipart-3975570320  multipart-687744821
    multipart-1576810887  multipart-2347928608  multipart-2827028297  multipart-3254124333  multipart-408984157   multipart-931328463
    
  • When I use fasthttp to send GET requests, CPU occasionally rises to 100% for 5-10s

    When I use fasthttp to send GET requests, CPU occasionally rises to 100% for 5-10s

    My service is deployed on a server as a log forwarder, handling about 18000 QPS (about 1000000 requests per minute). To keep the load balanced across nginx workers, I set MaxIdleConnDuration to 20ms so that idle connections are closed early and re-established with the nginx workers.

    Here is my fasthttp config: { ReadTimeout: 300ms, WriteTimeout: 300ms, MaxIdleConnDuration: 20ms }

    However, the CPU occasionally rises to 100% for 5-10s and then drops back to normal. When I remove the WriteTimeout (making it unlimited), the CPU no longer rises to 100%. I have no idea why this happens.
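    For reference, a sketch of how the settings above map onto fasthttp.Client fields (values taken from the question; this is a config fragment, not a complete program):

```go
client := &fasthttp.Client{
	ReadTimeout:         300 * time.Millisecond,
	WriteTimeout:        300 * time.Millisecond,
	MaxIdleConnDuration: 20 * time.Millisecond,
}
```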
