🚀 fgprof - The Full Go Profiler

fgprof is a sampling Go profiler that allows you to analyze On-CPU as well as Off-CPU (e.g. I/O) time together.

Go's builtin sampling CPU profiler can only show On-CPU time, but it's better than fgprof at that. Go also includes tracing profilers that can analyze I/O, but they can't be combined with the CPU profiler.

fgprof is designed for analyzing applications with mixed I/O and CPU workloads. This kind of profiling is also known as wall-clock profiling.

Quick Start

If this is the first time you hear about fgprof, you should start by reading about The Problem & How it Works.

There is no need to choose between fgprof and the builtin profiler. Here is how to add both to your application:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof"

	"github.com/felixge/fgprof"
)

func main() {
	http.DefaultServeMux.Handle("/debug/fgprof", fgprof.Handler())
	go func() {
		log.Println(http.ListenAndServe(":6060", nil))
	}()

	// <code to profile>
}

fgprof is compatible with the go tool pprof visualizer, so taking and analyzing a 3s profile is as simple as:

go tool pprof --http=:6061 http://localhost:6060/debug/fgprof?seconds=3

Additionally fgprof supports the plain text format used by Brendan Gregg's FlameGraph utility:

git clone https://github.com/brendangregg/FlameGraph
cd FlameGraph
curl -s 'localhost:6060/debug/fgprof?seconds=3&format=folded' > fgprof.folded
./flamegraph.pl fgprof.folded > fgprof.svg

Which tool you prefer is up to you, but one thing I like about Gregg's tool is that you can filter the plaintext files using grep, which can be very useful when analyzing large programs.

If you don't have a program to profile right now, you can go run ./example which should allow you to reproduce the graphs you see above. If you've never seen such graphs before, and are unsure how to read them, head over to Brendan Gregg's Flame Graph page.

The Problem

Let's say you've been tasked to optimize a simple program that has a loop calling out to three functions:

func main() {
	for {
		// Http request to a web service that might be slow.
		slowNetworkRequest()
		// Some heavy CPU computation.
		cpuIntensiveTask()
		// Poorly named function that you don't understand yet.
		weirdFunction()
	}
}

One way to decide which of these three functions you should focus your attention on would be to wrap each function call like this:

start := time.Now()
slowNetworkRequest()
fmt.Printf("slowNetworkRequest: %s\n", time.Since(start))
// ...

However, this can be very tedious for large programs. You'll also have to figure out how to average the numbers in case they fluctuate. And once you've done that, you'll have to repeat the process for the functions called by the function you decide to focus on.
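
To make the tedium concrete, here is a minimal sketch of the kind of helper you would end up writing; timeIt and avg are hypothetical names used only for illustration and are not part of fgprof:

// Hypothetical manual-instrumentation helper (not part of fgprof).
// Records one duration sample per labeled call so they can be averaged later.
var samples = map[string][]time.Duration{}

func timeIt(label string, fn func()) {
	start := time.Now()
	fn()
	samples[label] = append(samples[label], time.Since(start))
}

func avg(label string) time.Duration {
	if len(samples[label]) == 0 {
		return 0
	}
	var total time.Duration
	for _, d := range samples[label] {
		total += d
	}
	return total / time.Duration(len(samples[label]))
}

And you would still have to sprinkle timeIt() around every call site, which is exactly the kind of work a profiler should do for you.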

/debug/pprof/profile

So, this seems like a perfect use case for a profiler. Let's try the /debug/pprof/profile endpoint of the builtin net/http/pprof package to analyze our program for 10s:

import (
	"log"
	"net/http"
	_ "net/http/pprof"
)

func main() {
	go func() {
		log.Println(http.ListenAndServe(":6060", nil))
	}()

	// <code to profile>
}

go tool pprof -http=:6061 http://localhost:6060/debug/pprof/profile?seconds=10

That was easy! Looks like we're spending all our time in cpuIntensiveTask(), so let's focus on that?

But before we get carried away, let's quickly double check this assumption by manually timing our function calls with time.Since() as described above:

slowNetworkRequest: 66.815041ms
cpuIntensiveTask: 30.000672ms
weirdFunction: 10.64764ms
slowNetworkRequest: 67.194516ms
cpuIntensiveTask: 30.000912ms
weirdFunction: 10.105371ms
// ...

Oh no, the builtin CPU profiler is misleading us! How is that possible? Well, it turns out the builtin profiler only shows On-CPU time. Time spent waiting on I/O is completely hidden from us.

/debug/pprof/trace

Let's try something else. The /debug/pprof/trace endpoint includes a "synchronization blocking profile", maybe that's what we need?

curl -so pprof.trace http://localhost:6060/debug/pprof/trace?seconds=10
go tool trace --pprof=sync pprof.trace > sync.pprof
go tool pprof --http=:6061 sync.pprof

Oh no, we're being misled again. This profiler thinks all our time is spent on slowNetworkRequest(). It's completely missing cpuIntensiveTask(). And what about weirdFunction()? It seems like no builtin profiler can see it?

/debug/fgprof

So what can we do? Let's try fgprof, which is designed to analyze mixed I/O and CPU workloads like the one we're dealing with here. We can easily add it alongside the builtin profilers.

import (
	"log"
	"net/http"
	_ "net/http/pprof"

	"github.com/felixge/fgprof"
)

func main() {
	http.DefaultServeMux.Handle("/debug/fgprof", fgprof.Handler())
	go func() {
		log.Println(http.ListenAndServe(":6060", nil))
	}()

	// <code to profile>
}

go tool pprof --http=:6061 http://localhost:6060/debug/fgprof?seconds=10

Finally, a profile that shows all three of our functions and how much time we're spending on them. It also turns out our weirdFunction() was simply calling time.Sleep(), how weird indeed!
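
Since the README never shows it, here is a guess at what weirdFunction() presumably looks like, based on the ~10ms timings above; treat it as an illustrative reconstruction of the example program, not its actual source:

func weirdFunction() {
	// The wall-clock profile reveals this function does nothing but sleep.
	time.Sleep(10 * time.Millisecond)
}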

How it Works

fgprof

fgprof is implemented as a background goroutine that wakes up 99 times per second and calls runtime.GoroutineProfile. This returns a list of all goroutines and their call stacks, regardless of their current On/Off-CPU scheduling status.

This data is used to maintain an in-memory stack counter which can be converted to the pprof or folded output format. The meat of the implementation is super simple, less than 100 lines of code; you should check it out.
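
For illustration, here is a heavily simplified sketch of that idea. It is not the actual fgprof source: it keys the counter on raw program counters instead of symbolized frames and skips the pprof/folded serialization entirely.

// Simplified sampling loop (sketch only): grab all goroutine stacks
// ~99 times per second and count how often each stack is observed.
func sample(counts map[string]int, done <-chan struct{}) {
	ticker := time.NewTicker(time.Second / 99)
	defer ticker.Stop()

	stacks := make([]runtime.StackRecord, 64)
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			// Grow the buffer until all goroutines fit.
			var n int
			for {
				var ok bool
				n, ok = runtime.GoroutineProfile(stacks)
				if ok {
					break
				}
				stacks = make([]runtime.StackRecord, n*2)
			}
			// Count each observed stack; rendering pprof or folded
			// output is then just a matter of serializing this map.
			for _, s := range stacks[:n] {
				counts[fmt.Sprint(s.Stack())]++
			}
		}
	}
}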

The overhead of fgprof increases with the number of active goroutines (including those waiting on I/O, channels, locks, etc.) in your program. If your program typically has less than 1000 active goroutines, you shouldn't have much to worry about. However, at 10k or more goroutines fgprof is likely to become unable to maintain its sampling rate and may significantly degrade the performance of your application, see BenchmarkProfilerGoroutines.txt. The latter is due to runtime.GoroutineProfile() calling stopTheWorld(). For now the advice is to test the impact of the profiler in a development environment before running it against production instances. There are ideas for making fgprof more scalable and safe for programs with a high number of goroutines, but they will likely need improved APIs from the Go core.

Go's builtin CPU Profiler

The builtin Go CPU profiler uses the setitimer(2) system call to ask the operating system to send it a SIGPROF signal 100 times a second. Each signal stops the Go process and gets delivered to a random thread's sigtrampgo() function. This function then proceeds to call sigprof() or sigprofNonGo() to record the thread's current stack.

Since Go uses non-blocking I/O, Goroutines that wait on I/O are parked and not running on any threads. Therefore they end up being largely invisible to Go's builtin CPU profiler.

The Future of Go Profiling

There is a great proposal for hardware performance counters for CPU profiling in Go. The proposal is aimed at making the builtin CPU Profiler even more accurate, especially under highly parallel workloads on many CPUs. It also includes a very in-depth analysis of the current profiler. Based on the design, I think the proposed profiler would also be blind to I/O workloads, but still seems appealing for CPU based workloads.

As far as fgprof itself is concerned, I might implement streaming output, leaving the final aggregation to other tools. This would open the door to even more advanced analysis, perhaps by integrating with tools such as flamescope.

Additionally I'm also open to the idea of contributing fgprof to the Go project itself. I've floated the idea on the golang-dev mailing list, so let's see what happens.

Known Issues

There is no perfect approach to profiling, and fgprof is no exception. Below is a list of known issues that will hopefully not be of practical concern for most users, but are important to highlight.

  • fgprof can't catch goroutines while they are running in loops without function calls, only when they get asynchronously preempted. This can lead to reporting inaccuracies (see the sketch after this list). Use the builtin CPU profiler if this is a problem for you.
  • fgprof may not work in Go 1.13 if another goroutine is in a loop without function calls the whole time. Async preemption in Go 1.14 should mostly fix this issue.
  • Internal C functions are not showing up in the stack traces, e.g. runtime.nanotime which is called by time.Since in the example program.
  • The current implementation is relying on the Go scheduler to schedule the internal goroutine at a fixed sample rate. Scheduler delays, especially biased ones, might cause inaccuracies.
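
For illustration, this is the kind of loop the first two items above refer to: the body contains no function calls, so before Go 1.14's async preemption there was no safe point at which a goroutine stuck here could be observed.

// A tight loop with no function calls in its body. Prior to async
// preemption (Go 1.14), a goroutine spinning here could be invisible
// to fgprof's sampling for the duration of the loop.
func spin(n int) int {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i * i // no calls, hence no preemption points
	}
	return sum
}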

Credits

The following articles helped me to learn more about how profilers in general, and the Go profiler in particular, work.

License

fgprof is licensed under the MIT License.

Owner
Felix Geisendörfer
Comments
  • "No source information" in pprof format

    I added github.com/felixge/fgprof to one of the Prometheus unit tests, which gives me good charts but no source line info:

    $ go tool pprof -list processWALSamples ./tsdb.test cpu2
    Total: 59.55s
    No source information for github.com/prometheus/prometheus/tsdb.(*walSubsetProcessor).processWALSamples
    

    I can work on a repro if it's interesting; didn't know whether I should expect it to work.

    I'm running go version go1.17.8 linux/amd64.

  • PeriodType, PeriodUnit and Duration unset in resulting pprof

    I just noticed that the period type, period unit, and duration are not set in the resulting profiles created by fgprof. While not strictly required, I feel it would be good to set them. Duration can be measured, and I assume the period type should be cpu and the period unit should be nanoseconds?
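
    For reference, a hedged sketch of what populating those fields could look like with the github.com/google/pprof/profile package (the struct fields exist in that package; the exact values below, which mirror the builtin CPU profile, are an assumption rather than fgprof's actual behavior):

    prof := &profile.Profile{
    	// PeriodType/Period describe the event between samples; the values
    	// mirror the builtin CPU profile and are an assumption here.
    	PeriodType:    &profile.ValueType{Type: "cpu", Unit: "nanoseconds"},
    	Period:        int64(time.Second / 99), // fgprof samples ~99 times per second
    	DurationNanos: capture.Nanoseconds(),   // capture: the measured profiling duration (hypothetical variable)
    	SampleType: []*profile.ValueType{
    		{Type: "samples", Unit: "count"},
    		{Type: "time", Unit: "nanoseconds"},
    	},
    }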

  • Reduce allocations while creating a slice of keys from map and fix a typo

    • This PR reduces the number of allocations by ~83% and increases the speed by ~48% when forming the key slice from a map.
    • Also fixed a small typo and added one doc string.

    Benchmark script: https://play.golang.org/p/1Qv2z4W0EMO

    Results:

    ➜  dev-vm go test -v -run=^$ -bench=BenchmarkMapSlice a_test.go --count=20 >> {old,new}.txt
    ➜  dev-vm benchstat old.txt new.txt                                                                  
    name        old time/op    new time/op    delta
    MapSlice-8    1.29µs ± 4%    0.68µs ± 6%  -47.57%  (p=0.000 n=20+20)
    
    name        old alloc/op   new alloc/op   delta
    MapSlice-8    1.01kB ± 0%    0.51kB ± 0%  -49.21%  (p=0.000 n=20+20)
    
    name        old allocs/op  new allocs/op  delta
    MapSlice-8      6.00 ± 0%      1.00 ± 0%  -83.33%  (p=0.000 n=20+20)
    
  • Automatically include fgprof in /debug/index

    There is an open issue on conprof (a continuous profiling project), discussing how there could be support for fgprof. It more generally started a discussion around custom/non-built-in profiles.

    I haven't had a chance to dig into the fgprof code a lot yet, but looking at the bits and pieces involved, it appears that if profiles are created using the pprof.NewProfile function, then they are automatically included in profiles returned by pprof.Profiles(), which is what the /debug/pprof index page calls to retrieve the list of available profiles.
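
    For context, this is the runtime/pprof mechanism being referred to; a minimal sketch of registering and listing a custom profile (the profile name here is illustrative, and whether fgprof's sampled data can be expressed through this API is exactly the open question):

    // Profiles created via pprof.NewProfile are returned by pprof.Profiles(),
    // which is what the /debug/pprof/ index page uses to build its list.
    var customProfile = pprof.NewProfile("fgprof") // illustrative name

    func listProfiles() {
    	for _, p := range pprof.Profiles() {
    		fmt.Println(p.Name(), p.Count())
    	}
    }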

    I'll try to understand the code more and see if this is possible, but maybe you can tell off the top of your head whether it is, and if so, whether this is something you would accept a patch for? :)

    @felixge

  • A small performance optimization

    Hey Felix,

    I was playing with the runtime package and profilers a bit for learning, and while trying out different things I happened to see that the following code is used to retrieve all goroutines:

    for {
    	n, ok := runtime.GoroutineProfile(p.stacks)
    	if !ok {
    		p.stacks = make([]runtime.StackRecord, int(float64(n)*1.1))
    	} else {
    		return p.stacks[0:n]
    	}
    }
    

    However it turns out the following is a bit faster, as runtime.NumGoroutine() just returns int(gcount()) whereas GoroutineProfile seems to do a bunch of other things.

    p.stacks = make([]runtime.StackRecord, int(float64(runtime.NumGoroutine())*1.1))
    for {
       ...
    }
    

    I have benchmarked following function:

    func GoroutineProfile(useNumG bool) {
    	var stacks []runtime.StackRecord
    
    	if useNumG {
    		nGoroutines := runtime.NumGoroutine()
    		stacks = make([]runtime.StackRecord, int(float64(nGoroutines)*1.1))
    	}
    
    	for {
    		n, ok := runtime.GoroutineProfile(stacks)
    		if !ok {
    			// this can happen regardless of our approach
    			stacks = make([]runtime.StackRecord, int(float64(n)*1.1))
    		} else {
    			break
    		}
    	}
    }
    

    And the results are:

    go test -bench=. -benchmem
    goos: linux
    goarch: amd64
    BenchmarkGoroutineProfile/Using_runtime.NumGoroutine()-12                 111543              9944 ns/op             768 B/op          1 allocs/op
    BenchmarkGoroutineProfile/Not_using_runtime.NumGoroutine()-12              66451             16140 ns/op             768 B/op          1 allocs/op
    

    While not too significant, it is not bad for a single line of change.

    Would you be interested in a PR for this? Is there any other problem with the approach?

  • Please mention format GET parameter in the quick start guide

    In my first test of fgprof I profiled my app by curling a remote host from another remote host, where neither had Go tools installed, and then copied the artifact locally for analysis. I was expecting the pprof format as the default based on my reading of the quick start. It took a bit of digging through the source to find guessFormat in handler.go to figure out why I was getting text instead of binary output. Your quick start guide is excellent, but a quick mention of the extra GET parameter somewhere on that first page would have saved me significant time.
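
    For anyone landing here, the parameter in question is the format query string shown in the FlameGraph section of the quick start; a small sketch of fetching a profile with an explicit format from Go (the URL and file name are just examples):

    // Request the folded text format explicitly instead of relying on
    // fgprof's format guessing, then save it for later analysis.
    resp, err := http.Get("http://localhost:6060/debug/fgprof?seconds=3&format=folded")
    if err != nil {
    	log.Fatal(err)
    }
    defer resp.Body.Close()

    out, err := os.Create("fgprof.folded")
    if err != nil {
    	log.Fatal(err)
    }
    defer out.Close()

    if _, err := io.Copy(out, resp.Body); err != nil {
    	log.Fatal(err)
    }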

  • profile was collecting within 120s, but Total samples is 29838s

    Hello!

    Could you help me to explain it, please?

    What might be the reason for it? Does this commonly happen with fgprof, or is it just my weird case?

    What did I do:

    1. I added the module and code from README to my app source code:
    	// Start profiling http endpoint if enabled
    	if profileEnabled {
    		http.DefaultServeMux.Handle("/debug/fgprof", fgprof.Handler()) // I've added this line to the existing part of code
    		go func() {
    			logger.Infof("Starting profiling server with listenAddress = %s", profileListenAddress)
    			if profileErr := http.ListenAndServe(profileListenAddress, nil); profileErr != nil {
    				logger.Errorf("Error starting profiler: %s", profileErr)
    			}
    		}()
    	}
    
    2. I ran my application, which consists of 5 different docker containers.
    3. I ran the command go tool pprof http://localhost:6060/debug/pprof/goroutine?seconds=120
    4. I downloaded the profiling files to my client machine from the host.
    5. I ran the command go tool pprof -http :9098 pprof.samples.time.005.pb.gz and saw a weird "Total samples" time for this profile:
    Type: time
    Time: Sep 29, 2022 at 10:16am (MSK)
    Duration: 120s, Total samples = 29838.05s (24864.86%)
    Showing nodes accounting for 29838.05s, 100% of 29838.05s total
    See https://git.io/JfYMW for how to read the graph
    

    lscpu

    Architecture:                    x86_64
    CPU op-mode(s):                  32-bit, 64-bit
    Byte Order:                      Little Endian
    Address sizes:                   40 bits physical, 57 bits virtual
    CPU(s):                          4
    On-line CPU(s) list:             0-3
    Thread(s) per core:              2
    Core(s) per socket:              2
    Socket(s):                       1
    NUMA node(s):                    1
    Vendor ID:                       GenuineIntel
    CPU family:                      6
    Model:                           106
    Model name:                      Intel Xeon Processor (Icelake)
    Stepping:                        0
    CPU MHz:                         1999.990
    BogoMIPS:                        3999.98
    Hypervisor vendor:               KVM
    Virtualization type:             full
    L1d cache:                       64 KiB
    L1i cache:                       64 KiB
    L2 cache:                        8 MiB
    L3 cache:                        16 MiB
    NUMA node0 CPU(s):               0-3
    Vulnerability Itlb multihit:     Not affected
    Vulnerability L1tf:              Not affected
    Vulnerability Mds:               Not affected
    Vulnerability Meltdown:          Not affected
    Vulnerability Mmio stale data:   Not affected
    Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
    Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
    Vulnerability Spectre v2:        Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
    Vulnerability Srbds:             Not affected
    Vulnerability Tsx async abort:   Not affected
    Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear arch_capabilities

    go env
    GO111MODULE=""
    GOARCH="amd64"
    GOBIN=""
    GOCACHE="/home/ivanl/.cache/go-build"
    GOENV="/home/ivanl/.config/go/env"
    GOEXE=""
    GOEXPERIMENT=""
    GOFLAGS=""
    GOHOSTARCH="amd64"
    GOHOSTOS="linux"
    GOINSECURE=""
    GOMODCACHE="/home/ivanl/go/pkg/mod"
    GONOPROXY=""
    GONOSUMDB=""
    GOOS="linux"
    GOPATH="/home/ivanl/go"
    GOPRIVATE=""
    GOPROXY="https://proxy.golang.org,direct"
    GOROOT="/usr/local/go"
    GOSUMDB="sum.golang.org"
    GOTMPDIR=""
    GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
    GOVCS=""
    GOVERSION="go1.18.4"
    GCCGO="gccgo"
    GOAMD64="v1"
    AR="ar"
    CC="gcc"
    CXX="g++"
    CGO_ENABLED="1"
    GOMOD="/dev/null"
    GOWORK=""
    CGO_CFLAGS="-g -O2"
    CGO_CPPFLAGS=""
    CGO_CXXFLAGS="-g -O2"
    CGO_FFLAGS="-g -O2"
    CGO_LDFLAGS="-g -O2"
    PKG_CONFIG="pkg-config"
    GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build2800466746=/tmp/go-build -gno-record-gcc-switches"

    docker images' go-version: 1.16.7

  • Default time to 30s, to align with the pprof/profile default timeout

    Currently, if you call the standard pprof endpoint without ?seconds set, it defaults to 30 seconds. With fgprof, this just fails. It would be nice to retain this defaulting behavior.

  • Populate PeriodType appropriately

    PeriodType is intended to explain the kind of events between sampled occurrences. While not strictly required to be set, all the Go stdlib profilers populate it, and this patch sets it equivalent to the standard CPU profiles.

    Resolves part of #16

  • Added go-recipes badge

    Hello, Hi!! 👋🏻

    I like your project and I think the broader Go community will benefit from it too.

    Thus, I added it to the curated list of Go tools.

    I hope this badge will serve as a mark of quality and appreciation to your project.

    Once again, thank you for your work!!

    ❤️

    -- Nikolay

  • Replace fmt.Sscanf with strconv.Atoi for faster int conversion

    Ran a benchmark with this change and saw a significant decrease (98.6%) in runtime. Also added an early return in the else if block to fail early.

    Test code:

    func BenchmarkFmtSscanf(b *testing.B) {
    	var seconds int
    	for i := 0; i < b.N; i++ {
    		_, _ = fmt.Sscanf("300", "%d", &seconds)
    		_ = seconds
    
    	}
    }
    
    func BenchmarkAtoi(b *testing.B) {
    	for i := 0; i < b.N; i++ {
    		seconds, _ := strconv.Atoi("-300")
    		_ = seconds
    	}
    }
    

    Test Results:

    ➜  go-code git:(master) ✗ go test -bench=. a_test.go
    goos: darwin
    goarch: amd64
    BenchmarkFmtSscanf-8       2430686       486 ns/op
    BenchmarkAtoi-8          176275149      6.66 ns/op
    
  • Reduce risk of running fgprof in production

    The goal for this issue is to figure out how to reduce the risk of running fgprof in production. People should feel safe turning it on, knowing that in the worst case it will just bail out or reduce the amount of data being gathered rather than having a negative impact on the application.

    There has been some initial discussion here: https://github.com/felixge/fgprof/pull/11#discussion_r543452969
