gProfiler

gProfiler combines multiple sampling profilers to produce a unified visualization of what your CPU is spending time on, displaying stack traces of your processes across native programs1 (including Golang), Java and Python runtimes, and kernel routines.

gProfiler can upload its results to the Granulate Performance Studio, which aggregates the results from different instances over different periods of time and can give you a holistic view of what is happening on your entire cluster. To upload results, you will have to register and generate a token on the website.

gProfiler runs on Linux.

Granulate Performance Studio example view

Running

This section describes the options for controlling gProfiler's output, and the various execution modes (as a container, as an executable, etc.).

Output options

gProfiler can produce output in two ways:

  • Create an aggregated, collapsed stack samples file (profile_.col) and a flamegraph file (profile_.html). Two symbolic links (last_profile.col and last_flamegraph.html) always point to the last output files.

    Use the --output-dir/-o option to specify the output directory.

    If --rotating-output is given, only the last results are kept (available via last_profile.col and last_flamegraph.html). This can be used to avoid increasing gProfiler's disk usage over time. It is useful in conjunction with --upload-results (explained below) - historical results are available in the Granulate Performance Studio, and the very latest results are available locally.

    --no-flamegraph can be given to avoid generation of the profile_.html file - only the collapsed stack samples file will be created.

  • Send the results to the Granulate Performance Studio for viewing online with filtering, insights, and more.

    Use the --upload-results/-u flag. Pass the --token option to specify the token provided by Granulate Performance Studio, and the --service-name option to specify an identifier for the collected profiles, as will be viewed in the Granulate Performance Studio. Profiles sent from numerous gProfilers using the same service name will be aggregated together.

Note: both flags can be used simultaneously, in which case gProfiler will create the local files and upload the results.
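
For example, both output modes can be combined in a single invocation (running the executable as described below; the output directory, token and service name are placeholders):

sudo ./gprofiler -o /tmp/gprofiler-output --rotating-output -u --token <token> --service-name <service>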

Profiling options

  • --profiling-frequency: The sampling frequency of the profiling, in hertz.
  • --profiling-duration: The duration of each profiling session, in seconds.
  • --profiling-interval: The interval between each profiling session, in seconds.

The default profiling frequency is 11 hertz. Using a higher frequency leads to more accurate results, but creates greater overhead on the profiled system and programs.

The default duration is 60 seconds, and the default interval matches it. So gProfiler runs the profiling sessions back-to-back - the next session starts as soon as the previous session is done.

  • --no-java, --no-python: Disable the runtime-specific profilers of Java and/or Python, accordingly.
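
For illustration, a run that spells out the default frequency/duration/interval and disables the Python profiler might look like this (the output directory is a placeholder):

sudo ./gprofiler -o /tmp/gprofiler-output --profiling-frequency 11 --profiling-duration 60 --profiling-interval 60 --no-python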

Continuous mode

gProfiler can be run in a continuous mode, profiling periodically, using the --continuous/-c flag. Note that when using --continuous with --output-dir, a new file will be created during each sampling interval. Aggregations are only available when uploading to the Granulate Performance Studio.

Running as a Docker container

Run the following to have gProfiler running continuously, uploading to Granulate Performance Studio:

docker pull granulate/gprofiler:latest
docker run --name gprofiler -d --restart=always \
    --pid=host --userns=host --privileged \
    -v /lib/modules:/lib/modules:ro -v /usr/src:/usr/src:ro \
    -v /var/run/docker.sock:/var/run/docker.sock \
    granulate/gprofiler:latest -cu --token <token> --service-name <service> [options]

For profiling with eBPF, kernel headers must be accessible from within the container at /lib/modules/$(uname -r)/build. On Ubuntu, this directory is a symlink pointing to /usr/src. The command above mounts both of these directories.
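
To check that the headers are available for the running kernel - and, on Ubuntu, to install them if they are missing (the package name may differ on other distributions):

ls /lib/modules/$(uname -r)/build
sudo apt install linux-headers-$(uname -r)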

Running as an executable

Run the following to have gProfiler running continuously, uploading to Granulate Performance Studio:

wget https://github.com/Granulate/gprofiler/releases/latest/download/gprofiler
sudo chmod +x gprofiler
sudo ./gprofiler -cu --token <token> --service-name <service> [options]

gProfiler unpacks executables to /tmp by default; if your /tmp is marked with noexec, you can add TMPDIR=/proc/self/cwd to have everything unpacked in your current working directory.

sudo TMPDIR=/proc/self/cwd ./gprofiler -cu --token <token> --service-name <service> [options]

Executable known issues

The following platforms are currently not supported with the gProfiler executable:

  • Ubuntu 14.04
  • Alpine

Remark: container-based execution works and can be used in those cases.

Running as a Kubernetes DaemonSet

See gprofiler.yaml for a basic template of a DaemonSet running gProfiler. Make sure to insert the GPROFILER_TOKEN and GPROFILER_SERVICE variables in the appropriate location!
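
As a sketch of the flow (assuming kubectl already points at the target cluster): edit gprofiler.yaml to fill in the two variables, apply it, and verify that a pod is running on each node:

kubectl apply -f gprofiler.yaml
kubectl get pods -o wide | grep gprofiler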

Running from source

gProfiler requires Python 3.6+ to run.

pip3 install -r requirements.txt
./scripts/build.sh

Then, run the following as root:

python3 -m gprofiler [options]

Theory of operation

Each profiling interval, gProfiler invokes perf in system-wide mode, collecting profiling data for all running processes. Alongside perf, gProfiler invokes runtime-specific profilers for processes running the following runtimes:

  • Java runtimes (version 7+) based on the HotSpot JVM, including the Oracle JDK and other builds of OpenJDK like AdoptOpenJDK and Azul Zulu.
    • Uses async-profiler.
  • The CPython interpreter, versions 2.7 and 3.5-3.9.
    • eBPF profiling (based on PyPerf) requires Linux 4.14 or higher. Profiling using eBPF incurs lower overhead. This requires kernel headers to be installed.
    • If eBPF is not available for whatever reason, py-spy is used.
  • PHP (Zend Engine), versions 7.0-8.0.

The runtime-specific profilers produce stack traces that include runtime information (i.e., stacks of Java/Python functions), unlike perf, which produces native stacks of the JVM / CPython interpreter. The runtime stacks are then merged into the data collected by perf, substituting the native stacks perf has collected for those processes.
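
As a made-up illustration of that substitution (function names are invented; the collapsed-stack format shows semicolon-separated frames followed by a sample count), perf alone would report a Python process with native interpreter frames such as:

python;_start;__libc_start_main;main;Py_BytesMain;_PyEval_EvalFrameDefault 87

whereas after merging, the frames collected by the runtime profiler take their place:

python;<module> (app.py);handle_request (app.py);parse_payload (utils.py) 87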

Contribute

We welcome all feedback and suggestions through GitHub Issues.

Releasing a new version

  1. Update __version__ in __init__.py.
  2. Create a tag with the same version (after merging the __version__ update) and push it.
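
For example, if the new version were 1.2.3 (an illustrative number), the tag step might look like:

git tag 1.2.3
git push origin 1.2.3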

We recommend going through our contribution guide for more details.

Credits

Footnotes

1: Currently requires profiled native programs to be compiled with frame pointers.
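
For reference (a general compiler note, not specific to gProfiler), frame pointers are typically preserved by building with -fno-omit-frame-pointer in GCC/Clang, e.g.:

gcc -O2 -fno-omit-frame-pointer -o myapp myapp.c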

Comments
  • Upgrade py-spy v0.3.12, rbspy v0.12.1

    I cleaned up the rust build environment a bit.

    • Changed the base image to rust:1.58-alpine3.15 (instead of rust:xxx which is based on Debian, we just use an Alpine image).
    • Upgraded rust from 1.52 to 1.58. This was required for the new rbspy & py-spy. I tried to upgrade to newer rust (1.62) but the build fails (for one of the tools, I don't recall which). We will need to upgrade them again if/when we upgrade rust.
    • Removed duplicate logic for the libunwind builds - now using libunwind_build.sh. What I could otherwise do is get rid of cfg(unwind) because we don't currently use it.. but meh. It's nicer to keep the option, I guess :shrug:.
    • The same PR upgrades both py-spy and rbspy, see explaining comment.

    py-spy rebase diff - only the comment.

    $ git range-diff v0.3.10..v0.3.10g1 v0.3.12..v0.3.12g1
    1:  b23d457 ! 1:  10feb0f Don't gather thread activity for all threads when --nonblocking is provided
        @@ src/python_spy.rs: impl PythonSpy {
         -            thread_activity.insert(threadid, thread.active()?);
         -        }
          
        --        // Lock the process if appropiate. Note we have to lock AFTER getting the thread
        +-        // Lock the process if appropriate. Note we have to lock AFTER getting the thread
         -        // activity status from the OS (otherwise each thread would report being inactive always).
         -        // This has the potential for race conditions (in that the thread activity could change
         -        // between getting the status and locking the thread, but seems unavoidable right now
    2:  f30e48f = 2:  8b041f9 Don't grab stack trace of non-GIL threads when --gil is provided
    3:  480deec = 3:  b45bd56 Add a suffix to the stacks to easily identify Python stacks (#1)
    
    
  • perf segfaults

    On Fedora 35 running kernel 5.16.19-200.fc35.x86_64, gProfiler's perf segfaults on execution.

    # setsid ./gprofiler -cu --token '<my_token>' --service-name 'test' &
    

    Yields

    [16:34:56] Running gprofiler (version 1.2.19), commandline: '-cu --token lN4td5dwQNA5PyF7D6wYQGstP8Vxh-RYPJIv6ZB0eZA --service-name pmemdev1'
    [16:34:56] gProfiler Python version: 3.6.8 (default, Nov 16 2020, 16:55:22)
    [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
    [16:34:56] gProfiler deployment mode: standalone_executable
    [16:34:56] Kernel uname release: 5.16.19-200.fc35.x86_64
    [16:34:56] Kernel uname version: #1 SMP PREEMPT Fri Apr 8 15:34:44 UTC 2022
    [16:34:56] Total CPUs: 96
    [16:34:56] Total RAM: 250.58 GB
    [16:34:56] Linux distribution: Fedora Linux | 35 |
    [16:34:56] libc version: glibc-2.34
    [16:34:56] Hostname: hostname
    [16:34:57] The connection to the server was successfully established (service 'pmemdev1')
    [16:35:03] Initialized JavaProfiler (frequency: 11hz, duration: 60s)
    [16:35:03] Couldn't create the Java profiler, continuing without it
    Traceback (most recent call last):
      File "gprofiler/profilers/factory.py", line 41, in get_profilers
      File "gprofiler/profilers/java.py", line 608, in __init__
      File "gprofiler/profilers/java.py", line 624, in _init_ap_mode
      File "gprofiler/utils/perf.py", line 19, in can_i_use_perf_events
      File "gprofiler/utils/__init__.py", line 265, in run_process
    gprofiler.exceptions.CalledProcessError: Command '['/tmp/_MEID3BPp6/gprofiler/resources/perf', 'record', '-o', '/dev/null', '--', '/bin/true']' died with <Signals.SIGSEGV: 11>.
    stdout: b''
    stderr: b''
    [16:35:03] Initialized SystemProfiler (frequency: 11hz, duration: 60s)
    [16:35:03] Initialized PythonEbpfProfiler (frequency: 11hz, duration: 60s)
    [16:35:05] Initialized RbSpyProfiler (frequency: 11hz, duration: 60s)
    [16:35:05] Could not find a Docker daemon or CRI-compatible daemon, profiling data will not include the container names. If you do have a containers runtime and it's not supported, please open a new issue here: https://github.com/Granulate/gprofiler/issues/new
    [16:35:05] gProfiler initialized and ready to start profiling
    [16:35:05] Starting profiling of Python processes with PyPerf
    [16:35:06] Starting perf (fp mode)
    [16:35:11] perf failed to start. stdout b'' stderr b''
    [16:35:11] Unexpected error occurred
    Traceback (most recent call last):
      File "gprofiler/main.py", line 771, in main
      File "gprofiler/main.py", line 357, in run_continuous
      File "gprofiler/main.py", line 148, in __enter__
      File "gprofiler/main.py", line 233, in start
      File "gprofiler/profilers/perf.py", line 192, in start
      File "gprofiler/profilers/perf.py", line 72, in start
      File "gprofiler/utils/__init__.py", line 155, in wait_event
    TimeoutError
    

    I see the following in dmesg

    # dmesg -T | tail
    [Fri Apr 15 16:35:02 2022] perf[13307]: segfault at 10 ip 00007fabc1498b34 sp 00007ffc7b92c5b0 error 4 in libssh.so.4.8.7[7fabc1486000+44000]
    [Fri Apr 15 16:35:02 2022] Code: 00 00 00 31 c0 5b c3 b8 ff ff ff ff c3 66 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 48 83 ec 08 48 8d 3d 49 c4 04 00 e8 2c fc fe ff <8b> 80 10 00 00 00 48 83 c4 08 c3 90 f3 0f 1e fa 48 85 ff 74 1b 53
    

    Running the command from gProfiler manually also yields a core dump

    # /tmp/_MEID3BPp6/gprofiler/resources/perf record -o /dev/null -- /bin/true
    Segmentation fault (core dumped)
    

    Running the system's perf (/usr/bin/perf) does not

    # /usr/bin/perf record -o /dev/null -- /bin/true
    [ perf record: Woken up 5 times to write data ]
    [ perf record: Captured and wrote 0.000 MB /dev/null ]
    
  • GProfiler with Py-Spy on Windows

    This PR provides batch scripts to build py-spy, and gProfiler using py-spy, on Windows.

    Description

    Provided batch scripts to build py-spy and gprofiler from source on Windows. This PR contains a build.bat file, a build-pyspy.bat file, py-spy (v0.3.10g1) submodule, granulate-utils submodule along with some Windows specific dependencies. In addition to the changes made to the gprofiler repo (which is tracked), the other new additions can all be found in the deps, py-spy, granulate-utils subfolders.

    Motivation and Context

    This PR is Granulate's initial foray into providing Windows support. Specifically, it provides Python profiling on Windows using py-spy.

  • CPU data from debian 11

    I used gProfiler to profile REST APIs in different programming languages (Java, JavaScript, C# and Python). Since Friday (it worked fine before), gProfiler has stopped collecting CPU and memory data (only 1 data point every 15 min), while the sample counts are high (around 2000-5000 per 15 min). I used the default settings to run gProfiler (I tried both the CLI and Docker; neither works). The flame graph works fine; only the CPU and memory data are missing. I tried reinstalling the operating system and it still doesn't work.

  • Feature/stacks container name

    Description

    The V2 profile upload API now sends the container name for stacks from processes that are inside a Docker container. An example profile file:

    #{"containers": ["amazing_lamport", "datadog_1"], "hostname": "ip-10-0-0-9", "container_names_disabled": false}
    ;swapper;secondary_startup_64_[k];start_secondary_[k];cpu_startup_entry_[k];do_idle_[k];default_idle_call_[k];arch_cpu_idle_[k];native_safe_halt_[k] 51
    ;swapper;secondary_startup_64_[k];x86_64_start_kernel_[k];x86_64_start_reservations_[k];start_kernel_[k];arch_call_rest_init_[k];rest_init_[k];cpu_startup_entry_[k];do_idle_[k];default_idle_call_[k];arch_cpu_idle_[k];native_safe_halt_[k] 46
    datadog_1;process-agent;[/opt/datadog-agent/embedded/bin/process-agent];[/opt/datadog-agent/embedded/bin/process-agent];_start;_start;_start 1
    ;py-spy;process_vm_readv 1
    amazing_lamport;python;<module> (<string>) 100
    

    Checklist:

    • [x] My code follows the code style of this project.
    • [x] I have updated the relevant documentation.
    • [x] I have read the CONTRIBUTING document.
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
  • Report dso from pyperf

    Report dso name for native frames generated by PyPerf.

    Description

    Includes a PyPerf version recently updated with the implementation of DSO name reporting.
    The feature is not enabled by default - it has to be enabled with the runtime flag --insert-dso-name.

    Related Issue

    #432

    Motivation and Context

    How Has This Been Tested?

    A test has been defined in the Python test suite.

    Screenshots

    --

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [ ] I have updated the relevant documentation.
    • [x] I have added tests for new logic.
  • k8s Daemonset pods stuck in "CrashLoopBackOff"

    I just installed the gProfiler DaemonSet on an EKS k8s cluster. The pods keep restarting:

    [cloudshell-user@ip-10-0-20-221 ~]$ kubectl get pods | grep granulate
    granulate-gprofiler-t7dwh            0/1   CrashLoopBackOff   19 (75s ago)   74m
    granulate-gprofiler-t855m            0/1   CrashLoopBackOff   19 (96s ago)   74m
    granulate-gprofiler-w4x96            0/1   CrashLoopBackOff   19 (82s ago)   74m
    granulate-maestro-6df6db74c9-t26bj   2/2   Running            0              73m

    [cloudshell-user@ip-10-0-20-221 ~]$ kubectl describe pods granulate-gprofiler-t7dwh
    Name:           granulate-gprofiler-t7dwh
    Namespace:      default
    Priority:       0
    Node:           ip-192-168-17-133.us-east-2.compute.internal/192.168.17.133
    Start Time:     Wed, 20 Jul 2022 19:02:24 +0000
    Labels:         app=granulate-gprofiler
                    controller-revision-hash=57dbd966bd
                    pod-template-generation=1
    Annotations:    kubernetes.io/psp: eks.privileged
    Status:         Running
    IP:             192.168.34.230
    IPs:
      IP:           192.168.34.230
    Controlled By:  DaemonSet/granulate-gprofiler
    Containers:
      granulate-gprofiler:
        Container ID:   docker://90d9dcab2ac6423596e6553fb12b9c651f72e65e5c7bf31bc2b8ef39dbc30d1d
        Image:          index.docker.io/granulate/gprofiler:latest
        Image ID:       docker-pullable://granulate/gprofiler@sha256:ab80eda157f96962e0dcbfd4e6f878a185802f5346b5c93c447f085de59203b8
        Port:
        Host Port:
        Args:           -cu --token $(GPROFILER_TOKEN) --service-name $(GPROFILER_SERVICE)
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Error
          Exit Code:    2
          Started:      Wed, 20 Jul 2022 20:45:46 +0000
          Finished:     Wed, 20 Jul 2022 20:45:47 +0000
        Ready:          False
        Restart Count:  25
        Limits:
          cpu:     500m
          memory:  1Gi
        Requests:
          cpu:     100m
          memory:  256Mi
        Environment:
          GPROFILER_TOKEN:    -2VzyckD4HHifIF8fuvspXEgHgJ5WRhfdlqVNzzZWDw
          GPROFILER_SERVICE:  EKS_Demo
          GPROFILER_IN_K8S:   1
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fpz88 (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             False
      ContainersReady   False
      PodScheduled      True
    Volumes:
      kube-api-access-fpz88:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:
        DownwardAPI:             true
    QoS Class:       Burstable
    Node-Selectors:
    Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                     node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                     node.kubernetes.io/not-ready:NoExecute op=Exists
                     node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                     node.kubernetes.io/unreachable:NoExecute op=Exists
                     node.kubernetes.io/unschedulable:NoSchedule op=Exists
    Events:
      Type     Reason   Age                    From     Message
      Warning  BackOff  113s (x492 over 106m)  kubelet  Back-off restarting failed container

    [cloudshell-user@ip-10-0-20-221 ~]$

  • Support merging the result of fp and dwarf perfs for accuracy

    Description

    In order to increase the accuracy and reliability of perf results, two global perf instances will run in parallel - one with FP (frame pointers) and one with DWARF. Per process, the result with the highest average number of frames per stack will be chosen for the final output.

    How Has This Been Tested?

    I used the new --perf-mode parameter to test all of the different modes - FP, dwarf and "smart" - and compared the results.

    Screenshots:

    FP: fp flamegraph

    Dwarf: dwarf flamegraph

    Smart: smart flamegraph

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [x] My code follows the code style of this project.
    • [x] I have updated the relevant documentation.
    • [x] I have read the CONTRIBUTING document.
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
  • java: Upgrade to async-profiler v2.9

    This resolves https://github.com/Granulate/gprofiler/issues/563 because it was fixed by https://github.com/jvm-profiling-tools/async-profiler/commit/b0a44524bad111a7f18c8309c0ebec56ccfdcf2a which is included.

    EDIT 29th Nov: After upgrading to 2.9, our CentOS 6 build broke. We've had an old hack which forced linking with memcpy@GLIBC_2.2.5, and it fails on 2.9. From bisecting, I found the breaking commit to be https://github.com/jvm-profiling-tools/async-profiler/commit/b0a44524bad111a7f18c8309c0ebec56ccfdcf2a which, I figure, introduced lots of implicit string handling that added calls to memcpy not covered by the symbol version pinning.

    Soooo, after lots of experimenting with different .symver directives and other hacks, such as directly passing the lib.so of glibc-2.12 on the build command line (which causes gcc to take symbol versions from that DSO, but might have adverse effects if I built against headers of version X and then linked against an incompatible version Y...)

    I ended up -

    • Removing devtoolset-7 installation - this is a remnant from #139 which enabled -static-libstdc++ which is not supported on CentOS 6 GCC (see description in that PR). However, we've since switched to CentOS 7 (in #304) so it's no longer relevant - GCC on CentOS 7 supports it.
    • Installing compat-glibc and building against it with -I /usr/lib/x86_64-redhat-linux6E/include -B /usr/lib/x86_64-redhat-linux6E/lib64/.
    • Removed the hack of force linking memcpy.
  • Make NodeJS profiling attachable

    Description

    Adds --nodejs-mode=attachable, which injects a DSO into already-running NodeJS processes.

    Related Issue

    https://github.com/Granulate/gprofiler/issues/418

    Motivation and Context

    In the current state, resolving function addresses in NodeJS requires restarting each process, which might be very hard if not impossible in production. This feature allows generating perf maps at runtime.

    How Has This Been Tested?

    Tested manually in a container and as an executable. NodeJS targets were tested both in the same namespace as gProfiler and in a separate namespace (Docker container).

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [x] I have updated the relevant documentation.
    • [x] I have added tests for new logic.
  • Application metadata

    Description

    MVP of application metadata: retrieve per-pid information like runtime version, file buildids, ...

    Tasks:

    • [x] Update protocol version to v3
    • [x] Make sure that if v1 is given, no app metadata & no container names are sent (and strip the two ;)
    • [x] Remove code dups
    • [x] ~~Rethink of design with objects instead of classes?~~
  • utils: run_process(): Don't try writing stdin more than once

    Fixes:

    [2023-01-04 18:16:52,954] ERROR: gprofiler: Profiling run failed!
    Traceback (most recent call last):
      File "/app/gprofiler/main.py", line 361, in run_continuous
        self._snapshot()
      File "/app/gprofiler/main.py", line 327, in _snapshot
        self._generate_output_files(merged_result, local_start_time, local_end_time)
      File "/app/gprofiler/main.py", line 213, in _generate_output_files
        run_process(
      File "/app/gprofiler/utils/__init__.py", line 279, in run_process
        raise reraise_exc
      File "/app/gprofiler/utils/__init__.py", line 250, in run_process
        stdout, stderr = process.communicate(timeout=0.001, **communicate_kwargs)
      File "/usr/lib/python3.8/subprocess.py", line 1003, in communicate
        raise ValueError("Cannot send input after starting communication")
    ValueError: Cannot send input after starting communication
    

    It's fairly rare as it'll happen only if burn takes more than 1s... but reproduces in some cases.

  • java: async-profiler: test that no output is written to process stdout/stderr

    async-profiler writes to the process' stdout/stderr. We have removed those prints - see https://github.com/Granulate/async-profiler/commit/35531a45a197048a450aca38386d144dfa1060bb and https://github.com/Granulate/async-profiler/commit/473b925f4a82bd7bd53b0737e3ead238c38f7a09.

    We should have a test that proves that no additional output is written to stdout/stderr during the cycle of a single profiling session, i.e. start/stop of AP. stdout & stderr should remain empty, or should contain only the output that the profiled app itself prints. I suggest adding a System.out.println(...) and System.err.println(...) to our Fibonacci Java app and then comparing the output of the process to those 2 prints.

  • java: async-profiler: remove our semicolon patch

    From skimming the code, it seems that https://github.com/jvm-profiling-tools/async-profiler/commit/b0a44524bad111a7f18c8309c0ebec56ccfdcf2a now prevents ; from being output in stackcollapses; it is replaced by |.

    If that's the case, we can remove our patch - the fewer changes to upstream AP, the better.

    When doing this, please add a test that | is indeed output in the cases where it should be, instead of ;. I suppose that in public static void main(final String[] args) we'll be able to see Ljava/lang/String; vs Ljava/lang/String|.

  • Windows CI

    Description

    Related Issue

    Motivation and Context

    How Has This Been Tested?

    Screenshots

    Checklist:

    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have updated the relevant documentation.
    • [ ] I have added tests for new logic.
  • dotnet profiler on windows

    Description

    Attempt to enable the dotnet profiler on Windows.

    Related Issue

    https://github.com/Granulate/gprofiler/issues/623

    Motivation and Context

    Improving Windows compatibility will result in increased usability of gProfiler.

    How Has This Been Tested?

    Screenshots

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [x] I have updated the relevant documentation.
    • [ ] I have added tests for new logic.
  • Processes exiting during a session do not get enriched with container name

    We enrich container names when merging profilers: https://github.com/Granulate/gprofiler/blob/c2899341813c5db57b8401356c02c74cc6ec75d5/gprofiler/merge.py#L318-L319

    This is too late if the process already exited. With the default profiling duration of 60s, this is common.

    • For process profiles (_profile_process), we already get the application metadata & appid when the profile begins, so we always get them.
    • For perf profiles, we don't have the PIDs that were profiled until we run perf script. We could just collect container names for ALL live processes, but then we have a race with PID reuse; not sure how likely that is for a 60s profiling session. If perf could output a stronger identifier, e.g. PID+start-time, it'd help.