exo: a process manager & log viewer for dev

exo- prefix – external; from outside.

The Exo GUI

Features

  • Procfile compatible process manager.
  • Terminal commands and a browser-based GUI for all functionality.
  • Multiplexed, colorizing log tailing. Toggle visibility of individual logs.
  • Dynamic process supervision: create, start, stop, restart, delete.

Coming Soon

  • Docker integration with docker-compose.yml compatibility.

Getting Started

Install exo:

curl -sL https://exo.deref.io/install | sh

If you prefer manual installation, see ./doc/install.md for details, including uninstall instructions.

Navigate to your code directory and then launch the exo gui:

exo gui

To use exo as a drop-in replacement for Foreman and similar Procfile runners, do this instead:

exo run ./path/to/Procfile
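For reference, a Procfile is just `name: command` lines, one per process; a hypothetical example (service names and commands are illustrative):

```
web: npm run dev --prefix ./frontend
api: go run ./cmd/api
worker: go run ./cmd/worker
```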

For more, check out the exo guide or consult the built-in help by running exo help.


Telemetry

exo collects limited and anonymous telemetry data by default. This behavior can be disabled by adding the following setting to your exo config (located at ~/.exo/config.toml by default):

[telemetry]
disable = true

Comments
  • Stop button in the UI has different behavior than `Ctrl+C`-ing the same process


    Not sure if this is intentional or not, but it leads to some annoying behavior when developing.

    When I run exo run tools/goreman/procfiles/Procfile.rexec with this procfile in our repo: https://github.com/buildbuddy-io/buildbuddy

    app: bazel run enterprise/server -- --config_file=enterprise/config/buildbuddy.local.yaml
    exec: bazel run enterprise/server/cmd/executor:executor -- --monitoring_port=9091 --executor.docker_socket=
    redis: redis-server
    

    And hit stop in the UI on exec, I get no logs about a shutdown. When I hit play again, I see this error message:

    07:05:47  exec 2021/08/02 14:05:47.344 FTL listen tcp 0.0.0.0:9091: bind: address already in use
    

    This leads me to believe the server that bazel run spun up was never killed.

    When I run the same exec process on its own in a terminal and hit Ctrl+C, I get the following logs:

    ^CCaught interrupt signal; shutting down...
    2021/08/02 14:13:04.832 INF Stopping queue processing, machine is shutting down. name=rN5IABEi
    2021/08/02 14:13:04.861 INF Graceful stop of executor succeeded.
    Server "prod-buildbuddy-executor" stopped.
    

    And don't get the port collision error when I start it again.

    My workaround for now is to hit Ctrl+C on exo run tools/goreman/procfiles/Procfile.rexec, which solves the issue - but means I have to restart all 3 processes.

  • [BUG] Logs for container components not showing up in GUI or CLI


    Describe the bug

    Container component logs don't show up in the GUI or in the terminal.

    To Reproduce

    1. Create exo.hcl with this content:

      exo = "0.1"
      
      components {
        container "db" {
          image = "postgres:11.7-alpine"
      
          environment = {
            POSTGRES_DB = "foo"
            POSTGRES_PASSWORD = "bar"
          }
      
          ports = ["5432:5432"]
        }
      }
      
    2. Run exo init.

    3. Run exo gui.

    Expected behavior

    If I use a similar Docker Compose manifest and type docker compose up, Docker Compose starts tailing Postgres logs. Based on both the description of the app and the screenshot in the repo README, I expected Exo to do the same. Have I misunderstood how the app works, or did I miss a step?
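For comparison, a Compose manifest roughly equivalent to the exo.hcl above (my reconstruction; the issue only says "similar") would be:

```yaml
services:
  db:
    image: postgres:11.7-alpine
    environment:
      POSTGRES_DB: foo
      POSTGRES_PASSWORD: bar
    ports:
      - "5432:5432"
```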

    System Info (please complete the following information):

    • OS: macOS 11.6
    • Component: GUI, CLI
    • Version: 2021.10.29

    Additional context

    For process components, logs do seem to show up in both the GUI and CLI.

  • Improve `ProcessList` / Component List


    This

    Before

    This is better now, but still has some problems.

    The problems with this implementation

    • We should not have to use describeWorkspaces() to get the root directory of the active workspace.
      • My main issue here is that we are sending the entire list to the client and selecting the correct one there. This seems like a bad idea if you assume the number of workspaces and the detail of each workspace description grow. Instead, it would be useful to have either a singular describeWorkspace(workspaceId) which takes one id and returns one workspace description, or an AWS-API-esque optional parameter on our read functions, like describeWorkspaces({ workspaceId }), where the normal describeWorkspaces() still returns the full list but passing an id returns a list of length 1.
      • Update: I think @BenElgar has fixed this issue in #369. Presumably we can use workspace.describeSelf() instead.
    • Arguably, due to the nature of the local exo application it would make sense to have such important information readily available from any page in the app without using promises. Not sure what approach would make the most sense here.
      • A strategy here could be to use localStorage to store a cache of e.g. workspace and component descriptions, so that the frontend can snap between pages without visible loading and re-hydrate with real server data in the background. The difference would be small in the local exo application (I'm assuming localStorage API calls are very fast, but the local exo server's API calls should also be fairly fast). However, the big difference comes if you're using exo remotely through a web app, where the server could be hundreds of milliseconds away while localStorage stays under a millisecond.
    • The Panel could perhaps expose a better way to override the whole header part in cases like this where we want to control it in a way that is not just a usual text title + optional icon button.
    • ~~We should not have to {#await ...} the root directory of the active workspace, especially when you have navigated to this page by clicking a button which already had it displayed.~~ Update: changed my mind
    • ~~The way we format and display workspace names should probably use a smart approach that finds the shortest unique path-tail among all your exo workspaces, or just the name of the root directory itself, depending on context.~~ Update: fixed, thanks @brandonbloom

    This PR's solution

    Current main version of the above

    Recent mock design of the next version of the above

  • [BUG] Cannot recover from bad spec


    This is only an issue on the interpolation branch.

    If a user applies a docker compose spec that is invalid in such a way that it cannot be unmarshalled this cannot be recovered from. This is because the spec is saved in the state file before it is validated and every future attempt to manipulate the spec attempts to unmarshal this bad spec, which fails.

    To recreate, apply a spec like this:

    services:
      t0:
        image: bash
        command: sleep infinity
        ports:
          - "443:" # This is an invalid port specification
    

    Running this with dexo run will appropriately fail with an invalid port mapping syntax error. The problem is that even if the invalid specification is fixed, future commands will fail until the spec is manually removed from the state file:

    deleting frontend unmarshalling spec: invalid port mapping syntax
    

    The solution is presumably to validate the spec before writing.
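A minimal sketch of that validate-before-persist idea. The helper name and the exact rules are hypothetical; the point is only that the spec should fail fast before it is ever written to the state file:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// validatePortMapping checks a "host:container" mapping like "443:8080".
// Hypothetical helper; exo's real validation lives in its compose importer.
func validatePortMapping(m string) error {
	parts := strings.Split(m, ":")
	if len(parts) != 2 {
		return fmt.Errorf("invalid port mapping syntax: %q", m)
	}
	for _, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil || n < 1 || n > 65535 {
			return fmt.Errorf("invalid port mapping syntax: %q", m)
		}
	}
	return nil
}

func main() {
	fmt.Println(validatePortMapping("443:8080")) // ok: <nil>
	fmt.Println(validatePortMapping("443:"))     // the repro above: rejected before saving
}
```

Only once every such check passes would the spec be written to the state file, so a bad apply leaves the previous state intact.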

  • [BUG] Exo logging slow with lots of services


    Describe the bug

    The following docker compose file takes a long time to start outputting logs in the terminal.

    services:
      t1:
        image: bash
        command: sh -c 'i=0; while true; do echo $((i++)); sleep 1; done'
      t2:
        image: bash
        command: sh -c 'i=0; while true; do echo $((i++)); sleep 1; done'
      t3:
        image: bash
        command: sh -c 'i=0; while true; do echo $((i++)); sleep 1; done'
      t4:
        image: bash
        command: sh -c 'i=0; while true; do echo $((i++)); sleep 1; done'
      t5:
        image: bash
        command: sh -c 'i=0; while true; do echo $((i++)); sleep 1; done'
      t6:
        image: bash
        command: sh -c 'i=0; while true; do echo $((i++)); sleep 1; done'
      t7:
        image: bash
        command: sh -c 'i=0; while true; do echo $((i++)); sleep 1; done'
      t8:
        image: bash
        command: sh -c 'i=0; while true; do echo $((i++)); sleep 1; done'
      t9:
        image: bash
        command: sh -c 'i=0; while true; do echo $((i++)); sleep 1; done'
    

    It takes about 30 seconds to start getting logs. From docker ps and from the output in the logs I can see that the containers themselves have been running for almost all of that time. The logs are then updated about once every 20 seconds.

  • [FEATURE] package for popular package managers


    Package Managers

    Which package managers? Whatever is supported by https://goreleaser.com/

    Priorities:

    • [x] Mac homebrew formula in our private tap here: https://github.com/deref/homebrew-tap
    • [x] Linux as .deb, .rpm and .apk

    Also consider:

    • [ ] Snapcraft for Ubuntu software store
    • [ ] others?
  • [BUG] `.exo` directory growing very large in size


    Describe the bug

    My production .exo/var directory has inflated to >2 GB in size, ~99% of which is in logs.

    Expected behavior

    Some sort of reasonable garbage collection to keep the storage used small. Ideally it would be nice to have a user-configurable storage limit, so users could decide whether they want unlimited history or any byte amount.

    Screenshots

    ~/.exo/var/logs $ ls -l -h -a
    total 424K
    drwx------ 2 jwmza jwmza 4.0K Sep 15 13:06 .
    drwx------ 4 jwmza jwmza 4.0K Sep 17 18:28 ..
    -rw-r--r-- 1 jwmza jwmza 303K Sep 14 12:03 000012.sst
    -rw-r--r-- 1 jwmza jwmza 7.2K Sep 14 15:59 000013.sst
    -rw-r--r-- 1 jwmza jwmza 2.6K Sep 15 13:06 000014.sst
    -rw-r--r-- 1 jwmza jwmza   20 Sep 15 13:06 000018.vlog
    -rw-r--r-- 1 jwmza jwmza 2.0G Sep 15 13:06 000019.vlog      << This one seems to be the problem
    -rw-r--r-- 1 jwmza jwmza 128M Sep 17 18:28 00013.mem
    -rw-r--r-- 1 jwmza jwmza 1.0M Aug 19 12:55 DISCARD
    -rw------- 1 jwmza jwmza   28 Aug 19 12:55 KEYREGISTRY
    -rw-r--r-- 1 jwmza jwmza    4 Sep 15 13:06 LOCK
    -rw------- 1 jwmza jwmza  282 Sep 15 13:06 MANIFEST
    
  • New config pages, manual theme setting & theme generation


    image

    This adds:

    • User preferences config with theme picker
    • Theme file generation from a minimal definition file, expanded into a CSS file which allows manual overrides and automatic theming
    • "New component" page with select for component types

    Also fixes #235

    This also begins to add some config page components for a new set of CRU~D~ pages and components for configuration, specifically aimed at:

    • Component type selection
    • Process creation and modification
    • Docker container creation and modification
    • Timer creation and modification

    Example

    image

  • [BUG] Timestamps are wrong (timezone/hours offset)


    Describe the bug

    In my actual local time, it is 12:38. The logs from tick correctly show this time, but the timestamps we create on the far left show 4:38, an incorrect time.

    Expected behavior

    Our timestamps === actual user local time.

    Screenshots image

    System Info:

    • OS: Windows (WSL Ubuntu)
    • Component: GUI
    • Version: Dev (>2021.07.30)
  • [BUG] server crash on docker-compose.yml parse error


    Describe the bug

    With a particular docker-compose.yml file in my workspace directory, exo new process causes exo server to crash.

    To Reproduce

    Steps to reproduce the behavior:

    1. create a workspace
    2. put a docker-compose.yml file with the following contents in the workspace dir:
    version: 2
    
    x-defaults: &defaults
      sysctls:
        net.ipv6.conf.all.disable_ipv6: 1
      mem_swappiness: 0
    
    services:
      kibana:
        image: docker.elastic.co/kibana/kibana:7.6.1
        <<: *defaults
        mem_limit: 300m
        ports:
          - 5601:5601 # The HTTP UI Port
        environment:
          ELASTICSEARCH_URL: http://elasticsearch:9200
          ELASTICSEARCH_HOSTS: http://elasticsearch:9200
        profiles: ["elasticsearch"]
    
    3. run exo new process foo -- bash -c hello
    4. See error:
    Job URL: http://localhost:43643/#/jobs/x51rc0t9n0wmxdns9s2gjar8y0
    Error: describing tasks: posting: Post "http://localhost:43643/_exo/kernel/describe-tasks": EOF
    describing tasks: posting: Post "http://localhost:43643/_exo/kernel/describe-tasks": EOF
    
    5. See error in ~/.exo/var/exod.stderr:
    cat ~/.exo/var/exod.stderr
    panic: unexpected yaml node tag: "!!int"
    
    goroutine 71 [running]:
    github.com/deref/exo/internal/manifest/compose.yamlToHCL({0x49c02c0, 0xc0001ee6e0})
            /go/src/github.com/deref/exo/internal/manifest/compose/import.go:364 +0x14f3
    github.com/deref/exo/internal/manifest/compose.yamlToHCL({0x49bee80, 0xc00007c280})
            /go/src/github.com/deref/exo/internal/manifest/compose/import.go:309 +0xbd5
    github.com/deref/exo/internal/manifest/compose.yamlToHCL({0x4a24d40, 0xc0000c8000})
            /go/src/github.com/deref/exo/internal/manifest/compose/import.go:400 +0xf2d
    github.com/deref/exo/internal/manifest/compose.makeComponentBlock({0x4a2ab11, 0x9}, {0xc00051e670, 0x6}, {0x4a24d40, 0xc0000c8000}, {0xc00007a840, 0x1, 0x1})
            /go/src/github.com/deref/exo/internal/manifest/compose/import.go:249 +0x7c
    github.com/deref/exo/internal/manifest/compose.(*Importer).Import(0xc00007a160, 0xc00041ea50, {0xc0003d61a0, 0x19d, 0x1a0})
            /go/src/github.com/deref/exo/internal/manifest/compose/import.go:229 +0xe58
    github.com/deref/exo/internal/manifest.(*Loader).Load(0xc00069b8e0, 0xc0003d6000)
            /go/src/github.com/deref/exo/internal/manifest/manifest.go:74 +0x266
    github.com/deref/exo/internal/core/server.(*Workspace).loadManifest(0xc000378000, {0x557de68, 0xc000486180}, {0xc00051e038, 0x8}, 0xc00069b970)
            /go/src/github.com/deref/exo/internal/core/server/config.go:91 +0x414
    github.com/deref/exo/internal/core/server.(*Workspace).tryLoadManifest(0x1, {0x557de68, 0xc000486180})
            /go/src/github.com/deref/exo/internal/core/server/config.go:43 +0x72
    github.com/deref/exo/internal/core/server.(*Workspace).getEnvironment(0xc000378000, {0x557de68, 0xc000486180})
            /go/src/github.com/deref/exo/internal/core/server/environment.go:19 +0x46
    github.com/deref/exo/internal/core/server.(*Workspace).newController(0xc000378000, {0x557de68, 0xc000486180}, {{0xc00003a100, 0x1a}, {0xc00042a168, 0x3}, {0xc00042a170, 0x7}, {0xc0005c8000, ...}, ...})
            /go/src/github.com/deref/exo/internal/core/server/workspace.go:384 +0x105
    github.com/deref/exo/internal/core/server.(*Workspace).control(0xc000378000, {0x557de68, 0xc000486180}, {{0xc00003a100, 0x1a}, {0xc00042a168, 0x3}, {0xc00042a170, 0x7}, {0xc0005c8000, ...}, ...}, ...)
            /go/src/github.com/deref/exo/internal/core/server/workspace.go:1116 +0x7f
    github.com/deref/exo/internal/core/server.(*Workspace).createComponent(0xc000378000, {0x557de68, 0xc000486180}, 0xc00007e140, {0xc00003a100, 0x1a})
            /go/src/github.com/deref/exo/internal/core/server/workspace.go:501 +0x4da
    github.com/deref/exo/internal/core/server.(*Workspace).CreateComponent.func1()
            /go/src/github.com/deref/exo/internal/core/server/workspace.go:459 +0xa9
    created by github.com/deref/exo/internal/core/server.(*Workspace).CreateComponent
            /go/src/github.com/deref/exo/internal/core/server/workspace.go:456 +0x18f
    

    Expected behavior

    • I don't expect exo to read the docker-compose.yml file.
    • I expect the server to continue running

    System Info (please complete the following information):

    • OS: macOS
    • Component server
    • Version 2021.10.28-2
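Whatever set of YAML scalar tags the importer decides to accept, the defensive fix for the crash is to return an error rather than panic, so the server survives a bad manifest. A hypothetical sketch of that shape (the real code is in internal/manifest/compose/import.go):

```go
package main

import "fmt"

// scalarToHCL maps known YAML scalar tags to HCL literal text and returns an
// error (not a panic) for anything it doesn't handle. Hypothetical helper.
func scalarToHCL(tag, value string) (string, error) {
	switch tag {
	case "!!str":
		return fmt.Sprintf("%q", value), nil
	case "!!int", "!!float", "!!bool":
		return value, nil // numeric and boolean literals pass through unquoted
	default:
		return "", fmt.Errorf("unexpected yaml node tag: %q", tag)
	}
}

func main() {
	fmt.Println(scalarToHCL("!!int", "1")) // the sysctls value from the repro above
	if _, err := scalarToHCL("!!binary", ""); err != nil {
		fmt.Println(err) // reported as an error, server keeps running
	}
}
```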
  • Fix Firefox subpixel rendering bug


    Applies a minimal 1px outline to all shadows and rebalance other shadows in order to fix Firefox's subpixel rendering bug.

    Fixes #414

    I'm conflicted on whether this should actually be solved this way. Ultimately, this is Mozilla's fault in rendering subpixel shadows incorrectly (there's really no room for debate on this as far as I understand it - if designers intentionally create a shadow with subpixel parameters, the mathematics of computing those partial values is unquestionable. Their rendering engine is handling these wrong.)

    This fix not only sets a precedent of even more complicated manual shadow definitions, but it also actually fucks up the way these shadows render on Chrome-based browsers. The downgrade isn't huge (as it is balanced to be fairly decent for both), but especially on 4K+ displays like mine, you can tell the faint 1px solid border is there and it doesn't look quite as crisp and "3D" as before this change.

    I don't have an issue with having a fix for their bug anyways, but the way this is implemented is quite complicated. It would be nicer if we could code-gen something that doesn't require us to manually add these to each shadow definition.

    • Maybe use Sass/SCSS and have a function that creates an optimized pseudo-3D border/shadow given a few parameters?

    • Maybe use some sort of browser query to downgrade these shadows to a small selection of basic 1px borders if the user is on Firefox?


    Edit: Here's a look at what's going on here:

    image

    Above is a representation of the same box under 3 views.

    First, we have a 12 DPR (12 real pixels per logical pixel) view - as you can see, despite the shadow having grown 0.75px in all directions from the box itself, the 0.5px offset in the y direction has shifted it, such that the top shadow is thinner and the bottom shadow is thicker, relative to the sides.

    This creates the nice pseudo-3D effect, as if this box is a real object being affected by some light source from above, with real physical thickness in its material. The same is true in the correct 1 DPR rendering shown in the middle. These subpixel dark regions are correctly computed into lighter or darker 1-pixel borders. Why the claim that the mathematics is straightforward and not up for debate? Because it is. Each of the four parameters does a very specific thing - perhaps you could have some leeway around the blur parameter, but we aren't using that in these. The other 3 are unquestionable: you duplicate the shape of the box itself, offset it by the X and Y offsets, and expand it in all directions perpendicularly by the fourth thickness parameter. There is only one way to do this.
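The per-edge geometry described above reduces to simple arithmetic (spread ± offset), using the parameters from the illustration:

```go
package main

import "fmt"

func main() {
	// box-shadow parameters from the description above (logical px).
	offsetX, offsetY, spread := 0.0, 0.5, 0.75

	fmt.Println("top:", spread-offsetY)    // 0.25 — the sliver Firefox refuses to draw
	fmt.Println("bottom:", spread+offsetY) // 1.25
	fmt.Println("sides:", spread-offsetX)  // 0.75
}
```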

    You can see a recreation of Firefox's incorrect rendering on the right. Somewhere their system is deciding not to draw anything at all above the box, despite there being a 0.25px-thick logical border there. Clearly, then, the result should be a 1px border whose color is 75% that of the background and 25% that of the logical border. But they don't do this at all.

    Notably, Firefox understands these subpixel borders. On my 2 DPR display, they correctly render sub(logical)pixel borders down to 0.5px - because there are 4 real pixels in every 1 logical pixel on a 2 DPR display. The issue is they refuse to draw most subpixel values that are below 1 real pixel. This really doesn't make any sense, and there is no good reason to do this.

  • [FEATURE] Make `exo logs` print the exit code if the process is stopped


    Is your feature request related to a problem? Please describe.

    Sometimes when I exo logs <service-name>, the process is stopped but it just seems like it isn't logging at the moment.

    Describe the solution you'd like

    I think printing <process name> exited with status 1 or something similar would help.

    Describe alternatives you've considered

    Having exo logs exit if no processes are still running is also a possibility.

  • [FEATURE] more direct way to refresh daemon environment


    Right now, you have to run exo exit and then restart the daemon to pick up any new environment variables at that level. It shouldn't be necessary to do this.

    We may wish to explicitly invoke the user's preferred shell, so that we get a "clean" environment, instead of whatever environment exo happened to have been run in when it started the daemon.
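A sketch of that "clean environment" idea: launch the user's login shell, capture the environment it produces, and hand that to the daemon. Function name is hypothetical, and the naive line-by-line parse would mishandle multi-line values:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// freshEnv runs the user's preferred shell as a login shell and captures the
// environment it sets up, independent of how the daemon itself was started.
func freshEnv() (map[string]string, error) {
	shell := os.Getenv("SHELL")
	if shell == "" {
		shell = "/bin/sh"
	}
	out, err := exec.Command(shell, "-lc", "env").Output()
	if err != nil {
		return nil, err
	}
	env := map[string]string{}
	for _, line := range strings.Split(string(out), "\n") {
		if k, v, ok := strings.Cut(line, "="); ok {
			env[k] = v
		}
	}
	return env, nil
}

func main() {
	env, err := freshEnv()
	if err != nil {
		panic(err)
	}
	fmt.Println(env["PATH"] != "") // a login shell always exports PATH
}
```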

  • [BUG] Attempting to create privileged containers produces an opaque error


    Describe the bug

    It would appear that the privileged: true flag in the docker-compose manifest format isn't supported. Rather than producing an error to this effect, the daemon seems to encounter an internal failure and produces a user-opaque error, which had to be isolated by bisecting a services list.

    To Reproduce

    Steps to reproduce the behavior, with this docker-compose format spec:

    service:
      db2:
        image: ibmcom/db2:11.5.4.0
        container_name: divebell_db2_1
        restart: always
        privileged: true
        environment: {}
        volumes: []
        ports:
          - 50000:50000
    
    $ exo apply docker-compose.yml
    Error: posting: Post "http://localhost:43643/_exo/workspace/apply?id=e9tens6phhessw300s45pvy9fr": EOF
    posting: Post "http://localhost:43643/_exo/workspace/apply?id=e9tens6phhessw300s45pvy9fr": EOF
    

    Expected behavior

    An error stating that privileged containers are not supported, or that they're a bad idea and don't do that.

    System Info (please complete the following information):

    • OS: Ubuntu
    • Component: CLI/daemon
    • Version: 2021.11.16
  • [BUG] Sometimes we attempt to control containers that don't exist


    I believe the issue is a mismatch between what we refer to in exo as a container and what docker refers to as a container. Docker treats containers as disposable single use objects, to be lazily garbage collected at some undefined point in the future whilst, in exo, a container is really more akin to a docker-compose service. This causes problems when a docker container has been garbage collected by the docker daemon and we're still trying to interact with it:

    e.g.

    14:33:17    server 2021/12/07 14:33:17 container to be removed not found: "5df3bf4437c4acc2ec02ed219bf8734a7ca87057dc07980f534104181aeb55df"
    

    More importantly this sometimes breaks the start mechanism, since the docker container may not exist when exo tries to start the container.

    func (c *Container) Start(ctx context.Context, input *core.StartInput) (*core.StartOutput, error) {
    	//c.Initialize(ctx, &core.InitializeInput{Spec: })
    	if err := c.start(ctx); err != nil {
    		return nil, fmt.Errorf("starting process container: %w", err)
    	}
    	return &core.StartOutput{}, nil
    }
    
    func (c *Container) start(ctx context.Context) error {
    	err := c.Docker.ContainerStart(ctx, c.State.ContainerID, types.ContainerStartOptions{})
    	if err != nil {
    		c.State.Running = true
    		return fmt.Errorf("starting container: %w", err)
    	}
    	return nil
    }
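As an aside, the snippet above sets c.State.Running = true on the error path, which looks inverted, though I may be missing context. A hedged sketch of the "re-create when Docker has already collected the container" behavior this issue points toward, with the interface and names entirely hypothetical (the real fix would check the Docker client's not-found error):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("no such container")

// runtime abstracts the two Docker calls involved (hypothetical interface).
type runtime interface {
	Start(id string) error
	Create(image string) (string, error)
}

// ensureStarted treats "container not found" as "docker already collected it"
// and re-creates from the component spec instead of failing the start.
func ensureStarted(rt runtime, id, image string) (string, error) {
	err := rt.Start(id)
	if !errors.Is(err, errNotFound) {
		return id, err
	}
	newID, err := rt.Create(image)
	if err != nil {
		return "", fmt.Errorf("re-creating container: %w", err)
	}
	return newID, rt.Start(newID)
}

// fake runtime for demonstration: nothing exists until Create is called.
type fake struct{ live map[string]bool }

func (f *fake) Start(id string) error {
	if !f.live[id] {
		return errNotFound
	}
	return nil
}

func (f *fake) Create(image string) (string, error) {
	f.live["fresh"] = true
	return "fresh", nil
}

func main() {
	rt := &fake{live: map[string]bool{}}
	id, err := ensureStarted(rt, "5df3bf4437c4", "bash")
	fmt.Println(id, err) // the stale ID is replaced and the new container started
}
```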
    
  • [BUG] Manifest allows components with the same name


    This is a manifest that is apparently valid and is successfully applied:

    exo = "0.1"
    components {
      container "t0" {
        image = "bash"
        command = "sleep infinity"
      }
      container "t0" {
        image = "alpine"
        command = "sleep infinity"
      }
    }
    

    In practice this seems to consistently run the bash container:

    ❯ dexo apply
    Job URL: http://localhost:44643/#/jobs/8kjz638hb3838jwdhspg5ctqsm
    applying
    ✓ ├─ deleting t0
    ✓ └─ re-creating t0
    
    ❯ docker ps
    CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
    72b76bf1e65f   8856ae160078   "docker-entrypoint.s…"   25 seconds ago   Up 24 seconds             focused_proskuriakova
    
    ❯ docker inspect 8856ae160078 | jq '.[0].RepoTags'
    [
      "bash:5",
      "bash:latest"
    ]
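A duplicate-name check in the manifest loader would catch this at apply time rather than silently letting the last definition win. A minimal hypothetical sketch:

```go
package main

import "fmt"

// checkDuplicates rejects manifests whose component names collide.
// Hypothetical helper; the real check belongs in exo's manifest loader.
func checkDuplicates(names []string) error {
	seen := map[string]bool{}
	for _, n := range names {
		if seen[n] {
			return fmt.Errorf("duplicate component name: %q", n)
		}
		seen[n] = true
	}
	return nil
}

func main() {
	fmt.Println(checkDuplicates([]string{"t0", "t0"})) // the manifest above would be rejected
}
```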
    
Related Projects

  • Dozzle (dozzle.dev) – realtime log viewer for Docker containers; a small, lightweight web interface for monitoring Docker logs that doesn't store any log files. (Dec 30, 2022)
  • Gowl – process management and process monitoring tool in one; an infinite worker pool lets you control the pool and processes and monitor their status. (Nov 10, 2022)
  • Logex – a Go log library supporting tracing and levels, wrapping the standard log package. (Nov 27, 2022)
  • Nginx-Log-Analyzer – a lightweight log analyzer for Nginx access logs. (Nov 29, 2022)
  • Distributed-Log-Service – a distributed log service in Go. (Jun 1, 2022)
  • Log-analyzer – a log analyzer in Go. (Jan 27, 2022)
  • monkit – a flexible process data collection, metrics, monitoring, instrumentation, and tracing client library for Go. (Dec 14, 2022)
  • gosivy – real-time visualization of Go process metrics right in your terminal, no matter where the process runs. (Nov 27, 2022)
  • noti – monitor a process and trigger a notification when a long-running task finishes. (Jan 3, 2023)
  • go-log – a Go log library supporting levels and multiple handlers. (Dec 29, 2022)
  • log – a structured log interface that separates the logging API from its implementation and decouples the backend. (Sep 26, 2022)
  • lumberjack – a log rolling package for Go that writes logs to rolling files. (Jan 1, 2023)
  • CoLog – a prefix-based leveled execution log for Go, heavily inspired by Logrus. (Dec 14, 2022)
  • opentelemetry-log-collection – OpenTelemetry log collection library, originally developed by observIQ under the name Stanza. (Sep 15, 2022)
  • logpaste – a minimalist web service for uploading and sharing log files. (Dec 30, 2022)
  • logtoxray – write log entries, get X-Ray traces, with no distributed tracing instrumentation library required (work-in-progress prototype). (Apr 24, 2022)
  • logger – Binalyze's easily customizable wrapper for logrus with log rotation. (Oct 2, 2022)
  • lsd_ceph – log-structured virtual disk in Ceph. (Dec 13, 2021)
  • mlog – a multi-level logger based on the Go standard log package. (May 18, 2022)