karma

Alert dashboard for Prometheus Alertmanager.


Alertmanager >=0.19.0 is required, as older versions might not show all receivers in karma; see issue #812 for details.


See GitHub Releases for release changelog.

Feature overview

The Alertmanager UI is useful for browsing alerts and managing silences, but it's lacking as a dashboard tool - karma aims to fill this gap.

Alert aggregation and deduplication

Starting with the 0.7.0 release karma can aggregate alerts from multiple Alertmanager instances, running either in HA mode or separately. Unique alerts are displayed by filtering out duplicates. Each alert is tagged with the names of all Alertmanager instances it was found at and can be filtered based on those tags (@alertmanager). Note that @alertmanager tags will be visible only if karma is configured with multiple Alertmanager instances. If Alertmanager is configured to use HA clusters then @cluster will be available as well; to set a custom name for each cluster see CONFIGURATION.md.
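
A rough sketch of what a multi-Alertmanager setup could look like in the config file; the alertmanager:servers section is documented in CONFIGURATION.md, and the cluster field used here to name an HA cluster is an assumption based on that document:

    alertmanager:
      servers:
        # two members of the same HA cluster, sharing an assumed cluster name
        - name: am-prod-1
          uri: https://alertmanager1.example.com
          cluster: prod
        - name: am-prod-2
          uri: https://alertmanager2.example.com
          cluster: prod
        # a separate standalone instance
        - name: am-staging
          uri: https://alertmanager-staging.example.com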

Screenshot

Alert visualization

Alert groups

Alerts are displayed grouped, preserving the group_by configuration option from Alertmanager. Note that a unique alert group will be created for each receiver it uses in Alertmanager, as receivers can have different group_by settings. If a group contains multiple alerts, only the first few will be presented. Alerts can be expanded or hidden using the - / + buttons. The default number of alerts shown can be configured in the UI settings module. Each group can be collapsed to show only the title bar using the top right toggle icon. Each individual alert shows its unique labels and annotations; labels and annotations shared by all alerts in the group are moved to the footer.

Example

Alert history

Alertmanager doesn't currently provide any long term storage of alert events or a way to query for historical alerts, but each Prometheus server sending alerts stores metrics related to triggered alerts. When history:enabled is true karma will use the source fields from each alert to query alert related metrics on the remote Prometheus servers. The result is the number of times a given alert group triggered an alert per hour over the last 24h, displayed as 24 blocks. The darker the color, the more alerts were triggered in that hour compared to all other hours.

Example

For this feature to work karma must be able to connect to all Prometheus servers sending alerts. Be sure to set the --web.external-url Prometheus flag to a publicly reachable URL of each server.
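
As a minimal illustration, turning the feature on could look like this in the config file (history:enabled is the only key named above; see CONFIGURATION.md for related options):

    history:
      enabled: true   # query Prometheus servers referenced in each alert's source field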

Inhibited alerts

Inhibited alerts (suppressed by other alerts, see Alertmanager docs) will have a "muted" button.

Inhibited alert

Clicking on that button will bring up a modal with a list of inhibiting alerts.

Inhibiting alerts

Silence deduplication

If all alerts in a group were suppressed by the same silence then, to save screen space, the silence will also be moved to the footer.

Deduplicated silence

Label based multi-grid

To help separate alerts from different environments, or with different levels of severity, multi-grid mode can be enabled, which adds another layer of visual grouping for alert groups. To enable this mode go to the configuration modal and select a label name; all alerts will be grouped by that label, with a dedicated grid for each label value and an extra grid for alerts without that label present.

Example

Silence management

The silence modal allows you to create new silences and manage all silences already present in Alertmanager. Silence ACL rules can be used to control silence creation and editing; see the ACLs docs for more details.

Silence browser

Alert overview

Clicking on the alert counter in the top left corner will open the overview modal, which provides a quick overview of the top label values across all current alerts.

Overview

Alert acknowledgement

Starting with v0.50 karma can create short lived silences to acknowledge alerts with a single button click. To create silences that will resolve themselves only after all alerts are resolved you can use kthxbye. See the configuration docs for details.
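
A hedged sketch of what enabling acknowledgements could look like; the section and key names below are assumptions, check the configuration docs for the exact schema:

    # assumed section and key names - verify against CONFIGURATION.md
    alertAcknowledgement:
      enabled: true
      duration: 15m        # assumed: how long the short lived silence lasts
      author: karma        # assumed: author recorded on the acknowledgement silence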

Dead Man’s Switch support

Starting with v0.78 karma can be configured to check for Dead Man's Switch style alerts (an alert that is always firing). If no such alert is found in a given Alertmanager, karma will show an error in the UI. See the healthcheck:filters option in the configuration docs for details.
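
For illustration, a healthcheck filter matching an always-firing Watchdog style alert might be declared per Alertmanager instance roughly like this; only the healthcheck:filters option itself is named above, the placement and filter value are assumptions:

    alertmanager:
      servers:
        - name: am-prod
          uri: https://alertmanager.example.com
          healthcheck:
            filters:
              # assumed: a named filter set that must always match at least one alert
              watchdog:
                - alertname=Watchdog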

Dark mode

Starting with the v0.52 release karma includes both light and dark themes. By default it will follow the browser preference using prefers-color-scheme media queries.

Dark mode

Demo

The online demo runs the latest main branch or a PR branch version. It might include features that are experimental and not yet ready to be released.

Release notes

Release notes can be found on GitHub Release Page.

To get notifications about new karma releases go to the GitHub karma page, click Watch and select Releases only. This requires a GitHub user account. To get notifications without a GitHub account you can subscribe to the RSS feed that GitHub provides; to turn that feed into email notifications use one of the free RSS-to-email services, like Blogtrottr.

History

I created karma while working for Cloudflare; originally it was called unsee. This project is based on that code, but the UI was rewritten from scratch using React. The new UI required changes to the backend, so the API is also incompatible. Given that the React rewrite resulted in roughly 50% new code, and to avoid confusion for users, I decided to rename it to karma, especially since the original project wasn't being maintained anymore.

Supported Alertmanager versions

Alertmanager's API isn't stable yet and can change between releases; see VERSIONS in internal/mock/Makefile for the list of all Alertmanager releases that are tested and supported by karma. Due to API differences between those releases some features will work differently or be missing; it's recommended to use the latest supported Alertmanager version.

Security

karma doesn't in any way alter alerts in any Alertmanager instance it collects data from. This is true for both the backend and the web UI. The web UI allows users to manage silences by sending requests to Alertmanager instances; this can be done directly (browser to Alertmanager API) or by proxying such requests via the karma backend (browser to karma backend to Alertmanager API) if proxy mode is enabled in the karma config.

If you wish to deploy karma as a read-only tool without giving users any ability to modify data in an Alertmanager instance, then please ensure that:

  • the karma process is able to connect to the Alertmanager API
  • read-only users are able to connect to the karma web interface
  • read-only users are NOT able to connect to the Alertmanager API
  • readonly is set to true in the alertmanager:servers config section for all Alertmanager instances; this option will disable any UI elements that could trigger updates (like silence management) - see the sketch after this list
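
A minimal sketch of the read-only setup described in the last point above; only name, uri and readonly are taken from this document, the rest of your config stays as it is:

    alertmanager:
      servers:
        - name: am-prod
          uri: https://alertmanager.example.com
          readonly: true   # hides UI elements that could modify Alertmanager (e.g. silence management)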

To restrict some users from creating silences or to enforce matcher rules use silence ACL rules. This feature requires proxy mode to be enabled.

Metrics

karma process metrics are accessible under the /metrics path by default. If you set the --listen.prefix option a path relative to it will be used.
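
For example, assuming a prefix of /karma, the metrics path moves under that prefix; the listen:prefix config key shown here mirrors the --listen.prefix flag and is an assumption:

    listen:
      prefix: /karma   # metrics would then be served at /karma/metrics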

Building and running

Building from source

To clone the git repo and build the binary yourself run:

git clone https://github.com/prymitive/karma $GOPATH/src/github.com/prymitive/karma
cd $GOPATH/src/github.com/prymitive/karma

To compile the karma binary run:

make

Note that building locally from sources requires Go, nodejs and yarn. See Docker build options below for instructions on building from within a Docker container.

Running

karma can be configured using a config file, command line flags or environment variables. A config file is the recommended method; it's also the only way to configure karma to use multiple Alertmanager servers for collecting alerts. To run karma with a single Alertmanager server set the ALERTMANAGER_URI environment variable or pass the --alertmanager.uri flag on the command line, with the Alertmanager URI as the argument, for example:

ALERTMANAGER_URI=https://alertmanager.example.com karma
karma --alertmanager.uri https://alertmanager.example.com
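
For anything beyond a single server a config file is required; a minimal single-server file could look roughly like this (a sketch only - see CONFIGURATION for the full schema, and note that the timeout key is an assumption):

    # karma.yaml - minimal example
    alertmanager:
      servers:
        - name: default
          uri: https://alertmanager.example.com
          timeout: 20s   # assumed per-server request timeout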

There is a make target which will compile and run a demo karma docker image:

make run-demo

By default it will listen on port 8080 and serve mock alerts.

Docker

Running the pre-built Docker image

Official Docker images are built and hosted on GitHub.

Images are built automatically for:

  • release tags in git - ghcr.io/prymitive/karma:vX.Y.Z
  • main branch commits - ghcr.io/prymitive/karma:latest

NOTE karma uses uber-go/automaxprocs to automatically adjust GOMAXPROCS to match Linux container CPU quota.

Examples

To start a release image run:

docker run -e ALERTMANAGER_URI=https://alertmanager.example.com ghcr.io/prymitive/karma:vX.Y.Z

Latest release details can be found on GitHub.

To start a Docker image built from the latest main branch run:

docker run -e ALERTMANAGER_URI=https://alertmanager.example.com ghcr.io/prymitive/karma:latest

Note that the latest main branch might have bugs or breaking changes. Using release images is strongly recommended for any production use.

Building a Docker image

make docker-image

This will build a Docker image locally from sources.

Health checks

The /health endpoint can be used for health check probes; it always responds with a 200 OK code and a Pong response body.
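
As an illustration, a Kubernetes deployment could probe this endpoint roughly like so (a hedged sketch; the port assumes karma's default of 8080 and no --listen.prefix):

    livenessProbe:
      httpGet:
        path: /health   # always returns 200 OK with a "Pong" body
        port: 8080      # assumed default listen port
      initialDelaySeconds: 5
      periodSeconds: 30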

Configuration

Please see CONFIGURATION for the full list of available configuration options and example.yaml for a config file example.

Contributing

Please see CONTRIBUTING for details.

License

Apache License 2.0, please see LICENSE.

Comments
  • Don't work with the latest version api (v2) alertmanager

    Hi!

    In the new version of Alertmanager the API version will change to v2 and the URI /api/v1/ will no longer work. The dashboard then doesn't work - the request to http://alertmanager:9093/api/v1/alerts/groups fails with 404 Not Found.

    Docker image used for testing: prometheus/alertmanager:master

  • Follow 302 redirects when fetching /alerts.json

    I use Karma with alertmanager.proxy: true and authentication/authorization handled by a reverse proxy (Cloudflare Access to be precise).

    The issue is that the session tokens set by Cloudflare Access expire after a while, so when Karma tries to fetch from /alerts.json, it gets an HTTP 302 reply and stops there. If it followed the redirects instead, it would be able to renew its token (through the magic of SSO), and then it could fetch the alerts as if nothing had happened.

    Does that make sense?

  • Sometimes, no rendering alerts

    We use Karma on top of an Alertmanager high availability cluster (with 2 or more AM) and we filter on labels. Sometimes no alerts are rendered in Karma. In this example, the counter at the top indicates that no alert matches 'pf="sdr"', but in AM we can see the alerts.

    (screenshots omitted)

    We don't see any errors in Karma's logs:

    time="2019-07-09T07:06:46Z" level=info msg="Pulling latest alerts and silences from Alertmanager"
    time="2019-07-09T07:06:46Z" level=info msg="[sdr] Collecting alerts and silences"
    time="2019-07-09T07:06:46Z" level=info msg="[xymon] Collecting alerts and silences"
    time="2019-07-09T07:06:46Z" level=info msg="GET http://alertmanager.xymon:9093/metrics timeout=10s"
    time="2019-07-09T07:06:46Z" level=info msg="GET http://alertmanager.prometheus:9093/metrics timeout=10s"
    time="2019-07-09T07:06:46Z" level=info msg="Upstream version: 0.18.0"
    time="2019-07-09T07:06:46Z" level=info msg="GET http://alertmanager.prometheus:9093/api/v1/status timeout=10s"
    time="2019-07-09T07:06:46Z" level=info msg="[sdr] Remote Alertmanager version: 0.18.0"
    time="2019-07-09T07:06:46Z" level=info msg="[sdr] Got 3 silences(s) in 764.867µs"
    time="2019-07-09T07:06:46Z" level=info msg="[sdr] Detecting JIRA links in silences (3)"
    time="2019-07-09T07:06:46Z" level=info msg="Upstream version: 0.17.0"
    time="2019-07-09T07:06:46Z" level=info msg="GET http://alertmanager.xymon:9093/api/v1/status timeout=10s"
    time="2019-07-09T07:06:46Z" level=info msg="[xymon] Remote Alertmanager version: 0.17.0"
    time="2019-07-09T07:06:46Z" level=info msg="[xymon] Got 0 silences(s) in 4.333911ms"
    time="2019-07-09T07:06:46Z" level=info msg="[xymon] Detecting JIRA links in silences (0)"
    time="2019-07-09T07:06:46Z" level=info msg="[sdr] Got 136 alert group(s) in 67.698158ms"
    time="2019-07-09T07:06:46Z" level=info msg="[sdr] Deduplicating alert groups (136)"
    time="2019-07-09T07:06:46Z" level=info msg="[sdr] Processing unique alert groups (56)"
    time="2019-07-09T07:06:46Z" level=info msg="[sdr] Merging autocomplete data (480)"
    time="2019-07-09T07:06:46Z" level=info msg="[xymon] Got 997 alert group(s) in 585.932343ms"
    time="2019-07-09T07:06:46Z" level=info msg="[xymon] Deduplicating alert groups (997)"
    time="2019-07-09T07:06:46Z" level=info msg="[xymon] Processing unique alert groups (838)"
    time="2019-07-09T07:06:46Z" level=info msg="[xymon] Merging autocomplete data (2908)"
    time="2019-07-09T07:06:46Z" level=info msg="Pull completed"
    
    time="2019-07-09T06:59:34Z" level=info msg="[10.244.6.0 MIS] <200> GET /alerts.json?q=pf%3Dsdr&q=%40state%3Dactive took 15.087184ms"
    time="2019-07-09T07:00:34Z" level=info msg="[10.244.6.0 MIS] <200> GET /alerts.json?q=pf%3Dsdr&q=%40state%3Dactive took 18.785712ms"
    time="2019-07-09T07:01:35Z" level=info msg="[10.244.6.0 MIS] <200> GET /alerts.json?q=pf%3Dsdr&q=%40state%3Dactive took 15.025122ms"
    time="2019-07-09T07:02:35Z" level=info msg="[10.244.6.0 MIS] <200> GET /alerts.json?q=pf%3Dsdr&q=%40state%3Dactive took 11.880314ms"
    time="2019-07-09T07:03:36Z" level=info msg="[10.244.6.0 MIS] <200> GET /alerts.json?q=pf%3Dsdr&q=%40state%3Dactive took 16.137345ms"
    time="2019-07-09T07:04:37Z" level=info msg="[10.244.6.0 MIS] <200> GET /alerts.json?q=pf%3Dsdr&q=%40state%3Dactive took 19.860792ms"
    time="2019-07-09T07:05:37Z" level=info msg="[10.244.6.0 MIS] <200> GET /alerts.json?q=pf%3Dsdr&q=%40state%3Dactive took 18.43056ms"
    time="2019-07-09T07:06:38Z" level=info msg="[10.244.6.0 MIS] <200> GET /alerts.json?q=pf%3Dsdr&q=%40state%3Dactive took 21.778087ms"
    time="2019-07-09T07:07:38Z" level=info msg="[10.244.6.0 MIS] <200> GET /alerts.json?q=pf%3Dsdr&q=%40state%3Dactive took 22.852808ms"
    time="2019-07-09T07:08:39Z" level=info msg="[10.244.6.0 MIS] <200> GET /alerts.json?q=pf%3Dsdr&q=%40state%3Dactive took 14.763488ms"
    

    Did you encounter this problem?

  • authentication not passed to /alert.json

    We authenticate to karma using Apache.

    But now, with the rewrite to React, /alerts.json is not authenticated like it was with unsee. In the console I see 401 errors on that endpoint.

  • Wrong receiver

    Hi. I've got a strange situation. I see the alert in Karma with an unexpected receiver (screenshot omitted); I expect to see team-kafka-wake-up there:

    {
      "annotations": {
        "summary": "K2DWH Lag is growing. Info: group=k2dwh, count=8.172123e+06, location=fr"
      },
      "endsAt": "2020-06-11T09:54:41.353Z",
      "fingerprint": "52034abab298fe32",
      "receivers": [
        {
          "name": "team-kafka-wake-up"
        },
        {
          "name": "team-kafka-wake-up"
        }
      ],
      "startsAt": "2020-06-11T09:45:26.353Z",
      "status": {
        "inhibitedBy": [],
        "silencedBy": [],
        "state": "active"
      },
      "updatedAt": "2020-06-11T09:51:41.451Z",
      "generatorURL": "https://****",
      "labels": {
        "alertname": "k2dwh_lag",
        "consumer_group": "k2dwh",
        "kafka_location": "fr",
        "severity": "critical",
        "team": "Team_Kafka"
      }
    }
    
  • Animation flickers back and forth

    See video on https://www.dropbox.com/s/lfyr7jv8n6dwzur/Screen%20Recording%202019-05-22%20at%2010.34.26.mov?dl=0

    Running with chart v1.1.13 which is karma 0.34: https://github.com/helm/charts/blob/master/stable/karma/values.yaml#L9

  • Feature request: Nightmode

    First of all, thank you for developing this dashboard! It is very handy for us at trivago.

    Would it be possible to have a "Nightmode" or "Dark mode" setting in the dashboard?

    Some of us work in a low luminosity environment and the karma dashboard is bright. It strains the eye after a while.

    If the little alarm windows could have a dark gray background for example, and the window could have an even darker shade of gray in this option, that would be great! Grafana does this very well.

    Do you think this would be possible? It would certainly make some of your users very happy!

  • Annotation hidden on groups

    In Karma V109 we are seeing some weird behaviour with the summary annotation in groups. We have the following set so the summary annotation should always be visible:

        annotations:
          default:
            hidden: false
          visible:
            - summary
          order:
            - summary
    

    but we've noticed that on grouped alerts, if the summary annotation is unique it shows up, but if it's the same for all group members it moves to the bottom of the card and you can only see it by clicking the plus. This is unexpected behaviour as we have the app configured to always show the summary annotation.

    Example: there is no summary annotation displayed, but if we click on the plus, the summary annotation appears (screenshots omitted).

  • Regex - escaped characters not matching in alert manager

    Hi,

    We upgraded to 0.95 this morning, and following on from https://github.com/prymitive/karma/pull/3881, we're now seeing silences not matching correctly when special chars are escaped in the regex.

    Example - Top matcher is an expired alert, and bottom is a new remaking of it that's failing to match:

    (screenshot omitted)

  • custom Slience Management URL when proxy enabled

    Hi, I have karma deployed behind a proxy from Rancher. So the karma URL is something like https://example.com/k8s/clusters/project/api/v1/namespaces/monitoring/services/http:karma:80/proxy/?q=

    When the proxy mode is enabled, silence management request will be sent to https://example.com/proxy/alertmanager, instead of https://example.com/k8s/clusters/project/api/v1/namespaces/monitoring/services/http:karma:80/proxy/alertmanager

    Both listing alerts and listing silences work fine, but silence management requests are sent to an invalid URL.

    I found the code that returns InternalURI: https://github.com/prymitive/karma/blob/main/internal/alertmanager/models.go#L183-L195

    // InternalURI is the URI of this Alertmanager that will be used for all request made by the UI
    func (am *Alertmanager) InternalURI() string {
    	if am.ProxyRequests {
    		sub := fmt.Sprintf("/proxy/alertmanager/%s", am.Name)
    		if strings.HasPrefix(config.Config.Listen.Prefix, "/") {
    			return path.Join(config.Config.Listen.Prefix, sub)
    		}
    		return path.Join("/"+config.Config.Listen.Prefix, sub)
    	}
    
    	// strip all user/pass information, fetch() doesn't support it anyway
    	return uri.WithoutUserinfo(am.PublicURI())
    }
    

    It would be nice if InternalURI could be customized, falling back to this code when no custom URL is specified, for backward compatibility.

  • Feature request: allow custom colorTitlebar based on severity

    Hello,

    Thanks for the awesome project -- we use it every day to help us monitor our network.

    We have this running on a large TV and we want an easy way to identify critical alerts from far away. The colorTitlebar feature is nice, but is limited to red (non-silenced) or green (silenced).

    It would be great if I could pass in a custom mapping of severity to color for the titlebar (similar to how the label color config is done). When using this feature, you could force the alert count to have a black background with white text.

    The yaml could maybe look something like this (just a sketch, feel free to change):

    titlebar:
      color:
        silenced: "#18bc9c"
        custom:
          severity:
            - value: info
              color: "#87c4e0"
            - value: warning
              color: "#ffae42"
            - value: critical
              color: "#ff220c"
    

    Thanks

  • Apply color by annotation

    The default Alertmanager<->OpsGenie integration uses 'annotation.priority' as its preferred field to determine the severity level. I would like to show these alerts in Karma and match the priority color to the OpsGenie colors. However, the configuration of Karma seems to only allow 'labels' as colorable.

    I have 2 basic ideas:

    1. Expand the config to allow colorizing 'annotations'
    2. Add a setting/function to 'promote' certain annotations into labels (thus using the existing config) (this could also allow a 'demote' setting to convert a label into an annotation)

    I'm willing to try to do a PR for option 2, but only if you feel it's a good-enough option

  • Alertmanager Consul discovery

    Is it possible to scrape Alertmanager targets using Consul discovery?

    Prometheus example:

    scrape_configs:
      - job_name: 'haproxy'
        consul_sd_configs:
        - server: 'localhost:8500'
          services:
            - 'haproxy_exporter'
    

    I think this could be useful for adding Alertmanager instances running in AWS EC2 or k8s.

  • feat: Implement history transport authentication option

    We are using victoria-metrics with HTTP basic auth as the database that stores the history, so I implemented this feature. I'm not a Go programmer so some things may not be written in the best way, sorry for that.
