⛑ Gatus - Automated service health dashboard


Gatus is a health dashboard that lets you monitor your services using HTTP, ICMP, TCP, and even DNS queries, and evaluate the results against a list of conditions on values such as the status code, the response time, the certificate expiration, and the body. The icing on top is that each of these health checks can be paired with alerting via Slack, PagerDuty, Discord, Twilio, and more.

I personally deploy it in my Kubernetes cluster and let it monitor the status of my core applications: https://status.twin.sh/

Quick start
docker run -p 8080:8080 --name gatus twinproduction/gatus

For more details, see Usage

Gatus dashboard conditions

Have any feedback or want to share your good/bad experience with Gatus? Feel free to email me at [email protected]


Why Gatus?

Before getting into the specifics, I want to address the most common question:

Why would I use Gatus when I can just use Prometheus’ Alertmanager, Cloudwatch or even Splunk?

None of these can tell you that there's a problem if no clients are actively calling the endpoint. In other words, metrics-based monitoring relies on existing traffic, which effectively means that unless your clients are already experiencing a problem, you won't be notified.

Gatus, on the other hand, allows you to configure health checks for each of your features, which in turn allows it to monitor these features and potentially alert you before any clients are impacted.

A simple way to tell whether you should look into Gatus is to ask yourself whether you'd receive an alert if your load balancer were to go down right now. Would any of your existing alerts be triggered? Your metrics won't report an increase in errors if no traffic makes it to your applications. This puts you in a situation where your clients are the ones notifying you about the degradation of your services, rather than you reassuring them that you're already working on a fix before they even notice the problem.

Features

The main features of Gatus are:

  • Highly flexible health check conditions: While checking the response status may be enough for some use cases, Gatus goes much further and allows you to add conditions on the response time, the response body and even the IP address.
  • Ability to use Gatus for user acceptance tests: Thanks to the point above, you can leverage this application to create automated user acceptance tests.
  • Very easy to configure: Not only is the configuration designed to be as readable as possible, it's also extremely easy to add a new service or a new endpoint to monitor.
  • Alerting: While having a pretty visual dashboard is useful to keep track of the state of your application(s), you probably don't want to stare at it all day. Thus, notifications via Slack, Mattermost, Messagebird, PagerDuty, Twilio and Teams are supported out of the box with the ability to configure a custom alerting provider for any needs you might have, whether it be a different provider or a custom application that manages automated rollbacks.
  • Metrics
  • Low resource consumption: As with most Go applications, the resource footprint that this application requires is negligibly small.
  • Badges: Uptime 7d Response time 24h
  • Dark mode

Gatus dashboard dark mode

Usage

By default, the configuration file is expected to be at config/config.yaml.

You can specify a custom path by setting the GATUS_CONFIG_FILE environment variable.
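
For instance, a minimal docker-compose sketch (not an official example; the file names and paths here are illustrative assumptions) that mounts a configuration file at a custom path and points GATUS_CONFIG_FILE at it:

services:
  gatus:
    image: twinproduction/gatus
    ports:
      - "8080:8080"
    environment:
      GATUS_CONFIG_FILE: /custom/my-config.yaml   # tell Gatus where the config lives
    volumes:
      - ./my-config.yaml:/custom/my-config.yaml   # mount the local config at that path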

Here's a simple example of a configuration file:

endpoints:
  - name: website                 # Name of your endpoint, can be anything
    url: "https://twin.sh/health"
    interval: 5m                  # Duration to wait between every status check (default: 60s)
    conditions:
      - "[STATUS] == 200"         # Status must be 200
      - "[BODY].status == UP"     # The json path "$.status" must be equal to UP
      - "[RESPONSE_TIME] < 300"   # Response time must be under 300ms
  - name: example
    url: "https://example.org/"
    interval: 60s
    conditions:
      - "[STATUS] == 200"

This example would look similar to this:

Simple example

Note that you can also use environment variables in the configuration file (e.g. $DOMAIN, ${DOMAIN})

If you want to test it locally, see Docker.

Configuration

Parameter Description Default
debug Whether to enable debug logs. false
metrics Whether to expose metrics at /metrics. false
storage Storage configuration {}
endpoints List of endpoints to monitor. Required []
endpoints[].enabled Whether to monitor the endpoint. true
endpoints[].name Name of the endpoint. Can be anything. Required ""
endpoints[].group Group name. Used to group multiple endpoints together on the dashboard. See Endpoint groups. ""
endpoints[].url URL to send the request to. Required ""
endpoints[].method Request method. GET
endpoints[].conditions Conditions used to determine the health of the endpoint. See Conditions. []
endpoints[].interval Duration to wait between every status check. 60s
endpoints[].graphql Whether to wrap the body in a query param ({"query":"$body"}). false
endpoints[].body Request body. ""
endpoints[].headers Request headers. {}
endpoints[].dns Configuration for an endpoint of type DNS. See Monitoring an endpoint using DNS queries. ""
endpoints[].dns.query-type Query type (e.g. MX) ""
endpoints[].dns.query-name Query name (e.g. example.com) ""
endpoints[].alerts[].type Type of alert. Valid types: slack, discord, email, pagerduty, twilio, mattermost, messagebird, teams, custom. Required ""
endpoints[].alerts[].enabled Whether to enable the alert. false
endpoints[].alerts[].failure-threshold Number of failures in a row needed before triggering the alert. 3
endpoints[].alerts[].success-threshold Number of successes in a row before an ongoing incident is marked as resolved. 2
endpoints[].alerts[].send-on-resolved Whether to send a notification once a triggered alert is marked as resolved. false
endpoints[].alerts[].description Description of the alert. Will be included in the alert sent. ""
endpoints[].client Client configuration. {}
endpoints[].ui UI configuration at the endpoint level. {}
endpoints[].ui.hide-hostname Whether to hide the hostname in the result. false
endpoints[].ui.dont-resolve-failed-conditions Whether to resolve failed conditions for the UI. false
alerting Alerting configuration. {}
security Security configuration. {}
security.basic Basic authentication security configuration. {}
security.basic.username Username for Basic authentication. Required ""
security.basic.password-sha512 Password's SHA512 hash for Basic authentication. Required ""
disable-monitoring-lock Whether to disable the monitoring lock. false
skip-invalid-config-update Whether to ignore invalid configuration updates. See Reloading configuration on the fly. false
web Web configuration. {}
web.address Address to listen on. 0.0.0.0
web.port Port to listen on. 8080
ui UI configuration. {}
ui.title Title of the page. Health Dashboard ǀ Gatus
ui.logo URL to the logo to display ""
maintenance Maintenance configuration. See Maintenance. {}
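
For illustration, here's a hedged sketch combining a few of the top-level parameters listed above (the values are examples, not recommendations):

metrics: true                          # expose metrics at /metrics
web:
  port: 8080
ui:
  title: "Health Dashboard ǀ Gatus"

endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 5m
    conditions:
      - "[STATUS] == 200"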

Conditions

Here are some examples of conditions you can use:

Condition Description Passing values Failing values
[STATUS] == 200 Status must be equal to 200 200 201, 404, ...
[STATUS] < 300 Status must be lower than 300 200, 201, 299 301, 302, ...
[STATUS] <= 299 Status must be less than or equal to 299 200, 201, 299 301, 302, ...
[STATUS] > 400 Status must be greater than 400 401, 402, 403, 404 400, 200, ...
[STATUS] == any(200, 429) Status must be either 200 or 429 200, 429 201, 400, ...
[CONNECTED] == true Connection to host must've been successful true false
[RESPONSE_TIME] < 500 Response time must be below 500ms 100ms, 200ms, 300ms 500ms, 501ms
[IP] == 127.0.0.1 Target IP must be 127.0.0.1 127.0.0.1 0.0.0.0
[BODY] == 1 The body must be equal to 1 1 {}, 2, ...
[BODY].user.name == john JSONPath value of $.user.name is equal to john {"user":{"name":"john"}}
[BODY].data[0].id == 1 JSONPath value of $.data[0].id is equal to 1 {"data":[{"id":1}]}
[BODY].age == [BODY].id JSONPath value of $.age is equal JSONPath $.id {"age":1,"id":1}
len([BODY].data) < 5 Array at JSONPath $.data has less than 5 elements {"data":[{"id":1}]}
len([BODY].name) == 8 String at JSONPath $.name has a length of 8 {"name":"john.doe"} {"name":"bob"}
has([BODY].errors) == false JSONPath $.errors does not exist {"name":"john.doe"} {"errors":[]}
has([BODY].users) == true JSONPath $.users exists {"users":[]} {}
[BODY].name == pat(john*) String at JSONPath $.name matches pattern john* {"name":"john.doe"} {"name":"bob"}
[BODY].id == any(1, 2) Value at JSONPath $.id is equal to 1 or 2 1, 2 3, 4, 5
[CERTIFICATE_EXPIRATION] > 48h Certificate expiration is more than 48h away 49h, 50h, 123h 1h, 24h, ...
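
To tie these together, here's a hedged example of a single endpoint combining several of the conditions above (the URL and thresholds are purely illustrative):

endpoints:
  - name: api
    url: "https://example.org/api/v1/users/1"
    conditions:
      - "[STATUS] == any(200, 429)"        # accept 200 or 429
      - "[RESPONSE_TIME] < 500"            # must answer in under 500ms
      - "[CERTIFICATE_EXPIRATION] > 48h"   # certificate must remain valid for at least 48h
      - "has([BODY].errors) == false"      # no "errors" key in the response body
      - "[BODY].name == pat(john*)"        # $.name must match the pattern john*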

Placeholders

Placeholder Description Example of resolved value
[STATUS] Resolves into the HTTP status of the request 404
[RESPONSE_TIME] Resolves into the response time the request took, in ms 10
[IP] Resolves into the IP of the target host 192.168.0.232
[BODY] Resolves into the response body. Supports JSONPath. {"name":"john.doe"}
[CONNECTED] Resolves into whether a connection could be established true
[CERTIFICATE_EXPIRATION] Resolves into the duration before certificate expiration 24h, 48h, 0 (if the protocol doesn't use certificates)
[DNS_RCODE] Resolves into the DNS status of the response NOERROR

Functions

Function Description Example
len Returns the length of the object/slice. Works only with the [BODY] placeholder. len([BODY].username) > 8
has Returns true or false based on whether a given path is valid. Works only with the [BODY] placeholder. has([BODY].errors) == false
pat Specifies that the string passed as parameter should be evaluated as a pattern. Works only with == and !=. [IP] == pat(192.168.*)
any Specifies that any one of the values passed as parameters is a valid value. Works only with == and !=. [BODY].ip == any(127.0.0.1, ::1)

NOTE: Use pat only when you need to. [STATUS] == pat(2*) is a lot more expensive than [STATUS] < 300.

Storage

Parameter Description Default
storage Storage configuration {}
storage.path Path to persist the data in. Only supported for types sqlite and postgres. ""
storage.type Type of storage. Valid types: memory, sqlite, postgres. "memory"
  • If storage.type is memory (default):
# Note that this is the default value, and you can omit the storage configuration altogether to achieve the same result.
# Because the data is stored in memory, the data will not survive a restart.
storage:
  type: memory
  • If storage.type is sqlite, storage.path must not be blank:
storage:
  type: sqlite
  path: data.db

See examples/docker-compose-sqlite-storage for an example.

  • If storage.type is postgres, storage.path must be the connection URL:
storage:
  type: postgres
  path: "postgres://user:[email protected]:5432/gatus?sslmode=disable"

See examples/docker-compose-postgres-storage for an example.

Client configuration

In order to support a wide range of environments, each monitored endpoint has a unique configuration for the client used to send the request.

Parameter Description Default
client.insecure Whether to skip verifying the server's certificate chain and host name. false
client.ignore-redirect Whether to ignore redirects (true) or follow them (false, default). false
client.timeout Duration before timing out. 10s

Note that some of these parameters are ignored based on the type of endpoint. For instance, there's no certificate involved in ICMP requests (ping), therefore, setting client.insecure to true for an endpoint of that type will not do anything.

The default configuration is as follows:

client:
  insecure: false
  ignore-redirect: false
  timeout: 10s

Note that this configuration is only available under endpoints[], alerting.mattermost and alerting.custom.

Here's an example with the client configuration under endpoints[]:

endpoints:
  - name: website
    url: "https://twin.sh/health"
    client:
      insecure: false
      ignore-redirect: false
      timeout: 10s
    conditions:
      - "[STATUS] == 200"

Alerting

Gatus supports multiple alerting providers, such as Slack and PagerDuty, and supports different alerts for each individual endpoint with configurable descriptions and thresholds.

Note that if an alerting provider is not properly configured, all alerts configured with the provider's type will be ignored.

Parameter Description Default
alerting.discord Configuration for alerts of type discord. See Configuring Discord alerts. {}
alerting.email Configuration for alerts of type email. See Configuring Email alerts. {}
alerting.mattermost Configuration for alerts of type mattermost. See Configuring Mattermost alerts. {}
alerting.messagebird Configuration for alerts of type messagebird. See Configuring Messagebird alerts. {}
alerting.opsgenie Configuration for alerts of type opsgenie. See Configuring Opsgenie alerts. {}
alerting.pagerduty Configuration for alerts of type pagerduty. See Configuring PagerDuty alerts. {}
alerting.slack Configuration for alerts of type slack. See Configuring Slack alerts. {}
alerting.teams Configuration for alerts of type teams. See Configuring Teams alerts. {}
alerting.telegram Configuration for alerts of type telegram. See Configuring Telegram alerts. {}
alerting.twilio Settings for alerts of type twilio. See Configuring Twilio alerts. {}
alerting.custom Configuration for custom actions on failure or alerts. See Configuring Custom alerts. {}

Configuring Discord alerts

Parameter Description Default
alerting.discord Configuration for alerts of type discord {}
alerting.discord.webhook-url Discord Webhook URL Required ""
alerting.discord.default-alert Default alert configuration. See Setting a default alert. N/A

alerting:
  discord: 
    webhook-url: "https://discord.com/api/webhooks/**********/**********"

endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s
    conditions:
      - "[STATUS] == 200"
      - "[BODY].status == UP"
      - "[RESPONSE_TIME] < 300"
    alerts:
      - type: discord
        enabled: true
        description: "healthcheck failed"
        send-on-resolved: true

Configuring Email alerts

Parameter Description Default
alerting.email Configuration for alerts of type email {}
alerting.email.from Email used to send the alert Required ""
alerting.email.password Password of the email used to send the alert Required ""
alerting.email.host Host of the mail server (e.g. smtp.gmail.com) Required ""
alerting.email.port Port the mail server is listening to (e.g. 587) Required 0
alerting.email.to Email(s) to send the alerts to Required ""
alerting.email.default-alert Default alert configuration. See Setting a default alert. N/A

alerting:
  email:
    from: "[email protected]"
    password: "hunter2"
    host: "mail.example.com"
    port: 587
    to: "[email protected],[email protected]"

endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 5m
    conditions:
      - "[STATUS] == 200"
      - "[BODY].status == UP"
      - "[RESPONSE_TIME] < 300"
    alerts:
      - type: email
        enabled: true
        description: "healthcheck failed"
        send-on-resolved: true

NOTE: Some mail servers are painfully slow.

Configuring Mattermost alerts

Parameter Description Default
alerting.mattermost Configuration for alerts of type mattermost {}
alerting.mattermost.webhook-url Mattermost Webhook URL Required ""
alerting.mattermost.client Client configuration. See Client configuration. {}
alerting.mattermost.default-alert Default alert configuration. See Setting a default alert. N/A

alerting:
  mattermost: 
    webhook-url: "http://**********/hooks/**********"
    client:
      insecure: true

endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s
    conditions:
      - "[STATUS] == 200"
      - "[BODY].status == UP"
      - "[RESPONSE_TIME] < 300"
    alerts:
      - type: mattermost
        enabled: true
        description: "healthcheck failed"
        send-on-resolved: true

Here's an example of what the notifications look like:

Mattermost notifications

Configuring Messagebird alerts

Parameter Description Default
alerting.messagebird Settings for alerts of type messagebird {}
alerting.messagebird.access-key Messagebird access key Required ""
alerting.messagebird.originator The sender of the message Required ""
alerting.messagebird.recipients The recipients of the message Required ""
alerting.messagebird.default-alert Default alert configuration. See Setting a default alert. N/A

Example of sending SMS text message alert using Messagebird:

alerting:
  messagebird:
    access-key: "..."
    originator: "31619191918"
    recipients: "31619191919,31619191920"

endpoints:
  - name: website
    interval: 30s
    url: "https://twin.sh/health"
    conditions:
      - "[STATUS] == 200"
      - "[BODY].status == UP"
      - "[RESPONSE_TIME] < 300"
    alerts:
      - type: messagebird
        enabled: true
        failure-threshold: 3
        send-on-resolved: true
        description: "healthcheck failed"

Configuring Opsgenie alerts

Parameter Description Default
alerting.opsgenie Configuration for alerts of type opsgenie {}
alerting.opsgenie.api-key Opsgenie API Key Required ""
alerting.opsgenie.priority Priority level of the alert. P1
alerting.opsgenie.source Source field of the alert. gatus
alerting.opsgenie.entity-prefix Entity field prefix. gatus-
alerting.opsgenie.alias-prefix Alias field prefix. gatus-healthcheck-
alerting.opsgenie.tags Tags of alert. []

The Opsgenie provider will automatically open and close alerts.

alerting:
  opsgenie:
    api-key: "00000000-0000-0000-0000-000000000000"

Configuring PagerDuty alerts

Parameter Description Default
alerting.pagerduty Configuration for alerts of type pagerduty {}
alerting.pagerduty.integration-key PagerDuty Events API v2 integration key ""
alerting.pagerduty.default-alert Default alert configuration. See Setting a default alert. N/A
alerting.pagerduty.overrides List of overrides that may be prioritized over the default configuration []
alerting.pagerduty.overrides[].group Endpoint group for which the configuration will be overridden by this configuration ""
alerting.pagerduty.overrides[].integration-key PagerDuty Events API v2 integration key ""

It is highly recommended to set endpoints[].alerts[].send-on-resolved to true for alerts of type pagerduty: unlike other alert types, setting this parameter to true will not create another incident, but will instead mark the existing incident as resolved on PagerDuty.

Behavior:

  • By default, alerting.pagerduty.integration-key is used as the integration key
  • If the endpoint being evaluated belongs to a group (endpoints[].group) matching the value of alerting.pagerduty.overrides[].group, the provider will use that override's integration key instead of alerting.pagerduty.integration-key's
alerting:
  pagerduty: 
    integration-key: "********************************"
    # You can also add group-specific integration keys, which will 
    # override the integration key above for the specified groups
    overrides:
     - group: "core"
       integration-key: "********************************"

endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s
    conditions:
      - "[STATUS] == 200"
      - "[BODY].status == UP"
      - "[RESPONSE_TIME] < 300"
    alerts:
      - type: pagerduty
        enabled: true
        failure-threshold: 3
        success-threshold: 5
        send-on-resolved: true
        description: "healthcheck failed"

  - name: back-end
    group: core
    url: "https://example.org/"
    interval: 5m
    conditions:
      - "[STATUS] == 200"
      - "[CERTIFICATE_EXPIRATION] > 48h"
    alerts:
      - type: pagerduty
        enabled: true
        failure-threshold: 3
        success-threshold: 5
        send-on-resolved: true
        description: "healthcheck failed"

Configuring Slack alerts

Parameter Description Default
alerting.slack Configuration for alerts of type slack {}
alerting.slack.webhook-url Slack Webhook URL Required ""
alerting.slack.default-alert Default alert configuration. See Setting a default alert. N/A

alerting:
  slack: 
    webhook-url: "https://hooks.slack.com/services/**********/**********/**********"

endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s
    conditions:
      - "[STATUS] == 200"
      - "[BODY].status == UP"
      - "[RESPONSE_TIME] < 300"
    alerts:
      - type: slack
        enabled: true
        description: "healthcheck failed 3 times in a row"
        send-on-resolved: true
      - type: slack
        enabled: true
        failure-threshold: 5
        description: "healthcheck failed 5 times in a row"
        send-on-resolved: true

Here's an example of what the notifications look like:

Slack notifications

Configuring Teams alerts

Parameter Description Default
alerting.teams Configuration for alerts of type teams {}
alerting.teams.webhook-url Teams Webhook URL Required ""
alerting.teams.default-alert Default alert configuration. See Setting a default alert. N/A

alerting:
  teams:
    webhook-url: "https://********.webhook.office.com/webhookb2/************"

endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s
    conditions:
      - "[STATUS] == 200"
      - "[BODY].status == UP"
      - "[RESPONSE_TIME] < 300"
    alerts:
      - type: teams
        enabled: true
        description: "healthcheck failed"
        send-on-resolved: true

Here's an example of what the notifications look like:

Teams notifications

Configuring Telegram alerts

Parameter Description Default
alerting.telegram Configuration for alerts of type telegram {}
alerting.telegram.token Telegram Bot Token Required ""
alerting.telegram.id Telegram User ID Required ""
alerting.telegram.default-alert Default alert configuration. See Setting a default alert. N/A

alerting:
  telegram: 
    token: "123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11"
    id: "0123456789"

endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s
    conditions:
      - "[STATUS] == 200"
      - "[BODY].status == UP"
    alerts:
      - type: telegram
        enabled: true
        send-on-resolved: true

Here's an example of what the notifications look like:

Telegram notifications

Configuring Twilio alerts

Parameter Description Default
alerting.twilio Settings for alerts of type twilio {}
alerting.twilio.sid Twilio account SID Required ""
alerting.twilio.token Twilio auth token Required ""
alerting.twilio.from Number to send Twilio alerts from Required ""
alerting.twilio.to Number to send twilio alerts to Required ""
alerting.twilio.default-alert Default alert configuration. See Setting a default alert. N/A

alerting:
  twilio:
    sid: "..."
    token: "..."
    from: "+1-234-567-8901"
    to: "+1-234-567-8901"

endpoints:
  - name: website
    interval: 30s
    url: "https://twin.sh/health"
    conditions:
      - "[STATUS] == 200"
      - "[BODY].status == UP"
      - "[RESPONSE_TIME] < 300"
    alerts:
      - type: twilio
        enabled: true
        failure-threshold: 5
        send-on-resolved: true
        description: "healthcheck failed"

Configuring custom alerts

Parameter Description Default
alerting.custom Configuration for custom actions on failure or alerts {}
alerting.custom.url Custom alerting request url Required ""
alerting.custom.method Request method GET
alerting.custom.body Custom alerting request body. ""
alerting.custom.headers Custom alerting request headers {}
alerting.custom.client Client configuration. See Client configuration. {}
alerting.custom.default-alert Default alert configuration. See Setting a default alert. N/A

While they're called alerts, you can use this feature to call anything.

For instance, you could automate rollbacks by having an application that keeps track of new deployments, and by leveraging Gatus, you could have Gatus call that application's endpoint when an endpoint starts failing. Your application would then check whether the endpoint that started failing was part of the recently deployed application, and if it was, automatically roll it back.

The placeholders [ALERT_DESCRIPTION] and [ENDPOINT_NAME] are automatically replaced with the alert description and the endpoint name, respectively. These placeholders can be used in the body (alerting.custom.body) and in the url (alerting.custom.url).

If you have an alert using the custom provider with send-on-resolved set to true, you can use the [ALERT_TRIGGERED_OR_RESOLVED] placeholder to differentiate the notifications. The aforementioned placeholder will be replaced by TRIGGERED or RESOLVED accordingly, though it can be modified (details at the end of this section).

For all intents and purposes, we'll configure the custom alert with a Slack webhook, but you can call anything you want.

alerting:
  custom:
    url: "https://hooks.slack.com/services/**********/**********/**********"
    method: "POST"
    body: |
      {
        "text": "[ALERT_TRIGGERED_OR_RESOLVED]: [ENDPOINT_NAME] - [ALERT_DESCRIPTION]"
      }
endpoints:
  - name: website
    url: "https://twin.sh/health"
    interval: 30s
    conditions:
      - "[STATUS] == 200"
      - "[BODY].status == UP"
      - "[RESPONSE_TIME] < 300"
    alerts:
      - type: custom
        enabled: true
        failure-threshold: 10
        success-threshold: 3
        send-on-resolved: true
        description: "health check failed"

Note that you can customize the resolved values for the [ALERT_TRIGGERED_OR_RESOLVED] placeholder like so:

alerting:
  custom:
    placeholders:
      ALERT_TRIGGERED_OR_RESOLVED:
        TRIGGERED: "partial_outage"
        RESOLVED: "operational"

As a result, the [ALERT_TRIGGERED_OR_RESOLVED] placeholder in the body of the first example of this section would be replaced by partial_outage when an alert is triggered and by operational when an alert is resolved.

Setting a default alert

Parameter Description Default
alerting.*.default-alert.enabled Whether to enable the alert N/A
alerting.*.default-alert.failure-threshold Number of failures in a row needed before triggering the alert N/A
alerting.*.default-alert.success-threshold Number of successes in a row before an ongoing incident is marked as resolved N/A
alerting.*.default-alert.send-on-resolved Whether to send a notification once a triggered alert is marked as resolved N/A
alerting.*.default-alert.description Description of the alert. Will be included in the alert sent N/A

While you can specify the alert configuration directly in the endpoint definition, it's tedious and may lead to a very long configuration file.

To avoid this problem, you can use the default-alert parameter present in each provider configuration:

alerting:
  slack: 
    webhook-url: "https://hooks.slack.com/services/**********/**********/**********"
    default-alert:
      enabled: true
      description: "health check failed"
      send-on-resolved: true
      failure-threshold: 5
      success-threshold: 5

As a result, your Gatus configuration looks a lot tidier:

endpoints:
  - name: example
    url: "https://example.org"
    conditions:
      - "[STATUS] == 200"
    alerts:
      - type: slack

  - name: other-example
    url: "https://example.com"
    conditions:
      - "[STATUS] == 200"
    alerts:
      - type: slack

It also allows you to do things like this:

endpoints:
  - name: example
    url: "https://example.org"
    conditions:
      - "[STATUS] == 200"
    alerts:
      - type: slack
        failure-threshold: 5
      - type: slack
        failure-threshold: 10
      - type: slack
        failure-threshold: 15

Of course, you can also mix alert types:

alerting:
  slack:
    webhook-url: "https://hooks.slack.com/services/**********/**********/**********"
    default-alert:
      enabled: true
      failure-threshold: 3
  pagerduty:
    integration-key: "********************************"
    default-alert:
      enabled: true
      failure-threshold: 5

endpoints:
  - name: endpoint-1
    url: "https://example.org"
    conditions:
      - "[STATUS] == 200"
    alerts:
      - type: slack
      - type: pagerduty

  - name: endpoint-2
    url: "https://example.org"
    conditions:
      - "[STATUS] == 200"
    alerts:
      - type: slack
      - type: pagerduty

Maintenance

If you have maintenance windows, you may not want to be bothered by alerts during those periods. To suppress them, use the maintenance configuration:

Parameter Description Default
maintenance.enabled Whether the maintenance period is enabled true
maintenance.start Time at which the maintenance window starts in hh:mm format (e.g. 23:00) Required ""
maintenance.duration Duration of the maintenance window (e.g. 1h, 30m) Required ""
maintenance.every Days on which the maintenance period applies (e.g. [Monday, Thursday]). If left empty, the maintenance window applies every day. []

Note that the maintenance configuration uses UTC.

Here's an example:

maintenance:
  start: 23:00
  duration: 1h
  every: [Monday, Thursday]

Note that you can also specify each day on separate lines:

maintenance:
  start: 23:00
  duration: 1h
  every:
    - Monday
    - Thursday

Deployment

Many examples can be found in the .examples folder, but this section will focus on the most popular ways of deploying Gatus.

Docker

To run Gatus locally with Docker:

docker run -p 8080:8080 --name gatus twinproduction/gatus

Other than using one of the examples provided in the .examples folder, you can also try it out locally by creating a configuration file (we'll call it config.yaml for this example) and running the following command:

docker run -p 8080:8080 --mount type=bind,source="$(pwd)"/config.yaml,target=/config/config.yaml --name gatus twinproduction/gatus

If you're on Windows, replace "$(pwd)" with the absolute path to your current directory, e.g.:

docker run -p 8080:8080 --mount type=bind,source=C:/Users/Chris/Desktop/config.yaml,target=/config/config.yaml --name gatus twinproduction/gatus

To build the image locally:

docker build . -t twinproduction/gatus

Helm Chart

Helm must be installed to use the chart. Please refer to Helm's documentation to get started.

Once Helm is set up properly, add the repository as follows:

helm repo add gatus https://avakarev.github.io/gatus-chart

For more details, please check the chart's configuration and the helmfile example.

Terraform

Gatus can be deployed with Terraform using the following module: terraform-kubernetes-gatus.

Running the tests

go test ./... -mod vendor

Using in Production

See the Deployment section.

FAQ

Sending a GraphQL request

By setting endpoints[].graphql to true, the body will automatically be wrapped by the standard GraphQL query parameter.

For instance, the following configuration:

endpoints:
  - name: filter-users-by-gender
    url: http://localhost:8080/playground
    method: POST
    graphql: true
    body: |
      {
        users(gender: "female") {
          id
          name
          gender
          avatar
        }
      }
    conditions:
      - "[STATUS] == 200"
      - "[BODY].data.users[0].gender == female"

will send a POST request to http://localhost:8080/playground with the following body:

{"query":"      {\n        users(gender: \"female\") {\n          id\n          name\n          gender\n          avatar\n        }\n      }"}

Recommended interval

NOTE: This does not apply if disable-monitoring-lock is set to true, as the monitoring lock is what tells Gatus to only evaluate one endpoint at a time.

To ensure that Gatus provides reliable and accurate results (i.e. response time), it only evaluates one endpoint at a time. In other words, even if you have multiple endpoints with the exact same interval, they will not execute at the same time.

You can test this yourself by running Gatus with several endpoints configured with a very short, unrealistic interval, such as 1ms. You'll notice that the response time does not fluctuate - that is because while endpoints are evaluated on different goroutines, there's a global lock that prevents multiple endpoints from running at the same time.

Unfortunately, there is a drawback. If you have a lot of endpoints, including some that are very slow or prone to timing out (the default timeout is 10s), then it means that for the entire duration of the request, no other endpoint can be evaluated.

The interval does not include the duration of the request itself, which means that if an endpoint has an interval of 30s and the request takes 2s to complete, the time between two evaluations will be 32s, not 30s.

While this does not prevent Gatus from performing health checks on all other endpoints, it may cause Gatus to be unable to respect the configured interval, for instance:

  • Endpoint A has an interval of 5s and times out after 10s
  • Endpoint B has an interval of 5s, and takes 1ms to complete
  • Endpoint B will be unable to run every 5s, because endpoint A's health evaluation takes longer than its interval

To sum it up, while Gatus can handle any interval you throw at it, you're better off giving slow endpoints a higher interval.

As a rule of thumb, I personally set the interval for more complex health checks to 5m (5 minutes) and for simple health checks used for alerting (PagerDuty/Twilio) to 30s.
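
Following that rule of thumb, a configuration might look like the sketch below (the URLs and the alert type are illustrative assumptions):

endpoints:
  - name: complex-health-check         # heavier check, evaluated every 5 minutes
    url: "https://example.org/api/v1/deep-health"
    interval: 5m
    conditions:
      - "[STATUS] == 200"
      - "[BODY].status == UP"

  - name: simple-health-check          # lightweight check used for alerting, every 30 seconds
    url: "https://example.org/health"
    interval: 30s
    conditions:
      - "[STATUS] == 200"
    alerts:
      - type: pagerduty
        enabled: true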

Default timeouts

Endpoint type Timeout
HTTP 10s
TCP 10s
ICMP 10s

To modify the timeout, see Client configuration.

Monitoring a TCP endpoint

By prefixing endpoints[].url with tcp://, you can monitor TCP endpoints at a very basic level:

endpoints:
  - name: redis
    url: "tcp://127.0.0.1:6379"
    interval: 30s
    conditions:
      - "[CONNECTED] == true"

Placeholders [STATUS] and [BODY] as well as the fields endpoints[].body, endpoints[].headers, endpoints[].method and endpoints[].graphql are not supported for TCP endpoints.

NOTE: [CONNECTED] == true does not guarantee that the endpoint itself is healthy - it only guarantees that there's something at the given address listening to the given port, and that a connection to that address was successfully established.

Monitoring an endpoint using ICMP

By prefixing endpoints[].url with icmp://, you can monitor endpoints at a very basic level using ICMP, more commonly known as "ping" or "echo":

endpoints:
  - name: ping-example
    url: "icmp://example.com"
    conditions:
      - "[CONNECTED] == true"

Only the placeholders [CONNECTED], [IP] and [RESPONSE_TIME] are supported for endpoints of type ICMP. You can specify either a domain or an IP address, prefixed by icmp://.
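
For example, monitoring an IP address directly looks exactly the same, with the IP in place of the domain (the address below is illustrative):

endpoints:
  - name: ping-ip-example
    url: "icmp://1.1.1.1"
    conditions:
      - "[CONNECTED] == true"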

Monitoring an endpoint using DNS queries

Defining a dns configuration in an endpoint will automatically mark said endpoint as an endpoint of type DNS:

endpoints:
  - name: example-dns-query
    url: "8.8.8.8" # Address of the DNS server to use
    interval: 30s
    dns:
      query-name: "example.com"
      query-type: "A"
    conditions:
      - "[BODY] == 93.184.216.34"
      - "[DNS_RCODE] == NOERROR"

There are two placeholders that can be used in the conditions for endpoints of type DNS:

  • The placeholder [BODY] resolves to the output of the query. For instance, a query of type A would return an IPv4.
  • The placeholder [DNS_RCODE] resolves to the name associated to the response code returned by the query, such as NOERROR, FORMERR, SERVFAIL, NXDOMAIN, etc.

Monitoring an endpoint using STARTTLS

If you want to ensure there are no problems with your email server, monitoring it through STARTTLS will serve as a good initial indicator:

endpoints:
  - name: starttls-smtp-example
    url: "starttls://smtp.gmail.com:587"
    interval: 30m
    client:
      timeout: 5s
    conditions:
      - "[CONNECTED] == true"
      - "[CERTIFICATE_EXPIRATION] > 48h"

Monitoring an endpoint using TLS

Monitoring endpoints using SSL/TLS encryption, such as LDAP over TLS, can help detect certificate expiration:

endpoints:
  - name: tls-ldaps-example
    url: "tls://ldap.example.com:636"
    interval: 30m
    client:
      timeout: 5s
    conditions:
      - "[CONNECTED] == true"
      - "[CERTIFICATE_EXPIRATION] > 48h"

Basic authentication

You can require Basic authentication by leveraging the security.basic configuration:

security:
  basic:
    username: "john.doe"
    password-sha512: "6b97ed68d14eb3f1aa959ce5d49c7dc612e1eb1dafd73b1e705847483fd6a6c809f2ceb4e8df6ff9984c6298ff0285cace6614bf8daa9f0070101b6c89899e22"

The example above will require that you authenticate with the username john.doe as well as the password hunter2.

disable-monitoring-lock

Setting disable-monitoring-lock to true means that multiple endpoints could be monitored at the same time.

While this behavior wouldn't generally be harmful, conditions using the [RESPONSE_TIME] placeholder could be impacted by the evaluation of multiple endpoints at the same time, therefore, the default value for this parameter is false.

There are three main reasons why you might want to disable the monitoring lock:

  • You're using Gatus for load testing (each endpoint is periodically evaluated on a different goroutine, so technically, if you create 100 endpoints with a 1-second interval, Gatus will send 100 requests per second)
  • You have a lot of endpoints to monitor
  • You want to test multiple endpoints at a very short interval (< 5s)
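
If any of these apply to you, disabling the lock is a single top-level parameter; a minimal sketch (the endpoint and interval are illustrative):

disable-monitoring-lock: true

endpoints:
  - name: load-test-target
    url: "https://example.org/"
    interval: 1s
    conditions:
      - "[STATUS] == 200"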

Reloading configuration on the fly

For the sake of convenience, Gatus automatically reloads the configuration on the fly if the loaded configuration file is updated while Gatus is running.

By default, the application will exit if the updated configuration is invalid, but you can configure Gatus to keep running despite an invalid configuration update by setting skip-invalid-config-update to true.
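
Assuming that's the behavior you want, it's a single top-level parameter:

skip-invalid-config-update: true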

Keep in mind that it is in your best interest to verify the validity of the configuration file after each update you apply while Gatus is running, by looking at the log and making sure that you do not see the following message:

The configuration file was updated, but it is not valid. The old configuration will continue being used.

Failure to do so may result in Gatus being unable to start if the application is restarted for whatever reason.

I recommend not setting skip-invalid-config-update to true to avoid a situation like this, but the choice is yours to make.

If you are not using a file storage, updating the configuration while Gatus is running is effectively the same as restarting the application.

NOTE: Updates may not be detected if the config file is bound instead of the config folder. See #151.

Endpoint groups

Endpoint groups are used for grouping multiple endpoints together on the dashboard.

endpoints:
  - name: frontend
    group: core
    url: "https://example.org/"
    interval: 5m
    conditions:
      - "[STATUS] == 200"

  - name: backend
    group: core
    url: "https://example.org/"
    interval: 5m
    conditions:
      - "[STATUS] == 200"

  - name: monitoring
    group: internal
    url: "https://example.org/"
    interval: 5m
    conditions:
      - "[STATUS] == 200"

  - name: nas
    group: internal
    url: "https://example.org/"
    interval: 5m
    conditions:
      - "[STATUS] == 200"

  - name: random endpoint that isn't part of a group
    url: "https://example.org/"
    interval: 5m
    conditions:
      - "[STATUS] == 200"

The configuration above will result in a dashboard that looks like this:

Gatus Endpoint Groups

Exposing Gatus on a custom port

By default, Gatus is exposed on port 8080, but you may specify a different port by setting the web.port parameter:

web:
  port: 8081

If you're using a PaaS like Heroku that doesn't let you set a custom port and exposes it through an environment variable instead, you can use that environment variable directly in the configuration file:

web:
  port: ${PORT}

Badges

Uptime

Uptime 1h Uptime 24h Uptime 7d

Gatus can automatically generate an SVG badge for one of your monitored endpoints. This allows you to put badges in your individual applications' README or even create your own status page, if you desire.

The path to generate a badge is the following:

/api/v1/endpoints/{key}/uptimes/{duration}/badge.svg

Where:

  • {duration} is 7d, 24h or 1h
  • {key} has the pattern <GROUP_NAME>_<ENDPOINT_NAME> in which both variables have spaces, /, _, , (commas) and . (periods) replaced by -.

For instance, if you want the uptime during the last 7 days for the endpoint frontend in the group core, the URL would look like this:

https://example.com/api/v1/endpoints/core_frontend/uptimes/7d/badge.svg

If you want to display an endpoint that is not part of a group, you must leave the group value empty:

https://example.com/api/v1/endpoints/_frontend/uptimes/7d/badge.svg

Example:

![Uptime 24h](https://status.twin.sh/api/v1/endpoints/core_blog-external/uptimes/24h/badge.svg)

If you'd like to see a visual example of each badge available, you can simply navigate to the endpoint's detail page.

Response time

Response time 1h Response time 24h Response time 7d

The endpoint to generate a badge is the following:

/api/v1/endpoints/{key}/response-times/{duration}/badge.svg

Where:

  • {duration} is 7d, 24h or 1h
  • {key} has the pattern <GROUP_NAME>_<ENDPOINT_NAME> in which both variables have spaces, /, _, , (commas) and . (periods) replaced by -.
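
For instance, the response time badge for the last 24 hours for the endpoint frontend in the group core would be:

https://example.com/api/v1/endpoints/core_frontend/response-times/24h/badge.svg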

API

Gatus provides a simple read-only API which can be queried in order to programmatically determine endpoint status and history.

All endpoints are available via a GET request to the following endpoint:

/api/v1/endpoints/statuses

Example: https://status.twin.sh/api/v1/endpoints/statuses

Specific endpoints can also be queried by using the following pattern:

/api/v1/endpoints/{group}_{endpoint}/statuses

Example: https://status.twin.sh/api/v1/endpoints/core_blog-home/statuses

Gzip compression will be used if the Accept-Encoding HTTP header contains gzip.

The API will return a JSON payload with the Content-Type response header set to application/json. No such header is required to query the API.

High level design overview

Gatus diagram

Sponsors

You can find the full list of sponsors here.

Comments
  • Add html content check condition

    Add html content check condition

     As mentioned in my previous issue #61, I'm currently using healthchecks, and the last feature I would need is some generic content regex / string presence check. Now, I might be missing something everybody already knows, since you have this BODY placeholder, and maybe that's just it.

     Anyway, it seems that the process would basically mean Gatus fetches the site content and runs a regex or literal string match against it, and when it finds a match, the test passes.

     What do you think?

     In terms of performance, I think this is actually the heaviest thing healthchecks does, but even now I'm running all the tests in parallel, and Gatus outperforms healthchecks by magnitudes. So nice.

  • GoogleChat alerts are blank under certain conditions for ICMP endpoints

    GoogleChat alerts are blank under certain conditions for ICMP endpoints

    Describe the bug

    When using ICMP and [CONNECTED] == true, whenever an endpoint fails, the Google Chat/Spaces App only sends a blank message with no data.

    What do you see?

    Nothing is displayed, it sends an empty message.

    What do you expect to see?

    Full alert, like with HTTPS, which works just fine. This is ICMP only.

    List the steps that must be taken to reproduce this issue

    Setup googlechat alert, have an ICMP endpoint fail.

    Version

    latest branch

    Additional information

    No response

  • Redirection checks are not working

    Redirection checks are not working

    First of all, thanks for such a simple tool and for making it open source :) I tried to see if the following issue was posted already, but I couldn't find anything.

    All my 301 and 302 checks are failing, and it seems to be because when making the HTTP requests, it's following redirects by default.

    Here's an example. https://inbox.google.com does a 301 to https://mail.google.com/, but the status code in the image below is 200.

    Is this by design?

    image

  • Scalability limits?

    Scalability limits?

    I'm monitoring around 50-55 services with gatus, most are HTTP and 29 of them are using the pat keyword (with wildcards, so about as expensive a query as it can get). All using the default poll interval of 60s.

    I am starting to see some responses of context deadline exceeded (Client.Timeout exceeded while awaiting headers) in the body check. I have manually checked the services in question and they're healthy. Restarting gatus, or just waiting a few minutes seems to resolve this. This does not occur continuously, but have seen it twice in the space of a few hours.

    I can only assume that this is due to a concurrency issue, as it's more than possible that the combination of service times takes longer than 60s to respond. I do not know enough of the gatus architecture to know if this is a problem or not.

    I am running v2.3.0 with #100 changed locally (as I've not updated since I tested it). I will repeat the test with 2.4.0 and report the results here.

  • Support grouping services

    Support grouping services

    Supporting service groups could allow a cuter front end experience.

    i.e.

    services:
      - name: k8s-cluster-watch-dog
        url: http://k8s-cluster-watch-dog-v1.tools-${ENVIRONMENT}:8080/health
        group: core         <-------
        interval: 1m
        conditions:
          - "[STATUS] == 200"
          - "[BODY].status == UP"
      - name: prometheus
        url: http://prometheus-operator-prometheus.kube-system:9090/-/healthy
        group: core         <-------
        interval: 1m
        conditions:
          - "[STATUS] == 200"
          - "[BODY] == Prometheus is Healthy."
    

    would generate a dashboard that puts both k8s-cluster-watch-dog and prometheus under the "core" folder.

    image

    Could also support tags, and allow filtering by tags instead

    i.e.

    services:
      - name: k8s-cluster-watch-dog
        url: http://k8s-cluster-watch-dog-v1.tools-${ENVIRONMENT}:8080/health
        tags:              <-------
          - core
        interval: 1m
        conditions:
          - "[STATUS] == 200"
          - "[BODY].status == UP"
      - name: prometheus
        url: http://prometheus-operator-prometheus.kube-system:9090/-/healthy
        tags:              <-------
          - core
          - metrics
        interval: 1m
        conditions:
          - "[STATUS] == 200"
          - "[BODY] == Prometheus is Healthy."
    

    image

    Forgive the terrible drafts, just thought of this on the fly.

  • ICMP does not work on Mac OS

    ICMP does not work on Mac OS

     macOS Big Sur version 11.0.1

     Everything works perfectly OK except ICMP on macOS.

    Config:

     - name: node-1
       url: "icmp://104.237.x.x"
       group: Uptime (Ping)
       conditions:
         - "[CONNECTED] == true"
         - "[RESPONSE_TIME] > 0"

    image

     The x.x in the IP is redacted intentionally.

    Is this wrong config?

  • Support wecom alerting provider

    Support wecom alerting provider

     Hello, I'm Chinese and my team uses WeChat Work (Wecom) to watch the status of our servers.

     I'm not a developer; I tried to write Wecom alerting code and it works fine on my machine. But because of my Go language level, there may be many exceptions that have not been handled. If possible, I hope it can be merged into the official code. Thanks.

     To push to Wecom, we need 3 parameters: Aid, Cid and Secret.

    package wecom
    
    import (
    	"encoding/json"
    	"fmt"
    	"io/ioutil"
    	"net/http"
    
    	"github.com/TwinProduction/gatus/alerting/alert"
    	"github.com/TwinProduction/gatus/alerting/provider/custom"
    	"github.com/TwinProduction/gatus/core"
    )
    
    // AlertProvider is the configuration necessary for sending an alert using Wecom
    type AlertProvider struct {
    	Aid    string `yaml:"aid"`
    	Cid    string `yaml:"cid"`
    	Secret string `yaml:"secret"`
    	ToUser string `yaml:"touser"` // Not necessary
    	ToParty string `yaml:"toparty"` // Not necessary
    
    	// DefaultAlert is the default alert configuration to use for services with an alert of the appropriate type
    	DefaultAlert *alert.Alert `yaml:"default-alert"`
    }
    

     Then, we need to use Cid and Secret to get the accessToken.

    	var wetoken string
    	wetoken = fmt.Sprintf("%s%s%s%s", "https://qyapi.weixin.qq.com/cgi-bin/gettoken?corpid=", provider.Cid, "&corpsecret=", provider.Secret)
    	client := &http.Client{}
    	req, err := http.NewRequest("GET", wetoken, nil)
    	resp, err := client.Do(req)
    	if err != nil {
    		fmt.Println("Failure : ", err)
    	}
    	respBody, _ := ioutil.ReadAll(resp.Body)
    	r := make(map[string]interface{})
    	json.Unmarshal([]byte(respBody), &r)
    	accessToken := r["access_token"]
    

    Last, use accessToken to push.

    	return &custom.AlertProvider{
    		URL:    fmt.Sprintf("https://qyapi.weixin.qq.com/cgi-bin/message/send?access_token=%s", accessToken),
    		Method: http.MethodPost,
    		Body: fmt.Sprintf( TODO:BODY ),
    		Headers: map[string]string{"Content-Type": "application/json"},
    	}
    

    If ToUser and ToParty are empty, the BODY will be:

    {
    	"agentid": %s,
    	"msgtype": "text",
    	"text": {
    		"content": "%s"
    	},
    	"duplicate_check_interval": 180
    }
    

    If one of the two is empty, the BODY will be:

    {
    	"toparty": %s,
    	"agentid": %s,
    	"msgtype": "text",
    	"text": {
    		"content": "%s"
    	},
    	"duplicate_check_interval": 180
    }
    

    or

    {
    	"touser": %s,
    	"agentid": %s,
    	"msgtype": "text",
    	"text": {
    		"content": "%s"
    	},
    	"duplicate_check_interval": 180
    }
    

    If both parameters are not empty, the BODY will be:

    {
    	"touser": %s,
    	"toparty": %s,
    	"agentid": %s,
    	"msgtype": "text",
    	"text": {
    		"content": "%s"
    	},
    	"duplicate_check_interval": 180
    }
    
  • Implement Postgres as a storage solution

    Implement Postgres as a storage solution

    I stumbled on this neat status page solution the other day and am trying it out, cool stuff!

     I see Store is an interface, and that there is one implementation (an in-memory cache with a file used to persist over restarts). However, I think it's missing a way to persist the observed results that is more usable in a stateless container, where you don't want to write to a file system.

    Is there perhaps already some extension out there that provides a Store backed by e.g. Redis or Postgres? I would not mind contributing, but if the effort had already been made then that would be unnecessary.

    The point would not be to enable some high availability configuration (I think that would add quite a bit more synchronization work), but simply being able to restart or reconfigure gatus without losing the previous results would be nice.

    Also, the auto save interval could be made configurable.

  • feat(metrics): Add more metrics (e.g. duration)

    feat(metrics): Add more metrics (e.g. duration)

    Describe the feature request

     In the project https://github.com/prometheus/blackbox_exporter, many metrics like probe_success, probe_duration, etc. are provided. The DNS probe https://github.com/prometheus/blackbox_exporter/blob/master/prober/dns.go#L129 also has metrics for DNS.

    Why do you personally want this feature to be implemented?

     Maybe we can add these metrics in gatus, and users can scrape them into Prometheus and build dashboards in Grafana.

    How long have you been using this project?

     3 months

    Additional information

     If the team agrees to add this feature, I can submit a PR to contribute to gatus.

  • Change license to Apache 2

    Change license to Apache 2

     Call me indecisive if you want, since I've done this once before (see 70c9c4b87c8e595f4374b895853312cefc2152f2), but after thinking about the pros and cons, I decided that Apache 2 offers better protection for Gatus.

    Long story short:

    • v0.0.1 to v1.3.0: Apache 2
    • v1.3.1 to v3.3.5: MIT
    • v3.3.6 and up: Apache 2

    This PR is here for traceability.

  • Bind mounted config.yaml does not automatically reload

    Bind mounted config.yaml does not automatically reload

    Step to reproduce:

    $ cat config/config.yaml
    services:
      - name: example
        url: "https://example.org/"
        interval: 30s
        conditions:
          - "[STATUS] == 200"
    
    $ docker run \
        --rm \
        -p 8080:8080 \
        -v "$(pwd)"/config/config.yaml:/config/config.yaml \
        twinproduction/gatus
    

    Change the name field in the config/config.yaml.

    Observation:

    In the HasLoadedConfigurationFileBeenModified function, the result of the comparison of config.lastFileModTime.Unix() != fileInfo.ModTime().Unix() is always false.

  • feat: API for adding or removing configuration

    feat: API for adding or removing configuration

    Describe the feature request

    I'd like Gatus to have an API that allows managing its configuration so I can create a Kubernetes controller (or maybe extend https://github.com/stakater/IngressMonitorController) to watch Ingress resources and automatically create checks in Gatus for them.

    An alternative could be to leverage something like https://github.com/TwiN/gatus/issues/326 and have a sidecar like the https://github.com/kiwigrid/k8s-sidecar to pass the configuration to Gatus but that would have more moving parts.

    Why do you personally want this feature to be implemented?

     To be able to programmatically set up checks for services running in a Kubernetes cluster.

    How long have you been using this project?

    No response

    Additional information

    No response

  • Allow configuration to be distributed

    Allow configuration to be distributed

    Summary

     This PR allows the configuration to be split across multiple files using the common config.d/-path style. The approach reads all *.yml and *.yaml files and merges them in memory into a single file before parsing. This seemed easier and required fewer changes than parsing each file individually and merging the objects afterwards.

    Fixes #326

    As I did not work with golang before, feel free to remark on obvious errors and bad patterns.

     I intended to write tests, but have not yet figured out how invasive tests are meant to be structured for this project. I hope the chosen approach is in your interest.

    Checklist

    • [x] Tested and/or added tests to validate that the changes work as intended, if applicable.
    • [x] Added the documentation in README.md, if applicable.
  • bug: Investigate flaky `TestStore_InsertCleansUpOldUptimeEntriesProperly` test

    bug: Investigate flaky `TestStore_InsertCleansUpOldUptimeEntriesProperly` test

    Describe the bug

    Sometimes, the TestStore_InsertCleansUpOldUptimeEntriesProperly test will fail

    What do you see?

    https://github.com/TwiN/gatus/actions/runs/3723336201/attempts/1

     --- FAIL: TestStore_InsertCleansUpOldUptimeEntriesProperly (0.04s)
        sql_test.go:109: oldest endpoint uptime entry should've been ~5 hours old, was 4h59m40.064240509s
        sql_test.go:119: oldest endpoint uptime entry should've been ~5 hours old, was 4h59m40.068831372s
        sql_test.go:129: oldest endpoint uptime entry should've been ~8 hours old, was 7h59m40.073425836s
        sql_test.go:139: oldest endpoint uptime entry should've been ~239h0m0s hours old, was 238h59m40.0780609s
        sql_test.go:150: oldest endpoint uptime entry should've been ~8 hours old, was 7h59m40.08307097s
    

    What do you expect to see?

    Test should pass

    List the steps that must be taken to reproduce this issue

    1. Execute TestStore_InsertCleansUpOldUptimeEntriesProperly between hh:59:30 and hh:59:59
    2. Note how the test will fail

    Version

    latest (master)

    Additional information

    I suspect this has something to do with https://github.com/TwiN/gatus/blob/f6a621da285dd43b3207ec21e4101b4b11eff6c5/storage/store/sql/sql_test.go#L100

  • feat: repeating notifications?

    feat: repeating notifications?

    Describe the feature request

    Ability to send repeated notifications every x min/hr if the endpoint is still offline/dead.

    Why do you personally want this feature to be implemented?

     We now have over 300 nodes monitored via Gatus and we receive the notifications via Slack and Telegram. One scenario is when some nodes go offline during the weekend; we always forget to fix them on the next workday.

     I've tried some other services and it seems like it's a common feature to alert repeatedly if something still goes wrong.

    How long have you been using this project?

    No response

    Additional information

    No response

  • Unclear error message `invalid character '<' looking for beginning of value`

    Unclear error message `invalid character '<' looking for beginning of value`

    Describe the bug

    A JSON API endpoint I'm monitoring briefly fails from time to time with the error message

    invalid character '<' looking for beginning of value
    

    The failing condition is len([BODY].25447.bill) >= 1.

    I don't really understand what the error message is supposed to mean. No JSON payload present in the [BODY]?

    What do you see?

    An opaque error message.

    What do you expect to see?

    A clear/understandable error message.

    List the steps that must be taken to reproduce this issue

    Set up an endpoint with

    - name: "TEST"
      enabled: true
      url: "https://api.votelog.ch/api/v1/bill/380b49bf-8044-4f65-a2e5-ac3a00e3e083/votes?lang=de"
      interval: 60s
      conditions:
        - "[STATUS] == 200"
        - "len([BODY].25447.bill) >= 1"
    

    Version

    4.3.2

    Additional information

    No response

  • Allow to set separate logo for dark mode

    Allow to set separate logo for dark mode

    Describe the feature request

     When using a black-and-white or dark logo with a transparent background[*], it becomes barely visible when dark mode is toggled on. The same is true for light mode with a white/light logo with a transparent background. Thus it would be nice if we could configure two separate logo files/URLs, one for the (default) light mode and one for the dark mode.

    Config could become ui.logo.light and ui.logo.dark, maybe with a fallback to the current behaviour when ui.logo is directly set to a string instead of the two subkeys.

    Why do you personally want this feature to be implemented?

    It would improve Gatus' web frontend aesthetically. 💅

    How long have you been using this project?

    3 days (awesome project! ❤️)

    Additional information

    [*] Example Gatus instance with such a logo is found here: https://status.votelog.ch/
