Scalable datastore for metrics, events, and real-time analytics

InfluxDB is an open source time series platform. It includes APIs for storing and querying data, background processing for ETL, monitoring, and alerting, user dashboards, and tools for visualizing and exploring the data. The master branch of this repo represents the latest InfluxDB, which now includes the functionality of Kapacitor (background processing) and Chronograf (the UI) in a single binary.

The list of InfluxDB Client Libraries that are compatible with the latest version can be found in our documentation.

If you are looking for the 1.x line of releases, there are branches for each minor version as well as a master-1.x branch that will contain the code for the next 1.x release. The master-1.x working branch is here. The InfluxDB 1.x Go Client can be found here.

Installing

We have nightly and versioned Docker images, Debian packages, RPM packages, and tarballs of InfluxDB available at the InfluxData downloads page. We also provide the influx command line interface (CLI) client as a separate binary available at the same location.

If you are interested in building from source, see the building from source guide for contributors.

Getting Started

For a complete getting started guide, please see our full online documentation site.

To write and query data or use the API in any way, you'll first need to create a user, credentials, an organization, and a bucket. Everything in InfluxDB is organized under the concept of an organization; the API is designed to be multi-tenant. Buckets are where you store time series data. They are the equivalent of what was a database plus retention policy in InfluxDB 1.x.

The simplest way to get set up is to point your browser to http://localhost:8086 and go through the prompts.

You can also get set up from the CLI using the command influx setup:

$ bin/$(uname -s | tr '[:upper:]' '[:lower:]')/influx setup
Welcome to InfluxDB 2.0!
Please type your primary username: marty

Please type your password: 

Please type your password again: 

Please type your primary organization name.: InfluxData

Please type your primary bucket name.: telegraf

Please type your retention period in hours.
Or press ENTER for infinite.: 72


You have entered:
  Username:          marty
  Organization:      InfluxData
  Bucket:            telegraf
  Retention Period:  72 hrs
Confirm? (y/n): y

UserID                  Username        Organization    Bucket
033a3f2c5ccaa000        marty           InfluxData      telegraf
Your token has been stored in /Users/marty/.influxdbv2/credentials

If you are automating the setup, you can run this command non-interactively with the -f, --force flag, passing the required values as flags:

$ bin/$(uname -s | tr '[:upper:]' '[:lower:]')/influx setup \
--username marty \
--password F1uxKapacit0r85 \
--org InfluxData \
--bucket telegraf \
--retention 168 \
--token where-were-going-we-dont-need-roads \
--force

Once setup is complete, a configuration profile is created to allow you to interact with your local InfluxDB without passing in credentials each time. You can list and manage those profiles using the influx config command.

$ bin/$(uname -s | tr '[:upper:]' '[:lower:]')/influx config
Active  Name     URL                     Org
*       default  http://localhost:8086   InfluxData

Writing Data

Write to measurement m, with field v=2, in bucket telegraf, which belongs to organization InfluxData:

$ bin/$(uname -s | tr '[:upper:]' '[:lower:]')/influx write --bucket telegraf --precision s "m v=2 $(date +%s)"

Since you have a default profile set up, you can omit the Organization and Token from the command.

Write the same point using curl:

curl --header "Authorization: Token $(bin/$(uname -s | tr '[:upper:]' '[:lower:]')/influx auth list --json | jq -r '.[0].token')" \
--data-raw "m v=2 $(date +%s)" \
"http://localhost:8086/api/v2/write?org=InfluxData&bucket=telegraf&precision=s"

Read that back with a simple Flux query:

$ bin/$(uname -s | tr '[:upper:]' '[:lower:]')/influx query 'from(bucket:"telegraf") |> range(start:-1h)'
Result: _result
Table: keys: [_start, _stop, _field, _measurement]
                   _start:time                      _stop:time           _field:string     _measurement:string                      _time:time                  _value:float
------------------------------  ------------------------------  ----------------------  ----------------------  ------------------------------  ----------------------------
2019-12-30T22:19:39.043918000Z  2019-12-30T23:19:39.043918000Z                       v                       m  2019-12-30T23:17:02.000000000Z                             2

Use the -r, --raw option to return the raw Flux response from the query. This is useful for moving data from one instance to another, as the influx write command can accept the Flux response using the --format csv option.
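
The same query can also be run from the Go client; a minimal sketch under the same assumptions as the write example above:

package main

import (
	"context"
	"fmt"

	influxdb2 "github.com/influxdata/influxdb-client-go/v2"
)

func main() {
	client := influxdb2.NewClient("http://localhost:8086", "where-were-going-we-dont-need-roads")
	defer client.Close()

	queryAPI := client.QueryAPI("InfluxData")
	result, err := queryAPI.Query(context.Background(),
		`from(bucket:"telegraf") |> range(start:-1h)`)
	if err != nil {
		panic(err)
	}
	// Iterate over the returned table rows.
	for result.Next() {
		rec := result.Record()
		fmt.Printf("%s %s=%v @ %s\n", rec.Measurement(), rec.Field(), rec.Value(), rec.Time())
	}
	if result.Err() != nil {
		panic(result.Err())
	}
}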

Introducing Flux

Flux is an MIT-licensed data scripting language (previously named IFQL) used for querying time series data from InfluxDB. The source for Flux is available on GitHub. Learn more about Flux from CTO Paul Dix's presentation.

Contributing to the Project

InfluxDB is an MIT licensed open source project and we love our community. The fastest way to get something fixed is to open a PR. Check out our contributing guide if you're interested in helping out. Also, join us on our Community Slack Workspace if you have questions or comments for our engineering teams.

CI and Static Analysis

CI

All pull requests will run through CI, which is currently hosted by Circle. Community contributors should be able to see the outcome of this process by looking at the checks on their PR. Please fix any issues to ensure a prompt review from members of the team.

The InfluxDB project is used internally in a number of proprietary InfluxData products, and as such, PRs and changes need to be tested internally. This can take some time, and is not really visible to community contributors.

Static Analysis

This project uses the following static analysis tools. Failure during the running of any of these tools results in a failed build. Generally, code must be adjusted to satisfy these tools, though there are exceptions.

  • go vet checks for Go code that should be considered incorrect.
  • go fmt checks that Go code is correctly formatted.
  • go mod tidy ensures that the source code and go.mod agree.
  • staticcheck checks for things like: unused code, code that can be simplified, code that is incorrect and code that will have performance issues.

staticcheck

If your PR fails staticcheck it is easy to dig into why it failed, and also to fix the problem. First, take a look at the error message in Circle under the staticcheck build section, e.g.,

tsdb/tsm1/encoding.gen.go:1445:24: func BooleanValues.assertOrdered is unused (U1000)
tsdb/tsm1/encoding.go:172:7: receiver name should not be an underscore, omit the name if it is unused (ST1006)

Next, go and take a look here for some clarification on the error code that you have received, e.g., U1000. The docs will tell you what's wrong, and often what you need to do to fix the issue.
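
For illustration, code shaped roughly like the following would trigger those two diagnostics (a hedged sketch, not the actual generated code; the type name is hypothetical):

package main

// booleanValues stands in for the generated slice type.
type booleanValues []bool

// Triggers U1000 "func ... is unused": an unexported method that nothing
// in the package ever calls.
func (v booleanValues) assertOrdered() {}

// Triggers ST1006 "receiver name should not be an underscore": omit the
// receiver name entirely if it is unused.
func (_ booleanValues) size() int { return 0 }

func main() {}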

Generated Code

Sometimes generated code contains unused code, or code that fails one of the other checks. staticcheck allows entire files to be ignored, though that is not ideal: the linter directive takes the form of a comment placed within the generated file, which means it is erased whenever the file is re-generated. Until a better solution comes about, the list below shows the generated files that need an ignore comment. If you re-generate one of these files and staticcheck then fails, put the listed comment back:

File                    Comment
query/promql/promql.go  //lint:file-ignore SA6001 Ignore all unused code, it's generated
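
The directive is just a comment near the top of the generated file; putting it back looks roughly like this (a sketch, the surrounding file contents are produced by the generator):

// Code generated by the query generator. DO NOT EDIT.

//lint:file-ignore SA6001 Ignore all unused code, it's generated

package promql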

End-to-End Tests

CI also runs end-to-end tests. These test the integration between the influx server and the UI. You can run them locally in two steps:

  • Start the server in "testing mode" by running make run-e2e.
  • Run the tests with make e2e.
Comments
  • Windows Support for InfluxData Platform

    Windows Support for InfluxData Platform

    We need to gauge how important Windows Server support is to the community at large. Let us know the version, your use case, and whether scalability features like clustering matter to you. Please also specify whether Windows support is critical for other components of the TICK stack, specifically Telegraf, Chronograf, and Kapacitor. +1 on this issue to register your vote.

  • Should support moving averages aggregate function

    Should support moving averages aggregate function

    We should be able to calculate moving averages of different types: simple, weighted, exponential, etc. We need to come up with specific syntax, but for now there's more reading here: http://en.wikipedia.org/wiki/Moving_average
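
    For concreteness, the simple and exponential variants can be computed like this (a hedged sketch in Go, independent of any query-language syntax):

    package main

    import "fmt"

    // simpleMA returns the mean of the last `window` points ending at each index.
    func simpleMA(xs []float64, window int) []float64 {
        out := make([]float64, 0, len(xs))
        sum := 0.0
        for i, x := range xs {
            sum += x
            if i >= window {
                sum -= xs[i-window]
            }
            n := window
            if i+1 < window {
                n = i + 1
            }
            out = append(out, sum/float64(n))
        }
        return out
    }

    // expMA is an exponential moving average with smoothing factor alpha in (0, 1].
    func expMA(xs []float64, alpha float64) []float64 {
        out := make([]float64, 0, len(xs))
        ema := 0.0
        for i, x := range xs {
            if i == 0 {
                ema = x
            } else {
                ema = alpha*x + (1-alpha)*ema
            }
            out = append(out, ema)
        }
        return out
    }

    func main() {
        xs := []float64{1, 2, 3, 4, 5}
        fmt.Println(simpleMA(xs, 3)) // [1 1.5 2 3 4]
        fmt.Println(expMA(xs, 0.5))  // [1 1.5 2.25 3.125 4.0625]
    }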

  • [0.9.4-0.9.5-rc3] Influx service becomes unavailable, "failed to store statistics: timeout"

    [0.9.4-0.9.5-rc3] Influx service becomes unavailable, "failed to store statistics: timeout"

    After loading a few megabytes of data, the influx service becomes unavailable. Logs:

    [monitor] 2015/09/16 06:05:49 failed to store statistics: timeout
    [wal] 2015/09/16 06:05:58 Flushing 5 measurements and 41 series to index
    [wal] 2015/09/16 06:05:58 Metadata flush took 2.830629ms
    [wal] 2015/09/16 06:05:58 Flushing 36 measurements and 108 series to index
    [wal] 2015/09/16 06:05:58 Metadata flush took 2.719846ms
    [wal] 2015/09/16 06:05:58 Flushing 36 measurements and 252 series to index
    [wal] 2015/09/16 06:05:58 Metadata flush took 2.775602ms
    [wal] 2015/09/16 06:05:58 Flushing 36 measurements and 144 series to index
    [wal] 2015/09/16 06:05:58 Metadata flush took 2.134082ms
    [wal] 2015/09/16 06:05:58 Flushing 36 measurements and 144 series to index
    [wal] 2015/09/16 06:05:58 Metadata flush took 2.145459ms
    [wal] 2015/09/16 06:05:58 Flushing 36 measurements and 144 series to index
    [wal] 2015/09/16 06:05:58 Metadata flush took 2.546441ms
    [wal] 2015/09/16 06:05:58 Flushing 36 measurements and 144 series to index
    [wal] 2015/09/16 06:05:58 Metadata flush took 2.441598ms
    [wal] 2015/09/16 06:05:58 Flushing 36 measurements and 72 series to index
    [wal] 2015/09/16 06:05:58 Metadata flush took 2.315491ms
    [monitor] 2015/09/16 06:05:59 failed to store statistics: timeout
    [monitor] 2015/09/16 06:06:09 failed to store statistics: timeout
    [monitor] 2015/09/16 06:06:19 failed to store statistics: timeout
    [monitor] 2015/09/16 06:06:29 failed to store statistics: timeout
    [monitor] 2015/09/16 06:06:39 failed to store statistics: timeout
    [monitor] 2015/09/16 06:06:49 failed to store statistics: timeout
    [monitor] 2015/09/16 06:06:59 failed to store statistics: timeout
    [monitor] 2015/09/16 06:07:09 failed to store statistics: timeout
    [monitor] 2015/09/16 06:07:19 failed to store statistics: timeout
    [monitor] 2015/09/16 06:07:29 failed to store statistics: timeout
    [monitor] 2015/09/16 06:07:39 failed to store statistics: timeout
    [monitor] 2015/09/16 06:07:49 failed to store statistics: timeout
    [monitor] 2015/09/16 06:07:59 failed to store statistics: timeout
    [monitor] 2015/09/16 06:08:09 failed to store statistics: timeout
    
  • Line protocol write API

    Line protocol write API

    This PR adds a new write HTTP endpoint (/write_points) that uses a text based line protocol instead of JSON. The protocol is a list of points separated by newlines \n.

    Each point is composed of three blocks separated by whitespace. The first block is the measurement name and tags separated by commas. The second block is fields separated by commas. The last block is optional and is the timestamp for the point as a unix epoch in nanoseconds.

    measurement[,tag=value,tag2=value2...] field=value[,field2=value2...] [unixnano]
    

    Each point must have a measurement name. Tags are optional. Measurement names, tag keys, and tag values cannot contain spaces. If a value contains a comma, it needs to be escaped with \,.

    Each point must have at least one field. The format of a field is name=value. Fields can be one of four types: integer, float, boolean, or string. Integers are all numeric and cannot have a decimal point (.). Floats are all numeric and must have a decimal point. Booleans are the values true and false. Strings must be surrounded by double quotes ("). If a string value contains a quote, it must be escaped with \". There can be no spaces between consecutive fields.

    For example,

    cpu,host=serverA,region=us-west value=1.0 10000000000
    cpu,host=serverB,region=us-west value=3.3 10000000000
    cpu,host=serverB,region=us-east user=123415235,event="overloaded" 20000000000
    mem,host=serverB,region=us-east swapping=true 2000000000
    

    Points written in this format should be sent to the /write_points endpoint. The request should be a POST with the points in the body of the request. The content can also be gzip encoded.

    The following URL params may also be sent:

    • db: required The database to write points to
    • rp: optional The retention policy to write points to. If not specified, the default retention policy will be used.
    • precision: optional The precision of the timestamps (n, u, ms, s, m, h). If not specified, n is used.
    • consistency: optional The write consistency level required for the write to succeed. Can be one of one, any, all, quorum. Defaults to all.
    • u: optional The username for authentication
    • p: optional The password for authentication

    A successful response to the request will return a 204. If a parameter or point is not valid, a 400 will be returned.
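
    As a rough illustration of a client for this endpoint, building a point with the escaping rules above and POSTing it might look like this (a sketch; the db name and server address are hypothetical):

    package main

    import (
        "fmt"
        "net/http"
        "strings"
        "time"
    )

    // escape backslash-escapes commas in measurement, tag, and value components,
    // per the rules above.
    func escape(s string) string {
        return strings.ReplaceAll(s, ",", `\,`)
    }

    func main() {
        // measurement[,tag=value...] field=value[,field2=value2...] [unixnano]
        line := fmt.Sprintf("cpu,host=%s,region=%s value=%0.1f %d",
            escape("serverA"), escape("us-west"), 1.0, time.Now().UnixNano())

        // POST the newline-separated points to the endpoint described above.
        resp, err := http.Post(
            "http://localhost:8086/write_points?db=mydb&precision=n",
            "text/plain",
            strings.NewReader(line+"\n"))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // expect 204 No Content on success
    }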


    PR Notes:

    The parser has been tuned to minimize allocations and extra work during parsing. For example, the raw byte slice read in is held onto as much as possible until there is a need to modify it. Similarly, values are not unmarshaled into Go types until necessary. It also tries to validate the input in a single pass over the data as much as possible. Tags need to be sorted, so it is preferable to send them already sorted to avoid sorting on the server. The sort has been tuned as well so that it performs consistently over a large range of inputs.

    My local benchmarks have parsing performing at around 750k-2M points/sec, depending on the shape of the point data.

  • Support per-query timezone offsets

    Support per-query timezone offsets

    Extending the discussion started in #2071 and re-starting the old discussion from #388, we should support time zones on a per-query basis.

    A quick example might look like:

    select mean(value) from cpu
    where time >= today()
    group by time(10m)
    time_zone(PST)
    

    Or you could also do time_zone(+8) or time_zone(-2).

    Any other suggestions?

  • Compaction crash loops and data loss on Raspberry Pi 3 B+ under minimal load

    Compaction crash loops and data loss on Raspberry Pi 3 B+ under minimal load

    Following up on this post with a fresh issue to highlight worse symptoms that don't seem explainable by a db-size cutoff (as was speculated on #6975 and elsewhere):

    In the month since that post, I've had to forcibly mv the data/collectd directory twice to unstick influx from 1-2min crash loops that lasted days, seemingly due to compaction errors.

    Today I'm noticing that my temps database (which I've not messed with during these collectd db problems, and gets about 5 points per second written to it) is missing large swaths of data from the 2 months I've been writing to it:

    The last gap, between 1/14 and 1/17, didn't exist this morning (when influx was still crash-looping, before the most recent time I ran mv /var/lib/influxdb/data/collectd ~/collectd.bak). That data was just recently discarded, it seems, possibly around the time I performed my "work-around" for the crash loop:

    sudo service influxdb stop
    sudo mv /var/lib/influxdb/data/collectd ~/collectd.bak
    sudo service influxdb start
    influx -execute 'create database collectd'
    

    The default retention policy should not be discarding data, afaict:

    > show retention policies on temps
    name    duration shardGroupDuration replicaN default
    ----    -------- ------------------ -------- -------
    autogen 0s       168h0m0s           1        true
    

    Here's the last ~7d of syslogs from the RPi server, 99.9% of which is logs from Influx crash-looping.

    There seem to be messages about:

    • WAL paths "already existing" when they were to be written to, and
    • compactions failing because they couldn't allocate memory
      • that's confusing because this RPi has consistently been using about half of its 1GB memory and 0% of its 2GB swap, with a 64GB USB flash drive as its disk, which is about 30% full.

    Is running InfluxDB on an RPi supposed to generally work, or am I in uncharted territory just by attempting it?

  • InfluxDB 1.7 uses way more memory and disk i/o than 1.6.4

    InfluxDB 1.7 uses way more memory and disk i/o than 1.6.4

    System info: InfluxDB 1.7, upgraded from 1.6.4. Running on the standard Docker image.

    Steps to reproduce:

    I upgraded a large InfluxDB server to InfluxDB 1.7. Nothing else changed. We are running two InfluxDB servers of a similar size, the other one was left at 1.6.4.

    This ran fine for about a day, then it started running into our memory limit and continually OOMing. We upped the memory and restarted. It ran fine for about 4 hours then started using very high disk i/o (read) which caused our stats writing processes to back off.

    Please see the timeline on the heap metrics below:

    • you can see relatively stable heap usage before we upgraded at 6am on Nov 9
    • at around 4pm there is a step-up in heap usage
    • around 11:30pm there is another step-up and it starts hitting the limit (OOM, causes a restart)
    • at 12:45pm on the 10th we restart with more RAM
    • around 4 hours later it starts using high i/o and you can see spikes in heap usage

    [heap metrics screenshot]

  • startup script influxd-systemd-start.sh stuck in while loop if http auth set

    startup script influxd-systemd-start.sh stuck in while loop if http auth set

    Actual behavior:

    The systemd service gets stuck in a while loop in /usr/lib/influxdb/scripts/influxd-systemd-start.sh if HTTP authentication is set. I believe the problem was introduced in commit c8de72ddbc5fdf20f821ca473f1fdf92820f9ac3.

    Environment info:

    • System info: Linux 5.10.53-sunxi64 aarch64
    • InfluxDB version: InfluxDB v1.8.7 (git: 1.8 v1.8.7)
    • Other relevant environment details: dpkg -l | grep influx: ii influxdb 1.8.7-1 arm64 Distributed time-series database; cat /etc/issue: Ubuntu 18.04.5 LTS
  • [feature request] Insert new tags to existing values, like update

    [feature request] Insert new tags to existing values, like update

    Can we have a query syntax which allows inserting/attaching a new set of tags (alongside existing ones) to values/rows that are already part of a measurement?

    My use case: I created a measurement from a PerfMon log, which already has a Host= tag. Now I want to categorize the data by application, so I want to add tags like App1=, App2=, assuming I can have two apps hosted on the same server.

    Then I want to be able to say Update <measurement name> add <tag name=value> where <some condition based on tags>

  • InfluxDB goes unresponsive

    InfluxDB goes unresponsive

    Bug report

    System info: Version: 0b4528b, OS: FreeBSD

    Steps to reproduce:

    1. ???

    Expected behavior: Keep working.

    Actual behavior: Stops responding. Process still running, but no longer responds to any queries.

    Additional info: Happens at random, but it seems the more activity, the more likely the issue will occur. Haven't identified a pattern yet.

    Here's the output of SIGQUIT. https://gist.githubusercontent.com/phemmer/251eab87914681d30f0e0435c664e9f7/raw/e79a847b896a7533586f6b384bd2aeb4c4c98083/log

  • Influxdb crashes after 2 hours

    Influxdb crashes after 2 hours

    Bug report

    System info: InfluxDB 1.1.0, OS: Debian

    Steps to reproduce:

    1. Start influxdb
    2. Create 500 different databases with 500 users
    3. Auth should be on

    Expected behavior:

    Should run normally

    Actual behavior:

    Crashes after two hours

  • unexpected fault address resulted in 7 day and 2 hour data loss

    unexpected fault address resulted in 7 day and 2 hour data loss

    Steps to reproduce: ¯\_(ツ)_/¯

    1. run influxdb for a few months and hope it happens to you?

    Note: I've had this happen before. I think it was always a loss of ~7 days, and never the most recent 7 days (just looking back, I found a loss from 2022-03-06 23:00:00 to 2022-03-14 02:00:00 on another bucket). I think I had more than these two; maybe something got recovered automatically?

    Expected behavior: influxdb to either not crash, or crash but not lose data

    Actual behavior: InfluxDB, apparently spontaneously, decided to segfault. It lost 7 days and 2 hours of data in some measurements in one bucket, for a period ending approximately 4 days before the crash. Going backwards in time, I have: today's data up to the crash (2022-12-30 02:46:50), then data back to ~4 days before the crash (2022-12-26 01:00:00), then 7 days and 2 hours missing (down to 2022-12-18 23:36:20); older data is still there.

    Environment info:

    • System info: Linux 6.1.0 x86_64
      • NixOS unstable
    • InfluxDB version: InfluxDB 2.5.1 (git: v2.5.1) build_date: 2022-12-30T20:33:07Z
      • the reported build_date is the time influxd version was executed

    Config: apparently nothing non-stock, except a mistyped attempt to disable telemetry reporting

    Logs: influxdb2-crash-trimmed.log

  • fix: Update UI to resolve Dashboard crash and All Access Token creati…

    fix: Update UI to resolve Dashboard crash and All Access Token creati…

    Clean cherry-pick of https://github.com/influxdata/influxdb/pull/24014 to master.

    Note for reviewers:

    Check the semantic commit type:

    • Feat: a feature with user-visible changes
    • Fix: a bug fix that we might tell a user “upgrade to get this fix for your issue”
    • Chore: version bumps, internal doc (e.g. README) changes, code comment updates, code formatting fixes… must not be user facing (except dependency version changes)
    • Build: build script changes, CI config changes, build tool updates
    • Refactor: non-user-visible refactoring
    • Check the PR title: we should be able to put this as a one-liner in the release notes
  • Anti brute force protection

    Anti brute force protection

    Hello; I looked everywhere but did not find anything in the docs or by searching, so I thought I should ask here.

    Does influxdb2 support some form of anti-brute force protection, like Grafana (https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#disable_brute_force_login_protection) ?

    If not, how can this be achieved (which logs contain authentication errors, what do they look like, how do I create a fail2ban configuration...)?

  • OSS: 2.6.1 - Getting 404 accessing the node.js setup page

    OSS: 2.6.1 - Getting 404 accessing the node.js setup page

    Steps to reproduce:

    1. Install OSS 2.6.1 (create ORG, users etc etc)
    2. Go to http://localhost:8086 and authenticate
    3. Place the cursor over the 'Up arrow' icon, displaying the sub-menu "Sources|Buckets|Telegraf|Scrapers|API Tokens"
    4. Select "Sources"
    5. Click JavaScript/Node.js under Client Libraries

    Observed: Getting 404 Page Not Found. Expected: Directed to the Node.js setup page.

    Environment info:

    • System info: Linux 4.18.0-408.el8.x86_64 x86_64

    • InfluxDB version: InfluxDB v2.6.1 (git: 9dcf880fe0) build_date: 2022-12-29T15:53:07Z

    • Other relevant environment details: CentOS Stream release 8, influxdb2-2.6.1-1.x86_64, influxdb2-cli-2.6.1-1.x86_64

    Config: bolt-path = "/var/lib/influxdb/influxd.bolt" engine-path = "/var/lib/influxdb/engine" flux-log-enabled = true ui-disabled = false hardening-enabled = false

  • /authorizations/{authID} PATCH - missing or invalid request body results in HTTP 500

    /authorizations/{authID} PATCH - missing or invalid request body results in HTTP 500

    Steps to reproduce:

    Testing against the API

    1. prepare a PATCH request to be sent to the endpoint /authorizations/{authID}
    2. do one of the following
      1. leave out the request body
      2. Use non string values for either the status or the description properties. e.g. { foo: "bar"} or Math.PI
    3. send the request

    Expected behavior: A missing or invalid request body should be caught by the server, and an HTTP 400 status returned with a message that the request body is missing or that properties are invalid.

    Actual behavior: The server returned HTTP 500
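
    A minimal sketch of such a request, corresponding to case 2.2 above (the token and authorization ID are hypothetical):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func main() {
        // Non-string value for "status", as described in step 2.2 above.
        body := strings.NewReader(`{"status": 3.141592653589793}`)
        req, err := http.NewRequest(http.MethodPatch,
            "http://localhost:8086/api/v2/authorizations/0000000000000001", body)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Token my-token") // hypothetical token
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        b, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(b)) // observed: 500 instead of the expected 400
    }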

    Environment info:

    Testing against K8S-IDPE

    latest commit

    commit 0e28da062cc917e43809c80f235976d244947bb0 (HEAD -> master, origin/master, origin/HEAD)
    Author: influx-acs[bot] <107396960+influx-acs[bot]@users.noreply.github.com>
    Date:   Tue Dec 27 20:12:20 2022 +0000
    
    
  • Finite retention period cannot be removed

    Finite retention period cannot be removed

    Steps to reproduce:

    1. Start an InfluxDB docker container. I did it like this:
    docker run -d \
          --name influxdb \
          -p 8086:8086 \
          -e TZ=europe/london \
          --restart=unless-stopped \
          -v /home/me/influx/data:/var/lib/influxdb2 \
          influxdb:latest
    
    2. Go to localhost:8086 and complete the initial set up. Select the "Quick Start" option.
    3. Go to "Load Data" -> "Buckets"
    4. The retention period for the default bucket should say "forever".
    5. Click the settings for that bucket and change it to 7 days.
    6. The retention period should change to 7 days.
    7. Click the settings again and change it to "never".

    Expected behavior: The retention period should change back to "forever".

    Actual behavior: The retention period continues to be "7 days" regardless of reloading the page etc.

    Environment info: Docker influxdb:latest InfluxDB v2.6.0 (git: 24a2b621ea) build_date: 2022-12-15T18:47:00Z

    Config: All defaults.
