influxdb-cluster - InfluxDB Cluster, an open source replacement for InfluxDB Enterprise


ATTENTION:

Around January 11th, 2019, master on this repository will become InfluxDB 2.0 code. The content of influxdata/platform will be moved to this repository. If you rely on master, you should update your dependencies to track the 1.7 branch.

An Open-Source Time Series Database

InfluxDB is an open source time series database with no external dependencies. It's useful for recording metrics and events, and for performing analytics.

Features

  • Built-in HTTP API so you don't have to write any server side code to get up and running.
  • Data can be tagged, allowing very flexible querying.
  • SQL-like query language.
  • Simple to install and manage, and fast to get data in and out.
  • It aims to answer queries in real-time. That means every data point is indexed as it comes in and is immediately available in queries that should return in < 100ms.

Installation

We recommend installing InfluxDB using one of the pre-built packages. Then start InfluxDB using:

  • service influxdb start if you have installed InfluxDB using an official Debian or RPM package.
  • systemctl start influxdb if you have installed InfluxDB using an official Debian or RPM package, and are running a distro with systemd. For example, Ubuntu 15 or later.
  • $GOPATH/bin/influxd if you have built InfluxDB from source.
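
To form a cluster, the meta and data nodes also have to be joined together. Below is a hedged sketch of that flow, assuming the packaged influxd-meta service and placeholder hostnames; it mirrors the influxd-ctl workflow used in the issue reports further down.

# Start the meta service on every meta node (service name assumed from the packages)
service influxdb-meta start

# From one meta node, register the other meta nodes
influxd-ctl add-meta influxdb-meta-02:8091
influxd-ctl add-meta influxdb-meta-03:8091

# Start the data service on every data node, then register them and verify
service influxdb start
influxd-ctl add-data influxdb-data-01:8088
influxd-ctl add-data influxdb-data-02:8088
influxd-ctl show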

Getting Started

Create your first database

curl -XPOST "http://localhost:8086/query" --data-urlencode "q=CREATE DATABASE mydb"

Insert some data

curl -XPOST "http://localhost:8086/write?db=mydb" \
-d 'cpu,host=server01,region=uswest load=42 1434055562000000000'

curl -XPOST "http://localhost:8086/write?db=mydb" \
-d 'cpu,host=server02,region=uswest load=78 1434055562000000000'

curl -XPOST "http://localhost:8086/write?db=mydb" \
-d 'cpu,host=server03,region=useast load=15.4 1434055562000000000'

Query for the data

curl -G "http://localhost:8086/query?pretty=true" --data-urlencode "db=mydb" \
--data-urlencode "q=SELECT * FROM cpu WHERE host='server01' AND time < now() - 1d"

Analyze the data

curl -G "http://localhost:8086/query?pretty=true" --data-urlencode "db=mydb" \
--data-urlencode "q=SELECT mean(load) FROM cpu WHERE region='uswest'"

Documentation

Contributing

If you're feeling adventurous and want to contribute to InfluxDB, see our contributing doc for info on how to make feature requests, build from source, and run tests.

Licensing

See LICENSE and DEPENDENCIES.

Looking for Support?

InfluxDB offers a number of services to help your project succeed. We offer Developer Support for organizations in active development, Managed Hosting to make it easy to move into production, and Enterprise Support for companies requiring the best response times, SLAs, and technical fixes. Visit our support page or contact [email protected] to learn how we can best help you succeed.

Owner
Shiwen Cheng
Comments
  • Node synchronization issues

    1. After initializing the cluster and adding data nodes, the built-in _internal database is not kept in sync across nodes. Is that normal?
    2. After dropping a database or measurement on one node, the databases and measurements on the other nodes are sometimes not dropped, leaving the nodes out of sync. Why does this happen?
    3. Do deletions have to be performed through the API, or can they be issued as SQL statements from the console or a visual client?
    4. Once the nodes are out of sync, how can they be resynchronized?
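
    On question 3, a minimal sketch of issuing a drop either way, assuming a data node reachable on the default 8086 port:

    # Drop through the HTTP query endpoint of any data node
    curl -XPOST "http://localhost:8086/query" --data-urlencode "q=DROP MEASUREMENT cpu"

    # Or run the same statement through the influx console
    influx -host localhost -port 8086 -execute 'DROP DATABASE mydb'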

  • Question about deploying Chronograf with docker-compose

    https://github.com/chengshiwen/influxdb-cluster/blob/master/docker/quick/docker-compose.yml

    In this docker-compose deployment, the data nodes expose ports 8186 and 8286 to the host while the meta nodes expose no ports. Does that mean an external load balancer (SLB) still has to be added in front of those two ports? Or should it be handled inside docker-compose, by adding an nginx listening on 8086 that proxies to the data nodes, and a Chronograf instance pointing at each node's 8086 and 8091 ports?
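
    For reference, a hedged sketch of talking to the cluster directly through those published ports (port numbers are taken from the question above; verify them against the compose file):

    # Either published data-node port accepts the normal HTTP API
    curl -XPOST "http://localhost:8186/query" --data-urlencode "q=CREATE DATABASE mydb"
    curl -XPOST "http://localhost:8286/write?db=mydb" \
    -d 'cpu,host=server01 load=42'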

  • meta node cannot add data node

    When I run influxd-ctl add-data influxdb-data-02:8088 on meta-01, I get this:

    root@influxdb-meta-01:~/go/bin# influxd-ctl add-data influxdb-data-02:8088 add-data: operation exited with error: Get "http://localhost:8091/status": dial tcp 127.0.0.1:8091: connect: connection refused

    It seems that meta-01 cannot reach data-02 at http://localhost:8091.

    And here is the output on data-02:

    2022-08-01T09:28:30.028071Z info Failed to create storage {"log_id": "0c2B_9Xl000", "service": "monitor", "db_instance": "_internal", "error": "Post "http://localhost:8091/execute": dial tcp 127.0.0.1:8091: connect: connection refused"}
    2022-08-01T09:28:30.029493Z info failed to store statistics {"log_id": "0c2B_9Xl000", "service": "monitor", "error": "database not found: _internal"}
    2022-08-01T09:28:36.979346Z info Failure getting snapshot {"log_id": "0c2B_9Xl000", "service": "metaclient", "server": "localhost:8091", "error": "Get "http://localhost:8091?index=0": dial tcp 127.0.0.1:8091: connect: connection refused"}

    How can I solve this error and add data-02?
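
    Both errors show nodes advertising themselves as localhost:8091. A hedged sketch of the usual fix, assuming the packaged config paths and service names:

    # On the meta node, set the top-level hostname in /etc/influxdb/influxdb-meta.conf
    # to an address the other machines can resolve, e.g.
    #   hostname = "influxdb-meta-01"
    sudo systemctl restart influxdb-meta

    # On the data node, do the same in /etc/influxdb/influxdb.conf, e.g.
    #   hostname = "influxdb-data-02"
    sudo systemctl restart influxdb

    # Then retry the join from the meta node
    influxd-ctl add-data influxdb-data-02:8088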

  • influxd-ctl show returns different results on two meta nodes

    Version: [1.8.10-c1.1.0/influxdb-cluster_1.8.10-c1.1.0_static_linux_amd64.tar.gz]

    Symptoms:

    1. The three meta nodes are spread across two servers.
    2. influxd-ctl show works normally on one of the servers but returns abnormal results on the other.
    3. Screenshots attached for server 1 and server 2.
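
    A hedged way to narrow this down is to point influxd-ctl at each meta node explicitly and diff the output (hostnames below are placeholders):

    influxd-ctl -bind influxdb-meta-01:8091 show > show-meta-01.txt
    influxd-ctl -bind influxdb-meta-02:8091 show > show-meta-02.txt
    diff show-meta-01.txt show-meta-02.txt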


  • Failure to add new data node to cluster

    System info: InfluxDB version 1.8.10-c1.1.2, EC2 AMI ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20221201

    Steps to reproduce:

    1. Followed the documentation for creating an InfluxDB cluster from the pre-built packages. The setup was one meta node, to which one data node would be attached. Started the meta node as a single server:
    sudo /home/ubuntu/influxdb-cluster-1.8.10-c1.1.2-1/usr/bin/influxd-meta -config /home/ubuntu/influxdb-cluster-1.8.10-c1.1.2-1/etc/influxdb/influxdb.conf -single-server &
    

    The server came up. Started the data node on a different Ubuntu box, with the hostname changed in the InfluxDB config file as specified in the docs.

    sudo /home/ubuntu/influxdb-cluster-1.8.10-c1.1.2-1/usr/bin/influxd -config /home/ubuntu/influxdb-cluster-1.8.10-c1.1.2-1/etc/influxdb/influxdb.conf
    
    ubuntu@ip-172-16-1-144:~/influxdb-cluster-1.8.10-c1.1.2-1/usr/bin$ /home/ubuntu/influxdb-cluster-1.8.10-c1.1.2-1/usr/bin/influxd-ctl show
    Data Nodes
    ==========
    ID	TCP Address	Version
    
    Meta Nodes
    ==========
    ID	TCP Address	Version
    1	localhost:8091	1.8.10-c1.1.2
    
    2. Tried to attach the data node by running the command below from meta-01:
    /home/ubuntu/influxdb-cluster-1.8.10-c1.1.2-1/usr/bin/influxd-ctl add-data influxdb-data-03:8088
    add-data: operation exited with error: read message size: EOF
    

    The data node's influxd server logged the error below:

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xf5a79d]
    
    goroutine 45 [running]:
    github.com/influxdata/influxdb/coordinator.(*JoinClusterResponse).MarshalBinary(0xc000290678, 0x1, 0x1, 0x0, 0x0, 0xc000290678)
    	/root/influxdb/coordinator/rpc.go:1330 +0x5d
    github.com/influxdata/influxdb/coordinator.EncodeLV(0x2562e00, 0xc000010fc0, 0x255f560, 0xc000290678, 0x0, 0x2562e00)
    	/root/influxdb/coordinator/service.go:1594 +0x35
    github.com/influxdata/influxdb/coordinator.EncodeTLV(0x2562e00, 0xc000010fc0, 0xc000010f28, 0x255f560, 0xc000290678, 0x1, 0x8)
    	/root/influxdb/coordinator/service.go:1586 +0x85
    github.com/influxdata/influxdb/coordinator.(*Service).processJoinClusterRequest(0xc000106d80, 0x259fb80, 0xc000010fc0)
    	/root/influxdb/coordinator/service.go:1366 +0x2ab
    github.com/influxdata/influxdb/coordinator.(*Service).handleConn(0xc000106d80, 0x259fb80, 0xc000010fc0)
    	/root/influxdb/coordinator/service.go:422 +0x1466
    github.com/influxdata/influxdb/coordinator.(*Service).serve.func1(0xc000106d80, 0x259fb80, 0xc000010fc0)
    	/root/influxdb/coordinator/service.go:284 +0x6f
    created by github.com/influxdata/influxdb/coordinator.(*Service).serve
    	/root/influxdb/coordinator/service.go:282 +0x13f
    

    Checking the source code, it looks like node_id is not being passed or is null. Please help find a fix.

  • No obvious write-performance gain from distributed data nodes

    In theory, deploying data nodes on different servers allows writes to proceed concurrently and should improve write throughput, but in practice I did not see any improvement. I used two servers as two data nodes with the replication factor set to 1, and checking memory confirmed that each server received only half of the data. However, writing 3.8 million points to influxdb-cluster took 20 s (with all requests sent to a single node), and writing the same data to a standalone InfluxDB also took 20 s. Does influxdb-cluster not support concurrent writes across multiple data nodes, or is the problem in how I am issuing the writes?
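
    Since every request in this test went to a single node, that node's HTTP endpoint is the serialization point regardless of how shards are distributed; each data node accepts writes and routes them internally. A minimal sketch of spreading the client load across both data nodes (hostnames, ports, and file names below are assumptions, not part of the original report):

    # Split the line-protocol file into batches and fan the writes out
    # across both data nodes concurrently
    split -l 5000 points.lp batch_
    i=0
    for f in batch_*; do
      if [ $((i % 2)) -eq 0 ]; then node=influxdb-data-01:8086; else node=influxdb-data-02:8086; fi
      curl -s -XPOST "http://$node/write?db=mydb" --data-binary @"$f" &
      i=$((i + 1))
    done
    wait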

  • Simulating a single-node failure: querying the other nodes always crashes them

    3 meta nodes, 4 data nodes, writes configured with --replication-factor=2.

    With one influxd stopped, querying from the other nodes makes them crash one after another until all are down. This reproduces every time.

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x9a1650]
    
    goroutine 25225082 [running]:
    github.com/influxdata/influxdb/query.Iterators.Close(0xc05c47e7c0, 0x1, 0x2, 0xc004348a00, 0x2531498)
            /root/influxdb/query/iterator.go:48 +0x50
    github.com/influxdata/influxdb/coordinator.(*ClusterShardMapping).CreateIterator(0xc0043489b0, 0x253aee0, 0xc047f69290, 0xc004348a00, 0x2531498, 0xc047f694a0, 0x0, 0x0, 0x0, 0x0, ...)
            /root/influxdb/coordinator/shard_mapper.go:469 +0x445
    github.com/influxdata/influxdb/query.(*exprIteratorBuilder).callIterator.func1(0xc0064ad940, 0x253aee0, 0xc047f69290, 0xc0064abc48, 0xc0064abc00, 0xc047f694a0, 0x7f, 0x1279e25)
            /root/influxdb/query/select.go:583 +0x535
    github.com/influxdata/influxdb/query.(*exprIteratorBuilder).callIterator(0xc00059d940, 0x253aee0, 0xc047f69290, 0xc047f694a0, 0x2531498, 0xc047f694a0, 0x0, 0x0, 0x0, 0x0, ...)
            /root/influxdb/query/select.go:608 +0xe5
    github.com/influxdata/influxdb/query.(*exprIteratorBuilder).buildCallIterator.func1(0xc047f694a0, 0x253aee0, 0xc047f69290, 0xc00059d940, 0xc00059cd20, 0xc00cdb3860, 0x250da80, 0x250da60, 0xc036229aa0)
            /root/influxdb/query/select.go:515 +0xe5
    github.com/influxdata/influxdb/query.(*exprIteratorBuilder).buildCallIterator(0xc0064ad940, 0x253aee0, 0xc047f69290, 0xc047f694a0, 0x7f07a5581338, 0xc00d2df9c0, 0x866349, 0xc04593cd50)
            /root/influxdb/query/select.go:559 +0x745
    github.com/influxdata/influxdb/query.buildExprIterator(0x253aee0, 0xc047f69290, 0x2531498, 0xc047f694a0, 0x7f08202f3848, 0xc0043489b0, 0xc015582d10, 0x1, 0x1, 0x2531498, ...)
            /root/influxdb/query/select.go:156 +0x285
    github.com/influxdata/influxdb/query.buildFieldIterator(0x253aee0, 0xc047f69290, 0x2531498, 0xc047f694a0, 0x7f08202f3848, 0xc0043489b0, 0xc015582d10, 0x1, 0x1, 0x0, ...)
            /root/influxdb/query/select.go:870 +0x4b6
    github.com/influxdata/influxdb/query.buildCursor.func1(0x0, 0x0)
            /root/influxdb/query/select.go:744 +0x12b
    golang.org/x/sync/errgroup.(*Group).Go.func1(0xc047f694d0, 0xc003ac0af0)
            /root/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x59
    created by golang.org/x/sync/errgroup.(*Group).Go
            /root/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:54 +0x66
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x9a1650]
    
  • how to install in k8s

  • How to rejoin the cluster after remove-data

    Two questions, assuming replication-factor = 2 and consistency = one:

    1. Simulating a single data-node failure: the healthy node buffers writes in hinted handoff. Is there an internal health-check mechanism so that writes resume once the failed node recovers, and what is the check interval?
    2. Simulating a single data-node failure: after running remove-data and then trying to rejoin the cluster with add-data, influxd-meta still cannot tell that the node has recovered. The log shows lvl=error msg="Failed to determine if node is active" log_id=0anK~crG000 service=handoff node=6 error="node not found", even though the node ID has actually changed to 10. update-data reports the same address before and after and has no effect either.
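
    For question 2, a heavily hedged sketch of one re-join sequence (the paths, service name, and the need to clear the data node's local meta directory are assumptions; verify against the docs before deleting anything):

    # On the removed data node: stop the service and clear its stale cluster identity
    sudo systemctl stop influxdb
    sudo rm -rf /var/lib/influxdb/meta   # assumption: default meta dir of a data node
    sudo systemctl start influxdb

    # On a meta node: add the node back and verify its new ID
    influxd-ctl add-data influxdb-data-02:8088
    influxd-ctl show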