Mogo

Mogo is a lightweight, browser-based log analytics and log search platform for multiple data sources (ClickHouse, MySQL, etc.).

Live demo

  • Log search page

  • Configuration page

TODO.

Features

  • Visual query dashboard; supports histogram and raw-log queries via SQL.
  • Shows value-distribution percentages for specified fields.
  • VS Code-style configuration board; easily emit your fluent-bit configuration to a Kubernetes ConfigMap.
  • Works out of the box; easily deployed with kubectl.
  • Supports GitHub and GitLab authentication.

Architecture

(architecture diagram)

Installation

  • For host
# download a release
# go to https://github.com/shimohq/mogo/releases and choose a specific release to download,
# or resolve the latest tag automatically:
latest=$(curl -sL https://api.github.com/repos/shimohq/mogo/releases/latest | grep ".tag_name" | sed -E 's/.*"([^"]+)".*/\1/')
# for macOS
wget https://github.com/shimohq/mogo/releases/download/${latest}/mogo_${latest}_darwin_x86_64.tar.gz -O mogo.tar.gz 
# for Linux
wget https://github.com/shimohq/mogo/releases/download/${latest}/mogo_${latest}_linux_x86_64.tar.gz -O mogo.tar.gz  

# extract the tarball
tar -xvf mogo.tar.gz

# start api server


# configure nginx config
  • For Docker
git clone https://github.com/shimohq/mogo.git
docker-compose up

# then open http://localhost:9001 in your browser
# username: admin
# password: admin
  • For Helm: TODO.

Main Tasks

- [x] task1

- [x] task2

Bugs or feature requests

If you want to report a bug or request a feature, create an issue here.

Contributors

Owner
Shimo HQ
💻 A cloud-based productivity suite that combines documents, spreadsheets, slides, and more in a simple interface.
Comments
  • Failed to add a ClickHouse instance

    Hi, adding a ClickHouse instance fails with the message {"data":{"code":1,"msg":"DNS configuration exception, database connection failure: could not load time location: unknown time zone Asia/Shanghai","data":null}}. What causes this?

    The database types I tried configuring for mogo are MySQL 5.6, MySQL 5.7, and TiDB 5.3; I ran mogo as a binary, in Docker, and on Kubernetes.

  • [Help] The time column in the ClickHouse table is a Date type; filtering by time raises an error

    I planned to use clickvisual to browse the log data in ClickHouse, but after importing the log table, filtering by time fails. The version is v0.3.0-rc3. Code: 386. DB::Exception: There is no supertype for types DateTime64(3), UInt32 because some of them are Date/Date32/DateTime/DateTime64 and some of them are not. I suspect the SQL that clickvisual generates turns the time into an integer, and ClickHouse does not support comparing a time type against an integer directly.

    Maybe something is misconfigured on my side. (screenshots omitted)
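
    A minimal reproduction in ClickHouse SQL makes the suspicion concrete: comparing a DateTime64(3) column against a bare epoch integer fails, while casting the literal first works. This is a sketch only; the table name logs and the column name _time_ are assumptions, and the epoch values are illustrative:

    -- fails with "no supertype for types DateTime64(3), UInt32":
    SELECT count() FROM logs WHERE _time_ >= 1653562421;
    -- works: cast the epoch literal to the column's own type first
    SELECT count() FROM logs WHERE _time_ >= toDateTime64(1653562421, 3);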

  • Failed to add a Rancher-based cluster

    Adding a k8s cluster created with Rancher fails. Judging from the logs below, it is a certificate problem. How do I add a k8s cluster that uses a self-signed certificate? Is there a way to skip verification?

    W0927 08:34:52.364791 304277 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Deployment: Get "https://10.1.97.50/k8s/clusters/c-nrglh/apis/apps/v1/deployments?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    E0927 08:34:52.364995 304277 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Deployment: failed to list *v1.Deployment: Get "https://10.1.97.50/k8s/clusters/c-nrglh/apis/apps/v1/deployments?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    W0927 08:34:53.749217 304277 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Namespace: Get "https://10.1.97.51:6443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    E0927 08:34:53.749319 304277 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.1.97.51:6443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    W0927 08:34:56.793869 304277 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Pod: Get "https://10.1.97.50/k8s/clusters/c-nrglh/api/v1/pods?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    E0927 08:34:56.793988 304277 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.1.97.50/k8s/clusters/c-nrglh/api/v1/pods?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    W0927 08:34:57.437284 304277 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Service: Get "https://10.1.97.51:6443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    E0927 08:34:57.437546 304277 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.1.97.51:6443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    W0927 08:34:59.780796 304277 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Node: Get "https://10.1.97.50/k8s/clusters/c-nrglh/api/v1/nodes?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    E0927 08:34:59.780912 304277 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.1.97.50/k8s/clusters/c-nrglh/api/v1/nodes?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    W0927 08:35:04.067085 304277 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://10.1.97.50/k8s/clusters/c-nrglh/api/v1/events?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    E0927 08:35:04.067206 304277 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://10.1.97.50/k8s/clusters/c-nrglh/api/v1/events?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    W0927 08:35:04.253143 304277 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Service: Get "https://10.1.97.50/k8s/clusters/c-nrglh/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    E0927 08:35:04.253242 304277 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.1.97.50/k8s/clusters/c-nrglh/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    W0927 08:35:04.922634 304277 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Endpoints: Get "https://10.1.97.50/k8s/clusters/c-nrglh/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
    E0927 08:35:04.922828 304277 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.1.97.50/k8s/clusters/c-nrglh/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority

  • When an analysis field contains "_host" and hashing is enabled, querying by that field fails: the generated query uses the wrong column name

    Version: 0.4.0-rc1. ClickHouse: 21.8.4.51. Query condition: upstream_proxy_host='fabio-crm-api'. The query actually issued: upstream_proxy__inner_siphash_host_ = sipHash64('fabio-crm-api'). The actual ClickHouse column: _inner_siphash_upstream_proxy_host_. (screenshots omitted)
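
    Side by side in ClickHouse SQL (the predicate and column names are taken from the report; the table name logs is an assumption):

    -- what clickvisual generates: the _host suffix is spliced into the wrong position
    SELECT * FROM logs WHERE upstream_proxy__inner_siphash_host_ = sipHash64('fabio-crm-api');
    -- the column that actually exists in ClickHouse
    SELECT * FROM logs WHERE _inner_siphash_upstream_proxy_host_ = sipHash64('fabio-crm-api');
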
  • When the ClickHouse instance is a cluster, the new-database dialog does not show the cluster option

    Using the latest version, clickvisual v0.4.0-rc3.

    I installed a ClickHouse cluster with 3 shards and 2 replicas. When I create the instance as a cluster, the new-database dialog does not show the cluster option. (screenshots omitted)

    Logs:

    2022/08/21 14:58:11 /home/runner/go/pkg/mod/gorm.io/driver/[email protected]/migrator.go:181
    [0.809ms] [rows:-] SELECT column_name, column_default, is_nullable = 'YES', data_type, character_maximum_length, column_type, column_key, extra, column_comment, numeric_precision, numeric_scale , datetime_precision FROM information_schema.columns WHERE table_schema = 'clickvisual' AND table_name = 'cv_pms_casbin_rule' ORDER BY ORDINAL_POSITION
    
    2022/08/21 14:58:11 /home/runner/work/clickvisual/clickvisual/api/internal/service/install/install.go:127 Error 1062: Duplicate entry '1' for key 'PRIMARY'
    [0.322ms] [rows:0] INSERT INTO `cv_pms_casbin_rule` VALUES (1, 'p', 'role__root', '*', '*', '*', '', '', '','');
    
    2022/08/21 14:58:11 /home/runner/work/clickvisual/clickvisual/api/internal/service/install/install.go:128 Error 1062: Duplicate entry '2' for key 'PRIMARY'
    [0.245ms] [rows:0] INSERT INTO `cv_pms_casbin_rule` VALUES (2, 'g3', 'user__1', 'role__root', '', '', '', '', '', '');
    

    Also, my architecture is laid out as: vector (any number of instances) + a Kafka cluster + a load balancer over 3 clickvisual instances + the ClickHouse cluster.
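
    One thing worth ruling out first: whether the configured connection can see the cluster definition at all. If system.clusters comes back empty over that connection, there is no cluster for the dialog to offer. A diagnostic sketch, under the assumption that clickvisual discovers clusters via the standard system table:

    SELECT cluster, shard_num, replica_num, host_name
    FROM system.clusters;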

  • Self-created table whose time column is DateTime64(3); queries fail

    Error message: {"data":{"code":1,"msg":"query failed: code: 53, message: Type mismatch in IN or VALUES section. Expected: DateTime64(3). Got: UInt64","data":null}}
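
    This looks like the same class of mismatch as the Date-filter issue above, only surfacing in an IN/VALUES clause. Casting the literals to the column's type sidesteps it; a sketch with an assumed table logs and column _time_:

    -- fails: Expected: DateTime64(3). Got: UInt64
    SELECT count() FROM logs WHERE _time_ IN (1653562421, 1653562460);
    -- works:
    SELECT count() FROM logs WHERE _time_ IN (toDateTime64(1653562421, 3), toDateTime64(1653562460, 3));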

  • how to set datasource

    The Kafka log format is shown below, but after setting up the data source I cannot read any data. How should I configure the data source and specify the fields? (screenshots omitted)

    The query returns: {"code":0,"msg":"the query data is empty","data":null}

    Kafka log format:

    {
      "_time_": 1664175112.321259,
      "log": "I0926 06:51:52.317190 1 reflector.go:530] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Watch close - *v1.ConfigMap total 0 items received ",
      "stream": "stderr",
      "time": "2022-09-26T06:51:52.321258473Z",
      "kubernetes": {
        "pod_name": "rancher-monitoring-prometheus-adapter-5ddcd656d9-cjr48",
        "namespace_name": "cattle-monitoring-system",
        "pod_id": "4453c017-fc1f-493b-a229-2e8da62310b6",
        "host": "ip-172-18-6-20.cn-north-1.compute.internal",
        "container_name": "prometheus-adapter",
        "docker_id": "426f7c05a793d9fb947c0168038c38f70f0e8f81134dfd9adfe0aef1f1dccb64",
        "container_hash": "rancher/mirrored-directxman12-k8s-prometheus-adapter@sha256:c46d807d011cf127af2c298121c2d29ff8ce6f6b71061f69b9cca1731a9e3fde",
        "container_image": "rancher/mirrored-directxman12-k8s-prometheus-adapter:v0.8.4"
      }
    }
    
    

    Datasource sample:

    {
    "_time_":"1663744997.845103",
    "stream":"stderr",
    "time":"2022-09-23T02:53:18.54929225Z",
    "kubernetes":"{\"pod_name\":\"argocd-server-74785876b4-l45lb\",\"namespace_name\":\"argocd\",\"pod_id\":\"6582181e-fb41-4137-a983-f385c7a6dfe6\",\"host\":\"ip-172-18-6-20.cn-north-1.compute.internal\",\"container_name\":\"argocd-server\",\"docker_id\":\"ca5ebec90a2ef991e13e193acaad5ef51b409d079a4ad3bd183560498dd0ba3d\",\"container_hash\":\"quay.io/argoproj/argocd@sha256:358c244c96313ca3bf9f588dc870d8123fc22ffa5c231c57da10f77b8d671c66\",\"container_image\":\"quay.io/argoproj/argocd:v2.2.2\"}"
    }
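
    Since kubernetes arrives as a JSON-encoded string in this sample, nested keys can still be pulled out at query time with ClickHouse's JSON functions. A sketch, assuming the raw message lands in a String column _raw_log_ of a hypothetical logs table:

    SELECT
        JSONExtractString(_raw_log_, 'stream') AS stream,
        -- kubernetes is itself a JSON-encoded string here, so unwrap it before extracting:
        JSONExtractString(JSONExtractString(_raw_log_, 'kubernetes'), 'pod_name') AS pod_name
    FROM logs
    LIMIT 10;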
    
  • When configuring an alert, clickvisual reports that the Prometheus rules path does not exist and cannot create the rule

    clickvisual backend log: {"lv":"error","ts":1663053027,"msg":"alarm","step":"alarm create failed 09","err":"open /opt/bitnami/rules/cv-3da6cb78-11ff-4378-b23c-19f5d29b085d.yaml: no such file or directory"} {"lv":"warn","ts":1663053027,"msg":"biz warning","value":"alarm create failed 02: open /opt/bitnami/rules/cv-3da6cb78-11ff-4378-b23c-19f5d29b085d.yaml: no such file or directory","value":null,"tid":"77939ae5d8288a9dffea138874977ee

    cat prometheus.yaml

    alerting:
      alertmanagers:
        - static_configs:
            - targets: ["172.17.0.1:9093"]
    rule_files:
      - /opt/bitnami/rules/*.yaml
    remote_read:
      - url: "http://172.17.0.1:9201/read"
        read_recent: true
    remote_write:
      - url: "http://172.17.0.1:9201/write"
        queue_config:
          capacity: 10000
          max_shards: 1
          max_samples_per_send: 500

    Creating a rule manually under /opt/bitnami/rules/ works fine, and the Prometheus UI picks it up.

  • When creating an alert, ClickHouse reports "MergeTree engine is deprecated"

    The data source is ClickHouse.

    Table schema:

    CREATE TABLE student_mt
    (
        id Int,
        sno String,
        name String,
        cno String,
        create_time DateTime
    )
    ENGINE = MergeTree
    PARTITION BY create_time
    ORDER BY create_time;

    In the query box of the alert's inspection statistics I entered name = 'alex'. The preview shows data, but creating the alert keeps failing with: Request failed. Error: alarm create failed 02: code: 36, message: This syntax for *MergeTree engine is deprecated. Use extended storage definition syntax with ORDER BY/PRIMARY KEY clause. See also allow_deprecated_syntax_for_merge_tree setting.

    The clickvisual version is v0.4.0; the ClickHouse version is:

    ClickHouse client version 22.8.2.11 (official build). ClickHouse server version 22.8.2.11 (official build).
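
    The table above already uses the extended syntax, so the legacy form is presumably emitted by clickvisual itself when it creates an internal object for the alert. As the error message suggests, one stopgap (an assumption, not a verified fix) is to re-enable the deprecated syntax for the profile that clickvisual connects with:

    -- ClickHouse 22.8 rejects the pre-ORDER BY MergeTree syntax by default;
    -- the setting named in the error message turns the old syntax back on:
    SET allow_deprecated_syntax_for_merge_tree = 1;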

  • Does clickvisual support Kafka with SASL enabled?

    1. What does the "source" mean when creating a log library? Must the raw log be JSON? For example, when the raw log is the following:

    {
      "file": "/tmp/nginx.log",
      "host": "k8s-node1",
      "message": "{\"host\":\"173.88.189.116\", \"user-identifier\":\"dicki2125\", \"datetime\":\"27/Aug/2022:08:59:20 +0000\", \"method\": \"HEAD\", \"request\": \"/global/morph/virtual\", \"protocol\":\"HTTP/1.0\", \"status\":203, \"bytes\":12192, \"referer\": \"http://www.globalschemas.io/streamline\"}",
      "source_type": "file",
      "timestamp": "2022-08-27T09:25:48.332296833Z"
    }
    

    A simple setting would be:

    {
    "file":"/tmp/nginx.log",
    "host":"k8s-node1",
    "message":"",
    "source_type":"file",
    "timestamp":"2022-08-27T08:59:42.258757055Z"
    }
    

    A complex setting would be:

    {
    "file":"/tmp/nginx.log",
    "host":"k8s-node1",
    "message":"{\"host\":\"99.176.14.146\", \"user-identifier\":\"dicki6073\", \"datetime\":\"27/Aug/2022:08:59:20 +0000\", \"method\": \"DELETE\", \"request\": \"/implement\", \"protocol\":\"HTTP/1.1\", \"status\":100, \"bytes\":17719, \"referer\": \"https://www.dynamicstrategize.name/whiteboard/metrics\"}",
    "source_type":"file",
    "timestamp":"2022-08-27T08:59:42.258757055Z"
    }
    

    2. When both ClickHouse and Kafka are clusters, creating analysis fields runs into the following problem (screenshots omitted):


    Error: query failed: code: 47, message: Received from 192.168.10.150:9001. DB::Exception: There's no column 'nginx_local.status' in table 'nginx_local': While processing nginx_local.status.

    Stack trace:

    0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xba37dda in /usr/bin/clickhouse
    1. DB::TranslateQualifiedNamesMatcher::visit(DB::ASTIdentifier&, std::__1::shared_ptr<DB::IAST>&, DB::TranslateQualifiedNamesMatcher::Data&) @ 0x16e817d2 in /usr/bin/clickhouse
    2. DB::TranslateQualifiedNamesMatcher::visit(std::__1::shared_ptr<DB::IAST>&, DB::TranslateQualifiedNamesMatcher::Data&) @ 0x16e81432 in /usr/bin/clickhouse
    3. DB::InDepthNodeVisitor<DB::TranslateQualifiedNamesMatcher, true, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0x16e25e97 in /usr/bin/clickhouse
    4. DB::InDepthNodeVisitor<DB::TranslateQualifiedNamesMatcher, true, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0x16e25eaf in /usr/bin/clickhouse
    5. DB::InDepthNodeVisitor<DB::TranslateQualifiedNamesMatcher, true, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0x16e25eaf in /usr/bin/clickhouse
    6. DB::TreeRewriter::analyzeSelect(std::__1::shared_ptr<DB::IAST>&, DB::TreeRewriterResult&&, DB::SelectQueryOptions const&, std::__1::vector<DB::TableWithColumnNamesAndTypes, std::__1::allocator<DB::TableWithColumnNamesAndTypes> > const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::TableJoin>) const @ 0x16e0e56b in /usr/bin/clickhouse
    7. ? @ 0x16b9b3cc in /usr/bin/clickhouse
    8. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, DB::SubqueryForSet, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, DB::SubqueryForSet> > >, std::__1::unordered_map<DB::PreparedSetKey, std::__1::shared_ptr<DB::Set>, DB::PreparedSetKey::Hash, std::__1::equal_to<DB::PreparedSetKey>, std::__1::allocator<std::__1::pair<DB::PreparedSetKey const, std::__1::shared_ptr<DB::Set> > > >) @ 0x16b97aa0 in /usr/bin/clickhouse
    9. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x16b94f37 in /usr/bin/clickhouse
    10. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x16be5946 in /usr/bin/clickhouse
    11. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x16be3614 in /usr/bin/clickhouse
    12. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x16b49a83 in /usr/bin/clickhouse
    13. ? @ 0x16ecd420 in /usr/bin/clickhouse
    14. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x16ecaed5 in /usr/bin/clickhouse
    15. DB::TCPHandler::runImpl() @ 0x17b4035c in /usr/bin/clickhouse
    16. DB::TCPHandler::run() @ 0x17b533d9 in /usr/bin/clickhouse
    17. Poco::Net::TCPServerConnection::start() @ 0x1a98c1b3 in /usr/bin/clickhouse
    18. Poco::Net::TCPServerDispatcher::run() @ 0x1a98d5ad in /usr/bin/clickhouse
    19. Poco::PooledThread::run() @ 0x1ab4923d in /usr/bin/clickhouse
    20. Poco::ThreadImpl::runnableEntry(void*) @ 0x1ab46882 in /usr/bin/clickhouse
    21. start_thread @ 0x81cf in /usr/lib64/libpthread-2.28.so
    22. __GI___clone @ 0x39d83 in /usr/lib64/libc-2.28.so
    : While executing Remote
    

    Log library table schema:

    CREATE TABLE `test`.`nginx_local` ON CLUSTER 'cloki'
    (
        `source_type` String,
        `file` String,
        `host` String,
        `_time_second_` DateTime,
        `_time_nanosecond_` DateTime64(9, 'Asia/Shanghai'),
        `_raw_log_` String CODEC(ZSTD(1)),
        INDEX idx_raw_log _raw_log_ TYPE tokenbf_v1(30720, 2, 0) GRANULARITY 1
    )
    ENGINE = ReplicatedMergeTree('/clickhouse/tables/test.nginx_local/{shard}', '{replica}')
    PARTITION BY toYYYYMMDD(_time_second_)
    ORDER BY _time_second_
    TTL toDateTime(_time_second_) + INTERVAL 7 DAY
    SETTINGS index_granularity = 8192;
    ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
    ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
    
    ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
    ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
    
    ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
    ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
    
    ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
    ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
    ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(String);
    ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(String);
    
    ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
    ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
    ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
    ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
    
    ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
    ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
    
    ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
    ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
    
    ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
    ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
    
    ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
    ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
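
    Since the error complains that status is missing from nginx_local even though the ALTERs above were issued ON CLUSTER, it may help to verify that the column really landed on every shard and replica before querying the distributed table. A diagnostic sketch (cluster, database, and table names are taken from this report):

    SELECT hostName() AS host, table, name, type
    FROM clusterAllReplicas('cloki', system.columns)
    WHERE database = 'test' AND table IN ('nginx', 'nginx_local') AND name = 'status';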
    

    3. Adding, deleting, and modifying several parsed fields within a short time can exceed ClickHouse's replicated DDL limit:

    Cannot execute replicated DDL query, maximum retries exceeded
    
  • The Timestamp column of an OTEL-standard logs table is not recognized as the time field

    Timestamp DateTime64(9) CODEC(Delta, ZSTD(1)),
    

    https://github.com/clickvisual/clickvisual/blob/c777970d11fdd1febce52bf0e7273c392f77f7a4/api/internal/service/inquiry/clickhouse.go#L865-L867

    Only DateTime64(3) is supported there.
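
    Until that restriction is lifted, one workaround (a sketch, assuming you can create objects alongside the OTEL table; the view and table names are hypothetical) is to expose a DateTime64(3) alias for clickvisual to read:

    CREATE VIEW otel_logs_for_cv AS
    SELECT toDateTime64(Timestamp, 3) AS _time_, *
    FROM otel_logs;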

  • [BUG]: In the log list, Bool-type fields show only the key, and the value is blank

    Describe the bug: A clear and concise description of what the bug is, ideally within 20 words.

    ClickVisual running environment. Please provide the following information:

    • ClickVisual version:

    • ClickVisual.LOG:

  • Suggestion: DingTalk alerts should support @-mentioning a specific DingTalk user

    In DingTalk group alerts, because there are many people in the alert group, we would like the push to @-mention one specific user. prometheus-webhook-dingtalk already supports DingTalk @user mentions; please let clickvisual's alert rule options accept a DingTalk user's phone number, for example:

    groups:
    - name: default
      rules:
      - alert: ec639018_631c_46e1_959d_5da00911a1d9_70
        expr: clickvisual_alert_metrics{uuid="ec639018-631c-46e1-959d-5da00911a1d9",alarmId="22",filterId="70"} offset 10s>1
        for: 1m
        labels:
          service: dingtalk
          severity: warning
        annotations:
          summary: "告警 {{ $labels.name }}"
          description: "{{ $labels.desc }}  (当前值: {{ $value }})"
          user: "@138xxxxxxxx"
    
  • user list

    Welcome to ClickVisual! To help us learn who is using it, please append your organization's info following this sample:

    Organization: ClickVisual (Required)
    Location: Wuhan, China (Required)
    Contact: email or official website (Optional)
    Purpose: used as our business log UI / big-data behavior analytics (Required)

    Thanks again for your participation!

Related tags
Go-clickhouse - ClickHouse client for Go

ClickHouse client for Go 1.18+ This client uses native protocol to communicate w

Jan 9, 2023
Bifrost: a production-oriented heterogeneous middleware that syncs MySQL to Redis, MongoDB, ClickHouse, MySQL, and other services

Bifrost is a production-oriented heterogeneous middleware that syncs MySQL to services such as Redis and ClickHouse. In Marvel, the Bifrost rainbow bridge can carry Thor to Asgard and Earth; this Bifrost can sync the data in your MySQL, in full and in real time, to: Redis MongoDB Cl

Dec 30, 2022
ClickHouse http proxy and load balancer

chproxy English | 简体中文 Chproxy, is an http proxy and load balancer for ClickHouse database. It provides the following features: May proxy requests to

Jan 3, 2023
Collects many small inserts to ClickHouse and send in big inserts

ClickHouse-Bulk Simple Yandex ClickHouse insert collector. It collect requests and send to ClickHouse servers. Installation Download binary for you pl

Dec 28, 2022
Distributed tracing using OpenTelemetry and ClickHouse

Distributed tracing backend using OpenTelemetry and ClickHouse Uptrace is a dist

Jan 2, 2023
MySQL-to-MySQL lightweight multi-threaded table-level data sync

goMysqlSync: golang MySQL-to-MySQL lightweight multi-threaded table-level data sync. Test run: set the current binlog position and start it with go run main.go -position mysql-bin.000001 1 1619431429. Query the current binlog position, where parameter n is a number of seconds; the query re

Nov 15, 2022
BQB is a lightweight and easy to use query builder that works with sqlite, mysql, mariadb, postgres, and others.

Basic Query Builder Why Simple, lightweight, and fast Supports any and all syntax by the nature of how it works Doesn't require learning special synta

Dec 7, 2022
support clickhouse

Remote storage adapter This is a write adapter that receives samples via Prometheus's remote write protocol and stores them in Graphite, InfluxDB, cli

Dec 7, 2022
Jaeger ClickHouse storage plugin implementation

Jaeger ClickHouse Jaeger ClickHouse gRPC storage plugin. This is WIP and it is based on https://github.com/bobrik/jaeger/tree/ivan/clickhouse/plugin/s

Feb 15, 2022
Clickhouse support for GORM

clickhouse Clickhouse support for GORM Quick Start package main import ( "fmt" "github.com/sweetpotato0/clickhouse" "gorm.io/gorm" ) // User

Oct 18, 2022
Zinc Search engine. A lightweight alternative to elasticsearch that requires minimal resources, written in Go.

Zinc Zinc is a search engine that does full text indexing. It is a lightweight alternative to elasticsearch and runs in less than 100 MB of RAM. It us

Jan 8, 2023
🐳 A most popular sql audit platform for mysql

🐳 A most popular sql audit platform for mysql

Jan 6, 2023
Use SQL to query databases, logs and more from PlanetScale

Use SQL to instantly query PlanetScale databases, branches and more. Open source CLI. No DB required.

Sep 30, 2022
WAL-G is an archival restoration tool for PostgreSQL, MySQL/MariaDB, and MS SQL Server (beta for MongoDB and Redis).

WAL-G is an archival restoration tool for PostgreSQL, MySQL/MariaDB, and MS SQL Server (beta for MongoDB and Redis).

Jan 1, 2023
A Go rest API project that is following solid and common principles and is connected to local MySQL database.

This is an intermediate-level go project that running with a project structure optimized RESTful API service in Go. API's of that project is designed based on solid and common principles and connected to the local MySQL database.

Dec 25, 2022
Single binary CLI for generating structured JSON, CSV, Excel, etc.

fakegen: Single binary CLI for generating a random schema of M columns to populate N rows of JSON, CSV, Excel, etc. This program generates a random sc

Dec 26, 2022
Jobbuzz - Brunei job search database and alert notification

JobBuzz Brunei open source job search database and alert notification Developmen

Jul 30, 2022
MySQL replication topology management and HA

orchestrator [Documentation] orchestrator is a MySQL high availability and replication management tool, runs as a service and provides command line ac

Jan 4, 2023