1. What does `source` mean when creating a log store? Must the raw log be in JSON format? For example, when the raw log is the following:
{
"file":"/tmp/nginx.log",
"host":"k8s-node1",
"message":"{\"host\":\"173.88.189.116\", \"user-identifier\":\"dicki2125\", \"datetime\":\"27/Aug/2022:08:59:20 +0000\", \"method\": \"HEAD\", \"request\": \"/global/morph/virtual\", \"protocol\":\"HTTP/1.0\", \"status\":203, \"bytes\":12192, \"referer\": \"http://www.globalschemas.io/streamline\"}",
"source_type":"file",
"timestamp":"2022-08-27T09:25:48.332296833Z"
}
A simple setting would be:
{
"file":"/tmp/nginx.log",
"host":"k8s-node1",
"message":"",
"source_type":"file",
"timestamp":"2022-08-27T08:59:42.258757055Z"
}
A complex setting would be:
{
"file":"/tmp/nginx.log",
"host":"k8s-node1",
"message":"{\"host\":\"99.176.14.146\", \"user-identifier\":\"dicki6073\", \"datetime\":\"27/Aug/2022:08:59:20 +0000\", \"method\": \"DELETE\", \"request\": \"/implement\", \"protocol\":\"HTTP/1.1\", \"status\":100, \"bytes\":17719, \"referer\": \"https://www.dynamicstrategize.name/whiteboard/metrics\"}",
"source_type":"file",
"timestamp":"2022-08-27T08:59:42.258757055Z"
}
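The difference between the two settings is just JSON string escaping: when a JSON-formatted log line is embedded in the `message` field, its inner quotes must be escaped so that the envelope itself remains valid JSON. A minimal sketch of that round trip (the field values are illustrative, not taken from a real collector):

```python
import json

# Hypothetical inner nginx access-log record, itself JSON.
inner = {"host": "99.176.14.146", "method": "DELETE", "status": 100}

# The collector envelope: json.dumps() escapes the inner quotes,
# producing the \" sequences seen in the "complex" example above.
envelope = {
    "file": "/tmp/nginx.log",
    "host": "k8s-node1",
    "message": json.dumps(inner),
    "source_type": "file",
}

line = json.dumps(envelope)               # one valid JSON log line
decoded = json.loads(line)                # envelope parses back cleanly
payload = json.loads(decoded["message"])  # inner record recovered intact
print(payload["status"])                  # → 100
```

If the inner quotes are left unescaped, as in the first example above, the envelope is not parseable JSON at all, which is why the "simple" setting leaves `message` as a plain string.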
2. When both ClickHouse and Kafka are deployed as clusters, creating an analysis field fails with the following error:
Error: query failed: code: 47, message: Received from 192.168.10.150:9001. DB::Exception: There's no column 'nginx_local.status' in table 'nginx_local': While processing nginx_local.status. Stack trace:
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xba37dda in /usr/bin/clickhouse
1. DB::TranslateQualifiedNamesMatcher::visit(DB::ASTIdentifier&, std::__1::shared_ptr<DB::IAST>&, DB::TranslateQualifiedNamesMatcher::Data&) @ 0x16e817d2 in /usr/bin/clickhouse
2. DB::TranslateQualifiedNamesMatcher::visit(std::__1::shared_ptr<DB::IAST>&, DB::TranslateQualifiedNamesMatcher::Data&) @ 0x16e81432 in /usr/bin/clickhouse
3. DB::InDepthNodeVisitor<DB::TranslateQualifiedNamesMatcher, true, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0x16e25e97 in /usr/bin/clickhouse
4. DB::InDepthNodeVisitor<DB::TranslateQualifiedNamesMatcher, true, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0x16e25eaf in /usr/bin/clickhouse
5. DB::InDepthNodeVisitor<DB::TranslateQualifiedNamesMatcher, true, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0x16e25eaf in /usr/bin/clickhouse
6. DB::TreeRewriter::analyzeSelect(std::__1::shared_ptr<DB::IAST>&, DB::TreeRewriterResult&&, DB::SelectQueryOptions const&, std::__1::vector<DB::TableWithColumnNamesAndTypes, std::__1::allocator<DB::TableWithColumnNamesAndTypes> > const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::TableJoin>) const @ 0x16e0e56b in /usr/bin/clickhouse
7. ? @ 0x16b9b3cc in /usr/bin/clickhouse
8. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, DB::SubqueryForSet, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, DB::SubqueryForSet> > >, std::__1::unordered_map<DB::PreparedSetKey, std::__1::shared_ptr<DB::Set>, DB::PreparedSetKey::Hash, std::__1::equal_to<DB::PreparedSetKey>, std::__1::allocator<std::__1::pair<DB::PreparedSetKey const, std::__1::shared_ptr<DB::Set> > > >) @ 0x16b97aa0 in /usr/bin/clickhouse
9. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x16b94f37 in /usr/bin/clickhouse
10. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x16be5946 in /usr/bin/clickhouse
11. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x16be3614 in /usr/bin/clickhouse
12. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x16b49a83 in /usr/bin/clickhouse
13. ? @ 0x16ecd420 in /usr/bin/clickhouse
14. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x16ecaed5 in /usr/bin/clickhouse
15. DB::TCPHandler::runImpl() @ 0x17b4035c in /usr/bin/clickhouse
16. DB::TCPHandler::run() @ 0x17b533d9 in /usr/bin/clickhouse
17. Poco::Net::TCPServerConnection::start() @ 0x1a98c1b3 in /usr/bin/clickhouse
18. Poco::Net::TCPServerDispatcher::run() @ 0x1a98d5ad in /usr/bin/clickhouse
19. Poco::PooledThread::run() @ 0x1ab4923d in /usr/bin/clickhouse
20. Poco::ThreadImpl::runnableEntry(void*) @ 0x1ab46882 in /usr/bin/clickhouse
21. start_thread @ 0x81cf in /usr/lib64/libpthread-2.28.so
22. __GI___clone @ 0x39d83 in /usr/lib64/libc-2.28.so : While executing Remote
Log store table schema:
CREATE TABLE `test`.`nginx_local` on cluster 'cloki'
(
`source_type` String,
`file` String,
`host` String,
_time_second_ DateTime,
_time_nanosecond_ DateTime64(9, 'Asia/Shanghai'),
_raw_log_ String CODEC(ZSTD(1)),
INDEX idx_raw_log _raw_log_ TYPE tokenbf_v1(30720, 2, 0) GRANULARITY 1
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/test.nginx_local/{shard}', '{replica}')
PARTITION BY toYYYYMMDD(_time_second_)
ORDER BY _time_second_
TTL toDateTime(_time_second_) + INTERVAL 7 DAY
SETTINGS index_granularity = 8192
;
DDL statements subsequently issued while the analysis field was added, dropped, and re-typed (note the add/drop churn across `nginx_local` and the distributed table `nginx`):
ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(String);
ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(String);
ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` DROP COLUMN IF EXISTS `status`;
ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
ALTER TABLE `test`.`nginx` ON CLUSTER `cloki` ADD COLUMN IF NOT EXISTS `status` Nullable(Int64);
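The "no column 'nginx_local.status'" error is consistent with a schema mismatch: the query runs through the distributed table `nginx`, which already carries `status`, while some replica of `nginx_local` does not (the `ON CLUSTER` DDL has not reached it yet, or failed there). A hypothetical helper sketching that consistency check; the per-replica column lists would be gathered by the caller, e.g. from `system.columns` on each node:

```python
def missing_on_replicas(expected_cols, replica_cols):
    """Return, per replica, the columns the query will reference
    that are absent from that replica's local table."""
    expected = set(expected_cols)
    return {
        replica: sorted(expected - set(cols))
        for replica, cols in replica_cols.items()
        if expected - set(cols)
    }

# Illustrative data: node2 never received the ADD COLUMN for `status`.
expected = ["source_type", "file", "host", "_raw_log_", "status"]
replicas = {
    "node1": ["source_type", "file", "host", "_raw_log_", "status"],
    "node2": ["source_type", "file", "host", "_raw_log_"],
}
print(missing_on_replicas(expected, replicas))  # → {'node2': ['status']}
```

Any replica reported here would raise exactly this exception when the distributed query fans out to it.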
3. When multiple parsed fields are added, deleted, or modified within a short time, ClickHouse's replicated DDL limit may be exceeded:
Cannot execute replicated DDL query, maximum retries exceeded
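One way to stay under the replicated-DDL retry limit is to debounce field edits and issue only their net effect: in the ALTER sequence above, five add/drop round trips on `status` collapse to a single ADD COLUMN. A sketch of that coalescing (the function name and statement templates are hypothetical, not part of any tool's API):

```python
def coalesce_field_ops(ops):
    """Collapse an ordered list of (action, column, type) field edits
    into one net DDL statement per column: only the last action survives."""
    net = {}
    for action, column, col_type in ops:
        net[column] = (action, col_type)
    stmts = []
    for column, (action, col_type) in net.items():
        if action == "ADD":
            stmts.append(
                f"ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` "
                f"ADD COLUMN IF NOT EXISTS `{column}` {col_type}"
            )
        else:  # DROP
            stmts.append(
                f"ALTER TABLE `test`.`nginx_local` ON CLUSTER `cloki` "
                f"DROP COLUMN IF EXISTS `{column}`"
            )
    return stmts

# The add/drop churn from the log above nets out to one statement.
churn = [
    ("ADD", "status", "Nullable(Int64)"),
    ("DROP", "status", None),
    ("ADD", "status", "Nullable(String)"),
    ("DROP", "status", None),
    ("ADD", "status", "Nullable(Int64)"),
]
for stmt in coalesce_field_ops(churn):
    print(stmt)
```

Batching like this reduces the number of `ON CLUSTER` tasks pushed through the distributed DDL queue, which is what hits the retry ceiling when edits arrive in quick succession.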