Distributed tracing using OpenTelemetry and ClickHouse

Uptrace is a distributed tracing system that uses OpenTelemetry to collect data and the ClickHouse database to store it. ClickHouse is the only dependency.

Screenshot goes here

Features:

  • OpenTelemetry protocol via gRPC (:14317) and HTTP (:14318)
  • Span/Trace grouping
  • SQL-like query language
  • Percentiles
  • Systems dashboard

Roadmap:

  • Errors/logs support
  • More dashboards for services and hosts
  • ClickHouse cluster support
  • TLS support
  • Improved SQL support using CockroachDB SQL parser

Getting started

  • The Docker example allows you to run Uptrace with a single command.
  • The installation guide provides pre-compiled binaries for Linux, macOS, and Windows.

Running Uptrace locally

To run Uptrace locally, you need Go 1.18 and ClickHouse.

Step 1. Create uptrace ClickHouse database:

clickhouse-client -q "CREATE DATABASE uptrace"

Step 2. Reset ClickHouse database schema:

go run cmd/uptrace/main.go ch reset

Step 3. Start Uptrace:

go run cmd/uptrace/main.go serve

Step 4. Open Uptrace UI at http://localhost:14318

Uptrace monitors itself using the uptrace-go OpenTelemetry distro. To get some test data, just reload the UI a few times.
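
To send data from your own application, point any OpenTelemetry SDK at the OTLP endpoints listed above. The snippet below is a minimal, illustrative sketch using the uptrace-go distro; the DSN and service name are placeholders, so copy the real DSN from your project's settings or uptrace.yml.

package main

import (
    "context"

    "github.com/uptrace/uptrace-go/uptrace"
    "go.opentelemetry.io/otel"
)

func main() {
    ctx := context.Background()

    // Placeholder DSN: copy the real one for your project from the Uptrace UI or uptrace.yml.
    uptrace.ConfigureOpentelemetry(
        uptrace.WithDSN("http://project1_secret_token@localhost:14317/1"),
        uptrace.WithServiceName("myservice"),
    )
    // Flush and stop the exporter on exit.
    defer uptrace.Shutdown(ctx)

    tracer := otel.Tracer("example")

    ctx, span := tracer.Start(ctx, "parent-operation")
    defer span.End()

    _, child := tracer.Start(ctx, "child-operation")
    child.End()
}

After running it once, the spans should show up in the UI for that project.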

Running UI locally

You can also start the UI locally:

cd vue
pnpm install
pnpm serve

And open http://localhost:19876

Comments
  • Update K8s chart

    We want to install Uptrace via https://github.com/uptrace/helm-charts and ran into the following problems.

    • The default values for config.listen.grpc and config.listen.http are wrong. They should be {addr: ":14317"}, not :14317.
    • Uptrace fails with the error "db.dsn option can not be empty". What value should we provide for it?
  • Update screenshots to use correct redirect url

    I am trying to set up OpenID Connect with Google, following this tutorial: https://uptrace.dev/get/auth-google.html. The screenshot and the Uptrace config don't match: in the screenshot the Authorized redirect URI is set to http://localhost:14318/api/v1/sso/oidc/callback, but the Uptrace config uses id: google, so the Authorized redirect URI should be set to http://localhost:14318/api/v1/sso/google/callback.

    The same issue exists in the Keycloak docs: https://uptrace.dev/get/auth-keycloak.html#create-a-client-for-uptrace

    If anyone has a correct screenshot, please upload it here.

    • [x] update google screenshot
    • [x] update keycloak screenshot
  • Add user templates

    Why

    Creating new users requires a configuration update and restart. This patch allows external auth providers to submit valid tokens with arbitrary usernames, so that users do not need to maintain uptrace credentials.

    Addresses #76 as a first step. This implementation can be extended to have native support for some common auth providers.

    How

    Allows configuring 'user templates', which resolves the user ID based on the configured audience (aud) of the token. The subject (sub) is then used as the username. This allows external auth providers to sign valid Uptrace JWTs, while passing an arbitrary username.

    Replaced the deprecated JWT library https://github.com/dgrijalva/jwt-go with https://github.com/golang-jwt/jwt.

    Administrators may need to write their own scripts to map external provider JWTs to valid Uptrace ones; for example validating and converting a Cloudflare Access JWT and mapping the email claim to sub, and setting the configured audience. The script can MITM uptrace to set the token cookie.

    This implementation is vendor-agnostic by design, so as to allow any auth service (LDAP, PKI, Cloudflare, etc.) to be integrated in front of Uptrace. It also minimizes the amount of auth code added to Uptrace.

    Future Considerations

    1. Consider passing user.id and user.username in uptrace internal spans for auditing purposes.
    2. Consider supporting some external auth providers as built-ins, such as LDAP, SSO, Cloudflare, etc. This could be configured as a property of each user_template entry.

    Example

    With the following uptrace.yml snippet:

    user_templates:
       - id: 2
         audience: uptrace-users
    
    secret_key: 102c1a557c314fc28198acd017960843
    

    We can generate a token using the jwt-cli:

    $ jwt encode --secret 102c1a557c314fc28198acd017960843 '{"sub": "amperes", "aud": "uptrace-users"}'
    eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhdWQiOiJ1cHRyYWNlLXVzZXJzIiwiaWF0IjoxNjYyOTMxNDQ4LCJzdWIiOiJhbXBlcmVzIn0.SBxfygB9Fw9nOZaOVrtz6HOLTr3H5TM5XLaoZ7rFlbs
    

    Setting this token as the token cookie automatically signs you into Uptrace (screenshot attached).
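
    For reference, here is a minimal Go sketch that produces an equivalent token with the golang-jwt library this PR switches to (the /v4 module path is an assumption; use whichever major version the project vendors):

    package main

    import (
        "fmt"

        "github.com/golang-jwt/jwt/v4"
    )

    func main() {
        // Same claims and secret as the jwt-cli example above.
        claims := jwt.MapClaims{
            "sub": "amperes",
            "aud": "uptrace-users",
        }

        token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
        signed, err := token.SignedString([]byte("102c1a557c314fc28198acd017960843"))
        if err != nil {
            panic(err)
        }
        fmt.Println(signed)
    }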
  • feat: add oidc sso user provider

    feat: add oidc sso user provider

    TODO:

    • [x] Config options
    • [x] Initialize provider
    • [x] Add button in login page (if configured)
    • [x] Add callback endpoint, exchange token/user info
    • [x] Update config example
    • [x] Polish UI
    • [x] Add claim config property (to use something else than email address)
    • [x] Check that claim value is not empty
  • Initial migrations no longer work

    Description

    If a new deployment is created with the latest uptrace-dev Docker image, the initial ClickHouse migrations do not succeed.

    This is because the ClickHouse query formatter does not replace all of the arguments within the migrations.

    The following lines declare which arguments exist: https://github.com/uptrace/uptrace/blob/93bb17c7b46c4d1dc721073b948bd1fb80b3a4af/pkg/bunapp/app.go#L363-L370

    But the resulting query looks like this:

    CREATE TABLE uptrace.spans_index ?ON_CLUSTER (
      project_id UInt32 Codec(DoubleDelta, ?CODEC),
      system LowCardinality(String) Codec(?CODEC),
      group_id UInt64 Codec(Delta, ?CODEC),
    
      trace_id UUID Codec(?CODEC),
      id UInt64 Codec(?CODEC),
      parent_id UInt64 Codec(?CODEC),
      name LowCardinality(String) Codec(?CODEC),
      event_name String Codec(?CODEC),
      is_event UInt8 ALIAS event_name != '',
      kind LowCardinality(String) Codec(?CODEC),
      time DateTime Codec(Delta, ?CODEC),
      duration Int64 Codec(Delta, ?CODEC),
      count Float32 Codec(?CODEC),
    
      status_code LowCardinality(String) Codec(?CODEC),
      status_message String Codec(?CODEC),
    
      link_count UInt8 Codec(?CODEC),
      event_count UInt8 Codec(?CODEC),
      event_error_count UInt8 Codec(?CODEC),
      event_log_count UInt8 Codec(?CODEC),
    
      all_keys Array(LowCardinality(String)) Codec(?CODEC),
      attr_keys Array(LowCardinality(String)) Codec(?CODEC),
      attr_values Array(String) Codec(?CODEC),
    
      "service.name" LowCardinality(String) Codec(?CODEC),
      "host.name" LowCardinality(String) Codec(?CODEC),
    
      "db.system" LowCardinality(String) Codec(?CODEC),
      "db.statement" String Codec(?CODEC),
      "db.operation" LowCardinality(String) Codec(?CODEC),
      "db.sql.table" LowCardinality(String) Codec(?CODEC),
    
      "log.severity" LowCardinality(String) Codec(?CODEC),
      "log.message" String Codec(?CODEC),
    
      "exception.type" LowCardinality(String) Codec(?CODEC),
      "exception.message" String Codec(?CODEC),
    
      INDEX idx_attr_keys attr_keys TYPE bloom_filter(0.01) GRANULARITY 64,
      INDEX idx_duration duration TYPE minmax GRANULARITY 1
    )
    ENGINE = ?REPLICATEDMergeTree()
    ORDER BY (project_id, system, group_id, time)
    PARTITION BY toDate(time)
    TTL toDate(time) + INTERVAL ?SPANS_TTL DELETE
    SETTINGS ttl_only_drop_parts = 1,
             storage_policy = ?SPANS_STORAGE
    

    Only the first argument ?DB is replaced correctly.

  • [question] Alert rule doesn't work

    Hello. I have configured a custom metric from spans:

      - name: uptrace.tracing.spans_duration
        description: Spans duration (excluding events)
        instrument: histogram
        unit: microseconds
        value: span.duration / 1000
        attrs:
          - span.system as system
          - service.name as service
          - host.name as host
          - span.status_code as status
          - span.name as span_name
        where: not span.is_event      
    

    and an alert rule:

      - name: Duration of RPC more than 1min 
        metrics:
          - uptrace.tracing.spans_duration as $spans_duration
        query:
          - $spans_duration > 10s group by $spans_duration.span_name
        for: 2m
        projects: [4]
        annotations:
          summary: "Duration span {{ $labels.span_name }} = {{ $values.spans_duration }}"
    

    but it doesn't work. However, the dashboard shows that one of the spans exceeds 10 seconds (screenshot attached). What is wrong with the alert rule? Thanks so much for the help!

  • feat: add missing Tempo APIs

    Here is how I test/develop it. From the root directory:

    docker-compose up -d
    
    # make sure everything is up
    docker-compose ps
    
    # start uptrace without a container
    DEBUG=2 go run cmd/uptrace/main.go serve
    
    # open grafana at http://localhost:3000 and choose the Uptrace datasource
    

    That is a development workflow that does NOT run Uptrace in a container so we can quickly reload it. We probably also need a separate docker-compose example that does that.

  • Source available, not open source

    You claim this project is "open source" while you are using a non-OSI-approved license. This project is not open source.

    If you want to understand why this is a problem and misleading, have a read of this article: https://www.theregister.com/2022/09/05/open_source_databases/

  • Feat/spans json response

    This PR adds a new endpoint like this:

    curl -v http://localhost:14318/api/traces/0b0462c3-4698-ff90-dbc6-6870ced6775b/json
    
    {
       "resourceSpans":[
          {
             "instrumentationLibrarySpans":[
                {
                   "spans":[
                      {
                         "traceId":"CwRiw0aY/5DbxmhwztZ3Ww==",
                         "spanId":"JdNs9MF0KeQ=",
                         "parentSpanId":"AAAAAAAAAAA=",
                         "name":"GET /*path",
                         "kind":"SPAN_KIND_SERVER",
                         "startTimeUnixNano":"1651733829634968350",
                         "endTimeUnixNano":"1651733829783441370",
                         "attributes":[
                            {
                               "key":"http.flavor",
                               "value":{
                                  "stringValue":"1.1"
                               }
                            },
                            {
                               "key":"net.host.name",
                               "value":{
                                  "stringValue":"localhost"
                               }
                            },
                            {
                               "key":"http.wrote_bytes",
                               "value":{
                                  "intValue":"2680172"
                               }
                            },
                            {
                               "key":"telemetry.sdk.name",
                               "value":{
                                  "stringValue":"opentelemetry"
                               }
                            },
                            {
                               "key":"http.user_agent.version",
                               "value":{
                                  "stringValue":"101.0.4951.41"
                               }
                            },
                            {
                               "key":"http.route",
                               "value":{
                                  "stringValue":"/*path"
                               }
                            },
                            {
                               "key":"service.name",
                               "value":{
                                  "stringValue":"serve"
                               }
                            },
                            {
                               "key":"http.user_agent.name",
                               "value":{
                                  "stringValue":"Chrome"
                               }
                            },
                            {
                               "key":"telemetry.sdk.language",
                               "value":{
                                  "stringValue":"go"
                               }
                            },
                            {
                               "key":"http.target",
                               "value":{
                                  "stringValue":"/js/chunk-vendors.76f0d740.js.map"
                               }
                            },
                            {
                               "key":"http.user_agent.os",
                               "value":{
                                  "stringValue":"Linux"
                               }
                            },
                            {
                               "key":"http.route.param.path",
                               "value":{
                                  "stringValue":"js/chunk-vendors.76f0d740.js.map"
                               }
                            },
                            {
                               "key":"http.user_agent",
                               "value":{
                                  "stringValue":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36"
                               }
                            },
                            {
                               "key":"host.name",
                               "value":{
                                  "stringValue":"vmihailenco"
                               }
                            },
                            {
                               "key":"net.peer.ip",
                               "value":{
                                  "stringValue":"::1"
                               }
                            },
                            {
                               "key":"net.host.port",
                               "value":{
                                  "intValue":"14318"
                               }
                            },
                            {
                               "key":"otel.library.version",
                               "value":{
                                  "stringValue":"semver:0.31.0"
                               }
                            },
                            {
                               "key":"http.method",
                               "value":{
                                  "stringValue":"GET"
                               }
                            },
                            {
                               "key":"http.host",
                               "value":{
                                  "stringValue":"localhost:14318"
                               }
                            },
                            {
                               "key":"http.scheme",
                               "value":{
                                  "stringValue":"http"
                               }
                            },
                            {
                               "key":"net.transport",
                               "value":{
                                  "stringValue":"ip_tcp"
                               }
                            },
                            {
                               "key":"net.peer.port",
                               "value":{
                                  "intValue":"46672"
                               }
                            },
                            {
                               "key":"http.user_agent.os_version",
                               "value":{
                                  "stringValue":"x86_64"
                               }
                            },
                            {
                               "key":"http.client_ip",
                               "value":{
                                  "stringValue":"::1"
                               }
                            },
                            {
                               "key":"otel.library.name",
                               "value":{
                                  "stringValue":"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
                               }
                            },
                            {
                               "key":"telemetry.sdk.version",
                               "value":{
                                  "stringValue":"1.6.3"
                               }
                            },
                            {
                               "key":"http.status_code",
                               "value":{
                                  "intValue":"200"
                               }
                            }
                         ],
                         "status":{
                            "code":"STATUS_CODE_OK"
                         }
                      }
                   ]
                }
             ]
          }
       ]
    }
    

    @lmangani I am using the official protobuf bindings for Go from go.opentelemetry.io/proto/otlp/trace/v1 and it seems to work fine, but the example you provided also contains some fields that are not part of OTLP, for example:

          "instrumentationLibrarySpans": [
            {
              "instrumentationLibrary": {},
              "spans": [
                {
                  "traceID": "d6e9329d67b6146b",
                  "spanID": "1234",
                  "name": "span from bash!",
                  "references": [],
                  "startTime": 1651401486889077,
                  "startTimeUnixNano": 1651401486889077000,
                  "endTimeUnixNano": 1651401486989077000,
    
    // OTLP DOES NOT HAVE THESE FIELDS START
    
                  "duration": 100000,
                  "tags": [
                    {
                      "key": "http.method",
                      "value": "GET",
                      "type": "string"
                    },
                    {
                      "key": "http.path",
                      "value": "/api",
                      "type": "string"
                    }
                  ],
                  "logs": [],
                  "processID": "p1",
                  "warnings": null,
                  "localEndpoint": {
                    "serviceName": "shell script"
                  },
    
    // OTLP DOES NOT HAVE THESE FIELDS END
    
                  "traceId": "AAAAAAAAAADW6TKdZ7YUaw==",
                  "spanId": "AAAAAAAAEjQ="
                }
              ]
            }
          ]
    
  • Clickhouse request errors

    Good day!

    I've started setting up Uptrace via Docker and got errors in the UI (screenshot attached):

    *ch.Error: DB::Exception: Unknown function toFloat64OrDefault. Maybe you meant: ['toFloat64OrNull','dictGetFloat64OrDefault']: While processing toFloat64OrDefault(span.duration)

    Uptrace logs:

    [bunrouter]  09:45:30.094   500     16.918ms   GET      /api/tracing/groups?time_gte=2021-12-29T08:46:00.000Z&time_lt=2021-12-29T09:46:00.000Z&query=group+by+span.group_id+%7C+span.count_per_min+%7C+span.error_pct+%7C+p50(span.duration)+%7C+p90(span.duration)+%7C+p99(span.duration)&system=http:unknown_service          *ch.Error: DB::Exception: Unknown function toFloat64OrDefault. Maybe you meant: ['toFloat64OrNull','dictGetFloat64OrDefault']: While processing toFloat64OrDefault(`span.duration`)
    
    [ch]  09:45:30.696   SELECT               68.642ms  SELECT count() / 60 AS "span.count_per_min", countIf(`span.status_code` = 'error') / count() AS "span.error_pct", quantileTDigest(0.5)(toFloat64OrDefault(s."span.duration")) AS "p50(span.duration)", quantileTDigest(0.9)(toFloat64OrDefault(s."span.duration")) AS "p90(span.duration)", quantileTDigest(0.99)(toFloat64OrDefault(s."span.duration")) AS "p99(span.duration)", s."span.group_id" AS "span.group_id", any(s."span.system") AS "span.system", any(s."span.name") AS "span.name" FROM "spans_index_buffer" AS "s" WHERE (s.`span.time` >= '2021-12-29 08:46:00') AND (s.`span.time` < '2021-12-29 09:46:00') AND (s.`span.system` = 'http:unknown_service') GROUP BY "span.group_id" LIMIT 1000       *ch.Error: DB::Exception: Unknown function toFloat64OrDefault. Maybe you meant: ['toFloat64OrNull','dictGetFloat64OrDefault']: While processing toFloat64OrDefault(`span.duration`) 
    
    [bunrouter]  09:45:30.607   500    108.234ms   GET      /api/tracing/groups?time_gte=2021-12-29T08:46:00.000Z&time_lt=2021-12-29T09:46:00.000Z&query=group+by+span.group_id+%7C+span.count_per_min+%7C+span.error_pct+%7C+p50(span.duration)+%7C+p90(span.duration)+%7C+p99(span.duration)&system=http:unknown_service          *ch.Error: DB::Exception: Unknown function toFloat64OrDefault. Maybe you meant: ['toFloat64OrNull','dictGetFloat64OrDefault']: While processing toFloat64OrDefault(`span.duration`)
    

    ClickHouse version: altinity/clickhouse-server:21.8.12.1.testingarm (because I have a MacBook with an M1 chip)

  • Duration in Explore view seems to be 1000 too small

    Hello,

    I am using

    • uptrace 1.0.3
    • Data comes from
      • OpenTelemetry Collector
      • Jaeger + OpenTracing source data

    I have some issues with the duration unit in some dashboards:

    On the dashboard /explore/<project_id>/groups, I see the following (screenshot attached):

    We see that for group "Celery:run:check_clients_nonprd_resources" we have a max span duration of 302 ms :

    • In the API call, we can see that the max value is 301977114000, which is ~301e9.
    • When I dig deeper, I see that the longest spans are 5 minutes, which is in fact ~301 seconds (I verified this with Jaeger).
    • When I look at the ClickHouse database (via the query log), I replayed the ClickHouse query:
    SELECT
        group_id AS `span.group_id`,
        sum(count) / 60 AS `span.count_per_min`,
        sumIf(count, status_code = 'error') / sum(count) AS `span.error_pct`,
        quantileTDigest(0.5)(toFloat64OrDefault(duration)) AS `p50(span.duration)`,
        quantileTDigest(0.9)(toFloat64OrDefault(duration)) AS `p90(span.duration)`,
        quantileTDigest(0.99)(toFloat64OrDefault(duration)) AS `p99(span.duration)`,
        max(duration) AS `max(span.duration)`,
        any(system) AS `span.system`,
        any(name) AS `span.name`,
        any(event_name) AS `span.event_name`
    FROM spans_index AS s
    WHERE (project_id = 2) AND (time >= toDateTime('2022-09-21 12:02:00', 'UTC')) AND (time < toDateTime('2022-09-21 13:02:00', 'UTC')) AND (system = 'service:ccp-paris_prd') AND (kind = 'server')
    GROUP BY `span.group_id`
    ORDER BY `p99(span.duration)` DESC
    LIMIT 1000
    
    Query id: 32d3e1dd-3116-428a-ba9b-142c58e1324e
    
    ┌────────span.group_id─┬───span.count_per_min─┬─────span.error_pct─┬─p50(span.duration)─┬─p90(span.duration)─┬─p99(span.duration)─┬─max(span.duration)─┬─span.system───────────┬─span.name─────────────────────────────────────────────────────┬─span.event_name─┐
    │ 14900324603554143372 │                 0.85 │                  0 │       284028930000 │       296918750000 │       301977100000 │       301977114000 │ service:ccp-paris_prd │ Celery:run:check_clients_nonprd_resources                     │                 │
    

    I suspect an issue in the frontend visualization. As I am not a Vue expert, I was not able to dig in further.

  • Failed to convert spans to metrics.

    The error logs:

    2022-12-28T11:20:27.143Z	info	uptrace/main.go:111	starting Uptrace...	{"version": "v1.2.4", "config": "/etc/uptrace/uptrace.yml"}
    2022-12-28T11:20:27.160Z	info	tracing/span_processor.go:47	starting processing spans...	{"threads": 8, "batch_size": 8000, "buffer_size": 128000}
    2022-12-28T11:20:27.163Z	info	metrics/measure_processor.go:64	starting processing metrics...	{"threads": 8, "batch_size": 8000, "buffer_size": 128000}
    [ch]  11:20:27.384   CREATE VIEW         108.458ms  CREATE MATERIALIZED VIEW "metrics_uptrace_tracing_spans_duration_mv" ON CLUSTER "default_cluster" TO measure_minutes AS SELECT s.project_id, 'uptrace.tracing.spans_duration' AS metric, toStartOfMinute(s.time) AS time, 'histogram' AS instrument, xxHash64(arrayStringConcat([toString(s."system"), toString(s."group_id"), toString(s."service_name"), toString(s."host_name"), toString(s."status_code")], '-')) AS attrs_hash, ['system', 'group_id', 'service', 'host', 'status'] AS attr_keys, [toString(s."system"), toString(s."group_id"), toString(s."service_name"), toString(s."host_name"), toString(s."status_code")] AS attr_values, toJSONString(map('span.name', toString(any(s."name")))) AS annotations, count() AS count, sum(s."duration" / 1000) AS sum, quantilesBFloat16State(0.5)(toFloat32(s."duration" / 1000)) AS histogram FROM spans_index AS s WHERE (NOT toFloat64OrDefault(s.system IN ('log:trace', 'log:debug', 'log:info', 'log:warn', 'log:error', 'log:fatal', 'log:panic', 'exceptions', 'other-events')) = 1) GROUP BY s.project_id, toStartOfMinute(s.time), toString(s."system"), toString(s."group_id"), toString(s."service_name"), toString(s."host_name"), toString(s."status_code") 	  *ch.Error: DB::Exception: There was an error on [9.0.16.13:9000]: Code: 60. DB::Exception: Table default.spans_index doesn't exist. (UNKNOWN_TABLE) (version 22.3.10.22) 
    2022-12-28T11:20:27.384Z	error	metrics/init.go:38	initSpanMetrics failed	{"error": "createSpanMetric \"uptrace.tracing.spans_duration\" failed: createMatView failed: DB::Exception: There was an error on [9.0.16.13:9000]: Code: 60. DB::Exception: Table default.spans_index doesn't exist. (UNKNOWN_TABLE) (version 22.3.10.22)"}
    github.com/uptrace/uptrace/pkg/metrics.Init
    	github.com/uptrace/uptrace/pkg/metrics/init.go:38
    main.glob..func2
    	github.com/uptrace/uptrace/cmd/uptrace/main.go:139
    github.com/urfave/cli/v2.(*Command).Run
    	github.com/urfave/cli/[email protected]/command.go:271
    github.com/urfave/cli/v2.(*Command).Run
    	github.com/urfave/cli/[email protected]/command.go:264
    github.com/urfave/cli/v2.(*App).RunContext
    	github.com/urfave/cli/[email protected]/app.go:329
    github.com/urfave/cli/v2.(*App).Run
    	github.com/urfave/cli/[email protected]/app.go:306
    main.main
    	github.com/uptrace/uptrace/cmd/uptrace/main.go:68
    runtime.main
    	runtime/proc.go:250
    2022-12-28T11:20:27.384Z	info	uptrace/main.go:351	starting monitoring metrics...	{"rules": 0}
    

    The metrics_from_spans config is the same as https://github.com/uptrace/uptrace/blob/master/config/uptrace.yml#L67

  • How to override kind, status value from vector?

    https://github.com/uptrace/uptrace/blob/b402522a5188d7d14c0d697102e78708285fdda5/pkg/tracing/vector_handler.go#L90-L92

    curl -X POST \
      'http://uptrace:14318/api/v1/vector/logs' \
      --header 'uptrace-dsn: http://project2_secret_token@localhost:14318/2' \
      --header 'Content-Type: application/x-ndjson' \
      --data-raw '{
      "kind": "CLIENT",
      "log.message": "asdfasdfe-xx-28",
      "path": "/aaasdfew",
      "service.name": "aaa",
      "service.version": "1.1",
      "source_type": "http"
    }'
    

    This kind value can't be overridden. Is there any other way to override it?

  • Wordpress integration

    Hello

    I'm new to APM in general; I have only tried New Relic for a short time. I wonder how this compares to New Relic, and specifically how it works for, e.g., WordPress applications (and others). Does it work "out of the box" to collect metrics, slow queries, slow PHP functions, etc., or does it also require custom/special development on the application side to get it working in Uptrace?

    I know New Relic requires us to install their plugin. Is there some equivalent for Uptrace? Or can we do something in the root application to make it work?

    Thanks in advance, and very nice solution! Looking forward to trying it in a few of our projects.

  • API: Enable Jaeger Compatible Format

    Jaeger is still widely in use. Providing a way to write traces to Uptrace and retrieve them in Jaeger format could help users migrate towards the Uptrace UI. This would avoid the duplicate storage required to keep supporting Jaeger while transitioning to Uptrace, and would allow users to view traces within Grafana if needed.

    Operations used in Grafana

    /api/services
    	=> /api/services
    /api/operations
    	=> /api/services/MyService/operations
    /api/traces
    	=> /api/traces?operation=/&service=test_service&start=1664979972328000&end=1664983572328000&lookback=custom
    /traces/{trace_id}
    	=> /api/traces/{trace_id}
    

    Reference: https://github.com/jaegertracing/jaeger-idl/blob/main/proto/api_v2/query.proto
