OpenTelemetry log collection library

opentelemetry-log-collection

Status

This project was originally developed by observIQ under the name Stanza. It has been contributed to the OpenTelemetry project in order to accelerate development of the collector's log collection capabilities.

Note: This repository is not currently stable and will likely undergo significant changes in the near term.

Community

Stanza is an open source project. If you'd like to contribute, take a look at the contributing guide and developer guide.

Code of Conduct

Stanza follows the CNCF Code of Conduct.

Other questions?

Check out our FAQ, or open an issue with your question. We'd love to hear from you.

Owner
OpenTelemetry - CNCF
OpenTelemetry makes robust, portable telemetry a built-in feature of cloud-native software.
Comments
  • track_rotated

    Fixes #85, implementing the approach described in https://github.com/open-telemetry/opentelemetry-log-collection/issues/85#issuecomment-852280642.

    In addition:

    • I removed the unnecessary fingerprint comparison for the copy/truncate case, because it was repetitive.
    • I modified the file batching test case (the second part of the test) because the new approach handles already-discovered, recently-appended log files differently.
  • Update Changelog ahead of major set of breaking changes

    This PR is a proposal for a release that would contain several breaking changes.

    If this PR is approved, the following PRs will be merged, and a new release will be cut:

    • #364
    • #370
    • #371
    • #372
    • #397
    • #429
  • Bump github.com/onsi/gomega from 1.13.0 to 1.16.0 in /internal/tools

    Bumps github.com/onsi/gomega from 1.13.0 to 1.16.0.

    Release notes

    Sourced from github.com/onsi/gomega's releases.

    v1.16.0

    Features

    • feat: HaveHTTPStatus multiple expected values (#465) [aa69f1b]
    • feat: HaveHTTPHeaderWithValue() matcher (#463) [dd83a96]
    • feat: HaveHTTPBody matcher (#462) [504e1f2]
    • feat: formatter for HTTP responses (#461) [e5b3157]

    v1.15.0

    Fixes

    The previous version (1.14.0) introduced a change to allow Eventually and Consistently to support functions that make assertions. This was accomplished by overriding the global fail handler when running the callbacks passed to Eventually/Consistently in order to capture any resulting errors. Issue #457 uncovered a flaw with this approach: when multiple Eventuallys are running concurrently they race when overriding the singleton global fail handler.

    1.15.0 resolves this by requiring users who want to make assertions in Eventually/Consistently callbacks to explicitly pass in a function that takes a Gomega as an argument. The passed-in Gomega instance can be used to make assertions. Any failures will cause Eventually to retry the callback. This cleaner interface avoids the issue of swapping out globals but comes at the cost of changing the contract introduced in v1.14.0. As such 1.15.0 introduces a breaking change with respect to 1.14.0 - however we expect that adoption of this feature in 1.14.0 remains limited.

    In addition, 1.15.0 cleans up some of Gomega's internals. Most users shouldn't notice any differences stemming from the refactoring that was made.

    v1.14.0

    Features

    • gmeasure.SamplingConfig now supports a MinSamplingInterval [e94dbca]
    • Eventually and Consistently support functions that make assertions [2f04e6e]
      • Eventually and Consistently now allow their passed-in functions to make assertions. These assertions must pass or the function is considered to have failed and is retried.
      • Eventually and Consistently can now take functions with no return values. These implicitly return nil if they contain no failed assertion. Otherwise they return an error wrapping the first assertion failure. This allows these functions to be used with the Succeed() matcher.
      • Introduce InterceptGomegaFailure - an analogue to InterceptGomegaFailures - that captures the first assertion failure and halts execution in its passed-in callback.

    Fixes

    • Call Verify GHTTPWithGomega receiver funcs (#454) [496e6fd]
    • Build a binary with an expected name (#446) [7356360]

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Remove '$' from field syntax

    This is a possible resolution to #164. This would be a major breaking change, so if accepted, the timing of a merge should be considered carefully.


    This library has a construct called a 'field', whose purpose is to provide a concise way to refer to values in a log record. The special symbol $ has been used in keywords that refer to top-level fields in the log record, i.e. $body, $attributes, $resource.

    Additionally, some shorthand notation was supported. Notably, $.foo and .foo were equivalent to $body.foo.

    This change proposes to remove the usage of $ in field syntax. The main idea is that every field expression MUST begin with a keyword, i.e. body, attributes, or resource. With this requirement built in, there is no need to differentiate between top-level fields and arbitrary keys: the distinction is implicit, because the first word is always a top-level field and the remainder are arbitrary nested keys.
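Under the proposal, the earlier examples would map roughly as follows (the env and host keys are made-up illustrations, not fields from this repository):

```
$body.foo         ->  body.foo
$.foo             ->  body.foo
.foo              ->  body.foo
$attributes.env   ->  attributes.env
$resource.host    ->  resource.host
```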

  • Resource and Attributes should support arbitrary data

    The otel-logs data model defines both Resource and Attributes as type map<string, any> (equivalent to map[string]interface{}). However, both are currently implemented as map<string, string> (equivalent to map[string]string).

    Both fields should be updated from map[string]string to map[string]interface{}.

    This change may have ramifications throughout this codebase. Some particular areas that will need addressing:

    • Any operators that currently expect map[string]string must be updated to support map[string]interface{}.
    • Field syntax should be updated to allow deeper references on Resource and Attributes (e.g. $resource.one.two.value)
    • Any operator that currently manipulates Resource or Attributes fields (e.g. move, copy, etc.) will require new unit tests to ensure that the new arbitrary data format is fully supported.

    Other changes may be necessary, and if not addressed in the initial PR, should be captured and tracked via an issue.
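As a rough sketch of the type change (the Entry struct here is a simplified stand-in, not this library's actual entry type):

```go
package main

import "fmt"

// Entry is a simplified stand-in for a log entry; the real struct in this
// library has more fields and different names.
type Entry struct {
	// Before: Resource and Attributes were map[string]string.
	// After: both become map[string]interface{}, matching the otel-logs
	// data model's map<string, any>.
	Resource   map[string]interface{}
	Attributes map[string]interface{}
}

func main() {
	e := Entry{
		// Nested keys such as resource.one.two.value become representable.
		Resource: map[string]interface{}{
			"one": map[string]interface{}{
				"two": map[string]interface{}{"value": "v"},
			},
		},
		// Non-string attribute values are now allowed as well.
		Attributes: map[string]interface{}{"count": 3},
	}
	nested := e.Resource["one"].(map[string]interface{})["two"].(map[string]interface{})["value"]
	fmt.Println(nested, e.Attributes["count"]) // prints "v 3"
}
```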

  • Ideal parser settings

    Parsing behavior in this library was originally designed for a different data model than the one we now have. The expected usage of body and attributes has been redefined. As such, the behavior of parsers should be reevaluated.

    This is not my first attempt to propose changes to the way parsers behave. I believe my previous attempts have been too focused on proposing changes, which is difficult to do without lengthy explanations of the current behavior. The following is instead meant to be read as a standalone proposal for how parsers should behave. Please read accordingly.

    I suggest the following would be ideal behavior:

    1. Parsing should be non-destructive by default.
    2. Parsing should be destructive only when the same value is specified for both parse_from and parse_to.
    3. The parse_from setting should default to body.
    4. The parse_to setting should default to attributes.
    5. The preserve_to setting should not exist.

    Rationale

    1. Parsing should be non-destructive by default.

    The primary purpose of parsing operations is to isolate information. Where the extracted information should be placed is a secondary, but obvious, concern. However, what happens to the original value is easily overlooked. As a general principle, destruction of the original value should require intentional configuration.

    2. Parsing should be destructive only when the same value is specified for both parse_from and parse_to.

    A somewhat common use case involves applying structure to an unstructured log. For example, when a user extracts information from a line of text, they may wish to keep only the key-value pairs they have isolated. When this is the case, an intuitive choice is to overwrite the original value. This should be supported, but should require explicit configuration.

    3. The parse_from setting should default to body.

    The body typically contains an unstructured log. Parsing this value is the most common parsing use case. Therefore, parsers should default to parsing the body.

    4. The parse_to setting should default to attributes.

    The result of a parsing operation is a structured object. This is true of all parsers that have a parse_to field.

    The data model strongly encourages that structured data belong in attributes, so this is a natural default value for parse_to.

    Additionally, the default value for parse_to should be different than that of parse_from, else we have a situation where parsing operations are destructive by default.

    5. The preserve_to setting should not exist.

    If parsing is non-destructive by default, there is no need to provide a special setting for preservation. In the case where a user wishes to "back up" a value and destroy the original, they can do so by copying or moving the value before parsing.
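Taken together, a parser configured under these proposed defaults might look like this (a sketch of the proposal, not the current implementation; the regex is illustrative):

```yaml
# Non-destructive default: parse from body, write results to attributes.
- type: regex_parser
  regex: '(?P<sev>\w+) (?P<msg>.*)'
  # parse_from: body        <- proposed default
  # parse_to: attributes    <- proposed default

# Destructive parsing must be requested explicitly, by pointing
# parse_from and parse_to at the same value:
- type: regex_parser
  regex: '(?P<sev>\w+) (?P<msg>.*)'
  parse_from: body
  parse_to: body
```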

  • fix(helpers/multiline): fix force flushing with multiline

    I found an issue with multiline and force flushing. If forcePeriod is low, then sometimes the log buffer (data) is pushed without proper multiline detection (it is simply force flushed).

    In order to fix the issue, I changed the forceFlusher to track changes inside the splitFunc. It uses two variables:

    • lastDataChange, which keeps the timestamp of the last change (a flush or new data; essentially any change in the data)
    • previousDataLength, which keeps the length of the data, with -1 as a special value meaning the data has just been flushed (this prevents a force flush when the data changed but its length did not)

    The splitFuncs are rather complicated, so I'm mostly relying on unit tests.


    To reproduce the issue that prompted these changes:

    main.py

    import time
    
    LOG = 'Mar 14 22:00:14 ip-11-11-11-111 sadd[25288]: asdasdasdfadfdsgasdgb asd asd asd sad asd asd asd as s'
    
    # Write five syslog-style lines at a time, flushing between batches,
    # so the file grows in bursts while the collector tails it.
    with open('test.log', 'w') as f:
        for batch in range(10):
            for line in range(5):
                f.write(LOG)
                f.write('\n')
            f.flush()
            time.sleep(0.3)
    
    

    config.yaml

    receivers:
      filelog/syslog:
        start_at: beginning
        include_file_path_resolved: true
        include_file_name: false
        force_flush_period: 50ms
        max_log_size: 32MiB  # increase max_log_size to scrape long lines
        include:
          - ./test.log
        multiline:
          line_start_pattern: \w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}
        operators:
          - type: regex_parser
            regex: (?P<timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2})
            preserve_to: $$body.log
            timestamp:
              parse_from: $$body.timestamp
              layout_type: gotime
              layout: Jan _2 15:04:05
          - type: restructure
            ops:
              - move:
                  from: $$body.log
                  to: $$body
        attributes:
          _a: syslog
          _b: syslog
          c: syslog
        resource:
          _d: otelcol
          e: otelcol
          f: otelcol
          g: otelcol
          h: otelcol
    exporters:
      logging:
        logLevel: debug
    service:
      pipelines:
        logs:
          receivers:
            - filelog/syslog
          exporters:
            - logging
    
    
  • filelog: Multiline merging mixes up logs from different files

    This configuration works fine:

    operators:
        - default: clean-up-log-record
          routes:
          - expr: ($$resource["k8s.namespace.name"]) matches ".*" && ($$resource["k8s.container.name"])
              == "loggen"
            output: .*_loggen
          - expr: ($$resource["k8s.container.name"]) == "loggen2"
            output: loggen2
          type: router
        - combine_field: log
          id: .*_loggen
          is_first_entry: '($.log) matches "num: \\d+0\\s"'
          output: clean-up-log-record
          type: recombine
        - combine_field: log
          id: loggen2
          is_first_entry: '($$.log) matches "num: \\d+0\\s"'
          output: clean-up-log-record
          type: recombine
        - id: clean-up-log-record
          ops:
          - move:
              from: log
              to: $
          type: restructure
    

    because the loggen and loggen2 containers are each routed separately to their own recombine operator. But with a configuration like this one,

    operators:
        - default: clean-up-log-record
          routes:
          - expr: ($$resource["k8s.namespace.name"]) matches ".*" && ($$resource["k8s.container.name"])
              == "loggen.*"
            output: .*_loggen.*
          type: router
        - combine_field: log
          id: .*_loggen.*
          is_first_entry: '($.log) matches "num: \\d+0\\s"'
          output: clean-up-log-record
          type: recombine
        - id: clean-up-log-record
          ops:
          - move:
              from: log
              to: $
          type: restructure
    

    the logs from different files get mixed up.

  • Bump github.com/mitchellh/mapstructure from 1.4.1 to 1.4.2

    Bumps github.com/mitchellh/mapstructure from 1.4.1 to 1.4.2.

    Changelog

    Sourced from github.com/mitchellh/mapstructure's changelog.

    1.4.2

    • Custom name matchers to support any sort of casing, formatting, etc. for field names. GH-250
    • Fix possible panic in ComposeDecodeHookFunc GH-251
    Commits
    • 5ac1f6a CHANGELOG
    • 18f04e6 Default MatchName in NewDecoder
    • 14df28d Merge pull request #247 from veikokaap/fix-doc-typo-decodehook
    • 251d52b improve comments for matchname
    • 1b7c3d8 Merge pull request #250 from abeMedia/feat/custom-name-matchers
    • 381a76b initialize data to interface for decode hook (tested)
    • 6577afa Merge pull request #251 from eh-steve/bugfix-panic-with-empty-composed-decode...
    • edc5649 Fix possible panic when using ComposeDecodeHookFunc() with no funcs
    • fd87e0d Support custom name matchers
    • 963925d fix typo in documentation
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


  • Bump github.com/golangci/golangci-lint from 1.39.0 to 1.40.1 in /internal/tools


    Bumps github.com/golangci/golangci-lint from 1.39.0 to 1.40.1.

    Release notes

    Sourced from github.com/golangci/golangci-lint's releases.

    v1.40.1

    Changelog

    • f2ba4bc1 build(deps): bump hosted-git-info from 2.7.1 to 2.8.9 in /tools (#1968)
    • ce672624 build(deps): bump hosted-git-info from 2.8.8 to 2.8.9 in /.github/peril (#1966)
    • b282d301 build(deps): bump lodash from 4.17.19 to 4.17.21 in /tools (#1964)
    • 3aeafb8a build(deps): bump lodash from 4.17.20 to 4.17.21 in /.github/peril (#1967)
    • 47baa2c1 doc: fix example config yaml identation (#1976)
    • f95b1ed3 golint: deprecation (#1965)
    • 589c49ef govet: fix sigchanyzer (#1975)
    • 625445b1 runner: non-zero exit code when a linter produces a panic (#1979)

    v1.40.0

    Changelog

    • 6844f6ab Add promlinter to lint metrics name (#1265)
    • 93df6f75 Add tagliatelle linter (#1906)
    • c5891c0d Bump importas to HEAD (#1899)
    • c213e4ed Bump wastedassign to v1.0.0 (#1955)
    • 92fda268 Update Wrapcheck to v2, add configuration (#1947)
    • a7865d24 Update errorlint to HEAD (#1933)
    • ffe80615 Update importas to HEAD (#1934)
    • 12e3251a Update wrapcheck to v1.2.0 (#1927)
    • d6bcf9f4 Update wsl to 3.3.0, sort config in example config (#1922)
    • 53a4b41f build(deps): bump actions/cache from v2.1.4 to v2.1.5 (#1918)
    • d2526706 build(deps): bump emotion-theming from 10.0.27 to 11.0.0 in /docs (#1623)
    • 5baff12c build(deps): bump github.com/hashicorp/go-multierror from 1.0.0 to 1.1.1 (#1877)
    • 0f3f9ef8 build(deps): bump github.com/mgechev/revive from 1.0.5 to 1.0.6 (#1908)
    • 7ee6e4dd build(deps): bump github.com/shirou/gopsutil/v3 from 3.21.2 to 3.21.3 (#1890)
    • f1ca6822 build(deps): bump github.com/shirou/gopsutil/v3 from 3.21.3 to 3.21.4 (#1951)
    • 9e11a08a build(deps): bump github.com/tetafro/godot from 1.4.4 to 1.4.5 (#1907)
    • 54bfbb9a build(deps): bump github.com/tetafro/godot from 1.4.5 to 1.4.6 (#1935)
    • 42cc7843 build(deps): bump github.com/tomarrell/wrapcheck from 1.0.0 to 1.1.0 (#1891)
    • 2c008326 build(deps): bump github.com/tommy-muehle/go-mnd/v2 from 2.3.1 to 2.3.2 (#1919)
    • c9ec73fd build(deps): bump golangci/golangci-lint-action from v2.5.1 to v2.5.2 (#1889)
    • 96a7f62b build(deps): bump honnef.co/go/tools from 0.1.3 to 0.1.4 (#1952)
    • 07ae8774 build(deps): bump y18n from 4.0.0 to 4.0.1 in /.github/peril (#1879)
    • c610079e feat: set the minimum Go version to go1.15 (#1926)
    • 1fb67fe4 fix release stats badge (#1901)
    • 9cb902cd fix: comma in exclude pattern leads to unexpected results (#1917)
    • 8f3ad45e gosec: add configuration (#1930)
    • cd9d8bb7 govet: Update vet passes (#1950)
    • 07a0568d importas: add message if settings contain no aliases (#1956)
    • 5c6adb63 importas: allow repeated aliases (#1960)
    • 34ffdc24 revive: convert hard coded excludes into default exclude patterns (#1938)
    • 12ed5fac staticcheck: configurable Go version. (#1946)
    • a833cc16 typecheck: improve error stack parsing. (#1886)
    • 8da9d3aa update go-critic to v0.5.6 (#1925)


    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


  • Add GoSec Security Scan

    Motivation

    Follow up to issue open-telemetry/oteps#144

    GoSec is a static analysis engine that scans Go source code for security vulnerabilities. As the project grows and we near GA, it might be useful to have a workflow which checks for security vulnerabilities so we can ensure every incremental change follows best development practices. Passing basic security checks will also make sure that there aren't any glaring issues for our users.

    Changes

    This PR adds GoSec security checks to the repo

    • After every run the workflow uploads the results to GitHub. Details on the run and security alerts will show up in the security tab of this repo.

    Workflow Triggers

    • daily cron job at 1:30am
    • workflow_dispatch (in case maintainers want to trigger a security check manually)
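The workflow described here might look roughly like the following sketch (the action names, versions, and output format are assumptions, not the merged workflow file):

```yaml
name: gosec
on:
  schedule:
    - cron: '30 1 * * *'  # daily at 1:30am UTC
  workflow_dispatch: {}   # manual trigger for maintainers
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run GoSec
        uses: securego/gosec@master
        with:
          args: '-no-fail -fmt sarif -out results.sarif ./...'
      - name: Upload results to the repo's Security tab
        uses: github/codeql-action/upload-sarif@v1
        with:
          sarif_file: results.sarif
```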

    cc @alolita
