OpenTelemetry-Go

OpenTelemetry-Go is the Go implementation of OpenTelemetry. It provides a set of APIs to directly measure performance and behavior of your software and send this data to observability platforms.

Project Status

Signal   Status      Project
Traces   Stable      N/A
Metrics  Alpha       N/A
Logs     Frozen [1]  N/A
  • [1]: The Logs signal development is halted for this project while we develop both Traces and Metrics. No Logs Pull Requests are currently being accepted.

Progress and status specific to this repository are tracked in our local project boards and milestones.

Project versioning information and stability guarantees can be found in the versioning documentation.

Compatibility

This project is tested on the following systems.

OS       Go Version  Architecture
Ubuntu   1.16        amd64
Ubuntu   1.15        amd64
Ubuntu   1.16        386
Ubuntu   1.15        386
MacOS    1.16        amd64
MacOS    1.15        amd64
Windows  1.16        amd64
Windows  1.15        amd64
Windows  1.16        386
Windows  1.15        386

While this project should work for other systems, no compatibility guarantees are made for those systems currently.

Getting Started

You can find a getting started guide on opentelemetry.io.

OpenTelemetry's goal is to provide a single set of APIs to capture distributed traces and metrics from your application and send them to an observability platform. This project allows you to do just that for applications written in Go. There are two steps to this process: instrument your application, and configure an exporter.

Instrumentation

To start capturing distributed traces and metric events from your application, it first needs to be instrumented. The easiest way to do this is by using an instrumentation library for your code. Be sure to check out the officially supported instrumentation libraries.

If you need to extend the telemetry an instrumentation library provides, or want to build your own instrumentation for your application directly, you will need to use the go.opentelemetry.io/otel/api package. The included examples are a good way to see some practical uses of this process.
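
A minimal sketch of manual instrumentation (the import paths follow the current go.opentelemetry.io/otel module layout, which has since absorbed the go.opentelemetry.io/otel/api package referenced above; the tracer name and attribute are illustrative):

package app

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

func doWork(ctx context.Context) {
	// Obtain a named Tracer from the globally registered TracerProvider.
	tracer := otel.Tracer("example.com/app")

	// Start a span for this unit of work and end it when the work completes.
	ctx, span := tracer.Start(ctx, "doWork")
	defer span.End()

	span.SetAttributes(attribute.String("work.kind", "example"))
	_ = ctx // pass ctx to downstream calls so their spans join this trace
}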

Export

Now that your application is instrumented to collect telemetry, it needs an export pipeline to send that telemetry to an observability platform.

All officially supported exporters for the OpenTelemetry project are contained in the exporters directory.

Exporter    Metrics  Traces
Jaeger               ✓
OTLP        ✓        ✓
Prometheus  ✓
stdout      ✓        ✓
Zipkin               ✓

Additionally, OpenTelemetry community supported exporters can be found in the contrib repository.
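
As a hedged sketch of the export pipeline described above, here is one way to wire the stdout trace exporter into a TracerProvider (import paths follow the current module layout and may differ for older releases):

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Create an exporter that writes telemetry to stdout.
	exp, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		log.Fatal(err)
	}

	// Register a TracerProvider that batches finished spans to the exporter.
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(context.Background()) }()
	otel.SetTracerProvider(tp)
}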

Contributing

See the contributing documentation.

Owner
OpenTelemetry - CNCF
OpenTelemetry makes robust, portable telemetry a built-in feature of cloud-native software.
Comments
  • Why so many interfaces?

    Why so many interfaces?

    My expectation when I started looking at the code base was that I would find mostly concrete implementations of the basic data types (Traces, Spans, SpanContexts, Logs, Measurements, Metrics, etc) with interfaces for vendors to hook into for enriching that data and exporting it either in batch or streaming forms.

    Looking through the code base I'm sort of astonished by the number of interfaces that exist over concrete data types.

    I realize that vendors want some control over various aspects of the implementation, but starting with making everything an interface seems like a poor design choice to me.

    Can anyone explain this or point me to the docs that explain the motivation behind this?

  • Metrics SDK work-in-progress

    Metrics SDK work-in-progress

    This PR branches off of #100 and adds a metrics SDK.
    It's missing a lot, but I thought it would be best to post it now, both to assist with #100 and the associated spec change https://github.com/open-telemetry/opentelemetry-specification/pull/250, and to directly address part of spec issue https://github.com/open-telemetry/opentelemetry-specification/issues/259: it's difficult to fully understand the API specification without seeing a real SDK.

    This is very incomplete. For example:

    1. It only supports counters
    2. It needs more testing
    3. It has no real exporter yet
    4. It needs more documentation on the lock-free algorithm (and reviewers)

    More changes will be needed to accommodate measures. Before the work in https://github.com/open-telemetry/opentelemetry-specification/issues/259 can be finished, I'd like to complete this prototype with:

    1. a statsd exporter
    2. a configuration struct allowing configurable aggregation and group-by for all instruments
    3. a dynamic config update API
    4. an example YAML file that could serve to configure metrics behavior

    There is one issue that is worth considering (@bogdandrutu). This SDK aggregates metrics by (DescriptorID, LabelSet) in the SDK itself, but does not "fold" for the group-by operation. I.e., it does not apply the group-by or support selecting a subset of aggregation keys directly. My idea is to implement group-by in the exporter. It's simpler this way, since it's not on the critical path for callers. This design could be extended with further complexity to address the group-by up-front, but I would like to be convinced it's worthwhile.

  • Add sdk/metric/reader package structure to new_sdk/main

    Add sdk/metric/reader package structure to new_sdk/main

    ~Blocked by #2799~

    Add in the needed interfaces and high-level types for the new SDK design from new_sdk/example. This is not expected to be a fully working implementation. A rough, hypothetical sketch of these types follows the checklist below.

    sdk/metric/reader

    • [ ] type Reader interface
      • [ ] type Registeree interface (it might be a better idea to rename this to the more idiomatic Go Registerer)
      • [ ] type Producer interface
      • [ ] type Metrics struct
      • [ ] type Scope struct
      • [ ] type Instrument struct
      • [ ] type Point struct
    • [ ] type Exporter interface
    • [ ] type Option interface to be passed to the new Provider method option
      • [ ] type config struct stub
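
    A hypothetical sketch of how these pieces could fit together (names come from the checklist; the signatures and struct fields are illustrative assumptions, not the final design):

    package reader

    import "context"

    // Metrics, Scope, Instrument, and Point are placeholder data types for
    // collected telemetry (fields here are illustrative only).
    type (
    	Metrics    struct{ Scopes []Scope }
    	Scope      struct{ Instruments []Instrument }
    	Instrument struct{ Points []Point }
    	Point      struct{}
    )

    // Producer supplies metrics to a Reader on demand.
    type Producer interface {
    	produce(ctx context.Context) (Metrics, error)
    }

    // Reader collects metrics from a registered Producer.
    type Reader interface {
    	// register is the "Registeree" (or "Registerer") role from the list.
    	register(Producer)
    	Collect(ctx context.Context) (Metrics, error)
    	Shutdown(ctx context.Context) error
    }

    // Exporter sends collected metrics out of process.
    type Exporter interface {
    	Export(ctx context.Context, m Metrics) error
    }
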
  • Deviation from semconv exception spec?

    Deviation from semconv exception spec?

    Just noticed that the trace implementation doesn't seem to follow the spec for exceptions -- it looks like this package uses error as the prefix instead of exception. Looking at the JS library, they seem to be using exception, so I'm guessing it's the golang lib that's out of compliance.

    Are there any plans or motivation to bring this into compliance? I don't know if there's a backwards-compatible way of making this change, aside from perhaps recording two events: one with the error prefix, one with the exception prefix. That said, it might be best to just rip the bandaid off, rename the event and attributes, and make it a (potentially) breaking change?

    Happy to contribute these changes if I can get some direction on what you think is the best path forward :)
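
    For reference, a hedged sketch of what spec-compliant recording could look like with the exception-prefixed names (attribute keys written literally here; the semconv package provides constants for them):

    package app

    import (
    	"fmt"

    	"go.opentelemetry.io/otel/attribute"
    	"go.opentelemetry.io/otel/trace"
    )

    func recordException(span trace.Span, err error) {
    	// The spec prescribes an "exception" event with "exception."-prefixed
    	// attributes, in place of the "error" prefix used today.
    	span.AddEvent("exception", trace.WithAttributes(
    		attribute.String("exception.type", fmt.Sprintf("%T", err)),
    		attribute.String("exception.message", err.Error()),
    	))
    }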

  • OTLP exporter not exporting ValueRecorder & ValueObserver to Prometheus backend

    OTLP exporter not exporting ValueRecorder & ValueObserver to Prometheus backend

    Describe the bug: I am using an OTLP exporter and metrics pusher with otel-collector, with a Prometheus exporter and a Logging exporter. In the Logging exporter, all collected metrics are showing, but in the Prometheus backend, only ValueCounter instrument values are showing.

    I am using the latest otel-collector-contrib image and running it alongside a demo service, some DBs, and the latest Prometheus image using docker.

    What config did you use?

    • otel-collector-config.yaml
    receivers:
      otlp:
        endpoint: 0.0.0.0:55678
    
    exporters:
      prometheus:
        endpoint: "0.0.0.0:8889"
        namespace: versionsvc
    
      logging:
        loglevel: debug
        
      stackdriver:
        project: digital-waters-276111
        metric_prefix: versionsvc
        number_of_workers: 3
        skip_create_metric_descriptor: true
    
    processors:
      batch:
      queued_retry:
    
    extensions:
      health_check:
      pprof:
        endpoint: :1888
      zpages:
        endpoint: :55679
    
    service:
      extensions: [pprof, zpages, health_check]
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging,stackdriver]
          processors: [batch, queued_retry]
        metrics:
          receivers: [otlp]
          exporters: [logging,prometheus]
    
    • creation of the exporter and pusher:
    
    func initProviders() (*otlp.Exporter, *push.Controller) {
    	collectorAddr, ok := os.LookupEnv("OTEL_RECIEVER_ENDPOINT")
    	if !ok {
    		collectorAddr = fmt.Sprintf("%s:%d", otlp.DefaultCollectorHost, otlp.DefaultCollectorPort)
    	}
    	exporter, err := otlp.NewExporter(otlp.WithAddress(collectorAddr),
    		otlp.WithInsecure(),
    		otlp.WithGRPCDialOption(grpc.WithBlock()))
    
    	if err != nil {
    		log.Fatal(err)
    	}
    
    	tp, err := sdktrace.NewProvider(
    		sdktrace.WithConfig(sdktrace.Config{DefaultSampler: sdktrace.AlwaysSample()}),
    		sdktrace.WithSyncer(exporter))
    	if err != nil {
    		log.Fatal(err)
    	}
    
    	global.SetTraceProvider(tp)
    
    	pusher := push.New(
    		simple.NewWithExactDistribution(),
    		exporter,
    		push.WithStateful(true),
    		push.WithPeriod(2*time.Second),
    	)
    
    	global.SetMeterProvider(pusher.Provider())
    	pusher.Start()
    	return exporter, pusher
    }
    
    • docker-compose.yaml
    version: "3.1"
    services:
    
      redis:
        image: redis:4
        ports:
          - "6379:6379"
        entrypoint: 
          "redis-server"
    
      db:
        image: postgres:11
        ports:
          - "5432:5432"
        environment:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: roottoor
          POSTGRES_DB: backend
    
      open-telemetry-demo:
        build: ../.
        environment:
          - GO111MODULE=on
          - OTEL_RECIEVER_ENDPOINT=otel-collector:55678
        depends_on:
          - otel-collector
          - db 
          - redis   
        ports: 
          - "8088:8088"
    
      otel-collector:
        image: ${OTELCOL_IMG}
        command: ["--config=/etc/otel-collector-config.yaml", "${OTELCOL_ARGS}"]
        volumes:
          - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
        ports:
          - "1888:1888"   
          - "8888:8888"   
          - "8889:8889"   
          - "13133:13133" 
          - "55678:55678"       
          - "55680:55679"
      
      prometheus:
        container_name: prometheus
        image: prom/prometheus:latest
        volumes:
          - ./prometheus.yaml:/etc/prometheus/prometheus.yml
        ports:
          - "9090:9090"
    
    • prometheus.yaml
    scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 1s
          static_configs:
            - targets: ['otel-collector:8889']
    

    Environment:
    • OS: MacOS Catalina
    • Compiler (if manually compiled): go 1.14

    Additional context

    • The code I wrote using the OTEL format to export metrics works when I add a Prometheus receiver to the collector and use the Prometheus exporter, without changing any of the counter and value recorder code. So the issue is most likely that the OTLP exporter is unable to export some metric values in a form the Prometheus backend can identify.

    • I have this code with a demo service in the repo:

    https://github.com/zeyadkhaled/OpenTelemetry-Go-Project-with-Collector-and-OTLP-Exporter

  • Replace the view implementation in the metric SDK

    Replace the view implementation in the metric SDK

    There are design and usability issues with the current view implementation. This issue introduces a proposal to replace the current view implementation with one that addresses these issues.

    Issues

    View package is used as a namespace for a sprawl of options

    It seems the main motivation for the view package is to encapsulate the large number of configuration options that exist, and those that could exist. This choice was meant to help prevent pollution of the sdk/metric API and its documentation. However, it only shifts the problem to another package.

    Instead of having a sprawl of options, ideally, view definitions should be defined concisely.

    View package includes unrelated types.

    The included Instrument and InstrumentKind types refer to metric instrument concepts, not view concepts. They are only included in the package to avoid an import cycle.

    Ideally, these types would live in the sdk/metric package.

    Creation of a view returns an error

    The view.New function is declared as follows.

    func New(opts ...Option) (View, error) {
    

    Because this function is declared to return not only a View but also an error, it cannot be used in-line by users during SDK setup. For example:

    view, err := view.New(/* user options */)
    if err != nil {
    	// Error handling.
    }
    NewMeterProvider(/* use view here */)
    

    Ideally, the view could be defined in-line with the MeterProvider. E.g.

    NewMeterProvider(WithView(NewView(/* user options */)))
    

    This would be possible if a design could be found that removes the two explicit errors the New function returns.

    The implicit errors, those sent to the logging system by options, are not impediments to resolving this issue.

    Multiple user options of the same kind are dropped

    The view.New function docs read:

    // New returns a new configured View. If there are any duplicate Options passed,
    // the last one passed will take precedence. The unique, de-duplicated,
    // Options are all applied to the View.
    

    This seems like appropriate behavior, but it also means that a user needs to read this documentation to understand how multiple options are handled. If they want to match instruments named foo and bar, and pass both as options to the view, it will cause frustration when both are not matched.

    If, by design, it were not possible for a user to pass multiple values for a match or replacement criterion, they would understand that they need to use multiple views, without reading docs and without frustration.

    The current view is not extendable by a user

    If a user wants to do something unique with a View outside of the provided options, they have no way to do it. This may seem ideal, as it restricts users to only the views OTel defines. However, this project should not be about restricting users. It should be about providing useful packages that help them achieve what they want.

    It would be a benefit to the project if Views could be defined in a way that allows users to implement any idea they have, but also provide users with a way to create views directly based on what OTel prescribes.

    Proposal

    A complete example of this proposal can be found here

    Move Instrument and InstrumentKind to the sdk/metric package

    Both types can be moved to the sdk/metric package. The import cycle that prevented this is resolved in the rest of the proposal.

    The Instrument type can be extended to include all properties of a new instrument. That way users can match with complete context of the instrument being created:

    // Instrument describes properties an instrument is created with.
    type Instrument struct {
    	Name string
    	Description string
    	Kind InstrumentKind
    	Unit unit.Unit
    	Scope instrumentation.Scope
    }
    

    An added benefit of this complete definition is that the function parameters of the pipeline can be unified with this type. See the example for how this is done.

    Add a Stream type to the sdk/metric package to describe the metric data stream

    // Stream describes the stream of data an instrument produces.
    type Stream struct {
    	// Instrument describes the instrument that created the stream.
    	Instrument
    
    	Temporality metricdata.Temporality
    	Aggregation aggregation.Aggregation
    	AttributeFilter attribute.Filter
    }
    

    This type is a representation of the data stream from the OTel specification.

    Replace the view.View with a View func in sdk/metric package

    Currently a View is defined as a struct type that only conveys the configuration output of view.New. It is not configurable outside of the view package.

    If instead we define a view as a function translating an instrument definition into a data stream definition, a user is able to define view configuration as they see fit, and we are still able to correctly create pipelines. For example:

    // View is an override to the default behavior of the SDK. It defines how data
    // should be collected for certain instruments. It returns true and the exact
    // Stream to use for matching Instruments. Otherwise, if the view does not
    // match, false is returned.
    type View func(Instrument) (Stream, bool)
    

    With this definition a user is able to add whatever matching conditions they need, even if they exist outside what OTel specifies, and create data streams in the context of the instrument that is being created.

    Alone, however, this would require users to recreate common views (e.g. renames, aggregation settings, description updates), and it doesn't provide the functional niceties specified by OTel (wildcard name matching). These issues are addressed by including a NewView function that creates a View with these niceties.

    Define NewView to create a View based on OTel specification

    // NewView returns a View that applies the Stream mask for all instruments that
    // match criteria. The returned View will only apply mask if all non-zero-value
    // fields of criteria match the corresponding Instrument passed to the view. If
    // no criteria are provided (all fields of criteria are their zero-values), a
    // view that matches no instruments is returned.
    //
    // The Name field of criteria supports wildcard pattern matching. The wildcard
    // "*" is recognised as matching zero or more characters, and "?" is recognised
    // as matching exactly one character. For example, a pattern of "*" will match
    // all instrument names.
    //
    // The Stream mask only applies updates for non-zero-value fields. By default,
    // the Instrument the View matches against will be used for the returned Stream
    // and no Aggregation or AttributeFilter are set. If mask has a non-zero value
    // for any of the Aggregation or AttributeFilter fields, or any of the
    // Instrument fields, that value is used instead of the default. If you need to
    // zero out a Stream field returned from a View, create a View directly.
    func NewView(criteria Instrument, mask Stream) View
    

    Adding this function allows for the matching of instrument properties, including wildcard name matching, and the replacement functionality defined by the OTel specification. It also facilitates common matching/replacement views similar to the existing view.New.
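
    As a hedged usage sketch following the proposal's definitions above (note that Stream embeds Instrument here; the shipped API may differ), a rename view can now be constructed in-line:

    // Rename the "latency" instrument to "request.latency" for all meters.
    view := NewView(
    	Instrument{Name: "latency"},
    	Stream{Instrument: Instrument{Name: "request.latency"}},
    )
    provider := NewMeterProvider(WithView(view))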

    Why not define this function to accept Options? Accepting options makes sense as a way to allow forward compatibility: when new options need to be added, new functions returning those options can be added in a backwards-compatible way. Options also prevent the user from having to pass empty arguments as parameters when no options are desired.

    To the latter point, when a user is using this convenience function to create a View they should never be passing "no options". If they do not pass a match criteria or mask, the view is effectively a disablement or no-op. Neither of these is likely what the user wants to do. It is expected that the common use of this function will be by users that want to match something and set something. Given this expected use, designing for the no-option possibility here is non-ergonomic.

    As for the extensibility of future options, both Instrument and Stream are structs that can be extended. The zero-values of fields in these structs are ignored, so adding new fields will behave the same as the previous version that did not include them.

    As an added benefit of fully specifying each allowed parameter, there is no possibility for a user to provide duplicate options. It becomes clear by design that only one name, description, instrument kind, etc. is matched per created View. This is something that was only conveyed through documentation with the option approach.

    This benefit of a single, fully specified match criterion also removes the existing error from view.New where both an exact and a wildcard name match are requested.

    As for the other existing error, where a user does not provide any matching criteria, it is translated into returning a view that never matches. As mentioned above, the inherent design of the function discourages this use.

    With both explicit errors removed, the function no longer needs to return an error. It can now be included in-line with MeterProvider creation options.

    The existing view.New also allows for logged errors when it is passed misconfigured aggregations. The new function can take the same approach to error handling for that error type, and for any future additions to its parameters that can produce errors.

    Deprecate the view package in favor of implementation in sdk/metric

    • Deprecate sdk/metric/view.Instrument in favor of sdk/metric.Instrument
    • Deprecate sdk/metric/view.InstrumentKind in favor of sdk/metric.InstrumentKind
    • Deprecate sdk/metric/view.View in favor of sdk/metric.View
    • Deprecate sdk/metric/view.New in favor of sdk/metric.NewView
    • Update sdk/metric to stop using all deprecated types

    Tasks

    How to split the proposal into reviewable PRs.

    • [x] Add View, NewView, Instrument, Stream, and InstrumentKind to sdk/metric with unit tests
      • https://github.com/open-telemetry/opentelemetry-go/pull/3459
    • [x] Update sdk/metric to use View, NewView, Instrument, Stream, and InstrumentKind from sdk/metric
      • https://github.com/open-telemetry/opentelemetry-go/pull/3461
    • [x] Add example tests for NewView and View.
      • https://github.com/open-telemetry/opentelemetry-go/pull/3460
    • [x] Deprecate the view package
      • https://github.com/open-telemetry/opentelemetry-go/pull/3476
  • sdk/trace: use sync.Pool in randomIDGenerator instead of Mutex

    sdk/trace: use sync.Pool in randomIDGenerator instead of Mutex

    randomIDGenerator used a sync.Mutex to coordinate generation of TraceID and SpanID. When used in relatively hot code paths this would cause significant mutex contention as observed in pprof/mutex profiles.

    This uses a sync.Pool instead, which allows generating and using a *rand.Rand as needed without taking a lock.
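
    A minimal sketch of the technique (not the exact patch; assumes math/rand, sync, and time imports, and the time-based seeding is an assumption):

    var randPool = sync.Pool{
    	New: func() interface{} {
    		return rand.New(rand.NewSource(time.Now().UnixNano()))
    	},
    }

    // newSpanID fills an 8-byte span ID without holding a shared lock.
    func newSpanID() [8]byte {
    	r := randPool.Get().(*rand.Rand)
    	defer randPool.Put(r)
    	var id [8]byte
    	r.Read(id[:]) // math/rand's *Rand.Read never returns an error
    	return id
    }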

    name                 old time/op    new time/op    delta
    SpanIDGeneration-10     191ns ± 5%      64ns ± 2%  -66.60%  (p=0.000 n=9+10)
    
    name                 old alloc/op   new alloc/op   delta
    SpanIDGeneration-10     0.00B          0.00B          ~     (all equal)
    
    name                 old allocs/op  new allocs/op  delta
    SpanIDGeneration-10      0.00           0.00          ~     (all equal)
    

    Signed-off-by: Maisem Ali [email protected]

  • propose additional CODEOWNERS

    propose additional CODEOWNERS

  • AsyncCounter and AsyncUpDownCounter expected callback value changed in v0.32.0

    AsyncCounter and AsyncUpDownCounter expected callback value changed in v0.32.0

    Before 0.32, the value in a callback function was expected to be an absolute value, but since 0.32 it must be a delta or metrics are incorrect:

    hitsCounter, _ := meter.AsyncInt64().Counter("some.prefix.cache_hits")
    
    // before 0.32
    hitsCounter.Observe(ctx, hitsValue) // abs value
    
    // 0.32
    hitsCounter.Observe(ctx, hitsValue-prevHitsValue) // delta value
    

    Is that intentional? It is not documented anywhere and not mentioned in the changelog. Also, runtimemetrics still uses the old way to report observable values.

  • Impossible to create Instrument with the same name from the different MeterProvider

    Impossible to create Instrument with the same name from the different MeterProvider

    Description

    More than a year has passed since I wrote about this in Slack, and the issue is still here. I am trying to use the Meter API to count some data and collect it with Prometheus. There are situations where you need the same Instrument name with different prefixes. I assumed that the Meter name would act as that prefix, but it does not: if I create two Meters and create an Instrument with the same name on each, I get an error.

    Environment

    • OS: OS X
    • Architecture: x86_64
    • Go Version: 1.19
    • opentelemetry-go version: v1.8.0

    Steps To Reproduce

    config := prometheus.Config{}
    ctrl := controller.New(
    	processor.NewFactory(
    		selector.NewWithHistogramDistribution(
    			histogram.WithExplicitBoundaries(config.DefaultHistogramBoundaries),
    		),
    		aggregation.CumulativeTemporalitySelector(),
    		processor.WithMemory(true),
    	),
    )
    
    exporter, err := prometheus.New(config, ctrl)
    if err != nil {
    	return fmt.Errorf("setup monitor routes: %w", err)
    }
    
    provider := exporter.MeterProvider()
    
    httpInstrumentProvider := provider.Meter("http").SyncInt64()
    httpErrorCounter, err := httpInstrumentProvider.Counter("error_count", instrument.WithUnit(unit.Dimensionless),
    	instrument.WithDescription("count of http request errors"))
    if err != nil {
    	return fmt.Errorf("setup monitor routes: %w", err)
    }
    
    httpErrorCounter.Add(ctx, 1)
    
    sqlInstrumentProvider := provider.Meter("sql").SyncInt64()
    sqlErrorCounter, err := sqlInstrumentProvider.Counter("error_count", instrument.WithUnit(unit.Dimensionless),
    	instrument.WithDescription("count of sql query errors"))
    if err != nil {
    	return fmt.Errorf("setup monitor routes: %w", err)
    }
    
    sqlErrorCounter.Add(ctx, 1)
    
    be.monitorRouter.HandleFunc("/metrics", exporter.ServeHTTP)
    

    Expected behavior

    I expect to be able to create different Meters that isolate their Instruments (like namespaces), so that Instruments with the same name do not conflict with each other when they are created from different Meters.

  • Change resource.New() to use functional options; add builtin attributes for (host.*, telemetry.sdk.*)

    Change resource.New() to use functional options; add builtin attributes for (host.*, telemetry.sdk.*)

    This adds resource.NewConfig(ctx, ...) with functional options:

    • WithAttributes(...) for directly adding key:values
    • WithDetectors(...) for directly adding resource detectors
    • WithTelemetrySDK(...) to override the default telemetry.sdk.* resources
    • WithHost(...) to override the default host.* resources
    • WithFromEnv(...) to override the default environment configuration
    • WithoutBuiltin() to disable all builtin resources

    The intention is to make it easy to configure a *Resource with the standard resources. This adds standard resource detectors for the telemetry SDK and host name.

    Resolves #1076.

  • Update ClientRequest HTTPS determination

    Update ClientRequest HTTPS determination

    The ClientRequest function only reports a peer port attribute if that peer port differs from the standard 80 for HTTP and 443 for HTTPS. To determine whether the request is for HTTPS, use the request URL scheme. This is not perfect: if a user doesn't provide a scheme, HTTPS will not be correctly detected. However, the current approach of checking whether the TLS field is non-nil is always wrong; requests made by a client ignore this field, and it is always nil. Therefore, switching to the URL field is the best we can do without having already made the request.

    Closes #3528
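
    A hedged sketch of the scheme-based determination described above (the helper name is illustrative; assumes net/http and strconv imports):

    // clientPeerPort returns the request's peer port and whether it should
    // be recorded, eliding the scheme's default (80 for http, 443 for https).
    func clientPeerPort(req *http.Request) (int, bool) {
    	p, err := strconv.Atoi(req.URL.Port())
    	if err != nil {
    		return 0, false // no explicit port in the URL
    	}
    	if (req.URL.Scheme == "https" && p == 443) ||
    		(req.URL.Scheme == "http" && p == 80) {
    		return 0, false // default port for the scheme; attribute elided
    	}
    	return p, true
    }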

  • Deprecate the syncint64/syncfloat64/asyncint64/asyncfloat64 packages

    Deprecate the syncint64/syncfloat64/asyncint64/asyncfloat64 packages

    Flatten the instruments from the go.opentelemetry.io/otel/metric/instrument/{syncint64,syncfloat64,asyncint64,asyncfloat64} packages into go.opentelemetry.io/otel/metric/instrument.

    Follow up to https://github.com/open-telemetry/opentelemetry-go/pull/3507#issuecomment-1372704092

    Why Flatten?

    Our design choice to partition instruments into separate packages was made a while ago, so why flatten these packages now?

    The general reason is that our recent API redesign work^1 has made this partitioned package structure a poor fit for the new naming and code-grouping design. It has become a relic of the prior, non-specification-compliant design.

    Match Specification Recommended Names

    Fixes https://github.com/open-telemetry/opentelemetry-go/issues/3453

    Our old design used the asynchronous and synchronous terms to identify instruments. The specification, however, has specific naming recommendations for asynchronous instruments: they should use the "Observable" term instead. This was fixed on the meter^2, and by flattening the package structure the instrument names are also brought into compliance with the specification's naming recommendation.

    Unify instruments in instrument package

    Each of the go.opentelemetry.io/otel/metric/instrument/{syncint64,syncfloat64,asyncint64,asyncfloat64} packages was a wrapper around 3 instrument types, and the go.opentelemetry.io/otel/metric/instrument package provided configuration and common embedded types for the other packages. This meant that the provided instrument functionality was spread across 5 different packages. By flattening into the instrument package, all instrument functionality is provided by 1 unified package.
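
    A hedged before/after sketch of what the flattening means for callers (the post-flattening method names reflect the redesigned Meter API and are an assumption about the release in question):

    // Before: one package per instrument class.
    //   import "go.opentelemetry.io/otel/metric/instrument/syncint64"
    //   var c syncint64.Counter
    //   c, _ = meter.SyncInt64().Counter("requests")

    // After: a single flattened instrument package.
    //   import "go.opentelemetry.io/otel/metric/instrument"
    //   var c instrument.Int64Counter
    //   c, _ = meter.Int64Counter("requests",
    //   	instrument.WithDescription("handled requests"))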

    Configuration is flattened, instruments were not

    Instrument configuration already lives in the instrument package^1, split by instrument type. Having the configuration for an instrument live in a separate package from the instruments themselves creates an unneeded package boundary: users reading documentation about how an instrument is configured, or implementing that configuration, need to consult or import separate packages. This goes against the common Go idea of keeping related things close.

    Deprecate instead of remove

    To avoid adding the go.opentelemetry.io/otel/metric/instrument/{syncint64,syncfloat64,asyncint64,asyncfloat64} packages to our pile of abandoned packages, this deprecates them, with the expectation that they will be removed in the next release.

  • Implement IsSampled for OpenTelemetry bridgeSpanContext

    Implement IsSampled for OpenTelemetry bridgeSpanContext

    This is a PR in response to issue https://github.com/open-telemetry/opentelemetry-go/issues/3532 .

    IsSampled is not exposed by OpenTracing itself, but at least some of its implementations exposed it (sometimes under a different name; Jaeger does). Considering that in OpenTracing this information was accessed by casting types, this implementation is no different: instances can be cast to an interface with the expected method in order to access it. Once users have moved to the bridge, the bridge can eventually be dropped and this information will continue to be exposed through OpenTelemetry's span context.
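
    A hedged sketch of the access pattern (the concrete bridge types are elided; the anonymous-interface assertion is the point):

    import opentracing "github.com/opentracing/opentracing-go"

    // isSampled reports the sampling decision of an OpenTracing span
    // context, if the implementation exposes one via IsSampled.
    func isSampled(sc opentracing.SpanContext) (sampled, ok bool) {
    	s, ok := sc.(interface{ IsSampled() bool })
    	if !ok {
    		return false, false
    	}
    	return s.IsSampled(), true
    }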

  • trace.NewTracerProvider logs configuration before storing spanProcessors

    trace.NewTracerProvider logs configuration before storing spanProcessors

    Description

    TracerProvider creation logs an incomplete configuration, leaving out the configured SpanProcessor structs.

    Environment

    • OS: MacOS
    • Architecture: x86_64
    • Go Version: 1.19
    • opentelemetry-go version: v1.11.2

    Steps To Reproduce

    1. Configure a logr-compatible logger with verbosity set higher than 5 (as per the OTel documentation, to enable debug logging).
    2. Create a TracerProvider using go.opentelemetry.io/otel/sdk/trace.NewTracerProvider(), with a configured SpanProcessor.
    3. The log emitted after the provider is created will be incorrect: it will show SpanProcessors as empty.

    Expected behavior

    The log message should display the full configuration of the TracerProvider struct.

    Comments

    I believe the offending block of code is the following:

    // NewTracerProvider returns a new and configured TracerProvider.
    //
    // By default the returned TracerProvider is configured with:
    //   - a ParentBased(AlwaysSample) Sampler
    //   - a random number IDGenerator
    //   - the resource.Default() Resource
    //   - the default SpanLimits.
    //
    // The passed opts are used to override these default values and configure the
    // returned TracerProvider appropriately.
    func NewTracerProvider(opts ...TracerProviderOption) *TracerProvider {
    	o := tracerProviderConfig{
    		spanLimits: NewSpanLimits(),
    	}
    	o = applyTracerProviderEnvConfigs(o)
    
    	for _, opt := range opts {
    		o = opt.apply(o)
    	}
    
    	o = ensureValidTracerProviderConfig(o)
    
    	tp := &TracerProvider{
    		namedTracer: make(map[instrumentation.Scope]*tracer),
    		sampler:     o.sampler,
    		idGenerator: o.idGenerator,
    		spanLimits:  o.spanLimits,
    		resource:    o.resource,
    	}
    	global.Info("TracerProvider created", "config", o)
    
    	spss := spanProcessorStates{}
    	for _, sp := range o.processors {
    		spss = append(spss, newSpanProcessorState(sp))
    	}
    	tp.spanProcessors.Store(spss)
    
    	return tp
    }
    

    As you can see, tp.spanProcessors.Store(spss) is called after the log entry has been created. For that reason, the spanProcessors attribute is empty in the log message.
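
    A hedged sketch of one possible fix, reordering the tail of NewTracerProvider so the processors are stored before the log is emitted (surrounding code as quoted above):

    	spss := spanProcessorStates{}
    	for _, sp := range o.processors {
    		spss = append(spss, newSpanProcessorState(sp))
    	}
    	tp.spanProcessors.Store(spss)

    	// Log only after the span processors have been stored so the
    	// reported configuration is complete.
    	global.Info("TracerProvider created", "config", o)

    	return tp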

Terraform provider implementation for interacting with the Tailscale API.

Terraform Provider Tailscale This repository contains a Terraform provider implementation for interacting with the Tailscale API.

Oct 3, 2022
A simple implementation to upload file to AWS S3

A simple implementation to upload file to AWS S3.

Nov 19, 2021
School POC of an AES implementation in an API/Client system

poc_aes_implement School POC of an AES implementation in an API/Client system How to use : Start the api with : poc-aes -api start Client commands : p

Nov 29, 2021
A work-in-progress implementation of MobileMe.

mobileme A work-in-progress implementation of MobileMe. At the moment, authentication is assumed to be with the username someuser and password testing

May 28, 2022
An implementation of a simple RESTful API in Golang on AWS infrastructure.

go-api An implementation of a simple RESTful API in Golang on AWS infrastructure. Tech Stack Serverless framework Go language AWS API Gateway AWS Lamb

Dec 25, 2021
Unofficial golang implementation for the pipl.com search API

go-pipl The unofficial golang wrapper for the pipl.com API. Table of Contents Installation Documentation Examples & Tests Benchmarks Code Standards Us

Nov 6, 2022
Unofficial golang implementation for the Preev API

go-preev The unofficial golang implementation for the Preev.pro API Table of Contents Installation Documentation Examples & Tests Benchmarks Code Stan

Sep 13, 2022
Arweave-api - Arweave API implementation in golang

Arweave API Go implementation of the Arweave API Todo A list of endpoints that a

Jan 16, 2022
ABAG - The implementation for the alternating trees problem specified in the task

ABAG - GO task This repo contains the implementation for the alternating trees p

Jan 6, 2022
Implementation of Technical Test - Article API

Technical Test on Article API Abstract For the technical test on an set of article API, this document outlines its requirements, and the design, devel

Feb 8, 2022
🔗 Unofficial golang implementation for the NOWNodes API

go-nownodes The unofficial golang implementation for the NOWNodes.io API Table of Contents Installation Documentation Examples & Tests Benchmarks Code

Jan 30, 2022
Qfy - Self-hosted implementation of Synthetics - Monitoring checks to validate your service availability

qfy Self-hosted implementation of Synthetics - Monitoring checks to validate you

Feb 23, 2022
OpenTelemetry log collection library

opentelemetry-log-collection Status This project was originally developed by observIQ under the name Stanza. It has been contributed to the OpenTeleme

Sep 15, 2022
A CLI tool that generates OpenTelemetry Collector binaries based on a manifest.

OpenTelemetry Collector builder This program generates a custom OpenTelemetry Collector binary based on a given configuration. TL;DR $ go get github.c

Sep 14, 2022
OpenTelemetry instrumentation for database/sql

otelsql It is an OpenTelemetry instrumentation for Golang database/sql, a port from https://github.com/open-telemetry/opentelemetry-go-contrib/pull/50

Dec 28, 2022
Example instrumentation of Golang Application with OpenTelemetry with supported configurations to export to Sentry.

Sentry + Opentelemetry Go Example Requirements To run this example, you will need a kubernetes cluster. This example has been tried and tested on Mini

Oct 27, 2022
OpenTelemetry integration for Watermill

Watermill OpenTelemetry integration Bringing distributed tracing support to Watermill with OpenTelemetry.

Sep 18, 2022
Tool for generating OpenTelemetry tracing decorators.

tracegen Tool for generating OpenTelemetry tracing decorators. Installation go get -u github.com/KazanExpress/tracegen/cmd/... Usage tracegen generate

Apr 7, 2022
OpenTelemetry instrumentations for Go

OpenTelemetry instrumentations for Go Instrumentation Package Metrics Traces database/sql ✔️ ✔️ GORM ✔️ ✔️ sqlx ✔️ ✔️ logrus ✔️ Zap ✔️ Contributing To

Dec 26, 2022
Instrumentations of third-party libraries using opentelemetry-go library

OpenTelemetry Go Contributions About This repository hosts instrumentations of the following OpenTelemetry libraries: confluentinc/confluent-kafka-go

Nov 14, 2022