A distributed, fault-tolerant pipeline for observability data

What Is Veneur?

Veneur (/vɛnˈʊr/, rhymes with “assure”) is a distributed, fault-tolerant pipeline for runtime data. It provides a server implementation of the DogStatsD protocol and SSF for aggregating metrics and sending them downstream to one or more supported sinks. It can also act as a global aggregator for histograms, sets and counters.

More generically, Veneur is a convenient sink for various observability primitives with lots of outputs!

Use Case

Once you cross a threshold into dozens, hundreds or (gasp!) thousands of machines emitting metric data for an application, you've moved into a world where data about individual hosts is uninteresting except in aggregate form. Instead of paying to store tons of data points and then aggregating them later at read time, Veneur can calculate global aggregates, like percentiles, and forward those along to your time series database.

Veneur is also a StatsD or DogStatsD protocol transport, forwarding the locally collected metrics over more reliable TCP implementations.

Here are some examples of why Stripe and other companies are using Veneur today:

  • reducing cost by pre-aggregating metrics such as timers into percentiles
  • creating a vendor-agnostic metric collection pipeline
  • consolidating disparate observability data (from trace spans to metrics, and more!)
  • improving efficiency over other metric aggregator implementations
  • improving reliability by replacing single points of failure with a more resilient forwarding system

We wanted percentiles, histograms and sets to be global. We wanted to unify our observability clients, be vendor agnostic and build automatic features like SLI measurement. Veneur helps us do all this and more!

Status

Veneur is currently handling all metrics for Stripe and is considered production ready. It is under active development and maintenance! Starting with v1.6, Veneur operates on a six-week release cycle, and all releases are tagged in git. If you'd like to contribute, see CONTRIBUTING!

Building Veneur requires Go 1.11 or later.

Features

Vendor And Backend Agnostic

Veneur has many sinks, so your data can be sent to one or more vendors, TSDBs or tracing stores!

Modern Metrics Format (Or Others!)

Unify metrics, spans and logs via the Sensor Sensibility Format. Also works with DogStatsD, StatsD and Prometheus.

Global Aggregation

If configured to do so, Veneur can selectively aggregate global metrics to be cumulative across all instances that report to a central Veneur, allowing global percentile calculation, global counters or global sets.

For example, say you emit a timer foo.bar.call_duration_ms from 20 hosts that are configured to forward to a central Veneur. You'll see the following:

  • Metrics that have been "globalized"
    • foo.bar.call_duration_ms.50percentile: the p50 across all hosts, by tag
    • foo.bar.call_duration_ms.90percentile: the p90 across all hosts, by tag
    • foo.bar.call_duration_ms.95percentile: the p95 across all hosts, by tag
    • foo.bar.call_duration_ms.99percentile: the p99 across all hosts, by tag
  • Metrics that remain host-local
    • foo.bar.call_duration_ms.avg: by-host tagged average
    • foo.bar.call_duration_ms.count: by-host tagged count which (when summed) shows the total count of times this metric was emitted
    • foo.bar.call_duration_ms.max: by-host tagged maximum value
    • foo.bar.call_duration_ms.median: by-host tagged median value
    • foo.bar.call_duration_ms.min: by-host tagged minimum value
    • foo.bar.call_duration_ms.sum: by-host tagged sum value representing the total time

Clients can choose to override this behavior by including the tag veneurlocalonly.

Approximate Histograms

Because Veneur is built to handle lots and lots of data, it uses approximate histograms. We have our own implementation of Dunning's t-digest, which has bounded memory consumption and reduced error at extreme quantiles. Metrics are consistently routed to the same worker to distribute load and to be added to the same histogram.

Datadog's DogStatsD — and StatsD — use exact histograms which retain all samples and are reset every flush period. This means that there is a loss of precision when using Veneur, but the resulting percentile values are meant to be more representative of a global view.

Datadog Distributions

Because Veneur already handles "global" histograms, any DogStatsD packets received with type d — Datadog's distribution type — will be treated as histograms and are therefore compatible with all sinks. Veneur does not send any metrics to Datadog typed as a Datadog-native distribution.

Approximate Sets

Veneur uses HyperLogLogs for approximate unique sets. These are very efficient unique counters with fixed memory consumption.

Global Counters

Via an optional magic tag Veneur will forward counters to a global host for accumulation. This feature was primarily developed to control tag cardinality. Some counters are valuable but do not require per-host tagging.

Concepts

  • Global metrics are those that benefit from being aggregated for chunks — or all — of your infrastructure. These are histograms (including the percentiles generated by timers) and sets.
  • Metrics that are sent to another Veneur instance for aggregation are said to be "forwarded". This terminology helps to decipher configuration and metric options below.
  • Flushed, in Veneur, means metrics or spans processed by a sink.

By Metric Type Behavior

To clarify how each metric type behaves in Veneur, please use the following:

  • Counters: Locally accrued, flushed to sinks (see magic tags for global version)
  • Gauges: Locally accrued, flushed to sinks (see magic tags for global version)
  • Histograms: Locally accrued, count, max and min flushed to sinks, percentiles forwarded to forward_address for global aggregation when set.
  • Timers: Locally accrued, count, max and min flushed to sinks, percentiles forwarded to forward_address for global aggregation when set.
  • Sets: Locally accrued, forwarded to forward_address for global aggregation when set.

Expiration

Veneur expires all metrics on each flush. If a metric is no longer being sent (or is sent sparsely) Veneur will not send it as zeros! This was chosen because the combination of the approximation's features and the additional hysteresis imposed by retaining these approximations over time was deemed more complex than desirable.

Other Notes

  • Veneur aligns its flush timing with the local clock. For the default interval of 10s Veneur will generally emit metrics at 00, 10, 20, 30, … seconds after the minute.
  • Veneur will delay its first metric emission to align with the clock as stated above. This may result in a brief quiet period after a restart, at worst less than interval seconds long.

Usage

veneur -f example.yaml

See example.yaml for a sample config. Be sure to set the appropriate *_api_key!

Setup

Here we document and explain some of the setup choices you may make when using Veneur.

Clients

Veneur is capable of ingesting:

  • DogStatsD including events and service checks
  • SSF
  • StatsD as a subset of DogStatsD, but this may cause trouble depending on where you store your metrics.

To use clients with Veneur you need only configure your client of choice to the proper host and port combination. This port should match one of:

  • statsd_listen_addresses for UDP- and TCP-based clients
  • ssf_listen_addresses for SSF-based clients using UDP or UNIX domain sockets.
  • grpc_listen_addresses for both SSF- and DogStatsD-based clients using gRPC (over TCP).
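
As a rough configuration sketch (the addresses and ports below are illustrative; see example.yaml for the authoritative forms), a Veneur accepting DogStatsD over UDP and TCP plus SSF over UDP might use:

statsd_listen_addresses:
  - "udp://localhost:8126"
  - "tcp://localhost:8129"
ssf_listen_addresses:
  - "udp://localhost:8128"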

Einhorn Usage

When you upgrade Veneur (deploy, stop, start with new binary) there will be a brief period where Veneur will not be able to handle HTTP requests. At Stripe we use Einhorn as a shared socket manager to bridge the gap until Veneur is ready to handle HTTP requests again.

You'll need to consult Einhorn's documentation for installation, setup and usage. But once you've done that you can tell Veneur to use Einhorn by setting http_address to einhorn@0. This informs goji/bind to use its Einhorn handling code to bind to the file descriptor for HTTP.
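
In configuration terms this is a single field; a minimal snippet, assuming Einhorn is managing the socket, looks like:

http_address: "einhorn@0"

Here 0 refers to the first socket Einhorn is managing.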

Forwarding

Veneur instances can be configured to forward their global metrics to another Veneur instance. You can use this feature to get the best of both worlds: metrics that benefit from global aggregation can be passed up to a single global Veneur, but other metrics can be published locally with host-scoped information. Note: Forwarding adds an additional delay to metric availability corresponding to the value of the interval configuration option, as the local Veneur flushes to its configured upstream, which then flushes any received metrics when its own interval expires.

If a local instance receives a histogram or set, it will publish the local parts of that metric (the count, min and max) directly to sinks, but instead of publishing percentiles, it will package the entire histogram and send it to the global instance. The global instance will aggregate all the histograms together and publish their percentiles to sinks.

Note that the global instance can also receive metrics over UDP. It will publish a count, min and max for the samples that were sent directly to it, but not counting any samples from other Veneur instances (this ensures that things don't get double-counted). You can even chain multiple levels of forwarding together if you want. This might be useful if, for example, your global Veneur is under too much load. The root of the tree will be the Veneur instance that has an empty forward_address. (Do not tell a Veneur instance to forward metrics to itself. We don't support that and it doesn't really make sense in the first place.)

With respect to the tags configuration option, the tags that will be added are those of the Veneur that actually publishes to a sink. If a local instance forwards its histograms and sets to a global instance, the local instance's tags will not be attached to the forwarded structures. It will still use its own tags for the other metrics it publishes, but the percentiles will get extra tags only from the global instance.

Proxy

To improve availability, you can leverage veneur-proxy in conjunction with Consul service discovery.

The proxy can be configured to query the Consul API for instances of a service using consul_forward_service_name. Each healthy instance is then entered into a hash ring. When choosing which host to forward to, Veneur uses a combination of metric name and tags to consistently choose the same host for forwarding.
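
For illustration only (the service name here is hypothetical; consult the veneur-proxy documentation for the full set of options), the relevant proxy setting looks like:

# veneur-proxy: discover global Veneur instances via Consul
consul_forward_service_name: "veneur-global"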

See more documentation for Proxy Veneur.

Static Configuration

For static configuration you need one Veneur, which we'll call the global instance, and one or more other Veneurs, which we'll call local instances. The local instances should have their forward_address configured to the global instance's http_address. The global instance should have an empty forward_address (i.e. just don't set it). You can then report metrics to any Veneur's statsd_listen_addresses as usual.
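
A sketch of this arrangement, with illustrative hostnames and ports:

# local instances
statsd_listen_addresses:
  - "udp://localhost:8126"
forward_address: "http://veneur-global.internal:8127"

# global instance
http_address: "0.0.0.0:8127"
# forward_address is left unset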

Magic Tag

If you want a metric to be strictly host-local, you can tell Veneur not to forward it by including a veneurlocalonly tag in the metric packet, eg foo:1|h|#veneurlocalonly. This tag will not actually appear in storage; Veneur removes it.

Global Counters And Gauges

Relatedly, if you want to forward a counter or gauge to the global Veneur instance to reduce tag cardinality, you can tell Veneur to flush it to the global instance by including a veneurglobalonly tag in the metric's packet. This veneurglobalonly tag is stripped and will not be passed on to sinks.
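
For example, a counter emitted as signups.completed:1|c|#veneurglobalonly will be accumulated on the global instance rather than per host (the metric name here is illustrative).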

Note: For global counters to report correctly, the local and global Veneur instances should be configured to have the same flush interval.

Note: Global gauges are "random write wins" since they are merged in a non-deterministic order at the global Veneur.

Routing metrics

Veneur supports specifying that metrics should only be routed to a specific metric sink, with the veneursinkonly:<sink_name> tag. The <sink_name> value can be any configured metric sink. Currently, that's datadog, kafka, signalfx, prometheus. It's possible to specify multiple sink destination tags on a metric, which will cause the metric to be routed to each sink specified.
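
For example, api.requests:1|c|#veneursinkonly:datadog is routed only to the Datadog sink, while api.requests:1|c|#veneursinkonly:datadog,veneursinkonly:kafka is routed to both Datadog and Kafka (the metric name is illustrative).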

Configuration

Veneur expects to have a config file supplied via -f PATH. The included example.yaml explains all the options!

The config file can be validated using a pair of flags:

  • -validate-config: checks that the config file specified via -f is valid YAML, and has correct datatypes for all fields.
  • -validate-config-strict: checks the above, and also that there are no unknown fields.
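
For example, to check a config before deploying it:

veneur -f example.yaml -validate-config-strict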

Configuration via Environment Variables

Veneur and veneur-proxy each allow configuration via environment variables using envconfig. Options provided via environment variables take precedence over those in the config file. This allows stuff like:

VENEUR_DEBUG=true veneur -f someconfig.yml

Note: The environment variables used for configuration map to the field names in config.go, capitalized, with the prefix VENEUR_. For example, the environment variable equivalent of datadog_api_hostname is VENEUR_DATADOGAPIHOSTNAME.

You may specify configurations that are arrays by separating them with a comma, for example VENEUR_AGGREGATES="min,max"
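
Following the same naming rule, forward_address should map to VENEUR_FORWARDADDRESS and stats_address to VENEUR_STATSADDRESS.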

Monitoring

Here are the important things to monitor with Veneur:

At Local Node

When running as a local instance, you will be primarily concerned with the following metrics:

  • veneur.flush*.error_total as a count of errors when flushing metrics. This should rarely happen. Occasional errors are fine, but sustained is bad.

Forwarding

If you are forwarding metrics to a central Veneur, you'll want to monitor these:

  • veneur.forward.error_total and the cause tag. This should pretty much never happen and definitely not be sustained.
  • veneur.forward.duration_ns and veneur.forward.duration_ns.count. These metrics track the per-host time spent performing a forward. The time should be minimal!

At Global Node

When forwarding you'll want to also monitor the global nodes you're using for aggregation:

  • veneur.import.request_error_total and the cause tag. This should pretty much never happen and definitely not be sustained.
  • veneur.import.response_duration_ns and veneur.import.response_duration_ns.count to monitor duration and number of received forwards. This should not fail and not take very long. How long it takes will depend on how many metrics you're forwarding.
  • And the same veneur.flush.* metrics from the "At Local Node" section.

Metrics

Veneur will emit metrics to the stats_address configured above in DogStatsD form. Those metrics are:

  • veneur.sink.metric_flush_total_duration_ns.* - Duration of flushes per-sink, tagged by sink.
  • veneur.packet.error_total - Number of packets that Veneur could not parse due to some sort of formatting error by the client. Tagged by packet_type and reason.
  • veneur.forward.post_metrics_total - Indicates how many metrics are being forwarded in a given POST request. A "metric", in this context, refers to a unique combination of name, tags and metric type.
  • veneur.*.content_length_bytes.* - The number of bytes in a single POST body. Remember that Veneur POSTs large sets of metrics in multiple separate bodies in parallel. Uses a histogram, so there are multiple metrics generated depending on your local DogStatsD config.
  • veneur.forward.duration_ns - Same as flush.duration_ns, but for forwarding requests.
  • veneur.flush.error_total - Number of errors received POSTing via sinks.
  • veneur.forward.error_total - Number of errors received POSTing to an upstream Veneur. See also import.request_error_total below.
  • veneur.gc.number - Number of completed GC cycles.
  • veneur.gc.pause_total_ns - Total nanoseconds of stop-the-world GC pauses since the program started.
  • veneur.mem.heap_alloc_bytes - Bytes of allocated heap objects, including unreachable objects that the garbage collector has not yet freed.
  • veneur.worker.metrics_processed_total - Total number of metric packets processed between flushes by workers, tagged by worker. This helps you find hot spots where a single worker is handling a lot of metrics. The sum across all workers should be approximately proportional to the number of packets received.
  • veneur.worker.metrics_flushed_total - Total number of metrics flushed at each flush time, tagged by metric_type. A "metric", in this context, refers to a unique combination of name, tags and metric type. You can use this metric to detect when your clients are introducing new instrumentation, or when you acquire new clients.
  • veneur.worker.metrics_imported_total - Total number of metrics received via the importing endpoint. A "metric", in this context, refers to a unique combination of name, tags, type and originating host. This metric indicates how much of a Veneur instance's load is coming from imports.
  • veneur.import.response_duration_ns - Time spent responding to import HTTP requests. This metric is broken into part tags for request (time spent blocking the client) and merge (time spent sending metrics to workers).
  • veneur.import.request_error_total - A counter for the number of import requests that have errored out. You can use this for monitoring and alerting when imports fail.
  • veneur.listen.received_per_protocol_total - A counter for the number of metrics/spans/etc. received by direct listening on global Veneur instances. This can be used to observe metrics that were received from direct emits as opposed to imports. Tagged by protocol.

Error Handling

In addition to logging, Veneur will dutifully send any errors it generates to a Sentry instance. This will occur if you set the sentry_dsn configuration option. Not setting the option will disable Sentry reporting.

Performance

Processing packets quickly is the name of the game.

Benchmarks

The common use case for Veneur is as an aggregator and host-local replacement for DogStatsD, therefore processing UDP fast is no longer the priority. That said, we were processing > 60k packets/second in production before shifting to the current local aggregation method. This outperformed both the Datadog-provided DogStatsD and StatsD in our infrastructure.

SO_REUSEPORT

As other implementations have observed, there's a limit to how many UDP packets a single kernel thread can consume before it starts to fall over. Veneur supports the SO_REUSEPORT socket option on Linux, allowing multiple threads to share the UDP socket with kernel-space balancing between them. If you've tried throwing more cores at Veneur and it's just not going fast enough, this feature can probably help by allowing more of those cores to work on the socket (which is Veneur's hottest code path by far). Note that this is only supported on Linux (right now). We have not added support for other platforms, like darwin and BSDs.

TCP connections

Veneur supports reading the statsd protocol from TCP connections. This is mostly to support TLS encryption and authentication, but might be useful on its own. Since TCP is a continuous stream of bytes, this requires each stat to be terminated by a new line character ('\n'). Most statsd clients only add new lines between stats within a single UDP packet, and omit the final trailing new line. This means you will likely need to modify your client to use this feature.
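
As a quick illustration (the port and metric name are arbitrary), you can push a single newline-terminated stat over TCP with netcat:

printf 'foo.bar:1|c\n' | nc localhost 8129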

TLS encryption and authentication

If you specify the tls_key and tls_certificate options, Veneur will only accept TLS connections on its TCP port. This allows the metrics sent to Veneur to be encrypted.

If you specify the tls_authority_certificate option, Veneur will require clients to present a client certificate, signed by this authority. This ensures that only authenticated clients can connect.

You can generate your own set of keys using openssl:

# Generate the authority key and certificate (2048-bit RSA signed using SHA-256)
openssl genrsa -out cakey.pem 2048
openssl req -new -x509 -sha256 -key cakey.pem -out cacert.pem -days 1095 -subj "/O=Example Inc/CN=Example Certificate Authority"

# Generate the server key and certificate, signed by the authority
openssl genrsa -out serverkey.pem 2048
openssl req -new -sha256 -key serverkey.pem -out serverkey.csr -days 1095 -subj "/O=Example Inc/CN=veneur.example.com"
openssl x509 -sha256 -req -in serverkey.csr -CA cacert.pem -CAkey cakey.pem -CAcreateserial -out servercert.pem -days 1095

# Generate a client key and certificate, signed by the authority
openssl genrsa -out clientkey.pem 2048
openssl req -new -sha256 -key clientkey.pem -out clientkey.csr -days 1095 -subj "/O=Example Inc/CN=Veneur client key"
openssl x509 -req -in clientkey.csr -CA cacert.pem -CAkey cakey.pem -CAcreateserial -out clientcert.pem -days 1095

Set statsd_listen_addresses, tls_key, tls_certificate, and tls_authority_certificate:

statsd_listen_addresses:
  - "tcp://localhost:8129"
tls_certificate: |
  -----BEGIN CERTIFICATE-----
  MIIC8TCCAdkCCQDc2V7P5nCDLjANBgkqhkiG9w0BAQsFADBAMRUwEwYDVQQKEwxC
  ...
  -----END CERTIFICATE-----
tls_key: |
  -----BEGIN RSA PRIVATE KEY-----
  MIIEpAIBAAKCAQEA7Sntp4BpEYGzgwQR8byGK99YOIV2z88HHtPDwdvSP0j5ZKdg
  ...
  -----END RSA PRIVATE KEY-----
tls_authority_certificate: |
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----

Performance implications of TLS

Establishing a TLS connection is fairly expensive, so you should reuse connections as much as possible. RSA keys are also far more expensive than using ECDH keys. Using localhost on a machine with one CPU, Veneur was able to establish ~700 connections/second using ECDH prime256v1 keys, but only ~110 connections/second using RSA 2048-bit keys. According to the Go profiling for a Veneur instance using TLS with RSA keys, approximately 25% of the CPU time was in the TLS handshake, and 13% was decrypting data.
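
If you want the cheaper elliptic-curve handshakes, one option is to generate a prime256v1 key in place of the RSA genrsa steps above (a sketch; the subsequent req/x509 signing steps stay the same):

# Generate a prime256v1 (P-256) server key instead of RSA
openssl ecparam -name prime256v1 -genkey -noout -out serverkey.pem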

Name

The veneur was the person acting as superintendent of the chase, and especially of hounds, in French medieval venery, and an important officer of the royal household. In other words, the master of dogs. :)

Comments
  • Drop statsd client from veneur's internal metrics reporting

    Summary

    This PR reworks veneur such that its metrics get reported not through statsd, spewing packets at its own UDP listening port, but sending them through its own *trace.Client (typically internal), by attaching them to an SSF span.

    Motivation

    I'd like us to cause as little network traffic as possible, reduce syscalls, and gain more clarity around what gets reported and how.

    Test plan

    I made sure the tests still work, but then most of them don't exercise the path where we actually send metrics through the trace backend.

    To try it out, roll this to our QA and see:

    • [x] do metrics still get reported in the first place? (They should be!)
    • [x] how are dashboards affected on the hosts that this is running on?
    • [x] how is CPU / memory usage affected by this, possibly syscall volume too?

    Rollout/monitoring/revert plan

    This will likely affect dashboards - if you roll this out, you will notice a severe drop in UDP packets.

    There is an ordering requirement for rolling it out, too:

    1. Roll out local veneur servers
    2. Roll out global veneur servers
    3. Roll out veneur proxies

    Proxies must come last because they report SSF metrics to a local veneur server, and that server must know to expect them/report them correctly - so the versions must match up

  • Added a more performant Hyperloglog.

    Summary

    Changed the current implementation of the Hyperloglog by Clark Duvall to the Axiom one.

    Motivation

    Based on the intent on https://news.ycombinator.com/item?id=14636699 I took a stab at using the axiomhq implementation to reduce resource usage.

    Test plan

    I adapted and used the existing samplers_test.go test case.

    Rollout/monitoring/revert plan

    A rollout should be fairly easy, as it only involves changing the import and a few method calls.

  • Support forwarding over gRPC

    Summary

    As requested by @cory-stripe in #390, this adds a gRPC service that duplicates the functionality of /import, allowing forwarding over gRPC instead of HTTP. I implemented this for both veneur and veneur-proxy. For both services it should be possible to run the HTTP and gRPC servers simultaneously, allowing for a gradual switch of local Veneurs over time. For Veneurs that are forwarding metrics, I added a configuration option that controls whether forwarding happens over gRPC or HTTP.

    I tried to break this change down into somewhat descriptive and self-contained commits, which I think might be more useful than my essay in #390 😄 Let me know if you'd like more description and I'm happy to add it.

    Sorry for making this so gigantic! I'm happy to break this down into smaller PRs if you'd like. One division that I could see working would be 3 PRs consisting of the following, although this is pretty much contained in the commits I made:

    • Implementing the veneur gRPC server
    • Implementing the veneur-proxy gRPC server
    • Adding forwarding support over gRPC

    Motivation

    We (@quantcast) are very motivated to contribute changes that fix issue #155 so that we can heavily reduce the cardinality of many of our metrics that are submitted to Datadog. Hopefully this puts us on a path where we can get that merged!

    Test plan

    I added a pretty good number of tests to verify the new endpoint. This includes the following:

    • Tests for the new gRPC proxy server
    • Tests for the gRPC import server
    • Tests of the new flushing behavior over gRPC
    • Tests for the Worker's ingesting metrics from gRPC
    • An end-to-end test that follows the chain of forwarding from a local Veneur to being flushed by a global instance.

    I've also created a small development setup that I'm currently testing these changes on, and it's looking good so far. If all goes well we will probably try to canary this to our production proxies and global Veneurs in the next week.

    TODOs

    • [x] Add tests for the new functions added to the various samplers

    Rollout/monitoring/revert plan

    This can be deployed to local Veneurs, global Veneurs, and proxies in any order, and should cause no behavior change until gRPC configuration options are edited.

    Obviously, before enabling gRPC on local Veneurs you will want to have enabled the gRPC servers on both the proxies and global Veneurs 😄

    CC: @stripe/observability

  • Accept statsd from authenticated TLS connections

    Summary

    By default, Veneur (and statsd, and dogstatsd) accepts stats from any client that can send UDP packets to the correct port. For slightly bizarre reasons, it is useful to only accept data from authenticated clients. An "easy" way to do this securely is to use an encrypted TLS connection, and require the client to present a certificate signed by a trusted authority. This reuses the existing authentication mechanism in TLS, which is hopefully secure.

    This adds configuration options to read the statsd protocol from a TLS connection. It will only accept connections with a client certificate, signed by the configured authority.

    Motivation

    See Issue #117

    Test plan

    Currently we (Bluecore Inc) are running this change as an experiment for getting metrics out of App Engine. We will be ramping up our usage of it over the next couple weeks.

    There are no unit tests here yet: If this seems reasonable, I'll attempt to add some for the TCP connection reader, and possibly for the TLS listen socket setup.

    Rollout/monitoring/revert plan

    This adds tls.connects and tls.disconnects counters to monitor connections with TLS. Maybe this should be a gauge, but then you might not see if clients are in reconnect loops.

  • Add a Kafka sink

    Summary

    Add a sink for sinking metrics or spans to a Kafka topic

    Motivation

    From @parabuzzle,

    I want to be able to sink my metrics to a Kafka topic for later consuming and stream processing.

    Me too! I've made lots of changes to his original PR. Partly because the interface changed underneath him, but also because we wanted to include spans!

    This PR adds a large pile of config options — sorry not sorry, Kafka has lots of options! — and sets up an asynchronous Kafka producer that writes to the supplied topic(s). Users can configure their producer to "flush" based on messages, bytes or durations, and can configure spans to be encoded as either JSON or Protobuf.

    Test plan

    Tests added and local tests confirm things work!

  • Align tick with interval boundaries for bucketing.

    Summary

    Align Veneur's internal flush ticker with the interval for nice bucketing.

    Motivation

    We got an issue from @hermanbanken in #296 that pointed out that Veneur's choice to "tick" outside of a convenient boundary was causing some bucketing issues. This isn't the first time we've heard this, but it's the first time someone helped us understand why.

    To that end, this patch changes Veneur to delay the start of its ticker by <= interval so as to align with nice, round clock buckets.

    Test plan

    Test incoming, in writing this PR text I realized how to test it. :)

    r? @stripe/observability

  • Veneur panic

    Our Veneurs (v1.3.1) started crashing sporadically last Thursday. Yesterday, I compiled from master and updated them all, but the crash seems to persist.

    The problem is that it's occasional and I can't seem to be able to debug the ingested metrics.

    Thank you for any help.

    LOG:
    time="2018-11-27T13:38:11Z" level=info msg="Completed flush to Datadog" metrics=2403
    fatal error: fault
    unexpected fault address 0x10f9000
    	/go/src/github.com/stripe/veneur/worker.go:259 +0x3ee fp=0xc000ac7fa8 sp=0xc000ac7c28 pc=0x10f854e
    github.com/stripe/veneur.(*Worker).Work(0xc000362fc0)
    	/go/src/github.com/stripe/veneur/worker.go:311 +0x9fa fp=0xc000ac7c28 sp=0xc000ac7a90 pc=0x10f8ffa
    github.com/stripe/veneur.(*Worker).ProcessMetric(0xc000362fc0, 0xc000ac7dc8)
    	/usr/local/go/src/runtime/signal_unix.go:397 +0x275 fp=0xc000ac7a90 sp=0xc000ac7a40 pc=0x441aa5
    runtime.sigpanic()
    	/usr/local/go/src/runtime/panic.go:608 +0x72 fp=0xc000ac7a40 sp=0xc000ac7a10 pc=0x42bf72
    runtime.throw(0x13cc01a, 0x5)
    goroutine 134 [running]:
    
    [signal SIGSEGV: segmentation violation code=0x2 addr=0x10f9000 pc=0x10f8ffa]
    net.(*conn).Read(0xc00000c050, 0xc0004e8000, 0x2000, 0x2000, 0x0, 0x0, 0x0)
    	/usr/local/go/src/net/fd_unix.go:202 +0x4f
    net.(*netFD).Read(0xc000401180, 0xc0004e8000, 0x2000, 0x2000, 0x408a7b, 0xc00002e000, 0x125a980)
    	/usr/local/go/src/internal/poll/fd_unix.go:169 +0x179
    internal/poll.(*FD).Read(0xc000401180, 0xc0004e8000, 0x2000, 0x2000, 0x0, 0x0, 0x0)
    	/usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
    internal/poll.(*pollDesc).waitRead(0xc000401198, 0xc0004e8000, 0x2000, 0x2000)
    	/usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0x9a
    internal/poll.(*pollDesc).wait(0xc000401198, 0x72, 0xffffffffffffff00, 0x15b5280, 0x206e558)
    	/usr/local/go/src/runtime/netpoll.go:173 +0x66
    internal/poll.runtime_pollWait(0x7f131c6a9950, 0x72, 0xc0006c2870)
    goroutine 36 [IO wait]:
    
    	/go/src/github.com/stripe/veneur/cmd/veneur/main.go:94 +0x2eb
    main.main()
    	/go/src/github.com/stripe/veneur/server.go:1074 +0x6f
    github.com/stripe/veneur.(*Server).Serve(0xc0005dc600)
    goroutine 1 [chan receive, 244 minutes]:
    
    	/go/src/github.com/stripe/veneur/server.go:273 +0x95c
    created by github.com/stripe/veneur.NewFromConfig
    	/usr/local/go/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc000ac7fd8 sp=0xc000ac7fd0 pc=0x45bab1
    runtime.goexit()
    	/go/src/github.com/stripe/veneur/server.go:277 +0x51 fp=0xc000ac7fd0 sp=0xc000ac7fa8 pc=0x10fe351
    github.com/stripe/veneur.NewFromConfig.func1(0xc0005dc600, 0xc000362fc0)
    
    
    CONFIG:
    statsd_listen_addresses:
     - udp://0.0.0.0:8125
    tls_key: ""
    tls_certificate: ""
    tls_authority_certificate: ""
    forward_address: ""
    forward_use_grpc: false
    interval: "10s"
    synchronize_with_interval: false
    stats_address: "localhost:8125"
    http_address: "0.0.0.0:8127"
    grpc_address: "0.0.0.0:8128"
    indicator_span_timer_name: ""
    percentiles:
      - 0.99
      - 0.95
    aggregates:
      - "min"
      - "max"
      - "median"
      - "avg"
      - "count"
      - "sum"
    num_workers: 96
    num_readers: 4
    num_span_workers: 10
    span_channel_capacity: 100
    metric_max_length: 4096
    trace_max_length_bytes: 16384
    read_buffer_size_bytes: 4194304
    debug: false
    debug_ingested_spans: false
    debug_flushed_metrics: false
    mutex_profile_fraction: 0
    block_profile_rate: 0
    sentry_dsn: ""
    enable_profiling: false
    datadog_api_hostname: https://app.datadoghq.com
    datadog_api_key: "DATADOG_KEY"
    datadog_flush_max_per_body: 25000
    datadog_trace_api_address: ""
    datadog_span_buffer_size: 16384
    aws_access_key_id: ""
    aws_secret_access_key: ""
    aws_region: ""
    aws_s3_bucket: ""
    flush_file: ""
    

    ADDITIONAL INFO: They receive metrics via UDP at a rate no lower than 1000/s. We healthcheck via TCP/8127 /healthcheck and the servers are replaced on failure. (AWS ASG)

  • Add a gRPC "import" server

    This PR contains a subset of the changes in #407, which enables full gRPC support

    Summary

    This adds a gRPC "import" server, which receives metrics and outputs them to a set of generic MetricIngesters. This should be used to receive metrics forwarded from another Veneur.

    importsrv package

    The importsrv package defines a gRPC server that implements the forwardrpc.Forward service. It receives batches of metrics, and sends them on to a set of sinks based on their key. To facilitate easy testing of this RPC, I made the server accept an interface MetricIngester, which is what is used to forward the metrics. Worker will later implement this interface, allowing for them to be passed directly into this server.

    Optimizations for gRPC imports

    From profiling global Veneurs in our production environment, we see that the garbage collector is generally dominating the CPU usage. Digging in a little more, it looks like the process to hash a metric to a destination worker in the "importsrv" package is actually doing a ton of allocations (about 20% of total allocated objects).

    To optimize this, I pulled in an external fnv hash implementation github.com/segmentio/fasthash that handles strings with zero allocations and doesn't use interface types. I also removed usages of "MetricKey", as it does a lot of string manipulations that perform a ton of allocations, in favor of just writing strings directly into the hash.

    The benchmarks show nice improvements in the number of allocations. There is a 2x reduction for a small number of metrics (10 to be specific) and a 6x reduction for 100 metrics, which is closer to our average of around 180 currently.

    While this seems like a nice improvement to me, it does require the additional dependency. I put these changes in a separate commit (fc6634a), and I'm happy to revert it if the performance gain doesn't seem worth it!

    Motivation

    Like I also said in #439, we (@quantcast) would like to contribute gRPC support to eventually enable global histogram aggregations (#390). We are also already running this.

    Test plan

    I wrote a number of tests for the new package. Very similar code is also currently running in our production environment and handling almost our entire metric volume without issues.

    Rollout/monitoring/revert plan

    As with #439, all of the new code is (so far) unused, and so there should be no functional changes and any deployment should be a no-op!

    CC: @sdboyer-stripe @cory-stripe @stripe/observability

  • Use shared http.Client for signalfx sink

    Summary

    basically see commits

    r? @aditya-stripe

    Motivation

    the sfx sink should use the same http client object as the rest of veneur, so it can share keepalive connections

    Test plan

    been baking in qa for a while

    Rollout/monitoring/revert plan

    as normal

  • Extract and set host header, return X-Powered-By

    Summary

    In order to use Veneur behind a webserver or a proxy, some changes need to be applied. Veneur modifies the endpoint to avoid DNS caching and does not preserve the original host header; this breaks Veneur if it sits behind a proxy that is listening for more than one site.

    Motivation

    Support usage of Veneur behind another webserver/proxy (Nginx, Apache, etc.), and add visibility once a request has reached the Veneur instance using X-Powered-By.

    Test plan

    I wrote a test to verify the new extractHostPort works as expected and that I didn't break any other tests.

  • sentry.go: sentryHook.Fire: Don't hang for Fatal logs without Sentry

    Summary

    If s.c is nil, s.c.Capture returns a nil chan. Reading from it blocks forever. This means that if someone logs at logrus Fatal or Panic levels without Sentry configured, the caller will hang instead of crashing the process. This is the same bug as commit d04711e9c2, just in a different place.

    It turns out the unit tests were depending on this behaviour, since they were reusing ports and hitting this error. With this fix, they were crashing. PR #134 fixes that problem.

    Motivation

    I was debugging something and found that Veneur was hanging, not crashing. This was the cause.

    Test plan

    So far, I've only run the unit tests.

  • Bump github.com/aws/aws-sdk-go from 1.31.13 to 1.33.0

    Bumps github.com/aws/aws-sdk-go from 1.31.13 to 1.33.0.

    Changelog

    Sourced from github.com/aws/aws-sdk-go's changelog.

    Release v1.33.0 (2020-07-01)

    Service Client Updates

    • service/appsync: Updates service API and documentation
    • service/chime: Updates service API and documentation
      • This release supports third party emergency call routing configuration for Amazon Chime Voice Connectors.
    • service/codebuild: Updates service API and documentation
      • Support build status config in project source
    • service/imagebuilder: Updates service API and documentation
    • service/rds: Updates service API
      • This release adds the exceptions KMSKeyNotAccessibleFault and InvalidDBClusterStateFault to the Amazon RDS ModifyDBInstance API.
    • service/securityhub: Updates service API and documentation

    SDK Features

    • service/s3/s3crypto: Introduces EncryptionClientV2 and DecryptionClientV2 encryption and decryption clients which support a new key wrapping algorithm kms+context. (#3403)
      • DecryptionClientV2 maintains the ability to decrypt objects encrypted using the EncryptionClient.
      • Please see s3crypto documentation for migration details.

    Release v1.32.13 (2020-06-30)

    Service Client Updates

    • service/codeguru-reviewer: Updates service API and documentation
    • service/comprehendmedical: Updates service API
    • service/ec2: Updates service API and documentation
      • Added support for tag-on-create for CreateVpc, CreateEgressOnlyInternetGateway, CreateSecurityGroup, CreateSubnet, CreateNetworkInterface, CreateNetworkAcl, CreateDhcpOptions and CreateInternetGateway. You can now specify tags when creating any of these resources. For more information about tagging, see AWS Tagging Strategies.
    • service/ecr: Updates service API and documentation
      • Add a new parameter (ImageDigest) and a new exception (ImageDigestDoesNotMatchException) to PutImage API to support pushing image by digest.
    • service/rds: Updates service documentation
      • Documentation updates for rds

    Release v1.32.12 (2020-06-29)

    Service Client Updates

    • service/autoscaling: Updates service documentation and examples
      • Documentation updates for Amazon EC2 Auto Scaling.
    • service/codeguruprofiler: Updates service API, documentation, and paginators
    • service/codestar-connections: Updates service API, documentation, and paginators
    • service/ec2: Updates service API, documentation, and paginators
      • Virtual Private Cloud (VPC) customers can now create and manage their own Prefix Lists to simplify VPC configurations.

    Release v1.32.11 (2020-06-26)

    Service Client Updates

    • service/cloudformation: Updates service API and documentation
      • ListStackInstances and DescribeStackInstance now return a new StackInstanceStatus object that contains DetailedStatus values: a disambiguation of the more generic Status value. ListStackInstances output can now be filtered on DetailedStatus using the new Filters parameter.
    • service/cognito-idp: Updates service API

    ... (truncated)

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

  • simplify batching on flush to cortex sink

    Summary

    This change uses a simple goroutine + channel to implement the batching logic. Breaking out of the loop requires a label, but I did avoid using goto.

    Motivation

    Simplifying the code to enable the use of channel with select without a function closure.

    Test plan

    Updated existing tests; there is no functional change. However, one test did require an update because the async behavior changes subtly.

    Rollout/monitoring/revert plan

    This change can be reverted and should not change the behavior after deploy.

  • Conflicting documentation

    https://github.com/stripe/veneur/blob/3cec7b5735e6ea6b498aeb40ed36fc75af0cf824/example_host.yaml#L7-L10

    https://github.com/stripe/veneur/blob/3cec7b5735e6ea6b498aeb40ed36fc75af0cf824/example.yaml#L186-L195

    which field relates to SO_REUSEPORT requirements?

  • Support SO_REUSEPORT on darwin/etc

    Support for SO_REUSEPORT on darwin now exists https://github.com/stripe/veneur/blob/3cec7b5735e6ea6b498aeb40ed36fc75af0cf824/socket_linux.go#LL23C5-L24C46

    https://github.com/golang/go/issues/16075

  • Fixing behavior where we bail on all batches if one fails

    Summary

    No longer erring out if one batch fails

    Motivation

    We want as many metrics as possible to get written; errors may be related to things like writing duplicate time series, which shouldn't stop the entire loop.

    Test plan

    Rollout/monitoring/revert plan

  • Replace proto.Marshal with direct call to Marshal's

    Summary

    Replace proto.Marshal with direct call to Marshal's

    Motivation

    Saw veneur prevalent in performance profiles

    Test plan

    
    # before: proto.Marshal
    BenchmarkSerialization/UDP_plain_span_with_metrics-10         	  464810	      2403 ns/op	     176 B/op	       2 allocs/op
    BenchmarkSerialization/UDP_plain_span_no_metrics-10           	  541394	      2488 ns/op	      80 B/op	       2 allocs/op
    BenchmarkSerialization/UDP_plain_empty_span_with_metrics-10   	  536200	      2497 ns/op	     112 B/op	       2 allocs/op
    
    # after: span.Marshal
    BenchmarkSerialization/UDP_plain_span_with_metrics-10         	  511609	      2526 ns/op	     160 B/op	       1 allocs/op
    BenchmarkSerialization/UDP_plain_span_no_metrics-10           	  635616	      1916 ns/op	      64 B/op	       1 allocs/op
    BenchmarkSerialization/UDP_plain_empty_span_with_metrics-10   	  572904	      2157 ns/op	      96 B/op	       1 allocs/op
    

    Rollout/monitoring/revert plan

    • [ ] PRE merge (if accepted): CHANGELOG.md