Toxiproxy

:alarm_clock: :fire: A TCP proxy to simulate network and system conditions for chaos and resiliency testing


Toxiproxy is a framework for simulating network conditions. It's made specifically to work in testing, CI and development environments, supporting deterministic tampering with connections, but with support for randomized chaos and customization. Toxiproxy is the tool you need to prove with tests that your application doesn't have single points of failure. We've been successfully using it in all development and test environments at Shopify since October 2014. See our blog post on resiliency for more information.

Toxiproxy usage consists of two parts: a TCP proxy written in Go (which this repository contains) and a client communicating with the proxy over HTTP. You configure your application to make all test connections go through Toxiproxy, and you can then manipulate their health via HTTP. See Usage below for how to set up your project.

For example, to add 1000ms of latency to the response of MySQL from the Ruby client:

Toxiproxy[:mysql_master].downstream(:latency, latency: 1000).apply do
  Shop.first # this takes at least 1s
end

To take down all Redis instances:

Toxiproxy[/redis/].down do
  Shop.first # this will throw an exception
end

While the examples in this README are currently in Ruby, there's nothing stopping you from creating a client in any other language (see Clients).

Table of Contents

  1. Why yet another chaotic TCP proxy?
  2. Clients
  3. Example
  4. Usage
    1. Installing
      1. Upgrading from 1.x
    2. Populating
    3. Using
  5. Toxics
    1. Latency
    2. Down
    3. Bandwidth
    4. Slow close
    5. Timeout
    6. Slicer
  6. HTTP API
    1. Proxy fields
    2. Toxic fields
    3. Endpoints
    4. Populating Proxies
  7. CLI example
  8. FAQ
  9. Development

Why yet another chaotic TCP proxy?

The existing ones we found didn't provide the kind of dynamic API we needed for integration and unit testing. Linux tools such as nc are not cross-platform and require root, which makes them problematic in test, development and CI environments.

Clients

Example

Let's walk through an example with a Rails application. Note that Toxiproxy is in no way tied to Ruby, it's just been our first use case. You can see the full example at sirupsen/toxiproxy-rails-example. To get started right away, jump down to Usage.

For our popular blog, for some reason we're storing the tags for our posts in Redis and the posts themselves in MySQL. We might have a Post class that includes some methods to manipulate tags in a Redis set:

class Post < ActiveRecord::Base
  # Return an Array of all the tags.
  def tags
    TagRedis.smembers(tag_key)
  end

  # Add a tag to the post.
  def add_tag(tag)
    TagRedis.sadd(tag_key, tag)
  end

  # Remove a tag from the post.
  def remove_tag(tag)
    TagRedis.srem(tag_key, tag)
  end

  # Return the key in Redis for the set of tags for the post.
  def tag_key
    "post:tags:#{self.id}"
  end
end

We've decided that erroring while writing to the tag data store (adding/removing) is OK. However, if the tag data store is down, we should be able to see the post with no tags. We could simply rescue the Redis::CannotConnectError around the SMEMBERS Redis call in the tags method. Let's use Toxiproxy to test that.

Since we've already installed Toxiproxy and it's running on our machine, we can skip to step 2. This is where we need to make sure Toxiproxy has a mapping for Redis tags. To config/boot.rb (before any connection is made) we add:

require 'toxiproxy'

Toxiproxy.populate([
  {
    name: "toxiproxy_test_redis_tags",
    listen: "127.0.0.1:22222",
    upstream: "127.0.0.1:6379"
  }
])

Then in config/environments/test.rb we set the TagRedis to be a Redis client that connects to Redis through Toxiproxy by adding this line:

TagRedis = Redis.new(port: 22222)

All calls in the test environment now go through Toxiproxy. That means we can add a unit test where we simulate a failure:

test "should return empty array when tag redis is down when listing tags" do
  @post.add_tag "mammals"

  # Take down all Redises in Toxiproxy
  Toxiproxy[/redis/].down do
    assert_equal [], @post.tags
  end
end

The test fails with Redis::CannotConnectError. Perfect! Toxiproxy took down the Redis successfully for the duration of the closure. Let's fix the tags method to be resilient:

def tags
  TagRedis.smembers(tag_key)
rescue Redis::CannotConnectError
  []
end

The tests pass! We now have a unit test that proves fetching the tags when Redis is down returns an empty array, instead of throwing an exception. For full coverage you should also write an integration test that wraps fetching the entire blog post page when Redis is down.

Full example application is at sirupsen/toxiproxy-rails-example.

Usage

Configuring a project to use Toxiproxy consists of three steps:

  1. Installing Toxiproxy
  2. Populating Toxiproxy
  3. Using Toxiproxy

1. Installing Toxiproxy

Linux

See Releases for the latest binaries and system packages for your architecture.

Ubuntu

$ wget -O toxiproxy-2.1.4.deb https://github.com/Shopify/toxiproxy/releases/download/v2.1.4/toxiproxy_2.1.4_amd64.deb
$ sudo dpkg -i toxiproxy-2.1.4.deb
$ sudo service toxiproxy start

OS X

$ brew tap shopify/shopify
$ brew install toxiproxy

Windows

Toxiproxy for Windows is available for download at https://github.com/Shopify/toxiproxy/releases/download/v2.1.4/toxiproxy-server-windows-amd64.exe

Docker

Toxiproxy is available on Docker Hub.

$ docker pull shopify/toxiproxy
$ docker run -it shopify/toxiproxy

If using Toxiproxy from the host rather than other containers, enable host networking with --net=host.

Source

If you have Go installed, you can build Toxiproxy from source using the Makefile:

$ make build
$ ./toxiproxy-server

Upgrading from Toxiproxy 1.x

In Toxiproxy 2.0 several changes were made to the API that make it incompatible with version 1.x. In order to use version 2.x of the Toxiproxy server, you will need to make sure your client library supports the same version. You can check which version of Toxiproxy you are running by looking at the /version endpoint.

See the documentation for your client library for specific library changes. Detailed changes for the Toxiproxy server can be found in CHANGELOG.md.

2. Populating Toxiproxy

When your application boots, it needs to make sure that Toxiproxy knows which endpoints to proxy where. The main parameters are: the proxy name, the address for Toxiproxy to listen on, and the address of the upstream.

Some client libraries have helpers for this task, which is essentially just making sure each proxy in a list is created. Example from the Ruby client:

# Make sure `shopify_test_redis_master` and `shopify_test_mysql_master` are
# present in Toxiproxy
Toxiproxy.populate([
  {
    name: "shopify_test_redis_master",
    listen: "127.0.0.1:22220",
    upstream: "127.0.0.1:6379"
  },
  {
    name: "shopify_test_mysql_master",
    listen: "127.0.0.1:24220",
    upstream: "127.0.0.1:3306"
  }
])

This code needs to run as early in boot as possible, before any code establishes a connection through Toxiproxy. Please check your client library for documentation on the population helpers.

Alternatively, use the CLI to create proxies, e.g.:

toxiproxy-cli create shopify_test_redis_master -l localhost:26379 -u localhost:6379

We recommend a naming scheme such as the above: <app>_<env>_<data store>_<shard>. This makes sure there are no clashes between applications using the same Toxiproxy.

For large applications we recommend storing the Toxiproxy configurations in a separate configuration file. We use config/toxiproxy.json. This file can be passed to the server using the -config option, or loaded by the application to use with the populate function.

An example config/toxiproxy.json:

[
  {
    "name": "web_dev_frontend_1",
    "listen": "[::]:18080",
    "upstream": "webapp.domain:8080",
    "enabled": true
  },
  {
    "name": "web_dev_mysql_1",
    "listen": "[::]:13306",
    "upstream": "database.domain:3306",
    "enabled": true
  }
]
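
Assuming the file above is saved as config/toxiproxy.json, starting the server with it might look like this (a sketch; adjust the path to your own layout):

$ toxiproxy-server -config config/toxiproxy.json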

Use ports outside the ephemeral port range to avoid random port conflicts. It's 32,768 to 61,000 on Linux by default, see /proc/sys/net/ipv4/ip_local_port_range.

3. Using Toxiproxy

To use Toxiproxy, you now need to configure your application to connect through Toxiproxy. Continuing with our example from step two, we can configure our Redis client to connect through Toxiproxy:

# old straight to redis
redis = Redis.new(port: 6380)

# new through toxiproxy
redis = Redis.new(port: 22220)

Now you can tamper with it through the Toxiproxy API. In Ruby:

redis = Redis.new(port: 22220)

Toxiproxy[:shopify_test_redis_master].downstream(:latency, latency: 1000).apply do
  redis.get("test") # will take 1s
end

Or via the CLI:

toxiproxy-cli toxic add shopify_test_redis_master -t latency -a latency=1000

Please consult your respective client library on usage.

Toxics

Toxics manipulate the pipe between the client and upstream. They can be added and removed from proxies using the HTTP API. Each toxic has its own parameters to change how it affects the proxy links.

For documentation on implementing custom toxics, see CREATING_TOXICS.md

latency

Add a delay to all data going through the proxy. The delay is equal to latency +/- jitter.

Attributes:

  • latency: time in milliseconds
  • jitter: time in milliseconds

down

Bringing a service down is not technically a toxic in the implementation of Toxiproxy. This is done by POSTing to /proxies/{proxy} and setting the enabled field to false.
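
For example, a sketch using curl against the default API port, with a hypothetical proxy named shopify_test_redis_master:

# Disable the proxy (equivalent to taking the service down)
$ curl -s -X POST -d '{"enabled": false}' localhost:8474/proxies/shopify_test_redis_master

# Re-enable it afterwards
$ curl -s -X POST -d '{"enabled": true}' localhost:8474/proxies/shopify_test_redis_master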

bandwidth

Limit a connection to a maximum number of kilobytes per second.

Attributes:

  • rate: rate in KB/s
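
Following the CLI pattern shown earlier for latency, adding a 100 KB/s bandwidth toxic might look like this (the proxy name is a placeholder):

$ toxiproxy-cli toxic add shopify_test_redis_master -t bandwidth -a rate=100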

slow_close

Delay the TCP socket from closing until delay has elapsed.

Attributes:

  • delay: time in milliseconds

timeout

Stops all data from getting through, and closes the connection after timeout. If timeout is 0, the connection won't close, and data will be delayed until the toxic is removed.

Attributes:

  • timeout: time in milliseconds

slicer

Slices TCP data up into small bits, optionally adding a delay between each sliced "packet".

Attributes:

  • average_size: size in bytes of an average packet
  • size_variation: variation in bytes of an average packet (should be smaller than average_size)
  • delay: time in microseconds to delay each packet by

limit_data

Closes the connection when the transmitted data exceeds the limit.

Attributes:

  • bytes: number of bytes to transmit before the connection is closed
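
The remaining toxics follow the same CLI pattern; as a sketch against a hypothetical proxy:

$ toxiproxy-cli toxic add shopify_test_redis_master -t slicer -a average_size=64 -a size_variation=32 -a delay=10
$ toxiproxy-cli toxic add shopify_test_redis_master -t limit_data -a bytes=1024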

HTTP API

All communication with the Toxiproxy daemon from the client happens through the HTTP interface, which is described here.

Toxiproxy listens for HTTP on port 8474.

Proxy fields:

  • name: proxy name (string)
  • listen: listen address (string)
  • upstream: proxy upstream address (string)
  • enabled: true/false (defaults to true on creation)

To change a proxy's name, it must be deleted and recreated.

Changing the listen or upstream fields will restart the proxy and drop any active connections.

If listen is specified with a port of 0, toxiproxy will pick an ephemeral port. The listen field in the response will be updated with the actual port.

If you change enabled to false, it will take down the proxy. You can switch it back to true to reenable it.
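
As a sketch, creating a proxy with these fields through the HTTP API (see Endpoints below) could look like this; the name and ports are placeholders:

$ curl -s -X POST -d '{"name": "shopify_test_redis_master", "listen": "127.0.0.1:22220", "upstream": "127.0.0.1:6379"}' localhost:8474/proxies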

Toxic fields:

  • name: toxic name (string, defaults to <type>_<stream>)
  • type: toxic type (string)
  • stream: link direction to affect (defaults to downstream)
  • toxicity: probability of the toxic being applied to a link (defaults to 1.0, 100%)
  • attributes: a map of toxic-specific attributes

See Toxics for toxic-specific attributes.

The stream direction must be either upstream or downstream. upstream applies the toxic on the client -> server connection, while downstream applies the toxic on the server -> client connection. This can be used to modify requests and responses separately.
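
For example, a sketch that creates an upstream latency toxic on a hypothetical proxy through the API; since no name is given, it defaults to latency_upstream as described above:

$ curl -s -X POST -d '{"type": "latency", "stream": "upstream", "toxicity": 1.0, "attributes": {"latency": 1000, "jitter": 500}}' localhost:8474/proxies/shopify_test_redis_master/toxics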

Endpoints

All endpoints are JSON.

  • GET /proxies - List existing proxies and their toxics
  • POST /proxies - Create a new proxy
  • POST /populate - Create or replace a list of proxies
  • GET /proxies/{proxy} - Show the proxy with all its active toxics
  • POST /proxies/{proxy} - Update a proxy's fields
  • DELETE /proxies/{proxy} - Delete an existing proxy
  • GET /proxies/{proxy}/toxics - List active toxics
  • POST /proxies/{proxy}/toxics - Create a new toxic
  • GET /proxies/{proxy}/toxics/{toxic} - Get an active toxic's fields
  • POST /proxies/{proxy}/toxics/{toxic} - Update an active toxic
  • DELETE /proxies/{proxy}/toxics/{toxic} - Remove an active toxic
  • POST /reset - Enable all proxies and remove all active toxics
  • GET /version - Returns the server version number

Populating Proxies

Proxies can be added and configured in bulk using the /populate endpoint. This is done by passing a JSON array of proxies to Toxiproxy. If a proxy with the same name already exists, it will be compared to the new proxy and replaced if the upstream or listen addresses don't match.

A /populate call can be included for example at application start to ensure all required proxies exist. It is safe to make this call several times, since proxies will be untouched as long as their fields are consistent with the new data.
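
A sketch of such a call with curl, using placeholder proxies:

$ curl -s -X POST -d '[
    {"name": "shopify_test_redis_master", "listen": "127.0.0.1:22220", "upstream": "127.0.0.1:6379"},
    {"name": "shopify_test_mysql_master", "listen": "127.0.0.1:24220", "upstream": "127.0.0.1:3306"}
  ]' localhost:8474/populate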

CLI Example

$ toxiproxy-cli create redis -l localhost:26379 -u localhost:6379
Created new proxy redis
$ toxiproxy-cli list
Listen          Upstream        Name  Enabled Toxics
======================================================================
127.0.0.1:26379 localhost:6379  redis true    None

Hint: inspect toxics with `toxiproxy-cli inspect <proxyName>`
$ redis-cli -p 26379
127.0.0.1:26379> SET omg pandas
OK
127.0.0.1:26379> GET omg
"pandas"
$ toxiproxy-cli toxic add redis -t latency -a latency=1000
Added downstream latency toxic 'latency_downstream' on proxy 'redis'
$ redis-cli -p 26379
127.0.0.1:26379> GET omg
"pandas"
(1.00s)
127.0.0.1:26379> DEL omg
(integer) 1
(1.00s)
$ toxiproxy-cli toxic remove redis -n latency_downstream
Removed toxic 'latency_downstream' on proxy 'redis'
$ redis-cli -p 26379
127.0.0.1:26379> GET omg
(nil)
$ toxiproxy-cli delete redis
Deleted proxy redis
$ redis-cli -p 26379
Could not connect to Redis at 127.0.0.1:26379: Connection refused

Frequently Asked Questions

How fast is Toxiproxy? The speed of Toxiproxy depends largely on your hardware, but you can expect a latency of < 100µs when no toxics are enabled. When running with GOMAXPROCS=4 on a MacBook Pro we achieved ~1000MB/s throughput, and as high as 2400MB/s on a higher-end desktop. Basically, you can expect Toxiproxy to move data around at least as fast as the app you're testing.

Can Toxiproxy do randomized testing? Many of the available toxics can be configured to have randomness, such as jitter in the latency toxic. There is also a global toxicity parameter that specifies the percentage of connections a toxic will affect. This is most useful for things like the timeout toxic, which would allow X% of connections to time out.
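
As a sketch, a timeout toxic that only affects 20% of connections could be added like this (the proxy name is a placeholder):

$ curl -s -X POST -d '{"type": "timeout", "toxicity": 0.2, "attributes": {"timeout": 5000}}' localhost:8474/proxies/shopify_test_redis_master/toxics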

I am not seeing my Toxiproxy actions reflected for MySQL. Some MySQL clients prefer the local Unix domain socket, no matter which port you pass, if the host is set to localhost. Configure your MySQL server to not create a socket, and use 127.0.0.1 as the host. Remember to remove the old socket after you restart the server.
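
To check that your client really goes over TCP through Toxiproxy, you can force the MySQL command-line client onto TCP against 127.0.0.1 (the proxy port below is a placeholder):

$ mysql --protocol=TCP -h 127.0.0.1 -P 24220 -u root -p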

Toxiproxy causes intermittent connection failures. Use ports outside the ephemeral port range to avoid random port conflicts. It's 32,768 to 61,000 on Linux by default, see /proc/sys/net/ipv4/ip_local_port_range.

Should I run a Toxiproxy for each application? No, we recommend using the same Toxiproxy for all applications. To distinguish between services we recommend naming your proxies with the scheme: <app>_<env>_<data store>_<shard>. For example, shopify_test_redis_master or shopify_development_mysql_1.

Development

  • make. Build a toxiproxy development binary for the current platform.
  • make all. Build Toxiproxy binaries and packages for all platforms. Requires Go with cross-compilation enabled for Linux and Darwin (amd64), as well as fpm in your $PATH to build the Debian package.
  • make test. Run the Toxiproxy tests.
  • make darwin. Build binary for Darwin.
  • make linux. Build binary for Linux.
  • make windows. Build binary for Windows.

Release

  1. Ensure this release has run internally for Shopify/shopify for at least a day, which is the best fuzzy test for robustness we have.
  2. Update CHANGELOG.md
  3. Bump VERSION
  4. Change versions in README.md
  5. Commit
  6. Tag
  7. make release to create binaries, packages and push new Docker image
  8. Create a GitHub draft release against the new tag and upload the binaries and Debian package
  9. Bump version for Homebrew
Comments
  • Toxiproxy 2.0 Release

    As far as I know, everything is ready to be released for 2.0

    The released binary should be changed to toxiproxy-server or similar to allow for a cli binary later. I've done some testing myself, but it would be good to get some verification that this works with the Shopify test suite.

    Initial RFC is here: https://github.com/Shopify/toxiproxy/issues/54

    PRs contained in this branch:

    • Initial PR https://github.com/Shopify/toxiproxy/pull/62
    • Remove stream direction from api urls https://github.com/Shopify/toxiproxy/pull/73
    • Update Readme https://github.com/Shopify/toxiproxy/pull/74
    • Add toxicity field https://github.com/Shopify/toxiproxy/pull/75
    • Go client refactor https://github.com/Shopify/toxiproxy/pull/76
    • Interruptable ChanReader https://github.com/Shopify/toxiproxy/pull/77
    • Makefile cleanup https://github.com/Shopify/toxiproxy/pull/78
    • Fix toxic marshalling and updating https://github.com/Shopify/toxiproxy/pull/80

    These are all listed in the CHANGELOG.md

    @Sirupsen @eapache @pushrax

    This shouldn't really need much review other than an overall :+1: and :tophat:. Each of the above PRs has already been individually reviewed.

    After this is merged, the Release steps just need to be followed (contained in the README)

  • toxiproxy api hangs when trying to do anything after a long wait, run from inside testcontainers

    Hello, I'm facing a weird behaviour: Toxiproxy doesn't respond to API calls, the connection just hangs and nothing happens.

    Here's the setup I'm currently using Toxiproxy in: there are a few containers run inside Docker with Testcontainers in a single Docker network. Toxiproxy is used to create trouble communicating between components, so basically it sits between all components. The problem appears when the following scenario is used:

    1. cut connection to the postgresql db for one of the components
    2. wait for 60 seconds
    3. restore the connection (or reset proxy or perform any other action by API, including GET requests).

    Toxiproxy is run inside docker, tested with images shopify/toxiproxy:2.1.4 and shopify/toxiproxy:2.1.0. Below are the logs from the docker container. Please let me know if I can provide more information.

    time="2019-06-19T12:09:40Z" level="info" msg="API HTTP server starting" host="0.0.0.0" port="8474" version="2.1.4"
    time="2019-06-19T12:09:40Z" level="info" msg="Started proxy" name="postgres:5432" proxy="[::]:8666" upstream="postgres:5432"
    time="2019-06-19T12:09:55Z" level="info" msg="Started proxy" proxy="[::]:8667" upstream="kafka:9093" name="kafka:9093"
    time="2019-06-19T12:09:55Z" level="info" msg="Started proxy" name="kafka:9092" proxy="[::]:8668" upstream="kafka:9092"
    time="2019-06-19T12:09:55Z" level="info" msg="Started proxy" name="zookeeper:2181" proxy="[::]:8669" upstream="zookeeper:2181"
    time="2019-06-19T12:10:03Z" level="info" msg="Accepted client" client=192.168.16.5:38948 proxy="[::]:8669" upstream="zookeeper:2181" name="zookeeper:2181"
    time="2019-06-19T12:10:03Z" level="warning" msg="Source terminated" err=read tcp 192.168.16.2:8669->192.168.16.5:38948: use of closed network connection name="zookeeper:2181" bytes=61
    time="2019-06-19T12:10:04Z" level="info" msg="Accepted client" proxy="[::]:8669" upstream="zookeeper:2181" name="zookeeper:2181" client=192.168.16.5:38952
    time="2019-06-19T12:10:06Z" level="info" msg="Accepted client" client=192.168.16.5:47974 proxy="[::]:8668" upstream="kafka:9092" name="kafka:9092"
    time="2019-06-19T12:10:07Z" level="info" msg="Accepted client" upstream="kafka:9092" name="kafka:9092" client=192.168.16.5:47978 proxy="[::]:8668"
    time="2019-06-19T12:10:07Z" level="info" msg="Accepted client" upstream="kafka:9092" name="kafka:9092" client=192.168.16.5:47982 proxy="[::]:8668"
    time="2019-06-19T12:10:07Z" level="warning" msg="Source terminated" bytes=605 err=read tcp 192.168.16.2:53148->192.168.16.5:9092: use of closed network connection name="kafka:9092"
    time="2019-06-19T12:10:07Z" level="warning" msg="Source terminated" name="kafka:9092" bytes=387 err=read tcp 192.168.16.2:53144->192.168.16.5:9092: use of closed network connection
    time="2019-06-19T12:10:11Z" level="info" msg="Started proxy" upstream="schema-registry:8081" name="schema-registry:8081" proxy="[::]:8670"
    time="2019-06-19T12:10:18Z" level="info" msg="Accepted client" proxy="[::]:8669" upstream="zookeeper:2181" name="zookeeper:2181" client=192.168.16.6:44368
    time="2019-06-19T12:10:18Z" level="warning" msg="Source terminated" name="zookeeper:2181" bytes=61 err=read tcp 192.168.16.2:43454->192.168.16.4:2181: use of closed network connection
    time="2019-06-19T12:10:19Z" level="info" msg="Accepted client" upstream="zookeeper:2181" name="zookeeper:2181" client=192.168.16.6:44372 proxy="[::]:8669"
    time="2019-06-19T12:10:19Z" level="warning" msg="Source terminated" name="zookeeper:2181" bytes=61 err=read tcp 192.168.16.2:43458->192.168.16.4:2181: use of closed network connection
    time="2019-06-19T12:10:19Z" level="info" msg="Accepted client" name="zookeeper:2181" client=192.168.16.6:44376 proxy="[::]:8669" upstream="zookeeper:2181"
    time="2019-06-19T12:10:19Z" level="warning" msg="Source terminated" name="zookeeper:2181" bytes=625 err=read tcp 192.168.16.2:43462->192.168.16.4:2181: use of closed network connection
    time="2019-06-19T12:10:19Z" level="info" msg="Accepted client" upstream="kafka:9092" name="kafka:9092" client=192.168.16.6:56448 proxy="[::]:8668"
    time="2019-06-19T12:10:19Z" level="info" msg="Accepted client" name="kafka:9092" client=192.168.16.6:56452 proxy="[::]:8668" upstream="kafka:9092"
    time="2019-06-19T12:10:20Z" level="warning" msg="Source terminated" name="kafka:9092" bytes=351 err=read tcp 192.168.16.2:53172->192.168.16.5:9092: use of closed network connection
    time="2019-06-19T12:10:20Z" level="warning" msg="Source terminated" err=read tcp 192.168.16.2:53168->192.168.16.5:9092: use of closed network connection name="kafka:9092" bytes=351
    time="2019-06-19T12:10:21Z" level="info" msg="Accepted client" name="zookeeper:2181" client=192.168.16.6:44388 proxy="[::]:8669" upstream="zookeeper:2181"
    time="2019-06-19T12:10:22Z" level="warning" msg="Source terminated" name="zookeeper:2181" bytes=209 err=read tcp 192.168.16.2:43474->192.168.16.4:2181: use of closed network connection
    time="2019-06-19T12:10:22Z" level="info" msg="Accepted client" client=192.168.16.6:44392 proxy="[::]:8669" upstream="zookeeper:2181" name="zookeeper:2181"
    time="2019-06-19T12:10:22Z" level="info" msg="Accepted client" upstream="zookeeper:2181" name="zookeeper:2181" client=192.168.16.6:44396 proxy="[::]:8669"
    time="2019-06-19T12:10:22Z" level="warning" msg="Source terminated" bytes=440 err=read tcp 192.168.16.2:43482->192.168.16.4:2181: use of closed network connection name="zookeeper:2181"
    time="2019-06-19T12:10:22Z" level="info" msg="Accepted client" client=192.168.16.6:56468 proxy="[::]:8668" upstream="kafka:9092" name="kafka:9092"
    time="2019-06-19T12:10:22Z" level="info" msg="Accepted client" name="kafka:9092" client=192.168.16.6:56472 proxy="[::]:8668" upstream="kafka:9092"
    time="2019-06-19T12:10:22Z" level="warning" msg="Source terminated" bytes=520 err=read tcp 192.168.16.2:53192->192.168.16.5:9092: use of closed network connection name="kafka:9092"
    time="2019-06-19T12:10:22Z" level="warning" msg="Source terminated" err=read tcp 192.168.16.2:53188->192.168.16.5:9092: use of closed network connection name="kafka:9092" bytes=351
    time="2019-06-19T12:10:23Z" level="info" msg="Accepted client" upstream="kafka:9092" name="kafka:9092" client=192.168.16.6:56480 proxy="[::]:8668"
    time="2019-06-19T12:10:23Z" level="info" msg="Accepted client" name="kafka:9092" client=192.168.16.6:56488 proxy="[::]:8668" upstream="kafka:9092"
    time="2019-06-19T12:10:23Z" level="info" msg="Accepted client" name="kafka:9092" client=192.168.16.6:56492 proxy="[::]:8668" upstream="kafka:9092"
    time="2019-06-19T12:10:23Z" level="info" msg="Accepted client" name="kafka:9092" client=192.168.16.6:56496 proxy="[::]:8668" upstream="kafka:9092"
    time="2019-06-19T12:10:26Z" level="info" msg="Accepted client" upstream="postgres:5432" name="postgres:5432" client=192.168.16.1:54766 proxy="[::]:8666"
    time="2019-06-19T12:10:26Z" level="info" msg="Accepted client" name="postgres:5432" client=192.168.16.1:54770 proxy="[::]:8666" upstream="postgres:5432"
    time="2019-06-19T12:10:26Z" level="info" msg="Accepted client" client=192.168.16.1:54774 proxy="[::]:8666" upstream="postgres:5432" name="postgres:5432"
    time="2019-06-19T12:10:26Z" level="info" msg="Accepted client" proxy="[::]:8666" upstream="postgres:5432" name="postgres:5432" client=192.168.16.1:54778
    time="2019-06-19T12:10:26Z" level="info" msg="Accepted client" name="postgres:5432" client=192.168.16.1:54782 proxy="[::]:8666" upstream="postgres:5432"
    time="2019-06-19T12:10:26Z" level="info" msg="Accepted client" client=192.168.16.1:54786 proxy="[::]:8666" upstream="postgres:5432" name="postgres:5432"
    time="2019-06-19T12:10:26Z" level="info" msg="Accepted client" client=192.168.16.1:54790 proxy="[::]:8666" upstream="postgres:5432" name="postgres:5432"
    time="2019-06-19T12:10:26Z" level="info" msg="Accepted client" name="postgres:5432" client=192.168.16.1:54794 proxy="[::]:8666" upstream="postgres:5432"
    time="2019-06-19T12:10:26Z" level="info" msg="Accepted client" name="postgres:5432" client=192.168.16.1:54798 proxy="[::]:8666" upstream="postgres:5432"
    time="2019-06-19T12:10:26Z" level="info" msg="Accepted client" upstream="postgres:5432" name="postgres:5432" client=192.168.16.1:54802 proxy="[::]:8666"
    time="2019-06-19T12:10:29Z" level="info" msg="Accepted client" name="kafka:9093" client=192.168.16.1:33124 proxy="[::]:8667" upstream="kafka:9093"
    time="2019-06-19T12:10:29Z" level="info" msg="Accepted client" name="kafka:9093" client=192.168.16.1:33128 proxy="[::]:8667" upstream="kafka:9093"
    time="2019-06-19T12:10:29Z" level="warning" msg="Source terminated" name="kafka:9093" bytes=618 err=read tcp 192.168.16.2:45516->192.168.16.5:9093: use of closed network connection
    time="2019-06-19T12:10:29Z" level="warning" msg="Source terminated" name="kafka:9093" bytes=351 err=read tcp 192.168.16.2:45512->192.168.16.5:9093: use of closed network connection
    time="2019-06-19T12:10:29Z" level="info" msg="Accepted client" client=192.168.16.1:54814 proxy="[::]:8666" upstream="postgres:5432" name="postgres:5432"
    time="2019-06-19T12:10:29Z" level="info" msg="Accepted client" proxy="[::]:8666" upstream="postgres:5432" name="postgres:5432" client=192.168.16.1:54818
    time="2019-06-19T12:11:31Z" level="warning" msg="Source terminated" bytes=557 err=read tcp 192.168.16.2:8666->192.168.16.1:54814: use of closed network connection name="postgres:5432"
    time="2019-06-19T12:11:31Z" level="warning" msg="Destination terminated" name="postgres:5432" bytes=479 err=readfrom tcp 192.168.16.2:45194->192.168.16.3:5432: write tcp 192.168.16.2:45194->192.168.16.3:5432: write: broken pipe
    
  • Cli 2.0 dev

    Here's toxiproxy-client v2.0!

    • [x] I should close both #58 and #91 as this PR replaces them.

    Not worrying about clear commit history when doing rapid ux changes. I'll start with a quick list of changes, followed by relevant screen shots.

    1. Updated version number to 2.0
    2. Removed graphic pipe in inspect because I found it confusing
    3. Made inspect a lot clearer
    4. Made list a lot clearer. The table also has Toxic and Enabled columns
    5. Added cyan hints (might be kinda silly?)
    6. Added usage and examples to toplevel command and toxic command
    7. Updated the makefile (@Sirupsen: would you glance at this, I'm a little rusty with make)

    Screen Shots

    Part of the top-level help screen. Part of the ./cli toxic screen. Looks like we haven't created any toxics yet... There's one! Let's try to inspect it! No toxics? Let's add one!

    Darn! It defaulted to downstream... let's add another upstream! And let's list the proxies again. (This is after I added the enabled field.)

    TODO:

    • [x] #92 merge will require an update
    • [ ] do I need to update the CHANGELOG?
    • [x] ux review from @Sirupsen
    • [x] a ux review from someone who knows toxiproxy but not the client
    • [x] Redo inspect command
    • [ ] Configure toxiproxy host ip.
  • Unit toxiproxy.service not found

    Hi :wave:

    I'm trying to install toxiproxy following the Ubuntu instructions:

    $ wget -O toxiproxy-2.1.3.deb https://github.com/Shopify/toxiproxy/releases/download/v2.1.3/toxiproxy_2.1.3_amd64.deb
    $ sudo dpkg -i toxiproxy-2.1.3.deb
    $ sudo service toxiproxy start
    

    But I'm getting Failed to start toxiproxy.service: Unit toxiproxy.service not found. after running $ sudo service toxiproxy start

  • Toxiproxy 2.0

    See proposal here https://github.com/Shopify/toxiproxy/issues/54

    Ready for review + merging

    TODO:

    • [x] Update Go client (and API tests)
    • [x] Add new tests for adding / removing / updating toxics
    • [x] Standardize error output in the API
    • [x] Run benchmark comparisons to v1.1
    • [x] Fix docker image builds
    • [x] Add toxicity field: https://github.com/Shopify/toxiproxy/pull/65 (can be done in a separate PR after this)
    • [x] Determine any deprecation / compatibility changes if necessary
    • [ ] Code cleanup and review

    cc @Sirupsen @pushrax @eapache

  • Toxiproxy hangs

    Scenario is:

    • Add proxy toxiproxy-cli create proxy_20000_to_20001 --listen localhost:20000 --upstream localhost:20001

    • Add latency toxic (also reproducing with others like bandwidth) toxiproxy-cli toxic add proxy_20000_to_20001 --upstream --toxicName LatencyIssue -t latency -a latency=500 -a jitter=0

    • Start sending some data through, i.e.

    while true; do netcat -l -p 20001; done
    for i in {1..100000}; do sleep 0.1; echo "$i" | tee /dev/tty ; done | netcat localhost 20000
    
    • Add timeout toxic (data flow stops as expected) toxiproxy-cli toxic add proxy_20000_to_20001 --upstream --toxicName TimeoutIssue -t timeout -a timeout=0

    • Remove first toxic toxiproxy-cli toxic delete proxy_20000_to_20001 --toxicName LatencyIssue

    • As a result, the last command just hangs. If you try to run other commands like 'toxiproxy-cli list' they will hang as well. I'm using the latest 2.1.0 release of toxiproxy and the latest 1.19.1 release of the cli.

  • toxiproxy-cli hangs in docker connecting to local toxproxy (related to internet?)

    Hello! New Toxiproxy user here, trying out my first toxic steps. I run it in Docker, using the latest image that was just pushed today. Just 1 proxy and 1 backend, very light load.

    I noticed that toxiproxy-cli, which worked fine at first to add a proxy and a toxic, started hanging on subsequent commands. I ctrl-C'd the commands after they hung for minutes (the last 3 commands).

    Interesting to mention: around the same time my internet connection also became slow and started timing out. While everything is local in a Docker stack entirely on my laptop, maybe my internet has something to do with it? Has anyone experienced something like this before?

    root@toxiproxy:/app/src/github.com/Shopify/toxiproxy# toxiproxy-cli inspect cassandra
    Name: cassandra Listen: [::]:9042       Upstream: cassandra:9042
    ======================================================================
    Upstream toxics:
    Proxy has no Upstream toxics enabled.
    
    Downstream toxics:
    latency_downstream: type=latency stream=downstream toxicity=1.00 attributes=[ jitter=500 latency=1000 ]
    
    Hint: add a toxic with `toxiproxy-cli toxic add`
    root@toxiproxy:/app/src/github.com/Shopify/toxiproxy# toxiproxy-cli toxic delete cassandra -n latency_downstream:
    Failed to remove toxic: RemoveToxic: HTTP 404: toxic not found
    root@toxiproxy:/app/src/github.com/Shopify/toxiproxy# toxiproxy-cli toxic delete cassandra -n latency_downstream 
    ^C
    root@toxiproxy:/app/src/github.com/Shopify/toxiproxy# toxiproxy-cli toxic delete cassandra -n latency_downstream
    ^C
    root@toxiproxy:/app/src/github.com/Shopify/toxiproxy# toxiproxy-cli list                                         
    ^C
    
    
  • RFC for creating TCP Reset toxic

    This PR is an RFC for creating a new toxic to simulate TCP RESET (Connection reset by peer) on the connections by closing the stub Input immediately or after a timeout.

    Currently, this toxic sends an RST when we call Close(); as a result, the unacked data is discarded by the OS. The behaviour of Close is set to discard any unsent/unacknowledged data by setting SetLinger to 0, which effectively sets the TCP RST flag and resets the connection. The stub output stream is dropped, since if we start sending any data and then close the connection, TCP treats this as a graceful connection close by emitting a FIN-ACK. Added tests to check for syscall.ECONNRESET on the TCP Read.

    @jpittis @xthexder Let me know your thoughts on this.

    ./toxiproxy-cli create reset_example -l 0.0.0.0:8989 -u chaosbox.io:80 && ./toxiproxy-cli toxic add reset_example -n resetTCP -t reset_peer -d -a timeout=2000
    
    ➜  toxiproxy git:(RFC-reset-conn-toxic) ✗ curl -i localhost:8989
    curl: (56) Recv failure: Connection reset by peer
    
    PS: If we want to simulate TCP RST while sending a payload, I think it's possible if we implement the toxic at the IP layer by creating raw packets, which would give us more control over the TCP flags.

  • Support for cluster of toxiproxy nodes

    It would be nice if we could set up a cluster of toxiproxy nodes so that we can use a load balancer to balance load between these nodes. This avoids a single point of failure.

    If a toxic is configured in any of the nodes, it should propagate to all nodes in the cluster automatically.

    Want to check if this feature sounds interesting to everyone.

  • Embed toxic attribute

    This patch embeds toxic-specific attributes into their own nested attributes field, allowing for the following API changes:

    type Attributes map[string]interface{}
    
    type Toxic struct {
        Name       string     `json:"name"`
        Type       string     `json:"type"`
        Stream     string     `json:"stream"`
        Toxicity   float32    `json:"toxicity"`
        Attributes Attributes `json:"attributes"`
    }
    
    AddToxic(name, typeName, stream string, toxicity float32, attrs Attributes)
    UpdateToxic(name string, toxicity float32, attrs Attributes)
    

    We can't unmarshal the attributes of a toxic until we know what kind of toxic struct to unmarshal into. This has previously forced us to parse the JSON twice.

    I can't think of a better solution than to use an anonymous struct to extract the specific JSON that we need.

    This has also caused the previous toxic marshalling to be kinda sketchy. My change does not make it less sketchy.

    I'm happy enough with the approach. There might not be a pretty solution. @Sirupsen or @xthexder might be able to think of one?

    Update CHANGELOG, docs and squash the commits once we're ready to merge.

  • cli: initial commit

    Adds a simple V1 of a CLI that enables you to list proxies and toggle their enabled flag. Really nice to get an overview of things and play around with it. Will support Toxics later, and then we'll figure out how to distribute it. It could be a separate binary with a name like toxiproxyctl or it could be part of the main one and then daemon just becomes a command (and it starts by default to be backwards compatible).

    This is really handy for quick and dirty testing yourself, or for black-box resiliency testing. A more friendly interface would be a web app, that can be done as well. It's remarkably easy with the Go client.

    @xthexder

  • ToxiproxyContainer.ContainerProxy deprecated

    Hi, I did an implementation using ToxiproxyContainer.ContainerProxy and it works easily. However, the method is deprecated and I didn't find an example of how to use the new way. How can I create a proxy?

    Here's my example:

    public abstract class CassandraTestBase {
    
    	protected CassandraTestBase() {
    	}
    
    	private static final Network network = Network.newNetwork();
    
    	@Container
    	public static final CassandraContainer<?> cassandra = new CassandraContainer<>("cassandra:3.11.2")
    			.withNetwork(network).withReuse(true);
    
    	@Container
    	public static final ToxiproxyContainer toxiproxy = new ToxiproxyContainer("ghcr.io/shopify/toxiproxy:2.5.0")
    			.withNetwork(network);
    	
    	public static ToxiproxyContainer.ContainerProxy proxy;
    
    	private static final String KEYSPACE_CREATION_QUERY = "CREATE KEYSPACE IF NOT EXISTS local WITH replication = { 'class': 'SimpleStrategy', 'replication_factor':'1' };";
    	private static final String KEYSPACE_ACTIVATE_QUERY = "USE local;";
    	private static final String LOCAL = "local";
    	private static CqlSession session;
    	private static String keyspace = "";
    	/** Datacenter used for local-test profile. */
    	private static String datacenter = "datacenter1";
    	private static final int REQUEST_TIMEOUT = 12000;
    
    	@BeforeAll
    	public static void startCassandraContainer() {
    		proxy = toxiproxy.getProxy(cassandra, 9042);
    		TestcontainersConfiguration.getInstance().updateUserConfig("testcontainers.reuse.enable", "true");
    		System.setProperty("spring.data.cassandra.contact-points", proxy.getContainerIpAddress());
    		System.setProperty("spring.data.cassandra.port", String.valueOf(proxy.getProxyPort()));
    		System.setProperty("spring.data.cassandra.local-datacenter", datacenter);
    		assumeTrue(DockerClientFactory.instance().isDockerAvailable());
    
    		session = CqlSession.builder()
    				.addContactPoint(new InetSocketAddress(proxy.getContainerIpAddress(), proxy.getProxyPort()))
    				.withLocalDatacenter(datacenter)
    				.withConfigLoader(DriverConfigLoader.programmaticBuilder()
    						.withDuration(DefaultDriverOption.METADATA_SCHEMA_REQUEST_TIMEOUT,
    								Duration.ofMillis(REQUEST_TIMEOUT))
    						.withDuration(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT,
    								Duration.ofMillis(REQUEST_TIMEOUT))
    						.withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofMillis(REQUEST_TIMEOUT)).build())
    				.build();
    
    	}
    
    	/**
    	 * Creates the keyspace from the test resource file of the api
    	 * 
    	 * @param profile - Active profile used
    	 * @param loader  Class
    	 */
    	public static void createKeyspace(String profile, Class<?> loader) {
    		try (InputStream input = loader.getClassLoader()
    				.getResourceAsStream("application-" + profile + ".properties")) {
    			Properties applicationPoperties = new Properties();
    			applicationPoperties.load(input);
    
    			keyspace = applicationPoperties.getProperty("spring.data.cassandra.keyspace-name");
    
    		} catch (IOException e) {
    			log.error(e.getMessage(), e);
    		}
    		getSession().execute(KEYSPACE_CREATION_QUERY.replace(LOCAL, keyspace));
    		getSession().execute(KEYSPACE_ACTIVATE_QUERY.replace(LOCAL, keyspace));
    	}
    
    	public static CqlSession getSession() {
    		return session;
    	}
    
    }
    

    And my test (toxics are deprecated too):

    @DisplayName("Shouldn't update Timestamps when connection will be lost")
    void testInsertDataWithToxic() throws Exception {	
    		proxy.toxics().bandwidth("CUT_CONNECTION_DOWNSTREAM", ToxicDirection.DOWNSTREAM, 0);
    		proxy.toxics().bandwidth("CUT_CONNECTION_UPSTREAM", ToxicDirection.UPSTREAM, 0);
    
    		assertThrows(CassandraUncategorizedException.class, () -> {
    			userRepo.updateTimestamps();
    		});
    
    		proxy.toxics().get("CUT_CONNECTION_DOWNSTREAM").remove();
    		proxy.toxics().get("CUT_CONNECTION_UPSTREAM").remove();
    	}
    }
    
  • Bump goreleaser/goreleaser-action from 3.2.0 to 4.1.0

    Bumps goreleaser/goreleaser-action from 3.2.0 to 4.1.0.

    Release notes

    Sourced from goreleaser/goreleaser-action's releases.

    v4.1.0

    What's Changed

    New Contributors

    Full Changelog: https://github.com/goreleaser/goreleaser-action/compare/v4...v4.1.0

    v4.0.0

    What's Changed

    Full Changelog: https://github.com/goreleaser/goreleaser-action/compare/v3...v4.0.0

    Commits
    • 8f67e59 chore: regenerate
    • 78df308 chore(deps): bump minimatch from 3.0.4 to 3.1.2 (#383)
    • 66134d9 Merge remote-tracking branch 'origin/master' into flarco/master
    • 3c08cfd chore(deps): bump yargs from 17.6.0 to 17.6.2
    • 5dc579b docs: add example when using workdir along with upload-artifact (#366)
    • 3b7d1ba feat!: remove auto-snapshot on dirty tag (#382)
    • 23e0ed5 fix: do not override GORELEASER_CURRENT_TAG (#370)
    • 1315dab update build
    • b60ea88 improve install
    • 4d25ab4 Update goreleaser.ts
    • See full diff in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Bump github.com/urfave/cli/v2 from 2.23.0 to 2.23.7

    Bumps github.com/urfave/cli/v2 from 2.23.0 to 2.23.7.

    Release notes

    Sourced from github.com/urfave/cli/v2's releases.

    v2.24.0

    What's Changed

    Full Changelog: https://github.com/urfave/cli/compare/v2.23.6...v2.24.0

    v2.23.6

    What's Changed

    Full Changelog: https://github.com/urfave/cli/compare/v2.23.5...v2.23.6

    v2.23.5

    What's Changed

    New Contributors

    Full Changelog: https://github.com/urfave/cli/compare/v2.23.4...v2.23.5

    v2.23.4

    What's Changed

    Full Changelog: https://github.com/urfave/cli/compare/v2.23.3...v2.23.4

    v2.23.3

    What's Changed

    New Contributors

    Full Changelog: https://github.com/urfave/cli/compare/v2.23.2...v2.23.3

    Note: This is considered a minor release even though it has a new "feature", i.e. support for int64slice for altsrc flags. The int64slice is a verbatim copy of existing code and doesn't include any new behaviour compared to other altsrc flags.

    v2.23.2

    What's Changed

    ... (truncated)

    Commits
    • a6194b9 Merge pull request #1618 from dearchap/issue_1617
    • 659672b Fix docs issue
    • badc19f Fix:(issue_1617) Fix Bash completion for subcommands
    • f9652e3 Merge pull request #1608 from dearchap/issue_1591
    • ab2bf3c Fix:(issue_1591) Use AppHelpTemplate instead of SubCommandHelpTemplate
    • 5f57616 Merge pull request #1588 from feedmeapples/disable-slice-flag-separator
    • 9b0812c Update godoc v2 spacing
    • ceb75a1 godoc
    • 377947f replace test hardcode with defaultSliceFlagSeparator
    • 0f8707a Allow disabling SliceFlag separator altogether
    • Additional commits viewable in compare view

  • Bump golangci/golangci-lint-action from 3.3.0 to 3.3.1

    Bumps golangci/golangci-lint-action from 3.3.0 to 3.3.1.

    Release notes

    Sourced from golangci/golangci-lint-action's releases.

    v3.3.1

    What's Changed

    Full Changelog: https://github.com/golangci/golangci-lint-action/compare/v3...v3.3.1

    Commits
    • 0ad9a09 build(deps-dev): bump @​typescript-eslint/parser from 5.41.0 to 5.42.0 (#599)
    • 235ea57 build(deps-dev): bump eslint from 8.26.0 to 8.27.0 (#598)
    • a6ed001 build(deps-dev): bump @​typescript-eslint/eslint-plugin from 5.41.0 to 5.42.0 ...
    • 3a7156a build(deps-dev): bump @​typescript-eslint/parser from 5.40.1 to 5.41.0 (#596)
    • 481f8ba build(deps): bump @​types/semver from 7.3.12 to 7.3.13 (#595)
    • 06edb37 build(deps-dev): bump @​typescript-eslint/eslint-plugin from 5.40.1 to 5.41.0 ...
    • c2f79a7 build(deps): bump @​actions/cache from 3.0.5 to 3.0.6 (#593)
    • d6eac69 build(deps-dev): bump @​typescript-eslint/eslint-plugin from 5.40.0 to 5.40.1 ...
    • 7268434 build(deps-dev): bump eslint from 8.25.0 to 8.26.0 (#591)
    • a926e2b build(deps-dev): bump @​typescript-eslint/parser from 5.40.0 to 5.40.1 (#590)
    • See full diff in compare view

  • Bump github.com/prometheus/client_golang from 1.13.0 to 1.14.0

    Bumps github.com/prometheus/client_golang from 1.13.0 to 1.14.0.

    Release notes

    Sourced from github.com/prometheus/client_golang's releases.

    1.14.0 / 2022-11-08

    It might look like a small release, but it's quite the opposite 😱 There were many non-user-facing changes and fixes, and enormous work from engineers from Grafana to add native histograms in 💪🏾 Enjoy! 😍

    What's Changed

    • [FEATURE] Add Support for Native Histograms. #1150
    • [CHANGE] Extend prometheus.Registry to implement prometheus.Collector interface. #1103

    New Contributors

    Full Changelog: https://github.com/prometheus/client_golang/compare/v1.13.1...v1.14.0

    1.13.1 / 2022-11-02

    • [BUGFIX] Fix race condition with Exemplar in Counter. #1146
    • [BUGFIX] Fix CumulativeCount value of +Inf bucket created from exemplar. #1148
    • [BUGFIX] Fix double-counting bug in promhttp.InstrumentRoundTripperCounter. #1118

    Full Changelog: https://github.com/prometheus/client_golang/compare/v1.13.0...v1.13.1

    Changelog

    Sourced from github.com/prometheus/client_golang's changelog.

    1.14.0 / 2022-11-08

    • [FEATURE] Add Support for Native Histograms. #1150
    • [CHANGE] Extend prometheus.Registry to implement prometheus.Collector interface. #1103

    1.13.1 / 2022-11-01

    • [BUGFIX] Fix race condition with Exemplar in Counter. #1146
    • [BUGFIX] Fix CumulativeCount value of +Inf bucket created from exemplar. #1148
    • [BUGFIX] Fix double-counting bug in promhttp.InstrumentRoundTripperCounter. #1118
    Commits
