Easegress

What is Easegress

Easegress (formerly known as EaseGateway) is an all-round traffic orchestration system, which is designed for:

  • High Availability: Built-in Raft consensus & leader election deliver 99.99% availability.
  • Traffic Orchestration: Dynamically orchestrates various filters into a traffic pipeline.
  • High Performance: A lightweight design and essential features keep performance high.
  • Observability: Periodically reports meaningful statistics in a readable form.
  • Extensibility: It's easy to develop your own filter or controller in a high-level programming language.
  • Integration: The simple interfaces make it easy to integrate with other systems, such as Kubernetes Ingress, EaseMesh sidecar, Workflow, etc.

The architecture of Easegress:

(architecture diagram)

Features

  • Service Management
    • Multiple protocols:
      • HTTP/1.1
      • HTTP/2
      • HTTP/3 (QUIC)
      • MQTT (coming soon)
    • Rich Routing Rules: exact path, path prefix, regular expression of the path, method, headers.
    • Resilience & Fault Tolerance
      • Circuit breaker: temporarily blocks possible failures.
      • Rate limiter: limits the rate of incoming requests.
      • Retryer: repeats failed executions.
      • Time limiter: limits the duration of execution.
    • Deployment Management
      • Blue-green Strategy: switches traffic all at once.
      • Canary Strategy: shifts traffic gradually.
    • API Management
      • API Aggregation: aggregates results of multiple APIs.
      • API Orchestration: orchestrates the flow of APIs.
    • Security
      • IP Filter: limits access by IP address.
      • Static HTTPS: static certificate files.
      • API Signature: supports HMAC verification.
      • JWT Verification: verifies JWT tokens.
      • OAuth2: validates OAuth/2 requests.
      • Let's Encrypt: automatically manages certificate files.
    • Pipeline-Filter Mechanism
      • Chain of Responsibility Pattern: orchestrates filters chain.
      • Filter Management: makes it easy to develop new filters.
    • Service Mesh
      • Mesh Master: is the control plane to manage the lifecycle of mesh services.
      • Mesh Sidecar: is the data plane as the endpoint to do traffic interception and routing.
      • Mesh Ingress Controller: is the mesh-specific ingress controller to route external traffic to mesh services.
    • Third-Party Integration
      • FaaS integrates with the serverless platform Knative.
      • Service Discovery integrates with Eureka, Consul, Etcd, and Zookeeper.
  • High Performance and Availability
    • Adaptation: adapts requests and responses in the handling chain.
    • Validation: headers validation, OAuth2, JWT, and HMAC verification.
    • Load Balance: round-robin, random, weighted random, IP hash, header hash.
    • Cache: for the backend servers.
    • Compression: compresses body for the response.
    • Hot-Update: updates both config and binary of Easegress in place without losing connections.
  • Operation
    • Easy to Integrate: command line (egctl), MegaEase Portal, HTTP clients such as curl, Postman, etc.
    • Distributed Tracing
    • Observability
      • Node: role (leader, writer, reader), health status, last heartbeat time, and so on.
      • Traffic: in multiple dimensions: server and backend.
        • Throughput: total and error statistics of request count, TPS (m1, m5, m15), error percent, etc.
        • Latency: p25, p50, p75, p95, p98, p99, p999.
        • Data Size: request and response size.
        • Status Codes: HTTP status codes.
        • TopN: sorted by aggregated APIs (only in the server dimension).

Get Started

A common basic use of Easegress is to quickly set up a proxy for backend servers. We split the process into several simple steps to illustrate the essential concepts and operations.

Setting up Easegress

We can download the binary from the release page. For example, using the Linux version:

$ wget https://github.com/megaease/easegress/releases/download/v1.0.0/easegress_Linux_x86_64.tar.gz
$ tar zxvf easegress_Linux_x86_64.tar.gz -C easegress && cd easegress

or use source code:

$ git clone https://github.com/megaease/easegress && cd easegress
$ make

Then we can add the binary directory to the PATH and execute the server:

$ export PATH=${PATH}:$(pwd)/bin/
$ easegress-server
2021-05-17T16:45:38.185+08:00	INFO	cluster/config.go:84	etcd config: init-cluster:eg-default-name=http://localhost:2380 cluster-state:new force-new-cluster:false
2021-05-17T16:45:38.185+08:00	INFO	cluster/cluster.go:379	client is ready
2021-05-17T16:45:39.189+08:00	INFO	cluster/cluster.go:590	server is ready
2021-05-17T16:45:39.21+08:00	INFO	cluster/cluster.go:451	lease is ready
2021-05-17T16:45:39.231+08:00	INFO	cluster/cluster.go:187	cluster is ready
2021-05-17T16:45:39.253+08:00	INFO	supervisor/supervisor.go:180	create system controller StatusSyncController
2021-05-17T16:45:39.253+08:00	INFO	cluster/cluster.go:496	session is ready
2021-05-17T16:45:39.253+08:00	INFO	api/api.go:96	api server running in localhost:2381
2021-05-17T16:45:44.235+08:00	INFO	cluster/member.go:210	self ID changed from 0 to 689e371e88f78b6a
2021-05-17T16:45:44.236+08:00	INFO	cluster/member.go:137	store clusterMembers: eg-default-name(689e371e88f78b6a)=http://localhost:2380
2021-05-17T16:45:44.236+08:00	INFO	cluster/member.go:138	store knownMembers  : eg-default-name(689e371e88f78b6a)=http://localhost:2380

The default target of the Makefile compiles two binaries into the directory bin/: bin/easegress-server is the server-side binary and bin/egctl is the client-side binary. We can add the directory to $PATH to simplify the following commands.

We can run easegress-server without specifying any arguments; it launches with the default ports 2379, 2380, and 2381. These can be changed in the config file or with command-line arguments, which are explained in easegress-server --help.
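For example, a config file that overrides the default ports might look like the sketch below. The keys match the options reported by egctl member list; the values here are illustrative, not recommendations.

```yaml
# sketch: override the default ports (option names as in `egctl member list`)
name: eg-default-name
cluster-name: eg-cluster-default-name
cluster-role: writer
api-addr: localhost:12381
cluster-listen-client-urls:
- http://127.0.0.1:12379
cluster-listen-peer-urls:
- http://127.0.0.1:12380
cluster-advertise-client-urls:
- http://127.0.0.1:12379
cluster-initial-advertise-peer-urls:
- http://127.0.0.1:12380
data-dir: data
log-dir: log
```

The server can then be started with easegress-server -f config.yaml.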

$ egctl member list
- options:
    name: eg-default-name
    labels: {}
    cluster-name: eg-cluster-default-name
    cluster-role: writer
    cluster-request-timeout: 10s
    cluster-listen-client-urls:
    - http://127.0.0.1:2379
    cluster-listen-peer-urls:
    - http://127.0.0.1:2380
    cluster-advertise-client-urls:
    - http://127.0.0.1:2379
    cluster-initial-advertise-peer-urls:
    - http://127.0.0.1:2380
    cluster-join-urls: []
    api-addr: localhost:2381
    debug: false
    home-dir: ./
    data-dir: data
    wal-dir: ""
    log-dir: log
    member-dir: member
    cpu-profile-file: ""
    memory-profile-file: ""
  lastHeartbeatTime: "2021-05-05T15:43:27+08:00"
  etcd:
    id: a30c34bf7ec77546
    startTime: "2021-05-05T15:42:37+08:00"
    state: Leader

After a successful launch, we can check the status of the one-node cluster. It shows the static options and the dynamic status of heartbeat and etcd.

Create an HTTPServer and Pipeline

Now let's create an HTTPServer listening on port 10080 to handle the HTTP traffic.

$ echo '
kind: HTTPServer
name: server-demo
port: 10080
keepAlive: true
https: false
rules:
  - paths:
    - pathPrefix: /pipeline
      backend: pipeline-demo' | egctl object create

The routing rule above directs traffic whose path starts with /pipeline to the pipeline named pipeline-demo, which we create below. If we curl the path before the pipeline is created, it returns 503.
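Conceptually, pathPrefix routing selects the first rule whose prefix matches the request path. The sketch below uses invented types to show that idea; it is not Easegress's actual router.

```go
package main

import (
	"fmt"
	"strings"
)

// rule mirrors the HTTPServer rule above: a path prefix mapped to a backend name.
type rule struct {
	pathPrefix string
	backend    string
}

// selectBackend returns the backend of the first matching rule,
// or "" when no rule matches (the server then answers 503).
func selectBackend(rules []rule, path string) string {
	for _, r := range rules {
		if strings.HasPrefix(path, r.pathPrefix) {
			return r.backend
		}
	}
	return ""
}

func main() {
	rules := []rule{{pathPrefix: "/pipeline", backend: "pipeline-demo"}}
	fmt.Println(selectBackend(rules, "/pipeline/abc")) // pipeline-demo
	fmt.Println(selectBackend(rules, "/other"))        // empty: no rule matched
}
```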

$ echo '
name: pipeline-demo
kind: HTTPPipeline
flow:
  - filter: proxy
filters:
  - name: proxy
    kind: Proxy
    mainPool:
      servers:
      - url: http://127.0.0.1:9095
      - url: http://127.0.0.1:9096
      - url: http://127.0.0.1:9097
      loadBalance:
        policy: roundRobin' | egctl object create

This pipeline proxies requests to three backend endpoints with the roundRobin load-balancing policy.
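As an illustration of what the roundRobin policy does (a sketch, not the actual Proxy implementation), picking servers in rotation can be as simple as an atomic counter modulo the server count:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// roundRobin hands out servers in rotation; the atomic counter keeps it
// safe when many request handlers pick concurrently.
type roundRobin struct {
	servers []string
	next    uint64
}

func (rr *roundRobin) pick() string {
	n := atomic.AddUint64(&rr.next, 1) - 1
	return rr.servers[n%uint64(len(rr.servers))]
}

func main() {
	rr := &roundRobin{servers: []string{
		"http://127.0.0.1:9095",
		"http://127.0.0.1:9096",
		"http://127.0.0.1:9097",
	}}
	for i := 0; i < 4; i++ {
		fmt.Println(rr.pick()) // 9095, 9096, 9097, then 9095 again
	}
}
```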

Test

Now you can use an HTTP client such as curl to test the setup:

$ curl -v http://127.0.0.1:10080/pipeline

If you have not set up applications to serve ports 9095, 9096, and 9097 on localhost, it will also return 503. We have prepared a simple service for handy testing, as the example shows:

$ go run test/backend-service/mirror.go & # Running in background
$ curl http://127.0.0.1:10080/pipeline -d 'Hello, Easegress'
Your Request
===============
Method: POST
URL   : /pipeline
Header: map[Accept:[*/*] Accept-Encoding:[gzip] Content-Type:[application/x-www-form-urlencoded] User-Agent:[curl/7.64.1]]
Body  : Hello, Easegress

More Filters

Now we want to add more features to the pipeline, so we add more kinds of filters to it. For example, let's add validation and request adaptation to pipeline-demo.

$ cat pipeline-demo.yaml
name: pipeline-demo
kind: HTTPPipeline
flow:
  - filter: validator
    jumpIf: { invalid: END }
  - filter: requestAdaptor
  - filter: proxy
filters:
  - name: validator
    kind: Validator
    headers:
      Content-Type:
        values:
        - application/json
  - name: requestAdaptor
    kind: RequestAdaptor
    header:
      set:
        X-Adapt-Key: goodplan
  - name: proxy
    kind: Proxy
    mainPool:
      servers:
      - url: http://127.0.0.1:9095
      - url: http://127.0.0.1:9096
      - url: http://127.0.0.1:9097
      loadBalance:
        policy: roundRobin

$ egctl object update -f pipeline-demo.yaml

After updating the pipeline, the original curl -v http://127.0.0.1:10080/pipeline gets 400 because of the validation. So we change the request to satisfy the constraint:

$ curl http://127.0.0.1:10080/pipeline -H 'Content-Type: application/json' -d '{"message": "Hello, Easegress"}'
Your Request
===============
Method: POST
URL   : /pipeline
Header: map[Accept:[*/*] Accept-Encoding:[gzip] Content-Type:[application/json] User-Agent:[curl/7.64.1] X-Adapt-Key:[goodplan]]
Body  : {"message": "Hello, Easegress"}

We can also see that Easegress sends one more header, X-Adapt-Key: goodplan, to the mirror service.
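The flow above follows the chain-of-responsibility pattern: each filter returns a result string, and jumpIf maps a result to a jump target (END ends the pipeline, which is how an invalid request skips the proxy). The sketch below shows that dispatch loop with invented types; it is not Easegress's real pipeline engine.

```go
package main

import "fmt"

// step pairs a filter with its jumpIf table, as in the pipeline spec above.
type step struct {
	name   string
	filter func(req map[string]string) string // returns a result such as "invalid", or ""
	jumpIf map[string]string                  // result -> "END" to stop the pipeline
}

// run executes filters in order, honoring jumpIf; it returns the names executed.
func run(flow []step, req map[string]string) []string {
	var executed []string
	for _, s := range flow {
		executed = append(executed, s.name)
		if res := s.filter(req); res != "" && s.jumpIf[res] == "END" {
			break
		}
	}
	return executed
}

func main() {
	validator := step{
		name: "validator",
		filter: func(req map[string]string) string {
			if req["Content-Type"] != "application/json" {
				return "invalid"
			}
			return ""
		},
		jumpIf: map[string]string{"invalid": "END"},
	}
	proxy := step{name: "proxy", filter: func(map[string]string) string { return "" }}
	flow := []step{validator, proxy}

	fmt.Println(run(flow, map[string]string{"Content-Type": "text/plain"}))       // [validator]
	fmt.Println(run(flow, map[string]string{"Content-Type": "application/json"})) // [validator proxy]
}
```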

Documentation

See reference and developer guide for more information.

Roadmap

See Easegress Roadmap for details.

License

Easegress is under the Apache 2.0 license. See the LICENSE file for details.

Comments
  • WASM: The new plugin mechanism for Easegress


    Background

    Our flagship product Easegress (formerly EaseGateway) has many splendid features, especially the pipeline/plugin mechanism, which empowers customers to achieve their specific goals by customizing Easegress. But the current pipeline/plugin mechanism still has too many barriers to use. If a user really wants to extend Easegress, he needs to overcome the following issues:

    1. Master the Go language
    2. Master and understand low level pipeline/plugin mechanism
    3. Commit changes to the Easegress repository and rebuild the Easegress server
    4. Deploy Easegress, need to reboot the Easegress server

    I think the last two of these barriers are the biggest obstacles to extending Easegress. So I think we need another pipeline/plugin mechanism for EG customization.

    Goal

    Comparing with other gateway products, we find that they all choose a solution of embedding a weak language to enhance extensibility, but there are several cons to these implementations:

    • Weak language: generally the embedded language is Lua, and it is difficult to express complicated business logic in users' scenarios with it.
    • Performance penalty: Lua is a lightweight interpreted programming language, and although it introduces a JIT mechanism, it cannot reach high performance.

    If we want to provide a more flexible customization mechanism, we must solve the above disadvantages.

    Proposal

    After several days of study, I found we can leverage WebAssembly to solve the above problems (I stand corrected…), because WebAssembly has the following features:

    • Near-native performance.
    • Wasm can be compiled from many languages.
    • Wasm runs in an isolated VM and can dynamically update without EG restarts.
    • Eliminate the need to recompile and maintain a build of EG.

    Golang has a rich ecosystem; I found an open-source Golang WebAssembly runtime library at [1].

    PS: I don't want to deprecate the current pipeline/plugin mechanism; rather, we need multiple customization abstractions, with different approaches for different scenarios. This solution has been adopted by Envoy for its filter extensibility [2].

    [1] https://github.com/wasmerio/wasmer-go [2] https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/wasm-cc
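Whatever runtime is chosen, the extension point it plugs into is just a narrow filter interface. The sketch below is hypothetical (all names are invented here; this is neither Easegress's nor wasmer-go's API), with the Wasm module stubbed by a Go function to keep it self-contained.

```go
package main

import "fmt"

// Filter is a hypothetical narrow extension interface; a Wasm-backed filter
// would satisfy it by delegating Handle to a module loaded at runtime.
type Filter interface {
	Name() string
	Handle(req map[string]string) (result string)
}

// wasmFilter stands in for a filter whose Handle calls into a Wasm VM;
// here the "module" is just a Go function.
type wasmFilter struct {
	name   string
	module func(map[string]string) string
}

func (f *wasmFilter) Name() string                        { return f.name }
func (f *wasmFilter) Handle(req map[string]string) string { return f.module(req) }

func main() {
	f := &wasmFilter{name: "wasm-demo", module: func(req map[string]string) string {
		if req["X-Key"] == "" {
			return "missing-key"
		}
		return ""
	}}
	fmt.Println(f.Handle(map[string]string{}))                   // missing-key
	fmt.Println(f.Handle(map[string]string{"X-Key": "v"}) == "") // true
}
```

Because the module is loaded behind the interface, it could be swapped or updated at runtime without recompiling the host, which is the property the proposal is after.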

  • [question] Is there a global filter?


    Our company wants to develop a custom filter. For business reasons, all pipelines must use this filter. So I want to ask whether there are plans to support such a global filter, or what the best way to implement one would be.

  • BusinessController creation order issue


    I use EtcdServiceRegistry in my HttpPipeline. When Easegress starts, the BusinessControllers are created in turn. If the HttpPipeline is created before the EtcdServiceRegistry, an error is reported. The details are as follows:

    2021-10-12T16:12:25.143+08:00   ERROR   proxy/server.go:139  get service palfish-service-registry/base/gateway-controller failed: palfish-service-registry not found
    

    Errors are often reported during startup. Although there is no impact on the program, there is an ERROR-level log, and I always feel that something is wrong.

  • The Proxy filter loses all response headers set by other filters (e.g. CORSAdaptor) before it


    Describe the bug In Easegress v2.0.0, the Proxy filter loses all response headers set by other filters. For example, when using CORSAdaptor before the Proxy filter, a simple CORS request sets the Vary, Access-Control-Allow-Origin, and Access-Control-Allow-Credentials headers. All these headers are lost after the Proxy filter handles the request. As a result, simple CORS requests cannot work properly when the API and the web page are served from different domains.

    To Reproduce Use the following configuration under Easegress v2.0.0 (the echo-backend service on port 9095 is from the Easegress examples; frontend just uses ResponseBuilder to build an HTML page containing JavaScript that sends a CORS request to the API domain):

    Make sure to add the following entry to the /etc/hosts file:

    127.0.0.1 frontend.work api.backend.work 
    
    kind: HTTPServer
    name: http
    port: 80
    https: false
    rules:
      - host: frontend.work
        paths:
          - pathPrefix: /page
            backend: frontend
      - host: api.backend.work
        paths:
          - pathPrefix: /api
            backend: echo-backend
    ---
    name: frontend
    kind: Pipeline
    flow:
      - filter: respBuilder
      - filter: rsp
    filters:
      - name: respBuilder
        kind: ResponseBuilder
        template: |
          statusCode: 200
      - name: rsp
        kind: ResponseAdaptor
        body: >+
          <!DOCTYPE html>
          <html>   
          <body>
          <h1>This is Frontend Page!</h1>
          <script>
              fetch('http://api.backend.work/api', {
                  mode: "cors",
                  method: "POST",
                  credentials: "include",
                  cache: 'no-cache',              
              }).then(data => console.log(data))
                .catch((error) => {console.log('Error', error)});
          </script>
          </body>
          </html>
    ---
    name: echo-backend
    kind: Pipeline
    flow:
      - filter: cors-filter
        jumpIf: { preflighted: END }
      - filter: proxy
    filters:
      - name: cors-filter
        kind: CORSAdaptor
        supportCORSRequest: true
        allowedOrigins: ["http://frontend.work"]
        allowCredentials: true
        allowedMethods: ["GET","POST"]
      - name: proxy
        kind: Proxy
        pools:
          - servers:
              - url: http://127.0.0.1:9095
            loadBalance:
              policy: roundRobin
    

    Open the Chrome web browser and access http://frontend.work/. After the JavaScript in the HTML sends a request to http://api.backend.work/api, the browser reports the error: CORS error MissingAllowOriginHeader.

    Expected behavior The Proxy filter should preserve all response headers set by the filters configured before it.

  • ResourceExhausted error when creating many HTTPPipelines and HTTPServer paths


    Describe the bug Easegress server throws 2021-12-16T03:40:40.8Z ERROR statussynccontroller/statussynccontroller.go:217 sync status failed: rpc error: code = ResourceExhausted desc = trying to send message larger than max (2274303 vs. 2097152) when creating 1000 dummy HTTPPipelines and one HTTPServer with one backend rule for each pipeline.

    To Reproduce Steps to reproduce the behavior:

    1. Generate 1000 identical HTTPPipeline configurations with unique names (name: pipeline-$i) by running the second script provided below: bash generate_pipeline.sh > pipelines.yaml
    2. Generate one HTTPServer configuration with 1000 rules using the first script provided below: bash generate_server.sh > httpserver.yaml
    3. Start the Easegress server: bin/easegress-server
    4. Create the pipelines: bin/egctl object create -f pipelines.yaml
    5. Create the HTTP server: bin/egctl object create -f httpserver.yaml
    6. The Easegress server fails in the middle of the object creation.

    Expected behavior Easegress-server should not fail when creating (many) objects.

    Version 1.4.0

    Configuration

    • Easegress Configuration Default parameters.

    • HTTP server configuration The following bash script generates the server HTTPServer:

    #!/bin/bash
    echo "
    kind: HTTPServer
    name: server-demo
    port: 10080
    keepAlive: true
    https: false
    maxConnections: 10240
    rules:
      - paths:
    "
    for i in {0..1000..1}
       do
          echo "    - pathPrefix: /pipeline$i
          backend: pipeline-$i"
    done
    
    • Pipeline Configuration The following bash script generates the pipeline:
    #!/bin/bash
    for i in {0..1000..1}
       do
          echo "name: pipeline-$i
    kind: HTTPPipeline
    flow:
      - filter: proxy
    filters:
      - kind: Proxy
        name: proxy
        mainPool:
          loadBalance:
            policy: roundRobin
          servers:
          - url: http://172.20.2.14:9095
          - url: http://172.20.2.160:9095
    ---"
    done
    

    Logs This is the output of easegress-server when the error happens:

    2021-12-16T04:45:22.857Z	INFO	trafficcontroller/trafficcontroller.go:424	create http pipeline default/pipeline-977
    2021-12-16T04:45:22.858Z	INFO	trafficcontroller/trafficcontroller.go:424	create http pipeline default/pipeline-989
    2021-12-16T04:45:22.858Z	INFO	trafficcontroller/trafficcontroller.go:424	create http pipeline default/pipeline-980
    2021-12-16T04:45:22.858Z	INFO	trafficcontroller/trafficcontroller.go:424	create http pipeline default/pipeline-966
    2021-12-16T04:45:22.859Z	INFO	trafficcontroller/trafficcontroller.go:424	create http pipeline default/pipeline-974
    2021-12-16T04:46:05.821Z	ERROR	statussynccontroller/statussynccontroller.go:217	sync status failed: rpc error: code = ResourceExhausted desc = trying to send message larger than max (2274327 vs. 2097152)
    2021-12-16T04:46:10.775Z	ERROR	statussynccontroller/statussynccontroller.go:217	sync status failed: rpc error: code = ResourceExhausted desc = trying to send message larger than max (2274327 vs. 2097152)
    2021-12-16T04:46:15.795Z	ERROR	statussynccontroller/statussynccontroller.go:217
    

    OS and Hardware

    • OS: Ubuntu 20.04
    • CPU: Intel(R) Xeon(R)
    • Memory: 15GB
  • After configuring the HTTP proxy, the proxied content cannot be accessed

    Describe the bug Easegress starts normally, and after configuring the HTTPServer and HTTPPipeline there are no startup exceptions.

    But the proxied content cannot be accessed via http://10.10..:10080.

    To Reproduce Steps to reproduce the behavior:

    1. Execute easegress-server --ip-addr xxx.xxx.12.3
    2. Send an HTTP GET request with Postman
    3. No response

    Expected behavior The proxied content should be returned.

    Version 1.4.1

    Configuration

    • Easegress Configuration default
    • HTTP server configuration
    autoCert: false
    caCertBase64: ""
    cacheSize: 0
    certBase64: ""
    certs: {}
    http3: false
    https: false
    keepAlive: true
    keepAliveTimeout: 60s
    keyBase64: ""
    keys: {}
    kind: HTTPServer
    maxConnections: 10240
    name: server-demo
    port: 10080
    rules:
    - host: ""
      hostRegexp: ""
      paths:
      - backend: pipeline-demo
        headers: []
        pathPrefix: /pipeline
        rewriteTarget: ""
    tracing: null
    xForwardedFor: false
    
    • Pipeline Configuration
    filters:
    - kind: Proxy
      mainPool:
        loadBalance:
          policy: roundRobin
        servers:
        - url: http://www.baidu.com
        - url: http://www.baidu.com
        - url: http://www.baidu.com
      name: proxy
    flow:
    - filter: proxy
      jumpIf: {}
    kind: HTTPPipeline
    name: pipeline-demo
    


    OS and Hardware

    • OS: centos 7.x
    • CPU: Intel(R) Core(TM) i5-8265U
    • Memory: 32GB


  • support windows


  • can not build easegress cluster


    sudo ./server --api-addr 192.168.42.103:38080 \
    --cluster-listen-client-urls  http://192.168.42.103:2379 \
    --cluster-listen-peer-urls http://192.168.42.103:2380 \
    --cluster-advertise-client-urls http://192.168.42.103:2379 \
    --cluster-initial-advertise-peer-urls http://192.168.42.103:2380 \
    --cluster-join-urls http://192.168.42.104:2380,http://192.168.42.103:2380,http://192.168.42.105:2380 \
    --cluster-name gw-cluster --force-new-cluster false --cluster-role writer --name gw1 
    
    sudo ./server --api-addr 192.168.42.104:38080 \
    --cluster-listen-client-urls  http://192.168.42.104:2379 \
    --cluster-listen-peer-urls http://192.168.42.104:2380 \
    --cluster-advertise-client-urls http://192.168.42.104:2379 \
    --cluster-initial-advertise-peer-urls http://192.168.42.104:2380 \
    --cluster-join-urls http://192.168.42.105:2380,http://192.168.42.103:2380,http://192.168.42.104:2380 \
    --force-new-cluster false --cluster-name gw-cluster --cluster-role writer --name gw2 
    
    sudo ./server --api-addr 192.168.42.105:38080 \
    --cluster-listen-client-urls  http://192.168.42.105:2379 \
    --cluster-listen-peer-urls http://192.168.42.105:2380 \
    --cluster-advertise-client-urls http://192.168.42.105:2379 \
    --cluster-initial-advertise-peer-urls http://192.168.42.105:2380 \
    --cluster-join-urls http://192.168.42.103:2380,http://192.168.42.105:2380,http://192.168.42.104:2380 \
    --force-new-cluster false --cluster-name gw-cluster --cluster-role writer --name gw3 
    

    I run the above commands on three nodes.

    (screenshot)

    Above is the Easegress running result.

    I have created an HTTP server like this:

    (screenshot)

    But Easegress only started an HTTP server on node 192.168.42.103 and didn't create one on nodes 42.104/42.105. I think that when I create an HTTP server on node 42.103, the Easegress cluster should create it on nodes 42.104/105 via the object watcher notification.

    And I find that each node's etcd state is Leader. I think an etcd cluster can have only one leader.

    (screenshots)

  • Filter HttpResponse returns two identical headers to the client


    Describe the bug In any filter, setting some response headers as the following example shows:

    func (f *XXXFilter) Handle(ctx context.HTTPContext) (result string) {
        ctx.Response().Header().Set("key-xxx", "value-xxx")
        ctx.Response().Header().Set("key-yyy", "value-yyy")
        return ""
    }
    

    The client (whether using curl or the Chrome browser) will receive each header twice: key-xxx: value-xxx, key-xxx: value-xxx, key-yyy: value-yyy, key-yyy: value-yyy.

    This causes the CORSAdaptor filter to not work properly.

    Version v1.4.1

  • fix httpserver httppipeline status not show error


    Fix #438

    The returned status is a map: map[member id]status... Since our egctl does not specify a namespace now, we return the status from all namespaces; this part may need to change.

  • Proxy with mirrorPool panics under high QPS


    Describe the bug

    I created an HTTPPipeline with a Proxy filter whose mirrorPool points to an echo server and whose mainPool points to another echo server. I can query the HTTPServer with curl without any problem, but if I send many queries in parallel with the ab tool, there is a panic: close of closed channel error after a few seconds.

    To Reproduce Steps to reproduce the behavior:

    1. Start Easegress server bin/easegress-server -f eg-server.yaml.
    2. Add HTTPServer with egctl object create -f httpserver.yaml
    3. Add HTTPPipeline with egctl object create -f mirrorpool-pipeline.yaml
    4. Start echo server at HOST2 and HOST3.
    5. Send queries in parallel using ab:
    ab -k -c 450 -n 1000000 -H "X-Mirror: mirror"  http://{HOST1}:10080/pipeline
    

    After a few seconds the Easegress server fails with panic: close of closed channel.
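panic: close of closed channel is the Go runtime's message for closing the same channel twice, which fits two goroutines (such as the main and mirror readers in the stack trace below) racing to close a shared channel. The panic class can be reproduced in isolation:

```go
package main

import "fmt"

// doubleClose closes the same channel twice and recovers the resulting
// runtime panic, returning its message.
func doubleClose() (msg string) {
	defer func() {
		if r := recover(); r != nil {
			msg = fmt.Sprint(r)
		}
	}()
	ch := make(chan struct{})
	close(ch)
	close(ch) // second close panics at runtime
	return ""
}

func main() {
	fmt.Println(doubleClose()) // close of closed channel
}
```

The usual fix is to guarantee a single owner closes the channel, e.g. via sync.Once or a dedicated closer goroutine.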

    Expected behavior Easegress should handle the parallel load without panicking.

    Version 1.4.0

    Configuration

    • Easegress Configuration
    # eg-server.yaml
    name: machine-1
    cluster-name: cluster-stress-test
    cluster-role: primary
    api-addr: localhost:2381
    data-dir: ./data
    wal-dir: ""
    cpu-profile-file: "./cpu-profile.txt"
    memory-profile-file: "./memory-profile.txt"
    disable-access-log: true
    log-dir: ./log
    debug: false
    cluster:
      listen-peer-urls:
       - http://{HOST1}:2380
      listen-client-urls:
       - http://{HOST1}:2379
      advertise-client-urls:
       - http://{HOST1}:2379
      initial-advertise-peer-urls:
       - http://{HOST1}:2380
      initial-cluster:
       - machine-1: http://{HOST1}:2380
    
    • HTTP server configuration
    # httpserver.yaml
    kind: HTTPServer
    name: stress-server
    port: 10080
    keepAlive: true
    https: false
    maxConnections: 10240
    rules:
      - paths:
        - pathPrefix: /pipeline
          backend: mirrorpool-pipeline
    
    • Pipeline Configuration
    name: mirrorpool-pipeline
    kind: HTTPPipeline
    flow:
      - filter: proxy
    filters:
      - kind: Proxy
        name: proxy
        mainPool:
          loadBalance:
            policy: roundRobin
          servers:
          - url: http://{HOST2}:9095
        mirrorPool:
          filter:
            headers:
              "X-Mirror":
                exact: mirror
          loadBalance:
            policy: roundRobin
          servers:
          - url: http://{HOST3}:9095
    

    Logs

    panic: close of closed channel
    
    goroutine 4160 [running]:
    github.com/megaease/easegress/pkg/filter/proxy.(*masterReader).Read(0xc005800390, {0xc00582a000, 0x8000, 0x8000})
    	github.com/megaease/easegress/pkg/filter/proxy/masterslavereader.go:69 +0x17d
    github.com/megaease/easegress/pkg/util/callbackreader.(*CallbackReader).Read(0xc005806b90, {0xc00582a000, 0xc00007c070, 0xc005449000})
    	github.com/megaease/easegress/pkg/util/callbackreader/callbackreader.go:57 +0xa5
    net/http.(*readTrackingBody).Read(0x0, {0xc00582a000, 0xc002afdb00, 0x44ced2})
    	net/http/transport.go:634 +0x2a
    io.(*multiReader).Read(0xc0056cba28, {0xc00582a000, 0x8000, 0x8000})
    	io/multi.go:26 +0x9b
    io.copyBuffer({0x7f3e41f08db0, 0xc0048b1890}, {0x2e430c0, 0xc0056cba28}, {0x0, 0x0, 0x0})
    	io/io.go:423 +0x1b2
    io.Copy(...)
    	io/io.go:382
    net/http.(*transferWriter).doBodyCopy(0xc0058041e0, {0x7f3e41f08db0, 0xc0048b1890}, {0x2e430c0, 0xc0056cba28})
    	net/http/transfer.go:410 +0x4d
    net/http.(*transferWriter).writeBody(0xc0058041e0, {0x2e3d5e0, 0xc002797080})
    	net/http/transfer.go:357 +0x225
    net/http.(*Request).write(0xc00577ac00, {0x2e3d5e0, 0xc002797080}, 0x0, 0xc005775d70, 0x0)
    	net/http/request.go:698 +0xb4e
    net/http.(*persistConn).writeLoop(0xc00153ad80)
    	net/http/transport.go:2389 +0x189
    created by net/http.(*Transport).dialConn
    	net/http/transport.go:1748 +0x1e65
    

    OS and Hardware

    • OS: Ubuntu 20.04
    • CPU: Intel(R) Xeon(R)
    • Memory: 15GB
    • Host 1,2 and 3 are equivalent.
  • Bump go.opentelemetry.io/otel from 1.11.1 to 1.11.2


    Bumps go.opentelemetry.io/otel from 1.11.1 to 1.11.2.

    Changelog

    Sourced from go.opentelemetry.io/otel's changelog.

    [1.11.2/0.34.0] 2022-12-05

    Added

    • The WithView Option is added to the go.opentelemetry.io/otel/sdk/metric package. This option is used to configure the view(s) a MeterProvider will use for all Readers that are registered with it. (#3387)
    • Add Instrumentation Scope and Version as info metric and label in Prometheus exporter. This can be disabled using the WithoutScopeInfo() option added to that package. (#3273, #3357)
    • OTLP exporters now recognize: (#3363)
      • OTEL_EXPORTER_OTLP_INSECURE
      • OTEL_EXPORTER_OTLP_TRACES_INSECURE
      • OTEL_EXPORTER_OTLP_METRICS_INSECURE
      • OTEL_EXPORTER_OTLP_CLIENT_KEY
      • OTEL_EXPORTER_OTLP_TRACES_CLIENT_KEY
      • OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY
      • OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE
      • OTEL_EXPORTER_OTLP_TRACES_CLIENT_CERTIFICATE
      • OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE
    • The View type and related NewView function to create a view according to the OpenTelemetry specification are added to go.opentelemetry.io/otel/sdk/metric. These additions are replacements for the View type and New function from go.opentelemetry.io/otel/sdk/metric/view. (#3459)
    • The Instrument and InstrumentKind type are added to go.opentelemetry.io/otel/sdk/metric. These additions are replacements for the Instrument and InstrumentKind types from go.opentelemetry.io/otel/sdk/metric/view. (#3459)
    • The Stream type is added to go.opentelemetry.io/otel/sdk/metric to define a metric data stream a view will produce. (#3459)
    • The AssertHasAttributes allows instrument authors to test that datapoints returned have appropriate attributes. (#3487)

    Changed

    • The "go.opentelemetry.io/otel/sdk/metric".WithReader option no longer accepts views to associate with the Reader. Instead, views are now registered directly with the MeterProvider via the new WithView option. The views registered with the MeterProvider apply to all Readers. (#3387)
    • The Temporality(view.InstrumentKind) metricdata.Temporality and Aggregation(view.InstrumentKind) aggregation.Aggregation methods are added to the "go.opentelemetry.io/otel/sdk/metric".Exporter interface. (#3260)
    • The Temporality(view.InstrumentKind) metricdata.Temporality and Aggregation(view.InstrumentKind) aggregation.Aggregation methods are added to the "go.opentelemetry.io/otel/exporters/otlp/otlpmetric".Client interface. (#3260)
    • The WithTemporalitySelector and WithAggregationSelector ReaderOptions have been changed to ManualReaderOptions in the go.opentelemetry.io/otel/sdk/metric package. (#3260)
    • The periodic reader in the go.opentelemetry.io/otel/sdk/metric package now uses the temporality and aggregation selectors from its configured exporter instead of accepting them as options. (#3260)
    • Jaeger and Zipkin exporter use github.com/go-logr/logr as the logging interface, and add the WithLogr option. (#3497, #3500)

    Fixed

    • The go.opentelemetry.io/otel/exporters/prometheus exporter fixes duplicated _total suffixes. (#3369)
    • Remove comparable requirement for Readers. (#3387)
    • Cumulative metrics from the OpenCensus bridge (go.opentelemetry.io/otel/bridge/opencensus) are defined as monotonic sums, instead of non-monotonic. (#3389)
    • Asynchronous counters (Counter and UpDownCounter) from the metric SDK now produce delta sums when configured with delta temporality. (#3398)
    • Exported Status codes in the go.opentelemetry.io/otel/exporters/zipkin exporter are now exported as all upper case values. (#3340)
    • Aggregations from go.opentelemetry.io/otel/sdk/metric with no data are not exported. (#3394, #3436)
    • Reenabled Attribute Filters in the Metric SDK. (#3396)
    • Asynchronous callbacks are only called if they are registered with at least one instrument that does not use drop aggregation. (#3408)
    • Do not report empty partial-success responses in the go.opentelemetry.io/otel/exporters/otlp exporters. (#3438, #3432)
    • Handle partial success responses in go.opentelemetry.io/otel/exporters/otlp/otlpmetric exporters. (#3162, #3440)
    • Prevent duplicate Prometheus description, unit, and type. (#3469)
    • Prevents panic when using incorrect attribute.Value.As[Type]Slice(). (#3489)

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Bump github.com/hashicorp/golang-lru from 0.5.4 to 0.6.0

    Bump github.com/hashicorp/golang-lru from 0.5.4 to 0.6.0

    Bumps github.com/hashicorp/golang-lru from 0.5.4 to 0.6.0.

    Release notes

    Sourced from github.com/hashicorp/golang-lru's releases.

    Tagging prior to v2

    This is likely the last tag prior to the switch to generics and the v2 package.

    Commits

  • Bump github.com/hashicorp/consul/api from 1.15.3 to 1.18.0

    Bump github.com/hashicorp/consul/api from 1.15.3 to 1.18.0

    Bumps github.com/hashicorp/consul/api from 1.15.3 to 1.18.0.

    Commits
    • 13836d5 Backport of ui: Add ServerExternalAddresses to peer token create form into re...
    • 18dffc5 Backport of peering: better represent non-passing states during peer check fl...
    • a445588 Backport of docs: Update acl-tokens.mdx into release/1.14.x (#15609)
    • e674f36 backport of commit 9cc1010534932586620323c5ed17244f76881dfa (#15606)
    • 657616a Backport of Remove log line about server mgmt token init into release/1.14.x ...
    • b152669 backport of commit 6f18c57f5b7e74154144cd23ec8e57bfa3037635 (#15529)
    • 7c0eec4 Add support for configuring Envoys route idle_timeout (#14340) (#15611)
    • c5dd81e Backport of docs: typo on cluster peering k8s into release/1.14.x (#15604)
    • 9a235cb Backport of docs: Clean up k8s cluster peering instructions into release/1.14...
    • e7f8505 Backport of Add peering .service and .node DNS lookups. into release/1.14...
    • Additional commits viewable in compare view

  • Bump k8s.io/client-go from 0.24.8 to 0.26.0

    Bump k8s.io/client-go from 0.24.8 to 0.26.0

    Bumps k8s.io/client-go from 0.24.8 to 0.26.0.

    Commits

  • Bump github.com/MicahParks/keyfunc from 1.7.0 to 1.9.0

    Bump github.com/MicahParks/keyfunc from 1.7.0 to 1.9.0

    Bumps github.com/MicahParks/keyfunc from 1.7.0 to 1.9.0.

    Release notes

    Sourced from github.com/MicahParks/keyfunc's releases.

    Multiple JWK Set support

    The purpose of this release is to add support for multiple JWK Sets. Through the use of the new keyfunc.GetMultiple function, package users can now specify multiple remote JWK Set resources and produce one jwt.Keyfunc.

    It is not recommended to use the RefreshUnknownKID field of keyfunc.Options when using multiple JWK Sets.

    Thank you to @aklinkert for this feature request!


    Allow manual refresh of a remote JWKS resource

    The purpose of this release is to add a method to manually refresh the remote JWKS resource. This can bypass the rate limit, if the option is set.

    Please see the new .Refresh method.


    Commits
    • f76c64f Merge pull request #78 from MicahParks/multiple_jwks
    • eaceb56 Add comment for RefreshUnknownKID option
    • 9ce014e Merge branch 'master' into multiple_jwks
    • fb3c60d Merge pull request #76 from MicahParks/manual_refresh
    • 6739ca5 Add note in README.md
    • 646644e Ass tests for multiple JWK Sets
    • 0278abb Add note in README.md
    • 040769c Start on tests for multiple JWKS
    • 81c7ee2 Add support for multiple JWKS
    • 24f9eb7 Add comment for exported data structure
    • Additional commits viewable in compare view

  • Bump go.opentelemetry.io/otel/sdk from 1.11.1 to 1.11.2

    Bump go.opentelemetry.io/otel/sdk from 1.11.1 to 1.11.2

    Bumps go.opentelemetry.io/otel/sdk from 1.11.1 to 1.11.2.

    Changelog

    Sourced from go.opentelemetry.io/otel/sdk's changelog.

    [1.11.2/0.34.0] 2022-12-05

    Added

    • The WithView Option is added to the go.opentelemetry.io/otel/sdk/metric package. This option is used to configure the view(s) a MeterProvider will use for all Readers that are registered with it. (#3387)
    • Add Instrumentation Scope and Version as info metric and label in Prometheus exporter. This can be disabled using the WithoutScopeInfo() option added to that package. (#3273, #3357)
    • OTLP exporters now recognize: (#3363)
      • OTEL_EXPORTER_OTLP_INSECURE
      • OTEL_EXPORTER_OTLP_TRACES_INSECURE
      • OTEL_EXPORTER_OTLP_METRICS_INSECURE
      • OTEL_EXPORTER_OTLP_CLIENT_KEY
      • OTEL_EXPORTER_OTLP_TRACES_CLIENT_KEY
      • OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY
      • OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE
      • OTEL_EXPORTER_OTLP_TRACES_CLIENT_CERTIFICATE
      • OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE

    ... (truncated)
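For illustration, the OTLP exporter environment variables quoted in the changelog above might be set like this (the endpoint-less per-signal split is real, but the file paths and values here are hypothetical, not taken from any project config):

```shell
# Allow a plaintext (non-TLS) OTLP connection for all signals,
# or override per signal for traces and metrics individually.
export OTEL_EXPORTER_OTLP_INSECURE=true
export OTEL_EXPORTER_OTLP_TRACES_INSECURE=true
export OTEL_EXPORTER_OTLP_METRICS_INSECURE=false

# Mutual-TLS client credentials (paths are hypothetical examples).
export OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE=/etc/otel/client.crt
export OTEL_EXPORTER_OTLP_CLIENT_KEY=/etc/otel/client.key
```

Per-signal variables (the `_TRACES_` / `_METRICS_` forms) take precedence over the generic ones for that signal.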

    Commits
