A cloud-native application simulator for golang

Build and upload Docker images

Build the Docker images for the main application and the worker:

  1. Under the model directory, run:
docker build -t redis-demo .
  2. Under the worker directory, run:
docker build -t redis-demo-worker .

After building the Docker images, load them into each cluster i by running:

kind load docker-image redis-demo --name={cluster$i}
kind load docker-image redis-demo-worker --name={cluster$i}
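
For example, assuming three kind clusters named cluster1, cluster2, and cluster3, the images can be loaded in a loop:

# Hypothetical example: adjust the cluster names and count to your setup
for i in 1 2 3; do
  kind load docker-image redis-demo --name=cluster$i
  kind load docker-image redis-demo-worker --name=cluster$i
done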

Dependencies

  1. kind
  2. tsung
  3. istioctl
  4. kubectl
  5. go (for installation, configuration, and basic testing, follow the instructions in e.g. How to Install GoLang (Go Programming Language) in Linux; make sure the Go environment variables and PATH are configured accordingly)
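
To verify that the tools are installed and on the PATH, the following commands can be used (output will vary with your versions):

kind version
tsung -v
istioctl version
kubectl version --client
go version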

Environment Preparation

  1. Make sure there exists a Kubernetes namespace named edge-namespace with Istio sidecar injection enabled (see the example commands after this list)
  2. Make sure the application-generator folder is located under ~/go_projects/src/ and initialize the module by executing go mod init
  3. If needed, install Go module dependencies, e.g. cobra and yaml
  4. Deploy InfluxDB by running the startup.sh script and providing the number of clusters as an argument
    ./startup.sh {$no_of_clusters}
  5. Modify any of the service chain files under the chain directory according to your requirements.
  6. Modify any of the cluster placement files under the clusters directory according to your requirements.
  7. Generate and deploy the Kubernetes manifest files by running the generator.sh script. It accepts three arguments: the path to the chain file, the path to the cluster file, and the readiness probe value in seconds.
./generator.sh {chain file} {cluster file} {readiness probe}
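
For example, assuming a chain description chain/chain1.json and a placement file clusters/cluster1.json (both file names are hypothetical) with a 10-second readiness probe:

./generator.sh chain/chain1.json clusters/cluster1.json 10
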
  8. Modify the necessary files for the request generator
    • Change the initial field of the json files under the tsung directory according to the chain configuration.
    • Change the chain_no field of the json files under the tsung directory according to the chain configuration. For example, for the first chain it should be 1.
    • Update the request_task_type of the json files under the tsung directory to assign a user-defined task to each microservice in the chain.
    • Change the server host IP address in the conf.xml file to the Istio ingress gateway for the first microservice in the chain (see the example command after this list).
    • Change the chain json file under the request section in conf.xml to send requests to the desired chain. For example, if the first chain is targeted it should be chain1.json.
  9. Change the Kubernetes context to the main cluster
kubectl config use-context cluster1
  10. Open the istioctl Grafana dashboard and add a custom data source for InfluxDB
istioctl dashboard grafana
  • Configure a data source for InfluxDB.
  • Add the custom dashboard by importing the json file under the grafana folder.
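
The following kubectl commands sketch steps 1 and 8 above. They assume a standard Istio installation with the istio-ingressgateway service in the istio-system namespace and may need to be adapted to your clusters:

# Step 1: create the namespace and enable Istio sidecar injection
kubectl create namespace edge-namespace
kubectl label namespace edge-namespace istio-injection=enabled

# Step 8: look up the ingress gateway address to use as the server host in tsung/conf.xml
# (for kind clusters without a load balancer, use the node IP and the gateway NodePort instead)
kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'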

Running

Note: Make sure there exists an Istio gateway and a virtual service for the frontend service(s); for an example, see the ./frontend/ folder (a minimal sketch is also shown at the end of this section). After configuring the environment correctly, you can start the request generator with the following command:

tsung -f tsung/conf.xml -k start

You can observe the performance metrics for both Istio and the chain using the dashboards in the Grafana interface. To stop traffic generation, use:

tsung stop
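
For reference, a minimal Istio Gateway and VirtualService for the first microservice in a chain could look like the sketch below. The resource names, hosts, and target service (service1 on port 80 in edge-namespace) are assumptions for illustration; the example under the ./frontend/ folder is authoritative.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: frontend-gateway        # assumed name
  namespace: edge-namespace
spec:
  selector:
    istio: ingressgateway       # default Istio ingress gateway selector
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend                # assumed name
  namespace: edge-namespace
spec:
  hosts:
    - "*"
  gateways:
    - frontend-gateway
  http:
    - route:
        - destination:
            host: service1      # assumed first microservice in the chain
            port:
              number: 80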

For more information, see the doc folder and the master thesis report.

Owner
Ericsson Research
Early research results and innovative ideas from Ericsson Research
Comments
  • Latencies between microservices

    As we discussed, there seems to be a need for a feature to support latencies between microservices, either using a service mesh or as a feature inside the application itself. We should decide which one we need.

  • Option to inject delays to simulate several aspects

    • Cross-cluster transport latencies (this may already be done with fortio, just include it in the chain semantics)
    • Application processing or queueing latencies
    • To specify transport latency as a ratio of the application processing latency

  • Feature/convert flask restful

    As we decided to use the flask_restful package instead of flask, this PR proposes the required changes, such as:

    • dynamic urls
    • multiple endpoints (more fanout)

    It closes #3 and #36.

  • Feature/wip/new service description format

    It closes #6.

    It could also help for other listed issues as well.

    The old description and generated files are deleted. A new directory is created (input) and the new proposed description is listed there.

    Also I created another struct for deployment with affinity.

    ConfigMaps are also updated. Note that by merging this PR alone, the services are not able to send and receive requests.

  • Support multi-node kubernetes clusters

    Also support the placement in multi-node (affinity) environments (then it is possible for the community to use the tool for evaluating their single-cluster service placement algorithms)

  • add replication

    Add a replication factor for each service, which can be specified by the user for each cluster. For instance, if the user specifies:

    ...
    "services": [
        {
          "name": "service1",
          "clusters": [
            {
              "cluster": "cluster-1",
              "replicas": 2,
              "namespace": "default",
              "node": "node-1"
            }
          ],
    ...
    

    After generating the yaml files and deploying them to the cluster, we will have:

     % kubectl get pods
    NAME                        READY   STATUS    RESTARTS   AGE
    service1-75576df8cd-6djv8   2/2     Running   0          3m27s
    service1-75576df8cd-kswrq   2/2     Running   0          3m27s
    
    

    It closes #46 .

  • Support customizable cpu, memory, network task complexities

    closes #14, closes #9, closes #4, closes #38, closes #20

    New endpoint format:

    "endpoints": [ { "name": "end1", "protocol": "http", "execution_mode": "parallel", "cpu_complexity": { "execution_time": "5s", "method": "fibonacci", "workers": 2, "cpu_affinity": [ 0, 1 ], "cpu_load": "100%" }, "memory_complexity": { "execution_time": "5s", "method": "swap", "workers": 24, "bytes_load": "100%" }, "network_complexity": { "forward_requests": "asynchronous", "response_payload_size": 512, "called_services": [ { "service": "service2", "port": "80", "endpoint": "end2", "protocol": "http", "traffic_forward_ratio": 1, "request_payload_size": 256 } ] } } ]

    New response format:

    { "cpu_task":{ "services":[ "service1/end1", "service2/end2" ], "statuses":[ "stress-ng: info: [76] dispatching hogs: 2 cpu\nstress-ng: info: [76] successful run completed in 5.00s\nstress-ng: info: [76] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s\nstress-ng: info: [76] (secs) (secs) (secs) (real time) (usr+sys time)\nstress-ng: info: [76] cpu 64633075 5.00 9.61 0.00 12926973.13 6725606.14\n", "stress-ng: info: [33] dispatching hogs: 2 cpu\nstress-ng: info: [33] successful run completed in 5.32s\nstress-ng: info: [33] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s\nstress-ng: info: [33] (secs) (secs) (secs) (real time) (usr+sys time)\nstress-ng: info: [33] cpu 4349696 5.17 0.80 0.00 841800.59 5437120.00\n" ] }, "memory_task":{ "services":[ "service1/end1", "service2/end2" ], "statuses":[ "stress-ng: info: [80] dispatching hogs: 24 vm\nstress-ng: info: [80] successful run completed in 5.28s\nstress-ng: info: [80] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s\nstress-ng: info: [80] (secs) (secs) (secs) (real time) (usr+sys time)\nstress-ng: info: [80] vm 0 5.23 4.82 5.43 0.00 0.00\n", "stress-ng: info: [35] dispatching hogs: 24 vm\nstress-ng: info: [35] successful run completed in 5.33s\nstress-ng: info: [35] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s\nstress-ng: info: [35] (secs) (secs) (secs) (real time) (usr+sys time)\nstress-ng: info: [35] vm 0 5.31 4.03 5.62 0.00 0.00\n" ] }, "network_task":{ "services":[ "(service1/end1, service2/end2)" ], "statuses":[ 200 ], "payload":"JDYg8VVuaptwsGdS1rxq7Rwr04axGBdIaRBPaN55iuvcogCJIhpDCPnrcLpKI671sGHkBIylJ8DrCzW9QgI16DnYXKm8D0Of0wL55Tar0EHjwP563hPdAd3xhaoZDM3BIP0ZcNHjLWC5k1v2Y5OEZSTodecrkG5JvPAcl93G1rOU0KUR2ZMQ3aljh8d4uKaXD6j4RMmuNvd2VuuHicUnIkxcUA32WBnEgXkmYJmlF6nUggaU4TR93mZxWhQdMOTYydFsKrtlZQ39zOA66F2kxzV7eYtOobpjoz3XmjCiceEU2PfmnOJiKtsBDzevMRm0lWll2Ua4FZQETnORPWhpwnKnPdxlJcsqGAoAhzmR8yF8JXXACkFP9yfcaW6rVuQIShCHPbMCAi0uEhnbr1tiXoJHhWLUqLSVlh57PDGXC74fmZ1gpuQQiYyKoN5EmPSrOyqUe4iVgMDBg2sOHzWrvpSOg6QDB3rQg3jHP7srh87YpYnjaZBqM6ns6GlnlVKS" } }

  • Fix/unexpected output

    As there was a problem with asynchronous and synchronous request forwarding, in this PR we suggest:

    • Splitting them into separate coroutine functions and making the run_task function not a coroutine
    • Adding a POST method for the endpoints.

    This PR is tested for the following architecture:

    graph TD;
        A-->B;
    

    The following are the outputs of testing on kind. The output for asynchronous forwarding:

    root@service1-75576df8cd-4jwqz:/usr/src/app# curl service1/end1
    {"services": ["http://service2:80/end2"], "statuses": [200]}
    root@service1-75576df8cd-4jwqz:/usr/src/app# curl service2/end2
    {"services": [], "statuses": []}
    
    

    The output for synchronous forwarding:

    root@service1-75576df8cd-7kz5h:/usr/src/app# curl service1/end1
    {"services": ["http://service2:80/end2"], "statuses": [200]}
    root@service1-75576df8cd-7kz5h:/usr/src/app# curl service2/end2
    {"services": [], "statuses": []}
    
  • Random generation of service description files

    closes #10

    The user can now run the service description generator in two different modes: (i) 'random' mode, which generates a random description file, or (ii) 'preset' mode, which generates Kubernetes manifests based on a description file in the input directory.

    Via an interactive process, the user can then choose between the "simple" and "extended" configuration modes.

    In both cases, the user needs to specify at minimum the following parameters: cluster prefix, number of clusters, namespace prefix, and number of namespaces. In the extended case, the user also needs to specify: maximum number of services, maximum number of cross-cluster replicas per service, and maximum number of endpoints per service.

    After that, the code will generate both the json input file and the respective k8s yaml files.

  • optional nodename for deployment affinity

    This PR closes #26

    Previously we had two deployment models, one with affinity and one without. Now we have one deployment model, and if the user does not enter the nodename in the service description, we do not add an empty string to the generated files.

    Example: If the input json file is like this:

    .
    .
    "services": [
        {
          "name": "service-1",
          "clusters": [
            {
              "cluster": "cluster-1",
              "namespace": "ns-1",
            }
          ],
          "resources": {
    .
    .
    .
    

    Before this PR the output would be like:

    spec:
                nodeName: 
                containers:
                    - name: app
                      image: app-demo:latest
                      imagePullPolicy: Never
    

    After this PR the output is:

    spec:
                containers:
                    - name: app
                      image: app-demo:latest
                      imagePullPolicy: Never
    
  • Input validation for generator

    At this point, there is no validation for the input json. It seems that validation is required at least for invalid characters in names and for an invalid structure of the resource configuration.

  • Add support for other tasks

    To generate heavy traffic or generally IO-intensive tasks, for memory capacity, memory bandwidth, storage capacity, storage bandwidth, LLC capacity/bandwidth, etc., and for combinations of resources. There is already a task of type "communication" which generates traffic per hop based on fortio.

  • Possibility to have both short-lived and long-lived service-to-service interactions

    It looks like it’s possible to use the ‘sleep’ task to simulate the time spent by one microservice; if the interaction is changed to, or supports, synchronized mode, the ‘sleep’ task could be used to support short- or long-lived interactions.
