Kubernetes-native framework for test definition and execution

████████ ███████ ███████ ████████ ██   ██ ██    ██ ██████  ███████ 
   ██    ██      ██         ██    ██  ██  ██    ██ ██   ██ ██      
   ██    █████   ███████    ██    █████   ██    ██ ██████  █████   
   ██    ██           ██    ██    ██  ██  ██    ██ ██   ██ ██      
   ██    ███████ ███████    ██    ██   ██  ██████  ██████  ███████ 
                                           /tɛst kjub/ by Kubeshop

Welcome to TestKube - your friendly Kubernetes testing framework!

TestKube decouples test artifacts and execution from CI/CD tooling; tests are meant to be part of your cluster's state and can be executed as needed:

  • Manually via the kubectl CLI
  • Externally triggered via API (CI, external tooling, etc)
  • Automatically on deployment of annotated/labeled services/pods/etc (WIP)

Main TestKube components are:

  • kubectl TestKube plugin - simple to install without 3rd-party repositories (like Krew); communicates with the
  • API Server - work orchestrator; runs executors and gathers execution results
  • CRDs Operator - watches TestKube CRs, handles changes, communicates with the API Server
  • Executors - run the tests defined for specific runners
  • Results DB - for centralized test results aggregation and analysis
  • TestKube Dashboard - standalone web application for viewing real-time TestKube test results

TestKube attempts to:

  • Avoid vendor lock-in for test orchestration and execution in CI/CD pipelines
  • Make it easy to orchestrate and run any kind of tests - functional, load/performance, security, compliance, etc. - in your clusters, without having to wrap them in Docker images or provide network access
  • Make it possible to decouple test execution from build processes; engineers should be able to run specific tests whenever needed
  • Centralize all test results in a consistent format for "actionable QA analytics"
  • Provide a modular architecture for adding new types of test scripts and executors

Getting Started

Check out the Installation and Getting Started guides to set up TestKube and run your first tests!
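If you want a feel for the flow before reading the guides, creating and running a test from the CLI looks roughly like this (a sketch; the test name and collection file are placeholders, and flags may differ slightly between versions):

```shell
# Create a test from a local Postman collection (name and file are illustrative)
kubectl testkube create test --name my-first-test --type postman/collection --file my-collection.json

# Run it and inspect the results
kubectl testkube run test my-first-test
kubectl testkube get executions
```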

Discord

Don't hesitate to say hi to the team and ask questions on our Discord server.

Documentation

Available at https://kubeshop.github.io/testkube

Contributing

Go to the contributing document to read more about how you can help us 🔥

Feedback

Whether it helps you or not - we'd LOVE to hear from you. Please let us know what you think and of course, how we can make it better.

Owner
kubeshop - an open-source accelerator-incubator focused on k8s
Comments
  • Need information on container executors

    Hi

    I have a container image that takes care of some test executions. I would like to create it as an executor in Testkube.

    I have gone through the documentation below, but it does not include the commands to follow, such as a command to create an executor from a YAML file and how to run a test that points to this container: https://kubeshop.github.io/testkube/test-types/container-executor

    Could you please help me with the commands and steps that I need to follow ?
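    For reference, the container-executor flow is usually: define an Executor CR whose type you then reference from a Test. The sketch below follows the fields shown elsewhere on this page, with placeholder names and image (verify field names against the linked docs for your Testkube version):

    ```yaml
    # Sketch: container executor definition (names and image are placeholders)
    apiVersion: executor.testkube.io/v1
    kind: Executor
    metadata:
      name: my-container-executor
      namespace: testkube
    spec:
      executor_type: container          # run the image directly, no runner binary
      image: my-registry/my-test-image:latest
      command: ["run-my-tests"]         # entrypoint inside your image
      types:
      - my-company/my-test
    ```

    After applying it with kubectl apply -f, a test created with --type my-company/my-test should be routed to this executor.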

  • Documentation on how to use a Cypress project with a PRIVATE (bitbucket) git repository is incomplete

    [SOLVED]


    Describe the bug The documentation on how to create a testkube test from a Cypress project does not explain how to do it when it's a private repository.

    To Reproduce

    1. Place code in a (bitbucket) repository that requires authentication before cloning
    2. Create new test from Cypress project via testkube dashboard, with the help of the documentation
    3. As type use "git directory"
    4. As URL use the HTTPS one, containing the username (like: https://[email protected]/...)
    5. As token use an Atlassian API token (see screenshots)
    6. Test is created, BUT: When executing it, it fails[1] (I guess logs in testkube are missing, since the test didn't run - this makes sense from a developer perspective, but it's not that end-user friendly)...
    7. Note that, when I am using a - publicly accessible! - Cypress demo project my testkube test created from Cypress project works[2]!

    [1] Logs from testkube's API server in DataDog (screenshots omitted)

    [2] Example of a successful test run, using the publicly accessible testkube test repo (screenshot omitted)

    Expected behavior Cypress project is checked out, tests are run.

    Version / Cluster

    • testkube version: 1.4.21
    • AWS EKS
    • K8s version: 1.22

    Screenshots This is how my test config looks (screenshot omitted)

    As "Token" I used the value of my personal Atlassian API token (screenshot omitted).
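    For what it's worth, the CLI shape commonly used for private repos looks like this (a sketch; flag names may vary between Testkube versions, and the repo URL and credentials are placeholders):

    ```shell
    # Placeholders: workspace, repo, user and token are illustrative
    kubectl testkube create test \
      --name private-cypress-test \
      --type cypress/project \
      --git-uri https://bitbucket.org/myworkspace/myrepo.git \
      --git-branch main \
      --git-username my-bitbucket-user \
      --git-token "$BITBUCKET_API_TOKEN"
    ```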

  • Unable to copy parameter files from bitbucket to Kubernetes cluster through testkube create test open API

    Hi, I am unable to copy parameter files and script files from Bitbucket to the Kubernetes cluster where testkube is running.

    Could you please let me know how I can achieve this through the create-new-test API?

  • After Istio injection a testkube connection error happens

    Describe the bug A K8s environment with Istio TLS connections between pods. Without Istio injection, "testkube" cannot connect to the actual test endpoint because a TLS "Connection reset by peer" error happens.

    To Reproduce Steps to reproduce the behavior:

    1. Run 'kubectl testkube run test'
    2. Label the "testkube" namespace for Istio: kubectl label namespace testkube istio-injection=disabled --overwrite
    3. See error error: error trying to reach service: read tcp 172.17.0.1:59970->172.17.0.38:8088: read: connection reset by peer ⨯ getting test suites executions list (error: api/GET-testkube.TestSuiteExecutionsResult returned error: api server response: '{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"error trying to reach service: read tcp 172.17.0.1:59972-\u003e172.17.0.38:8088: read: connection reset by peer","reason":"ServiceUnavailable","code":503}

    Expected behavior The testkube CLI can reach the API server and list test suite executions without connection errors.

    Version / Cluster

    • Which testkube version? : 1.2.48
    • What Kubernetes cluster? (e.g. GKE, EKS, Openshift etc, local KinD, local Minikube) Minikube, Istio mTLS between pods.
    • What Kubernetes version? v1.21.11

    Screenshots

    $kube get pods -n testkube
    NAME                                                    READY   STATUS    RESTARTS   AGE
    testkube-api-server-f86c985b8-297fs                     2/2     Running   1          20h
    testkube-dashboard-6f5f84f8d8-5b9t7                     2/2     Running   0          20h
    testkube-minio-testkube-64cd475b94-fc5hb                2/2     Running   0          20h
    testkube-mongodb-6c9c5db4d5-wq9xh                       2/2     Running   0          20h
    testkube-operator-controller-manager-66ff4cdfd4-tblg2   3/3     Running   1          20h
    

    Additional context 172.17.0.37:8088 - API server internal IP address 172.17.0.1:59970 - Not sure which pod's IP address it is.

    API server POD log

    Available migrations for v1.3.0
    No migrations available for v1.3.0
    {"level":"warn","ts":1657104355.4716427,"caller":"api-server/main.go:105","msg":"Getting uniqe clusterId","error":null}
    {"level":"info","ts":1657104355.5050254,"caller":"v1/server.go:279","msg":"Testkube API configured","namespace":"testkube","clusterId":"clusterbb669eef1b556e914a11107ae51ccfa9","telemetry":false}
    segment 2022/07/06 10:45:55 ERROR: sending request - Post "https://api.segment.io/v1/batch": read tcp 172.17.0.37:40666->35.155.223.175:443: read: connection reset by peer
    segment 2022/07/06 10:45:55 ERROR: 1 messages dropped because they failed to be sent and the client was closed
    {"level":"info","ts":1657104355.5379503,"caller":"api-server/main.go:130","msg":"starting Testkube API server","telemetryEnabled":true,"clusterId":"clusterbb669eef1b556e914a11107ae51ccfa9","namespace":"testkube"}
    
     ┌───────────────────────────────────────────────────┐
     │                   Fiber v2.31.0                   │
     │               http://127.0.0.1:8088               │
     │       (bound on host 0.0.0.0 and port 8088)       │
     │                                                   │
     │ Handlers ........... 166  Processes ........... 1 │
     │ Prefork ....... Disabled  PID ................. 1 │
     └───────────────────────────────────────────────────┘
    
    segment 2022/07/06 10:45:55 ERROR: sending request - Post "https://api.segment.io/v1/batch": read tcp 172.17.0.37:32808->52.34.77.50:443: read: connection reset by peer
    segment 2022/07/06 10:45:55 ERROR: 1 messages dropped because they failed to be sent and the client was closed
    segment 2022/07/06 11:45:55 ERROR: sending request - Post "https://api.segment.io/v1/batch": EOF
    segment 2022/07/06 11:45:55 ERROR: 1 messages dropped because they failed to be sent and the client was closed
    segment 2022/07/06 12:45:55 ERROR: sending request - Post "https://api.segment.io/v1/batch": EOF
    segment 2022/07/06 12:45:55 ERROR: 1 messages dropped because they failed to be sent and the client was closed
    segment 2022/07/06 13:45:55 ERROR: sending request - Post "https://api.segment.io/v1/batch": EOF
    segment 2022/07/06 13:45:55 ERROR: 1 messages dropped because they failed to be sent and the client was closed
    segment 2022/07/06 14:45:55 ERROR: sending request - Post "https://api.segment.io/v1/batch": EOF
    segment 2022/07/06 14:45:55 ERROR: 1 messages dropped because they failed to be sent and the client was closed
    segment 2022/07/06 15:45:55 ERROR: sending request - Post "https://api.segment.io/v1/batch": EOF
    segment 2022/07/06 15:45:55 ERROR: 1 messages dropped because they failed to be sent and the client was closed
    segment 2022/07/06 16:45:55 ERROR: sending request - Post "https://api.segment.io/v1/batch": EOF
    segment 2022/07/06 16:45:55 ERROR: 1 messages dropped because they failed to be sent and the client was closed
    segment 2022/07/06 17:45:55 ERROR: sending request - Post "https://api.segment.io/v1/batch": EOF
    segment 2022/07/06 17:45:55 ERROR: 1 messages dropped because they failed to be sent and the client was closed
    segment 2022/07/06 18:45:55 ERROR: sending request - Post "https://api.segment.io/v1/batch": EOF
    segment 2022/07/06 18:45:55 ERROR: 1 messages dropped because they failed to be sent and the client was closed
    segment 2022/07/06 19:45:55 ERROR: sending request - Post "https://api.segment.io/v1/batch": read tcp 172.17.0.37:44758->44.241.139.196:443: read: connection reset by peer
    segment 2022/07/06 19:45:55 ERROR: 1 messages dropped because they failed to be sent and the client was closed
    <omitting repeated error logs>
    
  • 'chmod: .: Operation not permitted' running simple test on OpenShift

    I created a simple bash executor as follows:

    apiVersion: executor.testkube.io/v1
    kind: Executor
    metadata:
      name: bash-executor
      namespace: ops
    spec:
      image: quay.io/openshift/origin-cli:4.7
      command: ["bash"]
      executor_type: container
      types:
      - bash-origin-cli/test
    

    And I have simple test

    apiVersion: tests.testkube.io/v3
    kind: Test
    metadata:
      name: simple-test
      namespace: ops
    spec:
      type: bash-origin-cli/test
      executionRequest:
        args:
        - echo "hello world!"
    
    

    The problem is, when I run the test it fails with the following output

    {"type":"event","content":"running test [637fb1f8de2c89f221a10eee]"}
    {"type":"line","content":"chmod: .: Operation not permitted\nchmod: .: Operation not permitted\n"}
    {"type":"error","content":"process error: exit status 1"}
    

    From what I've seen, that's a problem with the ServiceAccount used by the Job that creates the Pods to run the tests. I do have a ServiceAccount I can use, and I have changed every possible entry in the values file (I'm using Helm to deploy), but nothing seems to change: the ServiceAccount used by the Job is still default.

    What would be the right entry on the values file?
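    For reference, some Testkube chart versions expose a value for the Job's ServiceAccount; the key below is a best guess and must be checked against the values.yaml of your chart version:

    ```yaml
    # values.yaml sketch - key name may differ between chart versions
    testkube-api:
      jobServiceAccountName: my-test-runner-sa
    ```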

  • provide option to specify pod size while running test to generate specific throughput

    Testkube automatically creates a best-effort pod to run the test. There is no option to specify the pod size for running a k6 performance test. This would be a good feature to have, and would help compete with other performance tools that provide this option.

    Solution :

    In the test definition or the Open API, we could pass a parameter to create the pod with the requested resources.
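    A hypothetical shape such a parameter could take, mirroring the standard Kubernetes resources block (this field does not exist in Testkube today; it only illustrates the request):

    ```yaml
    # Hypothetical executionRequest extension - not an existing Testkube field
    executionRequest:
      resources:
        requests:
          cpu: "2"
          memory: 4Gi
        limits:
          cpu: "4"
          memory: 8Gi
    ```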

  • Allow my Testkube Test to fetch the test files from a branch Dynamically

    Context

    When I open a new PR from my feature branch, my CI/CD pipeline creates a new environment for me to run tests against, to see if anything breaks. In Testkube I set the branch the tests are fetched from when I create the test; however, the branch will need to be specified for each execution because it will be a different one every time.

    Question

    How can I tell Testkube which branch to fetch my tests from when I'm triggering an execution?

  • Testkube executor initcontainer "testkube-executor-init" tries to execute "chmod" and fails when uid is not "0"

    Describe the bug The Testkube executor initcontainer "testkube-executor-init" tries to execute "chmod" and fails when the uid is not "0". This means that Testkube does not work on OpenShift with scc "restricted", which is the default scc, or on Kubernetes clusters where uid "0" is not allowed. The testkube executor initcontainer would run on OpenShift if scc "anyuid" were allowed, but this is not the case in most clusters, as it is a security issue.

    {"level":"info","ts":1671465831.2670317,"caller":"minio/minio.go:257","msg":"Getting the contents of buckets [test-curl-test]"}
    {"level":"info","ts":1671465831.267091,"caller":"minio/minio.go:55","msg":"connecting to minio","endpoint":"testkube-minio-service:9000","accessKeyID":"minio","location":"","token":"","ssl":false}
    {"level":"info","ts":1671465831.2713344,"caller":"minio/minio.go:268","msg":"Bucket test-curl-test does not exist"}
    {"type":"line","content":"chmod: .: Operation not permitted\nchmod: .: Operation not permitted\n"}
    {"type":"error","content":"process error: exit status 1"}

    To Reproduce Steps to reproduce the behavior:

    1. Run testkube on OpenShift or in a Kubernetes cluster where uid "0" is not allowed

    Expected behavior The initcontainer also works when run with a uid other than "0". Maybe there is a way to omit "chmod".

    Version / Cluster

    • Which testkube version? 1.8.2
    • What Kubernetes cluster? Openshift
    • What Kubernetes version? v1.23

    Screenshots See error message.

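    A generic Kubernetes-level mitigation for volume-permission errors of this kind is a pod securityContext with fsGroup, so mounted volumes are group-writable without uid "0". Whether it can be injected into Testkube's executor jobs depends on the version; the snippet is a sketch of the pod-spec fragment, not a confirmed Testkube setting:

    ```yaml
    # Pod-level securityContext sketch (values are examples)
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
      fsGroup: 1000     # mounted volumes become writable for this GID
    ```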

  • running mvn test is failing for me, with this error: unable to access 'https://github.com/kubeshop/testkube-executor-maven.git/': Could not resolve host: github.com

    Describe the bug Running the Maven test is failing for me; it cannot connect to GitHub.

    To Reproduce testkube create test --git-uri https://github.com/kubeshop/testkube-executor-maven.git --git-path examples/hello-maven-settings --type maven/test --name maven-example-test --git-branch main

    testkube run test maven-example-test --copy-files "testkube-executor-maven/examples/hello-maven-settings/settings.xml:/tmp/settings.xml" --args "--settings" --args "/tmp/settings.xml" -v "TESTKUBE_MAVEN=true"

    Expected a successful test, but it returns this error: ⨯ process error: exit status 128 output: Cloning into 'repo'... fatal: unable to access 'https://github.com/kubeshop/testkube-executor-maven.git/': Could not resolve host: github.com (screenshot omitted)

    So could you please tell me what the issue is with connecting to GitHub? If I run git clone https://github.com/kubeshop/testkube-executor-maven.git directly, it works fine, but it fails with the testkube command. Could you please help? Thanks!
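    A quick way to check DNS resolution from inside the cluster (standard Kubernetes debugging, independent of Testkube; the pod name and image are arbitrary):

    ```shell
    # Throwaway pod in the testkube namespace
    kubectl run dns-debug -n testkube --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup github.com
    ```

    If this fails too, the problem is cluster DNS or egress policy rather than Testkube itself.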

  • Envs are not passed to the executor container

    Describe the bug

    To Reproduce

    Having a test with this execution request

      executionRequest:
        envs:
          MY_ENV: bla
          SECOND: two
    

    does not pass them to the executor container

        Environment:
          DEBUG:                   
          RUNNER_ENDPOINT:         testkube-minio-service-testkube:9000
          RUNNER_ACCESSKEYID:      ***
          RUNNER_SECRETACCESSKEY:  ***
          RUNNER_LOCATION:         
          RUNNER_TOKEN:            
          RUNNER_SSL:              false
          RUNNER_SCRAPPERENABLED:  true
          RUNNER_DATADIR:          /data
        Mounts:
          /data from data-volume (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t945n (ro)
    
  • TLS Errors and Excessive logging - Helm Chart 1.6 - Operator Controller Manager

    Describe the bug

    The bug in https://github.com/kubeshop/testkube/issues/2231 is not fixed in the most recent Helm Chart (testkube-1.6.0)

    To Reproduce Steps to reproduce the behavior:

    1. Delete the previous install
    2. Run either testkube init or helm install
    3. Add a test
    4. Inspect the logs for the manager container in the testkube-operator-controller-manager-...
    5. Observe 4-5 log messages per second ...http: TLS handshake error from 10.42.213.64:52162: remote error: tls: bad certificate

    Expected behavior The certificate should be valid. The addition of a test should not cause thousands of errors to be logged even if the cert is not valid.

    Version / Cluster

    • Which testkube version?
    Client Version 1.5.41
    Server Version v1.6.0
    Commit 192e00e10fa9ae710c2d1f72ad2c328acc64faf2
    Built by goreleaser
    Build date 2022-09-30T08:15:11Z
    
    • What Kubernetes cluster? (e.g. GKE, EKS, Openshift etc, local KinD, local Minikube)
      • RKE1
      • k3s
    • What Kubernetes version?
      • Happens on multiple k8s versions
        • Server Version: v1.22.11
        • Server Version: v1.23.2+k3s1
  • CRD generation is missing quotes for multiple field values

    Describe the bug Files generated by --crd-only do not contain correct field values and cannot be used directly. There might be more cases than the ones mentioned below.

    To Reproduce Steps to reproduce the behavior:

    A1. Create any test or testsuite with variables
    A2. Run get test with --crd-only, or check out the test in the dashboard under Settings->Definition
    A3. See that quotes are missing from the spec.executionrequest.variables.<variable-name>.value content

    B1. Create any test or testsuite with a schedule
    B2. Run get test with --crd-only, or check out the test in the dashboard under Settings->Definition
    B3. See that quotes are missing from the spec.schedule content (see image from dashboard)

    Expected behavior Quotes around the values so the YAML is valid and can be applied.

    Version / Cluster

    • Which testkube version? 1.8.10


  • Outdated CRD generation for Testsuite

    Describe the bug Outdated CRD generation for Testsuite.

    To Reproduce Steps to reproduce the behavior:

    1. Run 'kubectl testkube get testsuite '
    2. Specify '--crd-only'
    3. See the error: the old field stopTestOnFailure

    Expected behavior stopTestOnFailure should be renamed to stopOnFailure in the CRD package for Test Suite.

    Version / Cluster

    • Which testkube version? 1.8.10
    • What Kubernetes cluster? local KinD
    • What Kubernetes version? 1.22.3
  • Test execution failing while using Secret Variable

    Describe the bug We need to use a Postman collection and pass the URI using variables. I used the following command to execute the test: kubectl testkube run test kubeshop-test-param --secret-variable ourteam="our-team"

    And I am getting the below error. Test execution failed:

    ⨯ process error: exit status 1

    newman

    Kubeshop

    → Home
    ┌
    │ 'uri', undefined
    └
    GET https://kubeshop.io/ [200 OK, 45.51kB, 216ms]
    ✓ Body matches string

    → Team
    GET https://kubeshop.io/*****

    To Reproduce Commands provided in screenshots section

    Expected behavior The variable passed with the test run command should be used in the Postman collection and replaced with the provided value.

    Version / Cluster

    • Which testkube version?
    • What Kubernetes cluster? (e.g. GKE, EKS, Openshift etc, local KinD, local Minikube)
    • What Kubernetes version?

    Screenshots Command to create the test: kubectl testkube create test --file .\kubeshop_requestParamvar.json --name kubeshop-test-param --type postman/collection

    Definition in testkube UI:

    apiVersion: tests.testkube.io/v3
    kind: Test
    metadata:
      name: kubeshop-test-param
      namespace: testkube
    spec:
      type: postman/collection
      content:
        type: string
        data: "{\n\t"info": {\n\t\t"_postman_id": "78f82a7a-c347-4716-bef8-08f42108e727",\n\t\t"name": "Kubeshop",\n\t\t"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"\n\t},\n\t"item": [\n\t\t{\n\t\t\t"name": "Home",\n\t\t\t"event": [\n\t\t\t\t{\n\t\t\t\t\t"listen": "test",\n\t\t\t\t\t"script": {\n\t\t\t\t\t\t"exec": [\n\t\t\t\t\t\t\t"pm.test(\"Body matches string\", function () {",\n\t\t\t\t\t\t\t" pm.expect(pm.response.text()).to.include(\"Accelerator\");",\n\t\t\t\t\t\t\t"});"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t"type": "text/javascript"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t"listen": "prerequest",\n\t\t\t\t\t"script": {\n\t\t\t\t\t\t"exec": [\n\t\t\t\t\t\t\t"console.log(\"uri\", pm.environment.get(\"kubeshop_uri\"));"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t"type": "text/javascript"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t],\n\t\t\t"request": {\n\t\t\t\t"method": "GET",\n\t\t\t\t"header": [],\n\t\t\t\t"url": {\n\t\t\t\t\t"raw": "https://kubeshop.io/",\n\t\t\t\t\t"protocol": "https",\n\t\t\t\t\t"host": [\n\t\t\t\t\t\t"kubeshop",\n\t\t\t\t\t\t"io"\n\t\t\t\t\t],\n\t\t\t\t\t"path": [\n\t\t\t\t\t\t""\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t},\n\t\t\t"response": []\n\t\t},\n\t\t{\n\t\t\t"name": "Team",\n\t\t\t"event": [\n\t\t\t\t{\n\t\t\t\t\t"listen": "test",\n\t\t\t\t\t"script": {\n\t\t\t\t\t\t"exec": [\n\t\t\t\t\t\t\t"hostName1 = pm.environment.get(\"kubeshop_uri\");",\n\t\t\t\t\t\t\t"console.log(\"hostName1 is \", hostName1);",\n\t\t\t\t\t\t\t"pm.test(\"Status code is 200\", function () {",\n\t\t\t\t\t\t\t" pm.response.to.have.status(200);",\n\t\t\t\t\t\t\t"});"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t"type": "text/javascript"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t"listen": "prerequest",\n\t\t\t\t\t"script": {\n\t\t\t\t\t\t"exec": [\n\t\t\t\t\t\t\t""\n\t\t\t\t\t\t],\n\t\t\t\t\t\t"type": "text/javascript"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t],\n\t\t\t"request": {\n\t\t\t\t"method": "GET",\n\t\t\t\t"header": [],\n\t\t\t\t"url": {\n\t\t\t\t\t"raw": "https://kubeshop.io/{{ourteam}}",\n\t\t\t\t\t"protocol": "https",\n\t\t\t\t\t"host": [\n\t\t\t\t\t\t"kubeshop",\n\t\t\t\t\t\t"io"\n\t\t\t\t\t],\n\t\t\t\t\t"path": [\n\t\t\t\t\t\t"{{ourteam}}"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t},\n\t\t\t"response": []\n\t\t}\n\t]\n}"

    Command to execute the test: kubectl testkube run test kubeshop-test-param --secret-variable ourteam="our-team"

    Additional context If I create the variable in the testkube UI it works fine, but we need to pass this variable with the testkube run command as part of automation testing.

  • Unable to use non-root path for dashboard

    Describe the bug The testkube dashboard is not available when the ingress path is not /

    To Reproduce Steps to reproduce the behavior: Using the testkube Helm Chart

    • Set values
    testkube-dashboard:
      enabled: true
      ingress:
        enabled: true
        hosts:
          - myhost.example.com
        path: /test-svc
    
    • Deploy the chart
    • Access http://myhost.example.com/test-svc
    • Observe blank page. The app is loaded, but receives a 404 attempting to load env-config.js (and other files) and therefore does not recognise the non-root path.

    Expected behavior Expected app to load, as it does when configured for the domain root path

    Version / Cluster

    • Which testkube version? 1.8.8
    • What Kubernetes cluster? EKS
    • What Kubernetes version? 1.22

    Screenshots image

    Additional context This issue was raised previously (https://github.com/kubeshop/testkube/issues/2362) and code was added, but I don't see how that code can work, since it sets values in env-config.js, which the app attempts to download from the root.

  • input data added with --file is not loaded into container executor

    Describe the bug

    Creating a test and adding input data with --file does not add the file content to /data/test-content in the container.

    In fact, the file /data/test-content is missing.

    To Reproduce Steps to reproduce the behavior:

    1. Run testkube create test --name container-test --type [your-custom-executor-type] --file test-content.txt
    2. Use ENTRYPOINT|CMD and find . to display all files and folders in the container
    3. Run testkube run test container-test
    4. /data/test-content is missing in log output:
    [...]
    ./srv
    ./data
    ./.dockerenv
    [...]
    

    Expected behavior

    The docs (https://kubeshop.github.io/testkube/test-types/container-executor#input-data) state that --file should add a file /data/test-content with the content of the passed file to the container:

    [...]
    ./srv
    ./data
    ./data/test-content
    ./.dockerenv
    [...]
    

    Version / Cluster

    • Which testkube version? 1.7.29 and 1.8.7 (working with 1.7.28)