Client extension for interacting with Kubernetes clusters from your k6 tests.

⚠️ This is a proof of concept

As this is a proof of concept, it won't be supported by the k6 team. It may also break in the future as xk6 evolves. USE AT YOUR OWN RISK! Any issues with the tool should be raised here.



xk6-kubernetes

A k6 extension for interacting with Kubernetes clusters while testing. Built for k6 using xk6.

Build

To build a k6 binary with this extension, first ensure you have the prerequisites: the Go toolchain and Git.

Then:

  1. Download xk6:
$ go get -u github.com/k6io/xk6
  2. Build the binary:
$ xk6 build --with github.com/k6io/xk6-kubernetes

Example

import { Kubernetes } from 'k6/x/kubernetes';

export default function () {
  const kubernetes = new Kubernetes();
  console.log(`${kubernetes.pods.list().length} Pods found:`);
}

Result output:

$ ./k6 run script.js

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: ../xk6-kubernetes/script.js
     output: -

  scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
           * default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)

INFO[0001] 16 Pods found:                                source=console

running (00m00.0s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs  00m00.0s/10m0s  1/1 iters, 1 per VU

     data_received........: 0 B 0 B/s
     data_sent............: 0 B 0 B/s
     iteration_duration...: avg=9.64ms min=9.64ms med=9.64ms max=9.64ms p(90)=9.64ms p(95)=9.64ms
     iterations...........: 1   25.017512/s

Inspect the examples folder for more details.

Comments
  • Add documentation of the API methods on Readme

    We have added more features to this project, but apart from the examples, documentation is missing.

    It would be good to start with:

  • Add helpers

    This change set introduces helpers to facilitate common tasks when setting up tests in Kubernetes.

    It also introduces a set of structured operations that work on typed objects (e.g. Pods), as opposed to the unstructured operations that work on generic objects. The structured operations are intended for the helpers, to avoid the complexity of manipulating generic objects in Go.
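
    As a rough sketch of the intent, a helper could reduce a common task to a single call (the helper name and signature here are illustrative, not the final API):

    import { Kubernetes } from 'k6/x/kubernetes';

    export default function () {
      const k8s = new Kubernetes();
      // Hypothetical helper bound to a namespace: waits until the named
      // pod is running, instead of listing pods and polling manually.
      const helpers = k8s.helpers('my-namespace');
      helpers.waitPodRunning('my-pod', 30); // pod name, timeout in seconds
    }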

  • Added PVC and PV pkg

    I've added packages to work with Persistent Volumes and Persistent Volume Claims.

    For the PV example, I could only guesstimate the apply-get-delete version, because in our cluster we're not allowed to create PVs directly. I hope it works, but if anyone has a cluster where they could try it out, I'd be happy if you tested it and edited the YAML in case it doesn't work :)

  • Added Apply functions

    I have added an Apply function to the Kubernetes classes. This function tries to parse a provided YAML string and, if the kind of the parsed object matches the class, creates the object on the Kubernetes server.

    I added this because I could not figure out how to pass Kubernetes objects to the Create function through JavaScript. A sketch of the resulting usage is shown below.
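
    A minimal sketch of that usage, assuming the pods class and a plain YAML string (the manifest itself is illustrative):

    import { Kubernetes } from 'k6/x/kubernetes';

    const podYaml = `
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      namespace: default
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "300"]
    `;

    export default function () {
      const k8s = new Kubernetes();
      // Apply parses the YAML; since the parsed kind (Pod) matches the
      // class, the object is created on the Kubernetes server.
      k8s.pods.apply(podYaml);
    }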

  • Error: the server could not find the requested resource

    After running the xk6-kubernetes example script add-ephemeral.js, I noticed an error on stdout:

    ERRO[0002] GoError: the server could not find the requested resource
    running at reflect.methodValueCall (native)
    

    The pod gets created, though, so the functionality of the script works as expected. The error just makes it seem like there's a failure somewhere...

    When I kill the pod created with the add-ephemeral.js script (using kill-pod.js), I also get the error message. The pod does get killed.

    The create-pod.js script does not produce this error message, and killing that pod (with kill-pod.js) also comes back successful (no error message).

  • Error authenticating to cloud (GCP)

    While running the basic list-pods script against a cluster in GCP, I encounter the following error:

    ERRO[0000] no Auth Provider found for name "gcp"
            at file:///home/vw/dev/k6/scripts/list-pods.js:5:21(4)
            at native  executor=per-vu-iterations scenario=default source=stacktrace
    

    From my understanding, the expected behavior is to read from the provided kubeconfig filepath (or fall back to the default path).

    Is the default incorrect, or does the documentation need to be updated?

  • Add ephemeral container to a running pod

    Adds a function to Pods for creating an ephemeral container in a running pod. Also adds a function for creating a pod, to facilitate testing. A sketch of the intended usage is shown below.

    Note: tested with a K8s 1.23.x cluster. Previous versions may not work, or may need the ephemeral containers feature gate enabled.

    closes #30
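
    A sketch of the intended usage from a script such as add-ephemeral.js (the exact signature and container fields are assumptions):

    import { Kubernetes } from 'k6/x/kubernetes';

    export default function () {
      const k8s = new Kubernetes();
      // Assumed signature: pod name, namespace, and the spec of the
      // ephemeral container to attach to the running pod.
      k8s.pods.addEphemeralContainer('target-pod', 'default', {
        name: 'debug',
        image: 'busybox',
        command: ['sleep', '300'],
      });
    }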

  • Support only some specific Kubernetes versions?

    I am trying to run tests against Kubernetes v1.19.13 using most of the example scripts, but I don't think it's working. For example, with get-pod.js: when I run it locally and the pod is not present in the namespace, the test should display: pod not found

  • Adding support for Ingresses within the generic resource API

    Fixes #82

    NOTE: the use of the Apply method in the example script will fail if the resource already exists. This has been corrected by #81, which is yet to be merged.

  • Apply fails if the resources already exist

    Self-explanatory, I think 😄

    The error:

    ERRO[0003] GoError: pods "httpbin" already exists
            at reflect.methodValueCall (native)
            at setup (file:///Users/dgzlopes/go/src/github.com/grafana/xk6-disruptor/examples/httpbin/disrupt-pod.js:18:14(7))
            at native  hint="script exception"
    

    I have the feeling this shouldn't fail. Likewise, kubectl apply doesn't fail when the resources already exist.

  • Add function for retrieving replicas (pods) of a deployment

    A very basic requirement when executing chaos tests on deployments is to act on one of its replicas (for example, kill it). Even though it is possible to obtain the list of replicas by listing all pods in a namespace and filtering those that match the label selector of the deployment, this is inconvenient (a sketch of this workaround follows below).

    Therefore, it would be convenient to have a function that, given a deployment, returns its existing list of pods. This function could optionally accept flag(s) for common field selectors, such as state (e.g. to filter out non-running replicas).
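
    A sketch of that workaround (the list signature, the shape of the returned pod objects, and the app=my-app selector are assumptions):

    import { Kubernetes } from 'k6/x/kubernetes';

    export default function () {
      const k8s = new Kubernetes();
      // List every pod in the namespace and keep those matching the
      // deployment's label selector.
      const replicas = k8s.pods.list('default').filter(
        (pod) => pod.metadata.labels && pod.metadata.labels.app === 'my-app'
      );
      // Act on one replica, e.g. delete it in a chaos test.
      if (replicas.length > 0) {
        k8s.pods.delete(replicas[0].metadata.name, 'default');
      }
    }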

  • Add options parameter to Delete function in Generic API

    Presently the Delete function does not allow the specification of DeleteOptions. These options are important in certain use cases, for example forcing deletion by setting gracePeriodSeconds to 0, or changing the propagationPolicy. A sketch of the proposed usage is shown below.
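
    A sketch of the proposal (the options parameter does not exist yet; the generic delete signature is assumed to follow the current API):

    import { Kubernetes } from 'k6/x/kubernetes';

    export default function () {
      const k8s = new Kubernetes();
      // Proposed: a fourth parameter mirroring Kubernetes DeleteOptions.
      k8s.delete('Pod', 'my-pod', 'default', {
        gracePeriodSeconds: 0,           // force immediate deletion
        propagationPolicy: 'Background', // control cascading deletion
      });
    }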

  • Re-implement generic Apply method using dynamic client's apply

    Presently the Apply method provided by the generic API is implemented as a Create from a YAML file. This is not consistent with the experience users have with the kubectl apply command, which allows both creating new resources and modifying existing ones.

    Starting with v1.25, the dynamic client offers an Apply method that implements the desired functionality, allowing both the creation and the modification of resources.

    Therefore, for the sake of consistency and convenience, it would be worthwhile to re-implement the Apply method in the generic API using this newly provided Apply method in the dynamic client. The intended behavior is sketched below.
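
    The intended behavior, sketched from the script side (the generic apply call is assumed; the ConfigMap is illustrative):

    import { Kubernetes } from 'k6/x/kubernetes';

    const yaml = `
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: test-config
      namespace: default
    data:
      key: value-1
    `;

    export default function () {
      const k8s = new Kubernetes();
      k8s.apply(yaml); // first call creates the ConfigMap
      // With the dynamic client's Apply, a second call with modified
      // content should update the resource instead of failing.
      k8s.apply(yaml.replace('value-1', 'value-2'));
    }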

  • Add helper function for waiting until a Deployment is ready

    When setting up the resources for a test, it is common that, after deploying an application as a Deployment, the test must wait until all the replicas are ready. This is a simple task, but having a helper function would avoid repeating the code across tests. A sketch of the polling loop such a helper would replace is shown below.
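
    A sketch of that polling loop (the deployments.get call and the script-side shape of the returned object are assumptions):

    import { Kubernetes } from 'k6/x/kubernetes';
    import { sleep } from 'k6';

    function waitDeploymentReady(k8s, name, namespace, timeoutSeconds) {
      for (let i = 0; i < timeoutSeconds; i++) {
        const dep = k8s.deployments.get(name, namespace);
        // Ready when every desired replica reports ready.
        if (dep.status && dep.status.readyReplicas === dep.spec.replicas) {
          return true;
        }
        sleep(1);
      }
      return false;
    }

    export function setup() {
      const k8s = new Kubernetes();
      if (!waitDeploymentReady(k8s, 'my-app', 'default', 60)) {
        throw new Error('deployment my-app not ready in time');
      }
    }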

  • Add helper function for creating random namespaces

    The main use case for the xk6-kubernetes extension is facilitating the setup of tests, by providing a simple API for creating Kubernetes resources such as secrets, pods, and services required for running a test application.

    When running multiple concurrent tests, it is convenient to isolate tests by using different namespaces for the resources they create. Moreover, it is convenient to use randomly generated namespaces for each test, to prevent interference with other instances of the same test running concurrently or that ran previously and were not properly torn down.

    This means that each such test script must include a sequence of code similar to the example below:

    import { Kubernetes } from 'k6/x/kubernetes'
    import { randomString } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';
    
    // Generate a random name so concurrent test runs do not collide.
    const namespace = randomString(8)
    
    const nsObj = {
        apiVersion: "v1",
        kind: "Namespace",
        metadata: {
            name: namespace
        }
    }
    
    export function setup() {
      const k8s = new Kubernetes()
      k8s.create(nsObj)
    }
    

    Since this is a very common use case, in order to prevent this code being repeated in each test, it would be convenient to provide a helper function that creates a new namespace with a random name and returns its name, reducing the sequence above to the one shown below:

    import { Kubernetes } from 'k6/x/kubernetes'
    
    export function setup() {
      const k8s = new Kubernetes()
      // Proposed helper: creates a namespace with a random name
      // and returns that name.
      const namespace = k8s.randomNamespace()
    }
  • Redesign scope of xk6-kubernetes API

    The xk6-kubernetes extension has been used for chaos experiments, providing basic functionality such as deleting resources (e.g. namespaces, secrets) and running workloads (for example, jobs). It also provided other basic functions for creating resources, adding little or no abstraction over the native Kubernetes API.

    The functions provided by xk6-kubernetes can also be used when setting up tests. However, some common tasks, such as exposing a deployment as a service, require considerable boilerplate code and can become tedious to program.

    At the same time, as the requirements for running chaos experiments have become more complex, some specialized functions have been added, such as attaching ephemeral containers to running pods or executing commands in a container.

    It seems clear that the current design of xk6-kubernetes mixes functions that are either too generic to be useful when setting up tests or too specialized to belong in a general-purpose Kubernetes extension. Therefore the following changes are proposed:

    1. Focus on providing helpers that abstract common tasks, such as those proposed in #62
    2. Cover other requirements by providing generic apply functions for creating resources from YAML definitions.
    3. Move the specialized functionality introduced to support chaos experiments (ephemeral containers, command execution) to xk6-chaos

    Regarding the generic apply method, as handling inline YAML can be cumbersome (in particular if we want to substitute certain fields using variables), other alternatives can be explored:

    • rendering inline YAML using inline text templates, similar to using Helm charts (see the sketch after this list)
    • generating YAML using tools such as kustomize, embedding the tool(s) into the library to avoid dependencies on external binaries in the test environment.
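
    For the first alternative, JavaScript template literals already provide a minimal form of inline templating; a sketch, assuming the generic apply discussed in this proposal:

    import { Kubernetes } from 'k6/x/kubernetes';

    const name = 'test-pod';
    const image = 'busybox';

    // Fields are substituted into the inline YAML via template literals.
    const podYaml = `
    apiVersion: v1
    kind: Pod
    metadata:
      name: ${name}
    spec:
      containers:
      - name: ${name}
        image: ${image}
    `;

    export default function () {
      const k8s = new Kubernetes();
      k8s.apply(podYaml);
    }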

    Edit: The generic interface could also be provided by using a generic Go client that handles resources as generic structs, which should map easily to JavaScript objects.

    One potential benefit of using a generic apply method is the possibility of removing dependencies on many of the Kubernetes API packages, reducing the size of the extension.

