
k6

Like unit testing, for performance

A modern load testing tool for developers and testers in the DevOps era.


Download · Install · Documentation · Community


---


k6 is a modern load testing tool, building on our years of experience in the load and performance testing industry. It provides a clean, approachable scripting API, local and cloud execution, and flexible configuration.

This is how load testing should look in the 21st century.


Features

  • Scripting in ES6 JavaScript, with modules to aid code reuse across an organization
  • Everything as code: test logic and configuration options live together, for version-control friendliness
  • Automation-friendly: checks (like asserts) and thresholds for easy and flexible CI configuration
  • Flexible metrics storage: JSON, InfluxDB, Apache Kafka, or the k6 cloud
  • Cloud execution and distributed tests on managed infrastructure

There's even more! See all features available in k6.

Install

Mac

Install with Homebrew by running:

brew install k6

Windows

You can manually download and install the official .msi installation package or, if you use the chocolatey package manager, follow these instructions to set up the k6 repository.

Linux

Notice: Because Bintray is being shut down, we are going to start self-hosting our packages soon, before k6 v0.32.0. This means you will have to re-install k6, since the old .rpm and .deb repositories will stop working.

For Debian-based Linux distributions, you can install k6 from the private deb repo like this:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 379CE192D401AB61
echo "deb https://dl.bintray.com/loadimpact/deb stable main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install k6

And for rpm-based ones like Fedora and CentOS:

wget https://bintray.com/loadimpact/rpm/rpm -O bintray-loadimpact-rpm.repo
sudo mv bintray-loadimpact-rpm.repo /etc/yum.repos.d/
sudo dnf install k6   # use yum instead of dnf for older distros

Docker

docker pull loadimpact/k6

Pre-built binaries & other platforms

If there isn't an official package for your operating system or architecture, or if you don't want to install a custom repository, you can easily grab a pre-built binary from the GitHub Releases page. Once you download and unpack the release, you can optionally copy the k6 binary it contains somewhere in your PATH, so you are able to run k6 from any location on your system.

Build from source

k6 is written in Go, so it's just a single statically-linked executable and very easy to build and distribute. To build from source you need Git and Go (1.12 or newer). Follow these instructions:

  • Run go get github.com/loadimpact/k6 which will:
    • git clone the repo and put the source in $GOPATH/src/github.com/loadimpact/k6
    • build a k6 binary and put it in $GOPATH/bin
  • Make sure you have $GOPATH/bin in your PATH (or copy the k6 binary somewhere in your PATH), so you are able to run k6 from any location.
  • Tada, you can now run k6 using k6 run script.js

Running k6

k6 works with the concept of virtual users (VUs) that execute scripts - they're essentially glorified, parallel while(true) loops. Scripts are written using JavaScript, as ES6 modules, which allows you to break larger tests into smaller and more reusable pieces, making it easy to scale tests across an organization.

Scripts must contain, at the very least, an exported default function - this defines the entry point for your VUs, similar to the main() function in many languages. Let's create a very simple script that makes an HTTP GET request to a test website:

import http from "k6/http";

export default function() {
    let response = http.get("https://test-api.k6.io");
}

The script details and how we can extend and configure it will be explained below, but for now simply save the above snippet as a script.js file somewhere on your system. Assuming that you've installed k6 correctly, on Linux and Mac you can run the saved script by executing k6 run script.js from the same folder. For Windows the command is almost the same - k6.exe run script.js.

If you decide to use the k6 docker image, the command will be slightly different. Instead of passing the script filename to k6, a dash is used to instruct k6 to read the script contents directly via the standard input. This allows us to avoid messing with docker volumes for such a simple single-file script, greatly simplifying the docker command: docker run -i loadimpact/k6 run - <script.js

In some situations it may also be useful to execute remote scripts. You can do that with HTTPS URLs in k6 by importing them in the script via their URL or simply specifying their URL in the CLI command: k6 run github.com/k6io/k6/samples/http_2.js (k6 "knows" a bit about github and cdnjs URLs, so this command is actually shorthand for k6 run raw.githubusercontent.com/k6io/k6/master/samples/http_2.js)

For more information on how to get started running k6, please look at the Running k6 documentation page. If you want to know more about making and measuring HTTP requests with k6, take a look here and here. And for information about the commercial k6 services like distributed cloud execution (the k6 cloud command) or Cloud Results (k6 run -o cloud), you can visit k6.io or view the cloud documentation.

Overview

In this section we'll briefly explore some of the basic concepts and principles of how k6 works. If you want to learn more in-depth about the k6 scripting API, results output, and features, you can visit the full k6 documentation website at k6.io/docs.

Init and VU stages

Earlier, in the Running k6 section, we mentioned that scripts must contain a default function. "Why not just run my script normally, from top to bottom", you might ask - the answer is: we do, but code inside and outside your default function can do different things.

Each virtual user (VU) executes your script in a completely separate JavaScript runtime, parallel to all of the other running VUs. Code inside the default function is called VU code, and is run over and over, for as long as the test is running. Code outside of the default function is called init code, and is run only once per VU, when that VU is initialized.

VU code can make HTTP and websocket requests, emit metrics, and generally do everything you'd expect a load test to do, with a few important exceptions - you can't load anything from your local filesystem, or import any other modules. This all has to be done from the init code.
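For example, in this minimal sketch (the payload file name is illustrative), the import and the open() call have to live in the init code, while the request itself is VU code:

import http from "k6/http";

// init code: runs once per VU; file reads are only allowed here
const payload = open("payload.json");

export default function() {
    // VU code: runs over and over for as long as the test is running
    http.post("https://test-api.k6.io", payload);
}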

There are two reasons for this. The first is, of course, performance. If you read a file from disk on every single script iteration, it'd be needlessly slow. Even if you cache the contents of the file and any imported modules, it'd mean the first run of the script would be much slower than all the others. Worse yet, if you have a script that imports or loads files based on conditions that can only be known at runtime, you'd get slow iterations thrown in every time you load something new. That's also why we initialize all needed VUs before any of them starts the actual load test by executing the default function.

But there's another, more interesting reason. By forcing all imports and file reads into the init context, we design for distributed execution. We know which files will be needed, so we distribute only those files to each node in the cluster. We know which modules will be imported, so we can bundle them up in an archive from the get-go. And, tying into the performance point above, the other nodes don't even need writable file systems - everything can be kept in-memory.
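As an aside, this bundling is already exposed as a CLI command: k6 archive script.js packs a script and everything it imports or opens into a single archive.tar file, which can later be executed with k6 run archive.tar.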

This means that if your script works when it's executed with k6 run locally, it should also work without any modifications in a distributed execution environment like k6 cloud (that executes it in the commercial k6 cloud infrastructure) or, in the future, with the planned k6 native cluster execution mode.

Script execution

For simplicity, unlike many other JavaScript runtimes, a lot of the operations in k6 are synchronous. That means that, for example, the let response = http.get("https://test-api.k6.io") call from the Running k6 example script will block the VU execution until the HTTP request is completed, save the response information in the response variable, and only then continue executing the rest of the script - no callbacks and promises needed.

This simplification works because k6 isn't just a single JavaScript runtime. Instead each VU independently executes the supplied script in its own separate and semi-isolated JavaScript runtime, in parallel to all of the other running VUs. This allows us to fully utilize modern multi-core hardware, while at the same time lowering the script complexity by having mostly synchronous functions. Where it makes sense, we also have in-VU parallelization as well, for example the http.batch() function (which allows a single VU to make multiple simultaneous HTTP requests like a browser/real user would) or the websocket support.
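For instance, a minimal http.batch() sketch (the page URLs are illustrative):

import http from "k6/http";

export default function() {
    // a single VU issues all three requests simultaneously and
    // blocks until every one of them has completed
    let responses = http.batch([
        "https://test.k6.io/",
        "https://test.k6.io/news.php",
        "https://test.k6.io/contacts.php",
    ]);
}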

As an added bonus, there's an actual sleep() function! And you can also use the VU separation to reuse data between iterations (i.e. executions of the default function) in the same VU:

// Init code: runs once per VU, so the variable persists between iterations
var vuLocalCounter = 0;
export default function() {
    // each iteration of the same VU sees the value from the previous one
    vuLocalCounter++;
}

Script options and execution control

So we've mentioned VUs and iterations, but how are those things controlled?

By default, if nothing is specified, k6 runs a script with only 1 VU and for 1 iteration only. Useful for debugging, but usually not very useful when doing load testing. For actual script execution in a load test, k6 offers a lot of flexibility - there are a few different configuration mechanisms you can use to specify script options, and several different options to control the number of VUs and how long your script will be executed, among other things.

Let's say that you want to specify the number of VUs in your script. In order of precedence, you can use any of the following configuration mechanisms to do it:

  1. Command-line flags: k6 run --vus 10 script.js, or via the short -u flag syntax if we want to save 3 keystrokes (k6 run -u 10 script.js).

  2. Environment variables: setting K6_VUS=20 before you run the script with k6. Especially useful when using the docker k6 image and when running in containerized environments like Kubernetes.

  3. Your script can export an options object that k6 reads and uses to set any options you want; for example, setting VUs would look like this:

    export let options = {
        vus: 30,
    };
    export default function() { /* ... do whatever ... */ }

    This functionality is very useful, because here you have access to key-value environment variables that k6 exposes to the script via the global __ENV object, so you can use the full power of JavaScript to do things like:

    export let options = __ENV.script_scenario == "staging"
        ? { /* first set of options */ }
        : { /* second set of options */ };

    Or any variation of the above, like importing different config files, etc. Also, having most of the script configuration right next to the script code makes k6 scripts very easily version-controllable.

  4. A global JSON config. By default k6 looks for it in the config home folder of the current user (OS-dependent, for Linux/BSDs k6 will look for config.json inside of ${HOME}/.config/loadimpact/k6), though that can be modified with the --config/-c CLI flag. It uses the same option keys as the exported options from the script file, so we can set the VUs by having config.json contain { "vus": 1 }. Although it rarely makes sense to set the number of VUs there, the global config file is much more useful for storing things like login credentials for the different outputs, as used by the k6 login subcommand...

Configuration mechanisms do have an order of precedence. As presented, options at the top of the list can override configuration mechanisms specified lower in the list. If we used all of the above examples for setting the number of VUs, we would end up with 10 VUs, since the CLI flags have the highest priority. Also, please note that not all of the available options are configurable via every mechanism - some options may be impractical to specify via simple strings (so no CLI/environment variables), while other, rarely-used ones may be intentionally excluded from the CLI flags to avoid clutter - refer to the options docs for more information.

As shown above, there are several ways to configure the number of simultaneous virtual users k6 will launch. There are also different ways to specify how long those virtual users will be running. For simple tests you can:

  • Set the test duration with the --duration/-d CLI flag (or the K6_DURATION environment variable and the duration script/JSON option). For ease of use, duration is specified with human-readable values like 1h30m10s - k6 run --duration 30s script.js, k6 cloud -d 15m10s script.js, export K6_DURATION=1h, etc. If set to 0, k6 won't stop executing the script unless the user manually stops it.
  • Set the total number of script iterations with the --iterations/-i CLI flag (or the K6_ITERATIONS environment variable and the iterations script/JSON option). k6 will stop executing the script whenever the total number of iterations (i.e. the number of iterations across all VUs) reaches the specified number. So if you have k6 run --iterations 10 --vus 10 script.js, then each VU would make only a single iteration.

For more complex cases, you can specify execution stages: a list of duration/target-VU pairs. These pairs instruct k6 to linearly ramp up, ramp down, or stay at the number of VUs specified, over the period specified. Execution stages can be set via the stages script/JSON option as an array of { duration: ..., target: ... } pairs, or with the --stage/-s CLI flags and the K6_STAGES environment variable via the duration:target,duration:target... syntax.

For example, the following options would have k6 linearly ramping up from 5 to 10 VUs over a period of 3 minutes (k6 starts with the vus number of VUs, or 1 by default), staying flat at 10 VUs for 5 minutes, ramping up from 10 to 35 VUs over the next 10 minutes, and finally ramping down to 0 VUs over another 90 seconds.

export let options = {
    vus: 5,
    stages: [
        { duration: "3m", target: 10 },
        { duration: "5m", target: 10 },
        { duration: "10m", target: 35 },
        { duration: "1m30s", target: 0 },
    ]
};

Alternatively, you can use the CLI flags --vus 5 --stage 3m:10,5m:10,10m:35,1m30s:0 or set the environment variables K6_VUS=5 K6_STAGES="3m:10,5m:10,10m:35,1m30s:0" to achieve the same results.

For a complete list of supported k6 options, refer to the documentation at k6.io/docs/using-k6/options.

Hint: besides accessing the supplied environment variables through the __ENV global object briefly mentioned above, you can also use the execution context variables __VU and __ITER to access the current VU number and the number of the current iteration for that VU. These variables can be very useful if you want VUs to execute different scripts/scenarios or to aid in generating different data per VU, for example:

http.post("https://some.example.website/signup", {username: `testuser${__VU}@testsite.com`, /* ... */})

For even more complex scenarios, you can use the k6 REST API and the k6 status, k6 scale, k6 pause, k6 resume CLI commands to manually control a running k6 test. For cloud-based tests, executed on our managed infrastructure via the k6 cloud command, you can also specify the VU distribution percentages for different load zones when executing load tests, giving you scalable and geographically-distributed test execution.
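For example, from a second terminal while a test is running locally, k6 pause will pause all VUs and k6 resume will start them back up again, both talking to the running test through the k6 REST API.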

Setup and teardown

Beyond the init code and the required VU stage (i.e. the default function), which is code run for each VU, k6 also supports test wide setup and teardown stages, like many other testing frameworks and tools. The setup and teardown functions, like the default function, need to be exported. But unlike the default function, setup and teardown are only called once for a test - setup() is called at the beginning of the test, after the init stage but before the VU stage (default function), and teardown() is called at the end of a test, after the last VU iteration (default function) has finished executing. This is also supported in the distributed cloud execution mode via k6 cloud.

export function setup() {
    return {v: 1};
}

export default function(data) {
    console.log(JSON.stringify(data));
}

export function teardown(data) {
    if (data.v != 1) {
        throw new Error("incorrect data: " + JSON.stringify(data));
    }
}

A copy of whatever data setup() returns will be passed as the first argument to each iteration of the default function and to teardown() at the end of the test. For more information and examples, refer to the k6 docs here.

Metrics, tags and groups

By default k6 measures and collects a lot of metrics about the things your scripts do - the duration of different script iterations, how much data was sent and received, how many HTTP requests were made, the duration of those HTTP requests, and even how long the TLS handshake of a particular HTTPS request took. To see a summary of these built-in metrics in the output, you can run a simple k6 test, e.g. k6 run github.com/k6io/k6/samples/http_get.js. More information about the different built-in metrics collected by k6 (and how some of them can be accessed from inside of the scripts) is available in the docs here.

k6 also allows the creation of user-defined Counter, Gauge, Rate and Trend metrics. They can be used to more precisely track and measure a custom subset of the things that k6 measures by default, or anything else the user wants, for example tracking non-timing information that is returned from the remote system. You can find more information about them here and a description of their APIs here.
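As an illustrative sketch (the response header name here is an assumption, not something every server returns), a custom Trend metric tracking a server-reported processing time could look like this:

import http from "k6/http";
import { Trend } from "k6/metrics";

// init code: custom metrics have to be created here
let serverTiming = new Trend("server_processing_time", true);

export default function() {
    let response = http.get("https://test-api.k6.io");
    // assumes the server reports its processing time in a response header
    let reported = parseFloat(response.headers["X-Processing-Time"]);
    if (!isNaN(reported)) {
        serverTiming.add(reported);
    }
}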

Every measurement metric in k6 comes with a set of key-value tags attached. Some of them are automatically added by k6 - for example a particular http_req_duration metric may have the method=GET, status=200, url=https://loadimpact.com, etc. system tags attached to it. Others can be added by users - globally for a test run via the tags option, or individually as a parameter in a specific HTTP request, websocket connection, userMetric.Add() call, etc.

These tags don't show in the simple summary at the end of a k6 test (unless you reference them in a threshold), but they are invaluable for filtering and investigating k6 test results if you use any of the outputs mentioned below. k6 also supports simple hierarchical groups for easier code and result organization. You can find more information about groups and system and user-defined tags here.
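A minimal sketch of groups (the page URLs are illustrative):

import { group } from "k6";
import http from "k6/http";

export default function() {
    // metrics emitted inside a group are tagged with the group's name
    group("front page", function() {
        http.get("https://test.k6.io/");
    });
    group("news page", function() {
        http.get("https://test.k6.io/news.php");
    });
}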

Checks and thresholds

Checks and thresholds are some of the k6 features that make it very easy to use load tests like unit and functional tests and integrate them in a CI (continuous integration) workflow.

Checks are similar to asserts, but differ in that they don't halt execution. Instead they just store the result of the check, pass or fail, and let the script execution continue. Checks are great for codifying assertions relating to HTTP requests/responses. For example, making sure an HTTP response code is 2xx.
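For instance, a minimal check (a fuller, combined example follows below):

import { check } from "k6";
import http from "k6/http";

export default function() {
    let response = http.get("https://test.k6.io/");
    // check() records the result but never aborts the iteration
    check(response, {
        "status is 200": (r) => r.status === 200,
    });
}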

Thresholds are global pass/fail criteria that can be used to verify whether any result metric is within a specified range. They can also reference a subset of a metric's values, based on the metric tags used. Thresholds are specified in the options section of a k6 script. If they are exceeded during a test run, k6 will exit with a nonzero code on test completion, and it can also optionally abort the test early. This makes thresholds ideally suited as checks in a CI workflow!

import http from "k6/http";
import { check, group, sleep } from "k6";
import { Rate } from "k6/metrics";

// A custom metric to track failure rates
var failureRate = new Rate("check_failure_rate");

// Options
export let options = {
    stages: [
        // Linearly ramp up from 1 to 50 VUs during first minute
        { target: 50, duration: "1m" },
        // Hold at 50 VUs for the next 3 minutes and 30 seconds
        { target: 50, duration: "3m30s" },
        // Linearly ramp down from 50 to 0 VUs over the last 30 seconds
        { target: 0, duration: "30s" }
        // Total execution time will be ~5 minutes
    ],
    thresholds: {
        // We want the 95th percentile of all HTTP request durations to be less than 500ms
        "http_req_duration": ["p(95)<500"],
        // Requests with the staticAsset tag should finish even faster
        "http_req_duration{staticAsset:yes}": ["p(99)<250"],
        // Thresholds based on the custom metric we defined and use to track application failures
        "check_failure_rate": [
            // Global failure rate should be less than 1%
            "rate<0.01",
            // Abort the test early if it climbs over 5%
            { threshold: "rate<=0.05", abortOnFail: true },
        ],
    },
};

// Main function
export default function () {
    let response = http.get("https://test.k6.io/");

    // check() returns false if any of the specified conditions fail
    let checkRes = check(response, {
        "http2 is used": (r) => r.proto === "HTTP/2.0",
        "status is 200": (r) => r.status === 200,
        "content is present": (r) => r.body.indexOf("Collection of simple web-pages suitable for load testing.") !== -1,
    });

    // We reverse the check() result since we want to count the failures
    failureRate.add(!checkRes);

    // Load static assets, all requests
    group("Static Assets", function () {
        // Execute multiple requests in parallel like a browser, to fetch some static resources
        let resps = http.batch([
            ["GET", "https://test.k6.io/static/css/site.css", null, { tags: { staticAsset: "yes" } }],
            ["GET", "https://test.k6.io/static/favicon.ico", null, { tags: { staticAsset: "yes" } }],
            ["GET", "https://test.k6.io/static/js/prisms.js", null, { tags: { staticAsset: "yes" } }],
        ]);
        // Combine check() call with failure tracking
        failureRate.add(!check(resps, {
            "status is 200": (r) => r[0].status === 200 && r[1].status === 200,
            "reused connection": (r) => r[0].timings.connecting == 0,
        }));
    });

    sleep(Math.random() * 3 + 2); // Random sleep between 2s and 5s
}

You can save the above example as a local file and run it, or you can also run it directly from the github copy of the file with the k6 run github.com/k6io/k6/samples/thresholds_readme_example.js command. You can find (and contribute!) more k6 script examples here: https://github.com/k6io/k6/tree/master/samples

Outputs

To make full use of your test results and to be able to fully explore and understand them, k6 can output the raw metrics to an external repository of your choice.

The simplest output option, meant primarily for debugging, is to send the JSON-encoded metrics to a file or to stdout. Other output options are sending the metrics to an InfluxDB instance, an Apache Kafka queue, or even to the k6 cloud. This allows you to run your load tests locally or behind a company firewall, early in the development process or as a part of a CI suite, while at the same time being able to store their results in the k6 cloud, where you can compare and analyse them. You can find more information about the available outputs here and about k6 Cloud Results here and here.
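For example, assuming a local InfluxDB instance and file/database names of your choosing:

k6 run --out json=my_test_result.json script.js
k6 run --out influxdb=http://localhost:8086/myk6db script.js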

Modules and JavaScript compatibility

k6 comes with several built-in modules for things like making (and measuring) HTTP requests and websocket connections, parsing HTML, reading files, calculating hashes, setting up checks and thresholds, tracking custom metrics, and others.

You can, of course, also write your own ES6 modules and import them in your scripts, potentially reusing code across an organization. The situation with importing JavaScript libraries is a bit more complicated. You can potentially use some JS libraries in k6, even ones intended for Node.js if you use browserify, though if they depend on network/OS-related APIs, they likely won't work. You can find more details and instructions about writing or importing JS modules here.
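A minimal sketch of a local module (the file names are illustrative):

// helpers.js - a plain ES6 module
export function randomIntBetween(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

// script.js - imports it via a relative path
import { randomIntBetween } from "./helpers.js";
import { sleep } from "k6";

export default function() {
    sleep(randomIntBetween(1, 5));
}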

Support

To get help about usage, report bugs, suggest features, and discuss k6 with other users see SUPPORT.md.

Contributing

If you want to contribute or help with the development of k6, start by reading CONTRIBUTING.md. Before you start coding, especially when it comes to big changes and features, it might be a good idea to first discuss your plans and implementation details with the k6 maintainers. You can do this either in the github issue for the problem you're solving (create one if it doesn't exist) or in the #developers channel on Slack.

Comments
  • Feature/jquery Extend jquery api for #21


    re: #21

    About the return types - most of the current api returns a plain ol' go type, but the attr() method returns a goja type. Which would you prefer I use?

  • Proposal for adding a new standard  `http_req_failures` metric


    I'm opening a new issue instead of building on top of https://github.com/loadimpact/k6/issues/1311 to start the discussion with a clean slate. I believe this proposal addresses most if not all issues brought up in the original proposal.

    Before discussing the implementation details and effort required to build this feature, let's discuss why we want this feature at all.

    1. Users want to know if their http requests fail or not, without adding boilerplate code.
    2. Users want to see the http timing metrics for successful requests.
    3. Users want to see the ratio between successful and failed requests (0.1% failures is often acceptable)
    4. Some users may want to know the absolute number of failed requests (15 requests failed)

    Basic requirements

    This basic script must:

    1. show the number of failed requests,
    2. show response time for successful requests,
    3. the test must exit with a non-0 exit code because the threshold is crossed.

    import { sleep } from 'k6'
    import http from 'k6/http'
    
    export let options = {
      thresholds: {
        // test fails if more than 10% of HTTP requests fail. 
        // default http_success_hook is used to determine successful status codes.
        http_reqs_failure: ['rate < 0.1'],
      }
    };
    
    export default function() {
     let response_success  = http.get("https://httpbin.test.k6.io/status/200");
     let response_failure1 = http.get("https://httpbin.test.k6.io/status/503");
     let response_failure2 = http.get("https://httpbin.test.k6.io/status/503");
    }
    

    Discussion about the metric type for http_reqs_failure

    There are two possible metric types for the new http_reqs_failure. Rate or Counter. Both types have their advantages, and it's not entirely clear which one is better for this use case.

    Advantages of Rate:

    • ability to easily configure thresholds http_reqs_failure: rate<0.1
    • shows rate...

    Advantages of Counter:

    • shows failures per second
    • shows the total number of failed requests (might be useful to some users)
    • is consistent with http_reqs metric.

    The end-of-test summary for this test should look similar to this:

    Output when using Counter metric

    http_reqs..................: 3      3.076683/s
    http_reqs_failure..........: 2      2.036683/s
    http_reqs_success..........: 1      1.036683/s
    

    Output when using Rate metric

    http_reqs..................: 3      3.134432/s
    http_reqs_failure..........: 33.33% ✓ 1 ✗ 2
    http_reqs_success..........: 66.66% ✓ 2 ✗ 1
    

    Neither Rate nor Counter covers all possible use-cases. I think Rate is preferable over Counter.

    If we added count to the Rate metric, the output could possibly look similar to this

    http_reqs..................: 3      0.136/s    
    http_reqs_failure..........: 33.33% ✓ 2 (66.66%) 3.136/s   ✗ 1 (33.33%) 0.136/s    
    http_reqs_success..........: 66.66% ✓ 1 (33.33%) 3.136/s   ✗ 2 (66.66%) 0.136/s    
    

    Note, I'm not really advocating for this, just pointing out that neither Rate nor Counter cover all use cases.

    Why do we have failures and successes as separate metrics?

    The obvious critique of the above suggestion is to say that http_reqs_success is unnecessary because it's the opposite of http_reqs_failure. This is true, but some outputs don't allow defining logic, and therefore it's not possible to show http_reqs_success unless k6 itself produces it.

    Once the metric filtering feature is developed, I would suggest we exclude http_reqs_success by default.

    http_req_duration and other http_req_* metrics.

    The core requirement of this feature is to be able to see http_req_duration for successful requests only.

    There are two possibilities here:

    1. Don't emit http_req_duration for failures
    2. Tag http_req_duration with failed:true|false tag and display filtered values.

    Let's discuss both approaches

    Don't emit http Trend metrics for failed requests

    In this approach, http_req_duration and other http metrics won't include failed requests towards the metric's internal state.

    Users who want to track error timings can define custom metrics like this:

    import http from 'k6/http';
    import { Trend } from 'k6/metrics';

    // custom metrics must be created in init code
    let http_4xx = new Trend('http_4xx');

    export default function() {
      let res = http.get('http://test.k6.io');
      if (res.status >= 400 && res.status <= 499) {
        http_4xx.add(res.timings.duration);
      }
    }
    

    Tag http_req_duration with failed:true|false tag and display filtered values.

    With this approach, we would emit the http_req_duration and friends as we used to, but we will tag values with failed:true|false

    The default-end-of-test summary would display only successful requests like this:

    http_req_duration{failed:false}...: avg=132.76ms min=127.19ms med=132.76ms max=138.33ms p(90)=137.22ms 
    http_reqs........................: 3      3.076683/s
    http_reqs_failure................: 2      2.036683/s
    http_reqs_success................: 1      1.036683/s
    iteration_duration...............: avg=932.4ms  min=932.4ms  med=932.4ms  max=932.4ms  p(90)=932.4ms  p(95)=932.4ms 
    iterations.......................: 1      1.038341/s
    

    The most problematic issue with this approach is that some outputs don't ingest tags and won't be able to display http_req_duration for successful requests only.

    Examples of http_req_duration and http_reqs_failure k6 should produce with this approach.

    {
      "type": "Point",
      "metric": "http_req_duration",
      "data": {
        "time": "2021-01-22T12:40:08.277031832+01:00",
        "value": 0.032868,
        "tags": {
          "error_code": "1501",
          "group": "",
          "method": "GET",
          "name": "https://httpbin.test.k6.io/status/501",
          "proto": "HTTP/2.0",
          "scenario": "default",
          "status": "501",
          "tls_version": "tls1.2",
          "url": "https://httpbin.test.k6.io/status/501",
          "failed": true,  // see reasoning below in the "cloud support" section
        }
      }
    }
    
    {
      "type": "Point",
      "metric": "http_reqs_failure",
      "data": {
        "time": "2021-01-22T12:40:08.277031832+01:00",
        "value": 1,
        "tags": {
          // same tags as for http_req_duration
          "error_code": "1501",
          "status": "501",
          "name": "https://httpbin.test.k6.io/status/501",
          "group": "",
          "method": "GET",
          "proto": "HTTP/2.0",
          "scenario": "default",
          "tls_version": "tls1.2",
        }
      }
    }
    
    

    Cloud support

    There are additional considerations for the k6 cloud support.

    Performance insights

    The "performance insights" feature and web app currently assume that successful requests have status 200-399.

    • The "URL table" displays statuses >=400 with red background
    • There are several performance alerts that show up when there are sufficiently many requests with statuses >=400

    Both approaches listed above solve these problems, although in different ways.

    In approach 1, we would only get timings for successful requests and therefore we won't show timings for failed requests. We will still get tagged http_reqs_failure metrics and therefore will be able to show errors without timings. We would probably redesign this UI to separate failures from successes in a better way.

    In approach 2, we would get a new standard tag called failed to all http_req_* metrics, including http_req_li_all. Timings would still be shown for errors (although probably not useful), but the background of the row would be determined by the failed tag.


      {
        "type": "Points",
        "metric": "http_req_li_all",
        "data": {
          "time": "1604394111659104",
          "type": "counter",
          "tags": {
            "tls_version": "tls1.2",
            "group": "",
            "scenario": "default",
            "url": "https://test.k6.io",
            "name": "https://test.k6.io",
            "method": "GET",
            "status": "200",
            "proto": "HTTP/2.0",
            "failed": false
          },
          "values": {
            "http_req_waiting": 123.88875,
            "http_req_receiving": 0.215741,
            "http_req_duration": 124.419757,
            "http_req_blocked": 432.893314,
            "http_req_connecting": 122.01245,
            "http_req_tls_handshaking": 278.872101,
            "http_req_sending": 0.315266,
            "http_reqs": 10,
            "http_reqs_success": 10
          }
        }
      },
      {
        "type": "Points",
        "metric": "http_req_li_all",
        "data": {
          "time": "1604394111659104",
          "type": "counter",
          "tags": {
            "tls_version": "tls1.2",
            "group": "",
            "scenario": "default",
            "url": "https://test.k6.io",
            "name": "https://test.k6.io",
            "method": "GET",
            "status": "200",
            "proto": "HTTP/2.0",
            "failed": true
          },
          "values": {
            "http_req_waiting": 23.88875,
            "http_req_receiving": 0.215741,
            "http_req_duration": 24.419757,
            "http_req_blocked": 32.893314,
            "http_req_connecting": 22.01245,
            "http_req_tls_handshaking": 78.872101,
            "http_req_sending": 0.315266,
            "http_reqs": 10,
            "http_reqs_failure": 10
          }
        }
      }
    

    (alternative) Why don't we skip the new metrics and purely rely on failed tag?

    It's possible to extend the existing http_reqs counter metric by tagging requests with failed and changing the metric type to Rate. If that's done, the following script would be possible:

    import { sleep } from 'k6'
    import http from 'k6/http'
    
    export let options = {
      thresholds: {
        'http_reqs{failed:true}': ['rate < 0.1'],
      }
    };
    
    export default function() {
     let response_success  = http.get("https://httpbin.test.k6.io/status/200");
     let response_failure1 = http.get("https://httpbin.test.k6.io/status/503");
     let response_failure2 = http.get("https://httpbin.test.k6.io/status/503");
    }
    

    Possible end-of-test summary:

    http_reqs.................: 3      ✓ 1 ✗ 2     3.134432/s
    http_reqs{failed:true}....: 33.33% ✓ 1 ✗ 2     1.034432/s
    http_reqs{failed:false}...: 66.66% ✓ 2 ✗ 1     2.104432/s
    

    Possible problems with this approach are:

    • some outputs don't ingest tags. It would not be possible to use this functionality with statsd
    • http_reqs would be backwards incompatible unless we combine rate and counter into a new metric type.
    • some "special case" handling would be required for displaying.

    I'm (currently) against this alternative.

    Defining failure

    To determine if a request has failed or succeeded, a JavaScript hook function is invoked after the request, but before the metrics emission. This proposal builds on https://github.com/loadimpact/k6/issues/1716

    import http from 'k6/http'
    
    export let options = {
      hooks: {
        http: {
          successHook: 'myHttpSuccessHook',
        }
      }
    };
    
    export function myHttpSuccessHook(response){
      // returns boolean true|false
      // adds failed = true|false tag
      // decides if the metric goes into http_req_duration.
      // default implementation: return response.status >= 200 && response.status <= 399
      return response.status >= 200 && response.status <= 204
    }
    
    export default function() {
     let response_success  = http.get("https://httpbin.test.k6.io/status/200");
     let response_failure1 = http.get("https://httpbin.test.k6.io/status/503");
     let response_failure2 = http.get("https://httpbin.test.k6.io/status/503");
    }
    
    

    per-request handling

    Sometimes users need to handle special cases.

    Alternative 1 - handle inside the hook

    import http from 'k6/http'
    
    export let options = {
      hooks: {
        http: {
          successHook: 'myHttpSuccessHook',
        }
      }
    };
    
    export function myHttpSuccessHook(response){
      if(response.request.name === 'https://httpbin.test.k6.io/status/403'){
        return response.status === 403 // expecting 403 for this specific URL
      }
      return response.status >= 200 && response.status <= 204
    }
    
    export default function() {
     let response_success  = http.get("https://httpbin.test.k6.io/status/200");
     let response_failure1 = http.get("https://httpbin.test.k6.io/status/503");
     let response_failure2 = http.get("https://httpbin.test.k6.io/status/503");
    }
    
    

    Alternative 2 - override the hook per request

    import { sleep } from 'k6'
    import { Rate } from 'k6/metrics'
    import http from 'k6/http'
    
    export default function() {
     let response_failure1 = http.get("https://httpbin.test.k6.io/status/503");
     let response_failure2 = http.get("https://httpbin.test.k6.io/status/503");
     let response_success  = http.get("https://httpbin.test.k6.io/status/403", {
      successHook: (r) => r.status===403
     });
    }
    
    

    What about the redirect chains?

    This is up for discussion.

    import { sleep } from 'k6'
    import { Rate } from 'k6/metrics'
    import http from 'k6/http'
    
    export default function() {
     let response_success  = http.get("http://httpbin.test.k6.io/absolute-redirect/5", {
      successHook: (r) => r.status===200
     });
    }
    

    Should the hook fire on every request in the chain or only on the last one?

    Concerns

    1. the performance penalty of executing the js hook function on every request.
    2. the performance penalty of adding more data to http_req_li_all and other http_req_* metrics.
  • Support for gRPC protocol


    Will happily take suggestions for what a gRPC JS API should look like. I guess https://grpc.io/docs/tutorials/basic/node.html and https://github.com/grpc/grpc-node would be good starting points.


    [Added on May 29th, 2019]

    To enable testing of more parts of modern software systems, microservices to be more specific, k6 needs to support gRPC. The implementation should support both simple RPC (request/response calls, "part/iteration 1") as well as streaming (client-side, server-side and bi-directional, "part/iteration 2").

    Authentication

    The implementation should implement the following authentication mechanisms:

    Transport:

    • Insecure: no authentication
    • TLS: make use of the APIs implemented as part of the PKI crypto issue for loading keys.

    Per-RPC:

    • Google OAuth2

    Request/Response RPC

    The expected JS API would look something like this for the request/response RPC part:

    import grpc from "k6/grpc";
    
    let proto = open("EchoService.proto");
    let client = new grpc.Client(proto, {server: "localhost:50051", credentials: {...}});
    
    export default function() {
        let res = client.ping({ message: "Hello gRPC World!" });
        check(res, {
            "is successful": (r) => r.status === grpc.STATUS_OK
        });
    }
    

    Additional changes

    This would require the following changes to k6:

    • Add support for a new protocol "grpc" as a JS module "k6/grpc"
    • Add support for metrics tracking of key gRPC statistics:
      • DNS lookup, Connect, TLS Handshake, Latency & Response time: Trend
        • Tags:
          • "service": the name of the service called
          • "method": the service method called
          • "rpc_type": the type of RPC call (one of "simple", "server-streaming", "client-streaming" or "bi-streaming")
          • "status": the response status code of a RPC call
          • "request_message": the name of the request message
          • "response_message": the name of the response message
      • Requests/second: Rate
      • Active streams: Gauge
    • The response object of an RPC call should contain the following properties:
      • "service": the name of the service called
      • "method": the service method called
      • "rpc_type": the type of RPC call (one of "simple", "server-streaming", "client-streaming" or "bi-streaming")
      • "status": the response status code of a RPC call
      • "message": the response message as a JS Object
      • "request.message": the request message as a JS Object

    Resources

    • gRPC docs
    • Go library for reflecting Protocol Buffer files (.proto) and creating bindings dynamically: https://github.com/jhump/protoreflect
  • Controlling requests per second


    Hi guys, I think it would be very useful to run tests with a desired RPS (requests per second) target. In general it could look like this: k6 run --vus 10 --duration 100s --rps 200 test_script.js

    One way to add this functionality is to dynamically change the wait time between scenario executions for each VU.

    For example, say one VU needs to execute 20 requests every second. We need to calculate how long the VU should wait between starting executions of the group with its list of URLs to achieve 20 requests per second. To do this, we calculate the wait dynamically, paying attention to the average response time of that group's URLs.

    So, if the average response time for the set of group URLs is 20ms, and the VU needs to achieve 20 rps, the wait time is calculated like this:

    1000ms / 20 requests = 50ms (the wait between group executions if response time were 0), then 50ms - 20ms (the average group response time), giving a ~30ms pause for the VU between group-scenario executions to achieve 20 rps.
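
    A rough sketch of that per-VU pacing idea (targetRPS and the URL are illustrative):

    import http from "k6/http";
    import { sleep } from "k6";

    // illustrative target: 20 requests per second, per VU
    const targetRPS = 20;

    export default function() {
        let start = Date.now();
        http.get("https://test.k6.io/");
        // subtract the measured response time from the per-request budget
        let elapsed = (Date.now() - start) / 1000;
        sleep(Math.max(0, 1 / targetRPS - elapsed));
    }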

    Well, maybe I'm doing it the wrong way, but I think more godlike people will have some ideas for this feature. Let's start the conversation!

  • HAR converter WIP


    Closes #248

    I'm still working on this; I am testing different HAR export tools and some more scenarios. The objective of this PR is to validate whether I'm on the right track and whether the code meets the requirements. I would be grateful to receive guidelines on what things I need to add, change or remove.

    The HAR struct is based on the google/martian one, but I have added some missing fields like Log.Browser, Log.Pages, Log.Comment, etc.

    There are some helper functions to build the k6 script parts (BuildK6Request, BuildK6RequestObject, BuildK6HeadersValues, BuildK6CookiesValues...). They use HAR objects (ex: Har.Header) directly as parameters to avoid extra load, but if you need support for converting any other formats to a k6 script, these helper functions can easily be converted to generic ones.

    The HAR entries (HTTP requests) are ordered as the specification recommends (http://w3c.github.io/web-performance/specs/HAR/Overview.html#sec-object-types-entries), although all tested HAR entries from exported files are already ordered.

    I have detected that configured firewalls/antimalware/adblockers block some requests, and these exported HAR entries/requests have empty responses; they are ignored for status code checks.

    By default the requests are grouped by page and batched in 500ms intervals by their start time (what do you think?); you can change this value with --batch-inclusion-threshold 200. For batch requests I define a []req object with all the k6 request objects, and this object is passed to the http.batch call.

    I like the http.request and http. calls; you can disable batch mode using --batch-inclusion-threshold 0 (maybe it isn't the best way), and the convert command then creates a k6 script with these request types instead of the http.batch ones. Batch mode may be the preferred mode for HAR files, but this option can be interesting for debugging. Should I keep this functionality/mode (no batch mode)?

    The --only and --skip options filter HAR requests. These flags check if the URL.Host contains the given string, so you can do things like --only https:// or --skip :8080, as well as --only domain.com,cdn.domain.com (multiple values work like "OR" conditions: domain.com or cdn.domain.com).

    TODO/Issues:

    • Complete TestBuildK6Request and TestBuildK6RequestObject tests cases

    • Improve command description/help

    • Add example HAR files to samples/har folder, include them in tests

    • The SPDY's colon headers (ex: ":Host") are not valid header requests (https://github.com/golang/net/blob/master/lex/httplex/httplex.go#L201), so BuildK6Headers ignores them.

    • The Firefox (54.0.1) exported HAR file from a multipart post request (example: uploading a file) includes the file content in binary mode. How can I build a k6 request body from that content? Ideally this content would be base64 encoded; should I parse/modify the boundary to a base64 one? (boundaries with content-transfer-encoding != base64 only. https://tools.ietf.org/html/rfc2045#section-6.1 https://golang.org/pkg/mime/multipart/#example_NewReader). The FiddlerToLoadImpact tool (https://github.com/loadimpact/FiddlerToLoadImpact) creates external binary files, which are included in the k6 scripts with bin1 = open("bin1.bin"). What if you supported a base64-encoded body string parameter for the k6 http requests (I mean something like http.post(url, base64EncodedFormData, { base64: true });)? It's simpler to just encode request bodies with non-printable/binary content. The file content is missing in the HAR exported from Chrome 59.

  • k6 gets stuck when executed in ConEmu


    I used the test script in the documentation, ran it with k6 run script.js, and it got stuck. I have to cancel the process externally because Ctrl+C doesn't stop it.

    k6 version: 0.32.0, OS: Windows 10

  • PKI extension to crypto module [bounty: $650]


    We want to extend the k6 crypto module with support for PKI crypto. This will mean adding functionality to generate cryptographically strong random numbers, read/parse x.509 certificates, read/parse PEM encoded keys, signing/verifying and encrypting/decrypting data. We want to support PKCS#1 version 1.5, PKCS#1 version 2 (also referred to as PSS and OAEP), DSA and ECDSA.

    Related issues:

    • https://github.com/loadimpact/k6/issues/637: Use AES/ECB/PKCS5Padding in k6
    • https://github.com/loadimpact/k6/issues/725: expose Crypto.getRandomValues() or nodejs' crypto
    • https://github.com/loadimpact/k6/issues/822: Error on using 'crypto' functions inside browserified NodeJS module

    Requirements

    Besides the user-facing JS APIs detailed below a completed bounty must also include tests and docs.

    Generate cryptographically strong random numbers

    Proposal for JS API

    import { randomBytes } from "k6/crypto";
    
    let rndBytes = randomBytes(numBytes); // returns a byte array
    

    Relevant links:

    • https://golang.org/pkg/crypto/rand/
    • https://nodejs.org/api/crypto.html#crypto_crypto_randombytes_size_callback

    Parsing x.509 encoded certificates

    Proposal for JS API (shorthand):

    import { x509 } from "k6/crypto";
    
    let issuer = x509.getIssuer(open("mycert.crt"));
    let altNames = x509.getAltNames(open("mycert.crt"));
    let subject = x509.getSubject(open("mycert.crt"));
    

    Proposal for JS API (full):

    import { x509 } from "k6/crypto";
    
    let certData = open("mycert.crt");
    let cert = x509.parse(certData);
    

    The Certificate object returned by x509.parse() should be an Object with the following structure:

    { subject:
       { countryName: 'US',
         postalCode: '10010',
         stateOrProvinceName: 'NY',
         localityName: 'New York',
         streetAddress: '902 Broadway, 4th Floor',
         organizationName: 'Nodejitsu',
         organizationalUnitName: 'PremiumSSL Wildcard',
         commonName: '*.nodejitsu.com' },
      issuer:
       { countryName: 'GB',
         stateOrProvinceName: 'Greater Manchester',
         localityName: 'Salford',
         organizationName: 'COMODO CA Limited',
         commonName: 'COMODO High-Assurance Secure Server CA' },
      notBefore: 'Sun Oct 28 2012 20:00:00 GMT-0400 (EDT)',
      notAfter: 'Wed Nov 26 2014 18:59:59 GMT-0500 (EST)',
      altNames: [ '*.nodejitsu.com', 'nodejitsu.com' ],
      signatureAlgorithm: 'sha1WithRSAEncryption',
      fingerPrint: 'E4:7E:24:8E:86:D2:BE:55:C0:4D:41:A1:C2:0E:06:96:56:B9:8E:EC',
      publicKey: {
        algorithm: 'rsaEncryption',
        e: '65537',
        n: '.......' } }
    

    Relevant links:

    • https://golang.org/pkg/crypto/x509/

    Signing/Verifying data (RSA)

    Proposal for JS API (shorthand version):

    import { x509, createSign, createVerify, sign, verify } from "k6/crypto";
    import { pem } from "k6/encoding";
    
    // alternatively this can be called like:
    // x509.parse(open("mycert.crt")).publicKey();
    let pubKey = x509.parsePublicKey(pem.decode(open("mykey.pub")));
    let privKey = x509.parsePrivateKey(pem.decode(open("mykey.key.pem"), "optional password"));
    
    export default function() {
        let data = "...";
    
        // one of "base64", "hex" or "binary" ("binary" being the default).
        let outputEncoding = "hex";
    
        // for PSS you need to specify "type": "pss" and the optional "saltLength": number option, if options is empty or not passed to sign/verify then PKCS#1 v1.5 is used.
        let options = {...};
    
        // Signing a piece of data
        let signature = sign(privKey, "sha256", data, outputEncoding, options);
    
        // Verifying the signature of a piece of data
        if (verify(pubKey, "sha256", data, signature, options)) {
            ...
        }
    }
    

    [LOWER PRIO] Proposal for JS API (full version):

    import { x509, createSign, createVerify } from "k6/crypto";
    import { pem } from "k6/encoding";
    
    // alternatively this can be called like:
    // x509.parse(open("mycert.crt")).publicKey();
    let pubKey = x509.parsePublicKey(pem.decode(open("mykey.pub")));
    let privKey = x509.parsePrivateKey(pem.decode(open("mykey.pem"), "optional password"));
    
    export default function() {
        let data = "...";
    
        // one of "base64", "hex" or "binary" ("binary" being the default).
        let outputEncoding = "hex";
    
        // for PSS you need to specify "type": "pss" and the optional "saltLength": number option, if options is empty or not passed to sign/verify then PKCS#1 v1.5 is used.
        let options = {...};
    
        // Signing a piece of data
        let signer = createSign("sha256", options);
        signer.update(data, [inputEncoding]);
        let signature = signer.sign(privKey, outputEncoding);
    
        // Verifying the signature of a piece of data
        let verifier = createVerify("sha256", options);
        verifier.update(data, [inputEncoding]);
        if (verifier.verify(pubKey, signature)) {
            ...
        }
    }
    

    Relevant links:

    • https://golang.org/pkg/crypto/rsa
    • https://golang.org/pkg/encoding/pem/

    Signing/Verifying data (DSA)

    The API would be the same for sign/verify as for RSA, the type of encryption used would be inferred by the keys used, so by the following lines:

    let pubKey = x509.parsePublicKey(pem.decode(open("mykey.pub")));
    let privKey = x509.parsePrivateKey(pem.decode(open("mykey.pem"), "optional password"));
    

    Relevant links:

    • https://golang.org/pkg/crypto/dsa/

    Signing/Verifying data (ECDSA)

    The API would be the same for sign/verify as for RSA, the type of encryption used would be inferred by the keys used, so by the following lines:

    let pubKey = x509.parsePublicKey(pem.decode(open("mykey.pub")));
    let privKey = x509.parsePrivateKey(pem.decode(open("mykey.pem"), "optional password"));
    

    Relevant links:

    • https://golang.org/pkg/crypto/ecdsa/

    Encrypt/Decrypt data (RSA)

    Proposal for JS API:

    import { x509, encrypt, decrypt } from "k6/crypto";
    import { pem } from "k6/encoding";
    
    // alternatively this can be called like:
    // x509.parse(open("mycert.crt")).publicKey();
    let pubKey = x509.parsePublicKey(pem.decode(open("mykey.pub")));
    let privKey = x509.parsePrivateKey(pem.decode(open("mykey.pem"), "optional password"));
    
    export default function() {
        let data = "...";
    
        // one of "base64", "hex" or "binary" ("binary" being the default).
        let outputEncoding = "hex";
    
        // for OAEP you need to specify "type": "oaep" and the optional "hash": "sha256" (default) and "label": string options, if options is empty or not passed to encrypt/decrypt then PKCS#1 v1.5 is used.
        let options = {...};
    
    // Encrypting a piece of data
    let encrypted = encrypt(pubKey, data, outputEncoding, options);

    // Decrypting the data back into plaintext
    let plaintext = decrypt(privKey, encrypted, options);
    }
    

    Relevant links:

    • https://golang.org/pkg/crypto/rsa
  • Improve execution information in scripts


    Just realized that, with execution segments (#997), we can easily make the __VU constants much more useful when we're executing scripts in the cloud or, in the future, in native k6 distributed execution.

    Currently, the __VU variable will start from 1 in each instance, so for multiple instances there would be duplicate values. With execution segments, we can very easily make each instance start its VU numbers from the exact VU id that it should, so that each __VU value is globally unique, regardless of how many machines we run the load test on. And while I think this should be the default, I can sort of see a use case where we'd like to use the local machine's sequential number, not the global one, so we should probably expose both...

    This ties neatly into another topic we should look into, probably after merging #1007 - exposing more execution information to user scripts. Though, instead of magic __WHATEVER constants, I think we should do it by exposing a nice JS API that queries these things from the k6 engine/execution scheduler/executors/etc. This would change the model from a push-based one (i.e. k6 having to update all __WHATEVER variables every time they're changed), to a pull based one (a function would query the already existing data in a thread-safe way), which is much easier to support and way, way more efficient (i.e. basically no performance overhead if users don't care for the information).

    So, something like this:

    import { getLocalVUNumber, getGlobalVUNumber, getStartTime } from "k6/execution";

    // ...

    let vuNum = getGlobalVUNumber();
    
    

    :arrow_up: is definitely NOT the final API, since it probably makes sense to expose the logical execution-related entities (i.e. "Instance", "VU", "Iteration", "Executor") in some way instead of just having global functions, I'm just using it as an illustration...

    We probably should expose things like:

    • local and global VU numbers
    • iteration numbers:
      • have a method to duplicate __ITER, i.e. "which iteration IN this VU is currently running"
      • but since VUs can be reused, some other interesting numbers would be "iteration number for the VU in the current executor"
      • for the arrival-rate executors, in particular, it might also be useful to expose (and we can easily do it) the global number of that iteration across all instances
    • when did the script execution begin and/or what's the total execution time up until now
    • when did the current executor start and how long has it been running
    • which executor is currently responsible for running the code
    • for executors with stages, which stage are we currently at, maybe even the "progress" percent?
    • execution segment related things (though that probably should be a separate API that deals with data partitioning/streaming/etc.?)
    • for the current instance, the number of active/initialized VUs and the number of completed/interrupted iterations up until now (can't think of a use case for this, besides maybe something in teardown(), but we can easily expose it, since we keep track of it for the CLI interface)

    Probably other things as well. Once we have the initial framework and API, we can add the actual objects/functions one by one, in order of usefulness; it doesn't have to be all at once...
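
    To make the entity-based idea concrete, here is a purely illustrative sketch of what such a pull-based API could look like; every name below is an assumption, not a settled design:

    import exec from "k6/execution"; // hypothetical module name, for illustration only
    
    export default function () {
        // instance-local vs. globally unique VU numbers
        console.log(exec.vu.idInInstance, exec.vu.idInTest);
    
        // iteration numbers at different scopes
        console.log(exec.vu.iterationInScenario, exec.scenario.iterationInTest);
    
        // which executor is running this code, and how far along it is
        console.log(exec.scenario.executor, exec.scenario.progress);
    }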

  • Support for gRPC protocol

    Support for gRPC protocol

    fixes #441

    Initial implementation only supports Unary requests.

    Example JavaScript API:

    import grpc from 'k6/protocols/grpc';
    import { check } from "k6";
    
    const client = grpc.newClient();
    client.load([], "samples/grpc_server/route_guide.proto")
    
    
    export default () => {
        client.connect("localhost:10000", { plaintext: true })
    
        const response = client.invoke("main.RouteGuide/GetFeature", {
            latitude: 410248224,
            longitude: -747127767
        })
    
        check(response, { "status is OK": (r) => r && r.status === grpc.StatusOK });
    
        client.close()
    }
    
  • Plugin support

    Plugin support

    As a user, I would like to be able to implement custom functionality and protocols and expose them in the JavaScript VM, so that I can do the heavy lifting in Go but write tests in JS.

    Feature Description

    As a protocol developer, I'd like to be able to develop a client that talks to my protocol via the JavaScript VM that k6 runs. To do this, I would prefer to be able to write a plugin that doesn't have to live in the main codebase and isn't the responsibility of the k6 team.

    Effectively, a plugin should expose:

    • A preflight function (to start any background services it might need)
    • A postflight function (to shut down any background services it might have started)
    • A map[string]interface{} that gets added to the module mapping so that the plugin's functions can be imported from JavaScript.

    Suggested Solution (optional)

    My MVP proposal would be to use Golang's plugin package. Although it doesn't currently support Windows, Windows users could use Docker to run plugins. Moreover, it would not introduce any new dependencies to k6.

    Plugins would be loaded by passing a -plugin argument when launching k6, which should receive a .so file path, and would be initialized (preflight) once, on test start, and cleaned up (postflight) once, on test finish.

    A plugin struct could look something like:

    type Plugin struct {
    	Name       string
    	Preflight  func() error
    	Postflight func() error
    	Modules    map[string]interface{}
    }
    

    For each plugin file, k6 would run something along the lines of:

    // Omitting error checking for brevity
    for _, path := range plugins {
    	p, _ := plugin.Open(path)
    	sym, _ := p.Lookup("Plugin")
    	// Lookup on an exported variable returns a pointer to it
    	meta := *sym.(*Plugin)
    
    	// Add plugin modules to the JavaScript module registry
    	// Add preflight and postflight functions to a callback list
    }
    

    Tagging @na--, @imiric, and @MStoykov as indicated in Slack.
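
    On the script side, the idea is that entries in the plugin's Modules map become importable like built-in modules. A purely hypothetical sketch (the module name and greet function are made up for illustration; no such plugin exists):

    // Assumes a plugin whose Modules map registered "k6/plugins/hello"
    // exposing a greet() function - both names are invented here.
    import hello from "k6/plugins/hello";
    
    export default function () {
        console.log(hello.greet("world"));
    }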

  • Feature/ecommerce script sample

    Feature/ecommerce script sample

    This script depends on being able to access the redirected URL after a redirect happens (PR#134) and currently uses the res.effective_url name (i.e. this script needs to be updated in case we decide to call the redirected URL res.url instead).

  • use same timestamp in csv results as in json

    use same timestamp in csv results as in json

    Feature Description

    --out csv=results.csv has timestamps with only one-second resolution:

    >>> df = pd.read_csv("results.csv")             
    >>> print(df[['metric_name','timestamp']][:5])  
                    metric_name   timestamp
    0                 http_reqs  1672766933
    1         http_req_duration  1672766933
    2          http_req_blocked  1672766933
    3       http_req_connecting  1672766933
    4  http_req_tls_handshaking  1672766933
    

    while --out json=result.json has microsecond timestamps and time zone data:

    :
    {"metric":"http_req_failed","type":"Point","data":{"time":"2023-01-05T09:45:41.088631+01:00",........
    :
    

    I currently convert the JSON output to CSV to get micro-resolution timestamps, but it takes minutes on long scenarios with >1GB of data, and the JSON is much larger: 1GB of JSON corresponds to roughly 200MB of CSV. A streaming sketch of that conversion is shown below.
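
    As a rough illustration of that workaround, a Node.js script can stream the NDJSON output line by line instead of loading it wholesale (the chosen columns are illustrative):

    // Stream k6's NDJSON output into CSV rows, keeping full-resolution timestamps.
    const fs = require("fs");
    const readline = require("readline");
    
    const rl = readline.createInterface({ input: fs.createReadStream("result.json") });
    
    console.log("metric_name,timestamp,value");
    rl.on("line", (line) => {
        const entry = JSON.parse(line);
        if (entry.type !== "Point") return; // skip metric declaration lines
        console.log(`${entry.metric},${entry.data.time},${entry.data.value}`);
    });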

    Suggested Solution (optional)

    same timestamp in csv as in json output

    Already existing or connected issues / PRs (optional)

    No response

  • Add experimental tracing js module

    Add experimental tracing js module

    Hi folks 👋🏻

    What

    This PR adds a k6/experimental/tracing module to k6. It implements the public API specified and agreed upon during our internal design process. As we talked about internally, it is embedded directly in the repository rather than imported from an xk6 extension repository.

    Using this module, users can transparently add tracing (trace context) headers to their HTTP calls and have k6 emit the used trace ids as part of the output's metadata. Calling the instrumentHTTP function exposed by the module will automatically wrap the imported http module's functions to include tracing information, without requiring any changes to users' scripts. Each request will use a different trace id and have a random span id.

    A lot of the code in this PR is, in fact, an adaptation of what already existed in xk6-distributed-tracing; so kudos @Blinkuu and @dgzlopes for that 🙇🏻

    Design note: this PR implements the logic in Go rather than executing JS directly. The main reason for that is convenience regarding debugging and the ability to set and delete metadata, which I believe is not yet exposed to our JS runtime.

    Scope

    In the spirit of keeping PRs small, this one only implements support for:

    • propagators: w3c, b3, and jaeger
    • instrumentHTTP wraps the delete, get, head, options, post, patch, put, request functions.

    Support for the batch functions shall be added in later iterations, as well as support for sampling control and the baggage headers specification.

    Support required ✋🏻

    In an ideal world, I'd like to add some tests for the instrumentHTTP function. I want to start a test HTTP server and assert that when running a test script using the instrumentHTTP function, the expected headers are received by the server. I'd also like to do the same for the output's metadata.

    The main blocking challenge I've encountered has been to make the init context available in the modulestest.Runtime, so that the require function is available in the context of the test script. My attempts so far haven't been fruitful. If you have ideas or guidance to help me achieve that, I'd be grateful 🙇🏻

    Demo

    // we import the HTTP module in a standard manner
    import http from "k6/http";
    import { check } from "k6";
    import tracing from "k6/experimental/tracing";
    
    // This is the only change users need to make to include tracing context
    // in their requests: instrumentHTTP will ensure that all requests made by
    // the http module are traced. The first argument is a configuration
    // object that can be used to configure the tracer.
    //
    // Currently supported HTTP methods are: get, post, put, patch, head, del,
    // and options.
    tracing.instrumentHTTP({
        propagator: "w3c",
    });
    
    export default () => {
        const params = {
            headers: {
                "X-My-Header": "something",
            },
        };
    
        // this http.get call will automatically include a traceparent header,
        // and the used trace id will be included in the related data points
        // in the output's metadata.
        let res = http.get("http://httpbin.org/get", params);
        check(res, {
            "status is 200": (r) => r.status === 200,
        });
    
        let data = { name: "Bert" };
    
        // this http.post call will likewise include a traceparent header,
        // and the used trace id will be included in the related data points
        // in the output's metadata.
        res = http.post("http://httpbin.org/post", JSON.stringify(data), params);
        check(res, {
            "status is 200": (r) => r.status === 200,
        });
    };
    
    

    Produces the following HTTP requests:

    INFO[0000] Request:
    GET /get HTTP/1.1
    Host: httpbin.org
    User-Agent: k6/0.42.0 (https://k6.io/)
    Traceparent: 00-dc0718b190d7e2d730a6219538e16e47-28cf4f6cd985afbb-01
    X-My-Header: something
    Accept-Encoding: gzip
    
    INFO[0003] Request:
    POST /post HTTP/1.1
    Host: httpbin.org
    User-Agent: k6/0.42.0 (https://k6.io/)
    Content-Length: 15
    Traceparent: 00-dc0718b1a5d7e2d7300e957d6ec9fb8b-ff9f9edcbfa41b91-01
    X-My-Header: something
    Accept-Encoding: gzip
    

    Edit

    • 5th of January 2023: added support for http.request, and added it to the description
  • Have parameter to abort if test doesn't reach target iterations per second

    Have parameter to abort if test doesn't reach target iterations per second

    Feature Description

    In the arrival rate executors, the test continues even if k6 can't reach the target iteration rate. If the duration property is long, the test will go on for a while, using resources to measure something the tester doesn't want to test anyway.

    This is especially true if the testing setup is automated. I can imagine a situation where a commit to the SUT causes the median iteration duration of the test to increase, which in turn means that the pre-allocated number of VUs (preAllocatedVUs) is no longer sufficient.

    Sorry if this has been discussed; I couldn't find anything about it. Or maybe this just doesn't make sense from a testing standpoint. Anyway, at least this issue will create a chance to document the reason why it doesn't make sense.

    Suggested Solution (optional)

    Have an extra property called something like abortWhenTooFewVUS or abortOnInsufficientAllocation. k6 already provides a warning, so I guess the only thing left to do would be to abort with a non-zero exit code if that property is set to true. A sketch of what this could look like in a scenario definition follows.
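
    For illustration, the proposed flag in an arrival-rate scenario might look like this (the existing executor options are real; the abort flag itself is the hypothetical part):

    export const options = {
        scenarios: {
            constant_load: {
                executor: "constant-arrival-rate",
                rate: 100,
                timeUnit: "1s",
                duration: "10m",
                preAllocatedVUs: 50,
                maxVUs: 100,
                // hypothetical flag proposed above - not an actual k6 option:
                abortOnInsufficientAllocation: true,
            },
        },
    };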

    Already existing or connected issues / PRs (optional)

    No response

  • Update template for release notes

    Update template for release notes

    The main goal is to reduce guesswork in both the writer's and the editor's work. Besides that, a standardized structure also makes it easier for readers to scan across multiple release-note pages.

    I based this on the v0.40.0 Release notes, which I thought were particularly successful.

  • Support for encrypted PKCS8 or PKCS12 private keys

    Support for encrypted PKCS8 or PKCS12 private keys

    Feature Description

    Hello, encrypted PKCS8 and PKCS12 keys are the only formats available in my company. When do you think this will become available in k6? Or do you have any other way / library / extension to make it work with k6 already?

    Thanks, Lucas

    Suggested Solution (optional)

    No response

    Already existing or connected issues / PRs (optional)

    No response
