RoadRunner

High-performance PHP application server, load balancer, and process manager written in Go


RoadRunner is an open-source (MIT licensed) high-performance PHP application server, load balancer, and process manager. It supports running as a service with the ability to extend its functionality on a per-project basis.

RoadRunner includes a PSR-7/PSR-17 compatible HTTP and HTTP/2 server and can be used to replace a classic Nginx+FPM setup with much greater performance and flexibility.

Official Website | Documentation

Features:

  • Production-ready
  • PCI DSS compliant
  • PSR-7 HTTP server (file uploads, error handling, static files, hot reload, middlewares, event listeners)
  • HTTPS and HTTP/2 support (including HTTP/2 Push, H2C)
  • Fully customizable server, FastCGI support
  • Flexible environment configuration
  • No external PHP dependencies (64-bit PHP required), drop-in (based on Goridge)
  • Load balancer, process manager and task pipeline
  • Integrated metrics (Prometheus)
  • Workflow engine by Temporal.io
  • Works over TCP, UNIX sockets and standard pipes
  • Automatic worker replacement and safe PHP process destruction
  • Worker create/allocate/destroy timeouts
  • Max jobs per worker
  • Worker lifecycle management (controller)
    • maxMemory (graceful stop)
    • TTL (graceful stop)
    • idleTTL (graceful stop)
    • execTTL (brute, max_execution_time)
  • Payload context and body
  • Protocol, worker and job level error management (including PHP errors)
  • Development Mode
  • Integrations with Symfony, Laravel, Slim, CakePHP, Zend Expressive
  • Application server for Spiral
  • Included in Laravel Octane
  • Automatic reloading on file changes
  • Works on Windows (Unix sockets (AF_UNIX) supported on Windows 10)

Installation:

$ composer require spiral/roadrunner:v2.0 nyholm/psr7
$ ./vendor/bin/rr get-binary

To get the RoadRunner binary, you can use our Docker image: spiralscout/roadrunner:X.X.X (more information about the image and tags can be found here)
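In a multi-stage Docker build, the binary can be copied straight out of that image; a minimal sketch (the tag is a placeholder, and the /usr/bin/rr path should be verified against the tag you use):

```dockerfile
# Copy the RoadRunner binary out of the official image (tag is a placeholder)
COPY --from=spiralscout/roadrunner:X.X.X /usr/bin/rr /usr/local/bin/rr
```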

Configuration is located in the .rr.yaml file (full sample):

rpc:
  listen: tcp://127.0.0.1:6001

server:
  command: "php worker.php"

http:
  address: "0.0.0.0:8080"

logs:
  level: error

Read more in Documentation.

Example Worker:

<?php

use Spiral\RoadRunner;
use Nyholm\Psr7;

include "vendor/autoload.php";

$worker = RoadRunner\Worker::create();
$psrFactory = new Psr7\Factory\Psr17Factory();

$worker = new RoadRunner\Http\PSR7Worker($worker, $psrFactory, $psrFactory, $psrFactory);

while ($req = $worker->waitRequest()) {
    try {
        $rsp = new Psr7\Response();
        $rsp->getBody()->write('Hello world!');

        $worker->respond($rsp);
    } catch (\Throwable $e) {
        $worker->getWorker()->error((string)$e);
    }
}

Run:

To run the application server:

$ ./rr serve

License:

The MIT License (MIT). Please see LICENSE for more information. Maintained by Spiral Scout.

Owner
Spiral Scout
Spiral Scout is a full-service digital agency, providing design, development and online marketing services to businesses around San Francisco and beyond.
Comments
  • Symfony: Default SESSION does not work (no cookie is set in Response)


    Hi!

    I have this code:

    <?php
    /**
     * Created by PhpStorm.
     * User: richard
     * Date: 22.06.18
     * Time: 11:59
     */
    
    require __DIR__ . '/vendor/autoload.php';
    
    use App\Kernel;
    use Symfony\Bridge\PsrHttpMessage\Factory\DiactorosFactory;
    use Symfony\Bridge\PsrHttpMessage\Factory\HttpFoundationFactory;
    use Symfony\Component\Debug\Debug;
    use Symfony\Component\Dotenv\Dotenv;
    use Symfony\Component\HttpFoundation\Request;
    
    if (getenv('APP_ENV') === false) {
        (new Dotenv())->load(__DIR__.'/.env');
    }
    $env = getenv('APP_ENV') ?: 'dev';
    $debug = getenv('APP_DEBUG') ? ((bool) getenv('APP_DEBUG')) : !in_array($env, ['prod', 'k8s']);
    
    if ($debug) {
        umask(0000);
        Debug::enable();
    }
    if ($trustedProxies = $_SERVER['TRUSTED_PROXIES'] ?? false) {
        Request::setTrustedProxies(explode(',', $trustedProxies), Request::HEADER_X_FORWARDED_ALL ^ Request::HEADER_X_FORWARDED_HOST);
    }
    if ($trustedHosts = $_SERVER['TRUSTED_HOSTS'] ?? false) {
        Request::setTrustedHosts(explode(',', $trustedHosts));
    }
    $kernel = new Kernel($env, $debug);
    $httpFoundationFactory = new HttpFoundationFactory();
    
    
    
    
    $relay = new Spiral\Goridge\StreamRelay(STDIN, STDOUT);
    $psr7 = new Spiral\RoadRunner\PSR7Client(new Spiral\RoadRunner\Worker($relay));
    
    
    while ($req = $psr7->acceptRequest()) {
        try {
            $request = $httpFoundationFactory->createRequest($req);
            $response = $kernel->handle($request);
    
            $psr7factory = new DiactorosFactory();
            $psr7response = $psr7factory->createResponse($response);
            $psr7->respond($psr7response);
    
            $kernel->terminate($request, $response);
        } catch (\Throwable $e) {
            $psr7->getWorker()->error((string)$e);
        }
    }
    

    It's slightly modified Symfony code to handle env variables :) (Also, it's stunningly fast!)

    But I don't get any cookies returned, so I can see my login is accepted and I'm redirected to the dashboard, but there I get a permission denied and a redirect back to login (because no cookies have been set)...

    Any tips on how to troubleshoot, or what might be wrong?

  • [πŸ› BUG]: RR [`v2.5.7`] doesn't construct new workers after call resetting command

    [πŸ› BUG]: RR [`v2.5.7`] doesn't construct new workers after call resetting command

    No duplicates 🥲.

    • [X] I have searched for a similar issue in our bug tracker and didn't find any solutions.

    What happened?

    Sometimes I face an issue where RR doesn't start workers after resetting.

    I use this command to reload the application after deploying a new version:

    php -r '
    require_once "/var/www/..../vendor/autoload.php";
    $rpc = \Spiral\Goridge\RPC\RPC::create("tcp://127.0.0.1:6001");
    $rpc->call("resetter.Reset", "http");
    ' 2> /dev/null && echo "roadrunner restarted"
    
    

    And this works fine. In the RoadRunner logs I can find entries like these:

    {"level":"info","ts":1654876751.0767014,"logger":"http","msg":"HTTP plugin got restart request. Restarting..."}
    {"level":"debug","ts":1654876751.482103,"logger":"server","msg":"worker constructed","pid":18535}
    {"level":"debug","ts":1654876751.7076323,"logger":"server","msg":"worker constructed","pid":18539}
    {"level":"debug","ts":1654876751.92825,"logger":"server","msg":"worker constructed","pid":18543}
    {"level":"debug","ts":1654876752.1424391,"logger":"server","msg":"worker constructed","pid":18547}
    {"level":"debug","ts":1654876752.3567128,"logger":"server","msg":"worker constructed","pid":18551}
    {"level":"debug","ts":1654876752.5836997,"logger":"server","msg":"worker constructed","pid":18555}
    {"level":"debug","ts":1654876752.7970486,"logger":"server","msg":"worker constructed","pid":18569}
    {"level":"debug","ts":1654876753.012394,"logger":"server","msg":"worker constructed","pid":18573}
    {"level":"debug","ts":1654876753.2264977,"logger":"server","msg":"worker constructed","pid":18579}
    {"level":"debug","ts":1654876753.439048,"logger":"server","msg":"worker constructed","pid":18583}
    {"level":"debug","ts":1654876753.6546476,"logger":"server","msg":"worker constructed","pid":18587}
    {"level":"debug","ts":1654876753.8703601,"logger":"server","msg":"worker constructed","pid":18591}
    {"level":"info","ts":1654876753.8704066,"logger":"http","msg":"HTTP workers Pool successfully restarted"}
    {"level":"info","ts":1654876753.8704116,"logger":"http","msg":"HTTP handler listeners successfully re-added"}
    {"level":"info","ts":1654876753.8704147,"logger":"http","msg":"HTTP plugin successfully restarted"}
    

    But sometimes when I try to reload workers I run into unexpected behavior where RR doesn't construct new workers:

    {"level":"info","ts":1654865797.447119,"logger":"http","msg":"HTTP plugin got restart request. Restarting..."}
    {"level":"debug","ts":1654865947.0437615,"logger":"rpc","msg":"Started RPC service","address":"tcp://127.0.0.1:6001","plugins":["informer","resetter"]}
    {"level":"debug","ts":1654865947.3511188,"logger":"server","msg":"worker constructed","pid":26076}
    {"level":"debug","ts":1654865947.6003292,"logger":"server","msg":"worker constructed","pid":26090}
    {"level":"debug","ts":1654865947.8625834,"logger":"server","msg":"worker constructed","pid":26094}
    {"level":"debug","ts":1654865948.083429,"logger":"server","msg":"worker constructed","pid":26098}
    {"level":"debug","ts":1654865948.2959368,"logger":"server","msg":"worker constructed","pid":26102}
    {"level":"debug","ts":1654865948.5081518,"logger":"server","msg":"worker constructed","pid":26106}
    {"level":"debug","ts":1654865948.7213166,"logger":"server","msg":"worker constructed","pid":26110}
    {"level":"debug","ts":1654865948.9349105,"logger":"server","msg":"worker constructed","pid":26114}
    {"level":"debug","ts":1654865949.1486824,"logger":"server","msg":"worker constructed","pid":26118}
    {"level":"debug","ts":1654865949.3617551,"logger":"server","msg":"worker constructed","pid":26122}
    {"level":"debug","ts":1654865949.5777884,"logger":"server","msg":"worker constructed","pid":26127}
    {"level":"debug","ts":1654865949.7925146,"logger":"server","msg":"worker constructed","pid":26131}
    

    The construction of new workers happened only after restarting the service.

    And that's all. It seems like the reload process was stopped by something. After that, I need to restart the service completely.

    Also, I have these entries in syslog that appeared after restarting the service, but I don't know whether they relate to the issue or not:

    rr[24396]: panic: runtime error: invalid memory address or nil pointer dereference
    rr[24396]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0xd77dbd]
    rr[24396]: goroutine 55954476 [running]:
    rr[24396]: github.com/spiral/roadrunner-plugins/v2/http.(*Plugin).workers(...)
    rr[24396]: #011github.com/spiral/roadrunner-plugins/[email protected]/http/plugin.go:358
    rr[24396]: github.com/spiral/roadrunner-plugins/v2/http.(*Plugin).Workers(0x1529d40)
    rr[24396]: #011github.com/spiral/roadrunner-plugins/[email protected]/http/plugin.go:342 +0xbd
    rr[24396]: github.com/spiral/roadrunner-plugins/v2/informer.(*Plugin).Workers(...)
    rr[24396]: #011github.com/spiral/roadrunner-plugins/[email protected]/informer/plugin.go:38
    rr[24396]: github.com/spiral/roadrunner-plugins/v2/informer.(*rpc).Workers(0x2, {0xc000d5f490, 0x1}, 0xc000da9b78)
    rr[24396]: #011github.com/spiral/roadrunner-plugins/[email protected]/informer/rpc.go:31 +0x5e
    rr[24396]: reflect.Value.call({0xc0007b7500, 0xc00000f1b0, 0x13}, {0x17222dd, 0x4}, {0xc000befef8, 0x3, 0x3})
    rr[24396]: #011reflect/value.go:543 +0x814
    rr[24396]: reflect.Value.Call({0xc0007b7500, 0xc00000f1b0, 0x0}, {0xc000db46f8, 0x3, 0x3})
    rr[24396]: #011reflect/value.go:339 +0xc5
    rr[24396]: net/rpc.(*service).call(0xc0007a3700, 0xc000db47b0, 0xd79dd9, 0xc000d5ede0, 0xc0007ba300, 0xc000774360, {0x14c2860, 0xc000f168e0, 0x15ef4e0}, {0x156fa00, ...}, ...)
    rr[24396]: #011net/rpc/server.go:377 +0x239
    rr[24396]: created by net/rpc.(*Server).ServeCodec
    rr[24396]: #011net/rpc/server.go:474 +0x405
    systemd[1]: rr.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
    systemd[1]: rr.service: Failed with result 'exit-code'.
    systemd[1]: Stopped High-performance PHP application server.
    systemd[1]: Started High-performance PHP application server.
    

    PHP 7.4 (opcache_cli enabled, without storing in files), Ubuntu 18.04 x86_64, rr version 2.5.7 (build time: 2021-11-13T16:43:25+0000, go1.17.2)

    server:
      command: 'php /var/www/...../worker.php'
      relay: 'pipes'
    
    rpc:
      listen: tcp://127.0.0.1:6001
    
    http:
      address: 10.0.6.50:80
      pool:
        max_jobs: 50000
        num_workers: 12
    
    logs:
      mode: production
      level: debug
      file_logger_options:
        log_output: /var/log/rr/access.log
        max_size: 100
        max_age: 48
        compress: true
    
    

    Maybe I'm doing something wrong when I try to reload the workers? Thank you for any help.

    Version

    2.5.7

    Relevant log output

    No response

  • [πŸ› BUG]: Omit the bugfix version for the config version check

    [πŸ› BUG]: Omit the bugfix version for the config version check

    No duplicates 🥲.

    • [X] I have searched for a similar issue in our bug tracker and didn't find any solutions.

    What happened?

    config_plugin_init: RR version is older than configuration version, RR version: %!s(func() string=0x972b20), configuration version: 2.7.3 
    
    version: "2.7.3"
    
    rpc:
      listen: tcp://:6001
    
    server:
      command: "php /app/index.php"
      relay: pipes
    
    http:
      address: "0.0.0.0:3001"
    #  middleware: []
      pool:
        num_workers: 1
        debug: true
        max_jobs: 1
    

    Version

    2.7.3

    Relevant log output

    No response

  • Laravel - sessions don't work (cookies problem)


    Steps to reproduce

    • composer create-project --prefer-dist laravel/laravel laravel57
    • cd laravel57
    • edit env file - database connection
    • php artisan make:auth
    • php artisan migrate
    • follow these steps: https://github.com/spiral/roadrunner/wiki/Laravel-Framework
    • rr -v -d
    • Open a browser, go to http://0.0.0.0:8080/register, fill out the form, submit, and see the 419 ERROR

    Environment

    • PHP 7.3
    • Laravel 5.7

    How can I fix this? Thanks

  • Too many opened tcp connections on low load


    Recently I did some stress testing of my application (which runs on RR). I used artillery (https://artillery.io) for this. When I first ran the tests for 60 seconds at 10 requests/sec everything was fine, but many TCP connections were opened by the RR process. Over a longer run (3 minutes), errors started to happen:

    2020/10/15 11:43:45 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 10ms
    2020/10/15 11:43:45 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 20ms
    2020/10/15 11:43:45 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 40ms
    2020/10/15 11:43:45 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 80ms
    2020/10/15 11:43:45 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 160ms
    2020/10/15 11:43:46 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 320ms
    2020/10/15 11:43:46 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 640ms
    2020/10/15 11:43:47 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:48 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:49 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:50 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:51 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:52 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:53 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:54 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:55 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:56 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:57 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:58 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    

    lsof -a -p $(pidof rr) shows 30 worker pipes, some other connections, and loads of TCP connections like ubuntu2004.localdomain:http->_gateway:58032. The only temporary solution I've found is increasing the open files limit, but I think this is not the root of the problem. Maybe these connections are not closed immediately after the response, and I don't know how to fix that.
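    A quick way to inspect the descriptor limits with standard Linux tooling (the systemd override at the end is an assumption about how rr is being run):

    ```shell
    # Soft limit on open file descriptors for the current shell
    ulimit -n

    # Effective limits of the running rr process (pidof rr assumed to resolve)
    # grep 'Max open files' "/proc/$(pidof rr)/limits"

    # If rr runs under systemd, the limit can be raised in the unit file:
    # [Service]
    # LimitNOFILE=65536
    ```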

  • Symfony worker (for inclusion in the wiki)


    Hi, thanks for this very interesting project!

    Here is a worker to serve apps based on the default Symfony skeleton:

    <?php
    // worker.php
    // Install the following mandatory packages:
    // composer req spiral/roadrunner symfony/psr-http-message-bridge
    
    ini_set('display_errors', 'stderr');
    
    use App\Kernel;
    use Spiral\Goridge\StreamRelay;
    use Spiral\RoadRunner\PSR7Client;
    use Spiral\RoadRunner\Worker;
    use Symfony\Bridge\PsrHttpMessage\Factory\DiactorosFactory;
    use Symfony\Bridge\PsrHttpMessage\Factory\HttpFoundationFactory;
    use Symfony\Component\Debug\Debug;
    use Symfony\Component\Dotenv\Dotenv;
    use Symfony\Component\HttpFoundation\Request;
    
    require 'vendor/autoload.php';
    
    // The check is to ensure we don't use .env in production
    if (!isset($_SERVER['APP_ENV']) && !isset($_ENV['APP_ENV'])) {
        if (!class_exists(Dotenv::class)) {
            throw new \RuntimeException('APP_ENV environment variable is not defined. You need to define environment variables for configuration or add "symfony/dotenv" as a Composer dependency to load variables from a .env file.');
        }
        (new Dotenv())->load(__DIR__.'/.env');
    }
    
    $env = $_SERVER['APP_ENV'] ?? $_ENV['APP_ENV'] ?? 'dev';
    $debug = (bool) ($_SERVER['APP_DEBUG'] ?? $_ENV['APP_DEBUG'] ?? ('prod' !== $env));
    
    if ($debug) {
        umask(0000);
    
        Debug::enable();
    }
    
    if ($trustedProxies = $_SERVER['TRUSTED_PROXIES'] ?? $_ENV['TRUSTED_PROXIES'] ?? false) {
        Request::setTrustedProxies(explode(',', $trustedProxies), Request::HEADER_X_FORWARDED_ALL ^ Request::HEADER_X_FORWARDED_HOST);
    }
    
    if ($trustedHosts = $_SERVER['TRUSTED_HOSTS'] ?? $_ENV['TRUSTED_HOSTS'] ?? false) {
        Request::setTrustedHosts(explode(',', $trustedHosts));
    }
    
    $kernel = new Kernel($env, $debug);
    $relay = new StreamRelay(STDIN, STDOUT);
    $psr7 = new PSR7Client(new Worker($relay));
    $httpFoundationFactory = new HttpFoundationFactory();
    $diactorosFactory = new DiactorosFactory();
    
    while ($req = $psr7->acceptRequest()) {
        try {
            $request = $httpFoundationFactory->createRequest($req);
            $response = $kernel->handle($request);
            $psr7->respond($diactorosFactory->createResponse($response));
            $kernel->terminate($request, $response);
            $kernel->reboot(null);
        } catch (\Throwable $e) {
            $psr7->getWorker()->error((string)$e);
        }
    }
    

    And the corresponding .rr.yaml file:

    http:
      address: 0.0.0.0:8080
      workers:
        command: "php test-sf/worker.php"
        pool:
          numWorkers: 4
    
    static:
      dir:   "test-sf/public"
      forbid: [".php", ".htaccess"]
    

    Would you mind including it in the wiki?

  • [BUG] Workers not restarting after stop


    We have a local rr build to run our services with the following config file:

    rpc:
      listen: tcp://127.0.0.1:6001
    
    server:
      # Worker starting command, with any required arguments.
      #
      # This option is required.
      command: "php ./vendor/bin/rr-worker start --refresh-app --relay-dsn tcp://127.0.0.1:6001"
    
      # User name (not UID) for the worker processes. An empty value means to use the RR process user.
      #
      # Default: ""
      user: ""
    
      # Group name (not GID) for the worker processes. An empty value means to use the RR process user.
      #
      # Default: ""
      group: ""
    
      # Worker relay can be: "pipes", TCP (eg.: tcp://127.0.0.1:6001), or socket (eg.: unix:///var/run/rr.sock).
      #
      # Default: "pipes"
      relay: tcp://127.0.0.1:6001
    
      # Timeout for relay connection establishing (only for socket and TCP port relay).
      #
      # Default: 60s
      relay_timeout: 60s
    
    logs:
      # default
      mode: development
      level: debug
      encoding: console
      output: stdout
      err_output: stdout
      channels:
        http:
          mode: development
          level: debug
          encoding: console
          output: stdout
        server:
          mode: development
          level: debug
          encoding: console
          output: stdout
        rpc:
          mode: development
          level: debug
          encoding: console
          output: stdout
    
    http:
      address: "0.0.0.0:80"
      # middlewares for the http plugin, order matters
      middleware: ["static", "gzip", "headers"]
      # uploads
      uploads:
        forbid: [".php", ".exe", ".bat"]
      trusted_subnets:
        [
            "10.0.0.0/8",
            "127.0.0.0/8",
            "172.16.0.0/12",
            "192.168.0.0/16",
            "::1/128",
            "fc00::/7",
            "fe80::/10",
        ]
      # headers (middleware)
      headers:
        cors:
          allowed_origin: "*"
          allowed_headers: "*"
          allowed_methods: "GET,POST,PUT,PATCH,DELETE"
          allow_credentials: true
          exposed_headers: "Cache-Control,Content-Language,Content-Type,Expires,Last-Modified,Pragma"
          max_age: 600
      # http static (middleware)
      static:
        dir: "public"
        forbid: [".php"]
      pool:
        # default - num of logical CPUs
        num_workers: 4
        # default 0 - no limit
        max_jobs: 1
        # default 1 minute
        allocate_timeout: 60s
        # default 1 minute
        destroy_timeout: 60s
        # supervisor used to control http workers
        supervisor:
          # watch_tick defines how often to check the state of the workers (seconds)
          watch_tick: 1s
          # ttl defines maximum time worker is allowed to live (seconds)
          ttl: 0
          # idle_ttl defines maximum duration worker can spend in idle mode after first use. Disabled when 0 (seconds)
          idle_ttl: 10s
          # exec_ttl defines maximum lifetime per job (seconds)
          exec_ttl: 10s
          # max_worker_memory limits memory usage per worker (MB)
          max_worker_memory: 100
      ssl:
        # host and port separated by semicolon (default :443)
        address: :443
        redirect: false
        # ssl cert
        cert: /ssl-cert/self-signed.crt
        # ssl private key
        key: /ssl-cert/self-signed.key
      # HTTP service provides HTTP2 transport
      http2:
        h2c: false
        max_concurrent_streams: 128
      # Automatically detect PHP file changes and reload connected services (docs:
      # https://roadrunner.dev/docs/beep-beep-reload). Drop this section for this feature disabling.
      reload:
        # Sync interval.
        #
        # Default: "1s"
        interval: 1s
    
        # Global patterns to sync.
        #
        # Default: [".php"]
        patterns: [ ".php" ]
    
        # List of included for sync services (this is a map, where key name is a plugin name).
        #
        # Default: <empty map>
        services:
          server:
            # Directories to sync. If recursive is set to true, recursive sync will be applied only to the directories in
            # "dirs" section. Dot (.) means "current working directory".
            #
            # Default: []
            dirs: [ "." ]
    
            # Recursive search for file patterns to add.
            #
            # Default: false
            recursive: true
    
            # Ignored folders.
            #
            # Default: []
            ignore: [ "vendor" ]
    
            # Service specific file pattens to sync.
            #
            # Default: []
            patterns: [ ".php", ".go", ".md" ]
          http:
            # Directories to sync. If recursive is set to true, recursive sync will be applied only to the directories in
            # "dirs" section. Dot (.) means "current working directory".
            #
            # Default: []
            dirs: [ "." ]
    
            # Recursive search for file patterns to add.
            #
            # Default: false
            recursive: true
    
            # Ignored folders.
            #
            # Default: []
            ignore: [ "vendor" ]
    
            # Service specific file pattens to sync.
            #
            # Default: []
            patterns: [ ".php", ".go", ".md", ".js", ".css", ".json" ]
    

    Actually, it runs Laravel-bridged workers. Since Laravel Nova is not compatible with async mode (it spawns a lot of DB connections and stores some settings in static vars outside the app container), we decided to set max_jobs: 1 until the problem is solved. I expected to see this happen: explanation

    For now, I receive this error in the browser:

    1 error occurred:
    	* supervised_exec_with_context: Timeout: context deadline exceeded
    

    Errortrace, Backtrace or Panictrace

    gb_admin | 2021-04-01T10:25:27.127Z     WARN    server  server/plugin.go:208    no free workers in pool {"error": "static_pool_exec_with_context: NoFreeWorkers:\n\tworker_watcher_get_free_worker: no free workers in the container, timeout exceed"}
    gb_admin | github.com/spiral/roadrunner/v2/plugins/server.(*Plugin).collectEvents
    gb_admin |      github.com/spiral/roadrunner/[email protected]/plugins/server/plugin.go:208
    gb_admin | github.com/spiral/roadrunner/v2/pkg/events.(*HandlerImpl).Push
    gb_admin |      github.com/spiral/roadrunner/[email protected]/pkg/events/general.go:37
    gb_admin | github.com/spiral/roadrunner/v2/pkg/pool.(*StaticPool).getWorker
    gb_admin |      github.com/spiral/roadrunner/[email protected]/pkg/pool/static_pool.go:230
    gb_admin | github.com/spiral/roadrunner/v2/pkg/pool.(*StaticPool).execWithTTL
    gb_admin |      github.com/spiral/roadrunner/[email protected]/pkg/pool/static_pool.go:175
    gb_admin | github.com/spiral/roadrunner/v2/pkg/pool.(*supervised).Exec.func1
    gb_admin |      github.com/spiral/roadrunner/[email protected]/pkg/pool/supervisor_pool.go:99
    
  • [RR2, QUESTION] Does RoadRunner respect X-Forwarded-Proto and X-Forwarded-Port?


    I created this issue, but maybe the problem is deeper. I used nginx-proxy, which allows running apps locally in Docker over https, and it works without any issue. Then I tried to use RoadRunner and the relevant Symfony bundle. The thing is that Spiral\RoadRunner\Http\Request::$uri contains the URL with the http scheme instead of https, and as such Symfony generates links with the http scheme instead of https, which leads to further problems.

    Is there something I'm missing? How can I configure RoadRunner to honor the forwarded https connection while serving http itself?
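    One thing worth checking (an assumption about the setup, not a confirmed fix): as far as I understand, RR 2 only honors X-Forwarded-* headers when the request comes from a trusted remote address, configured via the http plugin's trusted_subnets option, so the proxy's Docker network would need to be listed there:

    ```yaml
    http:
      address: "0.0.0.0:8080"
      # networks whose X-Forwarded-* headers RR should trust;
      # the subnet below is a placeholder for the nginx-proxy Docker network
      trusted_subnets: ["172.16.0.0/12"]
    ```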

  • function call is_uploaded_file is not valid


    The Symfony and Laravel UploadedFile classes consider all uploads invalid due to the is_uploaded_file check.

    vendor/symfony/http-foundation/File/UploadedFile.php

    public function isValid()
    {
        $isOk = UPLOAD_ERR_OK === $this->error;
    
        return $this->test ? $isOk : $isOk && is_uploaded_file($this->getPathname());
    }
    

    Swoole solution: https://github.com/swoole/swoole-src/pull/407

    My current workaround: I subclassed both HttpFoundationFactory and UploadedFile to get around this issue.
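    For reference, a minimal sketch of that workaround: Symfony's UploadedFile constructor takes a final $test argument that makes isValid() skip the is_uploaded_file() check. The createUploadedFile() signature below follows symfony/psr-http-message-bridge, but check it against the version you actually use:

    ```php
    <?php

    use Psr\Http\Message\UploadedFileInterface;
    use Symfony\Bridge\PsrHttpMessage\Factory\HttpFoundationFactory;
    use Symfony\Component\HttpFoundation\File\UploadedFile;

    class RoadRunnerHttpFoundationFactory extends HttpFoundationFactory
    {
        protected function createUploadedFile(UploadedFileInterface $psrUploadedFile, string $temporaryPath): UploadedFile
        {
            return new UploadedFile(
                $temporaryPath,
                (string) $psrUploadedFile->getClientFilename(),
                $psrUploadedFile->getClientMediaType(),
                $psrUploadedFile->getError(),
                true // $test: skip the is_uploaded_file() check
            );
        }
    }
    ```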

    Let me know if this issue is going to be solved here (upstream) or should I send a pull request for my workaround to https://github.com/UPDG/roadrunner-laravel.

    Thanks

  • [💡 FEATURE REQUEST]: Add OpenTelemetry API support


    Plugin

    Server

    I have an idea!

    Please add OpenTelemetry support. With OpenTelemetry (https://opentelemetry.io/), you don't have to add support for each vendor separately: NewRelic, Datadog, AppInsight, etc.

    EDIT (@rustatian): Update middleware:

    • [x] gzip
    • [x] headers
    • [x] static
    • [x] sendfile
    • [x] cache
  • [BUG] RR2 stop passing requests to workers


    We are upgrading from RR1 to RR2.

    After deployment everything works, but after some time (a few hours) RoadRunner stops responding.

    I expected to see this happen: RR keeps processing requests indefinitely.

    Instead, this happened: RR stops passing requests to workers.

    The port is still open and listening for requests:

    # curl --max-time 10 -vvv 127.0.0.1:8080
    *   Trying 127.0.0.1:8080...
    * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
    > GET / HTTP/1.1
    > Host: 127.0.0.1:8080
    > User-Agent: curl/7.74.0
    > Accept: */*
    > 
    * Operation timed out after 10000 milliseconds with 0 bytes received
    * Closing connection 0
    curl: (28) Operation timed out after 10000 milliseconds with 0 bytes received
    

    Workers are sitting idle with no execs, and I can list them:

    # /usr/local/bin/rr -c '/etc/roadrunner/.rr.yaml' workers
    Workers of [http]:
    +---------+-----------+---------+---------+---------+--------------------+
    |   PID   |  STATUS   |  EXECS  | MEMORY  |  CPU%   |      CREATED       |
    +---------+-----------+---------+---------+---------+--------------------+
    |    2379 | ready     |       0 | 28 MB   |    0.01 | 4 minutes ago      |
    |    2380 | ready     |       0 | 28 MB   |    0.01 | 4 minutes ago      |
    +---------+-----------+---------+---------+---------+--------------------+
    

    They are rotated as they reach the TTL but live past the idle TTL, and calling reset just hangs indefinitely; I can only see the rotating dots:

    Resetting plugin: [http]  ●∙∙
    

    Also, while restarting I am not even getting the `HTTP plugin got restart request. Restarting...` info log.

    The version of RR used: 2.5.3. I used the binary from the Docker image spiralscout/roadrunner:2.5.3 in php:8.0.12-cli.

    My .rr.yaml configuration is:

    rpc:
      enable: true
      listen: unix:///etc/roadrunner/rr.sock
    
    server:
      command: "php -d opcache.enable_cli=1 /var/www/html/app/worker.php"
      relay: "pipes"
    
    http:
      address: :8080
    
      pool:
        num_workers: 2
    
        supervisor:
          watch_tick: 1s # check every 1s
    
          # Maximal worker memory usage in megabytes (soft limit). Zero means no limit.
          max_worker_memory: 150 # 150MB
          ttl: 300s # maximum time to live for the worker (soft)
          idle_ttl: 60s # maximum allowed amount of time worker can spend in idle before being removed (for weak db connections, soft)
          exec_ttl: 360s # max_execution_time (brutal)
    
    endure:
      log_level: info
    
    logs:
      mode: production
      level: info
    
    metrics:
      address: :8081
    

    Errortrace, Backtrace or Panictrace: there are no errors in the logs.

  • [πŸ› BUG]: Delete all items from the priority queue on the pipeline destroy command

    [πŸ› BUG]: Delete all items from the priority queue on the pipeline destroy command

    No duplicates πŸ₯².

    • [X] I have searched for a similar issue in our bug tracker and didn't find any solutions.

    What happened?

    When the user calls rpc.Destroy for the pipeline, some jobs might already be in the priority queue. Since the priority queue is global for all pipelines, jobs from the destroyed pipeline might reach the worker. RR won't be able to ACK/NACK these JOBS, but they might sometimes break user logic inside the worker.

    We should delete all associated JOBS from the PQ and wait for msgInFlight == 0 to be sure that there are no JOBS left in the PQ or currently being processed by the PHP worker.

    Version (rr --version)

    <= 2.12.1

    How to reproduce the issue?

    1. Start RR.
    2. Use a simple JOBS worker with a sleep(1).
    3. Push a lot of JOBS and call rpc.Destroy with the pipeline name simultaneously.

    1. [ ] AMQP
    2. [ ] SQS
    3. [ ] Beanstalk
    4. [ ] NATS
    5. [ ] Kafka
    6. [ ] in-memory
    7. [ ] localdb
  • [πŸ’‘ FEATURE REQUEST]: Option to limit the concurrency of the JOBS drivers

    [πŸ’‘ FEATURE REQUEST]: Option to limit the concurrency of the JOBS drivers

    Plugin

    JOBS

    I have an idea!

    A new option that can limit the parallelism per pipeline. For example, with an SQS FIFO queue you might currently receive three messages at the same time, and the workers might serve them in a random order (e.g. 2->1->3). For cases where the FIFO order must be preserved, we need a new option controlling how many messages may be consumed at once. For example, `parallelism: 1` would mean we can have only 1 message (JOB) in RR's internal queue before the ACK, so the SQS FIFO order is preserved.

    `parallelism: N` would mean we can have N messages simultaneously in the internal priority queue.

    EDIT:

    1. [x] AMQP -> supports this option, no changes are needed.
    2. [x] SQS -> doesn't support, add the prefetch configuration option.
    3. [ ] Beanstalk -> doesn't support, add the prefetch configuration option.
    4. [ ] NATS -> doesn't support, add the prefetch configuration option.
    5. [ ] Kafka -> not sure, need to investigate.
    6. [x] in-memory -> update the prefetch configuration option.
    7. [ ] localdb -> update the prefetch configuration option.
  • [πŸ’‘ FEATURE REQUEST]: Ability to change logs format

    [πŸ’‘ FEATURE REQUEST]: Ability to change logs format

    Plugin

    Logger

    I have an idea!

    Hi @rustatian,

    Every time I start RoadRunner, I see something like this in the console:

    2022-11-24T21:39:48.318+0400    ERROR   service         wait    {"error": "signal: interrupt"}
    

    It would be great to be able to change the log format, like this:

    logger:
         format: "H:i:s  %level_name%  %message% %context%"
    
  • [πŸ’‘ FEATURE REQUEST]: Healthchecks for the JOBS plugin drivers (`amqp`, `sqs`, etc.)

    [πŸ’‘ FEATURE REQUEST]: Healthchecks for the JOBS plugin drivers (`amqp`, `sqs`, etc.)

    Plugin

    JOBS

    I have an idea!

    Currently, there is no way to know the status of the underlying connection for the JOBS plugin drivers (e.g. amqp, beanstalk, sqs, etc.). If RR doesn't succeed in reconnecting to the queue server, there is only an error message in the logs. The user should either monitor for this message, or ... there is no second option.

    I propose implementing the Status plugin interfaces for the JOBS drivers to check the status per pipeline.

    For example, http://127.0.0.1:2222/health?plugin=jobs&pipeline=foo, where 127.0.0.1:2222 is the status plugin endpoint, plugin=jobs is the name of the plugin (as usual), and pipeline is the pipeline that should be checked.

    1. [x] AMQP
    2. [ ] SQS
    3. [ ] Beanstalk
    4. [ ] NATS
    5. [ ] Kafka
    6. [ ] in-memory
    7. [ ] localdb
  • [πŸ“– DOCS]: Datadog integration documentation and examples

    [πŸ“– DOCS]: Datadog integration documentation and examples

    Plugin

    No response

    I have an idea!

    We use Datadog and are looking at using RoadRunner in combination with Laravel Octane. It would be really amazing if there were some documentation available about how to integrate RoadRunner and Datadog (I saw an OpenTelemetry entry in the changelog, but I am unsure how to integrate with that).

Similar projects:

  • Cat Balancer - a line-based load balancer for netcat (nc) (Jul 6, 2022)
  • Kiwi-balancer - a balancer acting as a gateway between the clients and the server (Feb 11, 2022)
  • VC5 - a distributed Layer 2 Direct Server Return (L2DSR) load balancer for Linux using XDP/eBPF (Dec 22, 2022)
  • Goridge - a high-performance PHP-to-Golang IPC/RPC bridge over native PHP sockets and Go net/rpc (Dec 28, 2022)
  • farely - a load balancer supporting multiple LB strategies written in Go (Dec 21, 2022)
  • HTTP Load Balancer - a lightweight HTTP response-time-based load balancer written in Go (Feb 22, 2022)
  • pluto - a high-performance, highly stable and available, easy-to-use gateway (Sep 19, 2021)
  • gobetween - a modern & minimalistic load balancer and reverse-proxy for the ☁️ Cloud era (Dec 25, 2022)
  • BFE - a modern layer 7 load balancer from Baidu (Dec 30, 2022)
  • KgLb - an L4 load balancer built on top of the Linux IP Virtual Server (ip_vs) (Dec 16, 2022)
  • lb - a simple reverse-proxy load balancer implementing the Weighted Round Robin algorithm (Mar 23, 2022)
  • Basic Load Balancer (Nov 1, 2021)
  • Vippy - a Virtual IP/BGP/IPVS load balancer for Equinix Metal (Mar 10, 2022)
  • slb - the Steel Load Balancer, with prebuilt binaries for armv7 and amd64 (Nov 13, 2022)
  • PureLB - a Service Load Balancer orchestrator for Kubernetes (Dec 24, 2022)
  • Consistelancer - a consistent-hashing load balancer for Kubernetes (Sep 28, 2022)
  • npchat-helmsman - a simple load balancer for npchat servers, based on the XOR distance metric between node and user id (Jan 15, 2022)
  • Squzy - a high-performance open-source monitoring, incident and alert system written in Go (Dec 12, 2022)