Pomerium is an identity-aware access proxy.

pomerium logo


Pomerium is an identity-aware proxy that enables secure access to internal applications. Pomerium provides a standardized interface to add access control to applications regardless of whether the application itself has authorization or authentication baked-in. Pomerium gateways both internal and external requests, and can be used in situations where you'd typically reach for a VPN.

Pomerium can be used to:

  • provide a single-sign-on gateway to internal applications.
  • enforce dynamic access policy based on context, identity, and device state.
  • aggregate access logs and telemetry data.
  • serve as a VPN alternative.
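
For example, a single route in config.yaml attaches an access policy to an upstream application; the hostnames and the allowed user below are placeholders:

  policy:
    - from: https://app.corp.example.com
      to: http://app.internal:8080
      allowed_users:
        - [email protected]

Requests to the external hostname are authenticated against the configured identity provider and are only proxied to the upstream when the policy allows the signed-in user.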

Docs

For comprehensive docs and tutorials, see our documentation.

Comments
  • G Suite Service Account Group Membership Validation Fails


    What happened?

    • Running on Kubernetes with the helm chart along with nginx ingress
    • We use G Suite Service account for group membership validation.
    • We have two users A and B
    • user A is a member of roughly 20 groups
    • user B is a member of roughly 60 groups (and is the owner of some of them)

    What did you expect to happen?

    • Group membership works for user A, and user A can access everything downstream fine
    • For user B, they get an ERR_CONNECTION_CLOSED error on Chrome and a blank page on Firefox
    • When tried on Safari, the user got kCFErrorDomainCFNetwork error 303
    • Also, when user A, who is a Pomerium admin, tried to log in, they got a 403 (creating a separate issue for that)

    How'd it happen?

    1. User B tried to log in

    What's your environment like?

    • Pomerium version (retrieve with pomerium --version or /ping endpoint):
    pomerium/v0.6.3 (+github.com/pomerium/pomerium; 1c7d30b; go1.14)
    
    • Server Operating System/Architecture/Cloud:

    What's your config.yaml?

    - from: https://EXT
      to: http://INT
      allowed_groups:
        - [email protected]
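
    For reference, group lookups with the Google / G Suite provider go through the service account configured via idp_service_account; a minimal sketch of the relevant settings, assuming the option names from the settings reference of this Pomerium era (values are placeholders):

    idp_provider: google
    # base64-encoded service-account JSON with G Suite domain-wide delegation enabled;
    # per the docs of this era, the JSON also needs an added impersonate_user field
    idp_service_account: <base64-encoded service account JSON>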
    

    What did you see in the logs?

    [
        {
            "level": "debug",
            "X-Forwarded-For": [
                "XXXXX"
            ],
            "X-Forwarded-Host": [
                "INTERNAL"
            ],
            "X-Forwarded-Port": [
                "443"
            ],
            "X-Forwarded-Proto": [
                "https"
            ],
            "X-Real-Ip": [
                "XXXXX"
            ],
            "ip": "XXXXX",
            "user_agent": "REMOVED",
            "req_id": "f9ec84a4-14d9-ecc1-4632-aee4d192bd76",
            "error": "Forbidden: [email protected] is not authorized for INTERNAL",
            "time": "2020-03-26T19:18:33Z",
            "message": "proxy: AuthorizeSession"
        },
        {
            "level": "info",
            "X-Forwarded-For": [
                "INTERNAL"
            ],
            "X-Forwarded-Host": [
                "INTERNAL"
            ],
            "X-Forwarded-Port": [
                "443"
            ],
            "X-Forwarded-Proto": [
                "https"
            ],
            "X-Real-Ip": [
                "INTERNAL"
            ],
            "ip": "XXXXX",
            "user_agent": "REMOVED",
            "req_id": "f9ec84a4-14d9-ecc1-4632-aee4d192bd76",
            "error": "Forbidden: [email protected] is not authorized for INTERNAL",
            "time": "2020-03-26T19:18:33Z",
            "message": "httputil: ErrorResponse"
        }
    ]
    


  • proxy returns `stream terminated by RST_STREAM with error code: PROTOCOL_ERROR` while trying to authorize


    What happened?

    500 - Internal Server Error when trying to access a proxied service

    What did you expect to happen?

    Forwarding

    How'd it happen?

    1. Go to proxied domain
    2. Log in with Google
    3. 500

    What's your environment like?

    • Pomerium version: 0.5.1

    • Environment: Kubernetes Manifests:

    apiVersion: v1
    kind: Service
    metadata:
      name: pomerium
    spec:
      ports:
        - port: 80
          name: http
          targetPort: 443
        - name: metrics
          port: 9090
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: pomerium
    spec:
      replicas: 1
      template:
        spec:
          containers:
          - name: pomerium
            image: pomerium
            args:
            - --config=/etc/pomerium/config.yaml
            env:
            - {name: INSECURE_SERVER, value: "true"}
            - {name: POMERIUM_DEBUG, value: "true"}
            - {name: AUTHENTICATE_SERVICE_URL, value: https://pomerium-authn.$(DOMAIN)}
            - {name: FORWARD_AUTH_URL, value: https://pomerium-fwd.$(DOMAIN)}
            - {name: IDP_PROVIDER, value: google}
            - name: COOKIE_SECRET
              valueFrom:
                secretKeyRef:
                  name: pomerium
                  key: cookie-secret
            - name: SHARED_SECRET
              valueFrom:
                secretKeyRef:
                  name: pomerium
                  key: shared-secret
            - name: IDP_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: pomerium
                  key: idp-client-id
            - name: IDP_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: pomerium
                  key: idp-client-secret
            - name: IDP_SERVICE_ACCOUNT
              valueFrom:
                secretKeyRef:
                  name: pomerium
                  key: idp-service-account
            ports:
              - containerPort: 443
                name: http
              - containerPort: 9090
                name: metrics
            livenessProbe:
              httpGet:
                path: /ping
                port: 443
            readinessProbe:
              httpGet:
                path: /ping
                port: 443
            volumeMounts:
            - mountPath: /etc/pomerium/
              name: config
          volumes:
          - name: config
            configMap:
              name: pomerium
    ---
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: pomerium
      annotations:
        certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    spec:
      tls:
      - hosts:
        - pomerium-authn.$(DOMAIN)
        - pomerium-fwd.$(DOMAIN)
        secretName: pomerium-tls
      rules:
        - host: pomerium-authn.$(DOMAIN)
          http:
            paths:
              - path: /
                backend:
                  serviceName: pomerium
                  servicePort: 80
        - host: pomerium-fwd.$(DOMAIN)
          http:
            paths:
              - path: /
                backend:
                  serviceName: pomerium
                  servicePort: 80
    

    What's your config.yaml?

    policy: 
      - from: https://test-nginx.cs-eng-apps-europe-west3.gcp.infra.csol.cloud
        to: http://httpbin.pomerium/
        allowed_domains:
          - container-solutions.com
    

    What did you see in the logs?

    11:23AM ERR http-error error="rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR" X-Forwarded-For=["10.156.0.9"] X-Forwarded-Host=["test-nginx.cs-eng-ops-europe-west3.gcp.infra.csol.cloud"] X-Forwarded-Port=["443"] X-Forwarded-Proto=["https"] X-Real-Ip=["10.156.0.9"] http-code=500 http-message="rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR" ip=10.40.0.6 req_id=d7e56771-0e2c-a7de-4518-25e7267da9ed user_agent="Mozilla/5.0 (Windows NT 10.0; rv:68.0) Gecko/20100101 Firefox/68.0"
    11:23AM DBG proxy: AuthorizeSession error="rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR" X-Forwarded-For=["10.156.0.9"] X-Forwarded-Host=["test-nginx.cs-eng-ops-europe-west3.gcp.infra.csol.cloud"] X-Forwarded-Port=["443"] X-Forwarded-Proto=["https"] X-Real-Ip=["10.156.0.9"] ip=10.40.0.6 req_id=d7e56771-0e2c-a7de-4518-25e7267da9ed user_agent="Mozilla/5.0 (Windows NT 10.0; rv:68.0) Gecko/20100101 Firefox/68.0"
    11:23AM DBG http-request X-Forwarded-For=["10.156.0.9"] X-Forwarded-Host=["test-nginx.cs-eng-ops-europe-west3.gcp.infra.csol.cloud"] X-Forwarded-Port=["443"] X-Forwarded-Proto=["https"] X-Real-Ip=["10.156.0.9"] duration=5.869461 [email protected] group=<groups> host=test-nginx.cs-eng-ops-europe-west3.gcp.infra.csol.cloud ip=10.40.0.6 method=GET path=/ req_id=d7e56771-0e2c-a7de-4518-25e7267da9ed service=all size=8150 status=500 user_agent="Mozilla/5.0 (Windows NT 10.0; rv:68.0) Gecko/20100101 Firefox/68.0"
    

    Additional context

    I get this error both in all-in-one mode and when deploying the services separately. I had been messing with gRPC ports, etc., before opening this issue, but I could not find what the problem is. It seems like the proxy wants to talk to the authorize service over gRPCs (gRPC over TLS), but that is not available in INSECURE_SERVER mode.
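
    If that theory is right, the relevant knobs would be Pomerium's gRPC settings. A minimal sketch, assuming the grpc_address and grpc_insecure option names from the settings reference of this era (values are placeholders, not a verified fix):

    insecure_server: true
    grpc_insecure: true     # serve/consume gRPC without TLS, matching the insecure HTTP listener
    grpc_address: ":5443"   # plaintext gRPC port for the authorize service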

  • Azure IDP implementation broken


    What happened?

    When using the Azure IDP implementation, Pomerium does not retrieve all user data, or may not process it properly. This issue blocks access to all configured applications, as Pomerium matches the configured policy against this invalid user data.

    Your help is appreciated!

    What did you expect to happen?

    The Azure IDP implementation should provide the requested information.

    How'd it happen?

    The following data is shown after a successful Azure AD sign in on the Pomerium HTTPS endpoint:

    Name: <AD Display name>
    UserID: qm5pToiuIWNnS8o8WS_SNcbE2Q2<16 char redacted>
    User: qm5pToiuIWNnS8o8WS_SNcbE2Q2<16 char redacted>
    Group: ec9a2ef9-c01c-4525-8057-<12 char redacted>
    Expiry: 2020-07-08 22:08:10 +0000 UTC
    Issued: 2020-07-08 21:08:11 +0000 UTC
    Issuer: <Pomerium HTTPS endpoint>
    Audience: <Pomerium HTTPS endpoint>
    

    Note that the values of UserID and User are equal, but neither of them shows the email address of the user, which is configured in Azure AD. The following API permissions are configured (see the attached screenshot):

    Could the problem be related to these permissions? I have followed the Quickstart Guide, but noticed that the Azure AD setup section seems to show an older version of the Azure Portal and doesn't mention the 5 Azure Active Directory Graph permissions that are set in the screenshot but not granted by default.
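
    A side note when reading the session dump above: Azure AD typically reports group membership as object IDs (GUIDs) rather than display names, which matches the Group value shown. Any group-based rule would therefore have to reference the GUID; a hypothetical example (placeholder values):

    policy:
      - from: https://app.redacted.domain.example.com
        to: http://app
        allowed_groups:
          - ec9a2ef9-c01c-4525-8057-000000000000   # Azure AD group object ID, not the group name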

    What's your environment like?

    • Pomerium version: Pomerium 0.9.2-1592838263+3b74053
    • Ubuntu 20.04, running with docker-compose, using forward authentication with Traefik

    What's your config.yaml?

    pomerium_debug: true
    insecure_server: true
    address: :80
    forward_auth_url: http://auth
    authenticate_service_url: https://auth.redacted.domain.example.com
    
    idp_provider: azure
    idp_client_id: <secret>
    idp_client_secret: <client>
    idp_provider_url: https://login.microsoftonline.com/<endpoint id>/v2.0
    
    cookie_secret: <secret>
    shared_secret: <secret>
    
    policy:
    - from: https://traefik.redacted.domain.example.com
      to: http://traefik
      allowed_users:
        - <Azure AD user>
    - from: https://third-party.example.com
      to: http://<app dns-name>
      allowed_users:
        - <Azure AD user>
    

    What did you see in the logs?

    pomerium.log

  • v0.9.0 - Connection refused - no error in logs


    What happened?

    After updating Pomerium (docker-compose) to the latest v0.9.0, I get a "connection refused" on all policies. I did not find any "fatal" or "error" entries in the logs. New connections are not shown in the logs. The full restart log is below.

    What did you expect to happen?

    Since there were no breaking changes in the update, the same configuration should still work as before.

    How'd it happen?

    1. fetched the new pomerium v0.9.0 image
    2. deployed my old docker compose file with the same config.yaml used in v0.8.3
    3. tried to connect to a service proxied by Pomerium

    What's your environment like?

    • Pomerium 0.9.0
    • Docker Compose in Ubuntu 20.04

    What's your config.yaml?

    config.yaml
    policy:
    # Portainer
      - from: https://portainer.domain.com
        to: http://192.168.1.2:9000
        allowed_users:
          - [email protected] 
    
    ...more policies...
    

    What's your docker compose file?

    docker compose
    version: "2"
    services:
      pomerium:
        container_name: pomerium
        image: pomerium/pomerium:latest
        environment:
          - AUTHENTICATE_SERVICE_URL=https://authenticate.domain.com
          - AUTOCERT=true
          - AUTOCERT_USE_STAGING=false
          - AUTOCERT_DIR=/pomerium/certs/
          - IDP_PROVIDER=google
          - IDP_CLIENT_ID=XXX.apps.googleusercontent.com
          - IDP_CLIENT_SECRET=XXX
          - COOKIE_SECRET=XXX
          - ADMINISTRATORS="[email protected]"
          - HTTP_REDIRECT_ADDR=:80
        volumes:
          - pomerium_config:/pomerium/
        ports:
          - 80:80
          - 443:443
        restart: always
    

    What did you see in the logs?

    Full restart log; after the last line nothing happens.

    2020-05-31T19:41:43.950195768Z {"level":"fatal","error":"http: Server closed","time":"2020-05-31T19:41:43Z","message":"cmd/pomerium"}
    2020-05-31T19:41:46.150121440Z 2020/05/31 19:41:46 [INFO][cache:0xc0000f8e10] Started certificate maintenance routine
    2020-05-31T19:41:46.150296820Z {"level":"info","addr":":80","time":"2020-05-31T19:41:46Z","message":"starting http redirect server"}
    2020-05-31T19:41:46.274039653Z {"level":"info","version":"0.9.0-1590940862+914b952","time":"2020-05-31T19:41:46Z","message":"cmd/pomerium"}
    2020-05-31T19:41:46.275111005Z {"level":"info","port":"45679","time":"2020-05-31T19:41:46Z","message":"gRPC server started"}
    2020-05-31T19:41:46.275162790Z {"level":"info","port":"33161","time":"2020-05-31T19:41:46Z","message":"HTTP server started"}
    2020-05-31T19:41:46.283430991Z {"level":"debug","service":"envoy","location":"/tmp/.pomerium-envoy/envoy-config.yaml","time":"2020-05-31T19:41:46Z","message":"wrote config file to location"}
    2020-05-31T19:41:46.284438038Z {"level":"info","addr":"localhost:5443","time":"2020-05-31T19:41:46Z","message":"internal/grpc: grpc with insecure"}
    2020-05-31T19:41:46.461503150Z {"level":"warn","time":"2020-05-31T19:41:46Z","message":"google: no service account, will not fetch groups"}
    2020-05-31T19:41:46.465330339Z {"level":"info","host":"authenticate.domain.cc","time":"2020-05-31T19:41:46Z","message":"enabled authenticate service"}
    2020-05-31T19:41:46.553332606Z {"level":"info","checksum":"dca1e1bd47b186af","time":"2020-05-31T19:41:46Z","message":"authorize: updating options"}
    2020-05-31T19:41:46.553385609Z {"level":"info","PublicKey":"LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFUG9YUmp3U1VoWW9RbmF0SUUrVkxQR0lrUTBIMgp3NjFoZGJDUTlkQnNqUjVRMTB3ZFhheHByTmp1azlqbVVyQVhkQ2VjZVBEdHBDbWdrbmhaVnQvb2FBPT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==","time":"2020-05-31T19:41:46Z","message":"authorize: ecdsa public key"}
    2020-05-31T19:41:46.570117954Z {"level":"info","time":"2020-05-31T19:41:46Z","message":"enabled authorize service"}
    2020-05-31T19:41:46.651502935Z {"level":"info","checksum":"dca1e1bd47b186af","time":"2020-05-31T19:41:46Z","message":"authorize: updating options"}
    2020-05-31T19:41:46.651554263Z {"level":"info","PublicKey":"LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFYUI1NUk1eSt6c1dwZFRvM09Xa2RIODhNSXpQUgowZ3ptaGx4eG1FUTdzNXAvMnkvRXE1SkZwd3hnN1JUTmR6aWVJbmdlRjEyWGpjYVBUVDg2OE5jZ0lBPT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==","time":"2020-05-31T19:41:46Z","message":"authorize: ecdsa public key"}
    2020-05-31T19:41:46.666941534Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"autocache: with options: &{PoolOptions:<nil> PoolScheme:http PoolPort:8333 PoolTransportFn:0xff8650 PoolContext:<nil> MemberlistConfig:<nil> Logger:0xc000d26dc0}"}
    2020-05-31T19:41:46.666998871Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"autocache: defaulting to lan configuration"}
    2020-05-31T19:41:46.668890953Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"autocache: self addr is: 172.28.0.2"}
    2020-05-31T19:41:46.668943733Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"autocache: groupcache self: http://172.28.0.2:8333 options: &{BasePath: Replicas:0 HashFn:<nil>}"}
    2020-05-31T19:41:46.668983078Z {"level":"info","service":"autocache","insecure":true,"addr":":8333","time":"2020-05-31T19:41:46Z","message":"internal/httputil: http server started"}
    2020-05-31T19:41:46.669581812Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"[DEBUG] memberlist: Initiating push/pull sync with:  127.0.0.1:7946"}
    2020-05-31T19:41:46.669763225Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"[DEBUG] memberlist: Stream connection from=127.0.0.1:46502"}
    2020-05-31T19:41:46.671030086Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"[DEBUG] memberlist: Failed to join ::1: dial tcp [::1]:7946: connect: cannot assign requested address"}
    2020-05-31T19:41:46.671515289Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"[DEBUG] memberlist: Stream connection from=172.28.0.2:40150"}
    2020-05-31T19:41:46.671564111Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"[DEBUG] memberlist: Initiating push/pull sync with:  172.28.0.2:7946"}
    2020-05-31T19:41:46.672329542Z {"level":"info","time":"2020-05-31T19:41:46Z","message":"enabled cache service"}
    2020-05-31T19:41:46.674609564Z {"level":"info","addr":"localhost:5443","time":"2020-05-31T19:41:46Z","message":"internal/grpc: grpc with insecure"}
    2020-05-31T19:41:46.675453547Z {"level":"info","addr":"127.0.0.1:45679","time":"2020-05-31T19:41:46Z","message":"starting control-plane gRPC server"}
    2020-05-31T19:41:46.675501306Z {"level":"info","addr":"127.0.0.1:33161","time":"2020-05-31T19:41:46Z","message":"starting control-plane HTTP server"}
    2020-05-31T19:41:48.413980648Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"initializing epoch 0 (hot restart version=disabled)"}
    2020-05-31T19:41:48.414236986Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"statically linked extensions:"}
    2020-05-31T19:41:48.414744052Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.filters.http: envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.gzip, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.lua, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.squash, envoy.filters.http.tap, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.gzip, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash"}
    2020-05-31T19:41:48.415085345Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.dubbo_proxy.filters: envoy.filters.dubbo.router"}
    2020-05-31T19:41:48.415283131Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource"}
    2020-05-31T19:41:48.415624112Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter"}
    2020-05-31T19:41:48.415699524Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.health_checkers: envoy.health_checkers.redis"}
    2020-05-31T19:41:48.415712361Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts"}
    2020-05-31T19:41:48.415724314Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.resolvers: envoy.ip"}
    2020-05-31T19:41:48.415734991Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  http_cache_factory: envoy.extensions.http.cache.simple"}
    2020-05-31T19:41:48.415745972Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.stats_sinks: envoy.dog_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.statsd"}
    2020-05-31T19:41:48.415763154Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router"}
    2020-05-31T19:41:48.415777476Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls"}
    2020-05-31T19:41:48.415789885Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.dubbo_proxy.route_matchers: default"}
    2020-05-31T19:41:48.415849352Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata"}
    2020-05-31T19:41:48.416121836Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy"}
    2020-05-31T19:41:48.416146091Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis"}
    2020-05-31T19:41:48.416187131Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls"}
    2020-05-31T19:41:48.416352357Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy"}
    2020-05-31T19:41:48.416373386Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.thrift_proxy.transports: auto, framed, header, unframed"}
    2020-05-31T19:41:48.416398357Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.retry_priorities: envoy.retry_priorities.previous_priorities"}
    2020-05-31T19:41:48.416457641Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.dubbo_proxy.protocols: dubbo"}
    2020-05-31T19:41:48.416520463Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector"}
    2020-05-31T19:41:48.416563294Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.udp_listeners: raw_udp_listener"}
    2020-05-31T19:41:48.416614102Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin"}
    2020-05-31T19:41:48.416648092Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.dubbo_proxy.serializers: dubbo.hessian2"}
    2020-05-31T19:41:48.416659869Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.tcp_grpc, envoy.file_access_log, envoy.http_grpc_access_log, envoy.tcp_grpc_access_log"}
    2020-05-31T19:41:48.425046560Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"admin address: 127.0.0.1:9901"}
    2020-05-31T19:41:48.425825817Z {"level":"debug","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"No overload action is configured for envoy.overload_actions.shrink_heap."}
    2020-05-31T19:41:48.425938675Z {"level":"debug","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"No overload action is configured for envoy.overload_actions.stop_accepting_connections."}
    2020-05-31T19:41:48.425952821Z {"level":"debug","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"No overload action is configured for envoy.overload_actions.stop_accepting_connections."}
    2020-05-31T19:41:48.425968369Z {"level":"debug","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"No overload action is configured for envoy.overload_actions.stop_accepting_connections."}
    2020-05-31T19:41:48.425980358Z {"level":"debug","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"No overload action is configured for envoy.overload_actions.stop_accepting_connections."}
    2020-05-31T19:41:48.426293122Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"runtime: layers:\n  - name: base\n    static_layer:\n      {}\n  - name: admin\n    admin_layer:\n      {}"}
    2020-05-31T19:41:48.426620029Z {"level":"info","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"loading tracing configuration"}
    2020-05-31T19:41:48.426686792Z {"level":"info","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"loading 0 static secret(s)"}
    2020-05-31T19:41:48.426699113Z {"level":"info","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"loading 1 cluster(s)"}
    2020-05-31T19:41:48.426939128Z {"level":"debug","service":"envoy","name":"grpc","time":"2020-05-31T19:41:48Z","message":"completionThread running"}
    2020-05-31T19:41:48.428789217Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 127.0.0.1:45679"}
    2020-05-31T19:41:48.428851262Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS initial cluster pomerium-control-plane-grpc"}
    2020-05-31T19:41:48.428997296Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster pomerium-control-plane-grpc completed"}
    2020-05-31T19:41:48.429160022Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster pomerium-control-plane-grpc contains no targets"}
    2020-05-31T19:41:48.429384565Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster pomerium-control-plane-grpc initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.429483193Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster pomerium-control-plane-grpc added 1 removed 0"}
    2020-05-31T19:41:48.429528136Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=pomerium-control-plane-grpc primary=0 secondary=0"}
    2020-05-31T19:41:48.429610601Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 0"}
    2020-05-31T19:41:48.429623781Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=pomerium-control-plane-grpc primary=0 secondary=0"}
    2020-05-31T19:41:48.429675575Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 1"}
    2020-05-31T19:41:48.429773073Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize primary init clusters empty: true"}
    2020-05-31T19:41:48.429788830Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize secondary init clusters empty: true"}
    2020-05-31T19:41:48.430044032Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize cds api ready: true"}
    2020-05-31T19:41:48.430055808Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: initializing cds"}
    2020-05-31T19:41:48.430066753Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"gRPC mux addWatch for type.googleapis.com/envoy.config.cluster.v3.Cluster"}
    2020-05-31T19:41:48.430077913Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"No stream available to sendDiscoveryRequest for type.googleapis.com/envoy.config.cluster.v3.Cluster"}
    2020-05-31T19:41:48.430089471Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"[bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:47] Establishing new gRPC bidi stream for rpc StreamAggregatedResources(stream .envoy.service.discovery.v3.DiscoveryRequest) returns (stream .envoy.service.discovery.v3.DiscoveryResponse);"}
    2020-05-31T19:41:48.430101750Z {"level":"debug","service":"envoy","name":"envoy","time":"2020-05-31T19:41:48Z"}
    2020-05-31T19:41:48.430115659Z {"level":"debug","service":"envoy","name":"router","time":"2020-05-31T19:41:48Z","message":"\"[C0][S10772420691089791976] cluster \\'pomerium-control-plane-grpc\\' match for URL \\'/envoy.service.discovery.v3.AggregatedDiscoveryService/StreamAggregatedResources\\'\""}
    2020-05-31T19:41:48.430136395Z {"level":"debug","service":"envoy","name":"router","time":"2020-05-31T19:41:48Z","message":"\"[C0][S10772420691089791976] router decoding headers:\\n\\':method\\', \\'POST\\'\\n\\':path\\', \\'/envoy.service.discovery.v3.AggregatedDiscoveryService/StreamAggregatedResources\\'\\n\\':authority\\', \\'pomerium-control-plane-grpc\\'\\n\\':scheme\\', \\'http\\'\\n\\'te\\', \\'trailers\\'\\n\\'content-type\\', \\'application/grpc\\'\\n\\'x-envoy-internal\\', \\'true\\'\\n\\'x-forwarded-for\\', \\'172.28.0.2\\'\""}
    2020-05-31T19:41:48.430155830Z {"level":"debug","service":"envoy","name":"envoy","time":"2020-05-31T19:41:48Z"}
    2020-05-31T19:41:48.430182708Z {"level":"debug","service":"envoy","name":"pool","time":"2020-05-31T19:41:48Z","message":"queueing request due to no available connections"}
    2020-05-31T19:41:48.430194569Z {"level":"debug","service":"envoy","name":"pool","time":"2020-05-31T19:41:48Z","message":"creating a new connection"}
    2020-05-31T19:41:48.430205192Z {"level":"debug","service":"envoy","name":"client","time":"2020-05-31T19:41:48Z","message":"[C0] connecting"}
    2020-05-31T19:41:48.430216219Z {"level":"debug","service":"envoy","name":"connection","time":"2020-05-31T19:41:48Z","message":"[C0] connecting to 127.0.0.1:45679"}
    2020-05-31T19:41:48.430227711Z {"level":"debug","service":"envoy","name":"connection","time":"2020-05-31T19:41:48Z","message":"[C0] connection in progress"}
    2020-05-31T19:41:48.430238787Z {"level":"debug","service":"envoy","name":"http2","time":"2020-05-31T19:41:48Z","message":"[C0] updating connection-level initial window size to 268435456"}
    2020-05-31T19:41:48.430539407Z {"level":"info","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"loading 0 listener(s)"}
    2020-05-31T19:41:48.430569233Z {"level":"info","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"loading stats sink configuration"}
    2020-05-31T19:41:48.430580730Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"added target LDS to init manager Server"}
    2020-05-31T19:41:48.430826566Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"starting main dispatch loop"}
    2020-05-31T19:41:48.430847734Z {"level":"debug","service":"envoy","name":"connection","time":"2020-05-31T19:41:48Z","message":"[C0] connected"}
    2020-05-31T19:41:48.430859549Z {"level":"debug","service":"envoy","name":"client","time":"2020-05-31T19:41:48Z","message":"[C0] connected"}
    2020-05-31T19:41:48.430914975Z {"level":"debug","service":"envoy","name":"pool","time":"2020-05-31T19:41:48Z","message":"[C0] attaching to next request"}
    2020-05-31T19:41:48.430929538Z {"level":"debug","service":"envoy","name":"pool","time":"2020-05-31T19:41:48Z","message":"[C0] creating stream"}
    2020-05-31T19:41:48.431015109Z {"level":"debug","service":"envoy","name":"router","time":"2020-05-31T19:41:48Z","message":"[C0][S10772420691089791976] pool ready"}
    2020-05-31T19:41:48.436008721Z {"level":"debug","service":"envoy","name":"router","time":"2020-05-31T19:41:48Z","message":"[C0][S10772420691089791976] upstream headers complete: end_stream=false"}
    2020-05-31T19:41:48.436180941Z {"level":"debug","service":"envoy","name":"http","time":"2020-05-31T19:41:48Z","message":"\"async http request response headers (end_stream=false):\\n\\':status\\', \\'200\\'\\n\\'content-type\\', \\'application/grpc\\'\""}
    2020-05-31T19:41:48.436197677Z {"level":"debug","service":"envoy","name":"envoy","time":"2020-05-31T19:41:48Z"}
    2020-05-31T19:41:48.436209044Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Received gRPC message for type.googleapis.com/envoy.config.cluster.v3.Cluster at version 1"}
    2020-05-31T19:41:48.436524440Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Pausing discovery requests for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment"}
    2020-05-31T19:41:48.436546585Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cds: add 21 cluster(s), remove 0 cluster(s)"}
    2020-05-31T19:41:48.436629164Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'pomerium-control-plane-grpc\\' skipped\""}
    2020-05-31T19:41:48.438302788Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 127.0.0.1:33161"}
    2020-05-31T19:41:48.438367202Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster pomerium-control-plane-http during init"}
    2020-05-31T19:41:48.438440757Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster pomerium-control-plane-http"}
    2020-05-31T19:41:48.438525214Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster pomerium-control-plane-http completed"}
    2020-05-31T19:41:48.438690058Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster pomerium-control-plane-http contains no targets"}
    2020-05-31T19:41:48.438733519Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster pomerium-control-plane-http initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.438749461Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster pomerium-control-plane-http added 1 removed 0"}
    2020-05-31T19:41:48.438854131Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=pomerium-control-plane-http primary=0 secondary=0"}
    2020-05-31T19:41:48.438931488Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.439007696Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=pomerium-control-plane-http primary=0 secondary=0"}
    2020-05-31T19:41:48.439107552Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'pomerium-control-plane-http\\'\""}
    2020-05-31T19:41:48.439680374Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster pomerium-authz during init"}
    2020-05-31T19:41:48.439703949Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster pomerium-authz"}
    2020-05-31T19:41:48.439898821Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address [::1]:5443"}
    2020-05-31T19:41:48.439918202Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"DNS hosts have changed for localhost"}
    2020-05-31T19:41:48.440092493Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"DNS refresh rate reset for localhost, refresh rate 5000 ms"}
    2020-05-31T19:41:48.440109701Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster pomerium-authz completed"}
    2020-05-31T19:41:48.440121045Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster pomerium-authz contains no targets"}
    2020-05-31T19:41:48.440239835Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster pomerium-authz initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.440256193Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster pomerium-authz added 1 removed 0"}
    2020-05-31T19:41:48.440398978Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=pomerium-authz primary=0 secondary=0"}
    2020-05-31T19:41:48.440431384Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.440447025Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=pomerium-authz primary=0 secondary=0"}
    2020-05-31T19:41:48.440566722Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'pomerium-authz\\'\""}
    2020-05-31T19:41:48.441524266Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9000"}
    2020-05-31T19:41:48.441737671Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-5a1e0211ffbc5dc7 during init"}
    2020-05-31T19:41:48.441754974Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-5a1e0211ffbc5dc7"}
    2020-05-31T19:41:48.441766278Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-5a1e0211ffbc5dc7 completed"}
    2020-05-31T19:41:48.441777392Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-5a1e0211ffbc5dc7 contains no targets"}
    2020-05-31T19:41:48.441949978Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-5a1e0211ffbc5dc7 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.441983228Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-5a1e0211ffbc5dc7 added 1 removed 0"}
    2020-05-31T19:41:48.442038365Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-5a1e0211ffbc5dc7 primary=0 secondary=0"}
    2020-05-31T19:41:48.442051770Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.442103371Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-5a1e0211ffbc5dc7 primary=0 secondary=0"}
    2020-05-31T19:41:48.442180135Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-5a1e0211ffbc5dc7\\'\""}
    2020-05-31T19:41:48.443271154Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.19:5000"}
    2020-05-31T19:41:48.443359447Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-37a82db4c0ffdc3b during init"}
    2020-05-31T19:41:48.443456473Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-37a82db4c0ffdc3b"}
    2020-05-31T19:41:48.443471492Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-37a82db4c0ffdc3b completed"}
    2020-05-31T19:41:48.443513566Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-37a82db4c0ffdc3b contains no targets"}
    2020-05-31T19:41:48.443558800Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-37a82db4c0ffdc3b initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.443659717Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-37a82db4c0ffdc3b added 1 removed 0"}
    2020-05-31T19:41:48.443676117Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-37a82db4c0ffdc3b primary=0 secondary=0"}
    2020-05-31T19:41:48.443735698Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.443882334Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-37a82db4c0ffdc3b primary=0 secondary=0"}
    2020-05-31T19:41:48.443901647Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-37a82db4c0ffdc3b\\'\""}
    2020-05-31T19:41:48.444924751Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.19:8080"}
    2020-05-31T19:41:48.444943849Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-81a4c0a20675e4ee during init"}
    2020-05-31T19:41:48.445083007Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-81a4c0a20675e4ee"}
    2020-05-31T19:41:48.445114365Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-81a4c0a20675e4ee completed"}
    2020-05-31T19:41:48.445129868Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-81a4c0a20675e4ee contains no targets"}
    2020-05-31T19:41:48.445260490Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-81a4c0a20675e4ee initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.445277616Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-81a4c0a20675e4ee added 1 removed 0"}
    2020-05-31T19:41:48.445419363Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-81a4c0a20675e4ee primary=0 secondary=0"}
    2020-05-31T19:41:48.445450798Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.445507868Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-81a4c0a20675e4ee primary=0 secondary=0"}
    2020-05-31T19:41:48.445654650Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-81a4c0a20675e4ee\\'\""}
    2020-05-31T19:41:48.446421584Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9014"}
    2020-05-31T19:41:48.446568439Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-f4225a06a6156331 during init"}
    2020-05-31T19:41:48.446584877Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-f4225a06a6156331"}
    2020-05-31T19:41:48.446614293Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-f4225a06a6156331 completed"}
    2020-05-31T19:41:48.446744279Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-f4225a06a6156331 contains no targets"}
    2020-05-31T19:41:48.446777624Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-f4225a06a6156331 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.446964598Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-f4225a06a6156331 added 1 removed 0"}
    2020-05-31T19:41:48.446992453Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-f4225a06a6156331 primary=0 secondary=0"}
    2020-05-31T19:41:48.447069046Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.447083371Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-f4225a06a6156331 primary=0 secondary=0"}
    2020-05-31T19:41:48.447095257Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-f4225a06a6156331\\'\""}
    2020-05-31T19:41:48.448135909Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9017"}
    2020-05-31T19:41:48.448157254Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-9ec141595634cb8e during init"}
    2020-05-31T19:41:48.448168798Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-9ec141595634cb8e"}
    2020-05-31T19:41:48.448229136Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-9ec141595634cb8e completed"}
    2020-05-31T19:41:48.448337708Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-9ec141595634cb8e contains no targets"}
    2020-05-31T19:41:48.448438037Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-9ec141595634cb8e initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.448518378Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-9ec141595634cb8e added 1 removed 0"}
    2020-05-31T19:41:48.448694690Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-9ec141595634cb8e primary=0 secondary=0"}
    2020-05-31T19:41:48.448712266Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.448724111Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-9ec141595634cb8e primary=0 secondary=0"}
    2020-05-31T19:41:48.448735271Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-9ec141595634cb8e\\'\""}
    2020-05-31T19:41:48.449708788Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9013"}
    2020-05-31T19:41:48.449743378Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-6520db35db3cdb0c during init"}
    2020-05-31T19:41:48.449828985Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-6520db35db3cdb0c"}
    2020-05-31T19:41:48.449950633Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-6520db35db3cdb0c completed"}
    2020-05-31T19:41:48.449979792Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-6520db35db3cdb0c contains no targets"}
    2020-05-31T19:41:48.449991314Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-6520db35db3cdb0c initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.450019429Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-6520db35db3cdb0c added 1 removed 0"}
    2020-05-31T19:41:48.450063175Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-6520db35db3cdb0c primary=0 secondary=0"}
    2020-05-31T19:41:48.450136998Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.450198706Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-6520db35db3cdb0c primary=0 secondary=0"}
    2020-05-31T19:41:48.450228561Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-6520db35db3cdb0c\\'\""}
    2020-05-31T19:41:48.451439558Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9016"}
    2020-05-31T19:41:48.451466306Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-c474f5d3263b67c5 during init"}
    2020-05-31T19:41:48.451477710Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-c474f5d3263b67c5"}
    2020-05-31T19:41:48.451554718Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-c474f5d3263b67c5 completed"}
    2020-05-31T19:41:48.451616608Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-c474f5d3263b67c5 contains no targets"}
    2020-05-31T19:41:48.451664226Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-c474f5d3263b67c5 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.451770301Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-c474f5d3263b67c5 added 1 removed 0"}
    2020-05-31T19:41:48.451803519Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-c474f5d3263b67c5 primary=0 secondary=0"}
    2020-05-31T19:41:48.451920223Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.451986843Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-c474f5d3263b67c5 primary=0 secondary=0"}
    2020-05-31T19:41:48.452001595Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-c474f5d3263b67c5\\'\""}
    2020-05-31T19:41:48.452968856Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:19999"}
    2020-05-31T19:41:48.453002376Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-bd1bcd473d1c9c94 during init"}
    2020-05-31T19:41:48.453821904Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-bd1bcd473d1c9c94"}
    2020-05-31T19:41:48.453866364Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-bd1bcd473d1c9c94 completed"}
    2020-05-31T19:41:48.453878333Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-bd1bcd473d1c9c94 contains no targets"}
    2020-05-31T19:41:48.453889358Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-bd1bcd473d1c9c94 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.453904399Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-bd1bcd473d1c9c94 added 1 removed 0"}
    2020-05-31T19:41:48.453915547Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-bd1bcd473d1c9c94 primary=0 secondary=0"}
    2020-05-31T19:41:48.453926408Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.453936886Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-bd1bcd473d1c9c94 primary=0 secondary=0"}
    2020-05-31T19:41:48.453973686Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-bd1bcd473d1c9c94\\'\""}
    2020-05-31T19:41:48.454736167Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9022"}
    2020-05-31T19:41:48.456040853Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-3731acb224b4dbe1 during init"}
    2020-05-31T19:41:48.456091807Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-3731acb224b4dbe1"}
    2020-05-31T19:41:48.456103690Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-3731acb224b4dbe1 completed"}
    2020-05-31T19:41:48.456114464Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-3731acb224b4dbe1 contains no targets"}
    2020-05-31T19:41:48.456125057Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-3731acb224b4dbe1 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.456135821Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-3731acb224b4dbe1 added 1 removed 0"}
    2020-05-31T19:41:48.456146646Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-3731acb224b4dbe1 primary=0 secondary=0"}
    2020-05-31T19:41:48.456157459Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.456167837Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-3731acb224b4dbe1 primary=0 secondary=0"}
    2020-05-31T19:41:48.456178498Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-3731acb224b4dbe1\\'\""}
    2020-05-31T19:41:48.456393814Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9035"}
    2020-05-31T19:41:48.456418134Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-43037149f27f249d during init"}
    2020-05-31T19:41:48.456559230Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-43037149f27f249d"}
    2020-05-31T19:41:48.456589669Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-43037149f27f249d completed"}
    2020-05-31T19:41:48.456619558Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-43037149f27f249d contains no targets"}
    2020-05-31T19:41:48.456686466Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-43037149f27f249d initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.456733803Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-43037149f27f249d added 1 removed 0"}
    2020-05-31T19:41:48.456798970Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-43037149f27f249d primary=0 secondary=0"}
    2020-05-31T19:41:48.456877184Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.457017406Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-43037149f27f249d primary=0 secondary=0"}
    2020-05-31T19:41:48.457049945Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-43037149f27f249d\\'\""}
    2020-05-31T19:41:48.458156092Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:8888"}
    2020-05-31T19:41:48.458207167Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-b31981dd2e23d760 during init"}
    2020-05-31T19:41:48.458232685Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-b31981dd2e23d760"}
    2020-05-31T19:41:48.458246017Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-b31981dd2e23d760 completed"}
    2020-05-31T19:41:48.458295627Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-b31981dd2e23d760 contains no targets"}
    2020-05-31T19:41:48.458309630Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-b31981dd2e23d760 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.458321257Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-b31981dd2e23d760 added 1 removed 0"}
    2020-05-31T19:41:48.458374840Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-b31981dd2e23d760 primary=0 secondary=0"}
    2020-05-31T19:41:48.458468151Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.458556915Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-b31981dd2e23d760 primary=0 secondary=0"}
    2020-05-31T19:41:48.458606211Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-b31981dd2e23d760\\'\""}
    2020-05-31T19:41:48.459494493Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9002"}
    2020-05-31T19:41:48.459516071Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-595c96a54e13cb13 during init"}
    2020-05-31T19:41:48.459582915Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-595c96a54e13cb13"}
    2020-05-31T19:41:48.459641384Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-595c96a54e13cb13 completed"}
    2020-05-31T19:41:48.459785256Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-595c96a54e13cb13 contains no targets"}
    2020-05-31T19:41:48.459825515Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-595c96a54e13cb13 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.459904042Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-595c96a54e13cb13 added 1 removed 0"}
    2020-05-31T19:41:48.459918889Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-595c96a54e13cb13 primary=0 secondary=0"}
    2020-05-31T19:41:48.459978680Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.460016722Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-595c96a54e13cb13 primary=0 secondary=0"}
    2020-05-31T19:41:48.460047906Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-595c96a54e13cb13\\'\""}
    2020-05-31T19:41:48.461011492Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9002"}
    2020-05-31T19:41:48.461028213Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-6e95de0e09f537a4 during init"}
    2020-05-31T19:41:48.461089548Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-6e95de0e09f537a4"}
    2020-05-31T19:41:48.461165078Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-6e95de0e09f537a4 completed"}
    2020-05-31T19:41:48.461220947Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-6e95de0e09f537a4 contains no targets"}
    2020-05-31T19:41:48.461302860Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-6e95de0e09f537a4 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.461378614Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-6e95de0e09f537a4 added 1 removed 0"}
    2020-05-31T19:41:48.461504294Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-6e95de0e09f537a4 primary=0 secondary=0"}
    2020-05-31T19:41:48.461536079Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.461548932Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-6e95de0e09f537a4 primary=0 secondary=0"}
    2020-05-31T19:41:48.461661281Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-6e95de0e09f537a4\\'\""}
    2020-05-31T19:41:48.462523534Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9001"}
    2020-05-31T19:41:48.462557539Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-e524878422c7359f during init"}
    2020-05-31T19:41:48.462659731Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-e524878422c7359f"}
    2020-05-31T19:41:48.462692341Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-e524878422c7359f completed"}
    2020-05-31T19:41:48.462826634Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-e524878422c7359f contains no targets"}
    2020-05-31T19:41:48.462842522Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-e524878422c7359f initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.462909742Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-e524878422c7359f added 1 removed 0"}
    2020-05-31T19:41:48.462997987Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-e524878422c7359f primary=0 secondary=0"}
    2020-05-31T19:41:48.463071099Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.463163855Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-e524878422c7359f primary=0 secondary=0"}
    2020-05-31T19:41:48.463231869Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-e524878422c7359f\\'\""}
    2020-05-31T19:41:48.464170774Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9001"}
    2020-05-31T19:41:48.464189284Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-41179ead81ff71b3 during init"}
    2020-05-31T19:41:48.464200421Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-41179ead81ff71b3"}
    2020-05-31T19:41:48.464259850Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-41179ead81ff71b3 completed"}
    2020-05-31T19:41:48.464273723Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-41179ead81ff71b3 contains no targets"}
    2020-05-31T19:41:48.464355607Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-41179ead81ff71b3 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.464446218Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-41179ead81ff71b3 added 1 removed 0"}
    2020-05-31T19:41:48.464463165Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-41179ead81ff71b3 primary=0 secondary=0"}
    2020-05-31T19:41:48.464503162Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.464546704Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-41179ead81ff71b3 primary=0 secondary=0"}
    2020-05-31T19:41:48.464720623Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-41179ead81ff71b3\\'\""}
    2020-05-31T19:41:48.465847740Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:8090"}
    2020-05-31T19:41:48.465865380Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-527900684d2a08c1 during init"}
    2020-05-31T19:41:48.465876798Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-527900684d2a08c1"}
    2020-05-31T19:41:48.465900789Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-527900684d2a08c1 completed"}
    2020-05-31T19:41:48.465973216Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-527900684d2a08c1 contains no targets"}
    2020-05-31T19:41:48.466005024Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-527900684d2a08c1 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.466078794Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-527900684d2a08c1 added 1 removed 0"}
    2020-05-31T19:41:48.466153520Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-527900684d2a08c1 primary=0 secondary=0"}
    2020-05-31T19:41:48.466226341Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.466304163Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-527900684d2a08c1 primary=0 secondary=0"}
    2020-05-31T19:41:48.466368905Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-527900684d2a08c1\\'\""}
    2020-05-31T19:41:48.467409049Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9023"}
    2020-05-31T19:41:48.467434669Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-ba709490a86c11a1 during init"}
    2020-05-31T19:41:48.467446026Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-ba709490a86c11a1"}
    2020-05-31T19:41:48.467498010Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-ba709490a86c11a1 completed"}
    2020-05-31T19:41:48.467623528Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-ba709490a86c11a1 contains no targets"}
    2020-05-31T19:41:48.467656947Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-ba709490a86c11a1 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.467752649Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-ba709490a86c11a1 added 1 removed 0"}
    2020-05-31T19:41:48.467803406Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-ba709490a86c11a1 primary=0 secondary=0"}
    2020-05-31T19:41:48.467863935Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.467969207Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-ba709490a86c11a1 primary=0 secondary=0"}
    2020-05-31T19:41:48.468045414Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-ba709490a86c11a1\\'\""}
    2020-05-31T19:41:48.469034006Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9025"}
    2020-05-31T19:41:48.469052143Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-1a81cefd0b545e72 during init"}
    2020-05-31T19:41:48.469079098Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-1a81cefd0b545e72"}
    2020-05-31T19:41:48.469091203Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-1a81cefd0b545e72 completed"}
    2020-05-31T19:41:48.469102479Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-1a81cefd0b545e72 contains no targets"}
    2020-05-31T19:41:48.469144605Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-1a81cefd0b545e72 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.469158028Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-1a81cefd0b545e72 added 1 removed 0"}
    2020-05-31T19:41:48.469230153Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-1a81cefd0b545e72 primary=0 secondary=0"}
    2020-05-31T19:41:48.469362094Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.469388667Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-1a81cefd0b545e72 primary=0 secondary=0"}
    2020-05-31T19:41:48.469400254Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-1a81cefd0b545e72\\'\""}
    2020-05-31T19:41:48.469426732Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 3"}
    2020-05-31T19:41:48.469468299Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize primary init clusters empty: true"}
    2020-05-31T19:41:48.469528249Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize secondary init clusters empty: true"}
    2020-05-31T19:41:48.469614010Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize cds api ready: true"}
    2020-05-31T19:41:48.469683657Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: all clusters initialized"}
    2020-05-31T19:41:48.469727635Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Pausing discovery requests for type.googleapis.com/envoy.api.v2.RouteConfiguration"}
    2020-05-31T19:41:48.469799126Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"all clusters initialized. initializing init manager"}
    2020-05-31T19:41:48.469840910Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Server initializing"}
    2020-05-31T19:41:48.469908249Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Server initializing target LDS"}
    2020-05-31T19:41:48.469976025Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"gRPC mux addWatch for type.googleapis.com/envoy.config.listener.v3.Listener"}
    2020-05-31T19:41:48.470006702Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Resuming discovery requests for type.googleapis.com/envoy.api.v2.RouteConfiguration"}
    2020-05-31T19:41:48.470085386Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Resuming discovery requests for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment"}
    2020-05-31T19:41:48.470210956Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"gRPC config for type.googleapis.com/envoy.config.cluster.v3.Cluster accepted with 21 resources with version 1"}
    2020-05-31T19:41:48.644857827Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Received gRPC message for type.googleapis.com/envoy.config.listener.v3.Listener at version 1"}
    2020-05-31T19:41:48.647007609Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Pausing discovery requests for type.googleapis.com/envoy.api.v2.RouteConfiguration"}
    2020-05-31T19:41:48.727764019Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"begin add/update listener: name=https-ingress hash=4718058485297799746"}
    2020-05-31T19:41:48.728079763Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"  filter #0:"}
    2020-05-31T19:41:48.728330770Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"    name: envoy.filters.listener.tls_inspector"}
    2020-05-31T19:41:48.728493010Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"  config: {\n \"@type\": \"type.googleapis.com/google.protobuf.Empty\"\n}"}
    2020-05-31T19:41:48.728623773Z {"level":"debug","service":"envoy","name":"envoy","time":"2020-05-31T19:41:48Z"}
    2020-05-31T19:41:48.729967113Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"  filter #0:"}
    2020-05-31T19:41:48.730052256Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"    name: envoy.filters.network.http_connection_manager"}
          
          
        
    

    Additional context

    Going back to v0.8.3 is working as expected.

  • Websocket error with linkerd dashboard

    Websocket error with linkerd dashboard

    What happened?

    I cannot get the linkerd dashboard live stream of calls to work. I get "websocket error: undefined".

    What did you expect to happen?

    I expect to see a stream of calls to my service. I see the live stream if I port-forward. I do not see the live stream if I proxy with pomerium.

    What's your environment like?

    • Pomerium version (retrieve with pomerium --version or /ping endpoint): v0.5.1
    • Server Operating System/Architecture/Cloud: K8s Rev: v1.14.9-eks-ba3d77

    What's your config.yaml?

    apiVersion: v1
    data:
      config.yaml: "policy: \n  - allow_websockets: true\n    allowed_groups:\n    - [email protected]\n
        \   allowed_users: []\n    from: https://glooe-monitoring.production.tidepool.org\n
        \   to: http://glooe-grafana.gloo-system.svc.cluster.local\n  - allow_websockets:
        true\n    allowed_groups:\n    - [email protected]\n    allowed_users: []\n    from:
        https://linkerd-dashboard.production.tidepool.org\n    to: http://linkerd-dashboard.linkerd.svc.cluster.local:8080\n"
    kind: ConfigMap
    metadata:
      annotations:
        flux.weave.works/antecedent: pomerium:helmrelease/pomerium
      creationTimestamp: "2019-12-12T18:50:52Z"
      labels:
        app.kubernetes.io/instance: pomerium
        app.kubernetes.io/managed-by: Tiller
        app.kubernetes.io/name: pomerium
        helm.sh/chart: pomerium-4.1.2
      name: pomerium
      namespace: pomerium
      resourceVersion: "26735217"
      selfLink: /api/v1/namespaces/pomerium/configmaps/pomerium
      uid: 525ccabf-1d10-11ea-abeb-02c077500bb6
    
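    For readability, the escaped config.yaml string in the ConfigMap above decodes to roughly this policy (addresses redacted as in the original):

    policy:
      - allow_websockets: true
        allowed_groups:
          - [email protected]
        allowed_users: []
        from: https://glooe-monitoring.production.tidepool.org
        to: http://glooe-grafana.gloo-system.svc.cluster.local
      - allow_websockets: true
        allowed_groups:
          - [email protected]
        allowed_users: []
        from: https://linkerd-dashboard.production.tidepool.org
        to: http://linkerd-dashboard.linkerd.svc.cluster.local:8080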

    Additional context

    Add any other context about the problem here.

  • OKTA IDP doesn't work for version 0.10.2

    OKTA IDP doesn't work for version 0.10.2

    Hello,

    What happened?

    We followed this documentation to deploy Pomerium on a K8s cluster (Rancher RKE): https://github.com/pomerium/pomerium/tree/v0.10.2/examples/kubernetes.

    Our configuration works with version 0.8.x, and we tried to upgrade to the latest version, v0.10.2. The Okta authentication works, but access to the backend application defined in the policy is denied.

    We saw a change between these versions for idp_service_account: it now seems that we have to use a base64-encoded JSON value. So we did that:

    cat api_key.json
    
    {
        "api_token": "XXXX OKTA IDP TOKEN XXXX"
    }
    
    cat api_key.json | base64
    ewogICAgImFwaV90b2tlbiI6ICIiWFhYWCBPS1RBIElEUCBUT0tFTiBYWFhYCn0K
    
    

    We used this in the pomerium config.yml file (we also tried the "api_key" format with no success; we found this kind of key in a commit).
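
    As a side note (my own observation, not from the issue): the pasted base64 above decodes to JSON with a doubled opening quote and a missing closing quote around the token, possibly just a redaction artifact. A minimal sketch of a well-formed value, assuming Okta only needs the api_token field:

    # api_key.json must be valid JSON before encoding, e.g.
    #   {"api_token": "XXXX OKTA IDP TOKEN XXXX"}
    # then encode it (for example `base64 -w0 api_key.json`) and paste the output below
    idp_service_account: <base64 of api_key.json>   # placeholder, not a real value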

    The authentication works but we get a denied message. We are pretty sure that the API call to Okta doesn't work, but we can't find a log linked to this behavior (except the log below) in any of the Pomerium services (we use separate services).


    What did you expect to happen?

    Pomerium should accept the connection according to the defined policy (we have the same configuration on each component in 0.8.x and it works).

    What's your environment like?

    • Pomerium version -> 0.10.2
    • Server Operating System/Architecture/Cloud: Centos8 / K8S by RKE rancher

    What's your config.yaml?

    Our kubernetes-config.yaml is:

    insecure_server: true
    grpc_insecure: true
    grpc_address: ":80"
    
    pomerium_debug: true
    authenticate_service_url: https://authenticate.sso.domain.tld
    authorize_service_url: http://pomerium-authorize-service.namespace.svc.cluster.local
    cache_service_url: http://pomerium-cache-service.namespace.svc.cluster.local
    
    override_certificate_name: "*.sso.domain.tld"
    
    idp_provider: okta
    idp_client_id: <OKTA_APP_CLIENT_ID>
    idp_client_secret: <OKTA_APP_CLIENT_SECRET>
    idp_provider_url: https://ourdomain.okta.com
    idp_service_account: ewogICAgImFwaV90b2tlbiI6ICIiWFhYWCBPS1RBIElEUCBUT0tFTiBYWFhYCn0K
    
    policy:
        - from: https://hello.sso.domain.tld
          to: http://hello.namespace.svc.cluster.local
          allowed_groups:
            - <OKTA_GROUP_ID>
    

    What did you see in the logs?

    I don't know if it's linked, but we see this kind of log during authentication:

    7:14PM INF authenticate: session load error error="Bad Request: internal/sessions: session is not found" X-Forwarded-For=["10.42.32.0,10.42.32.11"] X-Forwarded-Host=["authenticate.sso.domain.tld"] X-Forwarded-Port=["443"] X-Forwarded-Proto=["http"] X-Forwarded-Server=["traefik-ingress-controller-external-68c79c4f8-p8w4x"] X-Real-Ip=["10.42.32.0"] ip=127.0.0.1 request-id=XXXXX user_agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
    7:14PM INF authenticate: session load error error="Bad Request: internal/sessions: session is not found" X-Forwarded-For=["10.42.32.0,10.42.32.11"] X-Forwarded-Host=["authenticate.sso.domain.tld"] X-Forwarded-Port=["443"] X-Forwarded-Proto=["http"] X-Forwarded-Server=["traefik-ingress-controller-external-68c79c4f8-p8w4x"] X-Real-Ip=["10.42.32.0"] ip=127.0.0.1 request-id=YYYYY user_agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
    

    Thanks for your help,

  • `Access-Control-Allow-Origin` error on authenticate service

    `Access-Control-Allow-Origin` error on authenticate service

    What happened?

    I'm experiencing a strange and problematic situation; I suppose it started with v0.5.0, because I've never seen this before.

    At first it seemed similar to #390 but this one is about the pomerium service not answering CORS correctly.

    Basically what happens is something like this:

    • An SPA is making XHR calls without problems (with the X-Requested-With header)
    • at one point, one of the request is considered as needing reauth by the proxy
    • so the proxy returns a redirect response toward the authenticate service
    • the browser tries to validate that this can be done, with an OPTIONS preflight request
    • we get the following error in the browser:
    Access to XMLHttpRequest at 'https://auth.example.com/.pomerium/sign_in?redirect_uri=https%3A%2F%2Fapp.example.com%2Fapi%stuff%2F10&sig=nSjPGT0tgnrsizrhZnWZZ0WvYSI_Zyy0UaMXkY-vdtg%3D&ts=1574843825' (redirected from 'https://app.example.com/api/stuff/10') from origin 'https://app.example.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
    

    What did you expect to happen?

    It seems to me that it is the authenticate service that does not answer CORS requests, while I think it should.
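
    For anyone hitting the same wall: Pomerium's reference docs describe a per-route cors_allow_preflight option that lets CORS preflight (OPTIONS) requests bypass authentication; availability may depend on version, and the upstream name below is a placeholder, not from this issue. A minimal sketch:

    policy:
      - from: https://app.example.com
        to: http://app-backend.internal   # placeholder upstream, not from the issue
        # allow unauthenticated CORS preflight (OPTIONS) requests on this route
        cors_allow_preflight: true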

    Environment:

    • Pomerium version (retrieve with pomerium --version or /ping endpoint): v0.5.0
    • Server Operating System/Architecture/Cloud: AKS

    What did you see in the logs?

    The logs are not very clear about what happens.

    authenticate

    {
        "level": "info",
        "fwd_ip": [
            "86.234.73.194"
        ],
        "ip": "10.242.1.28",
        "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:70.0) Gecko/20100101 Firefox/70.0",
        "referer": "https://app.example.com/optimize/stuff/10",
        "req_id": "08388fd6-6fcb-99e4-075a-12f4c86c4189",
        "error": "internal/sessions: session is not found",
        "time": "2019-11-27T08:32:05Z",
        "message": "authenticate: verify session"
    }
    {
        "level": "debug",
        "fwd_ip": [
            "86.234.73.194"
        ],
        "ip": "10.242.1.28",
        "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:70.0) Gecko/20100101 Firefox/70.0",
        "referer": "https://app.example.com/optimize/stuff/10",
        "req_id": "08388fd6-6fcb-99e4-075a-12f4c86c4189",
        "duration": 0.227797,
        "size": 841,
        "status": 302,
        "email": "",
        "group": "",
        "method": "GET",
        "service": "authenticate",
        "host": "auth.example.com",
        "path": "/.pomerium/sign_in?redirect_uri=https%3A%2F%2Fapp.example.com%2Fapi%2Fstuffs%2F10&sig=oMLbIJX2xjMf2-YkzmZgCrdDYBJSSR5IDdxv7blDN_o%3D&ts=1574843525",
        "time": "2019-11-27T08: 32: 05Z",
        "message": "http-request"
    }
    {
        "level": "debug",
        "fwd_ip": [
            "109.220.184.108"
        ],
        "ip": "10.242.0.22",
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0",
        "referer": "https://app.example.com/optimize/stuff/10",
        "req_id": "54dd9be1-43d9-65c4-16b6-f7ab64ba348e",
        "duration": 0.308897,
        "size": 0,
        "status": 200,
        "email": "",
        "group": "",
        "method": "OPTIONS",
        "service": "authenticate",
        "host": "auth.example.com",
        "path": "/.pomerium/sign_in?redirect_uri=https%3A%2F%2Fapp.example.com%2Fapi%2Fstuffs%2F10&sig=1bga3DYmFYiNUea7g_Fk4uTkeic7G34dOeWlt9eJWAM%3D&ts=1574843638",
        "time": "2019-11-27T08: 33: 58Z",
        "message": "http-request"
    }
    
  • proxy: grpc client should retry connections to services on failure

    proxy: grpc client should retry connections to services on failure

    Describe the bug

    I restarted pomerium, tried to login with my user and got a 500 error. After refreshing the page, I'm correctly logged in.

    To Reproduce

    Steps to reproduce the behavior:

    1. Restart pomerium with a fresh set of secrets (to ensure user has to log again)
    2. Go to a protected service and log in
    3. Observe the 500 error

    Logs of the proxy:

    {"level":"error","fwd_ip":"10.4.0.1","ip":"10.4.0.42","user_agent":"Mozilla/5.0 (X11; Linux x86_64; rv:66.0) Gecko/20100101 Firefox/66.0","referer":"https://accounts.google.com/signin/oauth/oauthchooseaccount?client_id=XXXXXXXXX&flowName=GeneralOAuthFlow","req_id":"017ee31d-aad7-5207-a989-a834895ca395","error":"rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 10.7.240.43:443: i/o timeout\"","time":"2019-03-25T10:00:16Z","message":"proxy: error redeeming authorization code"}
    

    There is no error in the authenticate service.

    Expected behavior

    Users should be able to log in at any time :)

    Environment:

    • Pomerium version (retrieve with pomerium --version): v0.0.2+45e6a8d
    • Server Operating System/Architecture/Cloud: GKE / GSuite
  • JWT_CLAIMS_HEADERS=email won't generate X-Pomerium-Claim-Email

    JWT_CLAIMS_HEADERS=email won't generate X-Pomerium-Claim-Email

    It looks like JWT_CLAIMS_HEADERS=email no longer generates the X-Pomerium-Claim-Email header for the backend since 0.14.0.

    Reading the relevant doc, it looks like the header can be customized now but I did not expect the old config to stop working.
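
    For anyone else hitting this, a minimal sketch of the map form described in the linked doc (header name mapped to claim), with the header name chosen to match the old behavior; treat this as an assumption to verify against your version:

    jwt_claims_headers:
      X-Pomerium-Claim-Email: email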

  • Pomerium docker image fails to start when supplied with non-letsencrypt wildcard cert

    Pomerium docker image fails to start when supplied with non-letsencrypt wildcard cert

    I have a wildcard cert from Sectigo that is already in use on other web servers, so I wanted to reuse it for Pomerium. When I try to launch Pomerium, it says no certificate was supplied.

    Here is the content of my docker-compose.yaml file:

    version: "3"
    services:
      pomerium:
        image: pomerium/pomerium:latest
        environment:
          # Generate new secret keys. e.g. `head -c32 /dev/urandom | base64`
          - COOKIE_SECRET=*******
        volumes:
          # Mount your domain's certificates : https://www.pomerium.io/docs/reference/certificates
          - /opt/pomerium/certs/<domain>.<tld>.crt:/pomerium/cert.pem:ro
          - /opt/pomerium/certs/<domain>.<tld>.key:/pomerium/privkey.pem:ro
          # Mount your config file : https://www.pomerium.io/docs/reference/reference/
          - /opt/pomerium/config.yaml:/pomerium/config.yaml:ro
        ports:
          - 443:443
    
      # https://httpbin.corp.beyondperimeter.com --> Pomerium --> http://httpbin
      httpbin:
        image: kennethreitz/httpbin:latest
        expose:
          - 80
    

    What's your environment like?

    Centos7 VM running on Xen with the latest docker-ce and docker compose installed.

    What's your config.yaml?

    # See detailed configuration settings : https://www.pomerium.io/docs/reference/reference/
    authenticate_service_url: https://sub.domain.tld
    
    # identity provider settings : https://www.pomerium.io/docs/identity-providers.html
    idp_provider: azure
    idp_provider_url: https://login.microsoftonline.com/<azure tenant>/v2.0/
    idp_client_id: <azure app id>
    idp_client_secret: <azure key>
    
    policy:
      - from: https://sub.domain.tld
        to: http://<internal-ip>
        allowed_domains:
          - domain.tld
          - domain.tld
          - domain.tld
          - domain.tld
    #  - from: https://external-httpbin.corp.beyondperimeter.com
    #    to: https://httpbin.org
    #    allow_public_unauthenticated_access: true
    
    

    What did you see in the logs?

    pomerium_1  | {"level":"fatal","error":"config: options from viper validation error config:no certificates supplied nor was insecure mode set","time":"2020-02-06T19:46:43Z","message":"cmd/pomerium"}
    httpbin_1   | [2020-02-06 19:46:44 +0000] [1] [INFO] Starting gunicorn 19.9.0
    httpbin_1   | [2020-02-06 19:46:44 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
    httpbin_1   | [2020-02-06 19:46:44 +0000] [1] [INFO] Using worker: gevent
    httpbin_1   | [2020-02-06 19:46:44 +0000] [8] [INFO] Booting worker with pid: 8
    pomerium_pomerium_1 exited with code 1
    
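    One thing worth double-checking (an assumption on my part, not confirmed from the issue): the compose file mounts the cert and key into the container, but the config above never references them. A minimal sketch pointing Pomerium at the mounted paths via the standard certificate options:

    certificate_file: /pomerium/cert.pem
    certificate_key_file: /pomerium/privkey.pem
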
  • Impersonate doesn't seem to work

    Impersonate doesn't seem to work

    Describe the bug

    After entering a user email or a group in the impersonate input (user and/or group) and clicking on impersonate, nothing happens: it seems there is a 302 that redirects me to the same /.pomerium URL, but I'm still logged in as myself and not as the impersonated user.

    Expected behavior

    To have my session updated as if I were logged in as the user (or the group).

    Environment:

    • Pomerium version (retrieve with pomerium --version): v0.0.5
    • Server Operating System/Architecture/Cloud: AKS

    Configuration file(s): See #152, as always ;)

    Logs:

    {"level":"debug","fwd_ip":"10.240.0.66","ip":"10.240.0.52","user_agent":"Mozilla/5.0 (X11; Linux x86_64; rv:67.0) Gecko/20100101 Firefox/67.0","referer":"https://app.xxx.com/.pomerium","req_id":"af0966fc-1965-10f6-3905-5994d80923ea","duration":5.587115,"size":0,"status":302,"email":"","group":"","method":"POST","service":"proxy","url":"/.pomerium/impersonate","time":"2019-06-11T15:27:19Z","message":"http-request"}
    {"level":"debug","fwd_ip":"10.240.0.66","ip":"10.240.0.52","user_agent":"Mozilla/5.0 (X11; Linux x86_64; rv:67.0) Gecko/20100101 Firefox/67.0","referer":"https://app.xxx.com/.pomerium","req_id":"8bc2ad55-c02b-8991-0aeb-27b85d4b5e26","duration":5.144314,"size":12244,"status":200,"email":"[email protected]","group":"[email protected],[email protected],[email protected]","method":"GET","service":"proxy","url":"/.pomerium","time":"2019-06-11T15:27:19Z","message":"http-request"}
    
  • Authenticate redirect (/.pomerium/sign_in) returns 404

    Authenticate redirect (/.pomerium/sign_in) returns 404

    Accessing the application over the proxy (ingress-controller enabled) correctly redirects to the authenticate service (which is also proxied over the ingress-controller), but the authenticate service (pod) responds with a 404.

    The route should be supported by the authenticate service, and a redirect to the configured IdP is expected.

    1. Run a kind cluster (with port-mapping)
    2. Add entry to hosts file: "127.0.0.1 httpbin.example.com"
    3. Deploy pomerium using helm chart (see values below)
    4. Deploy httpbin as a sample application
    5. Access the application (https://httpbin.example.com/status/200)

    I'm redirected to: https://authenticate.my-tenant-ingress-8.com/.pomerium/sign_in?pomerium_expiry=1671710089&pomerium_idp_id=Gm3M1n2jTyZZtLFMSg8Au2K2GVnFTwWgvMe3YjRczX3v&pomerium_issued=1671709789&pomerium_redirect_uri=https%3A%2F%2Fhttpbin.example.com%2Fstatus%2F200&pomerium_signature=ouTzea7shGhFZdJ-pGp0c9en5waky1_y9d25FJ5Wt_0%3D, which gives a 404 response.

    What's your environment like?

    Running on a kind cluster (so locally; dockerized)

    • Pomerium chart: 33.0.1
    • Pomerium version: pomerium/pomerium:v0.20.0
    • Pomerium ingress-controller: pomerium/ingress-controller:sha-5623bd8
    • Server Operating System/Architecture/Cloud: Ubuntu

    Configs

    config:
      rootDomain: my-tenant-ingress-8.com
      generateTLS: true
      extraOpts:
        pomerium_debug: true
    
    authenticate:
      proxied: true
      idp:
        provider: "google"
        clientID: "*****"
        clientSecret: "*******"
    
    proxy:
      service:
        type: NodePort
        nodePort: 31111
    
    ingressController:
      enabled: true
    
    forwardAuth:
      enabled: false
    
    ingress:
      enabled: false
    

    This gives me the following pomerium config:

    (base) ➜  poc8-istio-gateway git:(main) ✗ kubectl get secret pomerium -n pomerium -o jsonpath='{.data.config\.yaml}' | base64 --decode
    autocert: false
    dns_lookup_family: V4_ONLY
    address: :443
    grpc_address: :443
    certificate_authority_file: "/pomerium/ca/ca.crt"
    certificates:
    authenticate_service_url: https://authenticate.my-tenant-ingress-8.com
    authorize_service_url: https://pomerium-authorize.pomerium.svc.cluster.local
    databroker_service_url: https://pomerium-databroker.pomerium.svc.cluster.local
    idp_provider: google
    idp_scopes: 
    idp_provider_url: 
    
    pomerium_debug: true
    idp_client_id: *****
    idp_client_secret: ******
    routes:
    
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      # labels:
      #   istio-injection: enabled
      name: httpbin
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: httpbin
      namespace: httpbin
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: httpbin
      labels:
        app: httpbin
        service: httpbin
      namespace: httpbin
    spec:
      ports:
      - name: http
        port: 8000
        targetPort: 80
      selector:
        app: httpbin
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: httpbin
      namespace: httpbin
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: httpbin
          version: v1
      template:
        metadata:
          labels:
            app: httpbin
            version: v1
        spec:
          serviceAccountName: httpbin
          containers:
          - image: docker.io/kennethreitz/httpbin
            imagePullPolicy: IfNotPresent
            name: httpbin
            ports:
            - containerPort: 80
    
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        ingress.pomerium.io/allow_any_authenticated_user: 'true'
      name: httpbin
      namespace: httpbin
    spec:
      ingressClassName: pomerium
      rules:
      - host: httpbin.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin
                port:
                  name: http
    

    What did you see in the logs?

    # proxy
    11:49AM INF http-request authority=httpbin.example.com duration=19.660079 forwarded-for=10.244.0.1 method=GET path=/status/200 referer= request-id=45431b0a-3e35-4d27-9465-b7830d09269e response-code=302 response-code-details=ext_authz_denied service=envoy size=1434 upstream-cluster= user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"
    11:49AM INF http-request authority=authenticate.my-tenant-ingress-8.com duration=6.486118 forwarded-for=10.244.0.1 method=GET path=/.pomerium/sign_in referer= request-id=52a6f5dc-85d0-4513-8c17-3446f61383c5 response-code=404 response-code-details=via_upstream service=envoy size=19 upstream-cluster=pomerium-pomerium-authenticate-authenticate-my-tenant-ingress-8-com-68f5db2bbebb985c user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"
    
    # authenticate
    11:49AM INF http-request authority=authenticate.my-tenant-ingress-8.com duration=1.112567 forwarded-for=10.244.0.1,10.244.0.10 method=GET path=/.pomerium/sign_in referer= request-id=303aebaf-5bc4-4d59-993c-88580a030a2c response-code=404 response-code-details=via_upstream service=envoy size=19 upstream-cluster=pomerium-control-plane-http user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"
    
    # authorize
    11:49AM ERR httputil: error error=Login request-id=45431b0a-3e35-4d27-9465-b7830d09269e status=302 status-text=Found
    11:49AM INF authorize check allow=false allow-why-false=["non-pomerium-route","user-unauthenticated"] check-request-id=45431b0a-3e35-4d27-9465-b7830d09269e deny=false deny-why-false=["valid-client-certificate-or-none-required"] email= host=httpbin.example.com ip=10.244.0.1 method=GET path=/status/200 query= request-id=45431b0a-3e35-4d27-9465-b7830d09269e service=authorize user=
    

    Additional context

    I found the issue can be resolved by explicitly whitelisting the authenticate route in the config.yaml:

    config:
      routes:
        - from: https://authenticate.my-tenant-ingress-8.com
          to: https://pomerium-authenticate.pomerium.svc.cluster.local
          allow_public_unauthenticated_access: true
    

    This is weird, since the helm chart generates the ingress for the authenticate service, which should already whitelist the route. Also, this route is explicitly NOT rendered when using the ingress-controller and setting authenticate to be proxied...

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        ingress.pomerium.io/allow_public_unauthenticated_access: "true"
        ingress.pomerium.io/secure_upstream: "true"
        meta.helm.sh/release-name: pomerium
        meta.helm.sh/release-namespace: pomerium
      creationTimestamp: "2022-12-22T11:47:01Z"
      generation: 1
      labels:
        app.kubernetes.io/instance: pomerium
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: pomerium
        helm.sh/chart: pomerium-33.0.1
      name: pomerium-authenticate
      namespace: pomerium
      resourceVersion: "531"
      uid: 2b4bebc0-a4af-44e5-833f-883b1ccafccf
    spec:
      ingressClassName: pomerium
      rules:
      - host: authenticate.my-tenant-ingress-8.com
        http:
          paths:
          - backend:
              service:
                name: pomerium-authenticate
                port:
                  name: https
            path: /
            pathType: Prefix
      tls:
      - hosts:
        - authenticate.my-tenant-ingress-8.com
        secretName: pomerium-authenticate-tls
    status:
      loadBalancer: {}
    
  • Unable to connect to local OIDC

    Unable to connect to local OIDC

    What happened?

    I've installed the Pomerium ingress controller and the Dex IdP in a Kubernetes cluster. Dex runs without TLS but is exposed via ingress as https://dex.cluster.local with TLS. I was unable to get Pomerium to integrate with Dex in any way.

    I could not configure the IdP URL to point at the internal Dex service (http://dex.dex.svc.cluster.local) because the CRD rejects URLs that don't match ^https://. I could not make Pomerium recognize that it should use the ingress URL when connecting to Dex.

    I even tried to add an internal DNS record pointing dex.cluster.local to dex.dex.svc.cluster.local, but then I hit the issue that Dex is not serving TLS.

    Is there a way to connect to an internal IdP without the IdP exposing TLS directly?

    What did you expect to happen?

    A clear way to configure the ingress controller with an internal IdP on Kubernetes.

    How'd it happen?

    Errors I see are these:

    {"level":"error","X-Forwarded-For":["10.42.0.0"],"X-Forwarded-Proto":["https"],"ip":"127.0.0.1","user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36","request-id":"498a0170-9d6e-4f03-b016-8743c63e94e1","error":"failed to get sign in url: identity/oidc: could not connect to oidc: Get \"https://dex.cluster.local/.well-known/openid-configuration\": dial tcp 10.43.195.81:443: connect: connection refused","status":500,"status-text":"Internal Server Error","request-id":"498a0170-9d6e-4f03-b016-8743c63e94e1","time":"2022-12-14T12:40:29Z","message":"httputil: error"}
    

    That's because Dex is not serving TLS.

    What's your environment like?

    • Pomerium version (retrieve with pomerium --version): v0.20.0 tag from ingress-controller repo
    • Server Operating System/Architecture/Cloud: K3s on Docker via K3d.
  • kubernetes integration test fails

    kubernetes integration test fails

    What happened?

    The Kubernetes integration test always seems to fail now: https://github.com/pomerium/pomerium/actions/runs/3528649436/jobs/5927568091#step:8:24. It can't start k3s for some reason.

    What did you expect to happen?

    The Kubernetes integration tests to succeed.

    How'd it happen?

    Made a pull request.

    What's your environment like?

    • 52c967b8a52ac55b9823368f5b111bb338d3e106
  • Provide a way to deal with ssl on the IP address

    Provide a way to deal with ssl on the IP address

    Is your feature request related to a problem? Please describe.

    Currently, when someone visits the IP address of the server, they are presented with the Pomerium self-signed cert.

    I had issues in the past where the self-signed cert was offered for URLs that were in a cert but not in the Routes config. That was fixed, thanks for this!

    However, it would seem that offering the self-signed cert is worse than offering a signed cert for the wrong domain, at least according to various "vulnerability" scanning tools.

    It seems that using certbot (like I do) means there is no option to create a signed cert for the IP.

    I see Nginx and friends will offer a signed cert (if configured) for the IP, even though it does not correspond to the domain. This does NOT create red flags on the BitSight scanner like the self-signed cert does.

    Describe the solution you'd like

    Create a setting where one can avoid having the self-signed cert offered up for any (or certain) domains.

    Describe alternatives you've considered

    I've set up a route for the IP URL in question, but that doesn't help much because I can't force the route to use a certain cert.

    Explain any additional use-cases

    Anyone who is pestered by management that wants no "red flags" on their system will want something like this. That is, unless Pomerium is behind another proxy, which seems counter to the power of Pomerium.

    If one were able to set a default cert, or set a cert per route, that may well be a use-case others find handy. However, I can understand why this may not be something to encourage.

  • Infinite loop after switching from operator to ingress controller.

    Infinite loop after switching from operator to ingress controller.

    (Disclaimer: I know it's not a 'bug' but rather something I don't understand; I decided to ask here, as it's crucial to restore a working Pomerium. Sorry for the mess.)

    I've updated my EKS cluster from 1.21 to 1.22. Unfortunately, this made the operator stop working because of the removal of the v1beta1 Ingress API.

    I've switched in my config (everything is managed by helm/helmfile) from:

    operator:
      enabled: true
      config:
        ingressClass: traefik-{{ .Environment.Name }}
      replicaCount: 1
    

    to:

    ingressController:
      enabled: true
      ingressClassResource:
        default: true
      config:
        ingressClass: traefik-{{ .Environment.Name }}
        operatorMode: true
    

    All pods are up & running, but I have an infinite loop ('too many redirects' in Chrome, 'this page isn't redirected properly' in Firefox). Am I missing something here?

    Additional info: I also have a middleware named forward-auth configured in my pomerium namespace. It uses http://pomerium-proxy.pomerium as the forwardAuth address (pomerium-proxy is the name of the service running the proxy). All my Ingresses have the annotations:

    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.middlewares: pomerium-forward-auth@kubernetescrd
    
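    For context, a sketch of what that forward-auth Middleware typically looks like, reconstructed from the description above (the apiVersion and the trustForwardHeader field are assumptions, not taken from this issue):

    apiVersion: traefik.containo.us/v1alpha1
    kind: Middleware
    metadata:
      name: forward-auth
      namespace: pomerium
    spec:
      forwardAuth:
        address: http://pomerium-proxy.pomerium
        # assumption: commonly set so the original Host/X-Forwarded-* headers reach Pomerium
        trustForwardHeader: true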

    Versions:

    • chart: 25.0.0 (I know it's a bit old, but for now I don't have time to upgrade to the most recent version; I just need this basic functionality working);
    • pomerium version: 0.15.7
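
    For reference, the forward-auth Middleware described above would look roughly like the following. The address comes from the description; trustForwardHeader and authResponseHeaders are assumptions, not copied from the reporter's actual config:

    apiVersion: traefik.containo.us/v1alpha1
    kind: Middleware
    metadata:
      name: forward-auth
      namespace: pomerium
    spec:
      forwardAuth:
        # address taken from the report above; the remaining fields are assumed
        address: http://pomerium-proxy.pomerium
        trustForwardHeader: true
        authResponseHeaders:
          - x-pomerium-jwt-assertion
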
  • Allow basic-auth for programmatic access

    Allow basic-auth for programmatic access

    Summary

    Allow basic-auth as a more practical way to authenticate programmatically against a wide range of web apps.

    Related issues

    Fixes #3687

    User Explanation

    Check whether basic-auth headers are present and, if so, use them to authenticate your app programmatically, using pomerium as the default username and the JWT as the password.
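
    As a minimal client-side sketch (assuming the scheme lands as described; the URL and environment variable below are placeholders, not part of this PR):

    // Sketch: call a Pomerium-protected route using basic auth, with "pomerium"
    // as the username and a previously obtained Pomerium JWT as the password.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        // POMERIUM_JWT is assumed to hold a JWT obtained via the programmatic login flow.
        jwt := os.Getenv("POMERIUM_JWT")

        req, err := http.NewRequest(http.MethodGet, "https://app.example.com/api", nil)
        if err != nil {
            panic(err)
        }
        req.SetBasicAuth("pomerium", jwt)

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }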

    Checklist

    • [ ] reference any related issues
    • [ ] updated docs
    • [ ] updated unit tests
    • [ ] updated UPGRADING.md
    • [ ] add appropriate tag (improvement / bug / etc)
    • [ ] ready for review
Cost-aware network traffic analysis

Traffic Refinery Overview Traffic Refinery is a cost-aware network traffic analysis library implemented in Go For a project overview, installation inf

Nov 21, 2022
Sesame: an Ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer

Sesame Overview Sesame is an Ingress controller for Kubernetes that works by dep

Dec 28, 2021
Fast, concurrent, streaming access to Amazon S3, including gof3r, a CLI. http://godoc.org/github.com/rlmcpherson/s3gof3r

s3gof3r s3gof3r provides fast, parallelized, pipelined streaming access to Amazon S3. It includes a command-line interface: gof3r. It is optimized for

Dec 26, 2022
Google Compute Engine (GCE) VM takeover via DHCP flood - gain root access by getting SSH keys added by google_guest_agent

Abstract This is an advisory about an unpatched vulnerability (at time of publishing this repo, 2021-06-25) affecting virtual machines in Google's Com

Nov 9, 2022
Access your Kubernetes Deployment over the Internet

Kubexpose: Access your Kubernetes Deployment over the Internet Kubexpose makes it easy to access a Kubernetes Deployment over a public URL. It's a Kub

Dec 5, 2022
GitOops is a tool to help attackers and defenders identify lateral movement and privilege escalation paths in GitHub organizations by abusing CI/CD pipelines and GitHub access controls.

GitOops is a tool to help attackers and defenders identify lateral movement and privilege escalation paths in GitHub organizations by abusing CI/CD pipelines and GitHub access controls.

Jan 2, 2023
Terraform provider to access CEPH S3 API

terraform-provider-ceph (S3) A very simple Terraform provider to create/delete buckets via CEPH S3 API. Build and install go build -o terraform-provid

Nov 26, 2021
The Masa Testnet and access bootnodes and node IP's

Masa Testnet Node V1.0 Get An OpenVPN File You must be connected to our OpenVPN network in order to join the Masa Testnet and access bootnodes an

Dec 17, 2022
Using this you can access node external ip address value from your pod.

Using this you can access node external ip address value from your pod.

Jan 30, 2022
A small utility to generate a kubectl configuration file for all clusters you have access to in GKE.

gke-config-helper A small utility to generate a kubectl configuration file for all clusters you have access to in GKE. Usage $ gke-config-helper The b

Feb 9, 2022
An Oracle Cloud (OCI) Pulumi resource package, providing multi-language access to OCI

Oracle Cloud Infrastructure Resource Provider The Oracle Cloud Infrastructure (OCI) Resource Provider lets you manage OCI resources. Installing This p

Dec 2, 2022
easyssh-proxy provides a simple implementation of some SSH protocol features in Go

easyssh-proxy easyssh-proxy provides a simple implementation of some SSH protocol features in Go. Feature This project is forked from easyssh but add

Dec 30, 2022
S3 Reverse Proxy with GET, PUT and DELETE methods and authentication (OpenID Connect and Basic Auth)

Menu Why ? Features Configuration Templates Open Policy Agent (OPA) API GET PUT DELETE AWS IAM Policy Grafana Dashboard Prometheus metrics Deployment

Jan 2, 2023
The Cloud Native Application Proxy

Traefik (pronounced traffic) is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy. Traefik integrates with your ex

Jan 9, 2023
Reworking kube-proxy's architecture

Kubernetes Proxy NG The Kubernetes Proxy NG a new design of kube-proxy aimed at allowing Kubernetes business logic to evolve with minimal to no impact

Jan 3, 2023
An Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer

NGINX Ingress Controller Overview ingress-nginx is an Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer. Learn more a

Nov 15, 2021
Custom Terraform provider that allows provisioning VGS Proxy Routes.

VGS Terraform Provider Custom Terraform provider that allows provisioning VGS Proxy Routes. How to Install Requirements: terraform ver 0.12 or later M

Mar 12, 2022
A Cloud-Native Network Proxy

Introduction ServiceCar is a cloud-native network proxy that run on cloud and edge and embraces the diversity of languages and developer frameworks. S

May 20, 2022
stratus is a cross-cloud identity broker that allows workloads with an identity issued by one cloud provider to exchange this identity for a workload identity issued by another cloud provider.

stratus stratus is a cross-cloud identity broker that allows workloads with an identity issued by one cloud provider to exchange this identity for a w

Dec 26, 2021
A cloud native Identity & Access Proxy / API (IAP) and Access Control Decision API

Heimdall Heimdall is inspired by Ory's OAthkeeper, tries however to resolve the functional limitations of that product by also building on a more mode

Jan 6, 2023