Skooner - Kubernetes Dashboard

Simple Kubernetes real-time dashboard and management.

We are changing our name from k8dash to Skooner! Please bear with us as we update our documentation and codebase to reflect this change. If you previously installed k8dash, you will need to uninstall it from your cluster and install Skooner instead. In most cases, this can be done by running kubectl delete deployment,service k8dash

Skooner is the easiest way to manage your Kubernetes cluster. Skooner is now a sandbox project of the Cloud Native Computing Foundation!

  • Full cluster management: Namespaces, Nodes, Pods, Replica Sets, Deployments, Storage, RBAC and more
  • Blazing fast and Always Live: no need to refresh pages to see the latest cluster status
  • Quickly visualize cluster health at a glance: Real time charts help quickly track down poorly performing resources
  • Easy CRUD and scaling: plus inline API docs to easily understand what each field does
  • 100% responsive (runs on your phone/tablet)
  • Simple OpenID integration: no special proxies required
  • Simple installation: use the provided YAML resources to have Skooner up and running in under 1 minute (no, seriously)
  • See Skooner in action:

Table of Contents

Prerequisites

(Back to Table of Contents)

Getting Started

Deploy Skooner with something like the following...

NOTE: never trust a file downloaded from the internet. Make sure to review the contents of kubernetes-skooner.yaml before running the script below.

kubectl apply -f https://raw.githubusercontent.com/skooner-k8s/skooner/master/kubernetes-skooner.yaml
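Once applied, you can confirm the deployment rolled out before moving on. A quick sketch (the kube-system namespace, the skooner deployment name, and the k8s-app=skooner label are assumed to match the upstream manifest; adjust if you customized them):

```shell
# Wait for the Skooner deployment to become available
kubectl -n kube-system rollout status deployment/skooner

# Confirm the pod and service exist (label assumed from the manifest)
kubectl -n kube-system get pods,svc -l k8s-app=skooner
```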

To access Skooner, you must make it publicly visible. If you have an ingress server set up, you can accomplish this by adding a route like the following:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: skooner
  namespace: kube-system
spec:
  rules:
  - host: skooner.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: skooner
            port:
              number: 80

(Back to Table of Contents)

kubectl proxy

Unfortunately, kubectl proxy cannot be used to access Skooner. According to this comment, it seems that kubectl proxy strips the Authorization header when it proxies requests.

this is working as expected. "proxying" through the apiserver will not get you standard proxy behavior (preserving Authorization headers end-to-end), because the API is not being used as a standard proxy
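Since kubectl proxy strips the Authorization header, kubectl port-forward is a commonly used alternative for local access: it tunnels directly to the service without rewriting requests. A sketch, assuming the service name and namespace from the upstream manifest:

```shell
# Forward local port 8080 to the Skooner service's port 80
kubectl -n kube-system port-forward service/skooner 8080:80

# Then browse to http://localhost:8080 and log in with a token
```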

(Back to Table of Contents)

Logging in

There are multiple options for logging into the dashboard: Service Account Token, OIDC, and NodePort.

Service Account Token

The first (and easiest) option is to create a dedicated service account. In the command line:

# Create the service account in the current namespace (we assume default)
kubectl create serviceaccount skooner-sa

# Give that service account root on the cluster
kubectl create clusterrolebinding skooner-sa --clusterrole=cluster-admin --serviceaccount=default:skooner-sa

# Find the secret that was created to hold the token for the SA
kubectl get secrets

# Show the contents of the secret to extract the token
kubectl describe secret skooner-sa-token-xxxxx

Copy the token value from the secret, and enter it into the login screen to access the dashboard.
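Note that on Kubernetes 1.24 and later, a long-lived token Secret is no longer created automatically for new service accounts, so kubectl get secrets may show nothing for skooner-sa. In that case you can request a token directly; a sketch (the --duration flag is optional, and the server may cap it):

```shell
# Kubernetes >= 1.24: mint a token for the service account directly
kubectl create token skooner-sa

# Optionally request a longer-lived token
kubectl create token skooner-sa --duration=24h
```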

OIDC

Skooner makes using OpenID Connect for authentication easy. Assuming your cluster is configured to use OIDC, all you need to do is create a secret containing your credentials and apply kubernetes-skooner-oidc.yaml.

To learn more about configuring a cluster for OIDC, check out these great links

You can deploy Skooner with OIDC support using something like the following script...

NOTE: never trust a file downloaded from the internet. Make sure to review the contents of kubernetes-skooner-oidc.yaml before running the script below.

OIDC_URL=<put your endpoint url here... something like https://accounts.google.com>
OIDC_ID=<put your id here... something like blah-blah-blah.apps.googleusercontent.com>
OIDC_SECRET=<put your oidc secret here>

kubectl create secret -n kube-system generic skooner \
--from-literal=url="$OIDC_URL" \
--from-literal=id="$OIDC_ID" \
--from-literal=secret="$OIDC_SECRET"

kubectl apply -f https://raw.githubusercontent.com/skooner-k8s/skooner/master/kubernetes-skooner-oidc.yaml
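If OIDC login later fails with an invalid-credentials error, it is worth confirming that the secret values round-trip cleanly: Kubernetes stores Secret data base64-encoded, and a stray trailing newline introduced while encoding by hand is a classic cause. A minimal local sketch with a placeholder value:

```shell
# Kubernetes stores Secret data base64-encoded. Use printf (not echo)
# when encoding by hand, so no trailing newline sneaks into the value.
secret='example-oidc-secret'   # placeholder, not a real credential

encoded=$(printf '%s' "$secret" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)

# The decoded value must match the original exactly
[ "$decoded" = "$secret" ] && echo 'round-trip OK'
```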

Additionally, you can provide other OIDC options via these environment variables:

  • OIDC_SCOPES: The default value is openid email, but additional scopes can be added using something like OIDC_SCOPES="openid email groups"
  • OIDC_METADATA: Skooner uses the excellent node-openid-client module. OIDC_METADATA takes a JSON string and passes it to the Client constructor. Docs here. For example, OIDC_METADATA='{"token_endpoint_auth_method":"client_secret_post"}'
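These variables end up as plain environment variables on the Skooner container, so they can also be set inline on the Deployment instead of (or alongside) the secret. A sketch of the relevant container fragment (the field layout is assumed from a standard Deployment spec, not copied from kubernetes-skooner-oidc.yaml):

```yaml
# Fragment of the Skooner container spec: extra OIDC options as env vars
env:
- name: OIDC_SCOPES
  value: "openid email groups"
- name: OIDC_METADATA
  value: '{"token_endpoint_auth_method":"client_secret_post"}'
```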

NodePort

If you do not have an ingress server set up, you can use a NodePort service as configured in kubernetes-skooner-nodeport.yaml. This is ideal when running a single-node cluster, or when you want to get up and running as fast as possible.

This will map Skooner's port 4654 to a randomly selected port on the node. The assigned port can be found using:

$ kubectl get svc --namespace=kube-system

NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
skooner   NodePort   10.107.107.62   <none>        4654:32565/TCP   1m
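If you just need the assigned port for scripting, jsonpath can extract it directly; a sketch assuming the service name from the NodePort manifest:

```shell
# Print only the randomly assigned node port for the Skooner service
kubectl -n kube-system get svc skooner \
  -o jsonpath='{.spec.ports[0].nodePort}'
```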
Metrics

Skooner relies heavily on metrics-server to display real time cluster metrics. It is strongly recommended to have metrics-server installed to get the best experience from Skooner.

(Back to Table of Contents)

Development

You will need:

  • A running Kubernetes cluster
    • Installing and running minikube is an easy way to get this.
    • Once minikube is installed, you can run it with the command minikube start --driver=docker
  • Once the cluster is up and running, create some login credentials as described above

(Back to Table of Contents)

Skooner Architecture

Server

To run the server, run npm i from the /server directory to install dependencies, then npm start to run the server. The server is a simple Express.js server that is primarily responsible for proxying requests to the Kubernetes API server.

During development, the server will use whatever is configured in ~/.kube/config to connect to the desired cluster. If you are using minikube, for example, you can run kubectl config use-context minikube to point ~/.kube/config at the right cluster.
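Before starting the server, it can save confusion to double-check which cluster your kubeconfig currently points at; a quick sketch:

```shell
# Show the context the dev server will pick up from ~/.kube/config
kubectl config current-context

# List all contexts; the active one is marked with an asterisk
kubectl config get-contexts
```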

Client

The client is a React application (using TypeScript) with minimal other dependencies.

To run the client, open a new terminal tab, navigate to the /client directory, and run npm i followed by npm start. This will open a browser window to your local Skooner dashboard. If everything compiles correctly, the site will load and an error message will pop up: Unhandled Rejection (Error): Api request error: Forbidden.... Close it using the 'X' in the top right-hand corner; you should then see the UI where you can enter your token.

(Back to Table of Contents)

License

Apache License 2.0


(Back to Table of Contents)

Comments
  • Config with and in Keycloak


    Hello,

    We have Skooner up and running, but are unable to use OIDC (Keycloak). Is there any example of how to set up Skooner with Keycloak?

    Our config (oidc-secret is dummy):

    OIDC_URL=https://sso.yyyyy.yyyyy.io/auth/realms/sso
    OIDC_ID=kubernetes
    OIDC_SECRET=243434-82b0-49ea-1111-454511a396b2
    OIDC_METADATA='{"token_endpoint_auth_method":"client_secret_post"}'
    OIDC_SCOPES="openid email"
    
    kubectl create secret -n kube-system generic skooner \
    --from-literal=OIDC_URL="$OIDC_URL" \
    --from-literal=OIDC_ID="$OIDC_ID" \
    --from-literal=OIDC_SECRET="$OIDC_SECRET" \
    --from-literal=OIDC_METADATA="$OIDC_METADATA" \
    --from-literal=OIDC_SCOPES="$OIDC_SCOPES"
    

    Our yaml:

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: skooner
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: skooner
      template:
        metadata:
          labels:
            k8s-app: skooner
        spec:
          containers:
          - name: skooner
            image: herbrandson/k8dash:latest
            ports:
            - containerPort: 4654
            livenessProbe:
              httpGet:
                scheme: HTTP
                path: /
                port: 4654
              initialDelaySeconds: 30
              timeoutSeconds: 30
            envFrom:
            - secretRef:
                name: skooner
          nodeSelector:
            'beta.kubernetes.io/os': linux
    
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: skooner
      namespace: kube-system
    spec:
      ports:
        - port: 80
          targetPort: 4654
      selector:
        k8s-app: skooner
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: skooner
      namespace: kube-system
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod  
    spec:
      tls:
      - hosts:
          - skooner.xxxxx.xxxxx.io
        secretName: skooner.xxxxx.xxxxx.io-tls
      rules:
      - host: skooner.xxxxx.xxxxx.io
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: skooner
                port:
                  number: 80
    
    

    Error in Keycloak:

    10:56:54,252 WARN [org.keycloak.services] (default task-14) KC-SERVICES0091: Request is missing scope 'openid' so it's not treated as OIDC, but just pure OAuth2 request.
    .....
    10:46:31,823 WARN [org.keycloak.events] (default task-28) type=LOGIN_ERROR, realmId=sso, clientId=null, userId=null, ipAddress=10.245.4.115, error=invalid_request
    

    Thanks!

  • arm builds


    Would it be possible to publish ARM builds for k8dash? Right now the image is x86-only and I want to use it on my Raspberry Pi 4 cluster. I know how to do this manually and would be willing to investigate how to automate it. https://engineering.docker.com/2019/04/multi-arch-images/

  • Feature Request:  Show ready/not ready, node type via node icons


    First, great job on the dashboard -- it seems a lot more lightweight and less buggy than the standard dashboard. We have a couple of minor UI requests:

    On the nodes page it would be nice if unready nodes showed up by default on top, and maybe with a red icon, instead of the text 'READY' column. Even without the icon change, it would be good to float unready nodes to the top by default (we run on bare metal, so unready nodes are a big deal for us). We have alerts, obviously, but it still seems like a logical change to the UI.

    It would also be great if on that same page it was a bit more obvious which nodes were masters. You can look at the labels, but an icon change (or replacing the READY column with a MASTER column) would be pretty nice.

    Thanks again for all your great work!

  • Auth issues - call to /tokenreviews fails


    Environment: AKS (K8s version 1.12.6)

    With ingress (Nginx): the login page is loaded (GET) but any POST fails because the endpoint returns 404. Error message: Error occured attempting to login. Instead of contacting the API, the request is routed back to the web app.

    Request URL: https://something.com/apis/authentication.k8s.io/v1/tokenreviews
    Request Method: POST
    Status Code: 404
    

    Logs:

    OIDC_URL:  None
    [HPM] Proxy created: /  ->  https://something.hcp.westeurope.azmk8s.io:443
    Server started
    GET /
    GET /static/css/2.7b1d7de3.chunk.css
    GET /static/js/2.ab8f1278.chunk.js
    GET /static/css/main.a9446ed5.chunk.css
    GET /static/js/main.c1206f38.chunk.js
    GET /static/css/2.7b1d7de3.chunk.css.map
    GET /static/css/main.a9446ed5.chunk.css.map
    GET /static/js/2.ab8f1278.chunk.js.map
    GET /oidc
    GET /static/js/main.c1206f38.chunk.js.map
    GET /favicon.ico
    GET /manifest.json
    GET /
    POST /apis/authentication.k8s.io/v1/tokenreviews
    GET /
    

    The same thing happens when port-forwarded.

    Request URL: http://localhost:4654/apis/authentication.k8s.io/v1/tokenreviews
    Request Method: POST
    Status Code: 404 Not Found
    
  • Help on skooner and OIDC connect via keycloak?


    This is what my client looks like (screenshot).

    My Keycloak is running at http://74.220.17.8:8080/. This is what my secret looks like:

    kubectl create secret -n kube-system generic skooner \
    --from-literal=url="$OIDC_URL" \
    --from-literal=id="$OIDC_ID" \
    --from-literal=secret="$OIDC_SECRET"
    

    where OIDC_URL=http://74.220.17.8:8080/realms/master

    When I log in to http://a3869259-c651-4eab-be4e-ddcba12856f1.k8s.civo.com, it shows invalid credentials (screenshot).

    Am I doing anything wrong, or did I give any wrong values?

  • Keycloak support


    Hi,

    I'm using Keycloak as an OIDC provider; did anyone succeed with k8dash?

    I keep getting "invalid credentials" in k8dash, but Keycloak is working fine (I use it for Grafana and the legacy Kubernetes dashboard).

    I just set up a basic OpenID Connect client.

    Sorry for not being more detailed, but if anyone has had this issue....

    I also had a look at the secret base64 encoding, but it doesn't seem to be that.

    Thanks

  • Browser user/password?


    On running the install as per the README, I get prompted for a basic-auth user & password.

    This prevents me from entering the auth token.

    edit: forgot to mention I was trying to access it via kubectl port-forward service/k8dash 8080:80

  • Fix pod and node views in kube system namespace.


    Fixes #276. There are a few instances where an empty object item is returned, and various called functions assume that if the object exists at all, it is fully declared. This PR attempts to check for this in the JSX or the called render function where possible. In the pod component there are several calls directly in the JSX, so the check couldn't be pushed into a common function like in the node component.

    Kube System pod before: pod-before

    Kube System pod after: pod-after

    Node before: node-before

    Node after: node-after

  • Support passing bearer token in header


    Without logging in via the UI, if I pass a Bearer token with a request, it redirects to the token login screen. It would be nice if it recognized that I already have a token and used it, without needing to go through the browser login flow.

  • Problem using OIDC authentication


    Hi, when I try to use my OIDC provider (Keycloak) with k8dash, it doesn't work. In the pod logs I have:

     [HPM] POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews -> https://10.96.0.1:443
     POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews 403
     GET /favicon.ico 200
     GET /static/js/2.db22b280.chunk.js.map 304
     GET /static/js/main.34226f17.chunk.js.map 304
     GET /static/css/main.0d6d7525.chunk.css.map 304
     GET /static/css/2.b522e268.chunk.css.map 304
     (node:8) UnhandledPromiseRejectionWarning: ReferenceError: next is not defined
         at getOidc (/usr/src/app/index.js:79:9)
         at processTicksAndRejections (internal/process/task_queues.js:89:5)
     (node:8) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 5)
    

    and in the browser network tab, for the path /apis/authorization.k8s.io/v1/selfsubjectrulesreviews, I have the response:

    {
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {
        
      },
      "status": "Failure",
      "message": "selfsubjectrulesreviews.authorization.k8s.io is forbidden: User \"system:anonymous\" cannot create resource \"selfsubjectrulesreviews\" in API group \"authorization.k8s.io\" at the cluster scope",
      "reason": "Forbidden",
      "details": {
        "group": "authorization.k8s.io",
        "kind": "selfsubjectrulesreviews"
      },
      "code": 403
    }
    

    I don't understand why k8dash uses the system:anonymous account.

    I'm using k8s version 1.15.4.

  • Invalid credentials with oidc auth with dex


    Hi,

    I get an invalid credentials error like the one below when authenticating with dex as an OIDC provider.

    An error occured during the request { OpenIdConnectError: invalid_client (Invalid client credentials.)
        at Client.requestErrorHandler (/usr/src/app/node_modules/openid-client/lib/helpers/error_handler.js:16:11)
        at processTicksAndRejections (internal/process/next_tick.js:81:5)
      error: 'invalid_client',
      error_description: 'Invalid client credentials.' } POST /oidc
    POST /oidc 500
    

    If I turn off OIDC auth, k8dash asks for a token, and it works if I enter a valid token. Dex authenticates with github.com and works fine with kubectl. Here are the kubectl settings:

    user:
        auth-provider:
          config:
            client-id: kubernetes
            client-secret: ZXhhbXBsZS1hcHAtc2VjcmV0
            extra-scopes: offline_access openid profile email groups
            id-token: REDACTED
            idp-certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMrakNDQWVLZ0F3SUJBZ0lKQU1lRXJhSHYzNXJWTUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIydDFZbVV0WTJFd0hoY05NVGt3TXpNeE1Ua3dPVEE0V2hjTk1Ua3dOREV3TVRrd09UQTRXakFTTVJBdwpEZ1lEVlFRRERBZHJkV0psTFdOaE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCjBkb2NjV3Zpb29xbDRVa05oejFCZ01KV25JU0w5TUExRm1ySEZ4U2hUYysrL1V0VURxMVVlU0xCRXpXTjNZZmcKQm5TQVNBQUNmS0lCRTBDRWJWdzhSTUtodXJReExGT0hQUDBodWtVRGkxNmVnaXBHSjI0WWdWcnJ4cUpVYWxsYQo2cUpaTkdsUHQ3SmxWdWtrSHRlY0hONjVneG0wQjBzMWtwV1VRNFh2L0E2ZldOaHVhV3VqYlRjRWx0SEFtQlJnCmtmMHpRYnV2ZCtMRnl3V0V2VDdBai9ua1FVZko1L21DOTQyUmlYVDNXdUtyc1g1a3F3ellrVU9xN2hOM1B1aVQKU1NYRm9JNUxqQWd5eDVqVEhubDdmb3JWSnhObDYvdEc2eFg4S3BxMmpST3FZSzlUWFdhSFlDVktQeTlMUTFuegpBNG9jTXQyRkFzREY4a2ZMUjBhK2l3SURBUUFCbzFNd1VUQWRCZ05WSFE0RUZnUVVMK1gzejRKWkhDZkg4Ry80Ckl0ZDhUdUZ5ZEV3d0h3WURWUjBqQkJnd0ZvQVVMK1gzejRKWkhDZkg4Ry80SXRkOFR1RnlkRXd3RHdZRFZSMFQKQVFIL0JBVXdBd0VCL3pBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQU1rTFB0dkZoZlZxM0VibUJFU3dER09ZdwpVYjFYS0VKb1JEVGV5dlozamZSWGhTVDlmdmM0bC9GMWVOd1ZKZnhXb0piUjdCU0JmbURiNzR5anBOcGVYS2xZClZVWnE1Mmx1dnlwNDlFNHJOQ1JHTDNzL0NjUnFnV0tqVmxKZWZGakg2TU8zYTZnM0NFZElGNXJSZi8zRXFGSDYKZm9tUkZ0MEw5NzZodmpGRXFyMlVYR01yTk1LMUN6YXJreDhaUXNkekwySGFhMzV6ei9aUG1PdFA1a2dzYUlMegpoSC9CQ215N242Q2pDVmx3UXZFRmFUOXVRRDZWa216eVNmQ29oaGo4WFYwanBMa2doeG12cGJRdzFDWmwvcDJSCkRwSTh3aCtNVkhGczMvZzNKa0lqUkU0SVJtV2ROWE5hWTBwMVVZUEVIMys3bDlDOXZTQ2Q3OXgvSTZtOVB3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
            idp-issuer-url: https://dex.example.com:32000
            refresh-token: ChlibzZjeDJyNnMzNWMzZjVoeWpuZm5oem8zEhltaWt3YmRxc3Eyem1qeHAyNmk2ZWlqYnd0
          name: oidc
    

    And these are the k8s YAML manifests:

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: k8dash
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: k8dash
      template:
        metadata:
          labels:
            k8s-app: k8dash
        spec:
          hostAliases:
          - hostnames:
            - dex.example.com
            ip: 10.0.2.100
          containers:
          - name: k8dash
            image: herbrandson/k8dash:dev
            command:
            - sh
            - -c
            - |
              npm config set cafile /ca/dex-ca.pem
              /sbin/tini -- node .
            ports:
            - containerPort: 4654
            livenessProbe:
              httpGet:
                scheme: HTTP
                path: /
                port: 4654
              initialDelaySeconds: 30
              timeoutSeconds: 30
            env:
            - name: OIDC_URL
              valueFrom:
                secretKeyRef:
                  name: k8dash
                  key: url
            - name: OIDC_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: k8dash
                  key: id
            - name: OIDC_SECRET
              valueFrom:
                secretKeyRef:
                  name: k8dash
                  key: secret
            - name: NODE_EXTRA_CA_CERTS
              value: /ca/dex-ca.pem
            - name: OIDC_SCOPES
              value: "openid email groups"
            volumeMounts:
            - name: cafile
              mountPath: /ca
          volumes:
          - name: cafile
            configMap:
              name: k8dash
    
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: k8dash
      namespace: kube-system
    spec:
      ports:
        - port: 80
          targetPort: 4654
      selector:
        k8s-app: k8dash
    
    ---
    apiVersion: v1
    data:
      dex-ca.pem: |
        -----BEGIN CERTIFICATE-----
        MIIC+jCCAeKgAwIBAgIJAMeEraHv35rVMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
        BAMMB2t1YmUtY2EwHhcNMTkwMzMxMTkwOTA4WhcNMTkwNDEwMTkwOTA4WjASMRAw
        DgYDVQQDDAdrdWJlLWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
        0doccWviooql4UkNhz1BgMJWnISL9MA1FmrHFxShTc++/UtUDq1UeSLBEzWN3Yfg
        BnSASAACfKIBE0CEbVw8RMKhurQxLFOHPP0hukUDi16egipGJ24YgVrrxqJUalla
        6qJZNGlPt7JlVukkHtecHN65gxm0B0s1kpWUQ4Xv/A6fWNhuaWujbTcEltHAmBRg
        kf0zQbuvd+LFywWEvT7Aj/nkQUfJ5/mC942RiXT3WuKrsX5kqwzYkUOq7hN3PuiT
        SSXFoI5LjAgyx5jTHnl7forVJxNl6/tG6xX8Kpq2jROqYK9TXWaHYCVKPy9LQ1nz
        A4ocMt2FAsDF8kfLR0a+iwIDAQABo1MwUTAdBgNVHQ4EFgQUL+X3z4JZHCfH8G/4
        Itd8TuFydEwwHwYDVR0jBBgwFoAUL+X3z4JZHCfH8G/4Itd8TuFydEwwDwYDVR0T
        AQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAMkLPtvFhfVq3EbmBESwDGOYw
        Ub1XKEJoRDTeyvZ3jfRXhST9fvc4l/F1eNwVJfxWoJbR7BSBfmDb74yjpNpeXKlY
        VUZq52luvyp49E4rNCRGL3s/CcRqgWKjVlJefFjH6MO3a6g3CEdIF5rRf/3EqFH6
        fomRFt0L976hvjFEqr2UXGMrNMK1Czarkx8ZQsdzL2Haa35zz/ZPmOtP5kgsaILz
        hH/BCmy7n6CjCVlwQvEFaT9uQD6VkmzySfCohhj8XV0jpLkghxmvpbQw1CZl/p2R
        DpI8wh+MVHFs3/g3JkIjRE4IRmWdNXNaY0p1UYPEH3+7l9C9vSCd79x/I6m9Pw==
        -----END CERTIFICATE-----
    kind: ConfigMap
    metadata:
      creationTimestamp: null
      name: k8dash
      namespace: kube-system
    ---
    apiVersion: v1
    data:
      id: a3ViZXJuZXRlcw==
      secret: ZXhhbXBsZS1hcHAtc2VjcmV0
      url: aHR0cHM6Ly9kZXguZXhhbXBsZS5jb206MzIwMDA=
    kind: Secret
    metadata:
      creationTimestamp: null
      name: k8dash
      namespace: kube-system
    

    Do you have any idea?

  • Bump json5 from 1.0.1 to 1.0.2 in /client


    Bumps json5 from 1.0.1 to 1.0.2.

    Release notes

    Sourced from json5's releases.

    v1.0.2

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295). This has been backported to v1. (#298)
    Changelog

    Sourced from json5's changelog.

    Unreleased [code, diff]

    v2.2.3 [code, diff]

    v2.2.2 [code, diff]

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295).

    v2.2.1 [code, diff]

    • Fix: Removed dependence on minimist to patch CVE-2021-44906. (#266)

    v2.2.0 [code, diff]

    • New: Accurate and documented TypeScript declarations are now included. There is no need to install @types/json5. (#236, #244)

    v2.1.3 [code, diff]

    • Fix: An out of memory bug when parsing numbers has been fixed. (#228, #229)

    v2.1.2 [code, diff]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

  • Bump json5, react-scripts and tsconfig-paths in /client


    Bumps json5 to 2.2.3 and updates ancestor dependencies json5, react-scripts and tsconfig-paths. These dependencies need to be updated together.

    Updates json5 from 1.0.1 to 2.2.3

    Release notes

    Sourced from json5's releases.

    v2.2.3

    v2.2.2

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295).

    v2.2.1

    • Fix: Removed dependence on minimist to patch CVE-2021-44906. (#266)

    v2.2.0

    • New: Accurate and documented TypeScript declarations are now included. There is no need to install @types/json5. (#236, #244)

    v2.1.3 [code, diff]

    • Fix: An out of memory bug when parsing numbers has been fixed. (#228, #229)

    v2.1.2

    • Fix: Bump minimist to v1.2.5. (#222)

    v2.1.1

    • New: package.json and package.json5 include a module property so bundlers like webpack, rollup and parcel can take advantage of the ES Module build. (#208)
    • Fix: stringify outputs \0 as \\x00 when followed by a digit. (#210)
    • Fix: Spelling mistakes have been fixed. (#196)

    v2.1.0

    • New: The index.mjs and index.min.mjs browser builds in the dist directory support ES6 modules. (#187)

    v2.0.1

    • Fix: The browser builds in the dist directory support ES5. (#182)

    v2.0.0

    • Major: JSON5 officially supports Node.js v6 and later. Support for Node.js v4 has been dropped. Since Node.js v6 supports ES5 features, the code has been rewritten in native ES5, and the dependence on Babel has been eliminated.

    • New: Support for Unicode 10 has been added.

    • New: The test framework has been migrated from Mocha to Tap.

    • New: The browser build at dist/index.js is no longer minified by default. A minified version is available at dist/index.min.js. (#181)

    • Fix: The warning has been made clearer when line and paragraph separators are

    ... (truncated)

    Changelog

    Sourced from json5's changelog.

    v2.2.3 [code, diff]

    v2.2.2 [code, diff]

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295).

    v2.2.1 [code, diff]

    • Fix: Removed dependence on minimist to patch CVE-2021-44906. (#266)

    v2.2.0 [code, diff]

    • New: Accurate and documented TypeScript declarations are now included. There is no need to install @types/json5. (#236, #244)

    v2.1.3 [code, diff]

    • Fix: An out of memory bug when parsing numbers has been fixed. (#228, #229)

    v2.1.2 [code, diff]

    • Fix: Bump minimist to v1.2.5. (#222)

    v2.1.1 [code, diff]

    ... (truncated)

    Commits
    • c3a7524 2.2.3
    • 94fd06d docs: update CHANGELOG for v2.2.3
    • 3b8cebf docs(security): use GitHub security advisories
    • f0fd9e1 docs: publish a security policy
    • 6a91a05 docs(template): bug -> bug report
    • 14f8cb1 2.2.2
    • 10cc7ca docs: update CHANGELOG for v2.2.2
    • 7774c10 fix: add proto to objects and arrays
    • edde30a Readme: slight tweak to intro
    • 97286f8 Improve example in readme
    • Additional commits viewable in compare view

    Updates react-scripts from 3.4.4 to 5.0.1

    Commits

    Updates tsconfig-paths from 3.9.0 to 3.14.1

    Changelog

    Sourced from tsconfig-paths's changelog.

    [3.14.1] - 2022-03-22

    Fixed

    • Use minimist 1.2.6 for all depencencies becuase of pollution vulnerability. See PR #197. Thanks to @​gopijaganthan for this fix!

    [3.14.0] - 2022-03-13

    Added

    [3.13.0] - 2022-03-03

    Added

    • Include file extension in paths resolved from package.json "main" field. See PR #135 and issue #133. Thanks to @​katywings for this fix!

    [3.12.0] - 2021-08-24

    [3.11.0] - 2021-08-24

    • Reverted upgrade of json5 due to being a breaking change. See PR #173.

    [3.10.1] - 2021-07-06

    Fixed

    • Add register.js to published files

    [3.10.0] - 2021-07-06

    Added

    • feat(tsconfig-loader): extends config from node_modules (#106). Thanks to @zorji for this PR!

    Fixed

    Commits

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

  • Add support for user/group impersonation

    Add support for user/group impersonation

    My team runs clusters where we do not have a direct ClusterRoleBinding to the cluster-admin ClusterRole. We have granted ourselves the ability to impersonate users and set up a phony user that does have the ClusterRoleBinding. This forces us to do something akin to sudo when we want to perform risky operations.

    In order to perform an administrative command, such as deleting a namespace, we can use kubectl like this:

    kubectl delete ns example-ns --as phony-user
    

    Please add support for doing user and group impersonation that leverages the standard k8s mechanisms linked above.
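
    For context, the RBAC grant that enables this pattern typically looks something like the sketch below. The role name and the impersonated user (phony-user) are illustrative, taken from the kubectl example above, not from an actual manifest in this issue:

    ```yaml
    # Hypothetical ClusterRole allowing a team to impersonate one specific user.
    # Bind it to the team via a ClusterRoleBinding to get the sudo-like workflow
    # described above (kubectl ... --as phony-user).
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: impersonate-phony-user
    rules:
    - apiGroups: [""]
      resources: ["users"]
      verbs: ["impersonate"]
      resourceNames: ["phony-user"]
    ```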

  • Add Footer element at the bottom of the index page

    Add Footer element at the bottom of the index page

    Hi,

    we are required to append some custom legal text to the main page of Skooner. There is currently no option to do this. I have made a pull request to accomplish this feature: it adds an option to define custom footer text via the environment variable REACT_APP_FOOTER.

    https://github.com/skooner-k8s/skooner/pull/352

    Please merge, Jan
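
    If the PR above is merged, setting the proposed variable would presumably look something like the following sketch in the Skooner Deployment's container spec (the container name, image, and footer text here are illustrative, not from the PR):

    ```yaml
    # Hypothetical snippet: passing the proposed REACT_APP_FOOTER
    # variable to the Skooner container.
    spec:
      containers:
      - name: skooner
        image: ghcr.io/skooner-k8s/skooner:stable
        env:
        - name: REACT_APP_FOOTER
          value: "Example Corp legal notice"
    ```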

  • Updating namespace to Skooner

    Updating namespace to Skooner

    Deploy Skooner to a dedicated skooner namespace instead of kube-system. It is good practice to keep applications in separate namespaces.

    Signed-off-by: Trevor Sullivan [email protected]
