Kubei is a flexible Kubernetes runtime scanner, scanning the images of worker and Kubernetes nodes to provide an accurate vulnerability assessment.

Kubei is a vulnerability scanning and CIS Docker benchmark tool that gives users an accurate and immediate risk assessment of their Kubernetes clusters. Kubei scans all images that are being used in a Kubernetes cluster, including images of application pods and system pods. It doesn't scan entire image registries and doesn't require preliminary integration with CI/CD pipelines.

It is a configurable tool that allows users to define the scope of the scan (target namespaces), the scan speed, and the vulnerability severity level of interest.

It provides a graphical UI that shows where vulnerabilities were found and what should be replaced in order to mitigate them.

Prerequisites

  1. A Kubernetes cluster is ready, and kubeconfig (~/.kube/config) is properly configured for the target cluster.

Required permissions

  1. Read secrets in cluster scope. This is required for getting image pull secrets for scanning private image repositories.
  2. List pods in cluster scope. This is required for calculating the target pods that need to be scanned.
  3. Create jobs in cluster scope. This is required for creating the jobs that will scan the target pods in their namespaces.

Configurations

The file deploy/kubei.yaml is used to deploy and configure Kubei on your cluster; a sketch of the corresponding env settings appears after the list below.

  1. Set the scan scope. Set the IGNORE_NAMESPACES env variable to ignore specific namespaces. Set TARGET_NAMESPACE to scan a specific namespace, or leave empty to scan all namespaces.

  2. Set the scan speed. Expedite scanning by running parallel scanners. Set the MAX_PARALLELISM env variable for the maximum number of simultaneous scanners.

  3. Set the severity level threshold. Vulnerabilities with a severity level higher than or equal to SEVERITY_THRESHOLD will be reported. Supported levels are Unknown, Negligible, Low, Medium, High, Critical, Defcon1. Default is Medium.

  4. Set the delete job policy. Set the DELETE_JOB_POLICY env variable to define whether or not to delete completed scanner jobs. Supported values are:

    • All - All jobs will be deleted.
    • Successful - Only successful jobs will be deleted (default).
    • Never - Jobs will never be deleted.
  5. Disable CIS Docker benchmark. Set the SHOULD_SCAN_DOCKERFILE env variable to false.

  6. Set the scanner service account. Set the SCANNER_SERVICE_ACCOUNT env variable to a service account name to be used by the scanner jobs. Defaults to the default service account.

  7. Scan insecure registries. Set the REGISTRY_INSECURE env variable to allow the scanner to access insecure registries (HTTP only). Default is false.
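
For illustration, a minimal sketch of how these settings might appear in the Kubei deployment's env block (the exact layout of deploy/kubei.yaml may differ; the values shown are examples, not defaults):

    env:
      - name: IGNORE_NAMESPACES
        value: "istio-system,kube-system"
      - name: TARGET_NAMESPACE
        value: ""                      # empty scans all namespaces
      - name: MAX_PARALLELISM
        value: "10"
      - name: SEVERITY_THRESHOLD
        value: "MEDIUM"
      - name: DELETE_JOB_POLICY
        value: "Successful"
      - name: SHOULD_SCAN_DOCKERFILE
        value: "true"
      - name: SCANNER_SERVICE_ACCOUNT
        value: "default"
      - name: REGISTRY_INSECURE
        value: "false"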

Usage

  1. Run the following command to deploy Kubei on the cluster:

    kubectl apply -f https://raw.githubusercontent.com/Portshift/kubei/master/deploy/kubei.yaml

  2. Run the following command to verify that Kubei is up and running:

    kubectl -n kubei get pod -lapp=kubei

  3. Then, port-forward into the Kubei webapp with the following command:

    kubectl -n kubei port-forward $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}') 8080

  4. In your browser, navigate to http://localhost:8080/view/, and then click 'GO' to run a scan.

  5. To check the state of Kubei, and the progress of ongoing scans, run the following command:

    kubectl -n kubei logs $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}')

  6. Refresh the page (http://localhost:8080/view/) to update the results.

Running Kubei with an external HTTP/HTTPS proxy

Uncomment and configure the proxy env variables for the Clair and Kubei deployments in deploy/kubei.yaml, as sketched below.
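
For illustration, a sketch of what the uncommented proxy block might look like (the variable names follow the common HTTP(S)_PROXY convention; the exact names and values in your copy of kubei.yaml may differ):

    env:
      - name: HTTP_PROXY
        value: "http://proxy.example.com:8080"
      - name: HTTPS_PROXY
        value: "http://proxy.example.com:8080"
      - name: NO_PROXY
        value: "clair.kubei,kubernetes.default.svc"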

Amazon ECR support

Create an AWS IAM user with AmazonEC2ContainerRegistryFullAccess permissions.
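
One possible way to do this with the AWS CLI (the user name kubei-ecr is a placeholder; this assumes your CLI session has sufficient IAM permissions):

    aws iam create-user --user-name kubei-ecr
    aws iam attach-user-policy --user-name kubei-ecr \
      --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
    aws iam create-access-key --user-name kubei-ecr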

Use the user credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION) to create the following secret:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: ecr-sa
  namespace: kubei
type: Opaque
data:
  AWS_ACCESS_KEY_ID: $(echo -n 'XXXX'| base64 -w0)
  AWS_SECRET_ACCESS_KEY: $(echo -n 'XXXX'| base64 -w0)
  AWS_DEFAULT_REGION: $(echo -n 'XXXX'| base64 -w0)
EOF

Note:

  1. Secret name must be ecr-sa
  2. Secret data keys must be set to AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION
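
To confirm the secret was created with the expected keys (without printing the values), you can run:

    kubectl -n kubei describe secret ecr-sa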

Google GCR support

Create a Google service account with Artifact Registry Reader permissions.
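
One possible way to create such a service account with the gcloud CLI (MY_PROJECT and the account name kubei-gcr are placeholders):

    gcloud iam service-accounts create kubei-gcr
    gcloud projects add-iam-policy-binding MY_PROJECT \
      --member="serviceAccount:kubei-gcr@MY_PROJECT.iam.gserviceaccount.com" \
      --role="roles/artifactregistry.reader"
    gcloud iam service-accounts keys create sa.json \
      --iam-account="kubei-gcr@MY_PROJECT.iam.gserviceaccount.com"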

Use the service account json file to create the following secret:

kubectl -n kubei create secret generic gcr-sa --from-file=sa.json

Note:

  1. Secret name must be gcr-sa
  2. sa.json must be the name of the service account json file when generating the secret
  3. Kubei uses application default credentials. These only work when Kubei runs from within GCP.

Limitations

  1. Supports Docker Image Manifest V2, Schema 2 (https://docs.docker.com/registry/spec/manifest-v2-2/). Scans will fail on earlier manifest versions.

  2. The CVE database is updated once a day.

Comments
  • Bump github.com/containers/image/v5 from 5.19.1 to 5.20.0

    Bumps github.com/containers/image/v5 from 5.19.1 to 5.20.0.

    Release notes

    Sourced from github.com/containers/image/v5's releases.

    v5.20.0

    • docker/referece: add IsFullIdentifier
    • Changed oci layout transport to thread-safe destination
    • add pkg/blobcache from Buildah
    • blobcache: drop import on buildah/docker
    • blobcache: drop history comment
    • blobcache: make ClearCache() private
    • blobcache: remove CacheLookupReferenceFunc
    • blobcache: turn BlobCache into a struct
    • blobcache: export clearCache
    • Remove (unused and unreachable) keyring support
    • Eliminate a goroutine
    • Also introduces internal-only interfaces to allow extending the transport feature set in the future
    Commits
    • ad6a5c0 Release v5.20.0
    • d722da5 Merge pull request #1481 from Madeeks/oci-layout-threadsafe-dest
    • 34a3bc2 Merge pull request #1479 from mtrmac/drop-dependency
    • 37c018e Changed oci layout transport to thread-safe destination
    • fcf97a3 Remove a direct dependency on golang.org/x/sys
    • 1045fb7 Merge pull request #1476 from vrothberg/full-id-regex
    • 40befe3 docker/referece: add IsFullIdentifier
    • 306f204 Merge pull request #1477 from mtrmac/drop-keyring
    • 576c8db Remove keyring support
    • 4eab9b2 Merge pull request #1475 from containers/dependabot/go_modules/github.com/kla...
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Bump github.com/anchore/grype from 0.33.0 to 0.34.7

    Bumps github.com/anchore/grype from 0.33.0 to 0.34.7.

    Release notes

    Sourced from github.com/anchore/grype's releases.

    v0.34.7

    Changelog

    v0.34.7 (2022-03-24)

    Full Changelog

    Bug Fixes

    v0.34.6

    Changelog

    v0.34.5 (2022-03-23)

    Full Changelog

    Bug Fixes

    v0.34.4

    Changelog

    v0.34.4 (2022-03-21)

    Full Changelog

    Bug Fixes

    v0.34.3

    Changelog

    v0.34.3 (2022-03-16)

    ... (truncated)

    Commits

  • No severities

    Hi!

    After running the scanner, I got a report.

    However, it has no severity levels.

    Did I do something wrong?

    I also have a lot of dead pods created by Kubernetes jobs.

  • Credentials not found

    What happened:

    Trying to scan a pod containing a private image fails; public images are scanned.

    $ oc logs scanner-zap2docker-stable-b72cafcd-4ccc-47cd-8e79-1fb6--1-jpr67 -n sbu-dev
    
    time="2022-04-26T16:21:19Z" level=debug msg="Credentials not found. image name=uk.icr.io/sbu-pipeline/zap2docker-stable@sha256:6c9d3f2cc80470bb4b54fb4b402ff982905e5cb2f13648b571da37e277540f00." \
    func="github.com/cisco-open/kubei/shared/pkg/utils/creds.(*CredExtractor).GetCredentials" file="/build/shared/pkg/utils/creds/extractor.go:78"
    

    What you expected to happen:

    I expect the secret (which is available in the namespace being scanned) to be obtained and used.
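
    For context, a namespace-scoped image pull secret of the kind referred to here is typically created like this (registry, secret name, and credential values are placeholders):

        kubectl -n sbu-dev create secret docker-registry my-pull-secret \
          --docker-server=uk.icr.io \
          --docker-username=<username> \
          --docker-password=<password>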

    How to reproduce it (as minimally and precisely as possible):

    Deploy KubeClarity v2.1.2 to k8s and perform a namespace scan whereby images within the namespace are in a private registry.

    Are there any error messages in KubeClarity logs?

    $ oc logs scanner-zap2docker-stable-b72cafcd-4ccc-47cd-8e79-1fb6--1-jpr67 -n sbu-dev
    
    time="2022-04-26T16:21:19Z" level=debug msg="Credentials not found. image name=uk.icr.io/sbu-pipeline/zap2docker-stable@sha256:6c9d3f2cc80470bb4b54fb4b402ff982905e5cb2f13648b571da37e277540f00." \
    func="github.com/cisco-open/kubei/shared/pkg/utils/creds.(*CredExtractor).GetCredentials" file="/build/shared/pkg/utils/creds/extractor.go:78"
    

    Anything else we need to know?:

    Environment:

    • KubeClarity version: v2.1.2
  • Waiting: PodInitializing

    hey folks!

    First time I've encountered this pod status with your product.

    after kubectl apply -f https://raw.githubusercontent.com/Portshift/kubei/master/deploy/kubei.yaml

    I run kubectl -n kubei get pod -lapp=kubei

    my output:

    NAME                     READY   STATUS     RESTARTS   AGE
    kubei-65d6577695-mzn6p   0/1     Init:0/1   0          18m

    describe pod:

    kubectl describe pod kubei-65d6577695-mzn6p -n kubei
    Name:           kubei-65d6577695-mzn6p
    Namespace:      kubei
    Priority:       0
    Node:           worker2/10.2.67.205
    Start Time:     Thu, 06 Aug 2020 14:05:59 +0300
    Labels:         app=kubei
                    kubeiShouldScan=false
                    pod-template-hash=65d6577695
    Annotations:    <none>
    Status:         Pending
    IP:             10.233.103.17
    Controlled By:  ReplicaSet/kubei-65d6577695
    Init Containers:
      init-clairsvc:
        Container ID:  docker://2e689fc20c3b4b3cacaab228a0f49b33f9b7075d426481655804bf256550f5b3
        Image:         yauritux/busybox-curl
        Image ID:      docker-pullable://yauritux/busybox-curl@sha256:e67b94a5abb6468169218a0940e757ebdfd8ee370cf6901823ecbf4098f2bb65
        Port:          <none>
        Host Port:     <none>
        Args:
          /bin/sh
          -c
          set -x; while [ $(curl -sw '%{http_code}' "http://clair.kubei:6060/v1/namespaces" -o /dev/null) -ne 200 ]; do
            echo "waiting for clair to be ready";
            sleep 15;
          done
    
        State:          Running
          Started:      Thu, 06 Aug 2020 14:06:04 +0300
        Ready:          False
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from kubei-token-jkw6r (ro)
    Containers:
      kubei:
        Container ID:
        Image:          gcr.io/development-infra-208909/kubei:1.0.6
        Image ID:
        Ports:          8080/TCP, 8081/TCP
        Host Ports:     0/TCP, 0/TCP
        State:          Waiting
          Reason:       PodInitializing
        Ready:          False
        Restart Count:  0
        Limits:
          cpu:     100m
          memory:  100Mi
        Requests:
          cpu:     10m
          memory:  20Mi
        Environment:
          KLAR_IMAGE_NAME:     gcr.io/development-infra-208909/klar:1.0.3
          MAX_PARALLELISM:     10
          TARGET_NAMESPACE:
          SEVERITY_THRESHOLD:  MEDIUM
          IGNORE_NAMESPACES:   istio-system,kube-system
          DELETE_JOB_POLICY:   Successful
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from kubei-token-jkw6r (ro)
    Conditions:
      Type              Status
      Initialized       False
      Ready             False
      ContainersReady   False
      PodScheduled      True
    Volumes:
      kubei-token-jkw6r:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  kubei-token-jkw6r
        Optional:    false
    QoS Class:       Burstable
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type    Reason     Age   From               Message
      ----    ------     ----  ----               -------
      Normal  Scheduled  19m   default-scheduler  Successfully assigned kubei/kubei-65d6577695-mzn6p to worker2
      Normal  Pulling    18m   kubelet, worker2   Pulling image "yauritux/busybox-curl"
      Normal  Pulled     18m   kubelet, worker2   Successfully pulled image "yauritux/busybox-curl"
      Normal  Created    18m   kubelet, worker2   Created container init-clairsvc
      Normal  Started    18m   kubelet, worker2   Started container init-clairsvc
    
  • SecurityContexts & dropping privileges

    As initially reported in https://github.com/Portshift/Kubei/issues/20, and partially fixed in https://github.com/Portshift/kubei/pull/25/files.

    Note that every container created by Kubei is also subject to broken securityContexts, leading to Jobs being created while their Pods are stuck:

      Normal   Scheduled  4m42s                  default-scheduler  Successfully assigned registry/scanner-docker-registry-exporter-1042b89a-080d-4dad-b84d-1lwpq8 to compute2
      Normal   Pulling    4m42s                  kubelet            Pulling image "gcr.io/eticloud/k8sec/klar:1.0.16"
      Normal   Pulled     4m16s                  kubelet            Successfully pulled image "gcr.io/eticloud/k8sec/klar:1.0.16" in 26.151366977s
      Normal   Pulling    4m16s                  kubelet            Pulling image "gcr.io/eticloud/k8sec/dockle:1.0.3"
      Normal   Pulled     3m54s                  kubelet            Successfully pulled image "gcr.io/eticloud/k8sec/dockle:1.0.3" in 21.426950017s
      Warning  Failed     3m10s (x5 over 3m54s)  kubelet            Error: container has runAsNonRoot and image will run as root
      Warning  Failed     2m56s (x6 over 4m16s)  kubelet            Error: container has runAsNonRoot and image will run as root
    

    Vulnerability scanners that can't run on secured clusters may miss their target audience.

    https://github.com/Portshift/kubei/blob/master/pkg/scanner/job-manager.go#L386 may need a fix, inserting a proper SecurityContext while generating the job's pod template (a sketch follows below). Sadly, I'm no Go expert... Anyone here who would both understand the issue and know how best to fix it?
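
    For reference, a minimal sketch of what the generated scanner pod template might need; the field names are standard Kubernetes API, but the concrete values (user id, dropped capabilities) are assumptions that depend on the scanner images:

        securityContext:
          runAsNonRoot: true
          runAsUser: 1000
          allowPrivilegeEscalation: false
          capabilities:
            drop:
              - ALL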

  • Api Access

    I access the KubeClarity API address that I set up in the Kubernetes environment through port 8888, and I get a 404 when trying to access the paths according to the Swagger definition.

  • Support to ignore "unfixed"

    Hi, we would like to exclude reported CVEs caused by upstream binaries that don't have a fix yet.

    Trivy allows specifying this with the --ignore-unfixed flag (see the example below).

    Or is there a flag for this already, just not yet documented?
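
    For reference, a minimal example of that Trivy flag (the image name is just a placeholder):

        trivy image --ignore-unfixed alpine:3.12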

  • Bump github.com/anchore/grype from 0.32.0 to 0.33.0

    Bumps github.com/anchore/grype from 0.32.0 to 0.33.0.

    Release notes

    Sourced from github.com/anchore/grype's releases.

    v0.33.0

    Changelog

    v0.33.0 (2022-02-15)

    Full Changelog

    Added Features

    Bug Fixes

    Commits
    • f29a0d0 Bump syft to v0.38.0 for release (#635)
    • 2ac7e17 remove duplicate manifest (#634)
    • 5aa8533 Normalize release assets and refactor install.sh (#630)
    • d2dba7d update golang crypto to resolve CVE-2020-29652 (#631)
    • 16e6bee update go -> 1.17 (#628)
    • c9f2716 Abstract upstream package before matching (#607)
    • 42ca8c6 Ensure completion of UI progress bar (#627)
    • a8c6580 update stereoscope version to include Podman (#612)
    • 0ce1c43 Add list of public data feeds that are sourced when populating grype's vulner...
    • 346df07 Add sprig templating functions for grype output (#610)
    • Additional commits viewable in compare view

  • Docs: Details on authentication with private registries

    Based on the docs, authentication with private registries is doable. However, nothing explains how or where to set a ConfigMap or environment variables for registry authentication. The only documentation describes AWS secret auth. How is this configured?

  • kubeclarity-cli scan UNAUTHORIZED: unauthorized to access repository (e.g. Harbor)

    I can scan a Harbor public project; however, a private project prompts unauthorized. Below is my command and output:

    BACKEND_HOST=${KUBECLARITY_HOST} BACKEND_DISABLE_TLS=true kubeclarity-cli scan $IMAGE --application-id $application_id -e

    time="2022-08-18T03:10:17Z" level=info msg="DependencyTrack config: {"host":"dependency-track-apiserver.dependency-track","project-name":"","project-version":"","should-delete-project":true,"disable-tls":false,"insecure-skip-verify":true,"fetch-vulnerabilities-retry-count":5,"fetch-vulnerabilities-retry-sleep":30000000000}" app=kubeclarity time="2022-08-18T03:10:17Z" level=info msg="Ignoring non SBOM input. type=image" app=kubeclarity scanner=dependency-track time="2022-08-18T03:10:17Z" level=info msg="Got result for job "dependency-track"" app=kubeclarity time="2022-08-18T03:10:17Z" level=info msg="Loading DB. update=true" app=kubeclarity mode=local scanner=grype time="2022-08-18T03:10:41Z" level=info msg="Gathering packages for source registry:harbor.xxxxxxxxxx.com/ci-cd/xxxxxxxxxx-ci:V0.0.9" app=kubeclarity mode=local scanner=grype time="2022-08-18T03:10:41Z" level=error msg="failed to analyze packages: could not fetch image 'harbor.xxxxxxxxxx.com/ci-cd/xxxxxxxxxx-ci:V0.0.9': unable determine image source" app=kubeclarity mode=local scanner=grype time="2022-08-18T03:10:41Z" level=warning msg=""grype" job failed: failed to analyze packages: could not fetch image 'harbor.xxxxxxxxxx.com/ci-cd/xxxxxxxxxx-ci:V0.0.9': unable determine image source" app=kubeclarity time="2022-08-18T03:10:41Z" level=info msg="Merging result from "dependency-track"" app=kubeclarity No vulnerabilities found time="2022-08-18T03:10:41Z" level=fatal msg="Failed get layer commands. failed to get layer commands: failed to get v1.image=harbor.xxxxxxxxxx.com/ci-cd/xxxxxxxxxx-ci:V0.0.9: failed to get image from registry: GET https://harbor.xxxxxxxxxx.com/v2/ci-cd/xxxxxxxxxx-ci/manifests/V0.0.9: UNAUTHORIZED: unauthorized to access repository: ci-cd/xxxxxxxxxx-ci, action: pull: unauthorized to access repository: ci-cd/xxxxxxxxxx-ci, action: pull" app=kubeclarity

    # kubeclarity-cli --version
    kubeclarity version 2.5.0

    How do I solve the Harbor authentication problem when executing the kubeclarity-cli command? Doing docker login harbor beforehand doesn't work either.

    Is there a way to scan only local images when executing the kubeclarity-cli command? For example, downloading the image from Harbor locally first, and then running the kubeclarity-cli command.

  • build(deps): bump actions/setup-python from 4.3.1 to 4.4.0

    Bumps actions/setup-python from 4.3.1 to 4.4.0.

    Release notes

    Sourced from actions/setup-python's releases.

    Add support to install multiple python versions

    In scope of this release we added support to install multiple python versions. For this you can try to use this snippet:

        - uses: actions/setup-python@v4
          with:
            python-version: |
                3.8
                3.9
                3.10
    

    Besides, we changed logic with throwing the error for GHES if cache is unavailable to warn (actions/setup-python#566).

    Commits

  • Starting a scan on non-existing namespaces makes all future scan attempts result in HTTP 500 Internal Server Error

    What happened:

    I installed KubeClarity on Rancher Desktop, and it runs without error. I can scan existing namespaces without error. However, when I try to scan a namespace that does not exist, all future start-scan attempts fail. The kubeclarity deployment needs to be restarted in order to give reasonable responses again.

    What you expected to happen:

    I expect the scan query for the non-existing namespace to fail gracefully, and subsequent scan attempts on valid namespaces to work.

    How to reproduce it (as minimally and precisely as possible):

    One erroneous PUT makes all subsequent PUTs fail.

    kubectl port-forward -n kubeclarity svc/kubeclarity-kubeclarity 9999:8080
    
    curl -i 'http://localhost:9999/api/runtime/scan/start' \
      -X 'PUT' \
      -H 'Accept: */*' \
      -H 'content-type: application/json' \
      --data-raw '{"namespaces":["does-not-exist"]}'
    

    The following request normally completes correctly, but not if an erroneous namespace has been given:

    curl -i 'http://localhost:9999/api/runtime/scan/start' \
      -X 'PUT' \
      -H 'Accept: */*' \
      -H 'content-type: application/json' \
      --data-raw {"namespaces":["kubeclarity"]}
    

    To make the server respond normally again, it needs to be restarted.

    Are there any error messages in KubeClarity logs?

    2022/12/23 08:42:08 /build/backend/pkg/database/scheduler.go:58 record not found
    [3.047ms] [rows:0] SELECT * FROM "scheduler" ORDER BY "scheduler"."id" LIMIT 1
    2022/12/23 08:42:08 Serving kube clarity runtime scan API is at http://[::]:8888
    2022/12/23 08:42:08 Serving kube clarity API is at http://[::]:8080
    time="2022-12-23T08:46:15Z" level=error msg="Failed to send scan config to channel" func="github.com/openclarity/kubeclarity/backend/pkg/rest.(*Server).PutRuntimeScanStart" file="/build/backend/pkg/rest/runtime_scan_controller.go:53"
    time="2022-12-23T08:49:20Z" level=error msg="Failed to send scan config to channel" func="github.com/openclarity/kubeclarity/backend/pkg/rest.(*Server).PutRuntimeScanStart" file="/build/backend/pkg/rest/runtime_scan_controller.go:53"
    

    Anything else we need to know?:

    I have also noticed the same behaviour on empty namespaces (such as "default"), but I report the non-existing-namespace case because it makes the issue easier to reproduce.

    Environment:

    • Kubernetes version: Server Version: v1.24.3+k3s1
    • KubeClarity version: v2.9.0 Commit: cf188c2a1e4846d9cbf707c7a217a7642bbe7fe3
    • KubeClarity Helm Chart: Installed via helm template ... --version v2.9.0 ...
    • Cloud provider or hardware configuration: Rancher Desktop 1.7.0
    • Others: N/A
  • Allowing application creation and getting application from CLI

    Scenario:

    • I have a CI/CD pipeline for an app
    • I would like, as a final pipeline step, to analyze the app and export the results to the KubeClarity backend

    Right now the required steps are the following:

    • The application needs to exist on the KubeClarity side, so you either have to create it from the UI or scan a cluster, which would "import" the app into the backend
    • Push the results using the UUID that was generated

    Does this mean I would need to do this manual action and then import the application-id as a variable in my CI? It doesn't seem very scalable.

    What I would suggest is being able to:

    • Create an app from kubeclarity-cli
    • Get an app id by name from kubeclarity-cli (the primary key is application-id, but it seems impossible to have two apps with the same name (code 409)), so we could retrieve an app id from a name

    Being able to do those two actions from kubeclarity-cli would make CI/CD pipelines way simpler.

    Current workaround without modifying kubeclarity:

    • Create an app via a POST request on /api/applications
    • Getting the application id from the PostgreSQL applications table, or by doing a POST on /api/applications and taking the ID from the response if the app already exists (sketched below)
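
    A minimal sketch of that workaround, assuming the backend is port-forwarded to localhost:9999 as in the earlier issue, and that the request body takes a name field (the exact schema is an assumption):

        curl -i 'http://localhost:9999/api/applications' \
          -X 'POST' \
          -H 'content-type: application/json' \
          --data-raw '{"name":"my-app"}'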

    I would be happy to create a PR with such changes if you think it seems like a nice addition.

  • build(deps): bump goreleaser/goreleaser-action from 3 to 4

    Bumps goreleaser/goreleaser-action from 3 to 4.

    Release notes

    Sourced from goreleaser/goreleaser-action's releases.

    v4.0.0

    What's Changed

    Full Changelog: https://github.com/goreleaser/goreleaser-action/compare/v3...v4.0.0

    v3.2.0

    What's Changed

    • chore: remove workaround for setOutput by @​crazy-max (#374)
    • chore(deps): bump @​actions/core from 1.9.1 to 1.10.0 (#372)
    • chore(deps): bump yargs from 17.5.1 to 17.6.0 (#373)

    Full Changelog: https://github.com/goreleaser/goreleaser-action/compare/v3.1.0...v3.2.0

    v3.1.0

    What's Changed

    • fix: dist resolution from config file by @​crazy-max (#369)
    • ci: fix workflow by @​crazy-max (#357)
    • docs: bump actions to latest major by @​crazy-max (#356)
    • chore(deps): bump crazy-max/ghaction-import-gpg from 4 to 5 (#360)
    • chore(deps): bump ghaction-import-gpg to v5 (#359)
    • chore(deps): bump @​actions/core from 1.6.0 to 1.8.2 (#358)
    • chore(deps): bump @​actions/core from 1.8.2 to 1.9.1 (#367)

    Full Changelog: https://github.com/goreleaser/goreleaser-action/compare/v3.0.0...v3.1.0

    Commits
    • 8f67e59 chore: regenerate
    • 78df308 chore(deps): bump minimatch from 3.0.4 to 3.1.2 (#383)
    • 66134d9 Merge remote-tracking branch 'origin/master' into flarco/master
    • 3c08cfd chore(deps): bump yargs from 17.6.0 to 17.6.2
    • 5dc579b docs: add example when using workdir along with upload-artifact (#366)
    • 3b7d1ba feat!: remove auto-snapshot on dirty tag (#382)
    • 23e0ed5 fix: do not override GORELEASER_CURRENT_TAG (#370)
    • 1315dab update build
    • b60ea88 improve install
    • 4d25ab4 Update goreleaser.ts
    • See full diff in compare view

  • helm: add the possibility to connect to an external PostgreSQL instance

    Hello everyone. I wanted to use an external PostgreSQL database rather than the one deployed as a chart dependency, so I can decouple KubeClarity storage from my Kubernetes cluster.

    The goal of this PR is to allow using an external PostgreSQL database, whereas currently only the one deployed as a chart dependency can be used.

    I tried to make the changes backwards-compatible.
