Scaffolding to make standing up Sigstore easier, especially for e2e/integration testing.

sigstore-scaffolding

This repository contains scaffolding to make standing up a full sigstore stack easier and automatable. Our focus is on running on Kubernetes, and we rely on several primitives and semantics provided by k8s. As a starting point, below is a markdown version of a Google document that @nsmith5 and @vaikas wrote based on a discussion in a sigstore community meeting on 2022-01-10.

Sigstore automation for tests

Ville Aikas <[email protected]>

Nathan Smith <[email protected]>

2022-01-11

Quickstart

If you do not care about the nitty-gritty details and just want to stand up a stack, check out the Getting Started Guide.

Background

Currently, in various e2e tests we (the community) do not exercise all the components of Sigstore when running tests. This results in us skipping some validation tests (for example, but not limited to, the --insecure-skip-verify flag) or using public instances for some of the tests. Part of the reason is that there are currently some manual steps or assumptions baked in some places that make this trickier than is strictly necessary. At Chainguard we use all the sigstore components heavily and utilize GitHub Actions for our e2e/integration tests, and we have put together some components that might make it easier for other folks, as well as upstream, to do more thorough testing and hopefully catch breaking changes, by ensuring that we have the ability to test the full stack with various clients (Tekton Chains is one example; I’m sure there are others).

A wonderful, very detailed document for standing up all the pieces from scratch is Luke Hinds’ “Sigstore the hard way”.

Overview

This document is meant to describe what pieces have been built and why. The goals are to be able to stand up a fully functional setup suitable for k8s clusters, including KinD, which is what we use in our GitHub actions for our integration testing.

Because we assume k8s is the environment that we run in, we make use of a couple of concepts provided by it that make automation easier.

  • Jobs - Run-to-completion abstraction. A Job creates Pods and, if they fail, recreates them until it succeeds or finally gives up.
  • ConfigMaps - Hold arbitrary configuration information
  • Secrets - Hold sensitive information, but care must be taken for these to actually be kept secret

By utilizing the Jobs’ “run to completion” property, we can construct “gates” in our automation, which allow us to not proceed until a Job completes successfully (“full speed ahead”) or fails (fail the test setup and bail). These take the form of using the kubectl wait command, for example, waiting for jobs in ‘mynamespace’ to complete within 5 minutes or fail:

kubectl wait --timeout 5m -n mynamespace --for=condition=Complete jobs --all

Another k8s concept we utilize is the ability to mount both ConfigMaps and Secrets into Pods. Furthermore, if a ConfigMap or Secret (or, more granularly, a ‘key’ in either) is not available, the Pod will be blocked from starting. This naturally gives us another “gate” which allows us to deploy components and rely on k8s to reconcile to a known good state (or fail if it cannot be accomplished).
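
As a sketch of what such a gate looks like in practice (all names here are hypothetical), a container that mounts a Secret as a volume stays Pending until that Secret exists:

spec:
  containers:
  - name: example
    image: registry.example.com/example:latest
    volumeMounts:
    - name: keys
      mountPath: "/keys"
      readOnly: true
  volumes:
  - name: keys
    secret:
      # The kubelet cannot mount this volume until the Secret exists,
      # so the Pod (and anything waiting on it) is gated on the Secret.
      secretName: example-keys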

Components

Here’s a high-level overview of the components in play that we would like to be able to spin up, with the lines depicting dependencies. Later on in the document we will cover each of these components in detail, starting from the “bottom up”.

(component dependency diagram)

Trillian

For Trillian, there needs to be a database and a schema before the Trillian services are able to function. Our assumption is that there is a provisioned MySQL database; for our GitHub Actions, we spin up a container running MySQL, and then we need to create a schema for it.
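
For local experimentation, a rough sketch of standing up such a database is below; the image tag, database name, and credentials are placeholders rather than what the actions actually use:

docker run -d --name trillian-mysql \
  -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=let-me-in \
  -e MYSQL_DATABASE=trillian \
  -e MYSQL_USER=trillian \
  -e MYSQL_PASSWORD=let-me-in \
  mysql:8.0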

For this we create a Kubernetes Job, which runs against a given MySQL database and verifies that all the tables and indices exist. It does not currently handle upgrades to the schema; this is a feature that could be added, but looking at the change history of the schema, it seems to be stable, so adding this feature did not seem worth doing at this point.

So, we have a k8s Job called ‘CreateDB’ which is responsible for creating the schema for a given database. As a reminder, because this is a Job, automation can gate any further action until this Job successfully completes. We could also (but do not currently) make the Trillian services depend on the output of ‘CreateDB’ before proceeding (by using the mounting technique described above), but we have not had a need for that yet because they recover if the schema does not exist.
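
For example, assuming the Job is named createdb and runs in a trillian-system namespace (both names are illustrative here), the gate is the same kubectl wait pattern shown earlier:

kubectl wait --timeout 5m -n trillian-system --for=condition=Complete jobs/createdb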

Rekor

Rekor requires a Merkle tree that has been created in Trillian in order to function. This can be achieved by using the Trillian admin gRPC client’s CreateTree call. This is again a Job, ‘CreateTree’, and it will also create a ConfigMap containing the newly minted TreeID. This allows us (recall mounting ConfigMaps into Pods from above) to block the Rekor server from starting before the TreeID has been provisioned. So, assuming that Rekor runs in the Namespace rekor-system and the ConfigMap created by the ‘CreateTree’ Job is named rekor-config, we can have the following (some stuff omitted for readability) in our Rekor Deployment to ensure that Rekor will not start prior to the TreeID having been properly provisioned.

spec:
  template:
    spec:
      containers:
      - name: rekor-server
        image: ko://github.com/sigstore/rekor/cmd/rekor-server
        args: [
          "serve",
          "--trillian_log_server.address=log-server.trillian-system.svc",
          "--trillian_log_server.port=80",
          "--trillian_log_server.tlog_id=$(TREE_ID)",
        ]
        env:
        - name: TREE_ID
          valueFrom:
            configMapKeyRef:
              name: rekor-config
              key: treeID
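
Once the ‘CreateTree’ Job completes, a quick sanity check that the TreeID actually landed in the ConfigMap referenced above (namespace and names as in this example) could look like:

kubectl wait --timeout 5m -n rekor-system --for=condition=Complete jobs --all
kubectl get configmap rekor-config -n rekor-system -o jsonpath='{.data.treeID}'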

CTLog

CTLog is the first piece in the puzzle that requires a bit more wrangling because it actually has a dependency on Trillian as well as Fulcio (more details about Fulcio later).

For Trillian, we just need to create another TreeID, but we’re reusing the same ‘CreateTree’ Job from above.

In addition to Trillian, the dependency on Fulcio is that we need to establish trust for the Root Certificate that Fulcio is using, so that when Fulcio sends requests for inclusion in our CTLog, we trust it. For this, we use Fulcio’s RootCert API call to fetch the certificate.
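
As an illustration, assuming Fulcio is reachable in-cluster at fulcio.fulcio-system.svc (and noting that the exact endpoint path can differ between Fulcio versions), fetching that certificate is roughly:

curl -s http://fulcio.fulcio-system.svc/api/v1/rootCert > fulcio-root.pem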

Lastly we need to create a Certificate for CTLog itself.

So in addition to the ‘CreateTree’ Job, we also have a ‘CreateCerts’ Job that will fail to make progress until the TreeID has been populated in the ConfigMap by the ‘CreateTree’ Job above. Once the TreeID has been created, it will try to fetch a Fulcio Root Certificate (again, failing until it becomes available). Once the Fulcio Root Certificate is retrieved, the Job will then create a Public/Private key pair to be used by the CTLog service and will write the following two Secrets (names can be changed, of course):

  • ctlog-secrets - Holds the public/private keys for CTLog as well as Root Certificate for Fulcio in the following keys:
    • private - CTLog private key
    • public - CTLog public key
    • rootca - Fulcio Root Certificate
  • ctlog-public-key - Holds the public key for CTLog so that clients calling Fulcio will be able to verify the SCT that they receive from Fulcio.

In addition to the Secrets above, the Job will also add a new entry into the ConfigMap created by ‘CreateTree’ above (now that I write this, it could just as well go in the Secrets above, I think…). This entry is called ‘config’ and it is a serialized ProtoBuf required by the CTLog to start up.

Again by using the fact that the Pod will not start until all the required ConfigMaps / Secrets are available, we can configure the CTLog deployment to block until everything is available. Again for brevity some things have been left out, but the CTLog configuration would look like so:

spec:
  template:
    spec:
      containers:
        - name: ctfe
          image: ko://github.com/google/certificate-transparency-go/trillian/ctfe/ct_server
          args: [
            "--http_endpoint=0.0.0.0:6962",
            "--log_config=/ctfe-config/ct_server.cfg",
            "--alsologtostderr"
          ]
          volumeMounts:
          - name: keys
            mountPath: "/ctfe-keys"
            readOnly: true
          - name: config
            mountPath: "/ctfe-config"
            readOnly: true
      volumes:
        - name: keys
          secret:
            secretName: ctlog-secret
            items:
            - key: private
              path: privkey.pem
            - key: public
              path: pubkey.pem
            - key: rootca
              path: roots.pem
        - name: config
          configMap:
            name: ctlog-config
            items:
            - key: config
              path: ct_server.cfg

Here, instead of mapping values into environment variables, we must mount them into the filesystem, given how the CTLog expects these things to be materialized.

Ok, so with the ‘CreateTree’ and ‘CreateCerts’ jobs having successfully completed, CTLog will happily start up and be ready to serve requests. Again if it fails, tests will fail and the logs will contain information about the particular failure.
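
As with the other components, test automation can simply gate on all the Jobs in the CTLog namespace (ctlog-system here, matching the service URL used later) before poking at the log:

kubectl wait --timeout 5m -n ctlog-system --for=condition=Complete jobs --all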

Also, the reason the public key was created in a separate Secret is that clients need access to it in order to verify the SCT returned by Fulcio and ensure it actually was properly signed.
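
A sketch of how a test client might consume that key, assuming the data key inside the ctlog-public-key Secret is named ‘public’ (cosign, for example, can be pointed at a PEM file via the SIGSTORE_CT_LOG_PUBLIC_KEY_FILE environment variable):

kubectl get secret ctlog-public-key -n ctlog-system -o jsonpath='{.data.public}' | base64 -d > ctlog-public.pem
export SIGSTORE_CT_LOG_PUBLIC_KEY_FILE=$(pwd)/ctlog-public.pem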

Fulcio

Make it stop!!! Is there more??? Last one, I promise… For Fulcio we just need to create a Root Certificate that it will use to sign incoming Signing Certificate requests. For this we again have a Job ‘CreateCerts’ (different from above; TODO(vaikas): Rename) that will create a self-signed certificate and private/public keys, as well as a password used to encrypt the private key. Basically we need to ensure we have all the necessary pieces to start up Fulcio.

This ‘CreateCerts’ job just creates the pieces mentioned above and creates a Secret containing the following keys:

  • cert - Root Certificate
  • private - Private key
  • password - Password to use for decrypting the private key
  • public - Public key

And as seen already above, we modify the Deployment to not start the Pod until all the pieces are available, making our Deployment of Fulcio look (simplified again) like this:

spec:
  template:
    spec:
      containers:
      - image: ko://github.com/sigstore/fulcio/cmd/fulcio
        name: fulcio
        args:
          - "serve"
          - "--port=5555"
          - "--ca=fileca"
          - "--fileca-key"
          - "/var/run/fulcio-secrets/key.pem"
          - "--fileca-cert"
          - "/var/run/fulcio-secrets/cert.pem"
          - "--fileca-key-passwd"
          - "$(PASSWORD)"
          - "--ct-log-url=http://ctlog.ctlog-system.svc/e2e-test-tree"
        env:
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: fulcio-secret
              key: password
        volumeMounts:
        - name: fulcio-cert
          mountPath: "/var/run/fulcio-secrets"
          readOnly: true
      volumes:
      - name: fulcio-cert
        secret:
          secretName: fulcio-secret
          items:
          - key: private
            path: key.pem
          - key: cert
            path: cert.pem

Other rando stuff

This document focused on automating the Tree management and the Certificate/Key creation, coordinating the interactions, and relying on k8s primitives and semantics so that no manual intervention is required at any point during the deployment. What has been left out, only because existing solutions already cover it, is configuring each of the services to actually connect at the dataplane level. For example, in the Fulcio case, the ‘--ct-log-url’ argument needs to point to where the CTLog above was installed, or hilarity will of course follow.
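
With the namespaces used in the examples above, that flag ends up looking like the value already shown in the Fulcio Deployment (the ‘e2e-test-tree’ path being whatever the CTLog instance was configured with):

--ct-log-url=http://ctlog.ctlog-system.svc/e2e-test-tree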

I’m curious if there would be appetite for upstreaming these.

Comments
  • Quick Start

    Quick Start "FUN" Part 1

    Env Details:

    • KIND = kind v0.11.1 go1.16.4 linux/amd64
    • KO = v0.9.3
    • Knative Serving = latest
    • K8s = 1.21.1
    • GO = go version go1.17.3 linux/amd64
    • Hardware OS = Ubuntu 20.04.2 LTS

    STEPS TO RECREATE:

    1. create a default cluster in kind and install knative serving
    2. run ko apply -BRf ./config

    Output of kubectl get pods --all-namespaces: (screenshot)

    Output of k get jobs --all-namespaces: (screenshot)

    Issue(s): ctlog-system

    looks like ctlog-public-key not found? (screenshot)

    Trillian-System Log Signer: Pod ImagePullBackoff issue (screenshot)

  • Use SIGSTORE_REKOR_PUBLIC_KEY, remove SIGSTORE_TRUST_REKOR_API_PUBLIC_KEY


    Description

    Users should be using verification material out of band, and we should deprecate SIGSTORE_TRUST_REKOR_API_PUBLIC_KEY.

    Instead, the scaffolding setup should export SIGSTORE_REKOR_PUBLIC_KEY with the location of the public key file, similar to the CT log public key.

  • Figure out what is causing grief on k8s 1.23


    This passed with 1.21 and 1.22, but 1.23 seemed to be timing out jobs since it looked like they were only retried 6 times. Is this a new behaviour or did something else change? https://github.com/sigstore/cosign/runs/5697562612?check_suite_focus=true

  • Create K8s pod metric based logs and alerts


    Closes https://github.com/sigstore/public-good-instance/issues/703

    Summary

    Summary of metrics and alert policies

    This PR adds two log based metrics for K8s pod errors for Rekor and Fulcio (so four new logs are added in total). It also adds alert policies around each metric.

    The metrics simply count the number of log entries that contain the specified k8s error message (either "unscheduable" or "Back-off restarting failed container") where resource.type is k8s_pod and resource.labels.namespace_name is either fulcio-system or rekor-system (depending on the specific metric).

    The alert policies on these metrics will fire when logs with the error messages are present for more than ten minutes.

    The metrics and alert policies were first created in the GCP UI and converted to Terraform with the terraformer tool. Below are links to the metrics and alert policies in the GCP UI.

    Notes on the approach

    https://github.com/sigstore/public-good-instance/issues/703 mentioned investigating whether we could use the GCP Error Reporting service, since it is automatically capturing the K8s pod errors. After some digging, that service does not seem to be supported in the Google Terraform provider and I was unable to configure PagerDuty notifications for it. So I opted to create these logs and alerts manually. These metrics and alerts are almost identical to each other, so it would be good to investigate whether we can reuse some of these metric and alert definitions but for the first pass, I opted to just create separate resource definitions for each metric and alert.

    Questions for the reviewer:

    • Do we want to create these metrics and alerts for the other K8s deployments in the project, like the prober? I just added metrics and alerts to Rekor and Fulcio to start. We can add more in this PR or create a follow up issue.

    Release Note

    None.

    Documentation

    None.

  • sigstore/scaffolding/actions/setup@main currently broken


    Description

    Currently working on an enhancement proposal for the Tekton Chains project. When running e2e tests for this project sigstore/scaffolding/actions/setup@main is called. This currently fails with the following output:

    + kubectl apply -f https://github.com/sigstore/scaffolding/releases/download/v0.4.0/release.yaml
    error: unable to read URL "https://github.com/sigstore/scaffolding/releases/download/v0.4.0/release.yaml", server reported 404 Not Found, status code=404
    

    This action is called here.

  • Remove latency alerts on uptime checks


    Context and discussion at https://github.com/sigstore/public-good-instance/issues/513

    Signed-off-by: Priya Wadhwa [email protected]


  • fix: actions/cache


    Summary

    Fixes actions/cache in the E2E tests by properly constructing a cache key and reordering certain steps to populate/reuse the cache, e.g. when installing dependencies.

    Ticket Link

    Fixes: #140

    Signed-off-by: Michael Gasch [email protected]

    Release Note

    NONE
    
  • Bump google.golang.org/grpc from 1.45.0 to 1.46.0


    Bumps google.golang.org/grpc from 1.45.0 to 1.46.0.

    Release notes

    Sourced from google.golang.org/grpc's releases.

    Release 1.46.0

    New Features

    • server: Support setting TCP_USER_TIMEOUT on grpc.Server connections using keepalive.ServerParameters.Time (#5219)
    • client: perform graceful switching of LB policies in the ClientConn by default (#5285)
    • all: improve logging by including channelz identifier in log messages (#5192)

    API Changes

    • grpc: delete WithBalancerName() API, deprecated over 4 years ago in #1697 (#5232)
    • balancer: change BuildOptions.ChannelzParentID to an opaque identifier instead of int (#5192)
      • Note: the balancer package is labeled as EXPERIMENTAL, and we don't believe users were using this field.

    Behavior Changes

    • client: change connectivity state to TransientFailure in pick_first LB policy when all addresses are removed (#5274)
      • This is a minor change that brings grpc-go's behavior in line with the intended behavior and how C and Java behave.
    • metadata: add client-side validation of HTTP-invalid metadata before attempting to send (#4886)

    Bug Fixes

    • metadata: make a copy of the value slices in FromContext() functions so that modifications won't be made to the original copy (#5267)
    • client: handle invalid service configs by applying the default, if applicable (#5238)
    • xds: the xds client will now apply a 1 second backoff before recreating ADS or LRS streams (#5280)

    Dependencies

    Commits
    • e8d06c5 Change version to 1.46.0 (#5296)
    • efbd542 gcp/observability: correctly test this module in presubmit tests (#5300) (#5307)
    • 4467a29 gcp/observability: implement logging via binarylog (#5196)
    • 18fdf54 cmd/protoc-gen-go-grpc: allow hooks to modify client structs and service hand...
    • 337b815 interop: build client without timeout; add logs to help debug failures (#5294)
    • e583b19 xds: Add RLS in xDS e2e test (#5281)
    • 0066bf6 grpc: perform graceful switching of LB policies in the ClientConn by defaul...
    • 3cccf6a xdsclient: always backoff between new streams even after successful stream (#...
    • 4e78093 xds: ignore routes with unsupported cluster specifiers (#5269)
    • 99aae34 cluster manager: Add Graceful Switch functionality to Cluster Manager (#5265)
    • Additional commits viewable in compare view

  • Terraform configuration for Timestamp Authority


    Copy-paste for the win

    Ref: https://github.com/sigstore/timestamp-authority/issues/48

    Signed-off-by: Hayden Blauzvern [email protected]


  • Add ctlog shards that create their own Cloud SQL instances.


    Signed-off-by: Ville Aikas [email protected]

    Summary

    WIP: Need to do some testing, but wanted to share the approach early :)

    Starts putting the pieces at the infra level necessary for:

    • https://github.com/sigstore/public-good-instance/issues/343
    • https://github.com/sigstore/public-good-instance/issues/418
    • https://github.com/sigstore/public-good-instance/issues/524

    In particular:

    • Add mysql creation (optionally) into the CTLog module. It's made optional since we already use that module, and we don't want to create a new Cloud SQL instance for the already existing one.
    • Add a ctlog_shards variable (a list of shards) to Sigstore. So we'd add, say, 2021 into this list first to create a new separate Cloud SQL instance for the new CTLog
    • Add ctlog_mysql_instances which outputs the list of CTLog DB instances

    Release Note

    • Add ability to create new CTLog shards with their own Cloud SQL instance.

    Documentation

  • Add Terraform resource for TUF preprod bucket


    This will be used to store the staged TUF prod root before it's synced to the production bucket, letting us catch issues early.

    This is already created in production.

    Signed-off-by: Hayden Blauzvern [email protected]

    
  • Testing timestamp-authority from HEAD


    We test the timestamp authority CLI from HEAD (See https://github.com/sigstore/scaffolding/blob/48993611d81a91328460cee18b3b48725711755a/.github/workflows/test-release.yaml#L136-L162, and https://github.com/sigstore/scaffolding/blob/48993611d81a91328460cee18b3b48725711755a/.github/workflows/test-action-tuf.yaml#L87-L112 and https://github.com/sigstore/scaffolding/blob/48993611d81a91328460cee18b3b48725711755a/.github/workflows/fulcio-rekor-kind.yaml#L198-L223). If we make any breaking changes at HEAD, this breaks tests. As noted in https://github.com/sigstore/timestamp-authority/issues/177, when we changed cert-chain to certificate-chain, this caused CI to break. I recommend checking out the latest released version and having dependabot handle updating to the latest release.

    cc @vaikas @bobcallaway

  • [DNM] Test grpc with duplex.


    Signed-off-by: Ville Aikas [email protected]

    Summary

    Testing https://github.com/sigstore/cosign/pull/1762 with duplex.


  • Add ability to install specific versions of Fulcio, Rekor, etc.


    Description

    It would be nice to be able to specify which release version of the components should be stood up, for example: https://github.com/sigstore/cosign/pull/2402#issuecomment-1301150996

    It would be nice to be able to specify which version (of, for example, Rekor), say 1.0.0 or 1.0.x, should get installed. A couple of things off the top of my head: grab the releases from GitHub and then parse them, like is done here (so it supports latest, 1.0.0, and 1.0.x): https://github.com/chainguard-dev/actions/blob/main/setup-knative/action.yaml#L82

    So, that's cool, it gives us the version for the release we're looking for, but then we need to go through and actually pull out the released container image. I'm not sure where else this is kept right now except in things like: https://github.com/sigstore/rekor/releases/download/v1.0.0/rekor-v1.0.0.yaml

    where we'd then pull the image from. Is there a release artifact that we would have the container image we could get in an easier manner? @cpanato thoughts?

    And lastly, once we get the container image, we'd need to kustomize (or something else) and replace the various ./config files with the correct container images. Like here: https://github.com/sigstore/scaffolding/blob/main/config/rekor/rekor/300-rekor.yaml#L22

  • Terraform: OpenStack Support


    Description

    The Scaffolding project currently only supports the GCP Terraform provider. We (@developer-guy) want to provision a full sigstore stack on OpenStack. Does it make sense to create a new folder called openstack in the terraform folder to drop all related modules into?

    Of course, we can extend the supporting list in the future:

    • AWS
    • VMware Cloud Director
    • Azure
    • Alibaba
    • etc.