kubecfg

A tool for managing Kubernetes resources as code.

kubecfg allows you to capture the patterns shared across your infrastructure as reusable "templates", apply those templates across many services, and manage them as files in version control. The more complex your infrastructure is, the more you will gain from using kubecfg.

Yes, Google employees will recognise this as being very similar to a similarly-named internal tool ;)

Install

Pre-compiled executables exist for some platforms on the GitHub releases page.

On macOS, it can also be installed via Homebrew: brew install kubecfg

To build from source:

% PATH=$PATH:$GOPATH/bin
% go get github.com/kubecfg/kubecfg
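
Note: with Go 1.17 and later, go get no longer installs binaries. Assuming the main package sits at the repository root, the modern equivalent is likely:

% go install github.com/kubecfg/kubecfg@latest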

Quickstart

# Show generated YAML
% kubecfg show -o yaml examples/guestbook.jsonnet

# Create resources
% kubecfg update examples/guestbook.jsonnet

# Modify configuration (downgrade gb-frontend image)
% sed -i.bak '\,gcr.io/google-samples/gb-frontend,s/:v4/:v3/' examples/guestbook.jsonnet
# See differences vs server
% kubecfg diff examples/guestbook.jsonnet

# Update to new config
% kubecfg update examples/guestbook.jsonnet

# Clean up after demo
% kubecfg delete examples/guestbook.jsonnet

Features

  • Supports JSON, YAML or jsonnet files (by file suffix).
  • Best-effort sorts objects before updating, so that dependencies are pushed to the server before objects that refer to them.
  • Additional jsonnet builtin functions. See lib/kubecfg.libsonnet and the short example below.
  • Optional "garbage collection" of objects removed from config (see --gc-tag).
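
As a small illustration of the extra builtins, the sketch below uses parseYaml to inline a YAML manifest. It assumes parseYaml is among the functions exposed by lib/kubecfg.libsonnet; treat it as illustrative rather than authoritative.

// example.jsonnet
local kubecfg = import "kubecfg.libsonnet";

// parseYaml is assumed to return one parsed value per YAML document.
local docs = kubecfg.parseYaml(|||
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: example
  data:
    greeting: hello
|||);

// Expose the parsed object so kubecfg show/update can find it.
{ config: docs[0] }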

Infrastructure-as-code Philosophy

The idea is to describe as much as possible about your configuration as files in version control (e.g. git).

Changes to the configuration follow a regular review, approve, merge, etc. code-change workflow (GitHub pull requests, Phabricator diffs, etc.). At any point, the config in version control captures the entire desired state, so the system can easily be recreated in a QA cluster or rebuilt to recover from disaster.

Jsonnet

Kubecfg relies heavily on jsonnet to describe Kubernetes resources, and is really just a thin Kubernetes-specific wrapper around jsonnet evaluation. You should read the jsonnet tutorial, and skim the functions available in the jsonnet std library.
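
To give a feel for what a kubecfg input looks like, here is a minimal, purely illustrative example; render it with kubecfg show -o yaml hello.jsonnet:

// hello.jsonnet
local labels = { app: "hello" };

{
  svc: {
    apiVersion: "v1",
    kind: "Service",
    metadata: { name: "hello", labels: labels },
    spec: {
      selector: labels,
      ports: [{ port: 80, targetPort: 8080 }],
    },
  },
}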

Community

Sign up to the Kubernetes Slack org to join the community discussion.

Comments
  • Add parseHelmChart builtin

    Add a new parseHelmChart native function that expands helm charts into jsonnet objects. The new code links helm (v3) libraries directly, and does not require or use the external helm command.

    Note the helm chart is provided as an array of bytes (numbers 0-255) - this builtin does not fetch the helm chart. The expectation is that this will be used with another mechanism like the jsonnet importbin statement.

    Fixes #267

    Example usage:

    local kubecfg = import "kubecfg.libsonnet";
    
    local data = importbin "https://charts.jetstack.io/charts/cert-manager-v1.5.3.tgz";
    local namespace = "cert-manager";  // placeholder: set to your target namespace
    local cm = kubecfg.parseHelmChart(data, "cert-manager", namespace, {
      // Example values.yaml
      webhook: {replicaCount: 2},
    });
    
    // ... returns a jsonnet object with filename keys from the helm chart
    // and expanded/parsed Kubernetes objects as values.
    
    // The result can be manipulated and used just like any other jsonnet
    // value.
    cm + {
      "cert-manager/templates/webhook-deployment.yaml": [o + {
        spec+: {
          template+: {
            spec+: {
              nodeSelector+: {"kubernetes.io/arch": "amd64"},
            },
          },
        },
      } for o in super["cert-manager/templates/webhook-deployment.yaml"]],
    }
    

    Caveats:

    • Helm 'hooks' are not supported and ignored.
    • Chart sort order is ignored, and the usual kubecfg sort mechanism is used.
    • Probably some other things.
  • Add isK8sObject, deepMap and fold jsonnet builtins

    Add some helper functions to help manipulate jsonnet structures that conform to kubecfg's usual "nested collections of Kubernetes objects" result structure.

    Specifically they handle:

    • null (ignored)
    • single Kubernetes object
    • Kubernetes v1.List of Kubernetes objects (note other kind-specific list types are not supported)
    • jsonnet array of any of these types (recursively)
    • jsonnet object with values of any of these types (recursively)

    Helper functions added in this commit:

    isK8sObject: returns true if this is a Kubernetes object.

    deepMap: map Kubernetes objects, preserving the original nested structure.

    fold: iterate over Kubernetes objects and accumulate a result.

    gvkName: useful function for fold. Accumulates a two-level nested object by apiVersion.kind, then object name.

    gvkNsName: useful function for fold. Accumulates a three-level nested object by apiVersion.kind, object namespace (or _), then object name.
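
    To make this concrete, here is a sketch of how the helpers might be combined. It assumes the argument orders deepMap(func, value) and fold(func, value, acc), and the input file my-app.jsonnet is purely hypothetical; see lib/kubecfg.libsonnet for the authoritative signatures.

    local kubecfg = import "kubecfg.libsonnet";

    // Hypothetical input: any nesting of the shapes listed above.
    local objects = import "my-app.jsonnet";

    {
      // Add a common label to every Kubernetes object, preserving the nesting.
      labelled: kubecfg.deepMap(
        function(o) o { metadata+: { labels+: { team: "platform" } } },
        objects
      ),

      // Index every object as apiVersion.kind -> name -> object.
      index: kubecfg.fold(kubecfg.gvkName, objects, {}),
    }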

  • #52 limit garbage collection to the namespace scope

    This addresses https://github.com/kubecfg/kubecfg/issues/52

    Adds a new cli flag --gc-all-namespaces to kubecfg update.

    • When set to false: apply the namespace scope (i.e. default namespace, adjustable with --namespace) to garbage collection
    • When set to true (default): keep behavior unchanged
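
    For example (the flags are as described above; the rest of this command line is illustrative):

    # Garbage-collect only objects in the "staging" namespace
    kubecfg update --namespace staging --gc-tag my-app --gc-all-namespaces=false staging.jsonnet
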
  • feat: Add helmTemplate builtin

    This is an import of https://github.com/anguslees/kubecfg-1/tree/helm onto the new kubecfg repository

    Add a new helmTemplate native function that downloads and expands helm charts into jsonnet objects. The new code links helm (v3) libraries directly, and does not require or use the external helm command.

    Fixes vmware-archive#267

    Example usage:

    local kubecfg = import "kubecfg.libsonnet";
    
    local url = "https://charts.jetstack.io/charts/cert-manager-v1.5.3.tgz";
    local cm = kubecfg.helmTemplate("cert-manager", namespace, url, {
      // Example values.yaml
      webhook: {replicaCount: 2},
    });
    
    // ... returns a jsonnet object with filename keys from the helm chart
    // and expanded/parsed Kubernetes objects as values.
    
    // The result can be manipulated and used just like any other jsonnet
    // value.
    cm + {
      "cert-manager/templates/webhook-deployment.yaml"+: {
        spec+: {
          template+: {
            spec+: {
              nodeSelector+: {"kubernetes.io/arch": "amd64"},
            },
          },
        },
      },
    }
    

    Caveats:

    • Helm 'hooks' are not supported and ignored.
    • Chart sort order is ignored, and the usual kubecfg sort mechanism is used.
    • chartURL argument uses the kubecfg URL-based importer, but will reject relative URLs by default[1].
    • HTTP_PROXY is obeyed, but there is no other cache. The helm chart is re-downloaded on every invocation (for now).
    • Probably some other things.

    [1]: Relative URLs can be enabled using a new --allow-relative-helm-urls flag. URLs are interpreted relative to $PWD currently, even when used by jsonnet from remote URLs. This will change. TODO: Make this consistent with usual relative import semantics, and enable by default.

  • Allow output filenames to have yml suffix

    In our internal project, we have almost 5000 individual YAML files that we generate. For historic reasons, we continue to name the files .yml instead of .yaml.

    By enabling kubecfg to write the files with a .yml suffix, we can produce these files for our CI/CD pipeline measurably faster.

    I used hyperfine to benchmark the current mode (my build of kubecfg without --format, followed by a find | read | mv pass) against the same build of kubecfg with --format=yml and no find pass.

    Benchmark #1: GENMODE=current make generate
      Time (mean ± σ):     13.384 s ± 0.459 s    [User: 79.086 s, System: 19.813 s]
      Range (min … max):   12.775 s … 14.280 s    10 runs

    Benchmark #2: GENMODE=newkubecfg make generate
      Time (mean ± σ):     11.166 s ± 0.216 s    [User: 73.907 s, System: 9.905 s]
      Range (min … max):   10.858 s … 11.544 s    10 runs

    Summary
      'GENMODE=newkubecfg make generate' ran 1.20 ± 0.05 times faster than 'GENMODE=current make generate'

  • Update module github.com/mkmik/yaml to v2 - autoclosed

    WhiteSource Renovate

    This PR contains the following updates:

    | Package | Type | Update | Change |
    |---|---|---|---|
    | github.com/mkmik/yaml | replace | major | v0.0.0-20210505221935-5a0cbc1c4094 -> v2.4.0 |


    Release Notes

    mkmik/yaml

    v2.1.0 through v2.4.0: each tag (v2.4.0, v2.3.0, v2.2.8, v2.2.7, v2.2.6, v2.2.5, v2.2.4, v2.2.3, v2.2.2, v2.2.1, v2.2.0, v2.1.1, v2.1.0) is listed upstream with only a "Compare Source" link and no release notes.


    Configuration

    📅 Schedule: At any time (no schedule defined).

    🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

    Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

    🔕 Ignore: Close this PR and you won't be reminded about this update again.


    • [ ] If you want to rebase/retry this PR, click this checkbox.

    This PR has been generated by WhiteSource Renovate. View repository job log here.

  • Update module github.com/googleapis/gnostic to v0.6.2

    WhiteSource Renovate

    This PR contains the following updates:

    | Package | Type | Update | Change |
    |---|---|---|---|
    | github.com/googleapis/gnostic | require | minor | v0.5.5 -> v0.6.2 |


    Release Notes

    googleapis/gnostic

    v0.6.2

    Compare Source

    This adds a retract statement to go.mod to exclude v0.6.0 from dependency updates. Thanks @​morphar and @​shenqidebaozi for quickly catching and fixing problems with the multimodule configuration!

    v0.6.1

    Compare Source

    v0.6.0

    Compare Source

    This renames the former apps directory to cmd and adds a go.mod for each cmd subdirectory. These directories contain demonstrations and various gnostic-related applications, and putting each in a separate module clarifies dependencies and reduces the apparent dependencies of gnostic itself (as listed in the top-level go.mod). Thanks @​shenqidebaozi for making this change and @​morphar for advising.

    This also includes significant improvements to protoc-gen-openapi from @morphar and @tonybase and a new protoc-gen-jsonschema plugin contributed by @morphar.

    v0.5.7

    Compare Source

    v0.5.6

    Compare Source


    Configuration

    📅 Schedule: At any time (no schedule defined).

    🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

    Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

    🔕 Ignore: Close this PR and you won't be reminded about this update again.


    • [ ] If you want to rebase/retry this PR, click this checkbox.

    This PR has been generated by WhiteSource Renovate. View repository job log here.

  • Use gc tags present in input

    kubecfg supports garbage collection with a tag given by --gc-tag. This PR adds another flag --gc-tags-from-input which instructs kubecfg to garbage collect for all tags present on objects in the input.

    This makes it possible to deploy multiple applications at once, where each application has its own gc tag.
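
    A possible invocation (illustrative; each input file is assumed to carry its own gc tag on its objects):

    # Deploy two applications in one run; garbage collection honours each
    # application's own tag found in the input objects.
    kubecfg update --gc-tags-from-input app-one.jsonnet app-two.jsonnet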

  • Add eval command

    Demo:

    
    $ export KUBECFG_ALPHA=true # or pass the --alpha flag
    $ ./kubecfg eval ./examples/guestbook.jsonnet  | head
    frontend:
      deploy:
        apiVersion: apps/v1beta2
        kind: Deployment
        metadata:
          annotations: {}
          labels:
            name: frontend
          name: frontend
        spec:
    $ ./kubecfg eval ./examples/guestbook.jsonnet  -k    
    - frontend
    - master
    - slave
    
    $ ./kubecfg eval ./examples/guestbook.jsonnet -e $.master -k
    - deploy
    - svc
    
    $ ./kubecfg eval ./examples/guestbook.jsonnet -e $.master.svc
    apiVersion: v1
    kind: Service
    metadata:
      annotations: {}
      labels:
        name: redis-master
      name: redis-master
    spec:
      ports:
      - port: 6379
        targetPort: 6379
      selector:
        name: redis-master
      type: ClusterIP
    
  • Fish completion

    Might as well include this, since it's supported. I wanted to add fish completion to the AUR pkg for kubecfg, so this is a necessary first step.

    NOTES:

    1. I tried to run make generate as advised by the contrib guide, but it doesn't seem to exist. Not sure what's going on there. Maybe those need an update?
    2. I ran make tidy as a test, and it did produce various changes, notably
    diff --git a/go.mod b/go.mod
    index 193c90c..f1c7448 100644
    --- a/go.mod
    +++ b/go.mod
    @@ -31,7 +31,6 @@ require (
     )
    
     require (
    -       cloud.google.com/go v0.100.2 // indirect
    

    but I decided not to include this in my PR, insofar as this seems like a pretty trivial change to me. Please advise if this is wrong.

    3. I tested ./kubecfg completion > ~/.config/fish/completions/kubecfg.fish on fish 3.5.1 and it seems to work as expected, but I didn't go deeper than that.
  • Duplicate detection should not include the version part of apiVersion when comparing whether a resource already exists

    To reproduce the issue:

    local i = {
      kind: 'Ingress',
      metadata: { name: 'i1', namespace: 'ns1' },
      spec: {},
    };
    
    [
      (i { apiVersion: 'networking.k8s.io/v1' }),
      (i { apiVersion: 'networking.k8s.io/v1' }),
    ]
    

    This fails (the duplicate is detected). The following should also fail, but currently does not:

    local i = {
      kind: 'Ingress',
      metadata: { name: 'i1', namespace: 'ns1' },
      spec: {},
    };
    
    [
      (i { apiVersion: 'networking.k8s.io/v1' }),
      (i { apiVersion: 'networking.k8s.io/v1beta1' }),
    ]
    
    Kubernetes would treat `networking.k8s.io/v1` and `networking.k8s.io/v1beta1` resources with the same name and namespace as the same object, so this is a duplicate.

    I'm not sure exactly how `apiVersion` should be compared.
    
  • Update actions/cache digest to 4723a57

    Mend Renovate

    This PR contains the following updates:

    | Package | Type | Update | Change |
    |---|---|---|---|
    | actions/cache | action | digest | 9b0c1fc -> 4723a57 |


    Configuration

    📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

    🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

    Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

    🔕 Ignore: Close this PR and you won't be reminded about this update again.


    • [ ] If you want to rebase/retry this PR, check this box

    This PR has been generated by Mend Renovate. View repository job log here.

  • Update module github.com/onsi/ginkgo to v2

    Mend Renovate

    This PR contains the following updates:

    | Package | Type | Update | Change |
    |---|---|---|---|
    | github.com/onsi/ginkgo | require | major | v1.16.5 -> v2.6.1 |


    Release Notes

    onsi/ginkgo

    v2.6.1

    Compare Source

    2.6.1

    Features
    • Override formatter colors from envvars - this is a new feature but an alternative approach involving config files might be taken in the future (#​1095) [60240d1]
    Fixes
    • GinkgoRecover now supports ignoring panics that match a specific, hidden, interface [301f3e2]
    Maintenance

    v2.6.0

    Compare Source

    2.6.0

    Features
    • ReportBeforeSuite provides access to the suite report before the suite begins.
    • Add junit config option for omitting leafnodetype (#​1088) [956e6d2]
    • Add support to customize junit report config to omit spec labels (#​1087) [de44005]
    Fixes
    • Fix stack trace pruning so that it has a chance of working on windows [2165648]

    v2.5.1

    Compare Source

    2.5.1

    Fixes
    Maintenance

    v2.5.0

    Compare Source

    2.5.0

    Ginkgo output now includes a timeline-view of the spec

    This commit changes Ginkgo's default output. Spec details are now presented as a timeline that includes events that occur during the spec lifecycle interleaved with any GinkgoWriter content. This makes it much easier to understand the flow of a spec and where a given failure occurs.

    The --progress, --slow-spec-threshold, --always-emit-ginkgo-writer flags and the SuppressProgressReporting decorator have all been deprecated. Instead the existing -v and -vv flags better capture the level of verbosity to display. However, a new --show-node-events flag is added to include node > Enter and < Exit events in the spec timeline.

    In addition, JUnit reports now include the timeline (rendered with -vv) and custom JUnit reports can be configured and generated using GenerateJUnitReportWithConfig(report types.Report, dst string, config JunitReportConfig)

    Code should continue to work unchanged with this version of Ginkgo - however if you have tooling that was relying on the specific output format of Ginkgo you may run into issues. Ginkgo's console output is not guaranteed to be stable for tooling and automation purposes. You should, instead, use Ginkgo's JSON format to build tooling on top of as it has stronger guarantees to be stable from version to version.

    Features
    • Provide details about which timeout expired [0f2fa27]
    Fixes
    • Add Support Policy to docs [c70867a]
    Maintenance

    v2.4.0

    Compare Source

    2.4.0

    Features
    Fixes
    Maintenance

    v2.3.1

    Compare Source

    2.3.1

    Fixes

    Several users were invoking ginkgo by installing the latest version of the cli via go install github.com/onsi/ginkgo/v2/ginkgo@latest. When 2.3.0 was released this resulted in an influx of issues as CI systems failed due to a change in the internal contract between the Ginkgo CLI and the Ginkgo library. Ginkgo only supports running the same version of the library as the cli (which is why both are packaged in the same repository).

    With this patch release, the ginkgo CLI can now identify a version mismatch and emit a helpful error message.

    • Ginkgo cli can identify version mismatches and emit a helpful error message [bc4ae2f]
    • further emphasize that a version match is required when running Ginkgo on CI and/or locally [2691dd8]

    Maintenance

    v2.3.0

    Compare Source

    2.3.0

    Interruptible Nodes and Timeouts

    Ginkgo now supports per-node and per-spec timeouts on interruptible nodes. Check out the documentation for all the details but the gist is you can now write specs like this:

    It("is interruptible", func(ctx SpecContext) { // or context.Context instead of SpecContext, both are valid.
        // do things until `ctx.Done()` is closed, for example:
        req, err := http.NewRequestWithContext(ctx, "POST", "/build-widgets", nil)
        Expect(err).NotTo(HaveOccured())
        _, err := http.DefaultClient.Do(req)
        Expect(err).NotTo(HaveOccured())
    
        Eventually(client.WidgetCount).WithContext(ctx).Should(Equal(17))
    }, NodeTimeout(time.Second*20), GracePeriod(5*time.Second))
    

    and have Ginkgo ensure that the node completes before the timeout elapses. If it does elapse, or if an external interrupt is received (e.g. ^C) then Ginkgo will cancel the context and wait for the Grace Period for the node to exit before proceeding with any cleanup nodes associated with the spec. The ctx provided by Ginkgo can also be passed down to Gomega's Eventually to have all assertions within the node governed by a single deadline.

    Features
    • Ginkgo now records any additional failures that occur during the cleanup of a failed spec. In prior versions this information was quietly discarded, but the introduction of a more rigorous approach to timeouts and interruptions allows Ginkgo to better track subsequent failures.
    • SpecContext also provides a mechanism for third-party libraries to provide additional information when a Progress Report is generated. Gomega uses this to provide the current state of an Eventually().WithContext() assertion when a Progress Report is requested.
    • DescribeTable now exits with an error if it is not passed any Entries [a4c9865]

    Fixes

    • fixes crashes on newer Ruby 3 installations by upgrading github-pages gem dependency [92c88d5]
    • Make the outline command able to use the DSL import [1be2427]

    Maintenance

    • chore(docs): delete no meaning d [57c373c]
    • chore(docs): Fix hyperlinks [30526d5]
    • chore(docs): fix code blocks without language settings [cf611c4]
    • fix intra-doc link [b541bcb]

    v2.2.0

    Compare Source

    2.2.0

    Generate real-time Progress Reports [f91377c]

    Ginkgo can now generate Progress Reports to point users at the current running line of code (including a preview of the actual source code) and a best guess at the most relevant subroutines.

    These Progress Reports allow users to debug stuck or slow tests without exiting the Ginkgo process. A Progress Report can be generated at any time by sending Ginkgo a SIGINFO (^T on MacOS/BSD) or SIGUSR1.

    In addition, the user can specify --poll-progress-after and --poll-progress-interval to have Ginkgo start periodically emitting progress reports if a given node takes too long. These can be overridden/set on a per-node basis with the PollProgressAfter and PollProgressInterval decorators.

    Progress Reports are emitted to stdout, and also stored in the machine-readable report formats that Ginkgo supports.

    Ginkgo also uses this progress reporting infrastructure under the hood when handling timeouts and interrupts. This yields much more focused, useful, and informative stack traces than previously.

    Features
    • BeforeSuite, AfterSuite, SynchronizedBeforeSuite, SynchronizedAfterSuite, and ReportAfterSuite now support (the relevant subset of) decorators. These can be passed in after the callback functions that are usually passed into these nodes.

      As a result the signature of these methods has changed and now includes a trailing args ...interface{}. For most users simply using the DSL, this change is transparent. However if you were assigning one of these functions to a custom variable (or passing it around) then your code may need to change to reflect the new signature.

    Maintenance
    • Modernize the invocation of Ginkgo in github actions [0ffde58]
    • Update recommended CI settings in docs [896bbb9]
    • Speed up unnecessarily slow integration test [6d3a90e]

    v2.1.6

    Compare Source

    2.1.6

    Fixes
    • Add SuppressProgressReporting decorator to turn off --progress announcements for a given node [dfef62a]
    • chore: remove duplicate word in comments [7373214]

    v2.1.5

    Compare Source

    2.1.5

    Fixes
    • drop -mod=mod instructions; fixes #​1026 [6ad7138]
    • Ensure CurrentSpecReport and AddReportEntry are thread-safe [817c09b]
    • remove stale importmap gcflags flag test [3cd8b93]
    • Always emit spec summary [5cf23e2] - even when only one spec has failed
    • Fix ReportAfterSuite usage in docs [b1864ad]
    • fixed typo (#​997) [219cc00]
    • TrimRight is not designed to trim Suffix [71ebb74]
    • refactor: replace strings.Replace with strings.ReplaceAll (#​978) [143d208]
    • fix syntax in examples (#​975) [b69554f]
    Maintenance

    v2.1.4

    Compare Source

    Fixes
    • Numerous documentation typos
    • Prepend "when" when using When (this behavior was in 1.x but unintentionally lost during the 2.0 rewrite) [efce903]
    • improve error message when a parallel process fails to report back [a7bd1fe]
    • guard against concurrent map writes in DeprecationTracker [0976569]
    • Invoke reporting nodes during dry-run (fixes #​956 and #​935) [aae4480]
    • Fix ginkgo import circle [f779385]

    v2.1.3

    Compare Source

    See https://onsi.github.io/ginkgo/MIGRATING_TO_V2 for details on V2.

    Fixes
    • Calling By in a container node now emits a useful error. [ff12cee]

    v2.1.2

    Compare Source

    Fixes
    • Track location of focused specs correctly in ginkgo unfocus [a612ff1]
    • Profiling suites with focused specs no longer generates an erroneous failure message [8fbfa02]
    • Several documentation typos fixed. Big thanks to everyone who helped catch them and report/fix them!

    v2.1.1

    Compare Source

    See https://onsi.github.io/ginkgo/MIGRATING_TO_V2 for details on V2.

    Fixes
    • Suites that only import the new dsl packages are now correctly identified as Ginkgo suites [ec17e17]

    v2.1.0

    Compare Source

    See https://onsi.github.io/ginkgo/MIGRATING_TO_V2 for details on V2.

    2.1.0 is a minor release with a few tweaks:

    • Introduce new DSL packages to enable users to pick-and-choose which portions of the DSL to dot-import. [90868e2] More details here.
    • Add error check for invalid/nil parameters to DescribeTable [6f8577e]
    • Myriad docs typos fixed (thanks everyone!) [718542a, ecb7098, 146654c, a8f9913, 6bdffde, 03dcd7e]

    v2.0.0: Ginkgo v2.0.0

    Compare Source

    Ginkgo v2.0.0 is a major new release of Ginkgo.

    The changes to Ginkgo are substantial and wide-ranging; however, care has been taken to ensure that most users will experience a smooth migration from V1 to V2 with relatively little work. A combined changelog and migration guide is available here, and the Ginkgo docs have been updated to capture the new functionality in V2.


    Configuration

    📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

    🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

    Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

    🔕 Ignore: Close this PR and you won't be reminded about this update again.


    • [ ] If you want to rebase/retry this PR, check this box

    This PR has been generated by Mend Renovate. View repository job log here.

  • Update actions/checkout digest to 755da8c

    Mend Renovate

    This PR contains the following updates:

    | Package | Type | Update | Change |
    |---|---|---|---|
    | actions/checkout | action | digest | 93ea575 -> 755da8c |


    Configuration

    📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

    🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

    Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

    🔕 Ignore: Close this PR and you won't be reminded about this update again.


    • [ ] If you want to rebase/retry this PR, check this box

    This PR has been generated by Mend Renovate. View repository job log here.

  • Build multi-arch docker container

    2022/12/20 16:57:28 Using base distroless.dev/static:latest@sha256:dd37b804e4de19f7d979c719e3520a79651d0158cafc27fe2005d099a0029afd for github.com/kubecfg/kubecfg
    2022/12/20 16:57:28 Building github.com/kubecfg/kubecfg for linux/amd64
    2022/12/20 16:57:28 Building github.com/kubecfg/kubecfg for linux/ppc64le
    2022/12/20 16:57:28 Building github.com/kubecfg/kubecfg for linux/s390x
    2022/12/20 16:57:28 Building github.com/kubecfg/kubecfg for linux/386
    2022/12/20 16:57:28 Building github.com/kubecfg/kubecfg for linux/riscv64
    2022/12/20 16:57:28 Building github.com/kubecfg/kubecfg for linux/arm64
    2022/12/20 16:57:28 Building github.com/kubecfg/kubecfg for linux/arm/v7
    2022/12/20 16:57:29 Building github.com/kubecfg/kubecfg for linux/arm/v6
    
  • gc-tags labels vs annotations

    Initially we implemented gc-tags as annotations, but that makes it slow to find objects matching the tag (it requires a full scan of the apiserver), so we're migrating to labels.

    However @muxmuse found out that:

    Some automatically created resources inherit labels of their "parents", but not annotations. E.g. Endpoints inherit labels (but not annotations) of the related Service and don't get an ownerReference, such that they will be garbage-collected on each kubecfg update when using tag-based garbage collection.

    Reproducible on docker-desktop v1.25.0 with

    kubecfg update --gc-tag=test main.yaml
    kubecfg update --gc-tag=test main.yaml
    # INFO  Garbage collecting endpoints default.main (v1)
    # main.yaml
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: main
    spec:
      selector:
        matchLabels:
          app: main
      template:
        metadata:
          labels:
            app: main
        spec:
          containers:
          - name: main
            image: containous/whoami
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: main
    spec:
      ports:
      - port: 80
      selector:
        app: main
      type: ClusterIP
    

    This also applies to metrics.k8s.io/PodMetrics which inherit labels of the Pods they belong to, but get no ownerReference.

  • Update module golang.org/x/crypto to v0.4.0

    Mend Renovate

    This PR contains the following updates:

    | Package | Type | Update | Change |
    |---|---|---|---|
    | golang.org/x/crypto | require | minor | v0.3.0 -> v0.4.0 |


    Release Notes

    golang/crypto

    v0.4.0

    Compare Source


    Configuration

    📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

    🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

    Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

    🔕 Ignore: Close this PR and you won't be reminded about this update again.


    • [ ] If you want to rebase/retry this PR, check this box

    This PR has been generated by Mend Renovate. View repository job log here.
