Flux version 2


Flux is a tool for keeping Kubernetes clusters in sync with sources of configuration (like Git repositories), and automating updates to configuration when there is new code to deploy.

Flux version 2 ("v2") is built from the ground up to use Kubernetes' API extension system, and to integrate with Prometheus and other core components of the Kubernetes ecosystem. In version 2, Flux supports multi-tenancy and support for syncing an arbitrary number of Git repositories, among other long-requested features.

Flux v2 is constructed with the GitOps Toolkit, a set of composable APIs and specialized tools for building Continuous Delivery on top of Kubernetes.

Flux installation

With Homebrew for macOS and Linux:

brew install fluxcd/tap/flux

With GoFish for Windows, macOS and Linux:

gofish install flux

With Bash for macOS and Linux:

curl -s https://fluxcd.io/install.sh | sudo bash

# enable completions in ~/.bash_profile
. <(flux completion bash)

Arch Linux (AUR) packages:

  • flux-bin: install the latest stable version using a pre-built binary (recommended)
  • flux-go: build the latest stable version from source code
  • flux-scm: build the latest (unstable) version from source code from our git main branch

Binaries for macOS AMD64/ARM64, Linux AMD64/ARM/ARM64 and Windows are available to download on the release page.

A multi-arch container image with kubectl and flux is available on Docker Hub and GitHub:

  • docker.io/fluxcd/flux-cli:
  • ghcr.io/fluxcd/flux-cli:

Verify that your cluster satisfies the prerequisites with:

flux check --pre

Get started

To get started with Flux, browse the documentation or follow one of the getting started guides.
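
A typical first step from those guides is bootstrapping Flux onto a cluster against a personal GitHub repository. A rough sketch (the owner, repository and path values are placeholders, not defaults):

export GITHUB_TOKEN=<your-token>

flux bootstrap github \
  --owner=<github-user> \
  --repository=<config-repo> \
  --branch=main \
  --path=clusters/my-cluster \
  --personal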

If you need help, please refer to our Support page.

GitOps Toolkit

The GitOps Toolkit is the set of APIs and controllers that make up the runtime for Flux v2. The APIs comprise Kubernetes custom resources, which can be created and updated by a cluster user, or by other automation tooling.
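
As an illustrative sketch (the names, URL and path below are placeholders), a GitRepository source paired with a Kustomization that applies manifests from it can be declared like this:

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/my-app
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m
  path: ./deploy
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-app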

(GitOps Toolkit overview diagram)

You can use the toolkit to extend Flux, or to build your own systems for continuous delivery -- see the developer guides.

Components

  • source-controller
  • kustomize-controller
  • helm-controller
  • notification-controller
  • image-reflector-controller
  • image-automation-controller

Community

Need help or want to contribute? Please see the links below. The Flux project is always looking for new contributors and there are a multitude of ways to get involved.

Events

Check out our events calendar for upcoming talks, events and meetings you can attend, or view the resources section for videos of past events.

We look forward to seeing you with us!

Owner
Flux project
Open and extensible continuous delivery solution for Kubernetes
Comments
  • AKS: Azure network policy addon blocks source-controller ingress

    AKS: Azure network policy addon blocks source-controller ingress

    Dears,

    I'm trying to bootstrap Flux v2 in a new Azure AKS cluster without any network policies defined. After all CRDs are installed and the GitHub account is created, the bootstrap finishes with "time exceeded". The four controller pods are up and running.

    • The cluster is synchronized with the last commit correctly as below:

    # kubectl get gitrepositories.source.toolkit.fluxcd.io -A
    NAMESPACE     NAME          URL                                READY   STATUS                                                            AGE
    flux-system   flux-system   https://github.com/name/tanyflux   True    Fetched revision: main/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a   59m

    • But the Kustomization has the below error:

    # kubectl get kustomizations.kustomize.toolkit.fluxcd -A
    failed to download artifact from http://source-controller.flux-system.svc.cluster.local./gitrepository/flux-system/flux-system/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a.tar.gz, error: Get "http://source-controller.flux-system.svc.cluster.local./gitrepository/flux-system/flux-system/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a.tar.gz": dial tcp 10.0.165.86:80: i/o timeout

    • The same error shows up when checking the kustomize-controller pod:

    # kubectl logs kustomize-controller-7f5455cd78-wwxhk -n flux-system
    {"level":"error","ts":"2021-01-14T09:04:52.524Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"flux-system","namespace":"flux-system","error":"failed to download artifact from http://source-controller.flux-system.svc.cluster.local./gitrepository/flux-system/flux-system/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a.tar.gz, error: Get "http://source-controller.flux-system.svc.cluster.local./gitrepository/flux-system/flux-system/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a.tar.gz": dial tcp 10.0.165.86:80: i/o timeout"}

    Thanks for any helpful advice.
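
    One way to test whether the Azure network policy addon is what blocks in-cluster traffic to source-controller is an explicit allow rule. A rough sketch (the policy name is arbitrary, not part of the Flux manifests) that permits all ingress between pods inside the flux-system namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-flux-system-ingress
      namespace: flux-system
    spec:
      podSelector: {}          # apply to all pods in flux-system
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector: {}  # allow traffic from any pod in the same namespace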

  • Bootstrap fails the first time

    Bootstrap fails the first time

    Describe the bug

    When running bootstrap on a github repository it seems to always fail the first time with:

    installing components in "flux-system" namespace
    Kustomization/flux-system/flux-system dry-run failed, error: no matches for kind "Kustomization" in version "kustomize.toolkit.fluxcd.io/v1beta2"
    

    After running the exact same bootstrap command again it works as expected. The bootstrap command is flux bootstrap github --owner=*** --repository=*** --path=some/repo/path --personal

    Any ideas what this might be about?

    Steps to reproduce

    N/A

    Expected behavior

    N/A

    Screenshots and recordings

    No response

    OS / Distro

    Windows 10

    Flux version

    0.25.3

    Flux check

    N/A

    Git provider

    github

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
  • Two PVC-s bound to the same PV

    Two PVC-s bound to the same PV

    Describe the bug

    Hello team,

    The reconciliation process creates the new pod before deleting the old one. If the pod has a PVC in its volumes section, that ordering creates a double claim to the same PV. IMO the order of operations should be (one way to get that ordering is sketched after the list below):

    1. remove old pod
    2. create new one
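
    One way to get that ordering (a sketch, assuming the podinfo Deployment from the guide) is to switch the Deployment's update strategy from the default RollingUpdate to Recreate, so the old pod is deleted before the new one claims the volume:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: podinfo
      namespace: podinfo-image-updater
    spec:
      strategy:
        type: Recreate   # delete the old pod before creating the replacement
      # ...rest of the deployment spec unchanged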

    Steps to reproduce

    The easiest way to reproduce is to follow the "Automate image updates to Git" guide, with the following addition to podinfo-deployment.yaml.

    Step 1) Add PV / PVC and attach volume to pod.

          volumes:
            - name: empty-dir-vol
              persistentVolumeClaim:
                claimName: empty-dir-pvc
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: empty-dir-pvc
      namespace: podinfo-image-updater
    spec:
      storageClassName: slow
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 3Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      labels:
        type: nfs
      name: podinfoimageupdater-emptydir-pv
    spec:
      accessModes:
      - ReadWriteOnce
      capacity:
        storage: 10Gi
      claimRef:
        name: empty-dir-pvc
        namespace: podinfo-image-updater
      nfs:
        path: /storage_local/podinfo-image-updater/empty-dir
        server: 192.168.170.36
      storageClassName: slow
    

    If that is confusing, the full manifest is here.

    Step 2) Change the image version to trigger deployment reconciliation.

    Step 3) Observe the problem. The PVC will go to the Lost state:

    $ kubectl get pvc -w
    NAME            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    empty-dir-pvc   Bound     podinfoimageupdater-emptydir-pv   10Gi       RWO            slow           11s
    empty-dir-pvc   Lost      podinfoimageupdater-emptydir-pv   0                         slow           2m26s
    
    ikuchin@microk8s-test:~$ microk8s.kubectl describe pvc 
    Name:          empty-dir-pvc
    Namespace:     podinfo-image-updater
    StorageClass:  slow
    Status:        Lost
    Volume:        podinfoimageupdater-emptydir-pv
    Labels:        kustomize.toolkit.fluxcd.io/name=flux-system
                   kustomize.toolkit.fluxcd.io/namespace=flux-system
    Annotations:   pv.kubernetes.io/bind-completed: yes
                   pv.kubernetes.io/bound-by-controller: yes
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      0
    Access Modes:  
    VolumeMode:    Filesystem
    Used By:       podinfo-9ccf96ff5-6d8nx    <----------- notice podID
    Events:
      Type     Reason         Age   From                         Message
      ----     ------         ----  ----                         -------
      Warning  ClaimMisbound  26s   persistentvolume-controller  Two claims are bound to the same volume, this one is bound incorrectly
    

    The PV will go to the Available state:

    $ kubectl get pv -w
    podinfoimageupdater-emptydir-pv     10Gi       RWO            Retain           Bound       podinfo-image-updater/empty-dir-pvc   slow                    2m23s
    podinfoimageupdater-emptydir-pv     10Gi       RWO            Retain           Available   podinfo-image-updater/empty-dir-pvc   slow                    2m23s
    

    The reason for that is the order of pod update operations:

    $ kubectl get pod -w
    NAME                       READY   STATUS    RESTARTS       AGE
    podinfo-844777597c-hhj8g   1/1     Running   1 (114m ago)   11h <----- this pod owns PVC
    podinfo-9ccf96ff5-6d8nx    0/1     Pending   0              0s
    podinfo-9ccf96ff5-6d8nx    0/1     Pending   0              0s
    podinfo-9ccf96ff5-6d8nx    0/1     Pending   0              14s
    podinfo-9ccf96ff5-6d8nx    0/1     ContainerCreating   0              14s
    podinfo-9ccf96ff5-6d8nx    0/1     ContainerCreating   0              15s
    podinfo-9ccf96ff5-6d8nx    1/1     Running             0              15s   <--------- this pod creates duplicate PVC
    podinfo-844777597c-hhj8g   1/1     Terminating         1 (116m ago)   11h
    podinfo-844777597c-hhj8g   0/1     Terminating         1 (116m ago)   11h
    podinfo-844777597c-hhj8g   0/1     Terminating         1 (116m ago)   11h
    podinfo-844777597c-hhj8g   0/1     Terminating         1 (116m ago)   11h
    

    Expected behavior

    Successful image update even with PV/PVC attached to the pod

    Screenshots and recordings

    No response

    OS / Distro

    20.04.3 LTS (Focal Fossa)

    Flux version

    flux version 0.27.0

    Flux check

    $ flux check
    ► checking prerequisites
    ✔ Kubernetes 1.22.6-3+7ab10db7034594 >=1.20.6-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► ghcr.io/fluxcd/helm-controller:v0.17.0
    ✔ notification-controller: deployment ready
    ► ghcr.io/fluxcd/notification-controller:v0.22.0
    ✔ image-reflector-controller: deployment ready
    ► ghcr.io/fluxcd/image-reflector-controller:v0.16.0
    ✔ image-automation-controller: deployment ready
    ► ghcr.io/fluxcd/image-automation-controller:v0.20.0
    ✔ kustomize-controller: deployment ready
    ► ghcr.io/fluxcd/kustomize-controller:v0.21.0
    ✔ source-controller: deployment ready
    ► ghcr.io/fluxcd/source-controller:v0.21.2
    ✔ all checks passed

    Git provider

    No response

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
  • [Gitlab] flux bootstrap fails for personal projects if they already exist

    [Gitlab] flux bootstrap fails for personal projects if they already exist

    When I try to update my flux v2 installation using the bootstrap command it fails with an error from gitlab:

    failed to create project, error: POST https://gitlab.com/api/v4/projects: 400

    I use the below bootstrap command to install / update flux v2, which worked until now:

    ubuntu@srv1:~$ cat install-flux.sh 
    curl -s https://toolkit.fluxcd.io/install.sh | sudo bash
    
    export GITLAB_TOKEN=???????????????
    
    flux bootstrap gitlab \
      --owner=isnull \
      --repository=myrepo \
      --branch=master \
      --path=k8 \
      --token-auth \
      --personal
    

    Executing the flux bootstrap yields the error:

    ubuntu@srv1:~$ sh install-flux.sh 
    [INFO]  Downloading metadata https://api.github.com/repos/fluxcd/flux2/releases/latest
    [INFO]  Using 0.7.4 as release
    [INFO]  Downloading hash https://github.com/fluxcd/flux2/releases/download/v0.7.4/flux_0.7.4_checksums.txt
    [INFO]  Downloading binary https://github.com/fluxcd/flux2/releases/download/v0.7.4/flux_0.7.4_linux_amd64.tar.gz
    [INFO]  Verifying binary download
    [INFO]  Installing flux to /usr/local/bin/flux
    ► connecting to gitlab.com
    ✗ failed to create project, error: POST https://gitlab.com/api/v4/projects: 400 {message: {limit_reached: []}, {name: [has already been taken]}, {path: [has already been taken]}}
    

    Sys info:

    ubuntu@srv1:~$ flux check
    ► checking prerequisites
    ✔ kubectl 1.20.2 >=1.18.0
    ✔ Kubernetes 1.19.5-34+8af48932a5ef06 >=1.16.0
    ► checking controllers
    ✔ source-controller is healthy
    ► ghcr.io/fluxcd/source-controller:v0.5.6
    ✔ kustomize-controller is healthy
    ► ghcr.io/fluxcd/kustomize-controller:v0.5.3
    ✔ helm-controller is healthy
    ► ghcr.io/fluxcd/helm-controller:v0.4.4
    ✔ notification-controller is healthy
    ► ghcr.io/fluxcd/notification-controller:v0.5.0
    ✔ all checks passed
    

    Maybe some Gitlab project api change caused this?

  • ImageRepository manifests ignoring spec.secretRef changes

    ImageRepository manifests ignoring spec.secretRef changes

    Describe the bug

    We noticed this issue after updating to v0.25.1; the issue is not currently affecting one of our other clusters that is on v0.24.1.

    When making changes to our ImageRepository manifests, we noticed that despite the reconciliation passing without issue, the spec.secretRef field was not affected. Example:

    Git Version

    apiVersion: image.toolkit.fluxcd.io/v1beta1
    kind: ImageRepository
    metadata:
      name: some-app
      namespace: flux-system
    spec:
      image: <ECR_URL>/some-app
      interval: 5m
    

    Cluster Version

    apiVersion: image.toolkit.fluxcd.io/v1beta1
    kind: ImageRepository
    metadata:
      name: some-app
      namespace: flux-system
    spec:
      image: <ECR_URL>/some-app
      interval: 5m
      secretRef:
        name: ecr-credentials
    

    Steps to reproduce

    1. add a spec.secretRef section to an existing ImageRepository manifest
    2. commit to git
    3. watch reconciliation pass successfully
    4. remove field
    5. watch reconciliation pass successfully
    6. see that spec.secretRef has not been removed

    Expected behavior

    I expect that when removing spec.secretRef, the sync process will remove it on the cluster as well, or error if there is a reason it cannot be edited/applied.

    Screenshots and recordings

    No response

    OS / Distro

    N/A

    Flux version

    v0.25.1

    Flux check

    ► checking prerequisites
    ✔ Kubernetes 1.21.5 >=1.19.0-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► ghcr.io/fluxcd/helm-controller:v0.15.0
    ✔ image-automation-controller: deployment ready
    ► ghcr.io/fluxcd/image-automation-controller:v0.19.0
    ✔ image-reflector-controller: deployment ready
    ► ghcr.io/fluxcd/image-reflector-controller:v0.15.0
    ✔ kustomize-controller: deployment ready
    ► ghcr.io/fluxcd/kustomize-controller:v0.19.0
    ✔ notification-controller: deployment ready
    ► ghcr.io/fluxcd/notification-controller:v0.20.1
    ✔ source-controller: deployment ready
    ► ghcr.io/fluxcd/source-controller:v0.20.1
    ✔ all checks passed

    Git provider

    gitlab

    Container Registry provider

    ECR

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
  • Feature request: `flux render {kustomization|helmrelease}`

    Feature request: `flux render {kustomization|helmrelease}`

    Debugging configuration examples would benefit from a new render subcommand for flux, whereby the fully rendered manifests defined by a Kustomization or HelmRelease object are output. You'd run

    flux render kustomization my-app
    

    and get the streamed manifests as they were (or would have been, except for an error) applied to K8S.

  • Flux ignores kustomization.yaml

    Flux ignores kustomization.yaml

    Describe the bug

    After recently deploying a new cluster with GKE version 1.22 I receive the error below:

    Kustomization/flux-system/${environment} dry-run failed, reason: Invalid, error: Kustomization.kustomize.toolkit.fluxcd.io "${environment}" is invalid: metadata.name: Invalid value: "${environment}": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
    

    It seems that the kustomization.yaml file is somehow completely ignored, because I compared the contents of all the patch targets and they are clearly not patched.

    When, I assume, trying to deploy this:

    apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    kind: Kustomization
    metadata:
      name: ${environment}
      namespace: flux-system
    spec:
      prune: True
      interval: 1m
      dependsOn:
        - name: namespaces
        - ....
      path: environments/${environment}/application
      sourceRef:
        kind: GitRepository
        name: flux-system
        namespace: flux-system
      postBuild:
        substitute:
          environment: ${environment}
    

    Steps to reproduce

    1. Kubernetes version 1.22.12-gke.1200 - I am not sure about this step, but it is the only significant change at that stage

    kustomization.yaml:

    # This manifest was generated by Terraform. DO NOT EDIT.
    # Modify this file through the flux module
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - gotk-sync.yaml
    - gotk-components.yaml
    patches:
      - target:
          version: v1beta2
          group: kustomize.toolkit.fluxcd.io
          kind: Kustomization
          name: flux-system
          namespace: flux-system
        patch: |-
          - op: add
            path: /spec/postBuild
            value:
              substitute:
                environment: "dev"
    

    Expected behavior

    Flux should pick up kustomization.yaml and apply all the patches in it.

    Screenshots and recordings

    No response

    OS / Distro

    WSL2, Kubernetes version 1.22.12-gke.1200

    Flux version

    v0.31.5, v0.35.0

    Flux check

    flux check
    ► checking prerequisites
    ✗ flux 0.31.5 <0.35.0 (new version is available, please upgrade)
    W1013 11:21:43.198854 10069 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead. To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
    ✔ Kubernetes 1.22.12-gke.1200 >=1.20.6-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► ghcr.io/fluxcd/helm-controller:v0.25.0
    ✔ image-automation-controller: deployment ready
    ► ghcr.io/fluxcd/image-automation-controller:v0.26.0
    ✔ image-reflector-controller: deployment ready
    ► ghcr.io/fluxcd/image-reflector-controller:v0.22.0
    ✔ kustomize-controller: deployment ready
    ► ghcr.io/fluxcd/kustomize-controller:v0.29.0
    ✔ notification-controller: deployment ready
    ► ghcr.io/fluxcd/notification-controller:v0.27.0
    ✔ source-controller: deployment ready
    ► ghcr.io/fluxcd/source-controller:v0.30.0
    ► checking crds
    ✔ alerts.notification.toolkit.fluxcd.io/v1beta1
    ✔ buckets.source.toolkit.fluxcd.io/v1beta2
    ✔ gitrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ helmcharts.source.toolkit.fluxcd.io/v1beta2
    ✔ helmreleases.helm.toolkit.fluxcd.io/v2beta1
    ✔ helmrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ imagepolicies.image.toolkit.fluxcd.io/v1beta1
    ✔ imagerepositories.image.toolkit.fluxcd.io/v1beta1
    ✔ imageupdateautomations.image.toolkit.fluxcd.io/v1beta1
    ✔ kustomizations.kustomize.toolkit.fluxcd.io/v1beta2
    ✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ providers.notification.toolkit.fluxcd.io/v1beta1
    ✔ receivers.notification.toolkit.fluxcd.io/v1beta1
    ✔ all checks passed

    Git provider

    GitHub

    Container Registry provider

    ghcr.io

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
  • Bootstrap creates empty files

    Bootstrap creates empty files

    The bootstrap command doesn't want to install on an existing Git repository. I can live with that, so I've decided to let it create a new repository. Here's the log of the creation:

     $ flux bootstrap github   --token-auth   --hostname=github.tools.xxx   --owner=yyyy   --repository=kubernetes-config2   --branch=master   --path=/clusters/k3sdev   --team=zzzz
    ► connecting to github.tools.xxx
    ✔ repository created
    ✔ zzzz team access granted
    ✔ repository cloned
    ✚ generating manifests
    ✔ components manifests pushed
    ► installing components in flux-system namespace
    namespace/flux-system created
    networkpolicy.networking.k8s.io/allow-scraping created
    networkpolicy.networking.k8s.io/allow-webhooks created
    networkpolicy.networking.k8s.io/deny-ingress created
    role.rbac.authorization.k8s.io/crd-controller-flux-system created
    rolebinding.rbac.authorization.k8s.io/crd-controller-flux-system created
    clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system created
    customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io created
    service/source-controller created
    deployment.apps/source-controller created
    customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io created
    deployment.apps/kustomize-controller created
    customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io created
    deployment.apps/helm-controller created
    customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io created
    service/notification-controller created
    service/webhook-receiver created
    deployment.apps/notification-controller created
    Waiting for deployment "source-controller" rollout to finish: 0 of 1 updated replicas are available...
    deployment "source-controller" successfully rolled out
    deployment "kustomize-controller" successfully rolled out
    deployment "helm-controller" successfully rolled out
    Waiting for deployment "notification-controller" rollout to finish: 0 of 1 updated replicas are available...
    deployment "notification-controller" successfully rolled out
    ✔ install completed
    ► generating sync manifests
    ✔ sync manifests pushed
    ► applying sync manifests
    ◎ waiting for cluster sync
    ✗ kustomization path not found: stat /tmp/flux-system309109433/clusters/k3sdev: no such file or directory
    

    The repository has been created and README.md is present, but the YAML files are empty. Should they be? The repository log shows:

    $ git log -p
    commit cf5e74b0428674aced9f2fd1b45f7d147991fb40 (HEAD -> master, origin/master, origin/HEAD)
    Author: flux <xxxx>
    Date:   Fri Dec 11 10:58:55 2020 +0100
    
        Add manifests
    
    commit bea83df32c89baeec8031da2235b83504a43c6c3
    Author: flux <xxxx>
    Date:   Fri Dec 11 10:58:35 2020 +0100
    
        Add manifests
    
    commit 0bd99a15329e3370dcf82833455e82efb8ff35d7
    Author: xxxx <xxxx>
    Date:   Fri Dec 11 10:58:31 2020 +0100
    
        Initial commit
    
  • Kustomizations without a base do not apply

    Kustomizations without a base do not apply

    Describe the bug

    According to the FAQ, we should be able to patch arbitrary pre-installed resources using kustomize objects.

    I have not been able to patch any using the (limited) instructions in the FAQ.

    Steps to reproduce

    1. install flux
    2. create kustomization with patchesStrategicMerge
    3. reconcile kustomization

    Expected behavior

    resource patched with provided patch

    Screenshots and recordings

    kustomization:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    patches:
    - path: weave-liveness.yaml
      target:
        kind: DaemonSet
        name: weave-net
        namespace: kube-system
    

    weave-liveness.yaml:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      annotations:
        kustomize.fluxcd.toolkit.io/ssa: merge
      name: weave-net
      namespace: kube-system
    spec:
      template:
        spec:
          containers:
          - name: weave
            livenessProbe:
              exec:
                command:
                - /bin/sh
                - -c
                - /home/weave/weave --local status connections | grep fastdp
              initialDelaySeconds: 20
              periodSeconds: 5
    

    There are no errors, but also no change and no output.

    # kubectl get kustomizations.kustomize.toolkit.fluxcd.io -n flux-system weave-net
    NAME        AGE   READY   STATUS
    weave-net   22h   True    Applied revision: main/ca160ca0ec5d1ef98cb6fc368d09e6e09195f1ab
    

    OS / Distro

    centos 7.7

    Flux version

    v0.28.4

    Flux check

    flux check

    ► checking prerequisites
    ✔ Kubernetes 1.23.3 >=1.20.6-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► car:5000/helm-controller:v0.18.2
    ✔ image-automation-controller: deployment ready
    ► car:5000/image-automation-controller:v0.21.2
    ✔ image-reflector-controller: deployment ready
    ► car:5000/image-reflector-controller:v0.17.1
    ✔ kustomize-controller: deployment ready
    ► car:5000/kustomize-controller:v0.22.2
    ✔ notification-controller: deployment ready
    ► car:5000/notification-controller:v0.23.1
    ✔ source-controller: deployment ready
    ► car:5000/source-controller:v0.22.4
    ✔ all checks passed

    Git provider

    No response

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
  • Bootstrapping new cluster fails on k3s v1.20

    Bootstrapping new cluster fails on k3s v1.20

    I have a k3s cluster working on a Raspberry Pi connected to my home local network. Tried to bootstrap a new GOTK repo using the following command:

    flux bootstrap github \
    --owner=$GITHUB_USER \
    --repository=$CONFIG_REPO \
    --branch=master \
    --path=./clusters/my-cluster \
    --personal \
    --kubeconfig=/etc/rancher/k3s/k3s.yaml
    

    The output for the bootstrapping command (notice the "context deadline exceeded" after "waiting for Kustomization "flux-system/flux-system" to be reconciled"):

    ► connecting to github.com
    ► cloning branch "master" from Git repository "https://github.com/argamanza/raspberry-pi-flux-config.git"
    ✔ cloned repository
    ► generating component manifests
    ✔ generated component manifests
    ✔ component manifests are up to date
    ► installing toolkit.fluxcd.io CRDs
    ◎ waiting for CRDs to be reconciled
    ✔ CRDs reconciled successfully
    ► installing components in "flux-system" namespace
    ✔ installed components
    ✔ reconciled components
    ► determining if source secret "flux-system/flux-system" exists
    ✔ source secret up to date
    ► generating sync manifests
    ✔ generated sync manifests
    ✔ sync manifests are up to date
    ► applying sync manifests
    ✔ reconciled sync configuration
    ◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
    ✗ context deadline exceeded
    ► confirming components are healthy
    ✔ source-controller: deployment ready
    ✔ kustomize-controller: deployment ready
    ✔ helm-controller: deployment ready
    ✔ notification-controller: deployment ready
    ✔ all components are healthy
    ✗ bootstrap failed with 1 health check failure(s)
    

    The logs for the Kustomize Controller expose what the issue might be:

    {"level":"info","ts":"2021-04-24T20:55:51.200Z","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
    {"level":"info","ts":"2021-04-24T20:55:51.202Z","logger":"setup","msg":"starting manager"}
    I0424 20:55:51.206769       7 leaderelection.go:243] attempting to acquire leader lease flux-system/kustomize-controller-leader-election...
    {"level":"info","ts":"2021-04-24T20:55:51.307Z","msg":"starting metrics server","path":"/metrics"}
    I0424 20:56:30.436269       7 leaderelection.go:253] successfully acquired lease flux-system/kustomize-controller-leader-election
    {"level":"info","ts":"2021-04-24T20:56:30.436Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
    {"level":"info","ts":"2021-04-24T20:56:30.437Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
    {"level":"info","ts":"2021-04-24T20:56:30.538Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
    {"level":"info","ts":"2021-04-24T20:56:30.639Z","logger":"controller.kustomization","msg":"Starting Controller","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization"}
    {"level":"info","ts":"2021-04-24T20:56:30.639Z","logger":"controller.kustomization","msg":"Starting workers","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","worker count":4}
    {"level":"info","ts":"2021-04-24T20:56:47.576Z","logger":"controller.kustomization","msg":"Kustomization applied in 2.713132582s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"configured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"unchanged","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"unchanged","namespace/flux-system":"unchanged","networkpolicy.networking.k8s.io/allow-egress":"unchanged","networkpolicy.networking.k8s.io/allow-scraping":"unchanged","networkpolicy.networking.k8s.io/allow-webhooks":"unchanged","service/notification-controller":"unchanged","service/source-controller":"unchanged","service/webhook-receiver":"unchanged","serviceaccount/helm-controller":"unchanged","serviceaccount/kustomize-controller":"unchanged","serviceaccount/notification-controller":"unchanged","serviceaccount/source-controller":"unchanged"}}
    {"level":"error","ts":"2021-04-24T20:56:47.609Z","logger":"controller.kustomization","msg":"unable to update status after reconciliation","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"flux-system\" is invalid: status.snapshot.entries.namespace: Invalid value: \"null\": status.snapshot.entries.namespace in body must be of type string: \"null\""}
    {"level":"error","ts":"2021-04-24T20:56:47.609Z","logger":"controller.kustomization","msg":"Reconciler error","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"flux-system\" is invalid: status.snapshot.entries.namespace: Invalid value: \"null\": status.snapshot.entries.namespace in body must be of type string: \"null\""}
    {"level":"info","ts":"2021-04-24T20:56:53.835Z","logger":"controller.kustomization","msg":"Kustomization applied in 2.470822475s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"configured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"unchanged","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"unchanged","namespace/flux-system":"unchanged","networkpolicy.networking.k8s.io/allow-egress":"unchanged","networkpolicy.networking.k8s.io/allow-scraping":"unchanged","networkpolicy.networking.k8s.io/allow-webhooks":"unchanged","service/notification-controller":"unchanged","service/source-controller":"unchanged","service/webhook-receiver":"unchanged","serviceaccount/helm-controller":"unchanged","serviceaccount/kustomize-controller":"unchanged","serviceaccount/notification-controller":"unchanged","serviceaccount/source-controller":"unchanged"}}
    {"level":"error","ts":"2021-04-24T20:56:53.863Z","logger":"controller.kustomization","msg":"unable to update status after reconciliation","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"flux-system\" is invalid: status.snapshot.entries.namespace: Invalid value: \"null\": status.snapshot.entries.namespace in body must be of type string: \"null\""}
    

    From the logs I can tell that status.snapshot.entries.namespace shouldn't be null for the flux-system kustomization. After testing the same bootstrap procedure on a local machine, with a cluster I provisioned using kind, I can see that the kustomization is indeed missing the status.snapshot data on the k3s cluster, while on my local kind cluster it exists:

    K3S@RaspberryPi:

    kubectl describe kustomization flux-system -n flux-system
    
    Name:         flux-system
    Namespace:    flux-system
    Labels:       kustomize.toolkit.fluxcd.io/checksum=1d4c5beef02b0043768a476cc3fed578aa3ed6f0
                  kustomize.toolkit.fluxcd.io/name=flux-system
                  kustomize.toolkit.fluxcd.io/namespace=flux-system
    Annotations:  <none>
    API Version:  kustomize.toolkit.fluxcd.io/v1beta1
    Kind:         Kustomization
    Metadata:
      Creation Timestamp:  2021-04-24T19:42:50Z
      Finalizers:
        finalizers.fluxcd.io
      Generation:  1
    ...
    ...
    Status:
      Conditions:
        Last Transition Time:  2021-04-24T19:43:30Z
        Message:               reconciliation in progress
        Reason:                Progressing
        Status:                Unknown
        Type:                  Ready
    Events:
      Type    Reason  Age   From                  Message
      ----    ------  ----  ----                  -------
      Normal  info    57m   kustomize-controller  customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io configured
    ...
    

    kind@local:

    kubectl describe kustomization flux-system -n flux-system
    
    Name:         flux-system
    Namespace:    flux-system
    Labels:       kustomize.toolkit.fluxcd.io/checksum=1d4c5beef02b0043768a476cc3fed578aa3ed6f0
                  kustomize.toolkit.fluxcd.io/name=flux-system
                  kustomize.toolkit.fluxcd.io/namespace=flux-system
    Annotations:  <none>
    API Version:  kustomize.toolkit.fluxcd.io/v1beta1
    Kind:         Kustomization
    Metadata:
      Creation Timestamp:  2021-04-25T12:35:37Z
      Finalizers:
        finalizers.fluxcd.io
      Generation:  1
    ...
    ...
    Status:
      Conditions:
        Last Transition Time:   2021-04-25T12:37:02Z
        Message:                Applied revision: master/dbce13415e4118bb071b58dab20d1f2bec527a14
        Reason:                 ReconciliationSucceeded
        Status:                 True
        Type:                   Ready
      Last Applied Revision:    master/dbce13415e4118bb071b58dab20d1f2bec527a14
      Last Attempted Revision:  master/dbce13415e4118bb071b58dab20d1f2bec527a14
      Observed Generation:      1
      Snapshot:
        Checksum:  1d4c5beef02b0043768a476cc3fed578aa3ed6f0
        Entries:
          Kinds:
            /v1, Kind=Namespace:                                     Namespace
            apiextensions.k8s.io/v1, Kind=CustomResourceDefinition:  CustomResourceDefinition
            rbac.authorization.k8s.io/v1, Kind=ClusterRole:          ClusterRole
            rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding:   ClusterRoleBinding
          Namespace:
          Kinds:
            /v1, Kind=Service:                                        Service
            /v1, Kind=ServiceAccount:                                 ServiceAccount
            apps/v1, Kind=Deployment:                                 Deployment
            kustomize.toolkit.fluxcd.io/v1beta1, Kind=Kustomization:  Kustomization
            networking.k8s.io/v1, Kind=NetworkPolicy:                 NetworkPolicy
            source.toolkit.fluxcd.io/v1beta1, Kind=GitRepository:     GitRepository
          Namespace:                                                  flux-system
    Events:
      Type    Reason  Age    From                  Message
      ----    ------  ----   ----                  -------
      Normal  info    3m53s  kustomize-controller  customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io configured
    ...
    

    This is also where my debugging process came to a dead end, as I couldn't find a reason why status.snapshot doesn't populate on my K3S@RaspberryPi while it does on kind@local using the same bootstrap process.

    I believe the fact that the issue only occurs on my raspberry pi implies that it might be a networking issue of some kind that prevents the kustomize controller from getting status updates from GitHub and I need to handle port forwarding or something similar, but I'm not sure.

    • Kubernetes version: v1.20.6+k3s1
    • Git provider: GitHub
    flux --version
    flux version 0.13.1
    
    flux check
    ► checking prerequisites
    ✔ kubectl 1.20.6+k3s1 >=1.18.0-0
    ✔ Kubernetes 1.20.6+k3s1 >=1.16.0-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► ghcr.io/fluxcd/helm-controller:v0.10.0
    ✔ kustomize-controller: deployment ready
    ► ghcr.io/fluxcd/kustomize-controller:v0.11.1
    ✔ notification-controller: deployment ready
    ► ghcr.io/fluxcd/notification-controller:v0.13.0
    ✔ source-controller: deployment ready
    ► ghcr.io/fluxcd/source-controller:v0.12.1
    ✔ all checks passed
    
  • custom port is not honored for ssh based git url

    custom port is not honored for ssh based git url

    I am trying to bootstrap Flux onto a new cluster, but the git server I have uses a custom port for its ssh access, and flux bootstrap seems to strip the port off, causing the initial clone to fail.

    I found the line in the git bootstrap below, which seems to have actually been changed in the past so custom ports are allowed for http/s.

    Not sure whether this is intended for ssh (I don't see why), but perhaps this can be changed. I would have done the change myself but I don't know whether there is a reason it is this way now.

    https://github.com/fluxcd/flux2/blob/18c944d18a6272e4c6fb26116a9db02ba4deb937/cmd/flux/bootstrap_git.go#L190
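
    For reference, the kind of command affected is a plain git bootstrap with an ssh URL carrying a non-standard port; the host, port and paths here are made up:

    flux bootstrap git \
      --url=ssh://git@git.example.com:2222/fleet/config.git \
      --branch=main \
      --path=clusters/my-cluster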

  • Generated secrets contain newline

    Generated secrets contain newline

    Describe the bug

    Kubernetes secrets generated by secretGenerator contain newlines.

    Steps to reproduce

    sops secrets/test
    

    Save it without any newline.

    Add a kustomization with the following:

    secretGenerator:
      - files:
          - secrets/test
        name: test-secret
    

    The resulting Kubernetes secret contains a newline.
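
    One way to confirm the trailing newline (the secret name and key below are placeholders, including the hash suffix the generator appends):

    kubectl get secret test-secret-<hash> -o jsonpath='{.data.test}' | base64 -d | od -c | tail -n 1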

    Expected behavior

    The secret should not contain a newline.

    Screenshots and recordings

    No response

    OS / Distro

    Amazon EKS optimized Amazon Linux AMIs

    Flux version

    0.38.2

    Flux check

    N/A

    Git provider

    No response

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
  • Kustomize key `specs.suspend` does not get removed by flux on a flux-created kustomization resource after being manually added on the cluster.

    Kustomize key `specs.suspend` does not get removed by flux on a flux-created kustomization resource after being manually added on the cluster.

    Describe the bug

    Flux does not properly revert a manually changed kustomization resource back to what is defined in the GitOps repository. If one uses a bootstrap approach to create multiple kustomization resources from their definitions in the GitOps repository, Flux will not remove a manually added spec.suspend key. This is the case even if the bootstrap resource has spec.prune=true.

    Steps to reproduce

    1. Create bootstrap kustomization resources with spec.prune=true, which creates a set of kustomization resources defined in the GitOps repository

        flux create kustomization bootstrap \
          --path="./clusters/<device-ID>/flux" \
          --source=gitops-source \
          --prune=true \
          --interval=1m
      
      

      Simplified GitOps repository folder structure:

      .
      ├── apps
      └── clusters
            └── <device-ID>
                ├── flux
                │     ├── sets-1.yaml
                │     └── sets-2.yaml
                └── sets
                      └── sets-1
                      └── sets-2
      
      

      eg. Kustomization resource definition pointing to clusters/<device-ID>/sets/sets-1 defined in the manifest file clusters/<device-ID>/flux/sets-1.yaml

      apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
      kind: Kustomization
      metadata:
        name: <device-ID>-sets-1
        namespace: flux-system
      spec:
        interval: 1m
        sourceRef:
          kind: GitRepository
          name: gitops-repo
          namespace: flux-system
        path: ./clusters/<device-ID>/sets/sets-1
        prune: true
        wait: true
        timeout: 10m
      

      The key spec.suspend is not specified in any of the kustomization resource definitions in the GitOps repo. As expected, when checking with kubectl, the spec.suspend key is not set on the created kustomization resources on the cluster either.

    2. Manually suspend the kustomization resource with flux suspend on the cluster. This will add the spec.suspend=true to the kustomization resource.

      flux suspend kustomization <device-ID>-sets-1
      
    3. After reconciliation the suspend key does not get removed from the kustomization resource. The resource's state on the cluster is therefore different from what is defined in the GitOps repository.

    Expected behavior

    Flux would revert the kustomization resource to how it is defined in the GitOps repository, which in this case would mean removing the added spec.suspend=true from the resource.

    The change only gets reverted if the kustomization resource definition in clusters/flux/ has a reference to spec.suspend, as sketched below.
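
    In other words, a workaround is to declare the field explicitly in Git. A sketch of clusters/<device-ID>/flux/sets-1.yaml with spec.suspend pinned:

    apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    kind: Kustomization
    metadata:
      name: <device-ID>-sets-1
      namespace: flux-system
    spec:
      suspend: false   # declared explicitly so a manual `flux suspend` gets reverted
      interval: 1m
      sourceRef:
        kind: GitRepository
        name: gitops-repo
        namespace: flux-system
      path: ./clusters/<device-ID>/sets/sets-1
      prune: true
      wait: true
      timeout: 10m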

    Screenshots and recordings

    Flux would revert the kustomization resource to how it is defined in the GitOps repository, which in this case would mean removing the added spec.suspend=true from the resource.

    The change only gets reverted if the kustomization resource definition in clusters/flux/ has a reference to spec.suspend.

    OS / Distro

    Ubuntu 20.04

    Flux version

    v0.37.0

    Flux check

    ► checking prerequisites
    ✗ flux 0.37.0 <0.38.2 (new version is available, please upgrade)
    ✔ Kubernetes 1.25.4+k3s1 >=1.20.6-0
    ► checking controllers
    ✔ kustomize-controller: deployment ready
    ► mcr.microsoft.com/oss/fluxcd/kustomize-controller:v0.31.0
    ✔ notification-controller: deployment ready
    ► mcr.microsoft.com/oss/fluxcd/notification-controller:v0.29.0
    ✔ helm-controller: deployment ready
    ► mcr.microsoft.com/oss/fluxcd/helm-controller:v0.27.0
    ✔ source-controller: deployment ready
    ► mcr.microsoft.com/oss/fluxcd/source-controller:v0.32.1
    ✔ fluxconfig-controller: deployment ready
    ► mcr.microsoft.com/azurek8sflux/fluxconfig-controller:1.6.3
    ► mcr.microsoft.com/azurek8sflux/fluent-bit:1.6.3
    ✔ fluxconfig-agent: deployment ready
    ► mcr.microsoft.com/azurek8sflux/fluxconfig-agent:1.6.3
    ► mcr.microsoft.com/azurek8sflux/fluent-bit:1.6.3
    ► checking crds
    ✔ fluxconfigs.clusterconfig.azure.com/v1alpha1
    ✔ helmreleases.helm.toolkit.fluxcd.io/v2beta1
    ✔ imagepolicies.image.toolkit.fluxcd.io/v1beta1
    ✔ imagerepositories.image.toolkit.fluxcd.io/v1beta1
    ✔ imageupdateautomations.image.toolkit.fluxcd.io/v1beta1
    ✔ kustomizations.kustomize.toolkit.fluxcd.io/v1beta2
    ✔ alerts.notification.toolkit.fluxcd.io/v1beta1
    ✔ providers.notification.toolkit.fluxcd.io/v1beta1
    ✔ receivers.notification.toolkit.fluxcd.io/v1beta1
    ✔ buckets.source.toolkit.fluxcd.io/v1beta2
    ✔ gitrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ helmcharts.source.toolkit.fluxcd.io/v1beta2
    ✔ helmrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ all checks passed

    Git provider

    No response

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
  • kustomization error

    kustomization error

    Describe the bug

    I added this to the exclusionList of an Alert:

    "ImageRepository\.image\.toolkit\.fluxcd\.io \".*\" not found"
    

    And that caused the kustomize-controller to fail with this:

    {"level":"error","ts":"2023-01-03T04:56:21.911Z","msg":"Reconciler error","controller":"kustomization","controllerGroup":"kustomize.toolkit.fluxcd.io","controllerKind":"Kustomization","Kustomization":{"name":"flux-system","namespace":"flux-system"},"namespace":"flux-system","name":"flux-system","reconcileID":"f05372d4-cb4d-409e-80dd-d185b42f009e","error":"panic: runtime error: invalid memory address or nil pointer dereference [recovered]"}
    E0103 04:56:45.994483       8 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
    goroutine 309 [running]:
    k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1eaf2e0?, 0x380e010})
        k8s.io/[email protected]/pkg/util/runtime/runtime.go:75 +0x99
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
        sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:110 +0xb9
    panic({0x1eaf2e0, 0x380e010})
        runtime/panic.go:884 +0x212
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.handleErr(0xc000959928)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/yaml.go:304 +0x6d
    panic({0x1eaf2e0, 0x380e010})
        runtime/panic.go:884 +0x212
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.yaml_parser_split_stem_comment(0xc0005f6800, 0x38)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/parserc.go:789 +0x38
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.yaml_parser_parse_block_sequence_entry(0xc0005f6800, 0xc0005f6ab0, 0x70?)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/parserc.go:703 +0x1fb
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.yaml_parser_state_machine(0x0?, 0x5?)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/parserc.go:179 +0xcf
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.yaml_parser_parse(0xc000dd25a0?, 0x219edcc?)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/parserc.go:129 +0x8c
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.(*parser).peek(0xc0005f6800)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/decode.go:103 +0x30
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.(*parser).sequence(0xc0005f6800)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/decode.go:258 +0x115
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.(*parser).parse(0xc0005f6800)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/decode.go:154 +0xeb
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.(*parser).parseChild(0xc0005f6800?, 0xc000dc0320)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/decode.go:194 +0x25
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.(*parser).mapping(0xc0005f6800)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/decode.go:285 +0x1d7
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.(*parser).parse(0xc0005f6800)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/decode.go:152 +0x105
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.(*parser).parseChild(0xc0005f6800?, 0xc000dafb80)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/decode.go:194 +0x25
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.(*parser).mapping(0xc0005f6800)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/decode.go:285 +0x1d7
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.(*parser).parse(0xc0005f6800)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/decode.go:152 +0x105
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.(*parser).parseChild(...)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/decode.go:194
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.(*parser).document(0xc0005f6800)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/decode.go:203 +0x85
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.(*parser).parse(0xc0005f6800)
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/decode.go:156 +0xae
    sigs.k8s.io/kustomize/kyaml/internal/forked/github.com/go-yaml/yaml.(*Decoder).Decode(0xc000d05f90, {0x1fedce0?, 0xc000dafa40})
        sigs.k8s.io/kustomize/[email protected]/internal/forked/github.com/go-yaml/yaml/yaml.go:123 +0x136
    sigs.k8s.io/kustomize/kyaml/kio.(*ByteReader).decode(0xc000959d68, {0xc000b00000, 0x430}, 0x26a86a0?, 0xc0008ec7b0?)
        sigs.k8s.io/kustomize/[email protected]/kio/byteio_reader.go:292 +0x79
    sigs.k8s.io/kustomize/kyaml/kio.(*ByteReader).Read(0xc000959d68)
        sigs.k8s.io/kustomize/[email protected]/kio/byteio_reader.go:228 +0x371
    sigs.k8s.io/kustomize/kyaml/kio.FromBytes({0xc000daa000, 0x895, 0x896})
        sigs.k8s.io/kustomize/[email protected]/kio/byteio_reader.go:110 +0xbb
    sigs.k8s.io/kustomize/api/resource.(*Factory).RNodesFromBytes(0xc000fde360?, {0xc000daa000?, 0x26e0240?, 0x388af00?})
        sigs.k8s.io/kustomize/[email protected]/resource/factory.go:167 +0x30
    sigs.k8s.io/kustomize/api/resource.(*Factory).SliceFromBytes(0xc0010b5a90?, {0xc000daa000?, 0x1d?, 0x26ac368?})
        sigs.k8s.io/kustomize/[email protected]/resource/factory.go:120 +0x38
    github.com/fluxcd/pkg/kustomize.(*Generator).generateKustomization.func1.1({0xc0010b59a0, 0x4b}, {0x26d97c8?, 0xc000d0be10?}, {0x0?, 0x0?})
        github.com/fluxcd/pkg/[email protected]/kustomize_generator.go:451 +0x3e5
    github.com/fluxcd/pkg/kustomize/filesys.fsSecure.Walk.func1({0xc0010b59a0, 0x4b}, {0x26d97c8, 0xc000d0be10}, {0x0, 0x0})
        github.com/fluxcd/pkg/[email protected]/filesys/fs_secure.go:241 +0x176
    path/filepath.walk({0xc0010b59a0, 0x4b}, {0x26d97c8, 0xc000d0be10}, 0xc0002ce190)
        path/filepath/path.go:433 +0x123
    path/filepath.walk({0xc000b3d180, 0x33}, {0x26d97c8, 0xc000e7a680}, 0xc0002ce190)
        path/filepath/path.go:457 +0x285
    path/filepath.walk({0xc000346750, 0x2c}, {0x26d97c8, 0xc000f68a90}, 0xc0002ce190)
        path/filepath/path.go:457 +0x285
    path/filepath.Walk({0xc000346750, 0x2c}, 0xc0002ce190)
        path/filepath/path.go:520 +0x6c
    sigs.k8s.io/kustomize/kyaml/filesys.fsOnDisk.Walk(...)
        sigs.k8s.io/kustomize/[email protected]/filesys/fsondisk.go:138
    github.com/fluxcd/pkg/kustomize/filesys.fsSecure.Walk({{0xc000fde360, 0x1d}, {0x26e0240, 0x388af00}, {0x0, 0x0, 0x0}}, {0xc000346750, 0x2c}, 0xc000b50340)
        github.com/fluxcd/pkg/[email protected]/filesys/fs_secure.go:243 +0xec
    github.com/fluxcd/pkg/kustomize.(*Generator).generateKustomization.func1({0xc000346750, 0x2c})
        github.com/fluxcd/pkg/[email protected]/kustomize_generator.go:422 +0x151
    github.com/fluxcd/pkg/kustomize.(*Generator).generateKustomization(0x1ebfd60?, {0xc000346750, 0x2c})
        github.com/fluxcd/pkg/[email protected]/kustomize_generator.go:465 +0x29b
    github.com/fluxcd/pkg/kustomize.(*Generator).WriteFile(0xc00095bff8?, {0xc000346750, 0x2c}, {0x0, 0x0, 0x215e3c0?})
        github.com/fluxcd/pkg/[email protected]/kustomize_generator.go:112 +0x56
    github.com/fluxcd/kustomize-controller/controllers.(*KustomizationReconciler).generate(0x384cec0?, {0x215e3c0?}, {0xc0005ba880?, 0xc0000f6f00?}, {0xc000346750?, 0x65?})
        github.com/fluxcd/kustomize-controller/controllers/kustomization_controller.go:559 +0x53
    github.com/fluxcd/kustomize-controller/controllers.(*KustomizationReconciler).reconcile(0xc000cc8410, {0x26d43e8, 0xc0005bee70}, 0xc0000f6f00, {0x26d4810, 0xc000b0d680}, 0xc000a12970?)
        github.com/fluxcd/kustomize-controller/controllers/kustomization_controller.go:375 +0xc25
    github.com/fluxcd/kustomize-controller/controllers.(*KustomizationReconciler).Reconcile(0xc000cc8410, {0x26d43e8, 0xc0005bee70}, {{{0xc000a12970?, 0x10?}, {0xc000a12960?, 0x40dc67?}}})
        github.com/fluxcd/kustomize-controller/controllers/kustomization_controller.go:263 +0x985
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x26d4340?, {0x26d43e8?, 0xc0005bee70?}, {{{0xc000a12970?, 0x20ab920?}, {0xc000a12960?, 0x404554?}}})
        sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:121 +0xc8
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000ce4140, {0x26d4340, 0xc000caaec0}, {0x1f4bb00?, 0xc000426a80?})
        sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:320 +0x33c
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000ce4140, {0x26d4340, 0xc000caaec0})
        sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273 +0x1d9
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
        sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234 +0x85
    created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
        sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:230 +0x333
    

    I'll understand if you close this as user error, since my regex is clearly bad, but I didn't expect it to crash this hard. I was unable to recover Flux without running flux uninstall, though I didn't try very hard to fix it before running the uninstall command.

    Steps to reproduce

    Install flux and create a notification configuration like this:

    ---
    apiVersion: notification.toolkit.fluxcd.io/v1beta1
    kind: Provider
    metadata:
      name: slack
      namespace: flux-system
    spec:
      type: slack
      channel: notifications
      address: https://hooks.slack.com/services/ABCD/EFGH
    ---
    apiVersion: notification.toolkit.fluxcd.io/v1beta1
    kind: Alert
    metadata:
      name: errors-to-slack
      namespace: flux-system
    spec:
      providerRef:
        name: slack
      eventSeverity: error
      eventSources:
        - kind: Bucket
          name: '*'
        - kind: GitRepository
          name: '*'
        - kind: Kustomization
          name: '*'
        - kind: HelmRelease
          name: '*'
        - kind: HelmChart
          name: '*'
        - kind: HelmRepository
          name: '*'
        - kind: ImageRepository
          name: '*'
        - kind: ImagePolicy
          name: '*'
        - kind: ImageUpdateAutomation
          name: '*'
      exclusionList:
        # ignore messages when something first enters the system
        - "version list argument cannot be empty"
        - "ImageRepository\.image\.toolkit\.fluxcd\.io \".*\" not found"
    

    Expected behavior

    Maybe log an error?
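
    A note for readers hitting the same panic: the likely trigger is the pair of double-quoted scalars in exclusionList, since \. is not a valid escape sequence inside a YAML double-quoted string. A sketch of an exclusionList that should at least parse cleanly (same regexes, switched to single-quoted scalars; not verified against the reporter's cluster):

      exclusionList:
        # ignore messages when something first enters the system
        - 'version list argument cannot be empty'
        # single quotes keep the backslashes literal for the regex engine
        - 'ImageRepository\.image\.toolkit\.fluxcd\.io ".*" not found'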

    Screenshots and recordings

    No response

    OS / Distro

    k3s on Debian

    Flux version

    0.38.2

    Flux check

    ► checking prerequisites
    ✔ Kubernetes 1.25.4+k3s1 >=1.20.6-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► ghcr.io/fluxcd/helm-controller:v0.28.1
    ✔ image-automation-controller: deployment ready
    ► ghcr.io/fluxcd/image-automation-controller:v0.28.0
    ✔ image-reflector-controller: deployment ready
    ► ghcr.io/fluxcd/image-reflector-controller:v0.23.1
    ✔ kustomize-controller: deployment ready
    ► ghcr.io/fluxcd/kustomize-controller:v0.32.0
    ✔ notification-controller: deployment ready
    ► ghcr.io/fluxcd/notification-controller:v0.30.2
    ✔ source-controller: deployment ready
    ► ghcr.io/fluxcd/source-controller:v0.33.0
    ► checking crds
    ✔ alerts.notification.toolkit.fluxcd.io/v1beta2
    ✔ buckets.source.toolkit.fluxcd.io/v1beta2
    ✔ gitrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ helmcharts.source.toolkit.fluxcd.io/v1beta2
    ✔ helmreleases.helm.toolkit.fluxcd.io/v2beta1
    ✔ helmrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ imagepolicies.image.toolkit.fluxcd.io/v1beta1
    ✔ imagerepositories.image.toolkit.fluxcd.io/v1beta1
    ✔ imageupdateautomations.image.toolkit.fluxcd.io/v1beta1
    ✔ kustomizations.kustomize.toolkit.fluxcd.io/v1beta2
    ✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ providers.notification.toolkit.fluxcd.io/v1beta2
    ✔ receivers.notification.toolkit.fluxcd.io/v1beta2
    ✔ all checks passed

    Git provider

    No response

    Container Registry provider

    No response

    Additional context

    Thanks for your help!

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
  • GitHub Actions unreliable

    GitHub Actions unreliable

    Describe the bug

    I'm trying to automate Flux updates using the recommended workflow (https://github.com/fluxcd/flux2/tree/main/action#automate-flux-updates). Unfortunately, this fails randomly multiple times a day.

    Steps to reproduce

    1. Setup workflow https://github.com/fluxcd/flux2/tree/main/action#automate-flux-updates
    2. Monitor Success/Failure of GH Action

    Expected behavior

    Reliable execution of workflow

    Screenshots and recordings

    2023-01-01T10:08:13.0895617Z ##[group]Run fluxcd/flux2/action@main
    2023-01-01T10:08:13.0895864Z with:
    2023-01-01T10:08:13.0896033Z   arch: amd64
    2023-01-01T10:08:13.0896219Z ##[endgroup]
    2023-01-01T10:08:13.1231060Z ##[group]Run ARCH=amd64
    2023-01-01T10:08:13.1231379Z ARCH=amd64
    2023-01-01T10:08:13.1231588Z VERSION=
    2023-01-01T10:08:13.1231762Z 
    2023-01-01T10:08:13.1231959Z if [ -z $VERSION ]; then
    2023-01-01T10:08:13.1232361Z   VERSION=$(curl https://api.github.com/repos/fluxcd/flux2/releases/latest -sL | grep tag_name | sed -E 's/.*"([^"]+)".*/\1/' | cut -c 2-)
    2023-01-01T10:08:13.1232861Z fi
    2023-01-01T10:08:13.1233034Z 
    2023-01-01T10:08:13.1233348Z BIN_URL="https://github.com/fluxcd/flux2/releases/download/v${VERSION}/flux_${VERSION}_linux_${ARCH}.tar.gz"
    2023-01-01T10:08:13.1233704Z curl -sL ${BIN_URL} -o /tmp/flux.tar.gz
    2023-01-01T10:08:13.1233956Z mkdir -p /tmp/flux
    2023-01-01T10:08:13.1234211Z tar -C /tmp/flux/ -zxvf /tmp/flux.tar.gz
    2023-01-01T10:08:13.1292399Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
    2023-01-01T10:08:13.1292706Z ##[endgroup]
    2023-01-01T10:08:13.2493808Z ##[error]Process completed with exit code 1.
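
    The empty VERSION= in the log suggests the unauthenticated call to api.github.com returned nothing (easy to hit with rate limits on shared runners), so the download URL is malformed and the tar step fails. A workaround sketch, assuming the action's version input (the value the script prints as VERSION=); the pinned number is only an example:

    # excerpt from a workflow file; pinning the release skips the api.github.com lookup
    steps:
      - name: Setup Flux CLI
        uses: fluxcd/flux2/action@main
        with:
          version: 0.38.3   # hypothetical pin; substitute the release you actually want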
    

    OS / Distro

    n/a

    Flux version

    n/a

    Flux check

    n/a

    Git provider

    No response

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
  • Flux CLI tool doesn't respect context namespace

    Flux CLI tool doesn't respect context namespace

    Describe the bug

    Say a Kubeconfig has the following block:

    contexts:
    - context:
        cluster: cluster
        namespace: services
        user: user
      name: cluster
    current-context: cluster
    

    When you run flux commands, they do not target the namespace your context is currently using; they always target flux-system unless the namespace flag is specified. For consistency with the other Kubernetes CLI tools, e.g. kubectl and helm, the CLI should use the context's namespace for its calls.

    Steps to reproduce

    1. Set your kubecontext to a namespace containing a helmrelease e.g.
    namespace: services
    helmrelease: service
    
    2. Run flux reconcile hr service

    3. See this error: ✗ helmreleases.helm.toolkit.fluxcd.io "service" not found

    4. Run flux reconcile hr service --namespace services

    It works
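
    As a workaround, the namespace can be read out of the current context and passed explicitly. A sketch (the jsonpath query is plain kubectl; the fallback mirrors the CLI's current flux-system default):

    NS=$(kubectl config view --minify -o jsonpath='{..namespace}')
    flux reconcile hr service --namespace "${NS:-flux-system}"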

    Expected behavior

    1. Set your kubecontext to a namespace containing a helmrelease e.g.
    namespace: services
    helmrelease: service
    
    2. Run flux reconcile hr service and it works.

    Screenshots and recordings

    No response

    OS / Distro

    Fedora

    Flux version

    v0.38.2

    Flux check

    N/A

    Git provider

    No response

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
  • Pick latest build image always if no policy but filter tag given

    Pick latest build image always if no policy but filter tag given

    Discussed in https://github.com/fluxcd/flux2/discussions/3265

    Originally posted by marrip, October 28, 2022:

    Hi,

    I have a flux2 deployment running and want Flux to automatically update the image I am running in the test environment. My images can be tagged with anything from <branch>-<sha> to semver tags. I created an ImagePolicy such as:

    ---
    apiVersion: image.toolkit.fluxcd.io/v1beta1
    kind: ImagePolicy
    metadata:
      name: test
      namespace: test
    spec:
      imageRepositoryRef:
        name: test
        namespace: test
      policy:
        alphabetical:
          order: asc
    

    which is clearly not what I want, since commit SHAs are effectively random and I always want the last image built, not the first one in the list, deployed into the cluster. Any pointers or ideas?
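
    One pattern the image automation controllers support is to put something monotonically increasing, such as a build timestamp or CI run number, into the tag and combine filterTags with a numerical policy. A sketch, assuming tags shaped like main-1a2b3c4-1674000000 (the tag layout is an assumption, not something stated in the discussion):

    ---
    apiVersion: image.toolkit.fluxcd.io/v1beta1
    kind: ImagePolicy
    metadata:
      name: test
      namespace: test
    spec:
      imageRepositoryRef:
        name: test
        namespace: test
      filterTags:
        # assumed tag layout: <branch>-<short sha>-<unix timestamp>
        pattern: '^main-[a-f0-9]+-(?P<ts>[0-9]+)$'
        extract: '$ts'
      policy:
        numerical:
          order: asc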

Hot-swap Kubernetes clusters while keeping your microservices up and running.

Okra Okra is a Kubernetes controller and a set of CRDs which provide advanced multi-cluster application rollout capabilities, such as canary deployment

Nov 23, 2022
This repository hosts the development of the Gardener extension that automatically deploys the Flux controllers to shoot clusters.

Gardener Extension for Flux Project Gardener implements the automated management and operation of Kubernetes clusters as a service. Its main principle

Dec 3, 2022
PolarDB Stack is a DBaaS implementation for PolarDB-for-Postgres; as an operator it creates and manages PolarDB/PostgreSQL clusters running in Kubernetes. It provides re-construct, failover/switch-over, scale up/out, and high-availability capabilities for each cluster.

PolarDB Stack open-source edition lifecycle. 1 System overview: PolarDB is Alibaba Cloud's self-developed cloud-native relational database, built on a Shared-Storage architecture that separates compute from storage. The database has moved from the traditional Share-Nothing design to a Shared-Storage architecture, from the original N copies of compute + N copies of storage to N copies of compute + 1 copy of storage

Nov 8, 2022
KinK is a helper CLI that makes it easy to manage KinD clusters as Kubernetes pods. Designed to spin clusters up quickly for testing, with batteries included.

kink A helper CLI that facilitates managing KinD clusters as Kubernetes pods. Table of Contents kink (KinD in Kubernetes) Introduction How it works?

Dec 10, 2022
Deploy, manage, and secure applications and resources across multiple clusters using CloudFormation and Shipa

CloudFormation provider Deploy, secure, and manage applications across multiple clusters using CloudFormation and Shipa. Development environment setup

Feb 12, 2022
Natural-deploy - A natural and simple way to deploy workloads or anything on other machines.

Natural Deploy It's a Go way of doing Ansible: Motivation: Have you ever felt, when using Ansible or any declarative type of program that is used for dep

Jan 3, 2022
Automating Kubernetes Rollouts with Argo and Prometheus. Check out the demo URL below

observe-argo-rollout Demo for Automating and Monitoring Kubernetes Rollouts with Argo and Prometheus Performing Demo The demo can be found on Katacoda

Nov 16, 2022
A Terraform controller for Flux

tf-controller A Terraform controller for Flux Quick start Here's a simple exampl

Dec 29, 2022
ArgoCD is widely used for enabling CD GitOps. ArgoCD internally builds manifests from source data in a Git repository and auto-syncs them with target clusters.

ArgoCD Interlace ArgoCD is widely used for enabling CD GitOps. ArgoCD internally builds manifest from source data in Git repository, and auto-sync it

Dec 14, 2022
grafana-sync Keep your grafana dashboards in sync.

grafana-sync Keep your grafana dashboards in sync. Table of Contents grafana-sync Table of Contents Installing Getting Started Pull Save all dashboard

Dec 14, 2022
vcluster - Create fully functional virtual Kubernetes clusters - Each cluster runs inside a Kubernetes namespace and can be started within seconds

Website • Quickstart • Documentation • Blog • Twitter • Slack vcluster - Virtual Clusters For Kubernetes Lightweight & Low-Overhead - Based on k3s, bu

Jan 4, 2023
provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters

provider-kubernetes provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters ty

Dec 14, 2022
Kubernetes Operator to sync secrets between different secret backends and Kubernetes

Vals-Operator Here at Digitalis we love vals, it's a tool we use daily to keep secrets stored securely. We also use secrets-manager on the Kubernetes

Nov 13, 2022
Crossplane provider to provision and manage Kubernetes objects on (remote) Kubernetes clusters.

provider-kubernetes provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters ty

Jan 3, 2023
Kubernetes IN Docker - local clusters for testing Kubernetes

kind is a tool for running local Kubernetes clusters using Docker container "nodes".

Jan 5, 2023
Kubernetes IN Docker - local clusters for testing Kubernetes

Please see Our Documentation for more in-depth installation etc. kind is a tool for running local Kubernetes clusters using Docker container "nodes".

Feb 14, 2022
Harbormaster - Toolkit for automating the creation & mgmt of Docker components and tools

My development environment is MacOS with an M1 chip and I mostly develop for lin

Feb 17, 2022
Kubernetes OS Server - Kubernetes Extension API server exposing OS configuration like sysctl via Kubernetes API

KOSS is an Extension API Server which exposes OS properties and functionality using the Kubernetes API, so it can be accessed using e.g. kubectl. At the moment this is highly experimental and only managing sysctl is supported. To make things actually usable, you must run the KOSS binary as root on the machine you will be managing.

May 19, 2021