Crossplane provider to provision and manage Kubernetes objects on (remote) Kubernetes clusters.

provider-kubernetes

provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters typically provisioned by Crossplane. It provides:

  • A ProviderConfig resource type that points to a credentials Secret.
  • An Object resource type for managing arbitrary Kubernetes objects.
  • A managed resource controller that reconciles Object-typed resources and manages the Kubernetes objects they describe.
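
To see how these pieces fit together, here is a minimal sketch assembled from the examples later on this page; the ProviderConfig name and Secret coordinates are illustrative, not prescriptive:

apiVersion: kubernetes.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: kubernetes-provider
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: cluster-config     # assumed Secret holding a kubeconfig for the target cluster
      key: kubeconfig
---
apiVersion: kubernetes.crossplane.io/v1alpha1
kind: Object
metadata:
  name: sample-namespace
spec:
  forProvider:
    manifest:                  # applied verbatim to the target cluster
      apiVersion: v1
      kind: Namespace
      metadata:
        labels:
          example: "true"
  providerConfigRef:
    name: kubernetes-provider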

Install

If you would like to install provider-kubernetes without modifications, you may do so using the Crossplane CLI in a Kubernetes cluster where Crossplane is installed:

kubectl crossplane install provider crossplane/provider-kubernetes:main

You may also manually install provider-kubernetes by creating a Provider directly:

apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-kubernetes
spec:
  package: "crossplane/provider-kubernetes:main"

Developing locally

Start a local development environment with kind, where Crossplane is installed:

make
make local-dev

Run the controller against the cluster:

make run

Since the controller is running outside the kind cluster, you need to make the API server accessible to it (in a separate terminal):

sudo kubectl proxy --port=8081

Testing in Local Cluster

  1. Prepare a provider config for the local cluster (a sketch of these config files follows these steps):

     • If provider-kubernetes is running in the cluster (e.g. the provider was installed with Crossplane):

    SA=$(kubectl -n crossplane-system get sa -o name | grep provider-kubernetes | sed -e 's|serviceaccount\/|crossplane-system:|g')
    kubectl create clusterrolebinding provider-kubernetes-admin-binding --clusterrole cluster-admin --serviceaccount="${SA}"
    kubectl apply -f examples/provider/config-in-cluster.yaml
    
     • If provider-kubernetes is running outside the cluster (e.g. running locally with make run):

    KUBECONFIG=$(kind get kubeconfig --name local-dev | sed -e 's|server:\s*.*$|server: http://localhost:8081|g')
    kubectl -n crossplane-system create secret generic cluster-config --from-literal=kubeconfig="${KUBECONFIG}" 
    kubectl apply -f examples/provider/config.yaml
    
  2. Now you can create Object resources with a provider reference; see the sample object.yaml:

    kubectl create -f examples/object/object.yaml
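
For reference, the two provider configs used above look roughly like the following. This is a hedged sketch: the in-cluster variant assumes the InjectedIdentity credentials source, and the Secret coordinates mirror the commands above. Check examples/provider/ for the authoritative files.

    # config-in-cluster.yaml (sketch): uses the provider pod's own service
    # account, which is why the cluster-admin binding above is needed.
    apiVersion: kubernetes.crossplane.io/v1alpha1
    kind: ProviderConfig
    metadata:
      name: kubernetes-provider
    spec:
      credentials:
        source: InjectedIdentity
    ---
    # config.yaml (sketch): reads the kubeconfig from the cluster-config
    # Secret created above.
    apiVersion: kubernetes.crossplane.io/v1alpha1
    kind: ProviderConfig
    metadata:
      name: kubernetes-provider
    spec:
      credentials:
        source: Secret
        secretRef:
          namespace: crossplane-system
          name: cluster-config
          key: kubeconfig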
    

Cleanup

make local.down

Comments
  • Cannot get secret when performing a patch

    Cannot get secret when performing a patch

    What happened?

    I have created a cluster in AWS using Crossplane. In the same composition, I also install FluxCD onto the cluster, which creates a secret with a couple of values in it. I am interested in the public key value (identity.pub), so I am using the Kubernetes provider to try to retrieve the public key from the secret; however, it cannot be found. Currently, I'm just trying to retrieve it and put the value into a ConfigMap, just to see if secret retrieval works. Once this works, I will be attempting to output the public key value to the XR using ToCompositeFieldPath so I can use it in another resource (a GitLab resource that uses the public key to create a Deploy Key).

    The following error is returned when I do a describe on the resource.

    Events:
      Type     Reason                         Age                   From                                     Message
      ----     ------                         ----                  ----                                     -------
      Warning  CannotConnectToProvider        5m21s (x18 over 17m)  managed/object.kubernetes.crossplane.io  cannot get ProviderConfig: ProviderConfig.kubernetes.crossplane.io "default" not found
      Warning  CannotObserveExternalResource  21s (x7 over 5m4s)    managed/object.kubernetes.crossplane.io  cannot resolve resource references: cannot get referenced resource: secrets "test-vpc-sync" not found
    

    How can we reproduce it?

    Have an external resource that is a secret on another cluster, and then try to retrieve one of its values.

    Kubernetes Composition

        - name: kubernetes
          base:
            apiVersion: kubernetes.crossplane.io/v1alpha1
            kind: ProviderConfig
            spec:
              credentials:
                source: Secret
                secretRef:
                  key: kubeconfig
          patches:
          - fromFieldPath: spec.id
            toFieldPath: metadata.name
          - fromFieldPath: spec.writeConnectionSecretToRef.namespace
            toFieldPath: spec.credentials.secretRef.namespace
          - fromFieldPath: spec.id
            toFieldPath: spec.credentials.secretRef.name
            transforms:
              - type: string
                string:
                  fmt: "%s-cluster"
          readinessChecks:
            - type: None
        - name: get-public-key
          base:
            apiVersion: kubernetes.crossplane.io/v1alpha1
            kind: Object
            metadata:
              name: foo
            spec:
              references:
                - patchesFrom:
                    apiVersion: v1
                    kind: Secret
                    name: "test-vpc-sync"
                    namespace: flux-system
                    fieldPath: data[identity.pub]
                  toFieldPath: data.publicKey
              forProvider:
                manifest:
                  apiVersion: v1
                  kind: ConfigMap
                  metadata:
                    namespace: flux-system
                    name: pubsecret
                  data:
                    publicKey: sample-value
          patches:
          - fromFieldPath: spec.id
            toFieldPath: spec.providerConfigRef.name
          - fromFieldPath: spec.id
            toFieldPath: metadata.name
          - fromFieldPath: spec.writeConnectionSecretToRef.namespace
            toFieldPath: spec.credentials.secretRef.namespace
          - fromFieldPath: spec.id
            toFieldPath: spec.credentials.secretRef.name
            transforms:
              - type: string
                string:
                  fmt: "%s-cluster"
          readinessChecks:
            - type: None
    

    What environment did it happen in?

    Crossplane version: 1.7.1 (Helm chart)

    • AWS EKS
    • K8s: 1.22
  • Implement feature management policy and reference

    Implement feature management policy and reference

    Signed-off-by: Ying Mo [email protected]

    Description of your changes

    Fixes #13 to implement the proposed features that are documented in this design doc.

    I have:

    • [x] Read and followed Crossplane's contribution process.
    • [x] Run make reviewable test to ensure this PR is ready for review.

    How has this code been tested

    All newly-added methods have been tested by adding new test cases in object_test.go. Just follow the same pattern used in object_test.go: define sample data, add new cases with expected results, and iterate over the cases to trigger Observe/Create/Update/Delete, as well as AddFinalizer/RemoveFinalizer. Code coverage has also been increased.

  • Possibility to configure RBAC

    Possibility to configure RBAC

    What problem are you facing?

    I'm trying to use provider-kubernetes to create Kubernetes Jobs; however, the RBAC settings of the ServiceAccount it uses won't allow it. I can specify a ServiceAccount to use, but then I also need to manually add the RBAC settings for my resources as well as for the CRDs from this package.

    How could Crossplane help solve your problem?

    Ability to somehow configure RBAC settings for the provider without having to create an extra ServiceAccount. I suppose it would also be okay if the created ServiceAccount were deterministic (for example, by specifying its name); then we could just add extra Roles on top of the pre-existing ones.
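
    Until that exists, a workaround consistent with the Testing in Local Cluster section above is to bind extra permissions to the provider's generated ServiceAccount by hand. The sketch below is illustrative, not an official recipe: the ClusterRole scope and the ServiceAccount name are assumptions (discover the generated SA as shown in that section):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: provider-kubernetes-jobs
    rules:
      - apiGroups: ["batch"]
        resources: ["jobs"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: provider-kubernetes-jobs
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: provider-kubernetes-jobs
    subjects:
      - kind: ServiceAccount
        name: provider-kubernetes-abc123   # hypothetical generated SA name
        namespace: crossplane-system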

  • Lots of rate-limited requests

    Lots of rate-limited requests

    I see lots of rate-limited requests in the logs of this controller, like the ones attached.

    I know this has to do with the k8s/client-go discovery cache. I know @negz is working on this and have seen tickets in kubernetes/kubernetes and kubernetes/kubectl.

    Does this mean we have to not only solve the discovery cache problem for kubectl, but also for controllers like this one?

    I1115 16:57:11.131312       1 request.go:655] Throttling request took 1.040355092s, request: GET:https://<CLUSTER_IP>:443/apis/cloud.google.com/v1?timeout=32s
    I1206 21:42:01.320703       1 request.go:655] Throttling request took 1.197162562s, request: GET:https://<CLUSTER_IP>:443/apis/microsoft.resources.azure.com/v1alpha1api20200601?timeout=32s
    I1206 21:42:11.514747       1 request.go:655] Throttling request took 11.391711778s, request: GET:https://<CLUSTER_IP>:443/apis/networking.internal.knative.dev/v1alpha1?timeout=32s
    I1206 21:42:21.624922       1 request.go:655] Throttling request took 5.194396672s, request: GET:https://<CLUSTER_IP>:443/apis/database.azure.crossplane.io/v1alpha3?timeout=32s
    I1206 21:42:31.682872       1 request.go:655] Throttling request took 1.795218693s, request: GET:https://<CLUSTER_IP>:443/apis/container.cnrm.cloud.google.com/v1beta1?timeout=32s
    I1206 21:42:41.881957       1 request.go:655] Throttling request took 11.993478748s, request: GET:https://<CLUSTER_IP>:443/apis/crd.projectcalico.org/v1?timeout=32s
    I1206 21:42:51.899638       1 request.go:655] Throttling request took 8.589133901s, request: GET:https://<CLUSTER_IP>:443/apis/security.istio.io/v1beta1?timeout=32s
    I1206 21:43:44.499915       1 request.go:655] Throttling request took 1.194056856s, request: GET:https://<CLUSTER_IP>:443/apis/dataflow.cnrm.cloud.google.com/v1beta1?timeout=32s
    I1206 21:43:54.696859       1 request.go:655] Throttling request took 11.391668377s, request: GET:https://<CLUSTER_IP>:443/apis/microsoft.storage.azure.com/v1alpha1api20210401storage?timeout=32s
    
  • In a Crossplane composition, cannot patch spec.forProvider.manifest.metadata.namespace

    In a Crossplane composition, cannot patch spec.forProvider.manifest.metadata.namespace

    Motivation

    I want to provide a simple API for a complicated Kubernetes resource manifest. For better demonstration, the "complicated" k8s resource will be a Secret.

    What happened?

    I created the following XRD and Composition

    ---
    apiVersion: apiextensions.crossplane.io/v1
    kind: CompositeResourceDefinition
    metadata:
      name: compositeexternalsecrets.my-domain.io
    spec:
      group: my-domain.io
      names:
        kind: CompositeSecret
        plural: compositesecrets
      claimNames:
        kind: MySecret
        plural: mysecrets
      versions:
        - name: v1alpha1
          served: true
          referenceable: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    dataInject:
                      type: string
    ---
    apiVersion: apiextensions.crossplane.io/v1
    kind: Composition
    metadata:
      name: mysecret
    spec:
      compositeTypeRef:
        apiVersion: my-domain.io/v1alpha1
        kind: CompositeSecret
      resources:
        - base:
            apiVersion: kubernetes.crossplane.io/v1alpha1
            kind: Object
            spec:
              providerConfigRef:
                name: crossplane-provider-kubernetes-config
              forProvider:
                manifest:
                  apiVersion: v1
                  kind: Secret
                  type: Opaque
                  data:
                    default: default
          patches:
            - fromFieldPath: "metadata.namespace"
              toFieldPath: "spec.forProvider.manifest.metadata.namespace"
            - fromFieldPath: "spec.dataInject"
              toFieldPath: "spec.forProvider.manifest.data[injected]"
    

    And claim it with

    ---
    apiVersion: my-domain.io/v1alpha1
    kind: MySecret
    metadata:
      name: secret
      namespace: some-namespace
    spec:
      dataInject: injected
    

    The idea is to create a k8s resource of kind MySecret with as little data as possible, and from that create a more complex and, importantly, different k8s resource. In this example, I would like to see a Secret created that looks like this:

    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: secret-hashed-12379 # generated
      namespace: some-namespace # identical to MySecret's namespace
    data:
      default: default
      injected: injected
    

    What happens instead is that the Secret does not get created, and I see the following error: an empty namespace may not be set when a resource name is provided.

    A kubectl describe object secret-f7jbk-n52l5 returns:

    Name:         secret-f7jbk-n52l5
    Namespace:
    Labels:       crossplane.io/claim-name=secret
                  crossplane.io/claim-namespace=sms-infra
                  crossplane.io/composite=secret-f7jbk
    Annotations:  crossplane.io/external-name: secret-f7jbk-n52l5
    API Version:  kubernetes.crossplane.io/v1alpha1
    Kind:         Object
    Metadata:
      ...
      Owner References:
        API Version:     my-domain.io/v1alpha1
        Controller:      true
        Kind:            CompositeSecret
        Name:            secret-f7jbk
        UID:             c48be853-eb31-4f60-9f78-13ade20dc7c8
      Resource Version:  1267707672
      UID:               c7b56c56-8578-4095-a2d1-6d6b165ff750
    Spec:
      For Provider:
        Manifest:
          API Version:  v1
          Data:
            Default:      default
            Injected:     injected
          Kind:           Secret
          Type:           Opaque
      Management Policy:  Default
      Provider Config Ref:
        Name:  crossplane-provider-kubernetes-config
    Status:
      At Provider:
      Conditions:
        Last Transition Time:  2022-04-19T13:51:05Z
        Message:               observe failed: cannot get object: an empty namespace may not be set when a resource name is provided
        Reason:                ReconcileError
        Status:                False
        Type:                  Synced
    Events:
      Type     Reason                         Age                  From                                     Message
      ----     ------                         ----                 ----                                     -------
      Warning  CannotObserveExternalResource  92s (x11 over 7m7s)  managed/object.kubernetes.crossplane.io  cannot get object: an empty namespace may not be set when a resource name is provided
    

    I also tried other resources instead of Secrets; patching the namespace does not work. I am using provider-kubernetes for this because of Crossplane's composite-resource limitation for cluster-scoped resources: https://github.com/crossplane/crossplane/issues/1730
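
    A likely cause, inferable from the describe output above: the composite resource is cluster-scoped, so metadata.namespace is empty on the XR, the patch writes nothing, and the rendered Object ends up with a name (via the external-name annotation) but no namespace. One hedged workaround sketch (not from the issue, and assuming the claim-namespace label visible in the Labels above is present) is to patch from that label instead:

          patches:
            - fromFieldPath: metadata.labels[crossplane.io/claim-namespace]
              toFieldPath: spec.forProvider.manifest.metadata.namespace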

    What environment did it happen in?

    Crossplane version: v1.6.0; provider-kubernetes version: v0.3.0

    kubectl version
    
    Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:51:05Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"darwin/arm64"}
    Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:04:16Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
  • Design proposal for resource management policy and resource reference

    Design proposal for resource management policy and resource reference

    New features:

    • Allow users to define a resource management policy instructing the provider how to manage Kubernetes resources in a fine-grained manner.
    • Allow users to define resource references for an Object as dependencies, to retrieve values from dependent resources at runtime and to guarantee resource rendering in a specified order.
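
    As a hedged sketch of what these two features might look like on an Object: field names follow the examples elsewhere on this page, and the Observe policy value is an assumption, with Default (seen in the describe output above) being full management:

    apiVersion: kubernetes.crossplane.io/v1alpha1
    kind: Object
    metadata:
      name: sample-configmap
    spec:
      managementPolicy: Observe      # assumed value: watch only, never create/update/delete
      references:
        # Resolve data.someKey from another resource before this one renders.
        - patchesFrom:
            apiVersion: v1
            kind: ConfigMap
            name: source-config      # hypothetical source object
            namespace: default
            fieldPath: data.someKey
          toFieldPath: data.someKey
      forProvider:
        manifest:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            namespace: default
            name: sample-configmap
      providerConfigRef:
        name: kubernetes-provider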
  • Managed Resource that could copy object from control plane to remote clusters

    Managed Resource that could copy object from control plane to remote clusters

    What problem are you facing?

    While building platform configurations with Crossplane, it is quite common to need to copy some objects from the local cluster (a.k.a. the control plane) to remote clusters. The most common examples are distributing secrets like Docker Hub and Helm repo credentials, or copying the connection credentials of a managed resource (e.g. a database) to a remote cluster (i.e. an application cluster).

    This was asked a couple of times in the Crossplane slack and the best solution (workaround?) we have so far is using provider-helm's set.valueFrom .secretKeyRef: https://crossplane.slack.com/archives/CEG3T90A1/p1629975361008800?thread_ts=1629969149.007900&cid=CEG3T90A1

    There is also an open issue in Crossplane which could be used as a solution here via a composition patch, but that may not help if a composition is not used or a whole object needs to be copied.

    How could Crossplane help solve your problem?

    Currently, provider-kubernetes seems to be the best fit to implement/support this in the form of a managed resource.

    I came up with something like the following after a quick thought:

    apiVersion: kubernetes.crossplane.io/v1alpha2
    kind: Object
    metadata:
      name: sample-namespace
    spec:
      forProvider:
        manifest:
          apiVersion: v1
          kind: Secret
          metadata:
            name: regcred
            namespace: application
        from:
          apiVersion: v1
          kind: Secret
          name: regcred
          namespace: crossplane-system
          fieldPaths: 
            - type
            - data
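
    For comparison, the reference feature that later landed (see the design-proposal entry above) can approximate this copy with patchesFrom. A hedged sketch, reusing field names from the first comment on this page; whether patchesFrom resolves against the control plane or the remote cluster may depend on the provider version, and this sketch assumes the control plane, matching the use case described here:

    apiVersion: kubernetes.crossplane.io/v1alpha1
    kind: Object
    metadata:
      name: regcred-copy
    spec:
      references:
        - patchesFrom:
            apiVersion: v1
            kind: Secret
            name: regcred
            namespace: crossplane-system
            fieldPath: data          # copy the whole data map (assumed to be allowed)
          toFieldPath: data
      forProvider:
        manifest:
          apiVersion: v1
          kind: Secret
          metadata:
            name: regcred
            namespace: application
          type: Opaque               # illustrative; the proposal copies type too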
    
  • Patch existing resources (Object)

    Patch existing resources (Object)

    When using provider-aws to create an EKS cluster in a composition, it would be nice to use this provider to "patch" the aws-auth ConfigMap with additional mapRoles or mapUsers.

    Not sure if this should be implemented here or in provider-aws.

  • kubectl get provider doesn't show the Kubernetes provider config

    kubectl get provider doesn't show the Kubernetes provider config

    For the default Crossplane providers, you can see the provider configs via

    kubectl get provider
    

    The Kubernetes provider doesn't show up in that list as of now.

    That's because the categories section is missing from the providerconfigs.kubernetes.crossplane.io CRD:

      names:
        categories:
        - crossplane
        - provider
    

    This probably has to be generated through some kubebuilder annotations.

  • Initial implementation

    Initial implementation

    Adds initial implementation for provider-kubernetes.

    Provider Kubernetes enables deployment and management of arbitrary Kubernetes objects on Kubernetes clusters typically provisioned by Crossplane. In other words, it is similar to provider-helm, but for Kubernetes resources rather than Helm packages.

    For example, the following Object resource would create a namespace on the Kubernetes cluster configured with providerConfigRef.

    apiVersion: kubernetes.crossplane.io/v1alpha1
    kind: Object
    metadata:
      name: team-a-namespace
    spec:
      forProvider:
        manifest:
          apiVersion: v1
          kind: Namespace
          metadata:
            # name in manifest is optional and defaults to Object name
            # name: some-other-name
            labels:
              example: "true"
      providerConfigRef:
        name: kubernetes-provider
    
  • Bump sigs.k8s.io/controller-runtime from 0.12.3 to 0.13.1

    Bump sigs.k8s.io/controller-runtime from 0.12.3 to 0.13.1

    Bumps sigs.k8s.io/controller-runtime from 0.12.3 to 0.13.1.

    Release notes

    Sourced from sigs.k8s.io/controller-runtime's releases.

    v0.13.1

    What's Changed

    Full Changelog: https://github.com/kubernetes-sigs/controller-runtime/compare/v0.13.0...v0.13.1

    v0.13.0

    changes since v0.12.3

    :warning: Breaking Changes

    • Do not mutate the global warning handler (#1944)
    • Add GetOptions as optional argument of client.Reader and all its implementation (#1917)

    :sparkles: New Features

    • Bump golangci lint to v1.49.0 (#1988)
    • Update k8s API to v1.25 (#1985)
    • Implement IgnoreAlreadyExists (#1965)
    • Bump k8s v0.25.0-alpha.3 (#1967)
    • webhook: add an option to recover from panics in handler (#1900)
    • Provide access to admission.Request in custom validator/defaulter (#1950)
    • komega: add EqualObject matcher (#1833)
    • fix some typos (#1924)
    • Allow TLS to be entirely configured on webhook server (#1897)

    :bug: Bug Fixes

    • Rearange EventBroadcaster log statement. (#1974)
    • Fix log depth for DelegatingLogSink (#1975)
    • Remove no-op clientgo reflector metrics (#1946)
    • Fix webhook write response error for broken HTTP connection (#1930)
    • Fix issue with starting multiple test envs (#1910)
    • don't override global log in builder (#1907)
    • skip mutation handler when received deletion verb (#1765)
    • fix loading CRDs from multiple directories in envtests (#1904)

    Thanks to all our contributors!

    Commits
    • 44c5d50 Merge pull request #2028 from k8s-infra-cherrypick-robot/cherry-pick-2023-to-...
    • 271f9e6 Add tls options to manager.Options
    • d242fe2 Merge pull request #1988 from sbueringer/pr-bump-golangci-lint
    • 4b208ab Bump golangci lint to v1.49.0
    • 02dc464 Merge pull request #1985 from Fedosin/k8s_v125
    • 0873d15 Bump k8s libs to v1.25
    • 7a5d60d Merge pull request #1983 from nakamasato/fix-reconciler-comment
    • 3ba8cf0 docs: update doc for reconcile example
    • 2d210d0 Merge pull request #1965 from rstefan1/implement-ignore-already-exists
    • c2c26e3 Implement IgnoreAlreadyExists
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Container creation fails for version 0.5.0

    Container creation fails for version 0.5.0

    What happened?

    The Kubernetes provider controller container fails to create with the error: Error: failed to start container "provider-kubernetes": Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: chdir to cwd ("/home/nonroot") set in config.json failed: permission denied: unknown

    Similar issue for provider-aws and for provider-helm

    The last working release is v0.4.0.

    How can we reproduce it?

    1. create Minikube cluster
    2. install Crossplane
    3. add Kubernetes provider

    What environment did it happen in?

    Crossplane version: 1.9.0 (installed via Helm chart); k8s: Minikube with v1.20.2

  • Bump sigs.k8s.io/controller-runtime from 0.12.3 to 0.14.1

    Bump sigs.k8s.io/controller-runtime from 0.12.3 to 0.14.1

    Bumps sigs.k8s.io/controller-runtime from 0.12.3 to 0.14.1.

    Release notes

    Sourced from sigs.k8s.io/controller-runtime's releases.

    v0.14.1

    Changes since v0.14.0

    :bug: Bug Fixes

    Full Changelog: https://github.com/kubernetes-sigs/controller-runtime/compare/v0.14.0...v0.14.1

    v0.14.0

    Changes since v0.13.1

    :warning: Breaking Changes

    • Add Get functionality to SubResourceClient (#2094)
    • Allow configuring RecoverPanic for controllers globally (#2093)
    • Add client.SubResourceWriter (#2072)
    • Support registration and removal of event handler (#2046)
    • Update Kubernetes dependencies to v0.26 (#2043, #2087)
    • Zap log: Default to RFC3339 time encoding (#2029)
    • cache.BuilderWithOptions inherit options from caller (#1980)

    :sparkles: New Features

    • Builder: Do not require For (#2091)
    • support disable deepcopy on list function (#2076)
    • Add cluster.NewClientFunc with options (#2054)
    • Tidy up startup logging of kindWithCache source (#2057)
    • Add function to get reconcileID from context (#2056)
    • feat: add NOT predicate (#2031)
    • Allow to provide a custom lock interface to manager (#2027)
    • Add tls options to manager.Options (#2023)
    • Update Go version to 1.19 (#1986)

    :bug: Bug Fixes

    • Prevent manager from getting started a second time (#2090)
    • Missing error log for in-cluster config (#2051)
    • Skip custom mutation handler when delete a CR (#2049)
    • fix: improve semantics of combining cache selectorsByObject (#2039)
    • Conversion webhook should not panic when conversion request is nil (#1970)

    :seedling: Others

    • Prepare for release 0.14 (#2100)
    • Generate files and update modules (#2096)
    • Bump github.com/onsi/ginkgo/v2 from 2.5.1 to 2.6.0 (#2097)
    • Bump golang.org/x/time (#2089)
    • Update OWNERS: remove inactive members, promote fillzpp sbueringer (#2088, #2092)
    • Default ENVTEST version to a working one (1.24.2) (#2081)
    • Update golangci-lint to v1.50.1 (#2080)
    • Bump go.uber.org/zap from 1.23.0 to 1.24.0 (#2077)
    • Bump golang.org/x/sys from 0.2.0 to 0.3.0 (#2078)
    • Ignore Kubernetes Dependencies in Dependabot (#2071)

    ... (truncated)

    Commits
    • 84c5c9f 🐛 controllers without For() fail to start (#2108)
    • ddcb99d Merge pull request #2100 from vincepri/release-0.14
    • 69f0938 Merge pull request #2094 from alvaroaleman/subresoruce-get
    • 8738e91 Merge pull request #2091 from alvaroaleman/no-for
    • ca4b4de Merge pull request #2096 from lucacome/generate
    • 5673341 Merge pull request #2097 from kubernetes-sigs/dependabot/go_modules/github.co...
    • 7333aed :seedling: Bump github.com/onsi/ginkgo/v2 from 2.5.1 to 2.6.0
    • d4f1e82 Generate files and update modules
    • a387bf4 Merge pull request #2093 from alvaroaleman/recover-panic-globally
    • da7dd5d :warning: Allow configuring RecoverPanic for controllers globally
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • status.unavailableReplicas is not removed when the key is removed

    status.unavailableReplicas is not removed when the key is removed

    What happened?

    Composition patch:

          - type: ToCompositeFieldPath
            fromFieldPath: status.atProvider.manifest.status
            toFieldPath: status.nginx

    Resulting XR status:

      nginx:
        availableReplicas: 1
        conditions:
          - lastTransitionTime: '2022-12-19T02:11:59Z'
            lastUpdateTime: '2022-12-19T02:11:59Z'
            message: Deployment has minimum availability.
            reason: MinimumReplicasAvailable
            status: 'True'
            type: Available
          - lastTransitionTime: '2022-12-19T02:11:35Z'
            lastUpdateTime: '2022-12-19T02:11:59Z'
            message: ReplicaSet "abcd-7fd8bc7fcd" has successfully progressed.
            reason: NewReplicaSetAvailable
            status: 'True'
            type: Progressing
        replicas: 1
        unavailableReplicas: 1
        updatedReplicas: 1
    

    Deployment created by the Object:

          status:
            availableReplicas: 1
            conditions:
            - lastTransitionTime: "2022-12-19T02:11:59Z"
              lastUpdateTime: "2022-12-19T02:11:59Z"
              message: Deployment has minimum availability.
              reason: MinimumReplicasAvailable
              status: "True"
              type: Available
            - lastTransitionTime: "2022-12-19T02:11:35Z"
              lastUpdateTime: "2022-12-19T02:11:59Z"
              message: ReplicaSet "abcd-7fd8bc7fcd" has successfully progressed.
              reason: NewReplicaSetAvailable
              status: "True"
              type: Progressing
            observedGeneration: 1
            readyReplicas: 1
            replicas: 1
            updatedReplicas: 1
    

    status.nginx.unavailableReplicas on the XR still exists after the status.unavailableReplicas key was removed from the Deployment created by the Object.

    How can we reproduce it?

    What environment did it happen in?

    provider-kubernetes version: provider-kubernetes-f935b3d8b7ec

    Kubernetes version: 1.19

  • Bump k8s.io/api from 0.25.3 to 0.26.0

    Bump k8s.io/api from 0.25.3 to 0.26.0

    Bumps k8s.io/api from 0.25.3 to 0.26.0.

    Commits
    • 2ee9a6c Update dependencies to v0.26.0 tag
    • 07ac8fe Merge remote-tracking branch 'origin/master' into release-1.26
    • 566ee01 Update golang.org/x/net 1e63c2f
    • b966dc9 sync: update go.mod
    • 053624e Merge pull request #111023 from pohly/dynamic-resource-allocation
    • 3590eda Merge pull request #113375 from atiratree/PodHealthyPolicy-api
    • 5a4f9a5 generated
    • 5cb3202 Merge pull request #113186 from ttakahashi21/KEP-3294
    • 993c43c api: add UnhealthyPodEvictionPolicy for PDBs
    • dfd6ea2 Generate code
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Bump k8s.io/apimachinery from 0.25.3 to 0.26.0

    Bump k8s.io/apimachinery from 0.25.3 to 0.26.0

    Bumps k8s.io/apimachinery from 0.25.3 to 0.26.0.

    Commits
    • 5d4cdd2 Merge remote-tracking branch 'origin/master' into release-1.26
    • 6cbc4a3 Update golang.org/x/net 1e63c2f
    • 6561235 Merge pull request #113699 from liggitt/manjusaka/fix-107415
    • dad8cd8 Update workload selector validation
    • fe82462 Add extra value validation for matchExpression field in LabelSelector
    • 067949d update k8s.io/utils to fix util tracing panic
    • 0ceff90 Merge pull request #112223 from astraw99/fix-ownerRef-validate
    • 9e85d3a Merge pull request #112649 from howardjohn/set/optimize-everything-nothing
    • 88a1448 Rename and comment on why sharing is safe
    • b03a432 Merge pull request #113367 from pohly/dep-ginkgo-gomega
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Bump go.uber.org/zap from 1.19.1 to 1.24.0

    Bump go.uber.org/zap from 1.19.1 to 1.24.0

    Bumps go.uber.org/zap from 1.19.1 to 1.24.0.

    Release notes

    Sourced from go.uber.org/zap's releases.

    v1.24.0

    Enhancements:

    • #1148: Add Level to both Logger and SugaredLogger that reports the current minimum enabled log level.
    • #1185: SugaredLogger turns errors to zap.Error automatically.

    Thanks to @Abirdcfly, @craigpastro, @nnnkkk7, and @sashamelentyev for their contributions to this release.

    #1148: uber-go/zap#1148 #1185: uber-go/zap#1185

    v1.23.0

    Enhancements:

    • #1147: Add a zapcore.LevelOf function to determine the level of a LevelEnabler or Core.
    • #1155: Add zap.Stringers field constructor to log arrays of objects that implement String() string.

    #1147: uber-go/zap#1147 #1155: uber-go/zap#1155

    v1.22.0

    Enhancements:

    • #1071: Add zap.Objects and zap.ObjectValues field constructors to log arrays of objects. With these two constructors, you don't need to implement zapcore.ArrayMarshaler for use with zap.Array if those objects implement zapcore.ObjectMarshaler.
    • #1079: Add SugaredLogger.WithOptions to build a copy of an existing SugaredLogger with the provided options applied.
    • #1080: Add *ln variants to SugaredLogger for each log level. These functions provide a string joining behavior similar to fmt.Println.
    • #1088: Add zap.WithFatalHook option to control the behavior of the logger for Fatal-level log entries. This defaults to exiting the program.
    • #1108: Add a zap.Must function that you can use with NewProduction or NewDevelopment to panic if the system was unable to build the logger.
    • #1118: Add a Logger.Log method that allows specifying the log level for a statement dynamically.

    Thanks to @cardil, @craigpastro, @sashamelentyev, @shota3506, and @zhupeijun for their contributions to this release.

    #1071: uber-go/zap#1071 #1079: uber-go/zap#1079 #1080: uber-go/zap#1080 #1088: uber-go/zap#1088

    ... (truncated)

    Changelog

    Sourced from go.uber.org/zap's changelog.

    1.24.0 (30 Nov 2022)

    Enhancements:

    • #1148: Add Level to both Logger and SugaredLogger that reports the current minimum enabled log level.
    • #1185: SugaredLogger turns errors to zap.Error automatically.

    Thanks to @Abirdcfly, @craigpastro, @nnnkkk7, and @sashamelentyev for their contributions to this release.

    #1148: https://github.com/uber-go/zap/pull/1148 #1185: https://github.com/uber-go/zap/pull/1185

    1.23.0 (24 Aug 2022)

    Enhancements:

    • #1147: Add a zapcore.LevelOf function to determine the level of a LevelEnabler or Core.
    • #1155: Add zap.Stringers field constructor to log arrays of objects that implement String() string.

    #1147: uber-go/zap#1147 #1155: uber-go/zap#1155

    1.22.0 (8 Aug 2022)

    Enhancements:

    • #1071: Add zap.Objects and zap.ObjectValues field constructors to log arrays of objects. With these two constructors, you don't need to implement zapcore.ArrayMarshaler for use with zap.Array if those objects implement zapcore.ObjectMarshaler.
    • #1079: Add SugaredLogger.WithOptions to build a copy of an existing SugaredLogger with the provided options applied.
    • #1080: Add *ln variants to SugaredLogger for each log level. These functions provide a string joining behavior similar to fmt.Println.
    • #1088: Add zap.WithFatalHook option to control the behavior of the logger for Fatal-level log entries. This defaults to exiting the program.
    • #1108: Add a zap.Must function that you can use with NewProduction or NewDevelopment to panic if the system was unable to build the logger.
    • #1118: Add a Logger.Log method that allows specifying the log level for a statement dynamically.

    Thanks to @cardil, @craigpastro, @sashamelentyev, @shota3506, and @zhupeijun for their contributions to this release.

    #1071: uber-go/zap#1071 #1079: uber-go/zap#1079 #1080: uber-go/zap#1080 #1088: uber-go/zap#1088

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
KinK is a helper CLI that facilitates managing KinD clusters as Kubernetes pods, designed to spin clusters up quickly for testing, with batteries included.

kink A helper CLI that facilitates managing KinD clusters as Kubernetes pods. Table of Contents kink (KinD in Kubernetes) Introduction How it works?

Dec 10, 2022
Provider-milvus - Milvus provider for crossplane

provider-milvus provider-milvus is a minimal Crossplane Provider that is meant t

Feb 9, 2022
Awesome-italia-remote - A list of remote-friendly or full-remote companies that targets Italian talents

Awesome Italia Remote A list of remote-friendly or full-remote companies that ta

Dec 29, 2022
kcount counts Kubernetes (K8s) objects across clusters.

kcount counts Kubernetes (K8s) objects across clusters. It gets the cluster configuration, including cluster name and namespace, from kubeconfig files

Sep 23, 2022
PolarDB Stack is a DBaaS implementation for PolarDB-for-Postgres: an operator that creates and manages PolarDB/PostgreSQL clusters running in Kubernetes. It provides reconstruction, failover/switch-over, scale-up/out, and high-availability capabilities for each cluster.

PolarDB Stack open-source edition lifecycle. 1. System overview: PolarDB is a cloud-native relational database developed in-house by Alibaba Cloud, built on a shared-storage architecture that separates compute from storage. The database moved from the traditional share-nothing design to a shared-storage architecture: from N compute + N storage replicas to N compute + 1 shared storage.

Nov 8, 2022
An experimental crossplane provider for @zscaler zpa

provider-zpa Crossplane provider for [Zscaler ZPA] The provider built from this repository can be installed into a Crossplane control plane or run sep

Dec 7, 2021
Generate Crossplane Providers from any Terraform Provider

Terrajet - Generate Crossplane Providers from any Terraform Provider Terrajet is a code generator framework that allows developers to build code gener

Dec 29, 2022
Crossplane provider for InfluxDB Cloud

provider-template provider-template is a minimal Crossplane Provider that is meant to be used as a template for implementing new Providers. It comes w

Jan 10, 2022
Crossplane provider for Confluent Cloud

provider-confluent provider-confluent is a minimal Crossplane Provider that is meant to be used as a template for implementing new Providers. It comes

Feb 4, 2022
A minimal Crossplane Provider For Golang

provider-template provider-template is a minimal Crossplane Provider that is mea

Dec 19, 2021
A minimal Crossplane Provider that is meant to be used as a template for implementing new Providers

provider-template provider-template is a minimal Crossplane Provider that is meant to be used as a template for implementing new Providers. It comes w

Jan 16, 2022
Provider-template - Template for writing providers for crossplane

provider-template provider-template is a minimal Crossplane Provider that is mea

Feb 3, 2022
Cloudflare-operator - Manage Cloudflare DNS records with Kubernetes objects

cloudflare-operator Documentation The goal of cloudflare-operator is to manage C

Nov 16, 2022
Automated-gke-cilium-networkpolicy-demo - Quickly provision and tear down a GKE cluster with Cilium enabled for working with Network Policy.

Automated GKE Network Policy Demo Before running the automation, make sure you have the correct variables in env-automation/group_vars/all.yaml. There

Jan 1, 2022
Local Storage is one of the HwameiStor components. It provisions local LVM volumes.

Local Storage Module English | Simplified_Chinese Introduction Local Storage is one of modules of HwameiStor which is a cloud native local storage sys

Aug 6, 2022
🐶 Kubernetes CLI To Manage Your Clusters In Style!

K9s - Kubernetes CLI To Manage Your Clusters In Style! K9s provides a terminal UI to interact with your Kubernetes clusters. The aim of this project i

Jan 9, 2023
Manage large fleets of Kubernetes clusters

Introduction Fleet is GitOps at scale. Fleet is designed to manage up to a million clusters. It's also lightweight enough that it works great for a si

Dec 31, 2022
Simple Tools to help manage non-production Kubernetes Clusters

SecondMate.io A tool to help your nonProduction Kubernetes Clusters running clean. The goal of this tool is to add some features to non production clu

Feb 21, 2022
Deploy, manage, and secure applications and resources across multiple clusters using CloudFormation and Shipa

CloudFormation provider Deploy, secure, and manage applications across multiple clusters using CloudFormation and Shipa. Development environment setup

Feb 12, 2022