Argo CD Operator

A Kubernetes operator for managing Argo CD clusters.

Documentation

See the documentation for installation and usage of the operator.

License

The Argo CD Operator is released under the Apache 2.0 license. See the LICENSE file for details.

Comments
  • feat: Upgrade RH-SSO to v7.5.1 and support kube:admin, ocp groups and proxy env

    What type of PR is this?

    /kind enhancement

    What does this PR do / why we need it:

    This PR adds the below enhancements.

    • Added missing RBAC annotations in argocd_controller.go for the template.openshift.io and oauth.openshift.io API groups. Without these annotations, the RBAC manifests are not generated properly and RH-SSO cannot be installed.
    • Removed hardcoded OIDC client secrets and replaced them with generated random strings (see the sketch after this list). These client secrets do not need to be stored.
    • Added a KeycloakIdentityProviderMapper to work around https://github.com/keycloak/keycloak-operator/issues/471. Without the workaround, tokens can't be translated to credentials that contain the group identity.
    • Propagated proxy environment variables from the operator to the Keycloak containers so that outbound calls go through the cluster proxy when one is configured.
    • Delete the OAuth client using the foreground propagation policy to ensure that the garbage collector removes all instantiated objects before the TemplateInstance itself disappears.
    • Add extra Delete OAuth client calls to work around https://github.com/openshift/client-go/issues/209
    • Add retries to handle the RH-SSO image upgrade.
    • Upgrades the default version of RH-SSO to 7.5.1.
    • Adds support for logging in with the kube:admin OpenShift user.
    • With this PR, RH-SSO can fetch the group details of OpenShift users (you should see the group details of the logged-in user in the Argo CD console). This will help admins define group-level RBAC for their OpenShift groups. https://github.com/keycloak/keycloak/pull/8381
    • kube:admin is the only user available on a fresh cluster by default, but it is not an OpenShift user, so logging in with it did not work previously. https://github.com/keycloak/keycloak/pull/8428
    • RH-SSO works in a proxy-enabled cluster with no additional or manual configuration. https://github.com/keycloak/keycloak/pull/8559
    • Upgrades the Keycloak version to 15.0.2. On non-OpenShift Kubernetes clusters, users will see a Keycloak version upgrade from 9.0.3 to 15.0.2.
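
    A minimal sketch (not the PR's exact code) of how a random OIDC client secret can be generated instead of hardcoding one; the helper name is illustrative:

    package main

    import (
        "crypto/rand"
        "encoding/base64"
        "fmt"
    )

    // generateClientSecret returns a URL-safe random string suitable for use
    // as an OIDC client secret. It does not need to be stored anywhere else,
    // since both sides of the OIDC exchange are configured by the operator.
    func generateClientSecret(nBytes int) (string, error) {
        buf := make([]byte, nBytes)
        if _, err := rand.Read(buf); err != nil {
            return "", err
        }
        return base64.RawURLEncoding.EncodeToString(buf), nil
    }

    func main() {
        secret, err := generateClientSecret(32)
        if err != nil {
            panic(err)
        }
        fmt.Println(secret)
    }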

    Upgrades

    Whether you are using the Operator on Kubernetes or OpenShift, the version upgrades mentioned above are smooth and do not require any manual intervention other than upgrading the operator to the new version (0.3.0) once it is released.

    Have you updated the necessary documentation?

    • [x] Documentation update is required by this PR.
    • [x] Documentation has been updated.

    How to test changes / Special notes to the reviewer:

    Kubernetes:

    1. Deploy the below catalog source
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: argocd-catalog
    spec:
      sourceType: grpc
      image: quay.io/aveerama/argocd-operator-index@sha256:bb0db86d3e6e27fe9d9e6891027db62e3b15f2947d65b059de8c7aae3a582eda
      displayName: Argo CD Operators
      publisher: Argo CD Community
    
    2. Run the make target to install the CRDs: make install
    3. kubectl create namespace argocd
    4. kubectl create -n argocd -f deploy/operator_group.yaml
    5. kubectl create -n argocd -f deploy/subscription.yaml
    6. kubectl create -n argocd -f examples/argocd-keycloak-k8s.yaml

    OpenShift:

    1. Deploy the below catalog source into openshift-operators namespace.
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: argocd-catalog
    spec:
      sourceType: grpc
      image: quay.io/aveerama/argocd-operator-index@sha256:bb0db86d3e6e27fe9d9e6891027db62e3b15f2947d65b059de8c7aae3a582eda
      displayName: Argo CD Operators
      publisher: Argo CD Community
    
    2. Move to Operator Hub to install the Operator.
    3. kubectl create namespace argocd
    4. kubectl create -n argocd -f examples/argocd-keycloak-openshift.yaml

    Testing upgrade:

    1. Install version 0.2.0 of the operator (same for k8s and OpenShift).
    2. Create an Argo CD instance with Keycloak as shown above.
    3. Delete the operator but keep the Argo CD instance and workloads as is.
    4. Now install the new version of the operator and verify that the Keycloak pod is recreated and updated.

    How to test login with kube:admin

    On your OpenShift cluster, go to Networking -> Routes and click on the Argo CD route.

    1. Click on Login via Keycloak -> Login with OpenShift and provide kubeadmin credentials.
    2. Once you are logged in to Argo CD, you can confirm this by looking at the user profile section.

    Run E2E test:

    Download kuttl on your laptop or server. Run the operator locally or through the bundle. Then run the below command to execute the RH-SSO e2e tests: kubectl kuttl test --config kuttl-test-rhsso.yaml

  • Dex can't be disabled

    Describe the bug

    Hi, the documentation describing how to disable Dex is not correct. The following subscription configuration does NOT disable it:

    To Reproduce

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: argocd-operator
    spec:
      # channel: alpha
      name: argocd-operator
      # source: argocd-catalog
      source: operatorhubio-catalog
      sourceNamespace: olm
      config:
        env:
          - name: DISABLE_DEX
            value: "true"
          - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES
            value: argocd
    

    Expected behavior

    The configuration should stop Dex from starting.

    Additional context

    Deployed version: v2.3.3-07ac038

  • Pods stuck in crashloop after update to 0.0.14

    Hi :)

    First thanks for the great Operator :)!

    I updated our Dev Cluster today from 0.0.13 to 0.0.14 and since then two pods are crashlooping :(

    See below logs :)

    argocd-repo-server

    time="2020-10-14T11:48:10Z" level=info msg="Initializing GnuPG keyring at /app/config/gpg/keys"
    time="2020-10-14T11:48:10Z" level=fatal msg="stat /app/config/gpg/keys/trustdb.gpg: permission denied"
    

    argocd-application-controller

    time="2020-10-14T11:48:10Z" level=info msg="appResyncPeriod=3m0s"
    time="2020-10-14T11:48:10Z" level=info msg="Application Controller (version: v1.7.7+33c93ae, built: 2020-09-29T04:56:38Z) starting (namespace: argocd)"
    time="2020-10-14T11:48:10Z" level=info msg="Starting configmap/secret informers"
    time="2020-10-14T11:48:10Z" level=info msg="Configmap/secret informer synced"
    E1014 11:48:10.261304       1 runtime.go:78] Observed a panic: "assignment to entry in nil map" (assignment to entry in nil map)
    goroutine 63 [running]:
    k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1cbec40, 0x227bc80)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa3
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x82
    panic(0x1cbec40, 0x227bc80)
    	/usr/local/go/src/runtime/panic.go:967 +0x166
    github.com/argoproj/argo-cd/util/settings.addStatusOverrideToGK(...)
    	/go/src/github.com/argoproj/argo-cd/util/settings/settings.go:508
    github.com/argoproj/argo-cd/util/settings.(*SettingsManager).GetResourceOverrides(0xc0000f78c0, 0xc000ecaed0, 0x0, 0x0)
    	/go/src/github.com/argoproj/argo-cd/util/settings/settings.go:485 +0x468
    github.com/argoproj/argo-cd/controller/cache.(*liveStateCache).loadCacheSettings(0xc00030db80, 0x10, 0xc0004a1380, 0x1ac619d)
    	/go/src/github.com/argoproj/argo-cd/controller/cache/cache.go:113 +0x9b
    github.com/argoproj/argo-cd/controller/cache.(*liveStateCache).Init(0xc00030db80, 0x2309518, 0xc000639c20)
    	/go/src/github.com/argoproj/argo-cd/controller/cache/cache.go:417 +0x2f
    github.com/argoproj/argo-cd/controller.(*ApplicationController).Run(0xc0004c9680, 0x22ead00, 0xc0004b9580, 0x14, 0xa)
    	/go/src/github.com/argoproj/argo-cd/controller/appcontroller.go:454 +0x26d
    created by main.newCommand.func1
    	/go/src/github.com/argoproj/argo-cd/cmd/argocd-application-controller/main.go:109 +0x90c
    panic: assignment to entry in nil map [recovered]
    	panic: assignment to entry in nil map
    
    goroutine 63 [running]:
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x105
    panic(0x1cbec40, 0x227bc80)
    	/usr/local/go/src/runtime/panic.go:967 +0x166
    github.com/argoproj/argo-cd/util/settings.addStatusOverrideToGK(...)
    	/go/src/github.com/argoproj/argo-cd/util/settings/settings.go:508
    github.com/argoproj/argo-cd/util/settings.(*SettingsManager).GetResourceOverrides(0xc0000f78c0, 0xc000ecaed0, 0x0, 0x0)
    	/go/src/github.com/argoproj/argo-cd/util/settings/settings.go:485 +0x468
    github.com/argoproj/argo-cd/controller/cache.(*liveStateCache).loadCacheSettings(0xc00030db80, 0x10, 0xc0004a1380, 0x1ac619d)
    	/go/src/github.com/argoproj/argo-cd/controller/cache/cache.go:113 +0x9b
    github.com/argoproj/argo-cd/controller/cache.(*liveStateCache).Init(0xc00030db80, 0x2309518, 0xc000639c20)
    	/go/src/github.com/argoproj/argo-cd/controller/cache/cache.go:417 +0x2f
    github.com/argoproj/argo-cd/controller.(*ApplicationController).Run(0xc0004c9680, 0x22ead00, 0xc0004b9580, 0x14, 0xa)
    	/go/src/github.com/argoproj/argo-cd/controller/appcontroller.go:454 +0x26d
    created by main.newCommand.func1
    	/go/src/github.com/argoproj/argo-cd/cmd/argocd-application-controller/main.go:109 +0x90c
    
  • feat: changes to support the feature apps in any namespaces

    What type of PR is this? /kind enhancement

    What does this PR do / why we need it: The PR adds support for the apps-in-any-namespace feature added upstream. To do so, the PR adds a new command argument, --additional-namespace, to the ArgoCD ApplicationController and ArgoCD Server components. The PR also adds new roles in the respective namespaces so that the ArgoCD Server can perform actions in those namespaces (see the sketch below).
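
    A minimal sketch of the idea (illustrative names, not the PR's exact code): build the extra command argument for the server and application controller from the configured source namespaces.

    package main

    import (
        "fmt"
        "strings"
    )

    // additionalNamespacesArg joins the configured source namespaces into the
    // --additional-namespace argument described in this PR. An empty list
    // yields no extra arguments.
    func additionalNamespacesArg(sourceNamespaces []string) []string {
        if len(sourceNamespaces) == 0 {
            return nil
        }
        return []string{"--additional-namespace", strings.Join(sourceNamespaces, ",")}
    }

    func main() {
        cmd := []string{"argocd-server"}
        cmd = append(cmd, additionalNamespacesArg([]string{"argocd-test", "argocd-1"})...)
        fmt.Println(strings.Join(cmd, " "))
    }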

    Have you updated the necessary documentation?

    • [x] Documentation update is required by this PR.
    • [x] Documentation has been updated.

    Which issue(s) this PR fixes: https://issues.redhat.com/browse/GITOPS-2341

    How to test changes / Special notes to the reviewer:

    • Run make install run
    • Create namespaces argocd-test and argocd-1
    • Deploy an ArgoCD instance using below yaml file
    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: example-argocd
    spec:
      controller:
        sourceNamespaces:
        - argocd
        - argocd-1
      notifications:
        enabled: True
        env:
          - name: foo
            value: bar
      server:
        ingress:
          enabled: true
        sourceNamespaces:
        - argocd-test
        - argocd-1
    
    • Check if the Argo CD pods came up successfully using the command kubectl get pods | grep example-argocd

    • Check whether the new roles were created in namespaces argocd-test and argocd-1

    • Check if the namespaces argocd-test and argocd-1 have a new label argocd.argoproj.io/managed-by-cluster-argocd

    • Run the e2e test for the PR: kubectl kuttl test ./tests/k8s --config ./tests/kuttl-tests.yaml --test 1-024_validate_apps_in_any_namespace

  • Add the ability to define content to be inserted into the argocd-cm configmap

    A lot of additional configuration is handled within the argocd-cm configmap. Is there currently a way to define this as part of the request to spin up an ArgoCD instance? I don't see anything that pops out after a quick scan of the docs -- but is this currently possible with this operator? If not, is this something that this operator would want to do in the future? Happy to help contribute some of this if it would be accepted.
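
    If such a field were added, one possible shape inside the operator would be merging user-supplied entries into the managed argocd-cm data during reconciliation. A minimal sketch with hypothetical names (no such field exists today, per the question above):

    package main

    import "fmt"

    // mergeExtraConfig overlays user-provided entries onto the operator-managed
    // argocd-cm data. Later entries win, so users could override defaults.
    func mergeExtraConfig(managed, extra map[string]string) map[string]string {
        out := make(map[string]string, len(managed)+len(extra))
        for k, v := range managed {
            out[k] = v
        }
        for k, v := range extra {
            out[k] = v
        }
        return out
    }

    func main() {
        managed := map[string]string{"application.instanceLabelKey": "argocd.argoproj.io/instance"}
        extra := map[string]string{"accounts.alice": "apiKey, login"}
        fmt.Println(mergeExtraConfig(managed, extra))
    }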

  • feat: Support Ingress Class Annotation in Argo CD CRD

    What type of PR is this? /kind enhancement

    What does this PR do / why we need it: This PR adds IngressClass support to Ingress resources.

    Have you updated the necessary documentation?

    • [x] Documentation update is required by this PR.
    • [x] Documentation has been updated.

    Which issue(s) this PR fixes:

    Fixes #626

    How to test changes / Special notes to the reviewer:

    Relevant tests can be run by the following command:

    go test -v -run 'TestReconcileArgoCD_reconcile_.*Ingress_ingressClassName' ./controllers/argocd/
    

    Change can be tested by setting up a dev environment and applying the following manifest:

    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: example-argocd
      labels:
        example: ingress
    spec:
      server:
        grpc:
          ingress:
            enabled: true
        ingress:
          enabled: true
          ingressClassName: nginx
        insecure: true
    

    Note: As discussed in #626, this PR contains a breaking change: kubernetes.io/ingress.class annotation is no longer added and nginx is no longer the default ingress controller (Kubernetes will fall back to the default ingress class)

    Default nginx annotations (SSL redirect, backend) are still added though.
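
    A minimal sketch of the resulting behavior (assuming a field like .spec.server.ingress.ingressClassName, as in the manifest above): set spec.ingressClassName only when the user provides one, and otherwise leave it unset so Kubernetes falls back to the cluster's default IngressClass.

    package main

    import (
        "fmt"

        networkingv1 "k8s.io/api/networking/v1"
    )

    // applyIngressClass sets spec.ingressClassName when a class is configured.
    // When className is empty, the field stays nil and the cluster's default
    // IngressClass (if any) is used instead.
    func applyIngressClass(ingress *networkingv1.Ingress, className string) {
        if className == "" {
            ingress.Spec.IngressClassName = nil
            return
        }
        ingress.Spec.IngressClassName = &className
    }

    func main() {
        ing := &networkingv1.Ingress{}
        applyIngressClass(ing, "nginx")
        fmt.Println(*ing.Spec.IngressClassName)
    }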

  • feat(argocd): Update controller-runtime for cached namespaces fix

    /kind bug

    What does this PR do / why we need it:

    This updates the controller runtime to support the latest round of bug fixes for multiple namespaces.

    Which issue(s) this PR fixes:

    Fixes #337

  • Add an option to give the argocd-application-controller cluster-admin (or similar) rights.

    It would be nice to have the ability to grant Argo CD "cluster-admin" capabilities (or something similar) to allow for more cluster configuration to be controlled by Argo CD.

    For example, with Argo CD 0.0.3 I had altered the argocd-application-controller role to allow Argo CD to:

    • Create namespaces
    • CRUD on secrets, even in openshift- namespaces.
    • Update SCCs.

    This was nice because I could have almost my entire cluster configuration backed up in git, and modified through pull request.

    The ability to control the service accounts that get the anyuid scc was nice as well. The version of Bitnami Sealed Secrets that I'm using on the cluster requires a service account with anyuid. I was able to have this scc yaml file in git managed by Argo CD, so if I created a new cluster, I didn't have to remember to add that scc to the service account.

    Also, it was nice to have Argo CD manage my OAuth config. When I have a new cluster, creating the "cluster config" project and application was enough to have Htpasswd and Github auth applied to my cluster.

    I also like the idea of restricting resources/verbs on a per-project basis.

    I'm not very familiar with the workings of the operator, but my naive suggestion would be an additional flag on the argocd CRD to grant cluster-admin by default.

    Thanks.

  • feat: unify SSO configuration under `.spec.sso` in backward-compatible way

    What type of PR is this?

    Uncomment only one /kind line, and delete the rest. For example, > /kind bug would simply become: /kind bug

    /kind bug /kind chore /kind cleanup /kind failing-test /kind enhancement /kind documentation /kind code-refactoring

    What does this PR do / why we need it: This PR re-introduces the effort to unify the existing SSO configuration options for dex and keycloak under .spec.sso. It introduces .spec.sso.dex and .spec.sso.keycloak. It also maintains backward compatibility, supporting .spec.dex, DISABLE_DEX and the existing .spec.sso fields for keycloak configuration until they are deprecated in v0.6.0, and it emits events warning users about these soon-to-be-deprecated config options. Since we need to support both the old and new config specs for dex and keycloak, this PR introduces a wide range of checks in order to ensure there is no mismatch or illegal combination of the various SSO spec options available.

    IMPORTANT NOTES

    1. Since https://github.com/argoproj-labs/argocd-operator/pull/615 was merged, the dex health check will fail if dex is running but is not configured. Therefore this PR introduces a new constraint: dex cannot be enabled (either through .spec.sso.provider=dex or DISABLE_DEX=false) without providing some kind of configuration in .spec.dex or .spec.sso.dex.
    2. If using the env var, the absence of DISABLE_DEX no longer implies that dex is enabled by default; it is treated as dex being disabled instead. In order not to break existing workloads, if dex pods are found to be running but no DISABLE_DEX flag is set, the dex resources will not be deleted as long as some dex configuration exists (either openShiftOAuth=true or dex.config is supplied). In all other cases, dex resources will be deleted (see the sketch below).
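
    A minimal sketch of these compatibility rules (illustrative names, not the PR's exact code):

    package main

    import "fmt"

    type dexSpec struct {
        Config         string
        OpenShiftOAuth bool
    }

    // dexEnabled applies the rules from the notes above: dex must be explicitly
    // enabled (via .spec.sso.provider=dex or DISABLE_DEX=false) and some dex
    // configuration must exist; if DISABLE_DEX is unset, an already-running,
    // configured dex deployment is left alone instead of being deleted.
    func dexEnabled(provider string, disableDexSet, disableDex bool, dex dexSpec, dexRunning bool) bool {
        configured := dex.Config != "" || dex.OpenShiftOAuth
        switch {
        case provider == "dex" || (disableDexSet && !disableDex):
            return configured
        case !disableDexSet && dexRunning:
            return configured // don't break existing workloads
        default:
            return false
        }
    }

    func main() {
        fmt.Println(dexEnabled("dex", false, false, dexSpec{Config: "test"}, false)) // true
        fmt.Println(dexEnabled("", false, false, dexSpec{}, false))                  // false
    }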

    Have you updated the necessary documentation?

    • [x] Documentation update is required by this PR.
    • [ ] Documentation has been updated.

    Which issue(s) this PR fixes: https://github.com/argoproj-labs/argocd-operator/issues/653

    Fixes https://issues.redhat.com/browse/GITOPS-1332

    How to test changes / Special notes to the reviewer:

    1. Launch the operator locally with make run
    2. Create an empty Argo CD instance:
    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: argocd
    spec: {}
    
    3. Since DISABLE_DEX was not set, check that dex resources don't come up by default
    4. Set .spec.sso.provider=dex
    5. Observe that the dex deployment doesn't come up. Observe an error in the logs indicating dex must also be configured by supplying a .spec.sso.dex
    6. Set .spec.sso.dex.config="test"
    7. Observe that the dex deployment comes up successfully
    8. Set .spec.sso.image=test-image
    9. Observe an error in the logs indicating .spec.sso fields (for keycloak) cannot be specified when .spec.sso.provider=dex
    10. Remove .spec.sso.image=test-image. Set .spec.sso.keycloak.image=test-image
    11. Observe an error in the logs indicating .spec.sso.keycloak cannot be specified when .spec.sso.provider=dex
    12. Remove .spec.sso.keycloak. Set .spec.dex.config=test-2
    13. Observe an error in the logs indicating .spec.dex fields cannot be specified when .spec.sso.provider=dex
    14. Remove .spec.dex. Remove .spec.sso.dex. Set .spec.sso.provider=keycloak
    15. Observe that dex resources are deleted and keycloak resources are created successfully
    16. Set .spec.sso.image=<some-keycloak-image> and observe that it gets updated
    17. Set .spec.sso.keycloak.image=xyz
    18. Observe an error in the logs indicating that conflicting information cannot be provided in .spec.sso fields and the equivalent .spec.sso.keycloak fields
    19. Remove the .spec.sso keycloak fields (.spec.sso.image, .spec.sso.version, .spec.sso.resources, .spec.sso.verifyTLS) and .spec.sso.keycloak
    20. Set .spec.sso.dex.config=test
    21. Observe an error in the logs that .spec.sso.dex cannot be specified when .spec.sso.provider=keycloak
    22. Remove .spec.sso.dex. Set .spec.dex.openShiftOAuth=true
    23. Observe an error in the logs indicating that multiple SSO providers cannot be configured simultaneously
    24. Remove .spec.sso entirely and observe that keycloak resources are deleted
    25. Launch the operator locally with make run DISABLE_DEX=false
    26. Create an empty Argo CD instance:
    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: argocd
    spec: {}
    
    27. Notice that the dex pod doesn't come up. Observe an error in the logs indicating that dex must be configured via .spec.dex if dex is enabled
    28. Set .spec.dex.openShiftOAuth=true
    29. Observe that the dex pod comes up successfully
    30. Set .spec.dex.openShiftOAuth=false and .spec.sso.provider=keycloak
    31. Observe that the keycloak pod comes up, in order to preserve existing behavior and not introduce a breaking change
    32. Remove .spec.sso.provider=keycloak. Set .spec.sso.keycloak.image=test
    33. Observe an error in the logs indicating that an SSO provider spec cannot be supplied when .spec.sso.provider=""
    34. Remove .spec.sso.provider and .spec.sso.keycloak
    35. Set .spec.sso.image=test
    36. Observe no errors in the logs
    37. Execute kubectl get events
    38. Observe that deprecation events were emitted warning users that .spec.dex, DISABLE_DEX and the .spec.sso fields for keycloak have been deprecated and support will be removed in a future release
  • feat: Publish latest operator image after push to master branch

    What type of PR is this?

    Uncomment only one /kind line, and delete the rest. For example, > /kind bug would simply become: /kind bug

    /kind enhancement

    What does this PR do / why we need it: This PR adds a GHA workflow to build and publish the latest code to a container registry with the tag :latest. It requires three secrets to be created in the repository:

    • REGISTRY_URL - The registry to publish the image to (e.g. quay.io/<reponame>)
    • REGISTRY_USERNAME - The user to authenticate as
    • REGISTRY_PASSWORD - The password/token to use to authenticate

    This workflow will only run on push/merge to the master branch.

    Have you updated the necessary documentation?

    • [X] Documentation update is required by this PR.
    • [X] Documentation has been updated.

    Which issue(s) this PR fixes:

    Fixes #325

    How to test changes / Special notes to the reviewer: No great way to test this. But you can see this in action here:

    https://github.com/tylerauerbeck/argocd-operator/actions/runs/1024920750

  • Multiple ArgoCD Instances in different Namespaces

    Using the watched_namespace environment variable, I set Argo CD to watch multiple namespaces such as "argocd-operator-system,org-fdi-system". However, while Argo CD does detect all of the ArgoCD instances, I get an issue with the following message:

    unable to get: %v because of unknown namespace for the cache
    

    It seems this is related to the following upstream bug:

    • https://github.com/kubernetes-sigs/controller-runtime/issues/934

    I was just curious whether other people have come across this issue, or whether I am doing something wrong myself. Appreciate any insight you can provide!
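
    For context, a minimal sketch of how a watch over several comma-separated namespaces is typically wired up with controller-runtime versions that still ship MultiNamespacedCacheBuilder (illustrative, not this operator's exact code); objects in namespaces outside that list trigger the "unknown namespace for the cache" error shown above:

    package main

    import (
        "os"
        "strings"

        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/cache"
    )

    func newManager(watchNamespaces string) (ctrl.Manager, error) {
        opts := ctrl.Options{}
        ns := strings.TrimSpace(watchNamespaces)
        switch {
        case strings.Contains(ns, ","):
            // Multiple namespaces: restrict the cache to exactly those namespaces.
            opts.NewCache = cache.MultiNamespacedCacheBuilder(strings.Split(ns, ","))
        case ns != "":
            opts.Namespace = ns
        }
        return ctrl.NewManager(ctrl.GetConfigOrDie(), opts)
    }

    func main() {
        // The env var name here is illustrative.
        if _, err := newManager(os.Getenv("WATCH_NAMESPACE")); err != nil {
            panic(err)
        }
    }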

  • server rolebinding does not have access to applicationset resource

    Describe the bug If spec.applicationSet is enabled, the operator does not reconcile the server RoleBinding to manage ApplicationSets.

    To Reproduce Steps to reproduce the behavior:

    1. Create a basic cluster with applicationset-controller enabled:
    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: example
    spec:
      applicationSet: {}
    
    $ kubectl apply -n argocd -f example.yaml
    
    2. Log in to the CLI (in this case, with port-forward):
    $ kubectl port-forward -n argocd services/example-server 8080:443 &>/dev/null &
    $ argocd login localhost:8080 --insecure
    
    3. List applicationsets (for example):
    $ argocd appset list
    

    Expected behavior A list containing the installed applicationsets should be displayed; instead, the following error appears:

    FATA[0000] rpc error: code = PermissionDenied desc = error listing ApplicationSets with selectors: applicationsets.argoproj.io is forbidden: User "system:serviceaccount:argocd:example-argocd-server" cannot list resource "applicationsets" in API group "argoproj.io" in the namespace "argocd"
    

    Additional information Operator version: v0.5.0 (2f5c0d456760)
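
    A minimal sketch of the kind of RBAC rule the server Role would need for this to work (illustrative, not the operator's actual reconciliation code):

    package main

    import (
        "fmt"

        rbacv1 "k8s.io/api/rbac/v1"
    )

    // applicationSetPolicyRule grants the argocd-server service account access
    // to applicationsets.argoproj.io, which the error above shows is missing.
    func applicationSetPolicyRule() rbacv1.PolicyRule {
        return rbacv1.PolicyRule{
            APIGroups: []string{"argoproj.io"},
            Resources: []string{"applicationsets"},
            Verbs:     []string{"get", "list", "watch", "create", "update", "patch", "delete"},
        }
    }

    func main() {
        fmt.Printf("%+v\n", applicationSetPolicyRule())
    }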

  • GITOPS-2480: Added HOME env var to controller container

    What type of PR is this? /kind bug

    What does this PR do / why we need it: GITOPS-2480

    Have you updated the necessary documentation?

    • [ ] Documentation update is required by this PR.
    • [ ] Documentation has been updated.

    Which issue(s) this PR fixes: Fixes GITOPS-2480

    How to test changes / Special notes to the reviewer:

    1. Create an Argo CD instance in a namespace which doesn't have cluster admin permission.
    2. Run the kubectl get pods -A -v=20 command in the argocd-application-controller terminal.
    3. Confirm that there is no .kube: permission denied error (see the sketch below).
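
    A minimal sketch of the fix described here (the value shown is an assumption, not taken from the PR): ensure the application controller container has a writable HOME so client-go/kubectl can create its .kube directory.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // ensureHomeEnv adds a HOME env var to the container environment if one is
    // not already present.
    func ensureHomeEnv(env []corev1.EnvVar) []corev1.EnvVar {
        for _, e := range env {
            if e.Name == "HOME" {
                return env
            }
        }
        // "/home/argocd" is an illustrative value; the real path depends on the image.
        return append(env, corev1.EnvVar{Name: "HOME", Value: "/home/argocd"})
    }

    func main() {
        fmt.Println(ensureHomeEnv(nil))
    }
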
  • feat: Allow user instance monitoring

    What type of PR is this?

    Uncomment only one /kind line, and delete the rest. For example, > /kind bug would simply become: /kind bug

    /kind bug /kind chore /kind cleanup /kind failing-test /kind enhancement /kind documentation /kind code-refactoring

    What does this PR do / why we need it: This PR allows users to enable workload status monitoring for a given argo-cd instance. Enabling this creates a PrometheusRule for alerts (with 8 opinionated rules within it, one for each workload). Out of the box, the rules are configured to fire when a workload has remained in a failed/pending state for a certain duration. Users are free to make changes to the alert rules, and the operator will not overwrite them.

    Have you updated the necessary documentation? Not yet

    • [x] Documentation update is required by this PR.
    • [ ] Documentation has been updated.

    Which issue(s) this PR fixes:

    Fixes #?

    How to test changes / Special notes to the reviewer:

  • WIP chore: update Argocd version 2.6.0-rc1 in master

    (DO NOT MERGE YET - Argo CD v2.6-RC2 will be released tomorrow and this PR will be updated accordingly then)

    What type of PR is this? /kind chore

    What does this PR do / why we need it: Updates the Argo CD version to 2.6.0-rc1.

  • feat: Expose instance level metrics

    What type of PR is this?

    Uncomment only one /kind line, and delete the rest. For example, > /kind bug would simply become: /kind bug

    /kind bug /kind chore /kind cleanup /kind failing-test /kind enhancement /kind documentation /kind code-refactoring

    What does this PR do / why we need it: This PR exposes new metrics to track the statuses of the argo-cd workloads. It defines the following metrics:

    argocd_application_controller_status Describes the status of the application controller workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    argocd_applicationset_controller_status Describes the status of the applicationSet controller workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    argocd_dex_status Describes the status of the dex workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    argocd_phase Describes the phase of argo-cd instance [2='Pending', 4='Available']
    argocd_redis_status Describes the status of the redis workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    argocd_repo_server_status Describes the status of the repo server workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    argocd_server_status Describes the status of the argo-cd server workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    

    It spins up a new metrics server that listens on port 8085. The workload status reconciliation logic is updated to reflect the status within the newly defined metrics and write them out to the /metrics endpoint. Individual workload statuses can be queried by specifying the namespace of the instance that the workload is a part of.
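
    A minimal sketch of how such per-namespace status gauges can be defined and served on :8085 with client_golang (illustrative, not the PR's exact code):

    package main

    import (
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    var appControllerStatus = prometheus.NewGaugeVec(
        prometheus.GaugeOpts{
            Name: "argocd_application_controller_status",
            Help: "Describes the status of the application controller workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']",
        },
        []string{"namespace"},
    )

    func main() {
        reg := prometheus.NewRegistry()
        reg.MustRegister(appControllerStatus)

        // The reconciliation logic would set a value like this per instance namespace.
        appControllerStatus.WithLabelValues("argocd").Set(3) // Running

        http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
        _ = http.ListenAndServe(":8085", nil)
    }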

    NOTE: this PR assumes there will not be more than 1 argo-cd instance in any namespace as that is an anti-pattern.

    This PR should be merged after https://github.com/argoproj-labs/argocd-operator/pull/829

    Have you updated the necessary documentation?

    • [ ] Documentation update is required by this PR.
    • [ ] Documentation has been updated.

    Which issue(s) this PR fixes:

    Fixes https://issues.redhat.com/browse/GITOPS-2456

    How to test changes / Special notes to the reviewer:

    1. Deploy operator locally
    2. Create argo-cd instance in argocd namespace
    3. Run a GET query against localhost:8085/metrics in postman/curl
    4. Verify that the response looks something like this:
    # HELP argocd_application_controller_status Describes the status of the application controller workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    # TYPE argocd_application_controller_status gauge
    argocd_application_controller_status{namespace="argocd"} 3
    # HELP argocd_applicationset_controller_status Describes the status of the applicationSet controller workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    # TYPE argocd_applicationset_controller_status gauge
    argocd_applicationset_controller_status{namespace="argocd"} 0
    # HELP argocd_dex_status Describes the status of the dex workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    # TYPE argocd_dex_status gauge
    argocd_dex_status{namespace="argocd"} 0
    # HELP argocd_phase Describes the phase of argo-cd instance [2='Pending', 4='Available']
    # TYPE argocd_phase gauge
    argocd_phase{namespace="argocd"} 4
    # HELP argocd_redis_status Describes the status of the redis workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    # TYPE argocd_redis_status gauge
    argocd_redis_status{namespace="argocd"} 3
    # HELP argocd_repo_server_status Describes the status of the repo server workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    # TYPE argocd_repo_server_status gauge
    argocd_repo_server_status{namespace="argocd"} 3
    # HELP argocd_server_status Describes the status of the argo-cd server workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    # TYPE argocd_server_status gauge
    argocd_server_status{namespace="argocd"} 3
    
    5. Edit the Argo CD CR to enable notifications, and replace the appset image with an invalid one to create a pending status:
    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: example-argocd
    spec:
      applicationSet:
        image: quay.io/argoproj/argocd@sha256:8283a9f06033c2377dc61b03daf49
      notifications:
        enabled: true
    
    6. Query again and verify that the response now contains a pending status for appset and a running status for the notifications controller:
    # HELP argocd_applicationset_controller_status Describes the status of the applicationSet controller workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    # TYPE argocd_applicationset_controller_status gauge
    argocd_applicationset_controller_status{namespace="argocd"} 2
    ...
    # HELP argocd_notifications_controller_status Describes the status of the notifications controller workload [0='Unknown', 1='Failed', 2='Pending', 3='Running']
    # TYPE argocd_notifications_controller_status gauge
    argocd_notifications_controller_status{namespace="argocd"} 3
    
  • Add system-level configuration docs and fix minor typo

    What type of PR is this?

    /kind bug /kind documentation

    What does this PR do / why we need it: This adds system-level configuration documentation that relates to resource customization. Also fixes a typo from "deployments" to "Deployment" in some kind fields.

    Which issue(s) this PR fixes: https://issues.redhat.com/browse/GITOPS-2463
