Argo CD Operator

A Kubernetes operator for managing Argo CD clusters.

Documentation

See the documentation for installation and usage of the operator.

License

The Argo CD Operator is released under the Apache 2.0 license. See the LICENSE file for details.

Comments
  • feat: Upgrade RH-SSO to v7.5.1 and support kube:admin, ocp groups and proxy env

    What type of PR is this?

    /kind enhancement

    What does this PR do / why we need it:

    This PR adds the below enhancements.

    • Added missing RBAC annotations in argocd_controller.go for the template.openshift.io and oauth.openshift.io API groups. Without these annotations, the RBAC manifests required to install RH-SSO are not generated properly.
    • Removed hardcoded OIDC client secrets and replaced them with randomly generated strings. These client secrets do not need to be stored.
    • Added a KeycloakIdentityProviderMapper to work around https://github.com/keycloak/keycloak-operator/issues/471. Without the workaround, tokens can't be translated into credentials that contain the group identity.
    • Propagated proxy environment variables from the operator to the Keycloak containers so that outbound calls can go through the cluster proxy if one is configured.
    • Deletes the OAuth client using the foreground propagation policy to ensure that the garbage collector removes all instantiated objects before the TemplateInstance itself disappears.
    • Adds extra OAuth client deletion calls to work around https://github.com/openshift/client-go/issues/209.
    • Adds retries to handle the RH-SSO image upgrade.
    • Upgrades the default version of RH-SSO to 7.5.1.
    • Adds support for logging in with the kube:admin OpenShift user.
    • With this PR, RH-SSO can fetch the group details of OpenShift users (you should see the group details of the logged-in user in the Argo CD console). This helps admins define group-level RBAC for their OpenShift groups (see the sketch after this list). https://github.com/keycloak/keycloak/pull/8381
    • kube:admin is available by default on every cluster but is not a regular OpenShift user, so logging in with it did not work previously. https://github.com/keycloak/keycloak/pull/8428
    • RH-SSO works in proxy-enabled clusters with no additional or manual configuration. https://github.com/keycloak/keycloak/pull/8559
    • Upgrades the Keycloak version to 15.0.2. On non-OpenShift Kubernetes clusters, users will see a Keycloak version upgrade from 9.0.3 to 15.0.2.
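
    As a rough illustration of the group-level RBAC this enables, an OpenShift group can be mapped to an Argo CD role through the operator's RBAC spec, roughly as follows (a minimal sketch; the group name is a placeholder and the exact fields should be checked against the operator docs):

    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: argocd
    spec:
      rbac:
        # "ocp-admins" is a hypothetical OpenShift group
        policy: |
          g, ocp-admins, role:admin
        scopes: '[groups]'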

    Upgrades

    Whether you are running the operator on Kubernetes or OpenShift, the version upgrades mentioned above are smooth and do not require any manual intervention other than upgrading the operator to the new version (0.3.0) once it is released.

    Have you updated the necessary documentation?

    • [x] Documentation update is required by this PR.
    • [x] Documentation has been updated.

    How to test changes / Special notes to the reviewer:

    Kubernetes:

    1. Deploy the below catalog source
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: argocd-catalog
    spec:
      sourceType: grpc
      image: quay.io/aveerama/[email protected]:bb0db86d3e6e27fe9d9e6891027db62e3b15f2947d65b059de8c7aae3a582eda
      displayName: Argo CD Operators
      publisher: Argo CD Community
    
    2. Run the make target to install the CRDs: make install
    3. kubectl create namespace argocd
    4. kubectl create -n argocd -f deploy/operator_group.yaml
    5. kubectl create -n argocd -f deploy/subscription.yaml (a sketch of these two manifests follows below)
    6. kubectl create -n argocd -f examples/argocd-keycloak-k8s.yaml
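
    For reference, the operator group and subscription manifests used in steps 4 and 5 typically look roughly like the following (a sketch only, not the exact file contents; the channel and namespaces are assumptions):

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: argocd-operator-group
      namespace: argocd
    spec:
      targetNamespaces:
        - argocd
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: argocd-operator
      namespace: argocd
    spec:
      channel: alpha            # assumed channel name
      name: argocd-operator
      source: argocd-catalog    # the CatalogSource created in step 1
      sourceNamespace: olm      # assumed namespace of the CatalogSource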

    OpenShift:

    1. Deploy the below catalog source into openshift-operators namespace.
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: argocd-catalog
    spec:
      sourceType: grpc
      image: quay.io/aveerama/[email protected]:bb0db86d3e6e27fe9d9e6891027db62e3b15f2947d65b059de8c7aae3a582eda
      displayName: Argo CD Operators
      publisher: Argo CD Community
    
    2. Move to OperatorHub to install the operator.
    3. kubectl create namespace argocd
    4. kubectl create -n argocd -f examples/argocd-keycloak-openshift.yaml (a sketch of this CR follows below)
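
    The example CR referenced in step 4 enables Keycloak as the SSO provider, roughly along these lines (a sketch, not the exact file contents; the instance name is illustrative):

    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: example-argocd
    spec:
      sso:
        provider: keycloak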

    Testing upgrade:

    1. Install version 0.2.0 of the operator (same for Kubernetes and OpenShift).
    2. Create an Argo CD instance with Keycloak as shown above.
    3. Delete the operator but keep the Argo CD instance and workloads as they are.
    4. Now install the new version of the operator and verify that the Keycloak pod is recreated and updated.

    How to test login with kube:admin

    On your OpenShift cluster, go to Networking -> Routes and click on the Argo CD route.

    1. Click on Login Via Keycloak -> Login with OpenShift and provide the kubeadmin credentials.
    2. Once you are logged in to Argo CD, you can also confirm this by looking at the user profile section.

    Run E2E test:

    1. Download kuttl on your laptop or server.
    2. Run the operator locally or through the bundle.
    3. Run the following command to execute the RH-SSO e2e tests: kubectl kuttl test --config kuttl-test-rhsso.yaml (a sketch of such a suite config follows below)
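
    A kuttl test-suite configuration such as kuttl-test-rhsso.yaml generally looks like the following (a minimal sketch; the test directory and timeout are assumptions, not the repository's actual values):

    apiVersion: kuttl.dev/v1beta1
    kind: TestSuite
    testDirs:
      - tests/keycloak   # hypothetical directory containing the RH-SSO test cases
    timeout: 120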

  • Dex can't be disabled

    Describe the bug

    Hi, the documentation describing how to disable Dex is not correct. The following subscription configuration does NOT disable it:

    To Reproduce

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: argocd-operator
    spec:
      # channel: alpha
      name: argocd-operator
      # source: argocd-catalog
      source: operatorhubio-catalog
      sourceNamespace: olm
      config:
        env:
          - name: DISABLE_DEX
            value: "true"
          - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES
            value: argocd
    

    Expected behavior

    The configuration should stop Dex from starting.

    Additional context

    Deployed version: v2.3.3-07ac038

  • Pods stuck in crashloop after update to 0.0.14

    Hi :)

    First thanks for the great Operator :)!

    I updated our Dev Cluster today from 0.0.13 to 0.0.14 and since then two pods are crashlooping :(

    See below logs :)

    argocd-repo-server

    time="2020-10-14T11:48:10Z" level=info msg="Initializing GnuPG keyring at /app/config/gpg/keys"
    time="2020-10-14T11:48:10Z" level=fatal msg="stat /app/config/gpg/keys/trustdb.gpg: permission denied"
    

    argocd-application-controller

    time="2020-10-14T11:48:10Z" level=info msg="appResyncPeriod=3m0s"
    time="2020-10-14T11:48:10Z" level=info msg="Application Controller (version: v1.7.7+33c93ae, built: 2020-09-29T04:56:38Z) starting (namespace: argocd)"
    time="2020-10-14T11:48:10Z" level=info msg="Starting configmap/secret informers"
    time="2020-10-14T11:48:10Z" level=info msg="Configmap/secret informer synced"
    E1014 11:48:10.261304       1 runtime.go:78] Observed a panic: "assignment to entry in nil map" (assignment to entry in nil map)
    goroutine 63 [running]:
    k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1cbec40, 0x227bc80)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa3
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x82
    panic(0x1cbec40, 0x227bc80)
    	/usr/local/go/src/runtime/panic.go:967 +0x166
    github.com/argoproj/argo-cd/util/settings.addStatusOverrideToGK(...)
    	/go/src/github.com/argoproj/argo-cd/util/settings/settings.go:508
    github.com/argoproj/argo-cd/util/settings.(*SettingsManager).GetResourceOverrides(0xc0000f78c0, 0xc000ecaed0, 0x0, 0x0)
    	/go/src/github.com/argoproj/argo-cd/util/settings/settings.go:485 +0x468
    github.com/argoproj/argo-cd/controller/cache.(*liveStateCache).loadCacheSettings(0xc00030db80, 0x10, 0xc0004a1380, 0x1ac619d)
    	/go/src/github.com/argoproj/argo-cd/controller/cache/cache.go:113 +0x9b
    github.com/argoproj/argo-cd/controller/cache.(*liveStateCache).Init(0xc00030db80, 0x2309518, 0xc000639c20)
    	/go/src/github.com/argoproj/argo-cd/controller/cache/cache.go:417 +0x2f
    github.com/argoproj/argo-cd/controller.(*ApplicationController).Run(0xc0004c9680, 0x22ead00, 0xc0004b9580, 0x14, 0xa)
    	/go/src/github.com/argoproj/argo-cd/controller/appcontroller.go:454 +0x26d
    created by main.newCommand.func1
    	/go/src/github.com/argoproj/argo-cd/cmd/argocd-application-controller/main.go:109 +0x90c
    panic: assignment to entry in nil map [recovered]
    	panic: assignment to entry in nil map
    
    goroutine 63 [running]:
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x105
    panic(0x1cbec40, 0x227bc80)
    	/usr/local/go/src/runtime/panic.go:967 +0x166
    github.com/argoproj/argo-cd/util/settings.addStatusOverrideToGK(...)
    	/go/src/github.com/argoproj/argo-cd/util/settings/settings.go:508
    github.com/argoproj/argo-cd/util/settings.(*SettingsManager).GetResourceOverrides(0xc0000f78c0, 0xc000ecaed0, 0x0, 0x0)
    	/go/src/github.com/argoproj/argo-cd/util/settings/settings.go:485 +0x468
    github.com/argoproj/argo-cd/controller/cache.(*liveStateCache).loadCacheSettings(0xc00030db80, 0x10, 0xc0004a1380, 0x1ac619d)
    	/go/src/github.com/argoproj/argo-cd/controller/cache/cache.go:113 +0x9b
    github.com/argoproj/argo-cd/controller/cache.(*liveStateCache).Init(0xc00030db80, 0x2309518, 0xc000639c20)
    	/go/src/github.com/argoproj/argo-cd/controller/cache/cache.go:417 +0x2f
    github.com/argoproj/argo-cd/controller.(*ApplicationController).Run(0xc0004c9680, 0x22ead00, 0xc0004b9580, 0x14, 0xa)
    	/go/src/github.com/argoproj/argo-cd/controller/appcontroller.go:454 +0x26d
    created by main.newCommand.func1
    	/go/src/github.com/argoproj/argo-cd/cmd/argocd-application-controller/main.go:109 +0x90c
    
  • Add the ability to define content to be inserted into the argocd-cm configmap

    A lot of additional configuration is handled within the argocd-cm configmap. Is there currently a way to define this as part of the request to spin up an ArgoCD instance? I don't see anything that pops out after a quick scan of the docs -- but is this currently possible with this operator? If not, is this something that this operator would want to do in the future? Happy to help contribute some of this if it would be accepted.
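
    For context, this refers to the kind of upstream Argo CD settings that live in argocd-cm, for example (an illustration using well-known upstream keys; the values are placeholders):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cm
      namespace: argocd
    data:
      # examples of settings commonly tuned in argocd-cm
      application.instanceLabelKey: argocd.argoproj.io/instance
      timeout.reconciliation: 180s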

  • feat(argocd): Update controller-runtime for cached namespaces fix

    /kind bug

    What does this PR do / why we need it:

    This updates the controller runtime to support the latest round of bug fixes for multiple namespaces.

    Which issue(s) this PR fixes:

    Fixes #337

  • Add an option to give the argocd-application-controller cluster-admin (or similar) rights.

    It would be nice to have the ability to grant Argo CD "cluster-admin" capabilities (or something similar) to allow for more cluster configuration to be controlled by Argo CD.

    For example, with Argo CD 0.0.3 I had altered the argocd-application-controller role to allow Argo CD to:

    • Create namespaces
    • CRUD on secrets, even in openshift- namespaces.
    • Update SCCs.

    This was nice because I could have almost my entire cluster configuration backed up in git, and modified through pull request.

    The ability to control the service accounts that get the anyuid scc was nice as well. The version of Bitnami Sealed Secrets that I'm using on the cluster requires a service account with anyuid. I was able to have this scc yaml file in git managed by Argo CD, so if I created a new cluster, I didn't have to remember to add that scc to the service account.

    Also, it was nice to have Argo CD manage my OAuth config. When I have a new cluster, creating the "cluster config" project and application was enough to have Htpasswd and Github auth applied to my cluster.

    I also like the idea of restricting resources/verbs on a per-project basis.

    I'm not very familiar with the workings of the operator, but my naive suggestion would be an additional flag on the argocd CRD to grant cluster-admin by default.
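
    For illustration, the manual equivalent today is to bind cluster-admin to the application controller's service account, roughly like this (a sketch; the service-account name assumes an ArgoCD instance named argocd running in the argocd namespace):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: argocd-application-controller-cluster-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: argocd-argocd-application-controller
        namespace: argocd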

    Thanks.

  • feat: unify SSO configuration under `.spec.sso` in backward-compatible way

    What type of PR is this?

    Uncomment only one /kind line, and delete the rest. For example, > /kind bug would simply become: /kind bug

    /kind bug /kind chore /kind cleanup /kind failing-test /kind enhancement /kind documentation /kind code-refactoring

    What does this PR do / why we need it: This PR re-introduces the effort to unify the existing SSO configuration options for Dex and Keycloak under .spec.sso. It introduces .spec.sso.dex and .spec.sso.keycloak. It also maintains backward compatibility, supporting .spec.dex, DISABLE_DEX and the existing .spec.sso fields for Keycloak configuration until they are deprecated in v0.6.0, and it emits events warning users about these soon-to-be-deprecated options. Since we need to support both the old and the new config specs for Dex and Keycloak, this PR introduces a wide range of checks to ensure there is no mismatch or illegal combination of the various SSO spec options (a configuration sketch follows the notes below).

    IMPORTANT NOTES

    1. Since https://github.com/argoproj-labs/argocd-operator/pull/615 was merged, the Dex health check fails if Dex is running but not configured. Therefore this PR introduces a new constraint: Dex cannot be enabled (either through .spec.sso.provider=dex or DISABLE_DEX=false) without providing some kind of configuration in .spec.dex or .spec.sso.dex.
    2. If using the env var, the absence of DISABLE_DEX no longer implies that Dex is enabled by default; it is treated as Dex being disabled instead. In order not to break existing workloads, if Dex pods are found to be running but no DISABLE_DEX flag is set, the Dex resources will not be deleted as long as some Dex configuration exists (either openShiftOAuth=true or dex.config is supplied). In all other cases, Dex resources will be deleted.
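
    The unified configuration described above looks roughly like this (a sketch based on the fields named in this PR; the values, and the commented Keycloak alternative, are placeholders):

    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: argocd
    spec:
      sso:
        provider: dex
        dex:
          openShiftOAuth: true
        # alternatively:
        # provider: keycloak
        # keycloak:
        #   image: quay.io/keycloak/keycloak   # hypothetical image override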

    Have you updated the necessary documentation?

    • [x] Documentation update is required by this PR.
    • [ ] Documentation has been updated.

    Which issue(s) this PR fixes: https://github.com/argoproj-labs/argocd-operator/issues/653

    Fixes https://issues.redhat.com/browse/GITOPS-1332

    How to test changes / Special notes to the reviewer:

    1. Launch the operator locally with make run.
    2. Create an empty Argo CD instance:
    cat <<EOF | kubectl apply -f -
    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: argocd
    spec: {}
    EOF
    
    3. Since DISABLE_DEX was not set, check that Dex resources don't come up by default.
    4. Set .spec.sso.provider=dex.
    5. Observe that the Dex deployment doesn't come up. Observe the error in the logs indicating that Dex must also be configured by supplying .spec.sso.dex.
    6. Set .spec.sso.dex.config="test".
    7. Observe that the Dex deployment comes up successfully.
    8. Set .spec.sso.image=test-image.
    9. Observe the error in the logs indicating that .spec.sso fields (for Keycloak) cannot be specified when .spec.sso.provider=dex.
    10. Remove .spec.sso.image=test-image. Set .spec.sso.keycloak.image=test-image.
    11. Observe the error in the logs indicating that .spec.sso.keycloak cannot be specified when .spec.sso.provider=dex.
    12. Remove .spec.sso.keycloak. Set .spec.dex.config=test-2.
    13. Observe the error in the logs indicating that .spec.dex fields cannot be specified when .spec.sso.provider=dex.
    14. Remove .spec.dex. Remove .spec.sso.dex. Set .spec.sso.provider=keycloak.
    15. Observe that Dex resources are deleted and Keycloak resources are created successfully.
    16. Set .spec.sso.image=<some-keycloak-image> and observe that it gets updated.
    17. Set .spec.sso.keycloak.image=xyz.
    18. Observe the error in the logs indicating that conflicting information cannot be provided in .spec.sso fields and the equivalent .spec.sso.keycloak fields.
    19. Remove the .spec.sso Keycloak fields (.spec.sso.image, .spec.sso.version, .spec.sso.resources, .spec.sso.verifyTLS) and .spec.sso.keycloak.
    20. Set .spec.sso.dex.config=test.
    21. Observe the error in the logs that .spec.sso.dex cannot be specified when .spec.sso.provider=keycloak.
    22. Remove .spec.sso.dex. Set .spec.dex.openShiftOAuth=true.
    23. Observe the error in the logs indicating that multiple SSO providers cannot be configured simultaneously.
    24. Remove .spec.sso entirely and observe that Keycloak resources are deleted.
    25. Launch the operator locally with make run DISABLE_DEX=false.
    26. Create an empty Argo CD instance:
    cat <<EOF | kubectl apply -f -
    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: argocd
    spec: {}
    EOF
    
    27. Notice that the Dex pod doesn't come up. Observe the error in the logs indicating that Dex must be configured via .spec.dex if Dex is enabled.
    28. Set .spec.dex.openShiftOAuth=true.
    29. Observe that the Dex pod comes up successfully.
    30. Set .spec.dex.openShiftOAuth=false and .spec.sso.provider=keycloak.
    31. Observe that the Keycloak pod comes up, in order to preserve existing behavior and not introduce a breaking change.
    32. Remove .spec.sso.provider=keycloak. Set .spec.sso.keycloak.image=test.
    33. Observe the error in the logs indicating that an SSO provider spec cannot be supplied when .spec.sso.provider="".
    34. Remove .spec.sso.provider and .spec.sso.keycloak.
    35. Set .spec.sso.image=test.
    36. Observe no errors in the logs.
    37. Execute kubectl get events.
    38. Observe that deprecation events were emitted warning users that .spec.dex, DISABLE_DEX and the .spec.sso fields for Keycloak have been deprecated and support will be removed in a future release.
  • feat: Publish latest operator image after push to master branch

    What type of PR is this?

    Uncomment only one /kind line, and delete the rest. For example, > /kind bug would simply become: /kind bug

    /kind enhancement

    What does this PR do / why we need it: This PR adds a GHA workflow to build and publish the latest code to a container registry with the tag :latest. It requires three secrets to be created in the repository:

    • REGISTRY_URL - The registry to publish the image to (e.g. quay.io/<reponame>)
    • REGISTRY_USERNAME - The user to authenticate as
    • REGISTRY_PASSWORD - The password/token to use to authenticate

    This workflow will only run on push/merge to the master branch.
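
    A workflow along these lines would do it (a minimal sketch, not the exact workflow in this PR; the action versions, image path, and build step are assumptions):

    name: publish-latest
    on:
      push:
        branches: [ master ]
    jobs:
      publish:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: docker/login-action@v1
            with:
              registry: quay.io                              # assumed registry host
              username: ${{ secrets.REGISTRY_USERNAME }}
              password: ${{ secrets.REGISTRY_PASSWORD }}
          - uses: docker/build-push-action@v2
            with:
              context: .
              push: true
              # image path built from the REGISTRY_URL secret (assumption)
              tags: ${{ secrets.REGISTRY_URL }}/argocd-operator:latest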

    Have you updated the necessary documentation?

    • [X] Documentation update is required by this PR.
    • [X] Documentation has been updated.

    Which issue(s) this PR fixes:

    Fixes #325

    How to test changes / Special notes to the reviewer: No great way to test this. But you can see this in action here:

    https://github.com/tylerauerbeck/argocd-operator/actions/runs/1024920750

  • Multiple ArgoCD Instances in different Namespaces

    Using the watched_namespace environment variable, I set the operator to watch multiple namespaces, such as "argocd-operator-system,org-fdi-system". However, while it does detect all of the ArgoCD instances, I get an issue with the following message:

    unable to get: %v because of unknown namespace for the cache
    

    It seems this is related to the following upstream bug:

    • https://github.com/kubernetes-sigs/controller-runtime/issues/934

    I was just curious whether other people have come across this issue, and whether I am doing something wrong myself. I appreciate any insight you can provide!
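
    For reference, the namespace configuration in question was set roughly like this (a sketch; it assumes the conventional WATCH_NAMESPACE variable name and an OLM Subscription, which may differ from the actual setup):

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: argocd-operator
    spec:
      name: argocd-operator
      source: operatorhubio-catalog
      sourceNamespace: olm
      config:
        env:
          - name: WATCH_NAMESPACE   # assumed variable name
            value: "argocd-operator-system,org-fdi-system"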

  • Argocd failed to deploy (openshift - argocd 0.0.11)

    Hello,

    We are using OpenShift 4.4. Deploying Argo CD using the operator fails starting from 0.0.11 because of this:

      - dependents:
        - group: rbac.authorization.k8s.io
          kind: PolicyRule
          message: namespaced rule:{"verbs":["get"],"apiGroups":[""],"resources":["endpoints"]}
          status: NotSatisfied
          version: v1beta1
        group: ""
        kind: ServiceAccount
        message: Policy rule not satisfied for service account
        name: argocd-redis-ha
        status: PresentNotSatisfied
        version: v1
    
  • add route/ingress URL to .status

    What type of PR is this? /kind enhancement

    What does this PR do / why we need it: This would add a new field, .host, to .status of ArgoCD. When a route or ingress is enabled (priority given to the route), its URL will be displayed in the new field.

    When no URL exists from a route or ingress, the field will not be displayed.

    When on a non-OpenShift cluster (meaning the Route API is not available), if the user chooses to enable Route, they will only get a log saying that Routes are not available in non-OpenShift environments and to please use Ingresses instead. The state of the application controller and of the URL will not be affected.

    When the route or ingress is configured but the corresponding controller has not yet set it up properly (i.e. it is not in a Ready state or does not propagate its URL), this is also indicated in the operand: the value of .status.url is set to Pending instead of the URL. In addition, if .status.url is Pending, the overall status of the operand becomes Pending instead of Available.

    Have you updated the necessary documentation?

    • [x] Documentation update is required by this PR.
    • [ ] Documentation has been updated.

    Which issue(s) this PR fixes: Fixes #246

    How to test changes / Special notes to the reviewer:

    Whatever your cluster is, make sure that an ingress controller is installed, enabled and running before moving forward. Instructions for setting up an ingress controller on the following types of cluster:

    1. kind : https://abhishekveeramalla-av.medium.com/run-argo-cd-using-operator-on-kind-e59f48687d38
    2. minikube: run command minikube addons enable ingress
    3. k3d: Traefik Ingress Controller is installed/enabled by default
    4. OpenShift: after enabling ingress in Argo CD spec, update the following on the ingress spec:
      • remove Nginx annotations
      • update the hostname to example-argocd.yourcluster.example.com, where example-argocd is the Argo CD host

    Ingress Testing:

    1. Get a cluster and log in (I used a K3D cluster for this).

    2. In your cloned repo on this branch, go to the Makefile, change the version number (just to something else), and change the image base tag to IMAGE_TAG_BASE ?= quay.io/{your quay or docker username}/argocd-operator. Then run make docker-build and make docker-push, log in to a cluster, and run make deploy.

    3. Add the Argo CD instance:

    cat <<EOF | kubectl apply -f -                                                                    
    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: argocd
    spec: {}
    EOF   
    
    4. Create an Ingress:
    cat <<EOF | oc apply -f -                                                                    
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: argocd-server
    spec:
      rules:
      - host: argocd
        http:
          paths:
          - backend:
              service:
                name: argocd-server
                port:
                  number: 8080
            path: /
            pathType: ImplementationSpecific
    EOF
    

    If you do oc get ingress argocd-server you should see:

    NAME            CLASS    HOSTS    ADDRESS      PORTS   AGE
    argocd-server   <none>   argocd   172.22.0.3   80      9m56s
    
    5. Edit the ArgoCD instance so that ingress is enabled, using oc edit argocd argocd. spec.server should look something like:
      server:
        autoscale:
          enabled: false
        grpc:
          ingress:
            enabled: false
        ingress:
          enabled: true
        route:
          enabled: false
        service:
          type: ""
      tls:
        ca: {}
    

    Save your changes.

    6. Wait a second for it to update, and then check back on your CR with oc get argocd argocd -o yaml. You should see that the URL has been added to the .status field.
    status:
      applicationController: Running
      dex: Running
      host: 172.22.0.3
      phase: Available
      redis: Running
      repo: Running
      server: Running
      ssoConfig: Unknown
    

    Route Testing:

    1. Get a cluster and log in (I used an OpenShift cluster for this since routes are specific to OpenShift; if the Route API is not available, this functionality will not work).

    2. In your cloned repo on this branch, go to the Makefile, change the version number (just to something else), and change the image base tag to IMAGE_TAG_BASE ?= quay.io/{your quay or docker username}/argocd-operator. Then run make docker-build and make docker-push, log in to a cluster, and run make deploy.

    3. Add the Argo CD instance:

    cat <<EOF | kubectl apply -f -                                                                    
    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: argocd
    spec: {}
    EOF   
    
    4. Edit the ArgoCD instance so that route is enabled, using oc edit argocd argocd. spec.server should look something like:
      server:
        autoscale:
          enabled: false
        grpc:
          ingress:
            enabled: false
        ingress:
          enabled: false
        route:
          enabled: true
        service:
          type: ""
      tls:
        ca: {}
    

    Save your changes.

    5. Wait a second for it to update, and then check back on your CR with oc get argocd argocd -o yaml. You should see that the URL has been added to the .status field.

    status:
      applicationController: Running
      dex: Running
      host: argocd-server-default.apps.app-svc-4.8-120914.devcluster.openshift.com
      phase: Available
      redis: Running
      repo: Running
      server: Running
      ssoConfig: Unknown
    

    Note: If route and ingress are both enabled, the route (if available) will have preference over ingress.

  • Fix that new host name isn't applied to ingress

    What type of PR is this? /kind bug

    What does this PR do / why we need it: Fixed a problem where changes to .spec.server.host in the ArgoCD CR were not reflected in the Ingress resource by changing the order of processing in reconcileArgoServerIngress().

    Have you updated the necessary documentation?

    • [ ] Documentation update is required by this PR.
    • [ ] Documentation has been updated.

    Which issue(s) this PR fixes:

    Fixes #558

    How to test changes / Special notes to the reviewer: I have added a test scenario in ingress_test.go. It can be verified with a test run. If you want to check manually, you can do so as follows.

    $ cat <<EOF | kubectl apply -f - 
    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: argocd-sample
    spec:
      server:
        host: before.example.com
        ingress:
          enabled: true
    EOF
    argocd.argoproj.io/argocd-sample configured
    
    $ kubectl get ingress argocd-sample-server -o yaml
    ...
    spec:
      rules:
      - host: before.example.com
        http:
          paths:
          - backend:
              service:
                name: argocd-sample-server
                port:
                  name: http
            path: /
            pathType: ImplementationSpecific
      tls:
      - hosts:
        - before.example.com
        secretName: argocd-secret
    
    $ cat <<EOF | kubectl apply -f - 
    apiVersion: argoproj.io/v1alpha1
    kind: ArgoCD
    metadata:
      name: argocd-sample
    spec:
      server:
        host: after.example.com
        ingress:
          enabled: true
    EOF
    argocd.argoproj.io/argocd-sample configured
    
    $ kubectl get ingress argocd-sample-server -o yaml
    ...
    spec:
      rules:
      - host: after.example.com
        http:
          paths:
          - backend:
              service:
                name: argocd-sample-server
                port:
                  name: http
            path: /
            pathType: ImplementationSpecific
      tls:
      - hosts:
        - after.example.com
        secretName: argocd-secret
    
  • add documentation on how to build/run docs locally

    Is your feature request related to a problem? Please describe. I was trying to test out a docs PR and couldn't find any documentation on how to do so. Luckily one of my teammates knew and sent me instructions, but they were only kept in our company Slack and AFAIK are not documented anywhere else.

    Describe the solution you'd like If we could add a README in the docs/ folder with instructions on how to do that, I think it would be helpful to anyone else trying it. Or it could go in the documentation itself, maybe under Development > Setup.

    Additional context Current steps for building and running the documentation locally:

    1. Create a Python virtual environment (not mandatory): python3 -m venv doc
    2. Activate the virtual environment: source doc/bin/activate
    3. pip3 install mkdocs
    4. pip install mkdocs-material
    5. mkdocs serve

    If someone could provide input on where would be the best place for this documentation I can provide a PR. Thanks!

  • Added routes,ingress,svc code for applicationset

    Signed-off-by: rishabh625 [email protected]

    What type of PR is this?

    Uncomment only one /kind line, and delete the rest. For example, > /kind bug would simply become: /kind bug

    /kind bug

    /kind bug /kind chore /kind cleanup /kind failing-test /kind enhancement /kind documentation /kind code-refactoring

    What does this PR do / why we need it: This PR creates the service and route or ingress for the ApplicationSet controller, exposing port 7000 for the webhook and port 8080 for metrics.

    Have you updated the necessary documentation?

    • [ ] Documentation update is required by this PR.
    • [ ] Documentation has been updated.

    Which issue(s) this PR fixes:

    Fixes #639

    How to test changes / Special notes to the reviewer:

  • Redirect e2e test logs to both stdout and log file

    What type of PR is this?

    /kind chore

    What does this PR do / why we need it:

    Currently, the output of the e2e tests is only redirected to a log file. With this change, the logs will also be displayed on the console.

    Have you updated the necessary documentation?

    • [ ] Documentation update is required by this PR.
    • [ ] Documentation has been updated.

    Which issue(s) this PR fixes:

    Fixes #?

    How to test changes / Special notes to the reviewer:

  • failures after upgrade to 0.1.0, 0.2.0, or 0.2.1

    Describe the bug

    We've been running operator version 0.0.15 with Argo CD 2.1.10. I tried upgrading to 0.1.0, but started getting a lot of errors like so:

    time="2022-06-02T17:12:55Z" level=info msg="Normalized app spec: {\"status\":{\"conditions\":[{\"lastTransitionTime\":\"2022-06-02T17:12:55Z\",\"message\":\"Namespace \\\"sys-oshift-test-develop\\\" for Rollout \\\"root-rollout\\\" is not managed\",\"type\":\"ComparisonError\"}]}}" application=sys-oshift-test-develop
    

    The pod argocd-application-controller-0 is run by service account argocd-argocd-application-controller. That service account has cluster-admin privs. So it's not a privilege issue.

    Meanwhile, the controller pod is logging:

    2022-06-02T15:29:55.154Z	ERROR	controller-runtime.manager.controller.argocd	Reconciler error	{"reconciler group": "argoproj.io", "reconciler kind": "ArgoCD", "name": "argocd", "namespace": "argocd", "error": "roles.rbac.authorization.k8s.io \"argocd-argocd-application-controller\" already exists"}
    

    And it continues to log that even after I deleted the role argocd-argocd-application-controller.

    This is on openshift 4.7.

    I tried upgrading to operator 0.2.0 or 0.2.1. Same problem.

    I tried upgrading Argo CD to 2.1.15 by editing the ArgoCD CR's spec.version. But with the operator in this state, it ignored the version change.

    Any insights?
