An operator to support HashiCorp Vault configuration workflows from within Kubernetes

Vault Config Operator

This operator helps set up Vault configurations. The main intent is to do so in such a way that pods can subsequently consume the secrets made available. Two main principles run through all of the capabilities of this operator:

  1. High-fidelity API. The CRDs exposed by this operator reflect, field by field, the Vault APIs. This is because we don't want to make any assumptions about the kinds of configuration workflows that users will set up. That said, the Vault API is very extensive, and we are starting with enough API coverage to support, we think, some simple and very common configuration workflows.
  2. Attention to security (after all, we are integrating with a security tool). To prevent credential leaks, we give the operator itself no permissions against Vault. Every API exposed by this operator contains enough information to authenticate to Vault using a local service account (local to the namespace where the API exists). In other words, for a namespace user to be able to successfully configure Vault, a service account in that namespace must have previously been given the needed Vault permissions.

Currently this operator supports the following CRDs:

  1. Policy Configures Vault Policies
  2. VaultRole Configures a Vault Kubernetes Authentication Role
  3. SecretEngineMount Configures a Mount point for a SecretEngine
  4. DatabaseSecretEngineConfig Configures a Database Secret Engine Connection
  5. DatabaseSecretEngineRole Configures a Database Secret Engine Role
  6. RandomSecret Creates a random secret in a Vault kv Secret Engine with one password field generated using a PasswordPolicy

The Authentication Section

As discussed, each API has an Authentication Section that specifies how to authenticate to Vault. Here is an example:

  authentication: 
    path: kubernetes
    role: policy-admin
    namespace: tenant-namespace
    serviceAccount:
      name: vaultsa

The path field specifies the path at which the Kubernetes authentication role is mounted.

The role field specifies which role to request when authenticating.

The namespace field specifies the Vault namespace (not related to Kubernetes namespace) to use. This is optional.

The serviceAccount.name field specifies the service account whose token is used during the authentication process.

So the above configuration roughly corresponds to the following command:

vault write [tenant-namespace/]auth/kubernetes/login role=policy-admin jwt=<vaultsa jwt token>
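Under the hood this is a call to Vault's Kubernetes auth login endpoint. Here is a minimal Python sketch of the request the operator would assemble (illustrative only; the JWT is normally read from the pod's projected service account token, and the X-Vault-Namespace header carries the optional Vault namespace):

```python
import json

def build_login_request(mount_path, role, jwt, vault_namespace=None):
    """Assemble the pieces of a Vault Kubernetes-auth login call.

    The call is POST /v1/auth/<mount_path>/login with a JSON body
    containing the role and the service account JWT; an optional Vault
    namespace is passed via the X-Vault-Namespace header.
    """
    url_path = f"/v1/auth/{mount_path}/login"
    headers = {"X-Vault-Namespace": vault_namespace} if vault_namespace else {}
    body = json.dumps({"role": role, "jwt": jwt})
    return url_path, headers, body

path, headers, body = build_login_request(
    "kubernetes", "policy-admin", "<vaultsa jwt token>", "tenant-namespace"
)
```

A successful login returns a client token that the operator then uses for the configuration calls described below.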

Policy

The Policy CRD allows a user to create a Vault Policy, here is an example:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: Policy
metadata:
  name: database-creds-reader
spec:
  authentication: 
    path: kubernetes
    role: policy-admin
  policy: |
    # Configure read secrets
    path "/{{identity.entity.aliases.auth_kubernetes_804f1655.metadata.service_account_namespace}}/database/creds/+" {
      capabilities = ["read"]
    }

Notice that in this policy we have parametrized the path based on the namespace of the connecting service account.
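To make the templating concrete, here is a small Python sketch of how that path resolves for a service account in a hypothetical namespace team-a. This is an illustration only; Vault performs this substitution server-side with its own identity templating engine:

```python
def resolve_template(template: str, metadata: dict) -> str:
    """Substitute {{key}} placeholders, loosely mimicking how Vault
    resolves identity templating in policy paths at authorization time."""
    for key, value in metadata.items():
        template = template.replace("{{" + key + "}}", value)
    return template

alias_key = (
    "identity.entity.aliases.auth_kubernetes_804f1655"
    ".metadata.service_account_namespace"
)
resolved = resolve_template(
    "/{{" + alias_key + "}}/database/creds/+",
    {alias_key: "team-a"},
)
# resolved == "/team-a/database/creds/+"
```

The practical effect is that a single Policy object grants each tenant read access only to the database credentials under its own namespace prefix.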

VaultRole

The VaultRole creates a Vault Authentication Role for a Kubernetes Authentication mount, here is an example:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: VaultRole
metadata:
  name: database-engine-admin
spec:
  authentication: 
    path: kubernetes
    role: policy-admin
  path: kubernetes  
  policies:
    - database-engine-admin
  targetServiceAccounts: 
  - vaultsa  
  targetNamespaceSelector:
    matchLabels:
      postgresql-enabled: "true"

The path field specifies the path of the Kubernetes Authentication Mount at which the role will be created.

The policies field specifies which Vault policies will be associated with this role.

The targetServiceAccounts field specifies which service accounts can authenticate. If not specified, it defaults to default.

The targetNamespaceSelector field specifies the Kubernetes namespaces from which it is possible to authenticate. Note that as the set of namespaces selected by the selector varies, this configuration will be updated. It is also possible to specify a static set of namespaces.
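A static list might look like the following sketch (the field name, here assumed to be targetNamespaces, may differ between operator versions, so check the CRD schema):

```yaml
spec:
  # hypothetical static variant of the namespace selector
  targetNamespaces:
  - team-a
  - team-b
```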

Many other standard Kubernetes Authentication Role fields are available for fine tuning, see the Vault Documentation

This CR is roughly equivalent to this Vault CLI command:

vault write [namespace/]auth/kubernetes/role/database-engine-admin bound_service_account_names=vaultsa bound_service_account_namespaces=<dynamically generated> policies=database-engine-admin

SecretEngineMount

The SecretEngineMount CRD allows a user to create a Secret Engine mount point, here is an example:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: SecretEngineMount
metadata:
  name: database
spec:
  authentication: 
    path: kubernetes
    role: database-engine-admin
  type: database
  path: postgresql-vault-demo

The type field specifies the secret engine type.

The path field specifies the path at which to mount the secret engine.

Many other standard Secret Engine Mount fields are available for fine tuning, see the Vault Documentation

This CR is roughly equivalent to this Vault CLI command:

vault secrets enable -path [namespace/]postgresql-vault-demo/database database

DatabaseSecretEngineConfig

The DatabaseSecretEngineConfig CRD allows a user to create a Database Secret Engine configuration (also called a connection) for an existing Database Secret Engine Mount. Here is an example:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: DatabaseSecretEngineConfig
metadata:
  name: my-postgresql-database
spec:
  authentication: 
    path: kubernetes
    role: database-engine-admin
  pluginName: postgresql-database-plugin
  allowedRoles:
    - read-write
    - read-only
  connectionURL: postgresql://{{username}}:{{password}}@my-postgresql-database.postgresql-vault-demo.svc:5432
  username: admin
  rootCredentialsFromSecret:
    name: postgresql-admin-password
  path: postgresql-vault-demo/database

The pluginName field specifies what type of database this connection is for.

The allowedRoles field specifies which role names can be created for this connection.

The connectionURL field specifies how to connect to the database.

The username field specifies the username to be used to connect to the database. This field is optional; if not specified, the username will be retrieved from the credential secret.

The path field specifies the path of the secret engine to which this connection will be added.

The password, and possibly the username, can be retrieved in three different ways:

  1. From a Kubernetes secret, specifying the rootCredentialsFromSecret field. The secret must be of basic auth type. If the secret is updated this connection will also be updated.
  2. From a Vault secret, specifying the rootCredentialsFromVaultSecret field.
  3. From a RandomSecret, specifying the rootCredentialsFromRandomSecret field. When the RandomSecret generates a new secret, this connection will also be updated.
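For the first option, the referenced object is a standard Kubernetes basic-auth Secret. Here is a minimal sketch (names and values are illustrative) matching the rootCredentialsFromSecret reference in the example above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-admin-password
type: kubernetes.io/basic-auth
stringData:
  username: admin
  password: changeit
```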

Many other standard Database Secret Engine Config fields are available for fine tuning, see the Vault Documentation

This CR is roughly equivalent to this Vault CLI command:

password= ">
vault write [namespace/]postgresql-vault-demo/database/config/my-postgresql-database plugin_name=postgresql-database-plugin allowed_roles="read-write,read-only" connection_url="postgresql://{{username}}:{{password}}@my-postgresql-database.postgresql-vault-demo.svc:5432/" username=<retrieved dynamically> password=<retrieved dynamically>

DatabaseSecretEngineRole

The DatabaseSecretEngineRole CRD allows a user to create a Database Secret Engine Role, here is an example:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: DatabaseSecretEngineRole
metadata:
  name: read-only
spec:
  authentication: 
    path: kubernetes
    role: database-engine-admin
  path: postgresql-vault-demo/database
  dBName: my-postgresql-database
  creationStatements:
    - CREATE ROLE "{{name}}" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO "{{name}}";

The path field specifies the path of the secret engine that will contain this role.

The dBName field specifies the name of the connection to be used with this role.

The creationStatements field specifies the statements to run to create a new account.

Many other standard Database Secret Engine Role fields are available for fine tuning, see the Vault Documentation

This CR is roughly equivalent to this Vault CLI command:

vault write [namespace/]postgresql-vault-demo/database/roles/read-only db_name=my-postgresql-database creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";"

RandomSecret

The RandomSecret CRD allows a user to generate a random secret (normally a password) and store it in Vault with a given key. The generated secret will be compliant with a Vault Password Policy, here is an example:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: RandomSecret
metadata:
  name: my-postgresql-admin-password
spec:
  authentication: 
    path: kubernetes
    role: database-engine-admin
  path: kv/vault-tenant
  secretKey: password
  secretFormat:
    passwordPolicyName: my-complex-password-format
  refreshPeriod: 1h

The path field specifies the path at which the secret will be written; it must correspond to a kv Secret Engine mount.

The secretKey field is the key of the secret.

The secretFormat field is a reference to a Vault Password Policy; it can also be supplied inline.

The refreshPeriod field specifies the frequency at which this secret will be regenerated. This is an optional field; if not specified, the secret will be generated once and then never updated.
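As a rough illustration of what a password policy constrains (length plus required character classes), here is a Python sketch using the standard secrets module. This is not Vault's implementation, just the idea:

```python
import secrets
import string

def generate_password(length=20, required_sets=(string.digits, string.ascii_uppercase)):
    """Generate a random password and retry until every required character
    class is represented, similar in spirit to a Vault password policy's
    length and charset rules."""
    alphabet = string.ascii_letters + string.digits
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # accept only if each required character set contributes at least one char
        if all(any(ch in charset for ch in candidate) for charset in required_sets):
            return candidate

print(generate_password())
```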

With a RandomSecret it is possible to build workflows in which the root password of a resource that we need to protect is never stored anywhere, except in Vault. One way to achieve this is to have a RandomSecret seed the root password. Then create an operator that watches the RandomSecret, retrieves the generated secret from Vault, and updates the resource to be protected. Finally, configure the Secret Engine object to watch for the RandomSecret updates.

This CR is roughly equivalent to this Vault CLI command:

vault kv put [namespace/]kv/vault-tenant password=<generated value>

Metrics

Prometheus compatible metrics are exposed by the Operator and can be integrated into OpenShift's default cluster monitoring. To enable OpenShift cluster monitoring, label the namespace the operator is deployed in with the label openshift.io/cluster-monitoring="true".

oc label namespace <namespace> openshift.io/cluster-monitoring="true"

Testing metrics

export operatorNamespace=vault-config-operator-local # or vault-config-operator
oc label namespace ${operatorNamespace} openshift.io/cluster-monitoring="true"
oc rsh -n openshift-monitoring -c prometheus prometheus-k8s-0 /bin/bash
export operatorNamespace=vault-config-operator-local # or vault-config-operator
curl -v -s -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://vault-config-operator-controller-manager-metrics.${operatorNamespace}.svc.cluster.local:8443/metrics
exit

Deploying the Operator

This is a cluster-level operator that you can deploy in any namespace; vault-config-operator is recommended.

It is recommended to deploy this operator via OperatorHub, but you can also deploy it using Helm.

Multiarch Support

The following architectures are supported: amd64, arm64, ppc64le, s390x.

Deploying from OperatorHub

Note: This operator supports being installed in disconnected environments.

If you want to utilize the Operator Lifecycle Manager (OLM) to install this operator, you can do so in two ways: from the UI or the CLI.

Deploying from OperatorHub UI

  • If you would like to launch this operator from the UI, you'll need to navigate to the OperatorHub tab in the console. Before starting, make sure you've created the namespace that you want to install this operator to with the following:
oc new-project vault-config-operator
  • Once there, you can search for this operator by name: vault config operator. This will return an item for our operator, and you can select it to get started. You'll then be presented with an option to install, which will begin the process.
  • After clicking the install button, you can then select the namespace that you would like to install this to as well as the installation strategy you would like to proceed with (Automatic or Manual).
  • Once you've made your selection, you can select Subscribe and the installation will begin. After a few moments you can go ahead and check your namespace and you should see the operator running.


Deploying from OperatorHub using CLI

If you'd like to launch this operator from the command line, you can use the manifests contained in this repository by running the following:

oc new-project vault-config-operator

oc apply -f config/operatorhub -n vault-config-operator

This will create the appropriate OperatorGroup and Subscription and will trigger OLM to launch the operator in the specified namespace.

Deploying with Helm

Here are the instructions to install the latest release with Helm.

oc new-project vault-config-operator
helm repo add vault-config-operator https://redhat-cop.github.io/vault-config-operator
helm repo update
helm install vault-config-operator vault-config-operator/vault-config-operator

This can later be updated with the following commands:

helm repo update
helm upgrade vault-config-operator vault-config-operator/vault-config-operator

Development

Running the operator locally

Deploy a Vault instance

If you don't have a Vault instance available for testing, deploy one with these steps:

helm repo add hashicorp https://helm.releases.hashicorp.com
export cluster_base_domain=$(oc get dns cluster -o jsonpath='{.spec.baseDomain}')
envsubst < ./config/local-development/vault-values.yaml > /tmp/values
helm upgrade vault hashicorp/vault -i --create-namespace -n vault --atomic -f /tmp/values

INIT_RESPONSE=$(oc exec vault-0 -n vault -- vault operator init -address https://vault-internal.vault.svc:8200 -ca-path /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt -format=json -key-shares 1 -key-threshold 1)

UNSEAL_KEY=$(echo "$INIT_RESPONSE" | jq -r .unseal_keys_b64[0])
ROOT_TOKEN=$(echo "$INIT_RESPONSE" | jq -r .root_token)

echo "$UNSEAL_KEY"
echo "$ROOT_TOKEN"

#here we are saving these variables in a secret, this is probably not what you should do in a production environment
oc delete secret vault-init -n vault
oc create secret generic vault-init -n vault --from-literal=unseal_key=${UNSEAL_KEY} --from-literal=root_token=${ROOT_TOKEN}
export UNSEAL_KEY=$(oc get secret vault-init -n vault -o jsonpath='{.data.unseal_key}' | base64 -d )
export ROOT_TOKEN=$(oc get secret vault-init -n vault -o jsonpath='{.data.root_token}' | base64 -d )
oc exec vault-0 -n vault -- vault operator unseal -address https://vault-internal.vault.svc:8200 -ca-path /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt $UNSEAL_KEY

Configure a Kubernetes Authentication mount point

All of the configuration performed by the operator needs to authenticate via Kubernetes Authentication, so you need a root Kubernetes Authentication mount point and role. Then you can create more roles via the operator. If you don't have a root mount point and role, you can create them as follows:

oc new-project vault-admin
export cluster_base_domain=$(oc get dns cluster -o jsonpath='{.spec.baseDomain}')
export VAULT_ADDR=https://vault-vault.apps.${cluster_base_domain}
export VAULT_TOKEN=$(oc get secret vault-init -n vault -o jsonpath='{.data.root_token}' | base64 -d )
# this policy is intentionally broad to allow to test anything in Vault. In a real life scenario this policy would be scoped down.
vault policy write -tls-skip-verify vault-admin  ./config/local-development/vault-admin-policy.hcl
vault auth enable -tls-skip-verify kubernetes
export sa_secret_name=$(oc get sa default -n vault -o jsonpath='{.secrets[*].name}' | grep -o '\b\w*\-token-\w*\b')
oc get secret ${sa_secret_name} -n vault -o jsonpath='{.data.ca\.crt}' | base64 -d > /tmp/ca.crt
vault write -tls-skip-verify auth/kubernetes/config token_reviewer_jwt="$(oc serviceaccounts get-token vault -n vault)" kubernetes_host=https://kubernetes.default.svc:443 kubernetes_ca_cert=@/tmp/ca.crt
vault write -tls-skip-verify auth/kubernetes/role/policy-admin bound_service_account_names=default bound_service_account_namespaces=vault-admin policies=vault-admin ttl=1h
export accessor=$(vault read -tls-skip-verify -format json sys/auth | jq -r '.data["kubernetes/"].accessor')

Run the operator

make install
oc new-project vault-config-operator-local
kustomize build ./config/local-development | oc apply -f - -n vault-config-operator-local
export token=$(oc serviceaccounts get-token 'vault-config-operator-controller-manager' -n vault-config-operator-local)
oc login --token ${token}
export VAULT_ADDR=https://vault-vault.apps.${cluster_base_domain}
unset VAULT_TOKEN
export VAULT_SKIP_VERIFY=true
make run ENABLE_WEBHOOKS=false

Test Manually

Policy

envsubst < ./test/database-engine-admin-policy.yaml | oc apply -f - -n vault-admin

Vault Role

oc new-project test-vault-config-operator
oc label namespace test-vault-config-operator database-engine-admin=true
oc apply -f ./test/database-engine-admin-role.yaml -n vault-admin

Secret Engine Mount

oc apply -f ./test/database-secret-engine.yaml -n test-vault-config-operator

Database secret engine connection. This will deploy a PostgreSQL database to connect to:

oc create secret generic postgresql-admin-password --from-literal=postgresql-password=changeit -n test-vault-config-operator
export uid=$(oc get project test-vault-config-operator -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}'|sed 's/\/.*//')
export guid=$(oc get project test-vault-config-operator -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}'|sed 's/\/.*//')
helm upgrade my-postgresql-database bitnami/postgresql -i --create-namespace -n test-vault-config-operator -f ./examples/postgresql/postgresql-values.yaml --set securityContext.fsGroup=${guid} --set containerSecurityContext.runAsUser=${uid} --set volumePermissions.securityContext.runAsUser=${uid} --set metrics.securityContext.runAsUser=${uid}
oc apply -f ./test/database-engine-config.yaml -n test-vault-config-operator

Database Secret engine role

oc apply -f ./test/database-engine-read-only-role.yaml -n test-vault-config-operator

RandomSecret

vault write -tls-skip-verify /sys/policies/password/simple-password-policy policy=@./test/password-policy.hcl
envsubst < ./test/kv-engine-admin-policy.yaml | oc apply -f - -n vault-admin
envsubst < ./test/secret-writer-policy.yaml | oc apply -f - -n vault-admin
oc apply -f ./test/kv-engine-admin-role.yaml -n vault-admin
oc apply -f ./test/secret-writer-role.yaml -n vault-admin
oc apply -f ./test/kv-secret-engine.yaml -n test-vault-config-operator
oc apply -f ./test/random-secret.yaml -n test-vault-config-operator

Test helm chart locally

Define an image and tag. For example...

export imageRepository="quay.io/redhat-cop/vault-config-operator"
export imageTag="$(git -c 'versionsort.suffix=-' ls-remote --exit-code --refs --sort='version:refname' --tags https://github.com/redhat-cop/vault-config-operator.git '*.*.*' | tail --lines=1 | cut --delimiter='/' --fields=3)"

Deploy chart...

make helmchart IMG=${imageRepository} VERSION=${imageTag}
helm upgrade -i vault-config-operator-local charts/vault-config-operator -n vault-config-operator-local --create-namespace

Delete...

helm delete vault-config-operator-local -n vault-config-operator-local
kubectl delete -f charts/vault-config-operator/crds/crds.yaml

Building/Pushing the operator image

export repo=raffaelespazzoli #replace with yours
docker login quay.io/$repo
make docker-build IMG=quay.io/$repo/vault-config-operator:latest
make docker-push IMG=quay.io/$repo/vault-config-operator:latest

Deploy to OLM via bundle

make manifests
make bundle IMG=quay.io/$repo/vault-config-operator:latest
operator-sdk bundle validate ./bundle --select-optional name=operatorhub
make bundle-build BUNDLE_IMG=quay.io/$repo/vault-config-operator-bundle:latest
docker push quay.io/$repo/vault-config-operator-bundle:latest
operator-sdk bundle validate quay.io/$repo/vault-config-operator-bundle:latest --select-optional name=operatorhub
oc new-project vault-config-operator
oc label namespace vault-config-operator openshift.io/cluster-monitoring="true"
operator-sdk cleanup vault-config-operator -n vault-config-operator
operator-sdk run bundle --install-mode AllNamespaces -n vault-config-operator quay.io/$repo/vault-config-operator-bundle:latest

Releasing

" -m " " git push upstream ">
git tag -a "
      
       "
       -m "
      
       "
      
git push upstream <tagname>

If you need to remove a release:

git tag -d <tagname>
git push upstream --delete <tagname>

If you need to "move" a release to the current main

git tag -f <tagname>
git push upstream -f <tagname>

Cleaning up

operator-sdk cleanup vault-config-operator -n vault-config-operator
oc delete operatorgroup operator-sdk-og
oc delete catalogsource vault-config-operator-catalog
Owner
Red Hat Communities of Practice
Comments
  • Support directly specifying username and password for DatabaseSecretEngineConfig

    Support directly specifying username and password for DatabaseSecretEngineConfig

    Vault supports (via the CLI, via Terraform etc.) specifying the username and password directly, it would be great if the CRD did as well instead of requiring them to be provided via a Secret/VaultSecret/RandomSecret. Vault lets users rotate the password after the engine has been configured, and if that is done, the engine config will anyway be out of sync with whatever secret was used to initialize it.

  • Kubernetes v1.24: SecretEngineMount Type: KV / KV version 2

    Kubernetes v1.24: SecretEngineMount Type: KV / KV version 2

    Hi, this issue is also related to #66 #59

    Now I try to create a SecretEngineMount:

    apiVersion: redhatcop.redhat.io/v1alpha1
    kind: SecretEngineMount
    metadata:
    spec:
      authentication:
        path: prod1/klst-kubernetes
        role: klst-secret-engine-admin
      config:
        defaultLeaseTTL: ""
        forceNoCache: false
        listingVisibility: hidden
        maxLeaseTTL: ""
        options:
          version: "2"
      path: prod1/klst
      type: kv
    status:
      conditions:
      - lastTransitionTime: "2022-08-16T15:22:54Z"
        message: unable to find token secret name for service accountklst/default
        observedGeneration: 1
        reason: LastReconcileCycleFailed
        status: "True"
        type: ReconcileError

    note this happens for both kv and kv version 2

    In a Kubernetes cluster < v1.24 the same deployment works perfectly. Thanks for your help, and let me know in case I can assist.

  • Kubernetes V1.24

    Kubernetes V1.24

    Hi, i had the vault-config-operator running on my cluster (v1.23) and it works like a charm.

    Now we updated our Kubernetes cluster to v1.24. In 1.24 secrets are not created by default (see "Kubernetes 1.24+ only"). Therefore I created a secret manually as described by HashiCorp.

    Unfortunately I don't get it running again. The vault-config-operator log tells me: "unable to find token secret name for service accountvault-admin/default".

    One hint: in 1.23, ServiceAccounts have a secrets section, so you can query the SA for secrets by parsing that section. In 1.24, ServiceAccounts are missing that section; you have to look into the annotations of the Service Account to find the corresponding secret.

    see: commons.go

    var tokenSecretName string
    for _, secretName := range serviceAccount.Secrets {
    	if strings.Contains(secretName.Name, "token") {
    		tokenSecretName = secretName.Name
    		break
    	}
    }
    if tokenSecretName == "" {
    	return "", errors.New("unable to find token secret name for service account" + kubeNamespace + "/" + serviceAccountName)
    }
    

    Thanks a lot for your help Klaus

    Additional Info:

    apiVersion: v1
    kind: Secret
    metadata:
      name: default
      annotations:
        kubernetes.io/service-account.name: default
    type: kubernetes.io/service-account-token


    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: role-tokenreview-binding
      namespace: vault-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:auth-delegator
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: vault-admin

    apiVersion: redhatcop.redhat.io/v1alpha1
    kind: AuthEngineMount
    metadata:
      name: team-dev-kubernetes
      namespace: vault-admin
    spec:
      authentication:
        path: cluster1-admin
        role: vault-admin
      type: kubernetes
      path: cluster1

  • image pull backoff using subscription based install

    image pull backoff using subscription based install

    I tried to apply the subscription in our OpenShift 4.7 cluster and ended up with:

      Type     Reason          Age                 From               Message
      ----     ------          ----                ----               -------
      Normal   Scheduled       3m24s               default-scheduler  Successfully assigned vault-config-operator/vault-config-operator-controller-manager-97c48cfbc-xvqkc to myworker1
      Normal   AddedInterface  3m24s               multus             Add eth0 [10.254.5.151/24]
      Normal   Pulling         3m8s                kubelet            Pulling image "quay.io/redhat-cop/vault-config-operator@sha256:5ce02b001726dc749b6b543ccd43107d0a5f63dbd1e4a8c1a60ea3349cdecbb5"
      Normal   Started         2m1s                kubelet            Started container manager
      Normal   Pulled          2m1s                kubelet            Successfully pulled image "quay.io/redhat-cop/vault-config-operator@sha256:5ce02b001726dc749b6b543ccd43107d0a5f63dbd1e4a8c1a60ea3349cdecbb5" in 1m7.190758748s
      Normal   Created         2m1s                kubelet            Created container manager
      Warning  Failed          51s (x3 over 3m8s)  kubelet            Failed to pull image "quay.io/redhat-cop/kube-rbac-proxy@sha256:fda7f669f6f5e0722c88946075f323681344a7639a77acae79fabc26eb5ad636": rpc error: code = Unknown desc = Error reading manifest sha256:fda7f669f6f5e0722c88946075f323681344a7639a77acae79fabc26eb5ad636 in quay.io/redhat-cop/kube-rbac-proxy: manifest unknown: manifest unknown
      Warning  Failed          51s (x3 over 3m8s)  kubelet            Error: ErrImagePull
      Normal   BackOff         14s (x5 over 98s)   kubelet            Back-off pulling image "quay.io/redhat-cop/kube-rbac-proxy@sha256:fda7f669f6f5e0722c88946075f323681344a7639a77acae79fabc26eb5ad636"
      Warning  Failed          14s (x5 over 98s)   kubelet            Error: ImagePullBackOff
      Normal   Pulling         3s (x4 over 3m24s)  kubelet            Pulling image "quay.io/redhat-cop/kube-rbac-proxy@sha256:fda7f669f6f5e0722c88946075f323681344a7639a77acae79fabc26eb5ad636"
    

    and looking at the quay.io repo, indeed this image seems not to be present (anymore?)

  • Vault PKI Secret Engine

    Vault PKI Secret Engine

    • [X] PKI Engine implementation

      • [X] PKISecretEngineConfig Root CA

      • [X] /my-pki/root/generate/internal -> PKISecretEngineConfig CR with type root

      • [X] /my-pki/config/urls -> PKISecretEngineConfig CR

      • [X] PKISecretEngineConfig intermediate CA

      • [X] /my-pki/intermediate/generate/internal -> PKISecretEngineConfig CR with type intermediate

      • [X] /my-pki/config/urls -> PKISecretEngineConfig CR

      • [x] /my-int/intermediate/set-signed PKISecretEngineConfig CR retrieved from a secret

    • [X] PKISecretEngineRole CR

      • [X] /my-pki/roles/
    • [X] PKI Engine sample

    • [X] Test

    • [X] Document

    • [X] Improved local development.

    • [x] PKI Engine Clean up logic

  • govulncheck results

    govulncheck results

  • Ldap auth engine

    Ldap auth engine

    This PR contains objects for new auth method, type LDAPAuthEngineConfig :

    • Create the CRD type and a validating webhooks for LDAPAuthEngineConfig
    • Update type specs and webhook
    • Generate CRD manifests
    • Add test for LDAPAuthEngineConfig
    • Add LDAPAuthEngineConfig section in readme
  • Manager Webhook VolumeMounts Missing with OLM Install 4.6.48

    Manager Webhook VolumeMounts Missing with OLM Install 4.6.48

    Issue:

    On a fresh Openshift 4.6.48 cluster, using OLM for operator installation (v.0.4.1) manager container fails to startup due to missing Webhook serving certificates:

    Expected Result:

    Manager includes webhook TLS cert volume/mounts setup.

    volumeMounts:
    ...
      - name: webhook-cert 
        mountPath: /tmp/k8s-webhook-server/serving-certs
    volumes:
    ...
      - name: webhook-cert 
        secretName: vault-config-operator-controller-manager-service-cert 
        ...
    

    Logs:

    2022-04-19T18:50:25.475Z	ERROR	setup	problem running manager	{"error": "open /tmp/k8s-webhook-server/serving-certs/tls.crt: no such file or directory"}
    

    Metadata:

     installedCSV: vault-config-operator.v0.4.1
     currentCSV: vault-config-operator.v0.4.1
     OCP: 4.6.48
    
  • minor bug with `KubernetesAuthEngineRole` if `targetNamespaceSelector` has no matches

    minor bug with `KubernetesAuthEngineRole` if `targetNamespaceSelector` has no matches

    config excerpt:

    kind: KubernetesAuthEngineRole
    ...
    targetNamespaces:
        targetNamespaceSelector:
          matchLabels:
           vault-access: application
    status:
      conditions:
        - lastTransitionTime: '2022-08-04T20:18:26Z'
          message: >-
            Error making API request.
            URL: PUT
            https://vault.mycompany.dev:8200/v1/auth/kubernetes/role/application-viewer
            Code: 400. Errors:
            * "bound_service_account_namespaces" can not be empty
          observedGeneration: 1
          reason: LastReconcileCycleFailed
          status: 'True'
          type: ReconcileError
    

    Problem: if no namespaces exist with the vault-access: application label, the operator will still attempt to create a Vault role with an empty namespace.

    Expected: Operator should not attempt to write an invalid Vault role configuration with an empty namespace in the first place.

    Please note this error does resolve itself if a namespace with a matching label is added. So this is only a minor issue.

  • Fix PKI CA Chain hierarchy

    Fix PKI CA Chain hierarchy

    1. Create an Intermediate SecretEngineMount.
    apiVersion: redhatcop.redhat.io/v1alpha1
    kind: SecretEngineMount
    metadata:
      name: intermediate
    spec:
      authentication:
        path: kubernetes
        role: vault-admin
      type: pki
      path: pki
      config:
        # 1 Year
        maxLeaseTTL: "8760h"
    
    2. Configure an Intermediate PKISecretEngineConfig.
    apiVersion: redhatcop.redhat.io/v1alpha1
    kind: PKISecretEngineConfig
    metadata:
      name: intermediate
    spec:
      authentication:
        path: kubernetes
        role: vault-admin
      path: pki/intermediate
      commonName: vault.int.company.io
      TTL: "8760h"
      type: intermediate
      privateKeyType: exported
      country: CH
      province: ZH
      locality: Zurich
      organization: Red Hat
      maxPathLength: 1
      issuingCertificates:
      - https://${VAULT_ROUTE}/v1/pki/intermediate/ca
      crlDistributionPoints:
      - https://${VAULT_ROUTE}/v1/pki/intermediate/crl
    

    Note: PKISecretEngineConfig stays in error status until the signed certificate has been provided.

    Waiting spec.externalSignSecret with signed intermediate certificate.
    
    3. Sign the CSR with the company Root CA.
    oc extract secret/intermediate --keys=csr
    openssl ca -config /opt/tls/root/openssl.cnf -extensions v3_intermediate_ca -days 365 -notext -md sha256 -in csr -out tls.crt
    
    4. Create the secret with the signed intermediate certificate.
    oc create secret generic signed-intermediate --from-file=tls.crt
    
    5. Patch the PKISecretEngineConfig with the new signed-intermediate secret.
    cat <<EOF > patch.yaml
    spec:
      externalSignSecret:
        name: signed-intermediate
    EOF
    
    oc patch pkisecretengineconfig intermediate --type=merge --patch-file patch.yaml -n vault-config-operator
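Steps 3 and 4 can be sanity-checked offline before handing the certificate back to Vault. A self-contained sketch that stands in a throwaway root CA for the company root (all file names are placeholders; real signing should use the v3_intermediate_ca extensions as in step 3):

```shell
# Throwaway root CA standing in for the company root (placeholder file names).
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -subj "/CN=Example Root CA" -days 365
# CSR standing in for the one extracted from the intermediate secret.
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=vault.int.company.io"
# Sign the CSR with the throwaway root.
openssl x509 -req -in int.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -out tls.crt -days 365
# Sanity-check the signed intermediate against the root before creating the secret.
openssl verify -CAfile root.crt tls.crt
```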
    
  • Custom resource cannot be deleted if it fails to be created

    Custom resource cannot be deleted if it fails to be created

    If a custom resource fails to be created in Vault, most often because a wrong path is entered (for example, for a Kubernetes mount), it cannot be removed.

    We created a resource that failed to be created in Vault because of a permission problem (we saw a 403). It was our mistake, but the path (and some other fields) in the CR cannot be modified, and when we tried to delete the CR, it was stuck in the Deleting state and was never removed.

    The only workaround was to grant privileges to the wrong path in Vault; the resource then got created in Vault, and the CR could be deleted in OpenShift :-)
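Another workaround sometimes used for CRs stuck in Deleting (an assumption on my part, not an official fix) is to clear the finalizers so Kubernetes completes the delete. Note that this skips the operator's Vault-side cleanup:

```yaml
# patch.yaml — hypothetical finalizer-clearing patch; Kubernetes then finishes
# the delete, but nothing is cleaned up on the Vault side.
metadata:
  finalizers: []
```

Applied with something like `oc patch policy <name> --type=merge --patch-file patch.yaml`.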

  • Add volumes and volumeMounts customization to Helm chart.

    Add volumes and volumeMounts customization to Helm chart.

    Additional volumes and volume mounts can be injected using OLM, but the Helm chart does not provide the same functionality. This makes it impossible to specify custom certificates for Vault when the operator is installed using the Helm chart.
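A sketch of the values.yaml shape this issue is asking for (this is the requested addition, not an existing chart option; the secret name and mount path are assumptions):

```yaml
# Proposed (hypothetical) Helm values for injecting a custom Vault CA:
volumes:
  - name: vault-ca
    secret:
      secretName: vault-ca-cert
volumeMounts:
  - name: vault-ca
    mountPath: /vault-ca
    readOnly: true
```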

  • Cannot delete a resource that the operator cannot reconcile

    Cannot delete a resource that the operator cannot reconcile

    I encountered this with two different kinds of resources: Policy and KubernetesAuthEngineRole. I don't know whether the problem is restricted to those two resource types or affects all custom resources.

    If I create a custom resource in a namespace that is not authorized to reconcile that resource, the operator fails to reconcile it (as expected):

    2022-12-29T15:18:41-08:00 URL: PUT https://vault.example.com/v1/auth/kubernetes/login
    2022-12-29T15:18:41-08:00 Code: 403. Errors:
    2022-12-29T15:18:41-08:00 
    2022-12-29T15:18:41-08:00 * namespace not authorized	{"type": "Warning", "object": {"kind":"KubernetesAuthEngineRole","namespace":"default","name":"test-role","uid":"a5463c14-c1b8-4617-a3f4-ab5238fc0419","apiVersion":"redhatcop.redhat.io/v1alpha1","resourceVersion":"35432162"}, "reason": "ProcessingError"}
    2022-12-29T15:18:41-08:00 1.6723559212221923e+09	ERROR	Reconciler error	{"controller": "kubernetesauthenginerole", "controllerGroup": "redhatcop.redhat.io", "controllerKind": "KubernetesAuthEngineRole", "KubernetesAuthEngineRole": {"name":"test-role","namespace":"default"}, "namespace": "default", "name": "test-role", "reconcileID": "d91aa212-e8f6-4041-993e-47dd709eb840", "error": "Error making API request.\n\nURL: PUT https://vault.example.com/v1/auth/kubernetes/login\nCode: 403. Errors:\n\n* namespace not authorized"}
    2022-12-29T15:18:41-08:00 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
    2022-12-29T15:18:41-08:00 	/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:326
    2022-12-29T15:18:41-08:00 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    2022-12-29T15:18:41-08:00 	/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273
    2022-12-29T15:18:41-08:00 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
    2022-12-29T15:18:41-08:00 	/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234
    2022-12-29T15:18:41-08:00 1.6723559212395604e+09	ERROR	unable to login to vault	{"controller": "kubernetesauthenginerole", "controllerGroup": "redhatcop.redhat.io", "controllerKind": "KubernetesAuthEngineRole", "KubernetesAuthEngineRole": {"name":"test-role","namespace":"default"}, "namespace": "default", "name": "test-role", "reconcileID": "a26944e7-0087-4283-8bc2-6cadc7902f70", "error": "Error making API request.\n\nURL: PUT https://vault.example.com/v1/auth/kubernetes/login\nCode: 403. Errors:\n\n* namespace not authorized"}
    2022-12-29T15:18:41-08:00 github.com/redhat-cop/vault-config-operator/api/v1alpha1/utils.(*KubeAuthConfiguration).createVaultClient
    2022-12-29T15:18:41-08:00 	/home/runner/work/vault-config-operator/vault-config-operator/api/v1alpha1/utils/commons.go:142
    2022-12-29T15:18:41-08:00 github.com/redhat-cop/vault-config-operator/api/v1alpha1/utils.(*KubeAuthConfiguration).GetVaultClient
    2022-12-29T15:18:41-08:00 	/home/runner/work/vault-config-operator/vault-config-operator/api/v1alpha1/utils/commons.go:80
    2022-12-29T15:18:41-08:00 github.com/redhat-cop/vault-config-operator/controllers.prepareContext
    2022-12-29T15:18:41-08:00 	/home/runner/work/vault-config-operator/vault-config-operator/controllers/commons.go:21
    2022-12-29T15:18:41-08:00 github.com/redhat-cop/vault-config-operator/controllers.(*KubernetesAuthEngineRoleReconciler).Reconcile
    2022-12-29T15:18:41-08:00 	/home/runner/work/vault-config-operator/vault-config-operator/controllers/kubernetesauthenginerole_controller.go:80
    2022-12-29T15:18:41-08:00 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
    2022-12-29T15:18:41-08:00 	/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:121
    2022-12-29T15:18:41-08:00 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
    
    

    However, after that, the resource cannot be deleted. Kubectl says that it's deleted:

    kubectl delete kubernetesauthenginerole test-role
    kubernetesauthenginerole.redhatcop.redhat.io "test-role" deleted
    

    but the delete command just gets stuck indefinitely, and the resource remains in the cluster.

  • Cannot delete a RandomSecret

    Cannot delete a RandomSecret

    I am currently testing this operator to manage some of our workflows and it looks awesome!

    I am creating and deleting the YAML below to test my policies:

    • setup a KV v2 mount
    • create a secret in this mount with a RandomSecret
    • create a k8s secret with VaultSecret
    apiVersion: redhatcop.redhat.io/v1alpha1
    kind: SecretEngineMount
    metadata:
      name: kv
      namespace: geocoding-dev
    spec:
      authentication:
        path: k8s/o2sm-kubernetes
        role: o2sm-secret-engine-admin
      config:
        defaultLeaseTTL: ""
        forceNoCache: false
        listingVisibility: hidden
        maxLeaseTTL: ""
      options:
        version: "2"
      type: kv
      path: k8s/o2sm
    ---
    apiVersion: redhatcop.redhat.io/v1alpha1
    kind: RandomSecret
    metadata:
      namespace: geocoding-dev
      name: geocoding
    spec:
      authentication:
        path: k8s/o2sm-kubernetes
        role: o2sm-secret-engine-admin
      isKVSecretsEngineV2: true
      path: k8s/o2sm/kv/data
      secretKey: password
      secretFormat:
        passwordPolicyName: database_policy
      refreshPeriod: 1h
    ---
    apiVersion: redhatcop.redhat.io/v1alpha1
    kind: VaultSecret
    metadata:
      name: database-geocoding
      namespace: geocoding-dev
    spec:
      vaultSecretDefinitions:
        - authentication:
            path: k8s/o2sm-kubernetes
            role: o2sm-secret-engine-admin
          name: database-geocoding
          path: k8s/o2sm/kv/data/geocoding
      output:
        name: database-geocoding2
        stringData:
          password: '{{ index . "database-geocoding" "password" }}'
        type: Opaque
        annotations:
          refresh: every-minute
      refreshPeriod: 3m0s
    

    Everything works fine from a Vault perspective (creation and deletion) but I'm left with the RandomSecret in k8s. The object has the correct properties set and no errors in the describe output:

    deletionGracePeriodSeconds: 0
    deletionTimestamp: "2022-12-29T15:40:43Z"
    

    In the operator logs, I see the SecretEngineMount and VaultSecret entries but nothing for the RandomSecret.

    2022-12-29T15:40:43.208781957Z 1.6723284432085931e+09    DEBUG    controller-runtime.webhook.webhooks    received request    {"webhook": "/validate-redhatcop-redhat-io-v1alpha1-secretenginemount", "UID": "64ed7231-853c-4a0f-859c-4cf66c974f79", "kind": "redhatcop.redhat.io/v1alpha1, Kind=SecretEngineMount", "resource": {"group":"redhatcop.redhat.io","version":"v1alpha1","resource":"secretenginemounts"}}
    2022-12-29T15:40:43.209019439Z 1.672328443208945e+09    INFO    secretenginemount-resource    validate update    {"name": "kv"}
    2022-12-29T15:40:43.209189195Z 1.672328443209107e+09    DEBUG    controller-runtime.webhook.webhooks    wrote response    {"webhook": "/validate-redhatcop-redhat-io-v1alpha1-secretenginemount", "code": 200, "reason": "", "UID": "64ed7231-853c-4a0f-859c-4cf66c974f79", "allowed": true}
    2022-12-29T15:40:43.237149308Z 1.672328443237007e+09    DEBUG    controllers.VaultSecret    Delete Event    {"kind": "VaultSecret", "namespacedName": "geocoding-dev/database-geocoding"}
    2022-12-29T15:40:43.248093813Z 1.672328443247921e+09    DEBUG    controllers.VaultSecret    Delete Event    {"kind": "Secret", "namespacedName": "geocoding-dev/database-geocoding2"}
    

    Also, a quick question: what are the expected semantics of RandomSecret? Is it a one-shot generation only? Will it try to regenerate the secret in Vault if it is deleted there?

    Thanks

  • http: TLS handshake error from 10.42.0.0:45928: remote error: tls: bad certificate

    http: TLS handshake error from 10.42.0.0:45928: remote error: tls: bad certificate

    I've deployed the operator to bare metal kubernetes (k3s version 1.24.8) using the helm chart, with enableCertManager set to false (as default). I've generated my own certs using a self-signed ClusterIssuer, and named the corresponding secrets webhook-server-cert and vault-config-operator-certs.

    The pod spins up correctly, but when I create a custom resource (say, Policy), the creation fails with:

    Error from server (InternalError): error when creating "test-policy.yaml": Internal error occurred: failed calling webhook "mpolicy.kb.io": failed to call webhook: Post "https://vault-config-operator-webhook-service.vault.svc:443/mutate-redhatcop-redhat-io-v1alpha1-policy?timeout=10s": x509: certificate signed by unknown authority
    

    And I see the following error in vault-config-operator's logs:

    http: TLS handshake error from 10.42.0.0:45928: remote error: tls: bad certificate
    

    My guess is that this is somehow due to the cert being self-signed? What's confusing to me is that setting enableCertManager to true also creates self-signed certs, and that presumably works?
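For reference, when cert-manager is available its CA injector can populate the webhook configuration's `caBundle` automatically if the configuration carries the `cert-manager.io/inject-ca-from` annotation. A sketch, with the Certificate's namespace/name as assumptions:

```yaml
# Hypothetical annotation pointing cert-manager's CA injector at the
# Certificate (format: <namespace>/<certificate-name>) that issued the
# webhook serving cert, so clients trust the presented certificate.
metadata:
  annotations:
    cert-manager.io/inject-ca-from: vault/webhook-server-cert
```

Without cert-manager, the CA that signed webhook-server-cert has to be placed into the webhook configuration's `caBundle` by hand.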

  • WebhookConfiguration objects not generated correctly via OLM

    WebhookConfiguration objects not generated correctly via OLM

    Describe the bug

    During VCO operator upgrade via OLM from v0.8.0 to v0.8.4, we are facing the following error:

    Error "failed calling webhook "msecretenginemount.kb.io": failed to call webhook: Post "https://vault-config-operator-controller-manager-service.vault-config-operator.svc:443/mutate-redhatcop-redhat-io-v1alpha1-secretenginemount?timeout=10s": service "vault-config-operator-controller-manager-service" not found" for field "undefined".

    It looks like that when the release is being upgraded to v0.8.4, in both MutatingWebhookConfiguration and ValidatingWebhookConfiguration objects, for each Webhook, the wrong service is generated :

    $ oc get validatingwebhookconfigurations.admissionregistration.k8s.io -l olm.owner=vault-config-operator.v0.8.4 -ojson | jq '.items[].webhooks[].clientConfig.service.name'
    "vault-config-operator-controller-manager-service"
    "vault-config-operator-controller-manager-service"
    "vault-config-operator-controller-manager-service"

    where the correct service should be as follows:

    - admissionReviewVersions:
      - v1
      - v1beta1
      clientConfig:
        service:
          name: vault-config-operator-webhook-service
          namespace: {{ .Release.Namespace }}
          path: /mutate-redhatcop-redhat-io-v1alpha1-secretenginemount
      failurePolicy: Fail
      name: msecretenginemount.kb.io

    To Reproduce

    Steps to reproduce the behavior:

    1. The operator is updated through the OpenShift UI:

    [screenshot: Operator upgrade via the OpenShift UI]

    Logs during the operator upgrade:

    [screenshot: logs during the operator upgrade]

    Expected behavior

    WebhookConfigurations for each webhook are generated with the correct service "vault-config-operator-webhook-service", and the webhooks are accessed as follows:

    curl -k https://vault-config-operator-webhook-service.vault-config-operator.svc/mutate-redhatcop-redhat-io-v1alpha1-secretenginemount?timeout=10s

    Environment

    • Openshift 4.11
      • OLM : 0.19.0
    • Kubernetes: 1.24
    • Operator release: v0.8.4

    @sabre1041 @raffaelespazzoli can you please check ?

    Thank you

  • Support CRDs for Vault System Backend configuration

    Support CRDs for Vault System Backend configuration

    Dear Team,

    I would like to use this issue to raise the discussion of whether VCO should also support APIs/CRDs for Vault System Backend configuration. After all, every aspect of Vault can be controlled through its API, so does it make sense to expose endpoints such as the following as VCO CRDs:

    just to name a few.

    Thanks, erlis
