A Kubebuilder plugin to accelerate the development of Kubernetes operators

Operator Builder

Accelerate the development of Kubernetes Operators.

Operator Builder extends Kubebuilder to facilitate development and maintenance of Kubernetes operators. It is especially helpful if you need to take large numbers of resources defined with static or templated yaml and migrate to managing those resources with a custom Kubernetes operator.

An operator built with Operator Builder has the following features:

  • A defined API for a custom resource based on markers in static Kubernetes manifests.
  • A functioning controller that will create, update, and delete child resources to reconcile the state of the custom resource(s).
  • A companion CLI that helps end users with common operations.

Operator Builder uses a workload configuration as the primary configuration mechanism for providing attributes for the source code.

The custom resource defined in the source code can be cluster-scoped or namespace-scoped based on the requirements of the project. More info here.

Prerequisites

  • Make
  • Go version 1.16 or later
  • Docker (for building/pushing controller images)
  • An available test cluster. A local kind or minikube cluster will work just fine in many cases (see the example after this list).
  • Operator Builder installed.
  • kubectl installed.
  • A set of static Kubernetes manifests that can be used to deploy your workload. It is highly recommended that you apply these manifests to a test cluster and verify the resulting resources work as expected. If you don't have a workload of your own to use, you can use the examples provided in this guide.
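
If you don't have a test cluster handy, kind can create a local one with a single command (a minimal sketch; assumes kind and Docker are already installed):

kind create cluster

This starts a single-node cluster in Docker and points your current kubeconfig context at it.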

Installation Options

Download the latest binary

wget

Use wget to download the pre-compiled binaries:

wget https://github.com/vmware-tanzu-labs/operator-builder/releases/download/${VERSION}/${BINARY}.tar.gz -O - |\
  tar xz && mv ${BINARY} /usr/bin/operator-builder

For instance, VERSION=v0.3.1 and BINARY=operator-builder_${VERSION}_linux_amd64
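
Putting it together, a complete download for that release on Linux amd64 would look like this (a sketch; check the releases page for the current version, and note that writing to /usr/bin may require root):

VERSION=v0.3.1
BINARY=operator-builder_${VERSION}_linux_amd64
wget https://github.com/vmware-tanzu-labs/operator-builder/releases/download/${VERSION}/${BINARY}.tar.gz -O - |\
  tar xz && mv ${BINARY} /usr/bin/operator-builder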

MacOS / Linux via Homebrew install

Using Homebrew

brew tap vmware-tanzu-labs/tap
brew install operator-builder

Linux snap install

snap install operator-builder

NOTE: operator-builder installs with strict confinement in snap, which means it doesn't have direct access to root files.

Docker image pull

docker pull ghcr.io/vmware-tanzu-labs/operator-builder

One-shot container use

docker run --rm -v "${PWD}":/workdir ghcr.io/vmware-tanzu-labs/operator-builder [flags]
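
For example, to scaffold a project in the current directory you could run the init command (covered in Step 4 of the Getting Started guide below) through the container:

docker run --rm -v "${PWD}":/workdir ghcr.io/vmware-tanzu-labs/operator-builder init \
    --workload-config .source-manifests/workload.yaml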

Run container commands interactively

docker run --rm -it -v "${PWD}":/workdir --entrypoint sh ghcr.io/vmware-tanzu-labs/operator-builder

It can be useful to have a bash function to avoid typing the whole docker command:

operator-builder() {
  docker run --rm -i -v "${PWD}":/workdir ghcr.io/vmware-tanzu-labs/operator-builder "$@"
}

Go install

GO111MODULE=on go get github.com/vmware-tanzu-labs/operator-builder/cmd/operator-builder
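
Note that installing executables with go get is deprecated as of Go 1.17; on newer toolchains the equivalent is go install with an explicit version (here, the latest tagged release):

go install github.com/vmware-tanzu-labs/operator-builder/cmd/operator-builder@latest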

Getting Started

This guide will walk you through the creation of a Kubernetes operator for a single workload. This workload can consist of any number of Kubernetes resources and will be configured with a single custom resource. Please review the prerequisites prior to attempting to follow this guide.

This guide consists of the following steps:

  1. Create a repository.
  2. Determine what fields in your static manifests will need to be configurable for deployment into different environments. Add commented markers to the manifests. These will serve as instructions to Operator Builder.
  3. Create a workload configuration for your project.
  4. Use the Operator Builder CLI to generate the source code for your operator.
  5. Test the operator against your test cluster.
  6. Build and install your operator's controller manager in your test cluster.
  7. Build and test the operator's companion CLI.

Step 1

Create a new directory for your operator's source code. We recommend you follow the standard code organization guidelines. In that directory initialize a new git repo.

git init

Then initialize a new Go module. The module should be the import path for your project, usually something like github.com/user-account/project-name. Use the command go help importpath for more info.

go mod init [module]
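
For example, for the webstore project used in this guide the command might look like this (the account and repo names are hypothetical):

go mod init github.com/acme/webstore-operator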

Lastly create a directory for your static manifests. Operator Builder will use these as a source for defining resources in your operator's codebase. It must be a hidden directory so as not to interfere with source code generation.

mkdir .source-manifests

Put your static manifests in this .source-manifests directory. In the next step we will add commented markers to them. Note that these static manifests can be in one or more files. And you can have one or more manifests (separated by ---) in each file. Just organize them in a way that makes sense to you.

Step 2

Look through your static manifests and determine which fields will need to be configurable for deployment into different environments. Let's look at a simple example to illustrate. Following is a Deployment, Ingress and Service that may be used to deploy a workload.

# .source-manifests/app.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webstore-deploy
spec:
  replicas: 2                       # <===== configurable
  selector:
    matchLabels:
      app: webstore
  template:
    metadata:
      labels:
        app: webstore
    spec:
      containers:
      - name: webstore-container
        image: nginx:1.17           # <===== configurable
        ports:
        - containerPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: webstore-ing
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: app.acme.com
    http:
      paths:
      - path: /
        backend:
          serviceName: webstore-svc
          servicePort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: webstore-svc
spec:
  selector:
    app: webstore
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

There are two fields in the Deployment manifest that will need to be configurable. They are noted with comments. The Deployment's replicas and the Pod's container image will change between different environments. For example, in a dev environment the number of replicas will be low and a development version of the app will be run. In production, there will be more replicas and a stable release of the app will be used. In this example we don't have any configurable fields in the Ingress or Service.

Next we need to use +operator-builder:field markers in comments to inform Operator Builder that the operator will need to support configuration of these elements. Following is the Deployment manifest with these markers in place.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webstore-deploy
  labels:
    team: dev-team  # +operator-builder:field:name=teamName,type=string
spec:
  replicas: 2  # +operator-builder:field:name=webStoreReplicas,default=2,type=int
  selector:
    matchLabels:
      app: webstore
  template:
    metadata:
      labels:
        app: webstore
        team: dev-team  # +operator-builder:field:name=teamName,type=string
    spec:
      containers:
      - name: webstore-container
        image: nginx:1.17  # +operator-builder:field:name=webStoreImage,type=string
        ports:
        - containerPort: 8080

These markers should always be provided as an in-line comment or as a head comment. The marker always begins with +operator-builder:field: or +operator-builder:collection:field:. See Markers to learn more.
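
For example, the in-line marker on the container image above could equivalently be written as a head comment on the line preceding the field it configures:

      containers:
      - name: webstore-container
        # +operator-builder:field:name=webStoreImage,type=string
        image: nginx:1.17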

Step 3

Operator Builder uses a workload configuration to provide important details for your operator project. This guide uses a standalone workload. Save this file to your .source-manifests directory.

# .source-manifests/workload.yaml
name: webstore
kind: StandaloneWorkload
spec:
  api:
    domain: acme.com
    group: apps
    version: v1alpha1
    kind: WebStore
    clusterScoped: false
  companionCliRootcmd:
    name: webstorectl
    description: Manage webstore application
  resources:
  - app.yaml

For a standalone workload the kind must be StandaloneWorkload. The name is arbitrary and can be whatever you like.

In the spec, the following fields are required:

  • api.domain: This must be a globally unique name that will not be used by other organizations or groups. It will contain groups of API types.
  • api.group: This is a logical group of API types used as a namespacing mechanism for your APIs.
  • api.version: Provide the initial version for your API.
  • api.kind: The name of the API type that will represent the workload you are managing with this operator.
  • resources: An array of filenames where your static manifests live. List the relative path from the workload manifest to all the files that contain the static manifests we talked about in step 2.

For more info about API groups, versions and kinds, check out the Kubebuilder docs.

The following fields in the spec are optional:

  • api.clusterScoped: If your workload includes cluster-scoped resources like namespaces, this will need to be true. The default is false.
  • companionCliRootcmd: If you wish to generate source code for a companion CLI for your operator, include this field. We recommend you do. Your end users will appreciate it.
    • name: The root command your end users will type when using the companion CLI.
    • description: The general information your end users will get if they use the help subcommand of your companion CLI.

At this point in our example, our .source-manifests directory looks as follows:

tree .source-manifests

.source-manifests
├── app.yaml
└── workload.yaml

Our StandaloneWorkload config is in workload.yaml and the Deployment, Ingress and Service manifests are in app.yaml and referenced under spec.resources in our StandaloneWorkload config.

We are now ready to generate our project's source code.

Step 4

We first use the init command to create the general scaffolding. We run this command from the root of our repo and provide a single argument with the path to our workload config.

operator-builder init \
    --workload-config .source-manifests/workload.yaml

With the basic project set up, we can now run the create api command to create a new custom API for our workload.

operator-builder create api \
    --workload-config .source-manifests/workload.yaml \
    --controller \
    --resource

We again provide the same workload config file. Here we also added the --controller and --resource arguments. These indicate that we want both a new controller and new custom resource created.

You now have a new working Kubernetes Operator! Next, we will test it out.

Step 5

Assuming you have a kubeconfig in place that allows you to interact with your cluster with kubectl, you are ready to go.

First, install the new custom resource definition (CRD).

make install

Now we can run the controller locally to test it out.

make run

Operator Builder created a sample manifest in the config/samples directory. For this example it looks like this:

apiVersion: apps.acme.com/v1alpha1
kind: WebStore
metadata:
  name: webstore-sample
spec:
  webStoreReplicas: 2
  webStoreImage: nginx:1.17
  teamName: dev-team

You will notice the fields and values in the spec were derived from the markers you added to your static manifests.

Next, in another terminal, create a new instance of your workload with the provided sample manifest.

kubectl apply -f config/samples/

You should see your custom resource sample get created. Now use kubectl to inspect your cluster to confirm the workload's resources got created. You should find all the resources that were defined in your static manifests.

kubectl get all

Clean up by stopping your controller with ctrl-c in that terminal and then remove all the resources you just created.

make uninstall

Step 6

Now let's deploy your controller into the cluster.

First export an environment variable for your container image.

export IMG=myrepo/acme-webstore-mgr:0.1.0

Run the rest of the commands in this step in the same terminal, as most of them will need this IMG env var.

In order to run the controller in-cluster (as opposed to running locally with make run) we will need to build a container image for it.

make docker-build

Now we can push it to a registry that is accessible from the test cluster.

make docker-push

Finally, we can deploy it to our test cluster.

make deploy

Next, perform the same tests from step 5 to ensure proper operation of our operator.

kubectl apply -f config/samples/

Again, verify that all the resources you expect are created.

Once satisfied, remove the instance of your workload.

kubectl delete -f config/samples/

For now, leave the controller running in your test cluster. We'll use it in Step 7.

Step 7

Now let's build and test the companion CLI.

You will have a make target that includes the name of your CLI. For this example it is:

make build-webstorectl

We can view the help info as follows.

./bin/webstorectl help

Your end users can use it to create a new custom resource manifest.

./bin/webstorectl init > /tmp/webstore.yaml

If you would like to change any of the default values, edit the file.

vim /tmp/webstore.yaml

Then you can apply it to the cluster.

kubectl apply -f /tmp/webstore.yaml

If your end users find they wish to make changes to the resources that aren't supported by the operator, they can generate the resources from the custom resource.

./bin/webstorectl generate --workload-manifest /tmp/webstore.yaml

This will print the resources to stdout. These may be piped into an overlay tool or written to disk and modified before applying to a cluster.
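
For example, if no modifications are needed, the output can be piped straight back into kubectl:

./bin/webstorectl generate --workload-manifest /tmp/webstore.yaml | kubectl apply -f -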

That's it! You have a working operator without having manually written a single line of code. If you'd like to make any changes to your workload's API, you'll find the code in the apis directory. The controller's source code is in the controllers directory. And the companion CLI code is in cmd.

Don't forget to clean up. Remove the controller, CRD and the workload's resources as follows.

make undeploy

For more information, check out the Operator Builder docs as well as the Kubebuilder docs.

Workload Collections

Operator Builder can generate source code for operators that manage multiple workloads. See workload collections for more info.

Licensing

Operator Builder can help manage licensing for the resulting project. More info here.

Testing

Testing of Operator Builder is documented here.

Comments
  • Support multiple collections per cluster

    Today we have no mechanism to tie components to collections and so are enforcing one collection per cluster.

    It will be a common requirement to support multiple collections so we'll need to add something like the collectionRef to the components:

    apiVersion: ingress.acme.com/v1alpha1
    kind: Contour
    metadata:
      name: contour-sample
    spec:
      EnvoyImage: nginx:1.17
      namespace: ingress-system
      contourReplicas: 3
      contourImage: nginx:1.17
      collectionRef:
        kind: CloudNativePlatform
        apiVersion: platforms.acme.com/v1alpha2
        name: cloudnativeplatform-sample
    
  • Allow for multiple instances of a resource to be created with a CreateFunc

    Consider the following custom resource. When this resource is created, the controller would create two CronJobs to run at the scheduled time and scale the sample-deploy Deployment to the given number of replicas:

    apiVersion: autoscaling.containers.myorg.com/v1alpha1
    kind: PodScalingSchedule
    metadata:
      name: sample
      namespace: pss
    spec:
      targetReference:
        name: sample-deploy
        kind: Deployment
        apiVersion: apps/v1
      scaleOperations:
      - schedule: "54 12 * * *"
        replicas: "2"
      - schedule: "55 12 * * *"
        replicas: "5"
    

    So we will need to create a CronJob resource for each object in the spec.scaleOperations array.

    We could possibly use a source manifest with a marker that indicates that field name scaleOperations needs to be an array. See example of proposed marker on line one:

    # +operator-builder:field:name=scaleOperations,contains=array
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: sample-scaling-schedule  # +operator-builder:field:name=targetReference.name,type=string
    spec:
      schedule: "27 16 * * *"  # +operator-builder:field:name=scaleOperations.schedule,type=string
      jobTemplate:
        spec:
          template:
            spec:
              serviceAccount: sample-scaling-schedule  # +operator-builder:field:name=targetReference.name,type=string,replace="sample"
              containers:
              - name: sample-scaling-schedule  # +operator-builder:field:name=targetReference.name,type=string,replace="sample"
                image: lander2k2/deploy-update:0.1.0
                imagePullPolicy: IfNotPresent
                command:
                - bash
                args:
                - /update-deployment.sh
                - deployment  # +operator-builder:field:name=targetReference.kind,type=string
                - curfew-sample  # +operator-builder:field:name=targetReference.name,type=string
                - "2"  # +operator-builder:field:name=scaleOperations.replicas,type=string
    

    And then we would need to generate a CreateFunc that returns an array of client.Object and looks something like this:

    // CreateCronJobTargetReferenceName creates the CronJob resources.
    func CreateCronJobTargetReferenceName(
    	parent *autoscalingv1alpha1.PodScalingSchedule,
    	transferLabels map[string]string,
    	updateImage string,
    ) ([]client.Object, error) {
    	resourceObjects := []client.Object{}
    
    	for n, scaleOp := range parent.Spec.ScaleOperations {
    		var resourceObj = &unstructured.Unstructured{
    			Object: map[string]interface{}{
    				"apiVersion": "batch/v1beta1",
    				"kind":       "CronJob",
    				"metadata": map[string]interface{}{
    					"name":   "" + parent.Spec.TargetReference.Name + "-scaling-schedule-" + strconv.Itoa(n),
    					"labels": transferLabels,
    				},
    				"spec": map[string]interface{}{
    					"schedule": scaleOp.Schedule,
    					"jobTemplate": map[string]interface{}{
    						"metadata": map[string]interface{}{
    							"labels": transferLabels,
    						},
    						"spec": map[string]interface{}{
    							"template": map[string]interface{}{
    								"metadata": map[string]interface{}{
    									"labels": transferLabels,
    								},
    								"spec": map[string]interface{}{
    									"restartPolicy":  "OnFailure",
    									"serviceAccount": "" + parent.Spec.TargetReference.Name + "-scaling-schedule",
    									"containers": []interface{}{
    										map[string]interface{}{
    											"name":            "" + parent.Spec.TargetReference.Name + "-scaling-schedule",
    											"image":           updateImage,
    											"imagePullPolicy": "IfNotPresent",
    											"command": []interface{}{
    												"bash",
    											},
    											"args": []interface{}{
    												"/update-replicas.sh",
    												strings.ToLower(parent.Spec.TargetReference.Kind),
    												parent.Spec.TargetReference.Name,
    												scaleOp.Replicas,
    											},
    										},
    									},
    								},
    							},
    						},
    					},
    				},
    			},
    		}
    
    		resourceObj.SetNamespace(parent.Namespace)
    
    		resourceObjects = append(resourceObjects, resourceObj)
    	}
    
    	return resourceObjects, nil
    }
    
  • bug: `cross-namespace owner references are disallowed`

    It appears that namespaced resources (any resources that are set to clusterScoped: false) are not properly inheriting their namespace from the parent resource.

    We are overriding the namespace for child resources when the parent has its namespace set at: https://github.com/vmware-tanzu-labs/operator-builder/blob/main/internal/plugins/workload/v1/scaffolds/templates/api/resources/definition.go#L77

    However, we are seeing errors in the logs:

    2021-11-07T08:54:02.263Z        ERROR   controller.webstore     Reconciler error        {"reconciler group": "apps.acme.com", "reconciler kind": "WebStore", "name": "webstore-sample", "namespace": "test-create-parent", "error": "cross-namespace owner references are disallowed, owner's namespace test-create-parent, obj's namespace default"}
    

    To duplicate:

    kubectl create ns test
    kubectl apply -f config/samples/* -n test
    

    You can also see this error on the resource which you've created:

    kubectl get <myresourcetype> -n test -o yaml
    
    ...
    status:
      conditions:
      - lastModified: 2021-11-07 09:02:46.729276 +0000 UTC
        message: Successfully Completed Phase
        phase: DependencyPhase
        state: Complete
      - lastModified: 2021-11-07 09:02:46.7349354 +0000 UTC
        message: Successfully Completed Phase
        phase: PreFlightPhase
        state: Complete
      - lastModified: 2021-11-07 09:02:46.7438233 +0000 UTC
        message: Failed Phase with Error; cross-namespace owner references are disallowed,
          owner's namespace test-create-parent, obj's namespace default
        phase: CreateResourcesPhase
        state: Failed
      resources:
      - condition:
          created: false
          lastModified: 2021-11-07 09:02:46.7395297 +0000 UTC
          lastResourcePhase: WaitForResourcePhase
    
  • bug(multi-version): main.go not updated

    Main.go is not currently updated (overwritten) with the changes specified; however, we are asking to overwrite the file at https://github.com/vmware-tanzu-labs/operator-builder/blob/main/internal/plugins/workload/v1/scaffolds/templates/main.go#L60.

  • Add stub-phases to controller reconciliation

    Add the following stub-phases to controller reconciliation:

    Controller Phases:

    • PostFlight - the very last thing that runs

    Resource Phases:

    • PreCreateResources - the ResourcePhase that runs before doing anything else
    • PostCreateResources - the last ResourcePhase that runs, after everything else

    This will address the internal conversation that we had regarding "how do I make my controller more extensible".

  • Nice to have: `defaultNamespace` specifically related to when using `clusterScoped: false`

    The generated config/samples come with no metadata.namespace. It would be nice to supply a default one when using CRDs that are able to be namespaced. That way they don't just get created in the default k8s namespace.
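
    For illustration, a generated sample for this guide's webstore example might then look like this (the namespace name here is hypothetical):

    apiVersion: apps.acme.com/v1alpha1
    kind: WebStore
    metadata:
      name: webstore-sample
      namespace: webstore-system  # hypothetical default supplied instead of the cluster's default namespace
    spec:
      webStoreReplicas: 2
      webStoreImage: nginx:1.17
      teamName: dev-team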

  • Domain Spec field on workload collection isn't being used

    When initializing a project with operator-builder, the domain spec field should allow the operator-builder command to be run without the --domain flag. However, it fails with a missing go.mod.

    Spec:

    name: cloud-native-platform
    kind: WorkloadCollection
    spec:
      domain: acme.com
      apiGroup: platforms
      apiVersion: v1alpha1
      apiKind: CloudNativePlatform
      clusterScoped: true
      companionCliRootcmd:
        name: cnpctl
        description: Manage platform stuff like a boss
      componentFiles:
      - ns-operator-component.yaml
      - contour-component.yaml
    

    Error:

    TEST_PATH=/tmp/test TEST_SCRIPT=platform.sh make test
    go build -o bin/operator-builder cmd/main.go
    cp bin/operator-builder /usr/local/bin/operator-builder
    mkdir /tmp/test/.test
    cp test/platform.sh /tmp/test/.test/
    (cd /tmp/test; ./.test/platform.sh)
    Error: failed to initialize project: unable to inject the configuration to "base.go.kubebuilder.io/v3": error finding current repository: could not determine repository path from module data, package data, or by initializing a module: go: cannot determine module path for source directory /tmp/test (outside GOPATH, module path must be specified)
    
    Example usage:
            'go mod init example.com/m' to initialize a v0 or v1 module
            'go mod init example.com/m/v2' to initialize a v2 module
    
    Run 'go help mod init' for more information.
    

    The only way around that is to manually initialize the go.mod.

  • fix: companion-cli, fixes #234 which allows the latest version to be updated when creating a new API

    The following is changed with this PR:

    • Fixes #234 which fixes a bug in the companion CLI which will not allow a file to be both overwritten and updated at the same time.
    • Move logic from the above into the apis/ folder
    • Refactor API scaffolding to be more readable (from 360 lines of code down to 281 - also allows for lots of unnecessary logic to be removed from scaffold files as well)
  • fix: companion-cli, fixes #185, fixes #194 and allows multi-kind across groups

    This PR addresses the following:

    • Restructures the companion CLI to use the same folder structure as that of the companion CLI command structure to make it easier to follow
    • Moves the subcommands within their own group folder, much like the controller folder to avoid collisions with the same kind existing across multiple groups
    • Adds in the version subcommand to allow for users to list versions of their components as well as their own CLI version
    • Adds in the ability to generate manifests for multiple different api versions
    • Various refactors for inefficiencies
  • Remove limitation of 1 standalone workload per cluster

    Currently if we create two different instances of a standalone workload in a single cluster, we get the following error:

    2021-11-11T14:44:52.861-0500    INFO    controllers.apps.WebStore       expected only 1 resource of kind: [WebStore]; found 2
    

    I think the standalone workloads accidentally inherited this limitation from the workload collections.

  • feat: allow `resources` field to point to a directory rather than an individual file

    Given this tree:

    operator-builder/
    ├── resource3.yaml
    └── resources
        ├── resource1.yaml
        └── resource2.yaml
    

    And this configuration:

    name: webstore
    kind: StandaloneWorkload
    spec:
      api:
        domain: acme.com
        group: apps
        version: v1alpha1
        kind: WebStore
        clusterScoped: false
      companionCliRootcmd:
        name: webstorectl
        description: Manage webstore stuff like a boss
      resources:
      - operator-builder/resources
      - operator-builder/resource3.yaml
    

    We would consume the following files:

    operator-builder/resources/resource1.yaml
    operator-builder/resources/resource2.yaml
    operator-builder/resource3.yaml
    
  • feat: replace text with missing replace results in hard-coded values

    When using the replace function of a marker, if the replace string is missing from the field's value, the values become hard-coded.

    Given the following markers:

        # +operator-builder:field:name=google.region,type=string,replace=GOOGLE_LOCATION,default=us-east1
        # +operator-builder:field:name=clusterName,type=string,replace=GOOGLE_KMS_KEY_RING,default=cluster-key-ring
        # +operator-builder:field:name=clusterName,type=string,replace=GOOGLE_KMS_KEY,default=cluster-key
        # +operator-builder:collection:field:name=google.project,type=string,replace=GOOGLE_PROJECT,default=my-project
        keyName: projects/my-project/locations/us-east1/keyRings/cluster-key-ring/cryptoKeys/cluster-key
        state: ENCRYPTED
    

    Would result in:

        keyName: "projects/my-project/locations/us-east1/keyRings/cluster-key-ring/cryptoKeys/cluster-key",
        state: "ENCRYPTED"
    

    We should error out and let the user know that they have made a mistake with their markers.

  • bug: replace marker with integer results in bad code

    Consider the following marker:

      # +operator-builder:collection:field:name=google.projectNumber,type=int,default=12345,replace=PROJECT_NUMBER,description=`
      # +kubebuilder:validation:Required
      # Project number for the google.project that is being used.`
      member: serviceAccount:[email protected]
    

    Results in the following error:

    /home/scottd018/VSCode/github/project/operator/bin/controller-gen "crd:preserveUnknownFields=false,crdVersions=v1,trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
    /home/scottd018/VSCode/github/project/operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
    go fmt ./...
    go vet ./...
    # github.com/project/operator/apis/infra/v1alpha1/cluster
    apis/infra/v1alpha1/cluster/google_kms_key.go:141:41: invalid operation: "serviceAccount:service-" + collection.Spec.Google.ProjectNumber (mismatched types untyped string and int)
    make: *** [Makefile:50: vet] Error 2
    
  • feat: add ability to add in standalone workloads to an existing operator

    Situation:

    • I have a working operator with a collection/component configuration
    • I want to add a standalone workload to this operator
    • I run the operator-builder create api command to add the standalone workload
    • Because the CLI is initially scaffolded during init, it errors out
    • Making the companionCliRootcmd match the existing command will work, but the pathing is wonky (mycmd generate generate, because a standalone workload expects a root command and not a subcommand)

    We should be able to add a StandaloneWorkload to an existing operator without requiring that it exists on its own.

  • feat: specify ready condition for a resource

    Not all resources are core Kubernetes resources, and because we manage dependencies, we need to know when certain conditions of a resource are met. Because of this, we should allow users to specify ready conditions for their resources. A good example of this is within the operator-builder project itself using the status.ready = True concept.

    Loose requirements are:

    1. This is a marker for each resource.
    2. Might look something like this:
    +operator-builder:resource:readyCondition:field=status.ready,condition=true
    
  • feat: inline resources

    Add the ability to specify resources inline with the operator-builder workload configuration. This obviously is not to encourage someone to type out an entire deployment in their workload config; rather, it would be useful for something like a Namespace resource where one may need to be created, but it is not desirable to manage a separate file for that resource. That would change the spec for resources to:

    resources:
      - inline: |
          apiVersion: v1
          kind: Namespace
          metadata:
            name: my-namespace # +operator-builder:field:name=myField,default=my-namespace,type=string
      - file: path/to/another/resource.yaml
    

    We should also maintain backwards compatibility and assume that the absence of a file or inline key would default to it being a file. E.g. this would still be valid:

    resources:
      - path/to/legacy/resource.yaml
    