NiFiKop, the NiFi Kubernetes operator, makes it easy to run Apache NiFi on Kubernetes.


NiFiKop

You can access the full documentation on the NiFiKop Documentation site.

The Konpyūtāika NiFi operator is a Kubernetes operator to automate provisioning, management, autoscaling and operations of Apache NiFi clusters deployed to K8s.

Overview

Apache NiFi is an open-source solution that supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic. Building on those capabilities, some of the main features of NiFiKop are:

  • Fine-grained node configuration support
  • Graceful rolling upgrade
  • Graceful NiFi cluster scaling
  • Encrypted communication using SSL
  • Provisioning of secure NiFi clusters
  • Advanced dataflow and user management via CRD (see the example below)
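
For example, user management is declarative: a NifiUser resource describes the user and the cluster it belongs to. The sketch below is based on the sample that appears in the comments further down this page (the identity, names, and namespace are placeholders; the API group shown matches that older sample, while newer releases use the nifi.konpyutaika.com group):

    apiVersion: nifi.orange.com/v1alpha1
    kind: NifiUser
    metadata:
      name: exampleadmin
    spec:
      identity: exampleadmin@example.com
      clusterRef:
        name: simplenifi
        namespace: nifi
      createCert: true
      includeJKS: true
      secretName: exampleadmin-secrets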

Some of the roadmap features:

  • Monitoring via Prometheus
  • Automatic reaction and self-healing based on alerts (plugin system, with meaningful default alert plugins)
  • Graceful NiFi cluster scaling and rebalancing

Motivation

There are already some approaches to operating NiFi on Kubernetes, however, we did not find them appropriate for use in a highly dynamic environment, nor capable of meeting our needs.

Finally, our motivation is to build an open source solution and a community which drives the innovation and features of this operator.

Installation

To get up and running quickly, check our Getting Started page.
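
For a first look at what the operator manages, a NifiCluster resource similar to the simplenificluster sample referenced later on this page might look like the following (the Zookeeper address, image tag, and storage class are illustrative values, not defaults):

    apiVersion: nifi.konpyutaika.com/v1
    kind: NifiCluster
    metadata:
      name: simplenifi
    spec:
      service:
        headlessEnabled: true
      zkAddress: "zookeeper.zookeeper:2181"
      zkPath: /simplenifi
      clusterImage: "apache/nifi:1.15.3"
      nodeConfigGroups:
        default_group:
          isNode: true
          serviceAccountName: default
          storageConfigs:
            - mountPath: "/opt/nifi/nifi-current/logs"
              name: logs
              pvcSpec:
                accessModes:
                  - ReadWriteOnce
                storageClassName: "standard"
                resources:
                  requests:
                    storage: 1Gi
      nodes:
        - id: 1
          nodeConfigGroup: "default_group"
        - id: 2
          nodeConfigGroup: "default_group"
      listenersConfig:
        internalListeners:
          - containerPort: 8080
            type: http
            name: http
          - containerPort: 6007
            type: cluster
            name: cluster
          - containerPort: 10000
            type: s2s
            name: s2s

Once applied in a namespace watched by the operator, NiFiKop creates one pod per entry in the nodes list and wires the nodes to the given Zookeeper quorum.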

Development

Check out the Developer page.

Features

Check out the Supported Features page.

Issues, feature requests and roadmap

Please note that the NiFi operator is constantly under development and new releases might introduce breaking changes. We are striving to keep backward compatibility as much as possible while adding new features at a fast pace. Issues, new features, and bugs are tracked on the project's GitHub page - please feel free to add yours!

To track some of the significant features and future items from the roadmap please visit the roadmap doc.

Contributing

If you find this project useful here's how you can help:

  • Send a pull request with your new features and bug fixes
  • Help new users with issues they may encounter
  • Support the development of this project and star this repo!

Community

If you have any questions about the NiFi operator, and would like to talk to us and the other members of the community, please join our Slack.

If you find this project useful, help us:

  • Support the development of this project and star this repo!
  • If you use the NiFi operator in a production environment, add yourself to the list of production adopters. 🤘
  • Help new users with issues they may encounter 💪
  • Send a pull request with your new features and bug fixes 🚀

Credits

License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Comments
  • Support for 1.14.0

    Support for 1.14.0

    From nifikop created by riccardo-salamanna: Orange-OpenSource/nifikop#141

    Feature Request

    Testing of nifikop with version 1.14.0 has been unsuccessful.

    Describe the solution you'd like to see It would be nice to have that support, since the release fixes some important bugs for us.

    Many thanks

  • mount/use existing pvc on nifi nodes

    mount/use existing pvc on nifi nodes

    From nifikop created by teplydat: Orange-OpenSource/nifikop#39

    Type of question

    Are you asking about community best practices, how to implement a specific feature, or about general context and help around nifikop? Best practice: how to mount an existing PVC on NiFi.

    Question

    What did you do?

    At first I created a pvc:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: fstp-pvc
      namespace: usecase
      labels:
        pvc: fstp
    spec:
      storageClassName: "ceph-fs-storage"
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
    

    Then I tried to mount it via labels through the nificlusters.nifi.orange.com:

    ...
        storageConfigs:
          - mountPath: "/opt/fstp"
            name: fstp-pvc
            pvcSpec:
              accessModes:
                - ReadWriteMany
              selector:
                matchLabels:
                  pvc: fstp
    ...
    

    What did you expect to see? Nifi mounts the existing pvc.

    What did you see instead? Under which circumstances?

    No nifi node is scheduled by the operator.

    logs from the operator:

    PersistentVolumeClaim \"nifi-0-storagebb7tt\" is invalid: spec.resources[storage]: Required value","Request.Namespace":"usecase","Request.Name":"nifi"}
    
    {"level":"error","ts":1603277145.6576192,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"nificluster-controller","request":"usecase/nifi","error":"failed to reconcile resource: creating resource failed: PersistentVolumeClaim \"nifi-0-storagebb7tt\" is invalid: spec.resources[storage]: Required value","errorVerbose":"creating resource failed: PersistentVolumeClaim \"nifi-0-storagebb7tt\" is invalid: spec.resources[storage]: Required value\nfailed to reconcile 
    
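    For reference, the error above is about the pvcSpec itself: the operator creates the PVC from it as written, so it needs a resources.requests.storage value even when a selector points at an existing volume. A pvcSpec including the missing request might look like this (the size is a placeholder):

    storageConfigs:
      - mountPath: "/opt/fstp"
        name: fstp-pvc
        pvcSpec:
          accessModes:
            - ReadWriteMany
          resources:
            requests:
              storage: 1Gi
          selector:
            matchLabels:
              pvc: fstp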

    Environment

    • nifikop version:

    image: orangeopensource/nifikop:v0.2.0-release

    • Kubernetes version information:

    v1.16.7

    • Kubernetes cluster kind:

    nificlusters.nifi.orange.com

    • NiFi version:

    1.11.4

  • Deploying simplenificluster does nothing

    Deploying simplenificluster does nothing

    From nifikop created by Docteur-RS: Orange-OpenSource/nifikop#166

    Type of question

    Requesting help on strange behavior

    Question

    Hey,

    Followed the installation tutorial from the documentation, but after the last step nothing happened.

    What I did :

    # Installing zookeeper (in default namespace...)
    
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install zookeeper bitnami/zookeeper \
        --set resources.requests.memory=256Mi \
        --set resources.requests.cpu=250m \
        --set resources.limits.memory=256Mi \
        --set resources.limits.cpu=250m \
        --set global.storageClass=standard \
        --set networkPolicy.enabled=true \
        --set replicaCount=3
    
    # Installing certmanager
    
    kubectl apply -f \
        https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml
    
    # Installing nifikops
    
    helm repo add orange-incubator https://orange-kubernetes-charts-incubator.storage.googleapis.com/
    
    kubectl create ns nifi
    
    helm install nifikop \
        orange-incubator/nifikop \
        --namespace=nifi \
        --version 0.7.5 \
        --set image.tag=v0.7.5-release \
        --set resources.requests.memory=256Mi \
        --set resources.requests.cpu=250m \
        --set resources.limits.memory=256Mi \
        --set resources.limits.cpu=250m \
        --set namespaces={"nifi"}
    

    Next, I didn't set up the custom storage class because it was optional.

    # Applying the simple nifi cluster
    
    ## Updated the StorageClassName to the one available inside my cluster
    
    kubectl create -n nifi -f config/samples/simplenificluster.yaml
    

    And now..... Nothing.

    Pods and logs

    kubectl get po -n default
    
    NAME          READY   STATUS    RESTARTS   AGE
    zookeeper-0   1/1     Running   0          40m
    zookeeper-1   1/1     Running   0          40m
    zookeeper-2   1/1     Running   0          40m
    
    kubectl get po -n nifi
    
    NAME                       READY   STATUS    RESTARTS   AGE
    nifikop-5d4c9b6d6d-mz922   1/1     Running   0          33m
    
     kubectl get nificluster
    NAME         AGE
    simplenifi   32m
    
    #Nifikop logs
    
    2021-11-29T17:27:07.597Z        INFO    setup   manager set up with multiple namespaces {"namespaces": "nifi"}
    2021-11-29T17:27:07.597Z        INFO    setup   Writing ready file.
    I1129 17:27:08.718212       1 request.go:655] Throttling request took 1.013748334s, request: GET:https://10.3.0.1:443/apis/apm.k8s.elastic.co/v1?timeout=32s
    2021-11-29T17:27:09.984Z        INFO    controller-runtime.metrics      metrics server is starting to listen    {"addr": ":8080"}
    2021-11-29T17:27:09.991Z        INFO    setup   starting manager
    2021-11-29T17:27:09.992Z        INFO    controller-runtime.manager      starting metrics server {"path": "/metrics"}
    2021-11-29T17:27:09.993Z        INFO    controller-runtime.manager.controller.nificluster       Starting EventSource    {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiCluster", "source": "kind source: /, Kind="}
    2021-11-29T17:27:09.993Z        INFO    controller-runtime.manager.controller.nifiuser  Starting EventSource    {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiUser", "source": "kind source: /, Kind="}
    2021-11-29T17:27:09.994Z        INFO    controller-runtime.manager.controller.nifiusergroup     Starting EventSource    {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiUserGroup", "source": "kind source: /, Kind="}
    2021-11-29T17:27:09.994Z        INFO    controller-runtime.manager.controller.nifidataflow      Starting EventSource    {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiDataflow", "source": "kind source: /, Kind="}
    2021-11-29T17:27:09.994Z        INFO    controller-runtime.manager.controller.nifiparametercontext      Starting EventSource    {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiParameterContext", "source": "kind source: /, Kind="}
    2021-11-29T17:27:09.994Z        INFO    controller-runtime.manager.controller.nifiregistryclient        Starting EventSource    {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiRegistryClient", "source": "kind source: /, Kind="}
    2021-11-29T17:27:09.995Z        INFO    controller-runtime.manager.controller.nificluster       Starting EventSource    {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiCluster", "source": "kind source: /, Kind="}
    2021-11-29T17:27:10.180Z        INFO    controller-runtime.manager.controller.nifiusergroup     Starting Controller     {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiUserGroup"}
    2021-11-29T17:27:10.180Z        INFO    controller-runtime.manager.controller.nifiuser  Starting EventSource    {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiUser", "source": "kind source: /, Kind="}
    2021-11-29T17:27:10.181Z        INFO    controller-runtime.manager.controller.nifiregistryclient        Starting Controller     {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiRegistryClient"}
    2021-11-29T17:27:10.181Z        INFO    controller-runtime.manager.controller.nifiparametercontext      Starting Controller     {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiParameterContext"}
    2021-11-29T17:27:10.181Z        INFO    controller-runtime.manager.controller.nifidataflow      Starting Controller     {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiDataflow"}
    2021-11-29T17:27:10.284Z        INFO    controller-runtime.manager.controller.nifidataflow      Starting workers        {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiDataflow", "worker count": 1}
    2021-11-29T17:27:10.294Z        INFO    controller-runtime.manager.controller.nificluster       Starting EventSource    {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiCluster", "source": "kind source: /, Kind="}
    2021-11-29T17:27:10.381Z        INFO    controller-runtime.manager.controller.nificluster       Starting Controller     {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiCluster"}
    2021-11-29T17:27:10.381Z        INFO    controller-runtime.manager.controller.nifiusergroup     Starting workers        {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiUserGroup", "worker count": 1}
    2021-11-29T17:27:10.381Z        INFO    controller-runtime.manager.controller.nifiuser  Starting Controller     {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiUser"}
    2021-11-29T17:27:10.381Z        INFO    controller-runtime.manager.controller.nifiregistryclient        Starting workers        {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiRegistryClient", "worker count": 1}
    2021-11-29T17:27:10.381Z        INFO    controller-runtime.manager.controller.nifiparametercontext      Starting workers        {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiParameterContext", "worker count": 1}
    W1129 17:27:10.385667       1 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
    W1129 17:27:10.389258       1 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
    2021-11-29T17:27:10.394Z        INFO    controller-runtime.manager.controller.nificluster       Starting EventSource    {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiCluster", "source": "kind source: /, Kind="}
    2021-11-29T17:27:10.481Z        INFO    controller-runtime.manager.controller.nificluster       Starting workers        {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiCluster", "worker count": 1}
    2021-11-29T17:27:10.495Z        INFO    controller-runtime.manager.controller.nificluster       Starting EventSource    {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiCluster", "source": "kind source: /, Kind="}
    2021-11-29T17:27:10.582Z        INFO    controller-runtime.manager.controller.nifiuser  Starting workers        {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiUser", "worker count": 1}
    2021-11-29T17:27:10.596Z        INFO    controller-runtime.manager.controller.nificluster       Starting EventSource    {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiCluster", "source": "kind source: /, Kind="}
    2021-11-29T17:27:10.697Z        INFO    controller-runtime.manager.controller.nificluster       Starting EventSource    {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiCluster", "source": "kind source: /, Kind="}
    2021-11-29T17:27:10.798Z        INFO    controller-runtime.manager.controller.nificluster       Starting Controller     {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiCluster"}
    2021-11-29T17:27:10.798Z        INFO    controller-runtime.manager.controller.nificluster       Starting workers        {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiCluster", "worker count": 1}
    W1129 17:34:30.395634       1 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
    W1129 17:42:32.401953       1 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
    W1129 17:50:11.406944       1 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
    W1129 17:56:23.413301       1 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
    

    Environment

    • nifikop version:

    Helm chart version : 0.7.5

    • Kubernetes version information:

    Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:10:45Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:04:16Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}

    • Kubernetes cluster kind: Managed by OVH

    Please advise...

    Thx !

  • Flows do not persist pod restart

    Flows do not persist pod restart

    From nifikop created by andrew-musoke: Orange-OpenSource/nifikop#201

    Type of question

    Are you asking about community best practices, how to implement a specific feature, or about general context and help around nifikop? General help with NifiKop.

    Question

    What did you do? I deployed NiFi with 2 pods via NiFiKop. After creating a flow in the UI, I exported the process groups to a nifi-registry as well. The cluster ran for days. This is the CR I used. I then deleted the cluster pods to test resilience.

    apiVersion: nifi.orange.com/v1alpha1
    kind: NifiCluster
    metadata:
      name: simplenifi
      namespace: dataops
    spec:
      service:
        headlessEnabled: true
      zkAddress: "zookeeper.dataops.svc.cluster.local.:2181"
      zkPath: "/simplenifi"
      clusterImage: "apache/nifi:1.12.1"
      oneNifiNodePerNode: false
      nodeConfigGroups:
        default_group:
          isNode: true
          imagePullPolicy: IfNotPresent
          storageConfigs:
            - mountPath: "/opt/nifi/nifi-current/logs"
              name: logs
              pvcSpec:
                accessModes:
                  - ReadWriteOnce
                storageClassName: "gp2"
                resources:
                  requests:
                    storage: 10Gi
          serviceAccountName: "default"
          resourcesRequirements:
            limits:
              cpu: "0.5"
              memory: 2Gi
            requests:
              cpu: "0.5"
              memory: 2Gi
      clientType: "basic"
      nodes:
        - id: 1
          nodeConfigGroup: "default_group"
        - id: 2
          nodeConfigGroup: "default_group"
      propagateLabels: true
      nifiClusterTaskSpec:
        retryDurationMinutes: 10
      listenersConfig:
        internalListeners:
          - type: "http"
            name: "http"
            containerPort: 8080
          - type: "cluster"
            name: "cluster"
            containerPort: 6007
          - type: "s2s"
            name: "s2s"
            containerPort: 10000
    

    What did you expect to see? I expected the cluster to run properly and survive restarts since PVs are created. I expected to see the pipelines continue running after the pods started up.

    What did you see instead? Under which circumstances? When the pods came back up and were healthy, the UI had no flows or process groups. The registry configuration had also disappeared. I have to manually re-register the nifi-registry, re-import the process groups, add the secrets and restart the pipelines.

    1. Why would this happen when Nifi has persistent volumes?
    2. How can this behaviour be stopped?
    3. How can I persist the flows, or at least automate the re-importing and restarting of pipelines from nifi-registry? (See the sketch below.)
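
    For context on question 3: the CR above only mounts a PVC for logs, so the flow definition and repositories still live on ephemeral container storage. A sketch of additional storageConfigs that persist those directories, mirroring the mount paths used in the sslnifi example further down this page (storage class and sizes are assumptions):

    storageConfigs:
      - mountPath: "/opt/nifi/data"
        name: data
        pvcSpec:
          accessModes:
            - ReadWriteOnce
          storageClassName: "gp2"
          resources:
            requests:
              storage: 10Gi
      - mountPath: "/opt/nifi/nifi-current/conf"
        name: conf
        pvcSpec:
          accessModes:
            - ReadWriteOnce
          storageClassName: "gp2"
          resources:
            requests:
              storage: 10Gi
      - mountPath: "/opt/nifi/flowfile_repository"
        name: flowfile-repository
        pvcSpec:
          accessModes:
            - ReadWriteOnce
          storageClassName: "gp2"
          resources:
            requests:
              storage: 10Gi
      # content_repository and provenance_repository can be added the same way,
      # as shown in the sslnifi example later on this page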

    Environment

    • nifikop version: v0.7.5-release

    • Kubernetes version information:

     Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.11-eks-f17b81", GitCommit:"f17b810c9e5a82200d28b6210b458497ddfcf31b", GitTreeState:"clean", BuildDate:"2021-10-15T21:46:21Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
    
    • NiFi version:

    apache/nifi:1.12.1

  • Simplenifi cluster is running but inaccessible

    Simplenifi cluster is running but inaccessible

    From nifikop created by tmarkunin: Orange-OpenSource/nifikop#69

    Bug Report

    What did you do? I've installed simple nifi cluster following https://orange-opensource.github.io/nifikop/docs/2_setup/1_getting_started

    What did you expect to see? Running nifi cluster with 2 nodes accessible through web UI

    NAME                           READY   STATUS        RESTARTS   AGE
    pod/nifikop-586867994d-lkmgc   1/1     Running       0          6h56m
    pod/nifikop-586867994d-pvnmn   0/1     Terminating   0          25h
    pod/simplenifi-1-nodew5925     1/1     Running       0          6h52m
    pod/simplenifi-2-nodegt8rh     1/1     Running       0          22h
    pod/zookeeper-0                1/1     Running       1          6h52m
    pod/zookeeper-1                1/1     Running       1          6h52m
    pod/zookeeper-2                1/1     Running       1          6h52m

    What did you see instead? Under which circumstances? The UI is not accessible through the svc service/simplenifi-all-node. Moreover, I failed to curl http://localhost:8080 from inside a container:

    $ curl http://localhost:8080/nifi
    curl: (7) Failed to connect to localhost port 8080: Connection refused

    Environment

    • nifikop version: 0.5.1

    • Kubernetes version information:

    1.18

    • Kubernetes cluster kind: Yandex cloud
  • Deploying Secure Cluster on AKS

    Deploying Secure Cluster on AKS

    From nifikop created by borkod: Orange-OpenSource/nifikop#21

    Bug Report

    Hello. This is a very interesting project 👍

    I am trying to follow https://orange-opensource.github.io/nifikop/blog/secured_nifi_cluster_on_gcp/, but deploying it on Azure Kubernetes Service.

    I've deployed:

    • AKS cluster
    • zookeeper
    • cert-manager and issuer
    • storage class with WaitForFirstConsumer (and updated the yaml file)
    • registered a client with openid provider (using KeyCloak)

    I've updated the nifi cluster resource yaml file with appropriate values from above.

    When I try to deploy it, I don't see any pod resources being created at all.

    Any suggestions? What's the best way to debug why no pods are even being created? kubectl describe on the nificluster resource doesn't provide any useful information.

    I was able to deploy a working cluster on AKS using simple nifi cluster sample (not secured).

    Thanks for any suggestions and help!

  • [Feature] Migration to v1 resources

    [Feature] Migration to v1 resources

    | Q | A |
    | --------------- | --- |
    | Bug fix? | no |
    | New feature? | no |
    | API breaks? | no |
    | Deprecations? | no |
    | Related tickets | |
    | License | Apache 2.0 |

    What's in this PR?

    Prepare version 1.0.0 with the new version v1 for stable CRDs. Migrate the package for resource definition and enable the migration webhook.

    Why?

    To do things the right way :D

    Checklist

    • [x] Implementation tested
    • [X] Error handling code meets the guideline
    • [X] Logging code meets the guideline
    • [x] User guide and development docs updated (if needed)
    • [x] Append changelog with changes

    TODO

    • [x] Helm chart: Webhook integration
    • [x] Update documentation with v1 resources
    • [x] Update sample with v1 resources
  • Change dataflow and nificluster controllers to avoid changing status on every reconciliation loop

    Change dataflow and nificluster controllers to avoid changing status on every reconciliation loop

    | Q | A |
    | --------------- | --- |
    | Bug fix? | yes |
    | New feature? | no |
    | API breaks? | no |
    | Deprecations? | no |
    | Related tickets | fixes #119 |
    | License | Apache 2.0 |

    What's in this PR?

    Tweaks NifiCluster and NifiDataflow controllers to avoid changing their respective statuses on every reconciliation loop. Instead, the controllers will only change the status to ready/done and log any actions taken thereafter and they will respect their reconciliation interval settings (15s by default). Previously to this, the controllers would reconcile many times per second, which drastically increases the load on the control plane and on NiFi itself.

    Additionally, I changed the logger timestamp encoder to use ISO8601 instead of epoch time, so it is human-readable.

    Why?

    Issue #119 does an excellent job of explaining why this is an issue.

    Checklist

    • [x] Implementation tested
    • [x] Error handling code meets the guideline
    • [x] Logging code meets the guideline
    • [x] User guide and development docs updated (if needed)
    • [x] Append changelog with changes
  • Users and groups are not getting created in sslnifi cluster

    Users and groups are not getting created in sslnifi cluster

    From nifikop created by Sreenivas-Ratakonda: Orange-OpenSource/nifikop#179

    Bug Report

    After setting up the sslnifi cluster, I found that the managed users are not getting created. According to the docs, we need one admin user to log in to the NiFi cluster UI, but that user is not getting created in the NifiCluster. The docs also say that three groups get created by default (managed admins, managed users, managed nodes), but for me the NiFi user groups are not getting created.

    What did you do? I have created an sslnifi cluster.

    apiVersion: nifi.orange.com/v1alpha1
    kind: NifiCluster
    metadata:
      name: sslnifi
    spec:
      service:
        headlessEnabled: false
      zkAddress: "zookeeper.zookeeper.svc.cluster.local:2181"
      zkPath: "/ssllnifi"
      clusterImage: "apache/nifi:1.12.1"
      oneNifiNodePerNode: false
      managedAdminUsers:
        -  identity : "[email protected]"
           name: "nifiadmin"
      managedReaderUsers:
        -  identity : "[email protected]"
           name: "nifiuser"
      propagateLabels: true
      nifiClusterTaskSpec:
        retryDurationMinutes: 10
      readOnlyConfig:
        # NifiProperties configuration that will be applied to the node.
        nifiProperties:
          webProxyHosts:
            - nifistandard2.trycatchlearn.fr:8443
    
      nodeConfigGroups:
        default_group:
          isNode: true
          storageConfigs:
            - mountPath: "/opt/nifi/nifi-current/logs"
              name: logs
              pvcSpec:
                accessModes:
                  - ReadWriteOnce
                storageClassName: "gp2"
                resources:
                  requests:
                    storage: 10Gi
            - mountPath: "/opt/nifi/data"
              name: data
              pvcSpec:
                accessModes:
                  - ReadWriteOnce
                storageClassName: "gp2"
                resources:
                  requests:
                    storage: 10Gi
            - mountPath: "/opt/nifi/flowfile_repository"
              name: flowfile-repository
              pvcSpec:
                accessModes:
                  - ReadWriteOnce
                storageClassName: "gp2"
                resources:
                  requests:
                    storage: 10Gi
            - mountPath: "/opt/nifi/nifi-current/conf"
              name: conf
              pvcSpec:
                accessModes:
                  - ReadWriteOnce
                storageClassName: "gp2"
                resources:
                  requests:
                    storage: 10Gi
            - mountPath: "/opt/nifi/content_repository"
              name: content-repository
              pvcSpec:
                accessModes:
                  - ReadWriteOnce
                storageClassName: "gp2"
                resources:
                  requests:
                    storage: 10Gi
            - mountPath: "/opt/nifi/provenance_repository"
              name: provenance-repository
              pvcSpec:
                accessModes:
                  - ReadWriteOnce
                storageClassName: "gp2"
                resources:
                  requests:
                    storage: 10Gi
          serviceAccountName: "default"
          resourcesRequirements:
            limits:
              cpu: "0.5"
              memory: 2Gi
            requests:
              cpu: "0.5"
              memory: 2Gi
      nodes:
        - id: 1
          nodeConfigGroup: "default_group"
          readOnlyConfig:
            nifiProperties:
              overrideConfigs: |        
                  nifi.ui.banner.text=Ciena Blueplanet Enterprise Node SSL 1
                  nifi.remote.input.socket.port=
                  nifi.remote.input.secure=true
                  nifi.remote.input.host=xxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.elb.amazonaws.com
        - id: 2
          nodeConfigGroup: "default_group"
          readOnlyConfig:
            nifiProperties:
              overrideConfigs: |
                  nifi.ui.banner.text=Ciena Blueplanet Enterprise Node SSL 2
                  nifi.remote.input.socket.port=
                  nifi.remote.input.secure=true
                  nifi.remote.input.host=xxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.elb.amazonaws.com
        - id: 3
          nodeConfigGroup: "default_group"
          readOnlyConfig:
            nifiProperties:
              overrideConfigs: |
                  nifi.ui.banner.text=Ciena Blueplanet Enterprise Node SSL 3
                  nifi.remote.input.socket.port=
                  nifi.remote.input.secure=true
                  nifi.remote.input.host=xxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.elb.amazonaws.com
      listenersConfig:
        internalListeners:
          - type: "https"
            name: "https"
            containerPort: 8443     
          - type: "cluster"
            name: "cluster"
            containerPort: 6007
          - type: "s2s"
            name: "s2s"
            containerPort: 10000
        sslSecrets:
          tlsSecretName: "test-nifikop"
          create: true
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sslnifi-all
    spec:
      selector:
        app: nifi 
        nifi_cr: sslnifi
      ports:
      - name: https
        port: 8443
        protocol: TCP
        targetPort: 8443
      type: LoadBalancer
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sslnifi-1-node-lb
    spec:
      selector:
        app: nifi 
        nifi_cr: sslnifi
        nodeId: "1"
      ports:
      - name: https
        port: 8443
        protocol: TCP
        targetPort: 8443
      type: LoadBalancer
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sslnifi-2-node-lb
    spec:
      selector:
        app: nifi 
        nifi_cr: sslnifi
        nodeId: "2"
      ports:
      - name: https
        port: 8443
        protocol: TCP
        targetPort: 8443
      type: LoadBalancer
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sslnifi-3-node-lb
    spec:
      selector:
        app: nifi 
        nifi_cr: sslnifi
        nodeId: "3"
      ports:
      - name: https
        port: 8443
        protocol: TCP
        targetPort: 8443
      type: LoadBalancer
    
    ---
    apiVersion: nifi.orange.com/v1alpha1
    kind: NifiUser
    metadata:
      name: bpeadmin
    spec:
      identity: [email protected]
      clusterRef:
        name: sslnifi
        namespace: nifi
      createCert: true
      includeJKS: true
      secretName: bpeadmin_secrets
    
    
    

    What did you expect to see?

    We expected the managed users to be created, but those users are not created in the NiFi cluster. I have also created another user, bpeadmin; when I query nifikop it says the user is created, but the user is not created in the NiFi cluster. A few NiFi user groups also need to be created.

    What did you see instead? Under which circumstances?

    Below we can see that the managed users mentioned in the NifiCluster config are not created.

    Here it says that the bpeadmin user is created, but the authorizers.xml file I have included below contains no bpeadmin user.

    Users created in the Nifi Cluster config

    $ k get nifiusers.nifi.orange.com -n nifi
    NAME                                        AGE
    bpeadmin                                    18h
    sslnifi-1-node.nifi.svc.cluster.local       18h
    sslnifi-2-node.nifi.svc.cluster.local       18h
    sslnifi-3-node.nifi.svc.cluster.local       18h
    sslnifi-controller.nifi.mgt.cluster.local   18h
    

    authorizers.xml file in one of the nodes.

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <authorizers>
        <userGroupProvider>
            <identifier>file-user-group-provider</identifier>
            <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
            <property name="Users File">../data/users.xml</property>
            <property name="Legacy Authorized Users File"></property>
            <property name="Initial User Identity admin">sslnifi-controller.nifi.mgt.cluster.local</property>
            <property name="Initial User Identity 1">sslnifi-1-node.nifi.svc.cluster.local</property>
            <property name="Initial User Identity 2">sslnifi-2-node.nifi.svc.cluster.local</property>
            <property name="Initial User Identity 3">sslnifi-3-node.nifi.svc.cluster.local</property>
        </userGroupProvider>
        <accessPolicyProvider>
            <identifier>file-access-policy-provider</identifier>
            <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
            <property name="User Group Provider">file-user-group-provider</property>
            <property name="Authorizations File">../data/authorizations.xml</property>
            <property name="Initial Admin Identity">sslnifi-controller.nifi.mgt.cluster.local</property>
            <property name="Legacy Authorized Users File"></property>
            <property name="Node Identity 1">sslnifi-1-node.nifi.svc.cluster.local</property>
            <property name="Node Identity 2">sslnifi-2-node.nifi.svc.cluster.local</property>
            <property name="Node Identity 3">sslnifi-3-node.nifi.svc.cluster.local</property>
    		<property name="Node Group"></property>
        </accessPolicyProvider>
        <authorizer>
            <identifier>managed-authorizer</identifier>
            <class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
            <property name="Access Policy Provider">file-access-policy-provider</property>
        </authorizer>
    </authorizers>
    

    Detailed view at the bpeadmin user

    $ k describe  nifiusers.nifi.orange.com/bpeadmin -n nifi
    Name:         bpeadmin
    Namespace:    nifi
    Labels:       <none>
    Annotations:  banzaicloud.com/last-applied:
                    UEsDBBQACAAIAAAAAAAAAAAAAAAAAAAAAAAIAAAAb3JpZ2luYWyUk8GO0zAQht9lzk7bZHdb8AkJiQMgDrC7BwiHqT0po3Ucy3ZWWlV5d2SnSVPURXBp49HM+Pd8/xwBHT+SD9xZkG...
    API Version:  nifi.orange.com/v1alpha1
    Kind:         NifiUser
    Metadata:
      Creation Timestamp:  2021-12-27T12:16:18Z
      Generation:          2
      Managed Fields:
        API Version:  nifi.orange.com/v1alpha1
        Fields Type:  FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              f:banzaicloud.com/last-applied:
          f:status:
            .:
            f:id:
            f:version:
        Manager:      manager
        Operation:    Update
        Time:         2021-12-27T12:16:18Z
        API Version:  nifi.orange.com/v1alpha1
        Fields Type:  FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              .:
              f:kubectl.kubernetes.io/last-applied-configuration:
          f:spec:
            .:
            f:clusterRef:
              .:
              f:name:
              f:namespace:
            f:createCert:
            f:identity:
            f:includeJKS:
            f:secretName:
        Manager:         kubectl-client-side-apply
        Operation:       Update
        Time:            2021-12-27T13:01:37Z
      Resource Version:  65379941
      Self Link:         /apis/nifi.orange.com/v1alpha1/namespaces/nifi/nifiusers/bpeadmin
      UID:               7a7b71ed-2a12-466d-9f5c-073c6b42e3a7
    Spec:
      Cluster Ref:
        Name:       sslnifi
        Namespace:  nifi
      Create Cert:  true
      Identity:     [email protected]
      Include JKS:  true
      Secret Name:  bpeadmin_secrets
    Events:
      Type    Reason                  Age                 From       Message
      ----    ------                  ----                ----       -------
      Normal  ReconcilingCertificate  13m (x86 over 18h)  nifi-user  Reconciling certificate for nifi user bpeadmin
    
    

    No Nifi Groups Found

    $ kubectl get -n nifi nifiusergroups.nifi.orange.com
    No resources found in nifi namespace.
    
    

    So to summarize, there is a conflict between what we see in k get nifiusers.nifi.orange.com -n nifi and authorizers.xml: one says the bpeadmin user is created, but the other does not have the bpeadmin user in authorizers.xml.

    **So overall, the users are not getting created in the NiFi cluster.**

    Environment

    • nifikop version: Followed exact steps here: https://orange-opensource.github.io/nifikop/docs/2_setup/1_getting_started

    • Kubernetes version information:

    $ k version
    Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"windows/amd64"}
    Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.13-eks-8df270", GitCommit:"8df2700a72a2598fa3a67c05126fa158fd839620", GitTreeState:"clean", BuildDate:"2021-07-31T01:36:57Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
    
    • NiFi version:

    1.12.1


  • Added NifiNodeGroupAutoscaler with basic scaling strategies

    Added NifiNodeGroupAutoscaler with basic scaling strategies

    | Q | A |
    | --------------- | --- |
    | Bug fix? | no |
    | New feature? | yes |
    | API breaks? | no |
    | Deprecations? | no |
    | Related tickets | |
    | License | Apache 2.0 |

    What's in this PR?

    This PR contains an implementation of the autoscaling design voted on here: https://konpytika.slack.com/archives/C035FHN1MNG/p1650031443231799

    It adds:

    • A new custom resource called NifiNodeGroupAutoscaler
    • Adds Nodes.Spec.Labels so that nodes in a NifiCluster can be tagged with arbitrary labels, specifically so that the NifiNodeGroupAutoscaler can identify which nodes to manage
    • A new controller for this custom resource. Its reconciliation loop is as follows:
      • Fetch the current NifiCluster.Spec.Nodes list and filter it down to the nodes to manage using the provided NifiNodeGroupAutoscaler.Spec.NodeLabelsSelector
      • Compare the managed node list with the current NifiNodeGroupAutoscaler.Spec.Replicas setting and determine if scaling needs to happen. If the # replicas > # managed nodes, then scale up. If the # replicas < # managed nodes, then scale down. Else do nothing.
      • If a scale event happened, patch the NiFiCluster.Spec.Nodes list with the added/removed nodes & update the scale subresource status fields.
    • A NifiNodeGroupAutoscaler can manage any subset of nodes in a NifiCluster up to the entire cluster. With this, you can enable highly resourced (mem/cpu) node groups for volume bursts or to just autoscale entire clusters driven by metrics.
    • I significantly reduced the verbosity of the logs by default, since it was difficult to track what the operator was actually doing. In general, I think this should be continued to the point where the operator only logs what actions it has taken or changes it has made in k8s.

    I don't necessarily consider this PR complete, which is why it's in draft status. See the additional context below. However, I have tested this on a live system and it does work.

    Why?

    To enable horizontal scaling of nifi clusters.

    Additional context

    There are a few scenarios that need to be addressed prior to merging this:

    • On a scale up event, when the NifiNodeGroupAutoscaler adds nodes to the NifiCluster.Spec.Nodes list, the nifi cluster controller correctly adds a new pod to the deployment. However, when that node comes completely up and is Running in k8s, nifikop kills it, kicks off a RollingUpgrade, and basically restarts the new node. This occurs here, but I'm not exactly sure what causes it. Scaling down happens "gracefully".

    • It's not possible to deploy a NifiCluster with only autoscaled node groups. The NifiCluster CRD requires that you specify at least one node in the spec.nodes list. Do we want to support this? If so, we may need to adjust the cluster initialization logic in the NifiCluster controller.

    I did all of my testing via ArgoCD. When the live state differs from the state in git, ArgoCD immediately reverts it so I had to craft my applications carefully to avoid ArgoCD undoing the changes that the HorizontalPodAutoscaler and NifiNodeGroupAutoscaler controllers were making.

    I've tested scaling up and down and successfully configured the HorizontalPodAutoscaler to pull nifi metrics from Prometheus. Here's the autoscaler config that I used for that setup:

    apiVersion: nifi.konpyutaika.com/v1alpha1
    kind: NifiNodeGroupAutoscaler
    metadata:
      labels:
        argocd.argoproj.io/instance: as-nf
      name: as-nf-scale-group
      namespace: nifi
    spec:
      clusterRef:
        name: as-nf
        namespace: nifi
      nodeConfigGroupId: scale-group
      nodeLabelsSelector:
        matchLabels:
          scale_me: 'true'
      upscaleStrategy: simple
      downscaleStrategy: lifo
    

    And the nifi_amount_items_queued_sum metric is added to the prometheus-adapter deployment as follows:

    prometheus-adapter:
      rules:
        custom:
        - seriesQuery: 'nifi_amount_items_queued'
          resources:
            # skip specifying generic resource<->label mappings, and just
            # attach only pod and namespace resources by mapping label names to group-resources
            overrides:
              kubernetes_namespace:
                resource: "namespace"
              kubernetes_pod_name: 
                resource: "pod"
          name:
            matches: "^(.*)"
            as: "${1}_sum"
          metricsQuery: (sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>))
    
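    The HorizontalPodAutoscaler manifest itself is not shown above; a sketch of one targeting the NifiNodeGroupAutoscaler's scale subresource through that custom metric might look like this (the min/max replicas and the target value are assumptions):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: as-nf-scale-group
      namespace: nifi
    spec:
      scaleTargetRef:
        apiVersion: nifi.konpyutaika.com/v1alpha1
        kind: NifiNodeGroupAutoscaler
        name: as-nf-scale-group
      minReplicas: 1
      maxReplicas: 5
      metrics:
        - type: Pods
          pods:
            metric:
              name: nifi_amount_items_queued_sum
            target:
              type: AverageValue
              averageValue: "10000"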

    Checklist

    • [x] Implementation tested
    • [x] Error handling code meets the guideline
    • [x] Logging code meets the guideline
    • [x] User guide and development docs updated (if needed)
    • [x] Append changelog with changes
  • Improve the pod health checks to monitor cluster status

    Improve the pod health checks to monitor cluster status

    What steps will reproduce the bug?

    If a node disconnects from a nifi cluster, it'll stay disconnected until I manually delete the pod. I get this error regularly:

    Action cannot be performed because there is currently no Cluster Coordinator elected. The request should be tried again after a moment, after a Cluster Coordinator has been automatically elected.

    What is the expected behavior?

    The pod should be restarted to rejoin the cluster

    What do you see instead?

    Disconnected nodes that don't recover

    Possible solution

    Change the pod readiness check to hit /nifi-api/flow/cluster/summary?
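
    For illustration, a readiness probe along those lines could look like the following in the node pod spec (a sketch only; the HTTP port and thresholds are assumptions):

    readinessProbe:
      httpGet:
        path: /nifi-api/flow/cluster/summary
        port: 8080
      initialDelaySeconds: 60
      periodSeconds: 20
      failureThreshold: 3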

    NiFiKop version

    v0.14.0-release

    Golang version

    1.19

    Kubernetes version

    v1.23.6-rke2r2

    NiFi version

    1.16.0

    Additional context

    No response

  • Make NiFi pod readiness and liveness checks configurable

    Make NiFi pod readiness and liveness checks configurable

    | Q | A |
    | --------------- | --- |
    | Bug fix? | no |
    | New feature? | yes |
    | API breaks? | no |
    | Deprecations? | no |
    | Related tickets | fixes #219 |
    | License | Apache 2.0 |

    What's in this PR?

    This makes the NiFi Pod readiness and liveness checks configurable. These are new optional fields, so if they aren't overridden, then the current defaults are still used as before.

    I've added this to both v1 and v1alpha1 API versions.

    Why?

    See #219 for details.

    Checklist

    • [x] Implementation tested
    • [x] Error handling code meets the guideline
    • [x] Logging code meets the guideline
    • [x] User guide and development docs updated (if needed)
    • [x] Append changelog with changes
  • Make liveness and readiness checks configurable

    Make liveness and readiness checks configurable

    Is your feature request related to a problem?

    Yes. We're running into the issue described in this thread occasionally and unpredictably: https://www.mail-archive.com/[email protected]/msg14909.html

    When this happens, the only fix is to restart the NiFi pods; restarting Zookeeper has no effect. The condition can be detected by querying the NiFi REST API, so we can work around it by tweaking the liveness check. However, the check is not currently configurable, so I'm requesting that it be made configurable while keeping the current configuration as the default.

    Describe the solution you'd like to see

    Make the readiness and liveness checks configurable and expose the configuration in NifiCluster.Spec.PodPolicy.
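
    For illustration, the requested configuration might look something like this on a NifiCluster (the field names under pod are hypothetical; only the placement in NifiCluster.Spec.PodPolicy comes from this request):

    spec:
      pod:
        # hypothetical field names, shown only to illustrate the requested shape
        readinessProbe:
          httpGet:
            path: /nifi-api/flow/cluster/summary
            port: 8080
          periodSeconds: 20
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 90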

    Describe alternatives you've considered

    Writing an external CronJob, but that's just a band-aid in place of a proper solution until the problem is resolved in NiFi itself.

    Additional context

    No response

  • Add cross-platform docker builds

    Add cross-platform docker builds

    | Q | A |
    | --------------- | --- |
    | Bug fix? | no |
    | New feature? | yes |
    | API breaks? | no |
    | Deprecations? | no |
    | Related tickets | fixes #207 |
    | License | Apache 2.0 |

    What's in this PR?

    PR #205 made cross-platform docker image builds possible. This PR publishes the nifikop docker image as a cross-platform (multi-arch) image, following the docs below:

    • https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/
    • https://sdk.operatorframework.io/docs/advanced-topics/multi-arch/#manifest-lists

    Why?

    To enable running nifikop on multiple architectures.

    Checklist

    • [x] Implementation tested
    • [x] Error handling code meets the guideline
    • [x] Logging code meets the guideline
    • [x] User guide and development docs updated (if needed)
    • [x] Append changelog with changes
  • Removed hardcoded default-scheduler

    Removed hardcoded default-scheduler

    Signed-off-by: Michal Keder [email protected]

    | Q | A |
    | --------------- | --- |
    | Bug fix? | yes |
    | New feature? | no |
    | API breaks? | no |
    | Deprecations? | no |
    | Related tickets | N/A |
    | License | Apache 2.0 |

    What's in this PR?

    I removed the hardcoded SchedulerName from the pod definition, since when no scheduler name is supplied, the pod is automatically scheduled using the default-scheduler.

    Why?

    When trying to deploy a NifiCluster to a GKE Autopilot cluster or a GKE cluster with Autoscaling with optimize-utilization profile, scheduler is changed to gke.io/optimize-utilization-scheduler via a mutating webhook (https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#autoscaling_profiles). This leads to endless recreation of Nifi pods, because the patch result is never empty: https://github.com/konpyutaika/nifikop/blob/389bab8cce20fa57136e567333e07ae25947a71f/pkg/resources/nifi/nifi.go#L654 https://github.com/konpyutaika/nifikop/blob/389bab8cce20fa57136e567333e07ae25947a71f/pkg/resources/nifi/nifi.go#L682-L688 https://github.com/konpyutaika/nifikop/blob/389bab8cce20fa57136e567333e07ae25947a71f/pkg/resources/nifi/nifi.go#L730

    Additional context

    To test, I used a GKE Autopilot cluster with cert-manager, zookeeper and nifikop installed using the commands below:

    helm install cert-manager jetstack/cert-manager \
    	--create-namespace --namespace cert-manager \
    	--set installCRDs=true \
    	--set global.leaderElection.namespace=cert-manager 
    helm install zookeeper bitnami/zookeeper \
    	--create-namespace --namespace zookeeper \
    	--set resources.requests.memory=256Mi \
    	--set resources.requests.cpu=250m \
    	--set resources.limits.memory=256Mi \
    	--set resources.limits.cpu=250m \
    	--set global.storageClass=standard-rwo \
    	--set networkPolicy.enabled=true
    helm install nifikop oci://ghcr.io/konpyutaika/helm-charts/nifikop \
    	--create-namespace --namespace nifi \
    	--set image.repository=my.gcr.repo \
    	--set image.tag=mytag \
            --set resources.requests.memory=256Mi \
            --set resources.requests.cpu=250m \
            --set resources.limits.memory=256Mi \
            --set resources.limits.cpu=250m \
            --set namespaces={"nifi"}
    

    Then I applied the NifiCluster from the samples (https://github.com/konpyutaika/nifikop/blob/389bab8cce20fa57136e567333e07ae25947a71f/config/samples/simplenificluster.yaml) with the StorageClass set to standard-rwo and additional requests and limits for ephemeral-storage (required by GKE Autopilot restrictions):

    apiVersion: nifi.konpyutaika.com/v1
    kind: NifiCluster
    metadata:
      name: simplenifi
    spec:
      service:
        headlessEnabled: true
        labels:
          cluster-name: simplenifi
      zkAddress: "zookeeper.zookeeper:2181"
      zkPath: /simplenifi
      externalServices:
        - metadata:
            labels:
              cluster-name: driver-simplenifi
          name: driver-ip
          spec:
            portConfigs:
              - internalListenerName: http
                port: 8080
            type: LoadBalancer
      clusterImage: "apache/nifi:1.15.3"
      initContainerImage: 'bash:5.2.2'
      oneNifiNodePerNode: true
      readOnlyConfig:
        nifiProperties:
          overrideConfigs: |
            nifi.sensitive.props.key=thisIsABadSensitiveKeyPassword
      pod:
        labels:
          cluster-name: simplenifi
      nodeConfigGroups:
        default_group:
          imagePullPolicy: IfNotPresent
          isNode: true
          serviceAccountName: default
          storageConfigs:
            - mountPath: "/opt/nifi/nifi-current/logs"
              name: logs
              pvcSpec:
                accessModes:
                  - ReadWriteOnce
                storageClassName: "standard-rwo"
                resources:
                  requests:
                    storage: 1Gi
          resourcesRequirements:
            limits:
              cpu: "0.5"
              memory: 2Gi
              ephemeral-storage: 4Gi
            requests:
              cpu: "0.5"
              memory: 2Gi
              ephemeral-storage: 4Gi
      nodes:
        - id: 1
          nodeConfigGroup: "default_group"
        - id: 2
          nodeConfigGroup: "default_group"
      propagateLabels: true
      nifiClusterTaskSpec:
        retryDurationMinutes: 10
      listenersConfig:
        internalListeners:
          - containerPort: 8080
            type: http
            name: http
          - containerPort: 6007
            type: cluster
            name: cluster
          - containerPort: 10000
            type: s2s
            name: s2s
          - containerPort: 9090
            type: prometheus
            name: prometheus
          - containerPort: 6342
            type: load-balance
            name: load-balance
    

    Checklist

    • [x] Implementation tested
    • [ ] Error handling code meets the guideline
    • [ ] Logging code meets the guideline
    • [ ] User guide and development docs updated (if needed)
    • [ ] Append changelog with changes

    To Do

    • [ ] If the PR is not complete but you want to discuss the approach, list what remains to be done here
  • [CI] - Add support docker image for other platforms

    [CI] - Add support docker image for other platforms

    Is your feature request related to a problem?

    There is no docker image built and published for other platforms.

    Describe the solution you'd like to see

    Add steps in CircleCI to build and push the docker image, using the new makefile command introduced in https://github.com/konpyutaika/nifikop/pull/205 (see the sketch below).
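
    A rough sketch of what such a CircleCI job could look like (the job name, executor image, and make target are hypothetical; the real command is the one introduced in PR 205):

    version: 2.1
    jobs:
      build-and-push-multiarch:
        docker:
          - image: cimg/go:1.19   # hypothetical executor image
        steps:
          - checkout
          - setup_remote_docker
          - run:
              name: Build and push multi-arch image
              command: |
                docker buildx create --use
                make docker-buildx   # hypothetical target; use the command introduced in PR 205
    workflows:
      release:
        jobs:
          - build-and-push-multiarch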

    Describe alternatives you've considered

    No response

    Additional context

    No response

  • Add CI Linter to ensure formatting and Apache license

    Add CI Linter to ensure formatting and Apache license

    Is your feature request related to a problem?

    We're not currently continuously linting the code base and we're not enforcing the presence of the apache license header in each source file.

    Describe the solution you'd like to see

    Add a CI job which performs golang linting and ensures the presence of the apache license header in each source file.

    Describe alternatives you've considered

    No response

    Additional context

    No response
