Cluster API Provider for VMware Cloud Director.

Kubernetes Cluster API Provider Cloud Director

Overview

The Cluster API brings declarative, Kubernetes-style APIs to cluster creation, configuration, and management. Cluster API Provider for Cloud Director is a concrete implementation of the Cluster API for VMware Cloud Director.

Documentation

Contributing

The cluster-api-provider-cloud-director project team welcomes contributions from the community. Before you start working with cluster-api-provider-cloud-director, please refer to CONTRIBUTING.md.

License

Apache-2.0

Comments
  • harbor-repo.vmware.com not accessible

    Describe the bug

    I saw that the registry on main was changed to harbor-repo.vmware.com.

    For me this address is not accessible. My question is: do I have to build and push these images myself, or will they be made public?

    Sorry, I'm still a bit new to this, but the tag says it's in beta. Does that mean the main version is still under testing and not ready for production? I ask because this is also the transition from alpha to beta in the Kubernetes API version, which is a bit confusing. Upgrading would then be possible with clusterctl move, I guess? Sorry for the extra questions.

    Thanks in advance!

    Reproduction steps

    1. Execute clusterctl init with the main version; the pods report an image pull error.
    2. The registry address is also not reachable.
    

    Expected behavior

    The image harbor-repo.vmware.com/vcloud/cluster-api-provider-cloud-director:main-branch should be pullable; currently the registry is not reachable.

    Additional context

    No response
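
    As a possible workaround while the images are not public, clusterctl supports image overrides in its configuration file, so the provider images can be pulled from a registry you mirror them to. A minimal sketch, assuming you have mirrored the images yourself (the repository below is a placeholder):

        # clusterctl configuration file (typically ~/.cluster-api/clusterctl.yaml;
        # the exact location can vary by clusterctl version)
        images:
          all:
            # Placeholder: a registry you control that mirrors the provider images
            repository: registry.example.com/mirror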

  • Loadbalancer not reconciled successfully after pivoting

    Describe the bug

    After pivoting from a bootstrap cluster to a VCD cluster, the capvcd-controller attempts to create a second virtual service and pool instead of using the existing resources created from the bootstrap cluster. The controller then errors because it cannot add the control plane endpoint IP as a member of the new pool, since it already exists as a member of the original pool.

    Reproduction steps

    1. Create a bootstrap cluster with the CAPVCD controller built from commit 003336f85f44312252553dc1369faf409e18c10b
    2. Pivot to the VCD cluster
    3. Watch the controller logs for the following error:
    I0725 13:37:38.263497       1 gateway.go:70] Obtained Gateway [vDC 73640 Firewall] for Network Name [capvcd-192.168.52.0] of type [NSXT_FLEXIBLE_SEGMENT]
    1.6587562583347037e+09	INFO	controller.vcdcluster	Creating load balancer for the cluster at user-specified endpoint	{"reconciler group": "infrastructure.cluster.x-k8s.io", "reconciler kind": "VCDCluster", "name": "gnu", "namespace": "org-giantswarm", "host": "178.170.32.54", "port": 6443}
    I0725 13:37:38.334736       1 gateway.go:1314] Using provided IP [178.170.32.54]
    I0725 13:37:38.334742       1 gateway.go:1391] Using VIP [178.170.32.54] for virtual service
    I0725 13:37:38.433006       1 gateway.go:181] Using service engine group [&{STD-LB urn:vcloud:serviceEngineGroup:ff547027-ffb4-4107-ab4e-632a4ecc47f3}] on gateway [vDC 73640 Firewall]
    I0725 13:37:38.560491       1 gateway.go:676] LoadBalancer Pool [gnu-NO_RDE_0eb6b51f-ca72-4de2-8679-f2ea0ab33817-tcp] already exists
    1.658756258738859e+09	ERROR	controller.vcdcluster	Reconciler error	{"reconciler group": "infrastructure.cluster.x-k8s.io", "reconciler kind": "VCDCluster", "name": "gnu", "namespace": "org-giantswarm", "error": "Error creating create load balancer [gnu-NO_RDE_0eb6b51f-ca72-4de2-8679-f2ea0ab33817] for the cluster [gnu]: [unable to create virtual service; expected http response [202], obtained [400]: resp: [&http.Response{Status:\"400 Bad Request\", StatusCode:400, Proto:\"HTTP/1.1\", ProtoMajor:1, ProtoMinor:1, Header:http.Header{\"Cache-Control\":[]string{\"no-store, must-revalidate\"}, \"Content-Type\":[]string{\"application/json\"}, \"Date\":[]string{\"Mon, 25 Jul 2022 13:37:38 GMT\"}, \"X-Vmware-Vcloud-Ceip-Id\":[]string{\"615be85f-ab52-4989-9ba9-170efef7b206\"}, \"X-Vmware-Vcloud-Request-Execution-Time\":[]string{\"106\"}, \"X-Vmware-Vcloud-Request-Id\":[]string{\"ce503ec8-2a00-4c71-a9ae-6117f59a0971\"}}, Body:(*http.bodyEOFSignal)(0xc0003f5080), ContentLength:-1, TransferEncoding:[]string{\"chunked\"}, Close:false, Uncompressed:false, Trailer:http.Header(nil), Request:(*http.Request)(0xc00014fe00), TLS:(*tls.ConnectionState)(0xc0004da630)}]: [400 Bad Request]: [{\"minorErrorCode\":\"BAD_REQUEST\",\"message\":\"[ ce503ec8-2a00-4c71-a9ae-6117f59a0971 ] Overlapping subnets detected for existing virtual service virtual IP address 178.170.32.54 and 178.170.32.54.\",\"stackTrace\":null}]]: unable to create virtual service; expected http response [202], obtained [400]: resp: [&http.Response{Status:\"400 Bad Request\", StatusCode:400, Proto:\"HTTP/1.1\", ProtoMajor:1, ProtoMinor:1, Header:http.Header{\"Cache-Control\":[]string{\"no-store, must-revalidate\"}, \"Content-Type\":[]string{\"application/json\"}, \"Date\":[]string{\"Mon, 25 Jul 2022 13:37:38 GMT\"}, \"X-Vmware-Vcloud-Ceip-Id\":[]string{\"615be85f-ab52-4989-9ba9-170efef7b206\"}, \"X-Vmware-Vcloud-Request-Execution-Time\":[]string{\"106\"}, \"X-Vmware-Vcloud-Request-Id\":[]string{\"ce503ec8-2a00-4c71-a9ae-6117f59a0971\"}}, Body:(*http.bodyEOFSignal)(0xc0003f5080), ContentLength:-1, TransferEncoding:[]string{\"chunked\"}, Close:false, Uncompressed:false, Trailer:http.Header(nil), Request:(*http.Request)(0xc00014fe00), TLS:(*tls.ConnectionState)(0xc0004da630)}]: [400 Bad Request]: [{\"minorErrorCode\":\"BAD_REQUEST\",\"message\":\"[ ce503ec8-2a00-4c71-a9ae-6117f59a0971 ] Overlapping subnets detected for existing virtual service virtual IP address 178.170.32.54 and 178.170.32.54.\",\"stackTrace\":null}]", "errorVerbose": "unable to create virtual service; expected http response [202], obtained [400]: resp: [&http.Response{Status:\"400 Bad Request\", StatusCode:400, Proto:\"HTTP/1.1\", ProtoMajor:1, ProtoMinor:1, Header:http.Header{\"Cache-Control\":[]string{\"no-store, must-revalidate\"}, \"Content-Type\":[]string{\"application/json\"}, \"Date\":[]string{\"Mon, 25 Jul 2022 13:37:38 GMT\"}, \"X-Vmware-Vcloud-Ceip-Id\":[]string{\"615be85f-ab52-4989-9ba9-170efef7b206\"}, \"X-Vmware-Vcloud-Request-Execution-Time\":[]string{\"106\"}, \"X-Vmware-Vcloud-Request-Id\":[]string{\"ce503ec8-2a00-4c71-a9ae-6117f59a0971\"}}, Body:(*http.bodyEOFSignal)(0xc0003f5080), ContentLength:-1, TransferEncoding:[]string{\"chunked\"}, Close:false, Uncompressed:false, Trailer:http.Header(nil), Request:(*http.Request)(0xc00014fe00), TLS:(*tls.ConnectionState)(0xc0004da630)}]: [400 Bad Request]: [{\"minorErrorCode\":\"BAD_REQUEST\",\"message\":\"[ ce503ec8-2a00-4c71-a9ae-6117f59a0971 ] Overlapping subnets detected for existing virtual service virtual IP address 
178.170.32.54 and 178.170.32.54.\",\"stackTrace\":null}]\nError creating create load balancer [gnu-NO_RDE_0eb6b51f-ca72-4de2-8679-f2ea0ab33817] for the cluster [gnu]: [unable to create virtual service; expected http response [202], obtained [400]: resp: [&http.Response{Status:\"400 Bad Request\", StatusCode:400, Proto:\"HTTP/1.1\", ProtoMajor:1, ProtoMinor:1, Header:http.Header{\"Cache-Control\":[]string{\"no-store, must-revalidate\"}, \"Content-Type\":[]string{\"application/json\"}, \"Date\":[]string{\"Mon, 25 Jul 2022 13:37:38 GMT\"}, \"X-Vmware-Vcloud-Ceip-Id\":[]string{\"615be85f-ab52-4989-9ba9-170efef7b206\"}, \"X-Vmware-Vcloud-Request-Execution-Time\":[]string{\"106\"}, \"X-Vmware-Vcloud-Request-Id\":[]string{\"ce503ec8-2a00-4c71-a9ae-6117f59a0971\"}}, Body:(*http.bodyEOFSignal)(0xc0003f5080), ContentLength:-1, TransferEncoding:[]string{\"chunked\"}, Close:false, Uncompressed:false, Trailer:http.Header(nil), Request:(*http.Request)(0xc00014fe00), TLS:(*tls.ConnectionState)(0xc0004da630)}]: [400 Bad Request]: [{\"minorErrorCode\":\"BAD_REQUEST\",\"message\":\"[ ce503ec8-2a00-4c71-a9ae-6117f59a0971 ] Overlapping subnets detected for existing virtual service virtual IP address 178.170.32.54 and 178.170.32.54.\",\"stackTrace\":null}]]\ngithub.com/vmware/cluster-api-provider-cloud-director/controllers.(*VCDClusterReconciler).reconcileNormal\n\t/go/src/github.com/vmware/cluster-api-provider-cloud-director/controllers/vcdcluster_controller.go:620\ngithub.com/vmware/cluster-api-provider-cloud-director/controllers.(*VCDClusterReconciler).Reconcile\n\t/go/src/github.com/vmware/cluster-api-provider-cloud-director/controllers/vcdcluster_controller.go:129\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/src/github.com/vmware/cluster-api-provider-cloud-director/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/vmware/cluster-api-provider-cloud-director/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/vmware/cluster-api-provider-cloud-director/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/src/github.com/vmware/cluster-api-provider-cloud-director/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1581"}
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    	/go/src/github.com/vmware/cluster-api-provider-cloud-director/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
    	/go/src/github.com/vmware/cluster-api-provider-cloud-director/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227
    

    Expected behavior

    The existing virtual service and pool should be reconciled by the controller.

    Additional context

    No response

  • Support multiple NIC for machines

    Description

    Fixes https://github.com/vmware/cluster-api-provider-cloud-director/issues/235

    Checklist

    • [x] tested locally
    • [ ] updated any relevant dependencies
    • [ ] updated any relevant documentation or examples

    API Changes

    Are there API changes?

    • [x] Yes
    • [ ] No

    If yes, please fill in the below

    1. Updated conversions?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    2. Updated CRDs?
      • [x] Yes
      • [ ] No
      • [ ] N/A
    3. Updated infrastructure-components.yaml?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    4. Updated ./examples/capi-quickstart.yaml?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    5. Updated necessary files under ./infrastructure-vcd/v1.0.0/?
      • [ ] Yes
      • [ ] No
      • [ ] N/A

    Issue

    If applicable, please reference the relevant issue

    Fixes #



  • Workload cluster secret is wrong when deployed with secretRef

    Describe the bug

    When deploying a workload cluster with secretRef, the vcloud-basic-auth secret contains the wrong information, which means the CPI cannot connect to VCD and unset the node.cloudprovider.kubernetes.io/uninitialized: true taint on the nodes.

    The username field contains the base64-encoded org name followed by a forward slash; password and refreshToken are both empty.

    This points to the following:

    • Is UserCredentialsContext correct here? https://github.com/vmware/cluster-api-provider-cloud-director/blob/07f16d2d5c3663e739ae7e9bed0e0b9c73b6c186/controllers/vcdmachine_controller.go#L511-L512
    • Clusters use secretRef in userContext > secretRef. https://github.com/vmware/cluster-api-provider-cloud-director/blob/main/examples/capi-quickstart.yaml#L51

    When deploying a cluster without secretRef (as in 0.5.1) and with only the refreshToken, the refreshToken ends up correctly in the secret in the WC; however, the orgname/ is still in the username (not a blocker, but buggy).

    Reproduction steps

    1. Create a secret in the management cluster
    2. Deploy a WC with secretRef
    3. Change context to the WC and check the following:
    k describe pod -n kube-system vmware-cloud-director-ccm-b5d58cd57-nnrqk
    
    k get secrets -n kube-system vcloud-basic-auth -oyaml
    

    Expected behavior

    The vcloud-basic-auth secret contains the information specified in secretRef.
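
    For reference, a minimal sketch of a correctly populated secret (key names inferred from this report; all values are placeholders):

        apiVersion: v1
        kind: Secret
        metadata:
          name: vcloud-basic-auth
          namespace: kube-system
        stringData:
          username: capi-user      # no trailing "org/" prefix expected (assumption)
          password: ""
          refreshToken: REDACTED   # placeholder value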

    Additional context

    No response

  • Support for Private IPs (Not using Tier 0 ips) for Control Plane LB

    Is your feature request related to a problem? Please describe.

    Security requirements say you should place the control plane on private networks, not on a Tier-0 network routable from the internet.

    Describe the solution you'd like

    Be able to have the control plane LB only within the virtual network inside of VCD and not exposed on a Tier 0. This pattern is achieved with Private Endpoints or similar concepts on Azure, where the advice is to keep the control plane internal only and not expose it on the internet.
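
    For illustration, CAPVCD already accepts a user-specified control plane endpoint on the VCDCluster (see the "user-specified endpoint" log line in the pivoting issue above); a hypothetical spec using a private VIP might look as follows, though whether the gateway can serve such a VIP internally is exactly what this request asks for (the apiVersion is assumed):

        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: VCDCluster
        metadata:
          name: my-cluster
        spec:
          controlPlaneEndpoint:
            host: 10.10.0.50   # placeholder: a private VIP, not a Tier-0 address
            port: 6443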

    Describe alternatives you've considered

    Having provider-defined private IPs like 10.x.x.x on the Tier 0, but that might not work with things like S2S VPNs, etc.

    Additional context

    No response

  • VCDA-3012: Allow specification of ssh key from KCP

    In KCP, the ssh key can be specified under kubeadmConfigSpec as follows:

        users:
        - name: root
          sshAuthorizedKeys:
          - "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAHCCo7SxtKx7o1SE2BfNiNsA+irrLyVYwtlvL4TUase [email protected]"
    

    This results in the jinja script having the ssh key as part of the cloud-init config itself. However, the current way of converting the jinja script into a shell script and embedding it into the VCD cloud-init does not work. Hence I did the following:

    1. Parse the jinja and VCD cloud-init yaml files.
    2. Merge the yaml files.
    3. Treat runcmd slightly specially (embed the jinja runcmd into the VCD script); see the sketch after this list.
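
    A minimal sketch of the merged output under hypothetical inputs (all commands are placeholders; the point is only that the jinja runcmd is embedded among the VCD script's steps):

        # merged cloud-init (illustrative)
        users:
          - name: root
            ssh_authorized_keys:
              - "ssh-ed25519 AAAAC3... user@example.com"      # from the jinja document
        runcmd:
          - echo "vcd: pre-kubeadm setup"                     # from the VCD document
          - kubeadm init --config /run/kubeadm/kubeadm.yaml   # embedded jinja runcmd
          - echo "vcd: post-kubeadm steps"                    # from the VCD document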

    Testing: Created a cluster with my ssh key. After creating the DNAT rule, the control plane node got the ssh key and was accessible. The worker nodes did not get the ssh key and were consequently not accessible.



  • VCDA-3133: pass credentials through spec

    Pass username/password/refreshToken through the Spec. This, combined with Kubernetes RBAC, can be used to provide multi-tenant clusters.
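
    A minimal sketch of what this looks like on the VCDCluster, assuming the userContext > secretRef shape referenced in the secretRef bug report above (names and apiVersion are placeholders):

        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: VCDCluster
        metadata:
          name: my-cluster
        spec:
          userContext:
            secretRef:
              name: capi-user-credentials   # Secret holding username/password/refreshToken
              namespace: default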

    Also make the vcdmachine controller more reentrant.

    Also make some fields mandatory.



  • Remove redundant import, making the code base compilable again

    Description

    #254 erroneously re-introduced the import of io/ioutil, which had been removed previously. Go complains because the import has been unused since 73b198a, rendering the code base uncompilable.

    This commit removes the import, so the code base compiles again.

    Checklist

    • [x] tested locally
    • [ ] updated any relevant dependencies
    • [ ] updated any relevant documentation or examples

    API Changes

    Are there API changes?

    • [ ] Yes
    • [x] No

    If yes, please fill in the below

    1. Updated conversions?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    2. Updated CRDs?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    3. Updated infrastructure-components.yaml?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    4. Updated ./examples/capi-quickstart.yaml?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    5. Updated necessary files under ./infrastructure-vcd/v1.0.0/?
      • [ ] Yes
      • [ ] No
      • [ ] N/A

    Issue

    N/A; I could open one if needed.



  • VCDA-3582, 3740: Store CRS in RDE, remove CNI

    Description

    Please provide a brief description of the changes proposed in this Pull Request

    • Iterate through ClusterResourceSetBindings and update the RDE.
    • Remove the hardcoded CNI.
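
    For context, ClusterResourceSet is the upstream CAPI addons mechanism that takes over from the hardcoded CNI; a minimal, hypothetical example that delivers CNI manifests from a ConfigMap:

        apiVersion: addons.cluster.x-k8s.io/v1beta1
        kind: ClusterResourceSet
        metadata:
          name: cni-addon
          namespace: default
        spec:
          clusterSelector:
            matchLabels:
              cni: enabled          # clusters labeled cni=enabled receive the resources
          resources:
            - name: cni-manifests   # placeholder ConfigMap holding the CNI YAML
              kind: ConfigMap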

    Checklist

    • [X] tested locally
    • [X] updated any relevant dependencies
    • [ ] updated any relevant documentation or examples

    API Changes

    Are there API changes?

    • [ ] Yes
    • [X] No

    If yes, please fill in the below

    1. Updated conversions?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    2. Updated CRDs?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    3. Updated infrastructure-components.yaml?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    4. Updated ./examples/capi-quickstart.yaml?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    5. Updated necessary files under ./infrastructure-vcd/v1.0.0/?
      • [ ] Yes
      • [ ] No
      • [ ] N/A

    Issue

    If applicable, please reference the relevant issue

    Fixes #



  • Redo compute policy rename changes and regenerate conversion files

    • Revert PR https://github.com/vmware/cluster-api-provider-cloud-director/pull/91, as its zz_generated.conversion.go file is wrong
    • Redo the compute policy rename changes
    • Add omitempty to rdeId, as it is optional
    • Regenerate zz_generated.conversion.go using the conversion-gen tool


  • remove containerd proxy config

    As we do the containerd proxy config via kubeadm, and the built-in config overwrites our own config, we have to remove it.
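
    For reference, a hedged sketch of delivering the proxy drop-in through kubeadm instead (the path follows the usual containerd systemd drop-in convention; proxy values are placeholders):

        kubeadmConfigSpec:
          files:
            - path: /etc/systemd/system/containerd.service.d/http-proxy.conf
              owner: root:root
              permissions: "0644"
              content: |
                [Service]
                Environment="HTTP_PROXY=http://proxy.example.com:3128"
                Environment="HTTPS_PROXY=http://proxy.example.com:3128"
                Environment="NO_PROXY=localhost,127.0.0.1,cluster.local"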

    Signed-off-by: Mario Constanti [email protected]



  • Allow users to define vm naming logic

    Description

    Currently, VMs are named after machine.Name. In some cases, we need to name VMs with custom logic.

    This PR adds a new field to the VCDMachine CR so that users can define templates to name VMs using Go templates and Sprig functions. .machine and .vcdMachine refer to the Machine and VCDMachine CRs in the templates.

    Example

    spec:
      vmNamingTemplate: 'mycustomprefix{{.machine.Name |sha256sum | trunc 7}}'
    

    Open Points

    This field must be immutable; otherwise, users can change it and the controller will no longer be able to find the existing VMs. I can implement a check in the validating webhook if you are OK with that.

    Checklist

    • [x] tested locally
    • [ ] updated any relevant dependencies
    • [ ] updated any relevant documentation or examples

    API Changes

    Are there API changes?

    • [x] Yes
    • [ ] No

    If yes, please fill in the below

    1. Updated conversions?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    2. Updated CRDs?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    3. Updated infrastructure-components.yaml?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    4. Updated ./examples/capi-quickstart.yaml?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    5. Updated necessary files under ./infrastructure-vcd/v1.0.0/?
      • [ ] Yes
      • [ ] No
      • [ ] N/A


  • Update vm.VM object after updating network configuration

    Description

    When we call the vm.UpdateNetworkConnectionSection method, we update the VM's network interfaces. We then call the getPrimaryNetwork function, but it uses the old vm object. We need to refresh the vm object after updating the network configuration, for consistency.

    Checklist

    • [x] tested locally
    • [ ] updated any relevant dependencies
    • [ ] updated any relevant documentation or examples

    API Changes

    Are there API changes?

    • [ ] Yes
    • [x] No

    If yes, please fill in the below

    1. Updated conversions?
      • [ ] Yes
      • [x] No
      • [ ] N/A
    2. Updated CRDs?
      • [ ] Yes
      • [x] No
      • [ ] N/A
    3. Updated infrastructure-components.yaml?
      • [ ] Yes
      • [x] No
      • [ ] N/A
    4. Updated ./examples/capi-quickstart.yaml?
      • [ ] Yes
      • [x] No
      • [ ] N/A
    5. Updated necessary files under ./infrastructure-vcd/v1.0.0/?
      • [ ] Yes
      • [x] No
      • [ ] N/A


  • [CAFV-81] Modify conversion logic to restore v1beta1 fields using data annotation

    Description

    Please provide a brief description of the changes proposed in this Pull Request

    • Add conversion logic to restore newly created fields using data annotation.

    Checklist

    • [x] tested locally
    • [ ] updated any relevant dependencies
    • [ ] updated any relevant documentation or examples

    API Changes

    Are there API changes?

    • [x] Yes
    • [ ] No

    If yes, please fill in the below

    1. Updated conversions?
      • [x] Yes
      • [ ] No
      • [ ] N/A
    2. Updated CRDs?
      • [ ] Yes
      • [ ] No
      • [x] N/A
    3. Updated infrastructure-components.yaml?
      • [ ] Yes
      • [ ] No
      • [x] N/A
    4. Updated ./examples/capi-quickstart.yaml?
      • [ ] Yes
      • [ ] No
      • [x] N/A
    5. Updated necessary files under ./infrastructure-vcd/v1.0.0/?
      • [ ] Yes
      • [ ] No
      • [x] N/A

    Issue

    If applicable, please reference the relevant issue

    Fixes https://github.com/vmware/cluster-api-provider-cloud-director/issues/355



  • [CAFV-95] Upgrade Golang.org/x/net package to v0.4.0

    Signed-off-by: ymo24 [email protected]

    Description

    Please provide a brief description of the changes proposed in this Pull Request

    Checklist

    • [ ] tested locally
    • [ ] updated any relevant dependencies
    • [x] updated any relevant documentation or examples

    API Changes

    Are there API changes?

    • [ ] Yes
    • [x] No

    If yes, please fill in the below

    1. Updated conversions?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    2. Updated CRDs?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    3. Updated infrastructure-components.yaml?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    4. Updated ./examples/capi-quickstart.yaml?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    5. Updated necessary files under ./infrastructure-vcd/v1.0.0/?
      • [ ] Yes
      • [ ] No
      • [ ] N/A

    Issue

    If applicable, please reference the relevant issue

    N/A (dependency upgrade: golang.org/x/net to v0.4.0)



  • [CAFV-55] Update README Document about registering the RDE Entitytype payload from the schema.json

    Description

    Please provide a brief description of the changes proposed in this Pull Request

    • Added a new schema file that holds all the fields required for registering the CAPVCD entity type.
    • Updated the VCD_SETUP.md doc to reference this file, and added a note indicating that the entity_type.json file contains the schema and other fields, with the schema linked to the proper file.

    Checklist

    • [ ] tested locally
    • [ ] updated any relevant dependencies
    • [x] updated any relevant documentation or examples

    API Changes

    Are there API changes?

    • [ ] Yes
    • [x] No

    If yes, please fill in the below

    1. Updated conversions?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    2. Updated CRDs?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    3. Updated infrastructure-components.yaml?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    4. Updated ./examples/capi-quickstart.yaml?
      • [ ] Yes
      • [ ] No
      • [ ] N/A
    5. Updated necessary files under ./infrastructure-vcd/v1.0.0/?
      • [ ] Yes
      • [ ] No
      • [ ] N/A

    Issue

    If applicable, please reference the relevant issue

    Fixes #



  • Add cluster ID as description in VCD resources

    Is your feature request related to a problem? Please describe.

    We can identify which cluster a resource (virtual service, LB pool, DNAT rule) belongs to based on the ID, but if the cluster is gone it is not straightforward to confirm that the resource can be cleaned up.

    Describe the solution you'd like

    Add cluster name to the description field of

    • Virtual service
    • LB Pool
    • NAT Rules

    Describe alternatives you've considered

    No response

    Additional context

    No response

Go library for the VMware vSphere API

govmomi A Go library for interacting with VMware vSphere APIs (ESXi and/or vCenter). In addition to the vSphere API client, this repository includes:

Dec 6, 2021
App for VMware Workstation to auto-start VMs on Windows reboot

VMware Workstation AutoStart This is an auto-start app for VMware Workstation to auto-start VMs on Windows reboot with VMware Workstation installed.

Dec 15, 2021
macOS Unlocker V4.0 for VMware Workstation

macOS Unlocker V4.0 for VMware Workstation IMPORTANT Use a release from the Releases section of this GitHub repository. https://github.com/DrDonk/golo

Dec 29, 2022
Kubernetes Cluster API Provider AWS

Kubernetes Cluster API Provider AWS Kubernetes-native declarative infrastructure for AWS. What is the Cluster API Provider AWS The Cluster API brings

Nov 2, 2022
Cluster API Provider for KubeVirt

Kubernetes Template Project The Kubernetes Template Project is a template for starting new projects in the GitHub organizations owned by Kubernetes. A

Jan 4, 2023
capc (cap ka) is a cluster api provider for the civo platform created for the hackathon for fun

capc (cap ka) is a cluster api provider for the civo platform created for the hackathon for fun! Interested in helping drive it forward? You are more than welcome to join in!

Nov 20, 2022
Cloud-Z gathers information and performs benchmarks on cloud instances in multiple cloud providers.

Cloud-Z Cloud-Z gathers information and performs benchmarks on cloud instances in multiple cloud providers. Cloud type, instance id, and type CPU infor

Jun 8, 2022
kubetnl tunnels TCP connections from within a Kubernetes cluster to a cluster-external endpoint, e.g. to your local machine. (the perfect complement to kubectl port-forward)

kubetnl kubetnl (kube tunnel) is a command line utility to tunnel TCP connections from within a Kubernetes cluster to a cluster-external endpoint, e.g. to you

Dec 16, 2022
A pod scaler golang app that can scale replicas either inside or outside of the cluster

pod-scaler A simple pod scaler golang application that can scale replicas via manipulating the deployment Technologies The project has been created us

Oct 24, 2021
Go-gke-pulumi - A simple example that deploys a GKE cluster and an application to the cluster using pulumi

This example deploys a Google Cloud Platform (GCP) Google Kubernetes Engine (GKE) cluster and an application to it

Jan 25, 2022
Influxdb-cluster - InfluxDB Cluster for replacing InfluxDB Enterprise

InfluxDB ATTENTION: Around January 11th, 2019, master on this repository will be

Dec 26, 2022
A Terraform module to manage cluster authentication (aws-auth) for an Elastic Kubernetes (EKS) cluster on AWS.

Archive Notice The terraform-aws-modules/eks/aws v.18.20.0 release has brought back support for the aws-auth configmap! For this reason, I highly encourage us

Dec 4, 2022
K8s controller implementing Multi-Cluster Services API based on AWS Cloud Map.

AWS Cloud Map MCS Controller for K8s Introduction AWS Cloud Map multi-cluster service discovery for Kubernetes (K8s) is a controller that implements e

Dec 17, 2022
OpenAPI Terraform Provider that configures itself at runtime with the resources exposed by the service provider (defined in a swagger file)

Terraform Provider OpenAPI This terraform provider aims to minimise as much as possible the efforts needed from service providers to create and mainta

Dec 26, 2022
Terraform provider to help with various AWS automation tasks (mostly all that stuff we cannot accomplish with the official AWS terraform provider)

terraform-provider-awsutils Terraform provider for performing various tasks that cannot be performed with the official AWS Terraform Provider from Has

Dec 8, 2022
Terraform Provider for Azure (Resource Manager)

Terraform Provider for Azure (Resource Manager) Version 2.x of the AzureRM Provider requires Terraform 0.12.x and later, but 1.0 is recommended. Terra

Oct 16, 2021
provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters

provider-kubernetes provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters ty

Dec 14, 2022
Terraform-provider-mailcow - Terraform provider for Mailcow

Terraform Provider Scaffolding (Terraform Plugin SDK) This template repository i

Dec 31, 2021
Provider-generic-workflows - A generic provider which uses argo workflows to define the backend actions.

provider-generic-workflows provider-generic-workflows is a generic provider which uses argo workflows for managing the external resource. This will re

Jan 1, 2022