kcp is a prototype of a Kubernetes API server that is not a Kubernetes cluster - a place to create, update, and maintain Kube-like APIs with controllers, above clusters or without clusters at all.

kcp is a minimal Kubernetes API server

How minimal exactly? kcp doesn't know about Pods or Nodes, let alone Deployments, Services, LoadBalancers, etc.

By default, kcp only knows about:

(see the kubectl api-resources output sketched below)
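
A rough sketch of what that output looks like against a freshly started kcp server (illustrative; the exact set of built-in resources varies by version):

    $ kubectl api-resources
    NAME                        SHORTNAMES   APIVERSION                NAMESPACED   KIND
    configmaps                  cm           v1                        true         ConfigMap
    events                      ev           v1                        true         Event
    namespaces                  ns           v1                        false        Namespace
    secrets                                  v1                        true         Secret
    serviceaccounts             sa           v1                        true         ServiceAccount
    customresourcedefinitions   crd,crds     apiextensions.k8s.io/v1   false        CustomResourceDefinition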

Like vanilla Kubernetes, kcp persists these resources in etcd for durable storage.

Any other resources, including Kubernetes-standard resources like Pods, Nodes and the rest, can be added as CRDs and reconciled using the standard controllers.
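
For example, a minimal sketch of teaching kcp about Deployments by installing a CRD (the schema below is a permissive placeholder, not the real apps/v1 schema):

    $ kubectl apply -f - <<EOF
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: deployments.apps
    spec:
      group: apps
      names:
        kind: Deployment
        listKind: DeploymentList
        plural: deployments
        singular: deployment
      scope: Namespaced
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            # placeholder schema for illustration; a real one would describe spec/status
            x-kubernetes-preserve-unknown-fields: true
    EOF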

Why would I want that?

Kubernetes is mainly known as a container orchestration platform today, but we believe it can be even more.

With the power of CustomResourceDefinitions, Kubernetes provides a flexible platform for declarative APIs of all types, and the reconciliation pattern common to Kubernetes controllers is a powerful tool in building robust, expressive systems.

At the same time, a diverse and creative community of tools and services has sprung up around Kubernetes APIs.

Imagine a declarative Kubernetes-style API for anything, supported by an ecosystem of Kubernetes-aware tooling, separate from Kubernetes-the-container-orchestrator.

That's kcp.

Is kcp a "fork" of Kubernetes? 🍴

No.

kcp as a prototype currently depends on some unmerged changes to Kubernetes, but we intend to pursue these changes through the usual KEP process, until (hopefully!) Kubernetes can be configured to run as kcp runs today.

Our intention is that our experiments improve Kubernetes for everyone, by improving CRDs, scaling resource watching, and enabling more and better controllers, whether you're using Kubernetes as a container orchestrator or not.

Our kcp-specific patches are in the feature-logical-clusters feature branch in the kcp-dev/kubernetes repo. See DEVELOPMENT.md for how the patches are structured and how they must be formatted during our experimentation phase. See GOALS.md for more info on how we intend to use kcp as a test-bed for exploring ideas that improve the entire ecosystem.

What's in this repo?

First off, this is a prototype, not a project. We're exploring these ideas here to try them out, experiment, and bounce them off each other. Our basic demo leverages the following components to show off these ideas:

  • kcp, which serves a Kubernetes-style API with a minimum of built-in types.
  • cluster-controller, which along with the Cluster CRD allows kcp to connect to other full-featured Kubernetes clusters (see the sketch after this list), and includes these components:
    • syncer, which runs on Kubernetes clusters registered with the cluster-controller, and watches kcp for resources assigned to that cluster.
    • deployment-splitter, which demonstrates a controller that can split a Deployment object into multiple "virtual Deployment" objects across multiple clusters.
    • crd-puller, which demonstrates mirroring CRDs from a cluster back to kcp.
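
As a sketch, registering a physical cluster with the Cluster CRD might look like this (the API group/version and the kubeconfig field are assumptions for illustration; check the repo for the actual definition):

    $ kubectl apply -f - <<EOF
    apiVersion: cluster.example.dev/v1alpha1   # assumed group/version
    kind: Cluster
    metadata:
      name: my-physical-cluster
    spec:
      # assumed field: a kubeconfig the syncer can use to reach the physical cluster
      kubeconfig: |
        apiVersion: v1
        kind: Config
        ...
    EOF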

So what's this for?

Multi-Cluster Kubernetes?

kcp could be useful for multi-cluster scenarios, by running kcp as a control plane outside of any of your workload clusters.

Multi-Tenant Kubernetes?

kcp could be useful for multi-tenancy scenarios, by allowing multiple tenant clusters inside a cluster to be managed by a single kcp control plane.

Local Kubernetes Development?

kcp could be useful for local development scenarios, where you don't necessarily care about all of Kubernetes' many built-in resources and their reconciling controllers.

Embedded/low-resource scenarios?

kcp could be useful for environments where resources are scarce, by limiting the number of controllers that need to run. Kubernetes' asynchronous reconciliation pattern can also be very powerful in disconnected or intermittently connected environments, regardless of how workloads actually run.

Is that all?

No! See our GOALS.md doc and our docs/ directory for more on what we are trying to accomplish with this prototype.

What does kcp stand for?

kcp as a project stands for equality and justice for all people.

However, kcp is not an acronym.

How do I get started?

  1. Clone the repository.
  2. Install Go (1.16+).
  3. Download the latest kubectl binary for your OS.
  4. Build and start kcp in the background: go run ./cmd/kcp start.
  5. Tell kubectl where to find the kubeconfig: export KUBECONFIG=.kcp/data/admin.kubeconfig (this assumes your working directory is the root directory of the repository).
  6. Confirm you can connect to kcp: kubectl api-resources.
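
Putting the steps together, a first session looks something like this:

    $ git clone https://github.com/kcp-dev/kcp.git && cd kcp
    $ go run ./cmd/kcp start &                       # build and start kcp in the background
    $ export KUBECONFIG=.kcp/data/admin.kubeconfig   # from the repo root
    $ kubectl api-resources                          # confirm you can connect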

For more scenarios, see DEVELOPMENT.md.

This sounds cool and I want to help!

Thanks! And great!

This work is still in early development, which means it's not ready for production, but also that your feedback can have a big impact.

You can reach us here in this repository, via issues and discussions.

Comments
  • ✨ Support for local cluster services DNS resolution

    Summary

    Configure the synced Deployments' DNS config to point to a kcp DNS resolver that maps local (kcp) namespaces to physical namespaces, for cluster-local services.
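
    Conceptually, the synced Deployment's pod template gets a DNS config pointing at the kcp-provided resolver; a sketch, with all concrete values assumed for illustration:

      spec:
        template:
          spec:
            dnsPolicy: None
            dnsConfig:
              nameservers:
              - 10.96.0.99                          # assumed ClusterIP of the kcp DNS resolver
              searches:
              - kcp-phys-abc123.svc.cluster.local   # assumed physical namespace mapped from the kcp namespace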

    Related issue(s)

    Fixes #505 Fixes #1465

  • Prototype 3: Transparent Multi-Cluster Cordon/Drain End-User Demo

    Demo Objective

    User has an application that can be placed across multiple clusters and moved transparently.

    Demo Steps

    1. User creates a stateless web application, which is assigned to a physical cluster
    2. Physical cluster admin wants to perform maintenance on the cluster, but limit workload disruption
    3. Admin marks the cluster as cordoned (Unschedulable: true) -- new workloads are not assigned to the cluster
    4. Admin marks the cluster as drained/draining (EvictAfter: $now) -- existing workloads are rescheduled to another cluster, with some observed downtime (see the sketch after these steps)
    5. With no workloads now scheduled on the cluster, the admin is free to operate on it, upgrade it, uninstall syncer, delete the cluster, etc.
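
    A sketch of steps 3 and 4 as kubectl commands (the cluster name and the exact field names/casing are assumptions based on the description above):

      $ kubectl patch cluster us-east1 --type=merge \
          -p '{"spec":{"unschedulable":true}}'                  # step 3: cordon
      $ kubectl patch cluster us-east1 --type=merge \
          -p '{"spec":{"evictAfter":"2022-06-01T00:00:00Z"}}'   # step 4: drain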

    Action Items

    • [x] Scope the current demo as necessary to fit in prototype boundaries
    • [x] Create and link required tasks to realize demo
    • [x] #556
    • [x] #524
    • [ ] Contribute to final demo script and recording

    Nice to have

    • [ ] #525
  • APIExport consumer can't wildcard list/watch its schemas until the first APIBinding is created

    Describe the bug: a controller using APIExports can't successfully start up and wildcard list/watch the APIs it's exporting until the first APIBinding is created.

    To reproduce:

    1. Create an APIResourceSchema
    2. Create an APIExport that exports the APIResourceSchema
    3. Try to do a wildcard list/watch through the APIExport virtual workspace, such as
    https://$server/services/apiexport/root:default:andy/controller-runtime-example-data.my.domain/clusters/*/apis/data.my.domain/v1alpha1/widgets
    
    4. Get a 404

    In a controller-runtime app, you'd see something like this:

    1.6541008964721239e+09  ERROR   controller-runtime.source       if kind is a CRD, it should be installed before calling Start   {"kind": "Widget.data.my.domain", "error": "no matches for kind \"Widget\" in version \"data.my.domain/v1alpha1\""}
    

    Expected behavior: the controller should be able to start listing/watching its APIs even without any APIBindings for it.

    Additional context: none.

  • Reworking namespaces

    This PR changes the pattern used to generate the downstream namespace. It generates a shorter string while maintaining a big enough random part to avoid conflicts.

    • Extends the namespaceLocator struct to include the workload cluster UID and fully qualified path.
    • Renames the namespaceLocator logic-cluster field into workspace.
    • The Spec Syncer now uses an indexer to look up the desired downstream namespace, so we can now change the namespace generation pattern without breaking previous deployments.

    related: https://github.com/kcp-dev/kcp/issues/1280

  • Add initial per-workspace quota support

    Summary

    Add initial per-workspace quota support. I have only tested object count with configmaps. I assume other resources will work too, but I haven't tested them.

    Follow-up PRs will add support for quotas on namespaces in a workspace, making sure cpu/memory quota is enforced, etc.

    TODO

    • [x] Address FIXMEs
    • [x] Godoc
    • [x] e2e

    Dependencies

    • [x] https://github.com/kcp-dev/kubernetes/pull/75

    Related issue(s)

    Part of #1061

  • Workspaces VW: Fix obsolete permission management...

    Summary

    Fix obsolete permission management in the workspaces virtual workspace.

    Current situation

    The workspaces virtual workspace is still based on the old approach where user workspaces used to be created in top-level organization workspaces. It checks permissions against the clusterworkspaces/content resource named with the name of the current workspace (typically an organization) in the parent workspace of the current organization workspace.

    • for get, list and watch, it checks the access verb
    • for create, it checks the member verb

    Additionally, to delete a workspace, it also checks the delete verb on the clusterworkspaces/workspace resource of the current organization workspace.

    This is incompatible with both:

    • the new approach, in which non-admin user workspaces should no longer be created in top-level orgs
    • the Home workspaces that can be created on-the-fly (as well as their home bucket parent workspaces) when first accessed.

    Implemented changes

    For workspaces that are children of a top-level organization, we keep the same mechanism as before, to preserve compatibility for a while for user workspaces already created in top-level organizations.

    In all other cases, SARs in the workspaces virtual workspace are now always checked against the clusterworkspaces/workspace resource in the current workspace (not the parent anymore).

    • for get, list, watch requests, it checks the get verb
    • for create requests, it checks the create verb
    • for delete requests, it checks the get verb, as well as the delete verb on the resource name of the child workspace being deleted.

    Due to the kcp authorizer architecture, these SARs will also check the access verb on the clusterworkspaces/content resource in the parent workspaces (through the workspaceContentAuthorizer), as well as basic membership of the top-level org through the topLevelOrgAccessAuthorizer. So we don't lose any of the security checks previously done.

    Related issue(s)

    Prerequisite PR for the Home workspace EPIC

  • :bug: Avoid syncers deleting namespace from other synctargets.

    Summary

    This PR adds an additional check (the synctarget UID) when deleting downstream namespaces, to avoid a reconciliation loop between syncers from different kcp instances (for example, an old syncer from a previous kcp deployment).

    Related issue(s)

    related to: https://github.com/kcp-dev/kcp/issues/2041 (doesn't fix it as there are multiple root causes for that issue)

  • system:authenticated group is not added to users

    Describe the bug: the system:authenticated group is not added to users.

    To reproduce:

    1. Create a role binding that applies to system:authenticated (see the example after these steps)
    2. Make a request with an unprivileged user that should be authorized for the role binding created in step 1
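
    For example, step 1 could be (binding and role names assumed):

      $ kubectl create clusterrolebinding all-authenticated-view \
          --clusterrole=view --group=system:authenticated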

    Expected behavior: the user would be allowed to make the request.

  • Add a label to deactivate automatic namespace scheduling

    Currently the namespace controller owns the responsibility of assigning namespaces to physical clusters. The only way for other controllers (like the deployment-splitter), or for users manually, to distribute resources across physical clusters is to deactivate the namespace controller by starting kcp with --run-controllers=false --unsupported-run-individual-controllers="workspace-scheduler,cluster".

    Until more advanced scheduling / placement scenarios are supported, this PR proposes introducing the experimental.scheduling.kcp.dev/disabled label, which can be added to namespaces in order to deactivate the automatic placement onto physical clusters performed by the namespace controller.

    Setting this label prevents the namespace controller from overriding the scheduling performed by the deployment-splitter controller, or manual scheduling, for example when working on global load-balancing use cases.
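
    For illustration, opting a namespace out of automatic scheduling might look like this (the label value is an assumption; in this sketch the label's presence is what matters):

      $ kubectl label namespace my-app experimental.scheduling.kcp.dev/disabled=""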

    Fixes #494

  • :bug: Skip maximal permission policy authorizer for deep SAR requests

    Summary

    This PR fixes authorization for deep SAR requests on attributes that resolve to a workspace with a local maximal permission policy where the APIExport is also bound, such as the APIExport in the root workspace.

    Related issue(s)

    Fixes #2384.

  • move Config to separate file

    Summary

    Move Config, CompletedConfig and the related parts from server.go to config.go

    No code changes, just a file split to make it easier to consume and reason about, as discussed with @sttts.

  • 🐛 kcp: fix waitForOptionalSync method to wait for a proper signal

    Summary

    Before, this method was waiting on syncedCh, which is handled by the waitForSync method. syncedOptionalCh is fulfilled by the kcp-start-optional-informers hook.

    Related issue(s)

    Fixes #

  • build(deps): bump actions/cache from 3.0.11 to 3.2.2

    Bumps actions/cache from 3.0.11 to 3.2.2.
  • :bug: workload/resource: handle upsynced resources

    Summary

    This PR fixes an issue with upsynced resources getting their state changed to "Sync" when the initial synctarget is gone or unhealthy.

    If the resource is upsynced, check whether the synctarget still exists: if not, delete the resource; if it does, do nothing and exit early.

    TODO

    • [ ] write some e2e tests for the workload resource reconciler.

    Related issue(s)

    Fixes https://github.com/kcp-dev/kcp/issues/2530

  • build(deps): bump uraimo/run-on-arch-action from 2.3.0 to 2.5.0

    Bumps uraimo/run-on-arch-action from 2.3.0 to 2.5.0.
  • Bug: Upsynced resources are "Synced" when the initial synctarget is not ready/gone

    When upsyncing a resource, if the synctarget referenced in the "Upsync" state is deleted or unhealthy, the scheduler will remove the "Upsync" state and set the "Sync" state once a healthy synctarget exists.

    The workload scheduler should:

    • Remove an upsynced resource if the synctarget is gone
    • Not change the scheduling of the resource if the synctarget is unhealthy.