Enterprise-grade container platform tailored for multicloud and multi-cluster management

KubeSphere Container Platform




What is KubeSphere

English | 中文

KubeSphere is a distributed operating system for cloud-native applications, with Kubernetes as its kernel. It provides a plug-and-play architecture that allows third-party applications to be seamlessly integrated into its ecosystem. KubeSphere is also a multi-tenant, enterprise-grade container platform with full-stack automated IT operations and streamlined DevOps workflows. It provides a developer-friendly wizard-style web UI, helping enterprises build a more robust and feature-rich platform that includes the most common functionalities needed for an enterprise Kubernetes strategy. See the Feature List for details.

The following screenshots give a closer look at KubeSphere. Please check What is KubeSphere for further information.

Workbench Project Resources
CI/CD Pipeline App Store

Demo Environment

Use the account demo1 / Demo123 to log in to the demo environment. Please note that the account is granted view access only. You can also take a quick look at the KubeSphere Demo Video.

Architecture

KubeSphere uses a loosely coupled architecture that separates the frontend from the backend. Backend components are delivered as Docker containers, and external systems can access them through REST APIs. See Architecture for details.

Architecture

Features

| Feature | Description |
| --- | --- |
| Provisioning Kubernetes Clusters | Supports deploying Kubernetes on your infrastructure out of the box, including online and air-gapped installation |
| Multi-cluster Management | Provides a centralized control plane to manage multiple Kubernetes clusters; supports application distribution across multiple clusters and cloud providers |
| Kubernetes Resource Management | Provides a web console for creating and managing Kubernetes resources, with powerful observability including monitoring, logging, events, alerting and notification |
| DevOps System | Provides out-of-the-box CI/CD based on Jenkins, and offers automated workflow tools including binary-to-image (B2I) and source-to-image (S2I) |
| Application Store | Provides an application store for Helm-based applications, and offers application lifecycle management |
| Service Mesh (Istio-based) | Provides fine-grained traffic management, observability and tracing for distributed microservice applications, with visualization of the traffic topology |
| Rich Observability | Provides multi-dimensional monitoring metrics; offers multi-tenant logging, events and auditing management; supports alerting and notification for both applications and infrastructure |
| Multi-tenant Management | Provides unified authentication with fine-grained roles and a three-tier authorization system; supports AD/LDAP authentication |
| Infrastructure Management | Supports node management and monitoring, and adding new nodes to a Kubernetes cluster |
| Storage Support | Supports open source storage solutions such as GlusterFS, Ceph RBD, NFS and LocalPV (default); provides CSI plugins to consume storage from cloud providers |
| Network Support | Supports Calico, Flannel, etc.; provides Network Policy management and the bare-metal load balancer plugin Porter |
| GPU Support | Supports adding GPU nodes and vGPU, enabling ML applications such as TensorFlow to run on Kubernetes |

Please see Features and Benefits for further information.


Latest Release

KubeSphere 3.0.0 is now generally available! See the Release Notes for 3.0.0 for details.

Installation

KubeSphere can run anywhere, from an on-premises datacenter to any cloud or edge environment. In addition, it can be deployed on any version-compatible Kubernetes cluster.

QuickStarts

Quickstarts include six hands-on lab exercises that help you quickly get started with KubeSphere.

Installing on Existing Kubernetes Cluster

Installing on Linux
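For the Linux route, the typical flow uses the KubeKey (`kk`) CLI; a command sketch assuming the v3.0.0 release artifacts (the version numbers below are examples, so check the installation docs for current ones):

```shell
# Download KubeKey, the installer CLI used by the Linux quickstart.
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
chmod +x kk

# All-in-one cluster: Kubernetes plus KubeSphere v3.0.0 on the local machine.
./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere v3.0.0

# On an existing Kubernetes cluster, apply the ks-installer manifests instead:
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml

# Watch installation progress:
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

The quickstart guides linked above cover hardware prerequisites and multi-node configuration files in detail.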

Contributing, Support, Discussion, and Community

We ❤️ your contribution. The community walks you through how to get started contributing to KubeSphere. The development guide explains how to set up a development environment.

Please submit any KubeSphere bugs, issues, and feature requests to KubeSphere GitHub Issues.

Who is using KubeSphere

The user case studies page lists the project's adopters. You can submit a PR to add your institution's name and homepage if you are using KubeSphere.

Landscapes



    

KubeSphere is a member of CNCF and a Kubernetes Conformance Certified platform, enriching the CNCF Cloud Native Landscape.

Comments
  • Error occurs on the custom monitoring page

    Describe the Bug: An error occurs on the custom monitoring page.

    For UI issues please also add a screenshot that shows the issue.

    Versions Used: KubeSphere ks3.2; Kubernetes: (if the KubeSphere installer was used, you can skip this)

    How To Reproduce Steps to reproduce the behavior:

    1. In ks3.1.1, update the images of ks-console, ks-controller-manager, and ks-apiserver to a nightly build (20210915)
    2. Install the CRD monitoring-dashboard-customResourceDefinition.yaml
    3. Enter the custom monitoring page; the error occurs

    Problem analysis: replace v1alpha1 in the interface with v1alpha2.

    /assign @zhu733756 @harrisonliu5

  • Installing KubeSphere fails: always shows "Waiting for etcd to start"

    [master1 172.16.0.2] MSG: Configuration file already exists
    Waiting for etcd to start (repeated 20 times)
    WARN[23:31:55 CST] Task failed ...
    WARN[23:31:55 CST] error: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master1.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master1-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://172.16.0.2:2379,https://172.16.0.3:2379,https://172.16.0.4:2379 cluster-health | grep -q 'cluster is healthy'"
    Error: client: etcd cluster is unavailable or misconfigured
    error #0: client: endpoint https://172.16.0.4:2379 exceeded header timeout
    error #1: client: endpoint https://172.16.0.2:2379 exceeded header timeout
    error #2: client: endpoint https://172.16.0.3:2379 exceeded header timeout
    Process exited with status 1

    Usage: kk create cluster [flags]

    Flags:
      -f, --filename string          Path to a configuration file
      -h, --help                     help for cluster
          --skip-pull-images         Skip pre pull images
          --with-kubernetes string   Specify a supported version of kubernetes
          --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)
      -y, --yes                      Skip pre-check of the installation

    Global Flags:
          --debug   Print detailed information (default true)

    Failed to start etcd cluster: all three etcd endpoints exceeded the header timeout (see the error above).
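    When `kk` reports header timeouts like these, it usually helps to rerun the same `etcdctl` health check by hand on a master node to see which member is unreachable. A diagnostic sketch, reusing the certificate paths and endpoints from the error log above:

    ```shell
    # Run on master1; paths and endpoints are taken from the error log above.
    export ETCDCTL_API=2
    export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master1.pem'
    export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master1-key.pem'
    export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem'

    # Check each member individually to find which node is timing out.
    for ep in https://172.16.0.2:2379 https://172.16.0.3:2379 https://172.16.0.4:2379; do
      echo "== $ep =="
      /usr/local/bin/etcdctl --endpoints="$ep" cluster-health || true
    done

    # Header timeouts are frequently a network/firewall problem: verify that
    # ports 2379 (client) and 2380 (peer) are reachable between the masters,
    # e.g. with `ss -ltnp | grep 2379` on each node.
    ```

    If a single endpoint is unhealthy, checking that node's etcd service logs (`journalctl -u etcd`) is a reasonable next step.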

  • Has anyone installed EdgeMesh successfully in KubeSphere?

    I followed this document: https://www.modb.pro/db/241198. EdgeMesh was installed from the App Store successfully, but the EdgeMesh demo failed. I have run EdgeMesh successfully on KubeEdge without KubeSphere. Does anyone know how to check the error, or has anyone done the same thing?

  • [Urgent] KubeSphere 2.0.2 offline installation fails on CentOS Linux release 7.5.1804

    Problem description

    TASK [ks-devops/ks-devops : OpenPitrix | Waiting for openpitrix-db] *****************************************************************************************************************************************************************************
    Tuesday 29 October 2019  21:06:32 +0800 (0:00:00.776)       0:06:25.399 ******* 
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (15 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (14 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (13 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (12 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (11 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (10 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (9 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (8 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (7 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (6 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (5 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (4 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (3 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (2 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (1 retries left).
    fatal: [ks-allinone]: FAILED! => {"attempts": 15, "changed": true, "cmd": "/usr/local/bin/kubectl -n openpitrix-system get pod | grep openpitrix-db-deployment | awk '{print $3}'", "delta": "0:00:00.277817", "end": "2019-10-29 21:11:40.450306", "rc": 0, "start": "2019-10-29 21:11:40.172489", "stderr": "", "stderr_lines": [], "stdout": "Pending", "stdout_lines": ["Pending"]}
    
    PLAY RECAP **************************************************************************************************************************************************************************************************************************************
    ks-allinone                : ok=244  changed=90   unreachable=0    failed=1   
    
    Tuesday 29 October 2019  21:11:40 +0800 (0:05:07.919)       0:11:33.318 ******* 
    =============================================================================== 
    

    Hardware configuration of the installation environment: 8 vCPU, 32 GB RAM; CentOS Linux release 7.5.1804; offline installation of KubeSphere 2.0.2; installation method: all-in-one offline

    Error messages or screenshots: the same log as above, followed by the task timing summary:
    ks-devops/ks-devops : OpenPitrix | Waiting for openpitrix-db --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 307.92s
    openpitrix : OpenPitrix | Installing OpenPitrix(2) ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 139.90s
    ks-monitor : ks-monitor | Getting monitor installation files ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 20.57s
    openpitrix : OpenPitrix | Getting OpenPitrix installation files ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 18.29s
    prepare/nodes : Ceph RBD | Installing ceph-common (YUM) --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 14.92s
    prepare/nodes : KubeSphere| Installing JQ (YUM) ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 8.59s
    metrics-server : Metrics-Server | Getting metrics-server installation files -------------------------------------------------------------------------------------------------------------------------------------------------------------- 7.80s
    prepare/base : KubeSphere | Labeling system-workspace ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 6.91s
    ks-monitor : ks-monitor | Creating manifests --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 5.35s
    ks-logging : ks-logging | Creating manifests --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.61s
    prepare/base : KubeSphere | Getting installation init files ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 4.03s
    download : Download items ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.85s
    prepare/base : KubeSphere | Init KubeSphere ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.70s
    prepare/nodes : GlusterFS | Installing glusterfs-client (YUM) ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.38s
    prepare/base : KubeSphere | Create kubesphere namespace ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.32s
    prepare/base : KubeSphere | Creating manifests ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.03s
    ks-console : ks-console | Creating manifests --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.95s
    openpitrix : OpenPitrix | Getting OpenPitrix installation files -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.82s
    ks-devops/s2i : S2I | Creating manifests ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.76s
    ks-monitor : ks-monitor | Installing prometheus-operator --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.73s
    failed!
    [root@JTGC-APP190599 scripts]# kubectl -n openpitrix-system get pod
    NAME                                                      READY   STATUS     RESTARTS   AGE
    openpitrix-api-gateway-deployment-6bc9747f6c-xxdl2        0/1     Init:0/2   0          8m43s
    openpitrix-app-manager-deployment-7df95d8848-hv8cj        0/1     Init:0/2   0          8m42s
    openpitrix-category-manager-deployment-694bd85647-6pdwm   0/1     Init:0/2   0          8m42s
    openpitrix-cluster-manager-deployment-5c8c797d59-265wf    0/1     Init:0/2   0          8m42s
    openpitrix-db-deployment-79f9db9dd9-fcp5s                 0/1     Pending    0          8m45s
    openpitrix-etcd-deployment-84d677449b-mt6r5               0/1     Pending    0          8m44s
    openpitrix-iam-service-deployment-6bc657d9c6-khgtk        0/1     Init:0/2   0          8m41s
    openpitrix-job-manager-deployment-d9d966976-7q7dv         0/1     Init:0/2   0          8m41s
    openpitrix-minio-deployment-594df9bb5-wssbw               0/1     Pending    0          8m44s
    openpitrix-repo-indexer-deployment-5856985997-fvh76       0/1     Init:0/2   0          8m40s
    openpitrix-repo-manager-deployment-b9888bf58-87zrs        0/1     Init:0/2   0          8m40s
    openpitrix-runtime-manager-deployment-54c6bb64f4-zmfnd    0/1     Init:0/2   0          8m40s
    openpitrix-task-manager-deployment-5479966bfc-lj6xl       0/1     Init:0/2   0          8m39s
    [root@JTGC-APP190599 scripts]# cat /etc/resolv.conf
    

    Installer version: 8 vCPU, 32 GB RAM; CentOS Linux release 7.5.1804; offline installation of KubeSphere 2.0.2; all-in-one offline install
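    Pods stuck in `Pending`, as in the `kubectl get pod` output above, mean the scheduler cannot place them, most often because a PersistentVolumeClaim is unbound or node resources are insufficient. A diagnostic sketch (the pod name is an example taken from the listing above):

    ```shell
    # Show why the scheduler cannot place the pod; look at the Events section.
    kubectl -n openpitrix-system describe pod openpitrix-db-deployment-79f9db9dd9-fcp5s

    # Pending database/etcd/minio pods frequently point to storage problems:
    # check whether their PersistentVolumeClaims are Bound.
    kubectl -n openpitrix-system get pvc

    # Confirm the node still has allocatable CPU and memory.
    kubectl describe nodes | grep -A 8 'Allocated resources'
    ```

    The `Init:0/2` pods above are likely just waiting on the Pending database pods, so resolving the storage or capacity issue first is a sensible order.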

  • Install Error

    OS Version: CentOS Linux release 7.5.1804 (Core) Kubesphere Version: kubesphere-all-advanced-2.0.0-dev-20190514

    SELinux is disabled, swap is off, and firewalld is disabled.

    The error messages are as follows:

        kubernetes/preinstall : Update package management cache (YUM) ----- 25.34s
        kubernetes/preinstall : Install packages requirements ----- 2.96s
        gather facts from all instances ----- 0.72s
        bootstrap-os : Install libselinux-python and yum-utils for bootstrap ----- 0.70s
        bootstrap-os : Check python-pip package ----- 0.69s
        bootstrap-os : Install pip for bootstrap ----- 0.68s
        download : Download items ----- 0.66s
        bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) ----- 0.65s
        kubernetes/preinstall : Create kubernetes directories ----- 0.64s
        download : Sync container ----- 0.64s
        download : Download items ----- 0.62s
        download : Sync container ----- 0.62s
        bootstrap-os : Gather nodes hostnames ----- 0.60s
        kubernetes/preinstall : Set selinux policy ----- 0.54s
        container-engine/docker : Ensure old versions of Docker are not installed. | RedHat ----- 0.44s
        container-engine/docker : ensure service is started if docker packages are already present ----- 0.44s
        bootstrap-os : Install epel-release for bootstrap ----- 0.43s
        kubernetes/preinstall : Create cni directories ----- 0.41s
        kubernetes/preinstall : Hosts | populate inventory into hosts file ----- 0.41s
        kubernetes/preinstall : Remove swapfile from /etc/fstab ----- 0.39s
        failed!

  • support OIDC identity provider

    Signed-off-by: hongming [email protected]

    What type of PR is this?

    /kind feature

    What this PR does / why we need it:

    Support OIDC identity provider.

    Which issue(s) this PR fixes:

    Fixes #2941

    Additional documentation, usage docs, etc.:

    See also:

    https://github.com/kubesphere/community/blob/master/sig-multitenancy/auth/how-to-configure-authentication.md

    https://github.com/kubesphere/community/blob/master/sig-multitenancy/auth/oidc-Identity-provider.md

  • CentOS 7.6: machine freezes a few hours after installation

    General remarks

    Fresh installation of CentOS 7.6, then installed KubeSphere v2.1; after running normally for a while, the machine freezes.

    Describe the bug(描述下问题)

    1. Freshly installed CentOS 7.6, then disabled firewalld and installed the latest KubeSphere v2.1.

    2. After installation everything works normally; the load average shown by top is around 1, as pictured.

    3. After a few hours, the load average shown by top spikes above 200 and the machine cannot be logged into. After a manual server reboot it becomes usable again, with error messages as pictured.

    4. The login page displays normally, but after entering the account and password, the login returns a 500 error, as pictured.

    For UI issues please also add a screenshot that shows the issue.

    Versions used (KubeSphere/Kubernetes): KubeSphere kubesphere-all-v2.1.0

    Environment(环境的硬件配置)

    1 master: 72 CPU / 32 GB, 0 nodes; CentOS version: CentOS Linux release 7.6.1810 (Core)

    (and other info are welcomed to help us debugging)

    To Reproduce: Steps to reproduce the behavior:

    1. Uninstall KubeSphere; checking the machine the next day, it works normally and the load is normal.
    2. Reinstall KubeSphere; the next day the machine cannot be logged into, although the login page displays normally, and the problem reproduces.
  • upgrade ingress nginx version

    What type of PR is this?

    /kind bug

    What this PR does / why we need it:

    The ingress-nginx controller deployment fails due to API removals in K8s v1.22+.

    Which issue(s) this PR fixes:

    Fixes #4548 #4486

    Special notes for reviewers:

    Does this PR introduce a user-facing change?

    None
    

    Additional documentation, usage docs, etc.:

    upgrade ingress nginx version
    https://github.com/kubernetes/ingress-nginx/blob/helm-chart-4.0.13/Changelog.md
    
  • Proxy DevOps APIs with group name and version

    What type of PR is this?

    /kind api-change /kind cleanup /area devops

    What this PR does / why we need it:

    1. Proxy DevOps APIs with group name and version
    2. Refactor old DevOps API registers in kapis package

    Which issue(s) this PR fixes:

    Fixes #4684

    Special notes for reviewers:

    Docker image for test: johnniang/ks-apiserver:proxy-devops-v1alpha1.

    Does this PR introduce a user-facing change?

    None
    

    Additional documentation, usage docs, etc.:

    
    

    /cc @kubesphere/sig-devops /cc @zryfish /cc @wansir

  • remove capability CRDs and update controller

    Signed-off-by: f10atin9 [email protected]

    What type of PR is this?

    /kind feature /area storage

    What this PR does / why we need it:

    Add new feature to manage storage capabilities in console

    Which issue(s) this PR fixes:

    Fixes #4075

    Special notes for reviewers:

    /cc @stoneshi-yunify @dkeven @kubesphere/sig-storage 
    

    Does this PR introduce a user-facing change?

    None
    

    Additional documentation, usage docs, etc.:

    
    
  • Add a function for shell access to nodes in KubeSphere

    What type of PR is this?

    /kind documentation /kind feature

    Why we need it:

    Add a function for shell access to nodes in KubeSphere.

    Which issue(s) this PR fixes:

    see #4569

    Add a function for shell access to nodes in KubeSphere.
    

    the proposal

  • support kubernetes audit log

    What's it about?

    Many cloud Kubernetes providers support the Kubernetes audit log, e.g. https://help.aliyun.com/document_detail/91406.html

    What's the reason why we need it?

    I believe this is an important feature for KubeSphere. There are a few use cases:

    • It is important for users concerned with cluster security.
    • Debugging, for example when a user wants to know who accessed which resource at what moment.

    Please leave a comment below if you agree, or just give this a thumbs-up.

    Area Suggestion

    /kind feature-request

  • use rule groups for alerting

    What's it about?

    Introduce new CRDs GlobalRuleGroups, ClusterRuleGroups, and RuleGroups to provide easier alerting rule operations.

    What's the reason why we need it?

    I believe this is an important feature for KubeSphere. There are a few use cases:

    • Better distinguishing of alerting rules with different scopes.
    • Easier modification and enabling/disabling of alerting rules, including the built-ins.
    • Conversion to and from PrometheusRules to maintain compatibility.

    Please leave a comment below if you agree, or just give this a thumbs-up.

    Area Suggestion

    /kind feature-request /area alerting

  • KubeSphere does not support gRPC probe health checks

    Describe the Bug: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/?spm=a2c4g.11186623.0.0.4841229cTTUstn#define-a-grpc-liveness-probe Kubernetes supports gRPC liveness probes as of version 1.24. When I define a gRPC liveness probe, the pod console of the KubeSphere web UI goes blank and an error is reported:

           livenessProbe:
             failureThreshold: 3
             grpc:
               port: 7001
               service: ""
             initialDelaySeconds: 20
             periodSeconds: 15
             successThreshold: 1
             timeoutSeconds: 2
           readinessProbe:
             failureThreshold: 3
             grpc:
               port: 7001
               service: ""
             initialDelaySeconds: 10
             periodSeconds: 3
             successThreshold: 1
             timeoutSeconds: 2
    

    Versions Used: KubeSphere v3.3.0; Kubernetes 1.25

    Expected behavior The web console can display correctly

    The URL I visited: org/clusters/default/projects/xxx-server/statefulsets/app-xxxx/resource-status

  • Documentation and feature requirements


    These are the areas we would most like to see enhanced, after using KubeSphere for more than two years:

    1. Offline deployment scenarios (simplification / minimized packaging by function / multi-architecture support / performance testing)
    2. APM integration scenarios (EFK/ELK/SkyWalking/Istio/xx)
    3. Monitoring and alerting scenarios (upgrade/maintenance/expansion/integration)
    4. Pluggable open source components (with new upstream versions integrated as much as possible)
    5. Gateway integration with multiple choices (APISIX/ingress-nginx/default gateway/Traefik)
    6. Service release scenarios (grayscale/A-B/blue-green)
    7. Storage switching scenarios (local/NFS/Ceph)
    8. Cluster backup and switchover to a backup cluster.
  • support assigning more cluster-admin role to someone


    What's it about?

    Currently, anyone who wants to view several clusters they care about must be assigned the cluster-admin role on each cluster individually; there is no entry point for batch assignment. Or is there another solution that meets this need?
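    In plain Kubernetes RBAC terms, the per-cluster assignment described above amounts to repeating a binding like the following on every cluster; a batch-assignment entry would remove this repetition. The user and binding names here are placeholders, and KubeSphere manages cluster access through its own IAM resources, so this is only a rough analogy.

    ```yaml
    # Rough analogy in plain Kubernetes RBAC; names are placeholders
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: alice-cluster-admin        # repeated on each member cluster today
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: User
        name: alice
    ```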

    What's the reason why we need it?

    Area Suggestion

    /kind feature-request
