KubeCube is an open source enterprise-level container platform

KubeCube


English | 中文文档

KubeCube is an open source enterprise-level container platform that provides enterprises with visualized management of Kubernetes resources and unified multi-cluster, multi-tenant management. KubeCube simplifies application deployment, manages application lifecycles, and provides rich monitoring and log auditing functions, helping companies quickly build a powerful and feature-rich container cloud platform.

(dashboard screenshot)

Features

  • Out of the box

    • Gentle learning curve: integrates unified authentication, multi-cluster management, monitoring, logging, and alerting, freeing teams to focus on their work
    • Operation friendly: provides visualized management and unified operation of Kubernetes resources, with comprehensive self-monitoring capabilities
    • Quick deployment: offers an All-in-One minimal deployment mode, plus a high-availability deployment mode for production
  • Multi-tenant

    • Provides a multi-level model of tenants, projects, and spaces to meet enterprise needs for resource isolation and software project management
    • Builds permission control, resource sharing/isolation, and other capabilities on top of the multi-tenant model (see the sketch below)
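
    As a purely illustrative sketch, the hierarchy could be declared through custom resources along these lines; the API group, kinds, and fields below are assumptions for illustration, not taken from this README:

    # Hypothetical sketch: API group, kinds, and fields are assumed, not confirmed here.
    apiVersion: tenant.kubecube.io/v1
    kind: Tenant
    metadata:
      name: tenant-demo
    spec:
      displayName: "Demo Tenant"
    ---
    apiVersion: tenant.kubecube.io/v1
    kind: Project
    metadata:
      name: project-demo
      labels:
        kubecube.io/tenant: tenant-demo   # assumed label binding the project to its tenant
    spec:
      displayName: "Demo Project"
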
  • Unified Multi Kubernetes Cluster Management

    • Provides a central management panel for multiple Kubernetes clusters and supports cluster import
    • Provides unified identity authentication and access control across multiple Kubernetes clusters, extending Kubernetes-native RBAC capabilities
    • Quickly manages cluster resources through WebConsole and CloudShell
  • Cluster autonomy

    • When the KubeCube service is down for maintenance, each member cluster keeps serving normally, continues to enforce access control, and remains transparent to business Pods
  • Hot Plug

    • Provides a minimal installation; users can enable or disable features at any time according to their needs
    • No restart of the KubeCube service is needed when toggling a feature (see the sketch below)
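
    As a sketch, toggling works by editing the Hotplug custom resource (hotplugs.hotplug.kubecube.io, as shown in the comments further below); the resource name common is taken from those comments:

    # Sketch: flip a component's status in the Hotplug CR; no KubeCube restart needed
    kubectl edit hotplugs.hotplug.kubecube.io common
    # then set the target component's status field to enabled or disabled
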
  • Multi-access

    • Supports an Open API, making it convenient to connect to users' existing systems
    • Compatible with the Kubernetes native API: works seamlessly with existing Kubernetes tool chains, such as kubectl
  • No vendor lock-in

    • Any standard Kubernetes cluster can be imported, for better multi-cloud and hybrid-cloud support (see the sketch below)
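
    A sketch of what importing a standard cluster might look like; the Cluster resource fields here are assumptions for illustration, not confirmed by this README:

    # Hypothetical sketch: field names are assumed.
    apiVersion: cluster.kubecube.io/v1
    kind: Cluster
    metadata:
      name: member-cluster-1
    spec:
      kubeconfig: <base64-encoded kubeconfig>   # credentials of the cluster being imported
      isMemberCluster: true
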
  • Others

What it does

  • Helping enterprises build container platforms

    Flattens the learning curve, helps companies build a container platform at relatively low cost, enables rapid application deployment, and assists companies in moving their applications to the cloud.

  • Resource isolation, quota, and RBAC

    Multi-tenant management provides three levels (tenants, projects, and spaces) of resource isolation, quota management, and RBAC, fully meeting the resource and RBAC requirements of enterprise-level private cloud construction.

  • Horizontal cluster scaling

    A unified container cloud management platform can manage any number of business Kubernetes clusters. Adding clusters horizontally both works around the size limits of a single Kubernetes cluster and lets different business lines run dedicated clusters.

  • Rich observability

    Supports monitoring, alerting, and log collection at both the cluster and the application level, provides rich workload monitoring metrics and cluster-level monitoring dashboards, and offers flexible log query capabilities.

Architecture

KubeCube is composed of components such as KubeCube Service, Warden, CloudShell, and AuditLog Server. Except for Warden, which is deployed in each Kubernetes cluster as an authentication proxy, all components are deployed in the management cluster.

The architecture of KubeCube, shown in the figure below, includes interactions with users, interactions with the Kubernetes API Server, Prometheus monitoring, and a self-developed log collection component.

(architecture diagram)

Quick Start

1. Environment Requirements

2. All in One

3. Quick Experience

For Developers

Contribution

Feedback & Contact

FAQ

License

Copyright 2021 KubeCube Authors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Comments
  • [Bug] The new version's ingress cannot be used directly

    Configuring it directly through the UI reports an error: ingressClassName not found. Since the company uses cloud servers, two machines with public IPs were used for the experiment: 123.123.123.111 / 10.10.10.31 (a virtual NIC binds the public IP to the host) and 123.123.123.222 / 10.10.10.32 (likewise). 10.10.10.31 was installed directly in all-in-one mode, and 10.10.10.32 joined via node-join-master. The final kubectl get node output shows 10.10.10.31 as master and 123.123.123.222 as node (presumably using the internal node IP 10.10.10.32 here would also work).

    The guess in the parentheses above has been tested: with KUBERNETES_BIND_ADDRESS="10.10.10.32", node-join-master still displays the public IP 123.123.123.222. My NIC information is:

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether fa:16:3e:a0:9e:2b brd ff:ff:ff:ff:ff:ff
        inet 10.10.10.32/24 brd 10.10.10.255 scope global dynamic eth0
           valid_lft 25917702sec preferred_lft 25917702sec
        inet 123.123.123.222/24 brd 123.123.123.255 scope global eth0:1
           valid_lft forever preferred_lft forever
        inet6 fe80::f816:3eff:fea0:9e2b/64 scope link 
           valid_lft forever preferred_lft forever
     eth0:1 is the added virtual NIC, bound to my public IP
    

    Then a deployment dep-ng (nginx -> 80) was added, then a service svc-ng (dep-ng 80 -> 80), then an ingress ing-ng (svc-ng 80, domain a.cn, forwarding rule "/"). After resolving the domain a.cn to 123.123.123.111, it could not be accessed. The ingress logs said ingressClassName could not be found. After editing the ingress ing-ng YAML to add ingressClassName: nginx, the logs showed no more errors, but the domain still cannot be accessed. (A corrected manifest is sketched below.)
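
    For reference, a minimal manifest matching the reporter's description, with the missing ingressClassName added (object and service names taken from the issue):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ing-ng
    spec:
      ingressClassName: nginx        # the field the reporter had to add by hand
      rules:
      - host: a.cn
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: svc-ng
                port:
                  number: 80
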

  • [Bug][Help] Deploying KubeCube into an existing k8s 1.22.1 cluster fails?

    k8s version: 1.21.1

    kube-apiserver.yaml was modified according to the kube-apiserver changes in the official installation doc: https://www.kubecube.io/docs/installation-guide/install-on-k8s/

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.206.0.10:6443
      creationTimestamp: null
      labels:
        component: kube-apiserver
        tier: control-plane
      name: kube-apiserver
      namespace: kube-system
    spec:
      containers:
        - command:
            - kube-apiserver
            - --audit-log-format=json
            - --audit-log-maxage=10
            - --audit-log-maxbackup=10
            - --audit-log-maxsize=100
            - --audit-log-path=/var/log/audit
            - --audit-policy-file=/etc/cube/audit/audit-policy.yaml
            - --audit-webhook-config-file=/etc/cube/audit/audit-webhook.config
            - --authentication-token-webhook-config-file=/etc/cube/warden/webhook.config
            - --advertise-address=10.206.0.10
            - --allow-privileged=true
            - --authorization-mode=Node,RBAC
            - --client-ca-file=/etc/kubernetes/pki/ca.crt
            - --enable-admission-plugins=NodeRestriction
            - --enable-bootstrap-token-auth=true
            - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
            - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
            - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
            - --etcd-servers=https://127.0.0.1:2379
            - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
            - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
            - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
            - --requestheader-allowed-names=front-proxy-client
            - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
            - --requestheader-extra-headers-prefix=X-Remote-Extra-
            - --requestheader-group-headers=X-Remote-Group
            - --requestheader-username-headers=X-Remote-User
            - --secure-port=6443
            - --service-account-issuer=https://kubernetes.default.svc.cluster.local
            - --service-account-key-file=/etc/kubernetes/pki/sa.pub
            - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
            - --service-cluster-ip-range=10.16.0.0/12
            - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
            - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
          image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 8
            httpGet:
              host: 10.206.0.10
              path: /livez
              port: 6443
              scheme: HTTPS
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 15
          name: kube-apiserver
          readinessProbe:
            failureThreshold: 3
            httpGet:
              host: 10.206.0.10
              path: /readyz
              port: 6443
              scheme: HTTPS
            periodSeconds: 1
            timeoutSeconds: 15
          resources:
            requests:
              cpu: 250m
          startupProbe:
            failureThreshold: 24
            httpGet:
              host: 10.206.0.10
              path: /livez
              port: 6443
              scheme: HTTPS
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 15
          volumeMounts:
          - mountPath: /var/log/audit
            name: audit-log
          - mountPath: /etc/cube
            name: cube
            readOnly: true
          - mountPath: /etc/ssl/certs
            name: ca-certs
            readOnly: true
          - mountPath: /etc/ca-certificates
            name: etc-ca-certificates
            readOnly: true
          - mountPath: /etc/kubernetes/pki
            name: k8s-certs
            readOnly: true
          - mountPath: /usr/local/share/ca-certificates
            name: usr-local-share-ca-certificates
            readOnly: true
          - mountPath: /usr/share/ca-certificates
            name: usr-share-ca-certificates
            readOnly: true
      hostNetwork: true
      priorityClassName: system-node-critical
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      volumes:
      - hostPath:
          path: /var/log/audit
          type: DirectoryOrCreate
        name: audit-log
      - hostPath:
          path: /etc/cube
          type: DirectoryOrCreate
        name: cube
      - hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate
        name: ca-certs
      - hostPath:
          path: /etc/ca-certificates
          type: DirectoryOrCreate
        name: etc-ca-certificates
      - hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
        name: k8s-certs
      - hostPath:
          path: /usr/local/share/ca-certificates
          type: DirectoryOrCreate
        name: usr-local-share-ca-certificates
      - hostPath:
          path: /usr/share/ca-certificates
          type: DirectoryOrCreate
        name: usr-share-ca-certificates
    status: {}
    

    [Error report]

    (screenshot omitted)
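
    A quick sanity check for this kind of failure (a sketch, not an official diagnostic): the added flags reference host files mounted from /etc/cube, so they must exist on the control-plane node before the static pod restarts:

    # Verify the files the new kube-apiserver flags point at actually exist
    ls /etc/cube/audit/audit-policy.yaml \
       /etc/cube/audit/audit-webhook.config \
       /etc/cube/warden/webhook.config
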

  • [Feature] Questions about the project and space concepts in the multi-level tenant model for enterprise organizations

    1. From the perspective of enterprise organizational structure, a project is commonly the smallest unit by which a company allocates resources.
    2. In KubeCube's organization management, a tenant is created and can be bound to N users and N projects.

    In KubeCube's resource management, a tenant is allocated resources, and the next step is to distribute those tenant resources. Logically, this should follow the organizational hierarchy: tenant resources should be allocated to the projects under that tenant.

    However, at this point the extra concepts of a "space" and of "creating spaces to allocate resources" appear, which feels strange. In an enterprise organization, one rarely says "I am in a space"; one says "I am in a project". A lead starts a project; once the project is approved, company resources are granted. It is rare that, after project approval, one creates a space (or several spaces) to distribute company resources.

    Why, during resource management, aren't tenant resources allocated directly by project? Wouldn't that make the enterprise organizational logic clearer? What is the best-practice use case for spaces and space creation?

  • [Feature] Let KubeCube integrate with users' own monitoring backends

    Is your feature request related to a problem? Please describe.

    1. Users can currently disable monitoring via hotplug, but after disabling it, logging in again returns 401;
    2. Users cannot plug in their own monitoring backends.

    Describe the solution you'd like

    1. Change hotplug to disallow disabling the monitoring feature;
    2. Or change the code and add documentation that guides users through integrating their own monitoring backend.

  • [Bug] When adding a cluster with a wrong kubeconfig, the KubeCube container panics

    A kind cluster was created without signing the external IP into its certificate. Visiting this kind cluster returns the error Get "https://192.168.4.124:57300/api?timeout=32s": x509: certificate is valid for 10.96.0.1, 172.18.0.3, not xxxx. Of course this is a problem with my configuration, but the KubeCube program should not panic. (A pre-flight check is sketched below.)

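    A cheap pre-flight check (a sketch, not KubeCube's actual validation) is to exercise the kubeconfig before registering the cluster, so a TLS/SAN mismatch surfaces in kubectl instead of panicking the KubeCube container:

    # Confirm the kubeconfig can actually reach the target API server first
    kubectl --kubeconfig ./member.kubeconfig version --request-timeout=10s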

  • [Bug] Problems with the node-add script

    Problem 1: executing the install.sh script when adding a worker node to the cluster fails:
    2021-08-18 15:45:41 INFO get docker binary from local
    /bin/mv: cannot stat '/etc/kubecube/packages/docker-ce/linux/static/stable/x86_64/docker-19.03.8.tgz': No such file or directory
    2021-08-18 15:45:41 ERROR install kubernetes failed

    The real packages directory is packages-master (a workaround is sketched below):
    [root@test-ec2 x86_64]# pwd
    /etc/kubecube/packages-master/docker-ce/linux/static/stable/x86_64
    [root@gtlm-ec2 x86_64]# ls
    docker-19.03.8.tgz

    Problem 2: when adding a new node, the step link given returns 404: https://www.kubecube.io/docs/部署指南/添加节点/#向集群添加工作节点

    Problem 3: when creating a new cluster, the process does not match the doc https://www.kubecube.io/docs/installation-guide/add-member-k8s at all. A newcomer hitting this can easily go mad!
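
    Until the installer is fixed, a possible workaround, assuming install.sh only looks under /etc/kubecube/packages, is to point that path at the directory that actually exists:

    # Workaround sketch: make the expected path resolve to the real one
    ln -s /etc/kubecube/packages-master /etc/kubecube/packages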

  • [Feature] Support configuring network policies, or network isolation between tenants

    Is your feature request related to a problem? Please describe.

    • Support configuring network policies
    • Support network isolation between different tenants

    Describe the solution you'd like

    • Tenant/project administrators can configure network policies
    • Adapt to the network policy configuration of the CNI plugin (a plain-Kubernetes sketch follows)
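
    As a sketch of what tenant isolation could build on, assuming the CNI plugin enforces plain Kubernetes NetworkPolicy (the tenant label and namespace below are illustrative assumptions):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: deny-cross-tenant
      namespace: tenant-a-ns              # assumed namespace of tenant A
    spec:
      podSelector: {}                     # all pods in this namespace
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              kubecube.io/tenant: tenant-a   # assumed label; only same-tenant traffic allowed
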
  • Feature: add debug script and makefile

    Ⅰ. Describe what this PR does: add debug script and makefile
    Ⅱ. Does this pull request fix one issue? Resolves #1
    Ⅲ. List the added test cases (unit test/integration test) if any, please explain if no tests are needed.
    Ⅳ. Describe how to verify it: follow the steps on https://www.kubecube.io/docs/developer-guide/debug/
    Ⅴ. Special notes for reviews

  • CentOS 7.4 installation fails

    On CentOS 7.4, the all-in-one installation script fails: images cannot be pulled (a DNS check is sketched after the log).

    2021-07-13 14:27:52 DEBUG enable and start docker
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /etc/systemd/system/docker.service.
    2021-07-13 14:27:57 INFO downloading images
    I0713 14:27:59.066899   15693 version.go:252] remote version is much newer: v1.21.2; falling back to: stable-1.19
    W0713 14:27:59.839835   15693 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    
     2021-07-13 14:27:59 DEBUG spin pid: 15728
    Error response from daemon: Get https://registry.cn-hangzhou.aliyuncs.com/v2/google_containers/kube-apiserver/manifests/v1.19.12: Get https://dockerauth.cn-hangzhou.aliyuncs.com/auth?scope=repository%3Agoogle_containers%2Fkube-apiserver%3Apull&service=registry.aliyuncs.com%3Acn-hangzhou%3A26842: dial tcp: lookup dockerauth.cn-hangzhou.aliyuncs.com on 10.198.141.241:53: no answer from DNS server
    2021-07-13 14:32:41 ERROR install kubernetes failed
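
    The failure is a DNS lookup error rather than a registry problem, so checking name resolution on the host narrows it down (a troubleshooting sketch):

    # The pull fails at DNS: verify the host can resolve the registry auth endpoint
    nslookup dockerauth.cn-hangzhou.aliyuncs.com
    # if this fails, point /etc/resolv.conf at a working nameserver and re-run the installer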
    
  • [Bug] I installed all in one, but there are some problems

    (screenshot omitted)

    Internal error occurred: failed calling webhook "vresourcequota.kb.io": failed to call webhook: Post "https://warden.kubecube-system.svc:8443/validate-core-kubernetes-v1-resource-quota?timeout=10s": service "warden" not found
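
    The error means the admission webhook points at a Service that does not exist (or not yet); a quick check with standard kubectl (the pod label is an assumption for illustration):

    # The webhook targets warden.kubecube-system.svc:8443; confirm the Service exists
    kubectl get svc warden -n kubecube-system
    # and that warden pods are actually running behind it
    kubectl get pods -n kubecube-system -l app=warden   # label assumed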

  • Enhance: update wechat image

    Ⅰ. Describe what this PR does: update the wechat image in the README.

  • Optimise getPostObjectName func

  • [Bug] The deployment script lacks necessary abort checks

    During KubeCube deployment, quite a few steps continue running after the previous step has failed instead of aborting, which leads to failures later on.

    For example, after helm fails, the shell should terminate rather than move on to the next step (a fail-fast sketch follows): https://github.com/kubecube-io/kubecube-installer/blob/main/install_kubecube.sh#L121
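
    A minimal sketch of the requested behavior (not the installer's actual code; the chart path is illustrative): fail fast globally with set -e, or check the critical step explicitly:

    #!/usr/bin/env bash
    set -euo pipefail   # abort on the first failing command

    # explicit check around the critical step, as requested in the report
    if ! helm install kubecube ./charts/kubecube; then
      echo "helm install failed, aborting" >&2
      exit 1
    fi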

  • [Bug] Installed step by step per the docs, from 1.2 to 1.4: [logseer] is really unusable

    Continuing the tour: in [Hotplug] hotplugs.hotplug.kubecube.io v1, both common and pivot-cluster are enabled

    spec:
      component:
        -
          name: audit
          status: enabled
        -
          env: "address: elasticsearch-master-headless.elasticsearch.svc\n"
          name: logseer
          namespace: logseer
          pkgName: logseer-v1.0.0.tgz
          status: enabled
        -
          env: "clustername: \"{{.cluster}}\"\n"
          name: logagent
          namespace: logagent
          pkgName: logagent-v1.0.0.tgz
          status: enabled
        -
          name: elasticsearch
          namespace: elasticsearch
          pkgName: elasticsearch-7.8.1.tgz
          status: enabled
        -
          env: "grafana:\n  enabled: false\nprometheus:\n  prometheusSpec:\n    externalLabels:\n      cluster: \"{{.cluster}}\"\n    remoteWrite:\n    - url: http://172.31.0.171:31291/api/v1/receive\n"
          name: kubecube-monitoring
          namespace: kubecube-monitoring
          pkgName: kubecube-monitoring-15.4.12.tgz
          status: enabled
        -
          name: kubecube-thanos
          namespace: kubecube-monitoring
          pkgName: thanos-3.18.0.tgz
          status: enabled
    
    spec:
      component:
        -
          env: "address: elasticsearch-master.elasticsearch.svc \n"
          name: logseer
          status: enabled
        -
          env: "grafana:\n  enabled: true\nprometheus:\n  prometheusSpec:\n    externalLabels:\n      cluster: \"{{.cluster}}\"\n    remoteWrite:\n    - url: http://kubecube-thanos-receive:19291/api/v1/receive\n"
          name: kubecube-monitoring
        -
          env: "receive:\n  tsdbRetention: 7d\n  replicaCount: 1\n  replicationFactor: 1\n"
          name: kubecube-thanos
          status: enabled
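
    For readability, the escaped kubecube-monitoring env in the first spec block above unescapes to the following YAML:

    grafana:
      enabled: false
    prometheus:
      prometheusSpec:
        externalLabels:
          cluster: "{{.cluster}}"
        remoteWrite:
        - url: http://172.31.0.171:31291/api/v1/receive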
    

    With the default configuration, both elasticsearch-master-headless.elasticsearch.svc and elasticsearch-master.elasticsearch.svc were tried for the address; in theory either should work. It still failed, so debugging began.

    Problem 1: querying logs reports "request elasticsearch fail"
    Problem 2: audit shows no data (solved after debugging). The process: inspecting the container logs of the running logseer pod shows the following

    2022-09-24 20:40:47.299 [http-nio-8080-exec-10]    c.n.logseer.engine.impl.ElasticSearchEngineImpl:52   INFO  - [getLogs] request to es, url: /*/_search?ignore_unavailable=true, requestBody: {
        "size": 50,
        "from": 0,
        "query": {
          "bool" : {
            "filter" : [
                {"term": {"cluster_name" : "pivot-cluster"}},
                {"term": {"namespace" : "wordpress"}}
            ],
            "must" : [
              {
                "query_string" : {
                  "default_field" : "message",
                  "query" : "elasticsearch-master.elasticsearch.svc:9200"
                }
              },
              {
                "range" : {
                  "@timestamp" : {
                    "gte" : 1664019350313,
                    "lte" : 1664022950313,
                    "format": "epoch_millis"
                  }
                }
              }
            ]
          }
        },
        "aggs": {
          "2": {
            "date_histogram": {
              "field": "@timestamp",
              "interval": "1m",
              "time_zone": "Asia/Shanghai",
              "min_doc_count": 1
            }
          }
        },
        "highlight" : {
          "fields" : {
            "message" : {}
          },
          "fragment_size": 2147483647
        },
        "sort" : [
          { "@timestamp" : "asc"}
        ],
        "_source" : {
          "excludes": "tags"
        },
        "timeout": "30000ms"
    } 
    2022-09-24 20:40:48.302 [http-nio-8080-exec-10]    c.n.logseer.engine.impl.ElasticSearchEngineImpl:65   ERROR - request elasticsearch exception: {} 
    java.net.ConnectException: null
    	at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:959)
    	at org.elasticsearch.client.RestClient.performRequest(RestClient.java:233)
    	at com.netease.logseer.engine.impl.ElasticSearchEngineImpl.getLogs(ElasticSearchEngineImpl.java:53)
    	at com.netease.logseer.service.impl.LogSearchServiceImpl.commonSearch(LogSearchServiceImpl.java:154)
    	at com.netease.logseer.service.impl.LogSearchServiceImpl.searchLog(LogSearchServiceImpl.java:79)
    	at com.netease.logseer.api.controller.LogSearchController.searchLog(LogSearchController.java:50)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
    	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133)
    	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:116)
    	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)
    	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)
    	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
    	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:963)
    	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:897)
    	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
    	at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
    	at javax.servlet.http.HttpServlet.service(HttpServlet.java:660)
    	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
    	at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
    	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
    	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
    	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    	at com.netease.logseer.api.filter.FillWebContextHolderFilter.doFilter(FillWebContextHolderFilter.java:35)
    	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    	at com.netease.logseer.api.filter.AuthFilter.doFilter(AuthFilter.java:92)
    	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
    	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    	at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:105)
    	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    	at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:81)
    	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197)
    	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    	at org.springframework.boot.web.support.ErrorPageFilter.doFilter(ErrorPageFilter.java:115)
    	at org.springframework.boot.web.support.ErrorPageFilter.access$000(ErrorPageFilter.java:59)
    	at org.springframework.boot.web.support.ErrorPageFilter$1.doFilterInternal(ErrorPageFilter.java:90)
    	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    	at org.springframework.boot.web.support.ErrorPageFilter.doFilter(ErrorPageFilter.java:108)
    	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
    	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
    	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:528)
    	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
    	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
    	at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:678)
    	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
    	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
    	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798)
    	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
    	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:810)
    	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)
    	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: java.net.ConnectException: null
    	at org.apache.http.nio.pool.RouteSpecificPool.timeout(RouteSpecificPool.java:168)
    	at org.apache.http.nio.pool.AbstractNIOConnPool.requestTimeout(AbstractNIOConnPool.java:561)
    	at org.apache.http.nio.pool.AbstractNIOConnPool$InternalSessionRequestCallback.timeout(AbstractNIOConnPool.java:822)
    	at org.apache.http.impl.nio.reactor.SessionRequestImpl.timeout(SessionRequestImpl.java:183)
    	at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processTimeouts(DefaultConnectingIOReactor.java:210)
    	at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:155)
    	at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348)
    	at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:192)
    	at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
    
    

    A null pointer exception appears:

    • either the data fetched from ES is empty,
    • or the concatenated address is empty; after all, no request log shows the target HOST address, only request to es, url: /*/_search?ignore_unavailable=true

    Entering the logseer container and running curl http://elasticsearch-master.elasticsearch.svc:9200//_search?ignore_unavailable=true directly returns a large amount of data, proving that connectivity to ES is fine (though without the query parameters; adding them might well return the empty-result error). Suspecting the environment variable was not being picked up, I kept adjusting the env format, the configMap, and the internal config file, hoping to see a log line like request to es, url: http://elasticsearch-master.elasticsearch.svc:9200//_search?ignore_unavailable=true. Is the address: elasticsearch-master.elasticsearch.svc variable simply not being read? In the end I gave up; perhaps the log line is just written that way.
    Moving on to the filebeat configMap of logagent: it contains output.elasticsearch: hosts: [elasticsearch-master.elasticsearch.svc:30435], which is not reachable at all. Changed it to output.elasticsearch: hosts: [elasticsearch-master.elasticsearch.svc:9200] and tried again: still not working (though at least filebeat stopped reporting connection errors). Re-read the documentation without finding anything wrong, so next, fix audit. I had installed the internal ES anyway, but let's configure it as an external one:

    kubectl edit deploy audit -n kubecube-system
    env:
    - name: AUDIT_WEBHOOK_HOST
      value: http://elasticsearch-master.elasticsearch:9200
    - name: AUDIT_WEBHOOK_INDEX
      value: audit
    - name: AUDIT_WEBHOOK_TYPE
      value: logs
    

    Audit works now.

    But logs are still broken. The only option left seems to be exposing ES port 9200 and using a tool to check whether the data was never uploaded or just cannot be queried. Roughly, the following possible problems have been identified:

    • possibly logseer fails to read the environment variable, causing the error
    • possibly the filebeat configuration is wrong and uploads fail, so queries return empty data (an empty result alone should not cause an error); in any case the freshly installed filebeat does have a connect error that needs fixing, which is puzzling (the attempted fix is sketched below)
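
    The filebeat change attempted above, as the relevant ConfigMap fragment (a sketch showing only the output section):

    # before: port 30435 was unreachable from the pod
    output.elasticsearch:
      hosts: ["elasticsearch-master.elasticsearch.svc:9200"]   # in-cluster ES HTTP port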

    Among the problems found so far, the ripple and filebeat configuration looks the most suspicious: a new log collection task was created, yet no file changes appeared under /etc/filebeat/inputs.d. It would also help to fix the empty-response error so that the failure is indicated more clearly; for now the only way forward is to read the source code.
