Trusted Certificate Service for Kubernetes Platform

Overview

Trusted Certificate Service (TCS) is a Kubernetes certificate signing solution that uses the security capabilities provided by Intel® SGX. The signing key is stored and used inside the SGX enclave(s) and is never stored in the clear anywhere in the system. TCS is implemented as a cert-manager external issuer and supports both the cert-manager and Kubernetes certificate signing APIs.

Getting started

All the examples on this page use self-signed CA certificates. If you are looking for more advanced use cases (e.g., Istio integration), please check the sample use cases.

Prerequisites

Prerequisites for building and running Trusted Certificate Service:

  • Kubernetes cluster with one or more nodes with Intel® SGX supported hardware
  • Intel® SGX device plugin for Kubernetes
  • Intel® SGX AESM daemon
  • cert-manager. The cmctl utility is also used later in the examples, so you may want to install it as well.
  • Linux kernel version 5.11 or later on the host (in tree SGX driver)
  • git, or similar tool, to obtain the source code
  • Docker, or similar tool, to build container images
  • Container registry (local or remote)
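
A quick way to sanity-check these prerequisites (the device paths and the sgx.intel.com/capable label are the usual defaults from the in-tree driver and the SGX device plugin/NFD; adjust for your setup):

```shell
# Host-side checks (run on the SGX node)
uname -r                                  # expect 5.11 or later
ls /dev/sgx_enclave /dev/sgx_provision    # device nodes created by the in-tree SGX driver
systemctl status aesmd                    # Intel SGX AESM daemon should be active

# Cluster-side checks
kubectl get pods -n cert-manager                  # cert-manager pods should be Running
kubectl get nodes -l sgx.intel.com/capable=true   # nodes advertised as SGX-capable
```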

Installing with Helm

If you want to use Helm to install TCS, see the document here.

Installing with source code

This section covers how to obtain the source code, build it, and install it.

  1. Getting the source code
git clone https://github.com/intel/trusted-certificate-issuer.git
  2. Build and push the container image

Choose a container registry to push the generated image to, using the REGISTRY make variable. The registry should be reachable from the Kubernetes cluster.

NOTE: By default, the enclave signing is done using a private key auto-generated by the TCS issuer. If you want to integrate your own signing tool, modify or replace the enclave-config/sign-enclave.sh script accordingly before building the Docker image. Refer to the Intel(R) SGX SDK developer reference for more details about enclave signing.

$ cd trusted-certificate-issuer
$ export REGISTRY="localhost:5000" # docker registry to push the container image
$ make docker-build
$ make docker-push
  3. Deploy custom resource definitions (CRDs)
# set the KUBECONFIG based on your configuration
export KUBECONFIG="$HOME/.kube/config"
make install # Install CRDs
  4. Make the deployment
make deploy

By default, the tcs-issuer namespace is used for the deployment.

# Ensure that the pod is in Running state
$ kubectl get po -n tcs-issuer
NAME                              READY   STATUS    RESTARTS   AGE
tcs-controller-5dd5c46b44-4nz9f   1/1     Running   0          30m

Create an Issuer

Once the deployment is up and running, you are ready to provision TCS issuers using either a namespace-scoped TCSIssuer or a cluster-scoped TCSClusterIssuer resource.

The example below creates a TCS issuer named my-ca for the sandbox namespace:

kubectl create ns sandbox
cat <<EOF |kubectl create -f -
apiVersion: tcs.intel.com/v1alpha1
kind: TCSIssuer
metadata:
    name: my-ca
    namespace: sandbox
spec:
    secretName: my-ca-cert
EOF

Successful deployment looks like this:

$ kubectl get tcsissuers -n sandbox
NAME    AGE    READY   REASON      MESSAGE
my-ca   2m     True    Reconcile   Success

$ kubectl get secret my-ca-cert -n sandbox
NAME                  TYPE                                  DATA   AGE
my-ca-cert            kubernetes.io/tls                     2      3h14m

The above issuer creates and stores its private key inside the SGX enclave, and the root certificate is saved as a Kubernetes Secret with the name specified in spec.secretName, under the issuer's namespace.

Typically the issuer secret (my-ca-cert in our case) contains both the certificate and the private key. In the Trusted Certificate Service case, however, the private key is empty since the key is stored and used inside the SGX enclave. You can verify that the private key in the secret is empty with the following command:

kubectl get secrets -n sandbox my-ca-cert -o jsonpath='{.data.tls\.key}'
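
As a convenience check (plain kubectl plus openssl, nothing TCS-specific), you can also decode the tls.crt entry to confirm the secret holds a usable self-signed root certificate:

```shell
# Decode the CA certificate from the issuer secret and print its basic fields.
# tls.key in the same secret is empty: the key never leaves the SGX enclave.
kubectl get secret my-ca-cert -n sandbox -o jsonpath='{.data.tls\.crt}' \
  | base64 -d \
  | openssl x509 -noout -subject -issuer -dates
```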

Create certificates

Creating and signing certificates can be done by using cert-manager Certificate or Kubernetes CertificateSigningRequest APIs.

Using cert-manager Certificate

This example shows how to request an X.509 certificate signed by the Trusted Certificate Service using the cert-manager Certificate API. Create a cert-manager Certificate object and set spec.issuerRef to the TCSIssuer (or TCSClusterIssuer).

cat <<EOF |kubectl create -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-certificate
  namespace: sandbox
spec:
  # The secret name to store the signed certificate
  secretName: demo-cert-tls
  # Common Name
  commonName: intel.sgx.demo
  # Ensure the issuerRef is set to the right issuer
  issuerRef:
    group: tcs.intel.com # TCS issuer API group
    kind: TCSIssuer      # Configured issuer type
    name: my-ca          # Configured issuer name
EOF

cert-manager creates a corresponding CertificateRequest for the Certificate above. The CertificateRequest has to be approved so that the TCS controller can sign the request in its next reconcile loop.

$ kubectl get certificaterequest -n sandbox
NAME                   APPROVED   DENIED   READY   ISSUER   REQUESTOR                                         AGE
my-certificate-nljcz   False               False   my-ca    system:serviceaccount:cert-manager:cert-manager   1m

A privileged user needs to approve the CertificateRequest with cert-manager's cmctl utility:

$ cmctl approve my-certificate-nljcz -n sandbox

Check that the certificate is exported to the secret referenced in spec.secretName.

$ kubectl get certificates,secret -n sandbox
NAME                                         READY   SECRET          AGE
certificate.cert-manager.io/my-certificate   True    demo-cert-tls   2m1s

NAME                         TYPE                                  DATA   AGE
secret/default-token-69dv6   kubernetes.io/service-account-token   3      3h15m
secret/demo-cert-tls         kubernetes.io/tls                     2      2m1s
secret/my-ca-cert            kubernetes.io/tls                     2      3h14m

Using Kubernetes CSR

This example shows how to request an X.509 certificate signed by the Trusted Certificate Service using a Kubernetes CSR.

First, generate a PEM-encoded private key (privkey.pem) and certificate signing request (csr.pem) using the openssl tool:

$ openssl req -new -nodes -newkey rsa:3072 -keyout privkey.pem -out ./csr.pem -subj "/O=Foo Company/CN=foo.bar.com"
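
Before wrapping the CSR in a Kubernetes object, you can optionally sanity-check its subject and self-signature with openssl (not required by TCS):

```shell
# Print the subject and verify the CSR's self-signature
openssl req -in csr.pem -noout -subject -verify
```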

Create a Kubernetes CertificateSigningRequest using the CSR (csr.pem) generated above. The spec.signerName field must refer to the TCS issuer we configured earlier, in the form <issuer-kind>.<issuer-group>/<issuer-namespace>.<issuer-name>. In this example the signer name is tcsissuer.tcs.intel.com/sandbox.my-ca.

Note: in the case of a tcsclusterissuer, the issuer namespace is the namespace of the Trusted Certificate Service.

cat <<EOF |kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: test-csr
spec:
  groups:
  - system:authenticated
  request: $(cat csr.pem | base64 | tr -d '\n')
  signerName: tcsissuer.tcs.intel.com/sandbox.my-ca
  usages:
  - client auth
EOF

Now test-csr is in the Pending state, waiting for approval.

$ kubectl get certificatesigningrequests
NAME       AGE   SIGNERNAME                              REQUESTOR          CONDITION
test-csr   46s   tcsissuer.tcs.intel.com/sandbox.my-ca   kubernetes-admin   Pending

A privileged user needs to approve test-csr with the following command:

# Approve the CSR so the TCS controller generates the certificate
kubectl certificate approve test-csr

Once the request is approved, the Trusted Certificate Service signs it. At this point the CSR contains the requested certificate, signed by the CA using the private key stored inside the SGX enclave.

You can examine the CSR with the following command:

$ kubectl describe csr

Name:         test-csr
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certificates.k8s.io/v1","kind":"CertificateSigningRequest","metadata":{"annotations":{},"name":"test-csr"},"spec":{"groups":["system:authenticated"],"request":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJRGNUQ0NBZGtDQVFBd0xERVVNQklHQTFVRUNnd0xSbTl2SUVOdmJXRndibmt4RkRBU0JnTlZCQU1NQzJadgpieTVpWVhJdVkyOXRNSUlCb2pBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVk4QU1JSUJpZ0tDQVlFQTZhNzkvTmZLCmdrYzQ5dXd6TVFUajBwZzhuZjZ3VU5tcmNVaG5IUDNhUitDYjZKcE0wOVF3RHBmblI2VU13ejZFVy9RSis3WVQKMndLUFJTRVZqZ3owT29NdXh0c0tScGN2VCtWaDZkb3JjSkU0ZTdjQ2FWK1ZKN0pQRGtwYzdFNSt6VCtncVlRKwptMWhjS0FmTEk0VEpZNzJZR2MzTWt5QkVqRzNsKzl3emxHNVlpZEduYVFjNDhMNUJQSXFxOEdKelpWSTkvQWxLClVDVjcwM2pGQnpKdTBEbFpTQWd2WEo1RUhNbWVhaFBQYTFOV2dkM29mQ2FUcTlnM0xaSTBDejdWbndOK0l1bzEKNGpRcE1zNzVQTFZVUTQ2SEZ0YUxJTWZPNDlkZk94SUwwNlkwZG1XNUc0R05zNUR4SkhtYm11QlQ1NGMrUm5MUQoyVldJL2VRS2xQQW5Sdk00SmpEM3hEcENvOGViSE9nS2RsRU9MTkFPTEk0L2VMUG1GcXlTUGxuY2RTZlFqc2UvCkJQOEpuQk9Xa0xpSUZ4bzBwT1lrTUFDaHhWdDJkdURLcldRZm1JSkhUUSs0Q05OZjhlanZOZkZCY0pmNllldHUKRnlkNnA4WmwrYkV2TldYbDBKeGNQNWlFVGFYWkZqblJqMWxzZWVSbWo3OGFyRDZCUkhTTFlsM0pBZ01CQUFHZwpBREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBWUVBMXYrOURlUE5XOER6Z2twVzBhU1czdW1xR05xc05zaWNhQjc1Cjc3UGsyRnNBMTMya2JWTXBBY2NCRzc1WGh4T0VkNFNYdTJ0eVI1MGxOMUpaNnJldzY5b1dUYWZTTTVXNm00RFAKcE1tVjRJbTJiajlUTUhYeHdXVjdXVk5JL2dQK1BFRDVROVJMNy82Sjh2VnV5aFhZaTAyc2NkampKaStIT0M4Ywo1TFpLem5TQUhtcmZEVGlveG5ydUNqY1ZEZlFlSGlJMkw1SW94aXAwUmt5L0Y1UkhwTjRyMHFQS25Na2F3enRYClV3alB6Nk9uWGVPK1EvVGZyRm5ka2V3OCtsSFc2akxneXNUNlU3SjdmdjVuL1lSUXdYSHJadi9LNFVneW9zU3oKZy9PSkZoOVpyWjl6WFBhT01sN1pLYnlUUXE2NGtMSmFEQys0eWIycXlUT1hMUm1xK0Y1MWc2a0tJdDdMWXdtMQpjR0N3WTc2WmFHMm9hVkQxRVNQSWtpc0I4U01ncVNEajlhQjFxRDJ0Y1E4RGxoV1o3dEdDd3M5VC9RUDlvQnpsCjc5S0g3Qnc1QnVSVFlRT0srMTdJSWUrNUx0YVFzS1dpczBsaGtvQ1R3TjdUS2FnQ1dLWWk0RE16em1wVlNvbTEKaVphb21nUDJIKy8yQ3RoOUNJN1dwVWR5WklkQgotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K","signerName":"tcsissuer.tcs.intel.com/sandbox.my-ca","usages":["clie
nt auth"]}}

CreationTimestamp:  Mon, 24 Jan 2022 16:11:10 +0200
Requesting User:    kubernetes-admin
Signer:             tcsissuer.tcs.intel.com/sandbox.my-ca
Status:             Approved,Issued
Subject:
         Common Name:    foo.bar.com
         Serial Number:  
         Organization:   Foo Company
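
If you want the signed certificate itself, it can be extracted from the CSR object's status.certificate field (standard Kubernetes CSR behavior, base64-encoded) and inspected with openssl:

```shell
# Extract the issued certificate from the approved CSR and show its subject/issuer
kubectl get csr test-csr -o jsonpath='{.status.certificate}' | base64 -d > cert.pem
openssl x509 -in cert.pem -noout -subject -issuer
```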

Deployment in Azure

You can also deploy TCS in Azure by following the instructions here.

Sample use cases

Refer to more example use cases related to the Istio service mesh and Trusted Certificate Service.

Limitations

  • This version of the software is pre-production release and is meant for evaluation and trial purposes only.
  • The certificate authority (CA) private key transport method (via QuoteAttestation custom resource) does not guarantee any authenticity, only confidentiality, and therefore cannot protect from attacks like key substitution or key replay.
Comments
  • Failed to open Intel SGX device

    I'm trying to deploy tcs-issuer in a k8s cluster, but I got the following error:

    $ kubectl logs tcs-controller-79c499fb98-v5kv8
    2022-07-28T14:13:05.734Z        INFO    controller-runtime.metrics      metrics server is starting to listen    {"addr": ":8082"}
    [get_driver_type edmm_utility.cpp:111] Failed to open Intel SGX device.
    [get_driver_type /home/sgx/jenkins/ubuntuServer2004-release-build-trunk-215/build_target/PROD/label/Builder-UbuntuSrv20/label_exp/ubuntu64/linux-trunk-opensource/psw/urts/linux/edmm_utility.cpp:111] Failed to open Intel SGX device.
    2022-07-28T14:13:05.929Z        LEVEL(-2)       SGX     Failed to configure command
    2022-07-28T14:13:05.929Z        ERROR   setup   SGX initialization      {"error": "failed to initialize PKCS#11 library: pkcs11: 0x30: CKR_DEVICE_ERROR", "errorVerbose": "pkcs11: 0x30: CKR_DEVICE_ERROR\nfailed to initialize PKCS#11 library"}
    

    I think all the prerequisites are working correctly.

    $ kubectl describe node zhenhui-control-plane | grep sgx.intel
                        sgx.intel.com/capable=true
                        nfd.node.kubernetes.io/extended-resources: sgx.intel.com/epc
      sgx.intel.com/enclave:    110
      sgx.intel.com/epc:        4261412864
      sgx.intel.com/provision:  110
      sgx.intel.com/enclave:    110
      sgx.intel.com/epc:        4261412864
      sgx.intel.com/provision:  110
      sgx.intel.com/enclave    1           1
      sgx.intel.com/epc        512Ki       512Ki
      sgx.intel.com/provision  0           0
    
    $ ~/zhenhui/intel-device-plugins-for-kubernetes# sudo service aesmd status
    ● aesmd.service - Intel(R) Architectural Enclave Service Manager
         Loaded: loaded (/lib/systemd/system/aesmd.service; enabled; vendor preset: enabled)
         Active: active (running) since Thu 2022-07-28 17:50:07 CST; 4h 54min ago
       Main PID: 2580841 (aesm_service)
          Tasks: 4 (limit: 304204)
         Memory: 2.3M
         CGroup: /system.slice/aesmd.service
                 └─2580841 /opt/intel/sgx-aesm-service/aesm/aesm_service
    
    7月 28 17:50:07 i10 systemd[1]: Starting Intel(R) Architectural Enclave Service Manager...
    7月 28 17:50:07 i10 usermod[2580804]: add 'aesmd' to group 'sgx_prv'
    7月 28 17:50:07 i10 usermod[2580804]: add 'aesmd' to shadow group 'sgx_prv'
    7月 28 17:50:07 i10 aesm_service[2580834]: aesm_service: warning: Turn to daemon. Use "--no-daemon" option to execute i>
    7月 28 17:50:07 i10 systemd[1]: Started Intel(R) Architectural Enclave Service Manager.
    7月 28 17:50:07 i10 aesm_service[2580841]: The server sock is 0x5652f0dfe400
    
  • QuoteAttestation: Add new field for request type

    New field 'type' is added to hold the type of attestation request. This supports initiating QuoteAttestation for CSR-only quote validation; in that case, the quote attestation controller does not proceed with key wrapping.

  • internal/sgx: do not share quote between multiple issuers

    CTK destroys the quote public key and the wrapped key after a successful unwrap, so the same quote cannot be used for unwrapping other keys. We therefore have to use a signer-specific quote.

  • controllers/csr: pass full certificate chain as part of certificate

    Add a command-line option to configure the CSR controller such that it fills the full certificate chain (signed cert + CA cert) in status.certificate on a successful certificate signing.

    The full cert chain with the root certificate is expected by Istio v1.12; otherwise this feature should not be enabled.

  • add codegen comments quoteattestation API

    Codegen tools like deepcopy-gen and client-gen are used to generate the client API in the SDS server. We would also like to reuse the quoteattestation API of the TCS issuer, so some codegen comments need to be added to the quoteattestation API.

  • Pre-built container image is not yet available

    Until the container image is available in a public registry, you need to build the image yourself.

    Once the pre-built image is available, we will close this issue.

  • TCS panicked when example CA cert and Key were mismatched

    Using the latest main branch (with the label PR added), the TCS controller was observed to be restarting due to a panic.

    The issue apparently was a misconfiguration of KMRA where one of the example Cert and Key were mismatched (the problem went away after removing and reconfiguring KMRA with a matching CA Cert and Key).

    But, in the interest of not having TCS crash, here is the TCS log (a few extra debug log entries starting with "Provision Signer ..." were added to assist in finding the configuration problem).

    $ kubectl -n tcs-issuer logs -p --timestamps --tail=1000 $(kubectl -n tcs-issuer get pod -l control-plane=tcs-issuer --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
    Defaulted container "tcs-issuer" out of: tcs-issuer, init (init)
    2022-08-09T17:45:08.911806595Z 1.6600671089113495e+09   INFO    controller-runtime.metrics      Metrics server is starting to listen    {"addr": ":8082"}
    2022-08-09T17:45:09.109855651Z 1.6600671091095202e+09   INFO    SGX     Initiating p11Session...
    2022-08-09T17:45:09.110906059Z 1.6600671091106207e+09   LEVEL(-2)       setup   SGX initialization SUCCESS
    2022-08-09T17:45:09.110935704Z 1.6600671091107883e+09   INFO    setup   starting manager
    2022-08-09T17:45:09.111970007Z 1.6600671091115575e+09   INFO    Starting server {"kind": "health probe", "addr": "[::]:8083"}
    2022-08-09T17:45:09.111998044Z 1.660067109111566e+09    INFO    Starting server {"path": "/metrics", "kind": "metrics", "addr": "[::]:8082"}
    2022-08-09T17:45:09.112007150Z I0809 17:45:09.111487       1 leaderelection.go:248] attempting to acquire leader lease tcs-issuer/bb9c3a43.sgx.intel.com...
    2022-08-09T17:45:25.593663301Z I0809 17:45:25.593327       1 leaderelection.go:258] successfully acquired lease tcs-issuer/bb9c3a43.sgx.intel.com
    2022-08-09T17:45:25.594053181Z 1.6600671255934465e+09   DEBUG   events  Normal  {"object": {"kind":"ConfigMap","namespace":"tcs-issuer","name":"bb9c3a43.sgx.intel.com","uid":"c46b198a-87f8-4e73-a6ca-4096d3d81a9b","apiVersion":"v1","resourceVersion":"8088218"}, "reason": "LeaderElection", "message": "tcs-controller-6d856b7976-jb58f_a5698a2b-9b17-4eea-93a2-f9bfbfb87019 became leader"}
    2022-08-09T17:45:25.594093014Z 1.6600671255936673e+09   DEBUG   events  Normal  {"object": {"kind":"Lease","namespace":"tcs-issuer","name":"bb9c3a43.sgx.intel.com","uid":"ab98d264-47ce-4ad4-b8b9-08fa4a434ec1","apiVersion":"coordination.k8s.io/v1","resourceVersion":"8088219"}, "reason": "LeaderElection", "message": "tcs-controller-6d856b7976-jb58f_a5698a2b-9b17-4eea-93a2-f9bfbfb87019 became leader"}
    2022-08-09T17:45:25.594102592Z 1.6600671255936642e+09   INFO    controller.tcsissuer    Starting EventSource    {"reconciler group": "tcs.intel.com", "reconciler kind": "TCSIssuer", "source": "kind source: *v1alpha1.TCSIssuer"}
    2022-08-09T17:45:25.594110945Z 1.6600671255937197e+09   INFO    controller.certificatesigningrequest    Starting EventSource    {"reconciler group": "certificates.k8s.io", "reconciler kind": "CertificateSigningRequest", "source": "kind source: *v1.CertificateSigningRequest"}
    2022-08-09T17:45:25.594118719Z 1.6600671255937765e+09   INFO    controller.tcsissuer    Starting Controller     {"reconciler group": "tcs.intel.com", "reconciler kind": "TCSIssuer"}
    2022-08-09T17:45:25.594126397Z 1.660067125593806e+09    INFO    controller.certificatesigningrequest    Starting Controller     {"reconciler group": "certificates.k8s.io", "reconciler kind": "CertificateSigningRequest"}
    2022-08-09T17:45:25.596851825Z 1.6600671255963366e+09   INFO    controller.tcsclusterissuer     Starting EventSource    {"reconciler group": "tcs.intel.com", "reconciler kind": "TCSClusterIssuer", "source": "kind source: *v1alpha1.TCSClusterIssuer"}
    2022-08-09T17:45:25.596881268Z 1.660067125596285e+09    INFO    controller.certificaterequest   Starting EventSource    {"reconciler group": "cert-manager.io", "reconciler kind": "CertificateRequest", "source": "kind source: *v1.CertificateRequest"}
    2022-08-09T17:45:25.596889955Z 1.6600671255965576e+09   INFO    controller.tcsclusterissuer     Starting Controller     {"reconciler group": "tcs.intel.com", "reconciler kind": "TCSClusterIssuer"}
    2022-08-09T17:45:25.596897302Z 1.6600671255965781e+09   INFO    controller.certificaterequest   Starting Controller     {"reconciler group": "cert-manager.io", "reconciler kind": "CertificateRequest"}
    2022-08-09T17:45:25.696475853Z 1.6600671256960783e+09   INFO    controller.tcsissuer    Starting workers        {"reconciler group": "tcs.intel.com", "reconciler kind": "TCSIssuer", "worker count": 1}
    2022-08-09T17:45:25.798160624Z 1.6600671257977512e+09   INFO    SGX     Generating quote keypair...     {"forSigner": "tcsissuer.tcs.intel.com/sandbox.my-ca"}
    2022-08-09T17:45:25.799407530Z 1.6600671257990184e+09   INFO    controller.certificatesigningrequest    Starting workers        {"reconciler group": "certificates.k8s.io", "reconciler kind": "CertificateSigningRequest", "worker count": 1}
    2022-08-09T17:45:25.799463197Z 1.6600671257991374e+09   INFO    controller.tcsclusterissuer     Starting workers        {"reconciler group": "tcs.intel.com", "reconciler kind": "TCSClusterIssuer", "worker count": 1}
    2022-08-09T17:45:25.799472749Z 1.6600671257991598e+09   INFO    controller.certificaterequest   Starting workers        {"reconciler group": "cert-manager.io", "reconciler kind": "CertificateRequest", "worker count": 1}
    2022-08-09T17:45:25.799549304Z 1.660067125799438e+09    INFO    controllers.CertificateRequest  Reconcile       {"req": "inteldeviceplugins-system/inteldeviceplugins-serving-cert-2bz9w"}
    2022-08-09T17:45:28.014446204Z 1.6600671280140276e+09   INFO    SGX     Generating Quote...
    2022-08-09T17:45:28.085921380Z 1.6600671280855465e+09   INFO    controller.tcsissuer    Initiating quote attestation    {"reconciler group": "tcs.intel.com", "reconciler kind": "TCSIssuer", "name": "my-ca", "namespace": "sandbox", "signer": "tcsissuer.tcs.intel.com/sandbox.my-ca"}
    2022-08-09T17:45:28.573114837Z 1.6600671285726042e+09   INFO    SGX     Provision Signer        {"signerName": "tcsissuer.tcs.intel.com/sandbox.my-ca"}
    2022-08-09T17:45:28.573149892Z 1.660067128572742e+09    INFO    SGX     Provision Signer signers were nil       {"signerName": "tcsissuer.tcs.intel.com/sandbox.my-ca"}
    2022-08-09T17:45:28.715584362Z 1.6600671287152412e+09   INFO    SGX     Unwrapped SWK Key successfully
    2022-08-09T17:45:28.916326348Z 1.6600671289159918e+09   INFO    SGX     Unwrapped PWK Key successfully
    2022-08-09T17:45:29.025658647Z 1.6600671290254097e+09   INFO    SGX     Unwrapped Public Key successfully
    2022-08-09T17:45:29.622418096Z 1.6600671296220675e+09   INFO    SGX     Provision Signer signers validate CA failed     {"error": "mismatched CA key and certificate"}
    2022-08-09T17:45:29.686201918Z panic: interface conversion: interface is nil, not crypto11.Signer
    2022-08-09T17:45:29.686244322Z
    2022-08-09T17:45:29.686256345Z goroutine 448 [running]:
    2022-08-09T17:45:29.686267707Z github.com/intel/trusted-certificate-issuer/internal/sgx.(*SgxContext).removeSignerInToken(0xc000b92780, 0xc0002fe230)
    2022-08-09T17:45:29.686279177Z  /workspace/internal/sgx/sgx.go:255 +0xc5
    2022-08-09T17:45:29.686293447Z github.com/intel/trusted-certificate-issuer/internal/sgx.(*SgxContext).ProvisionSigner(0xc000b92780, {0xc000c11b60, 0x804}, {0xc000312400, 0x890, 0x891}, 0xc000256580)
    2022-08-09T17:45:29.686305446Z  /workspace/internal/sgx/sgx.go:351 +0x80e
    2022-08-09T17:45:29.686313282Z github.com/intel/trusted-certificate-issuer/controllers.(*IssuerReconciler).provisionSigner(0xc000b929c0, {0x1a2a9d8, 0xc000c1f4d0}, {0xc000c11b60, 0x25}, {0xc000d82ec7, 0x5}, {0xc000d82ea5, 0x7})
    2022-08-09T17:45:29.686336863Z  /workspace/controllers/issuer_controller.go:308 +0x43c
    2022-08-09T17:45:29.686350837Z github.com/intel/trusted-certificate-issuer/controllers.(*IssuerReconciler).Reconcile(0xc000b929c0, {0x1a2a9d8, 0xc000c1f4d0}, {{{0xc0007a07a0, 0x1739580}, {0xc0007a0796, 0x30}}})
    2022-08-09T17:45:29.686358002Z  /workspace/controllers/issuer_controller.go:173 +0x1305
    2022-08-09T17:45:29.686365785Z sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0xc0003071e0, {0x1a2a9d8, 0xc000c1f440}, {{{0xc0007a07a0, 0x1739580}, {0xc0007a0796, 0x417f34}}})
    2022-08-09T17:45:29.686373168Z  /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114 +0x26f
    2022-08-09T17:45:29.686380365Z sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0003071e0, {0x1a2a930, 0xc0002e9100}, {0x16796e0, 0xc000e815c0})
    2022-08-09T17:45:29.686387161Z  /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311 +0x33e
    2022-08-09T17:45:29.686417767Z sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0003071e0, {0x1a2a930, 0xc0002e9100})
    2022-08-09T17:45:29.686449993Z  /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266 +0x205
    2022-08-09T17:45:29.686459051Z sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
    2022-08-09T17:45:29.686468217Z  /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227 +0x85
    2022-08-09T17:45:29.686476796Z created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
    2022-08-09T17:45:29.686485232Z  /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:223 +0x357
    
  • Issuer does not become ready

    Using the main branch of TCS and the Trusted Attestation Controller along with KMRA 2.0, on a Kubernetes v1.24.3 cluster, the following TCSIssuer does not become ready.

    apiVersion: tcs.intel.com/v1alpha1
    kind: TCSIssuer
    metadata:
        name: my-ca
        namespace: sandbox
    spec:
        secretName: my-ca-cert
        selfSign: false
    

    It does not become Ready, even though it appears that all operations (quote verification, wrapped key transfer, wrapped key secret creation) have occurred.

    ~/git/trusted-attestation-controller/sandbox$ kubectl get quoteattestation,tcsissuer,secret -n sandbox
    NAME                                   AGE
    quoteattestation.tcs.intel.com/my-ca   24m
    
    NAME                            AGE   READY   REASON      MESSAGE
    tcsissuer.tcs.intel.com/my-ca   25m   False   Reconcile   Initiated key provisioning using QuoteAttestation
    
    NAME           TYPE     DATA   AGE
    secret/my-ca   Opaque   2      24m
    

    Logs attached. tac-plugin.log tcs.log kmra.log tac.log

  • Remove QuoteAttestation controller

    Move the code related to handling the QuoteAttestation CR into the Issuer controller. Now the TCSIssuer owns the QuoteAttestation and checks its ready status within the issuer reconcile loop, so we no longer need a dedicated watch loop for checking the status of QuoteAttestation CRs.

  • [RFC]: Plugin API for CA key provisioning

    • Defined a protobuf-based plugin API; plugins must expose this API on a UNIX domain socket.
    • Added a KMRA-referenced plugin.
    • Removed the QuoteAttestation API

    FIXES #23

  • API/QuoteAttestation: Field to hold 'Nonce' used for quote generation

    • Made a change to the QuoteAttestation API to hold the 'nonce' used for quote generation. This value is supposed to be used by the TAC/key manager while validating the provided SGX quote hash.
    • Moved the QuoteAttestation API to v1alpha2
  • Kubernetes CSR extensions quote api v1alpha2 support

    For the quote v1alpha2 CSR extension, please use this OID: OidSubjectNonceExtensionName = asn1.ObjectIdentifier{1, 3, 6, 1, 4, 1, 54392, 5, 1547} to parse the nonce from the CSR extensions in this function: csrquote, publickey, err := getQuoteAndPublicKeyFromCSR(csr.Extensions). The parsed nonce is also base64.StdEncoding-encoded and needs to be decoded with decodeExtensionValue().

  • init container failure when using containerd

    Recently I discovered another bug with TCS version 0.4.0. It exists with both the pre-built and the locally built image when we are using containerd (v1.6.8) as the runtime. It looks like the init container for TCS is failing:

    Init Containers:
      init:
        Container ID:  containerd://acddb4e72879567b4ea02ab2e7ee00afc7da0286e4ae72ccf915a577e89a0ea0
        Image:         busybox:1.34.1
        Image ID:      docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1
        Port:          <none>
        Host Port:     <none>
        Command:
          /bin/chown
          5000:5000
          /home/tcs-issuer/tokens
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Error
          Exit Code:    1
          Started:      Fri, 02 Dec 2022 08:15:57 -0800
          Finished:     Fri, 02 Dec 2022 08:15:57 -0800
        Ready:          False
        Restart Count:  2
        Environment:    <none>
        Mounts:
          /home/tcs-issuer/tokens from tokens-dir (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gb8df (ro)
    
  • Failed to update finalizer on Secret

    When following the tutorial in the README to create the TCSIssuer in the sandbox namespace, all is good:

    $ kubectl get tcsissuers -n sandbox
    NAME    AGE    READY   REASON      MESSAGE
    my-ca   2m     True    Reconcile   Success
    

    However, when I delete the tcsissuer, the logs say:

    INFO	controllers.TCSIssuer	Failed to update finalizer on Secret	{"issuer": "tcsissuer.tcs.intel.com/sandbox.my-ca", "error": "failed to patch object (sandbox/my-ca-cert) with update finalizer : secrets \"my-ca-cert\" is forbidden: User \"system:serviceaccount:intel-system:tci-tcs-issuer\" cannot patch resource \"secrets\" in API group \"\" in the namespace \"sandbox\""}
    

    Thus, the my-ca-cert secret is not deleted.

  • Communication channels?

    Hey everyone! I wanted to contribute to this project as well as intel/edge-conductor... But as is the case for the main K8s project, I couldn't find a means to communicate and coordinate with other contributors, e.g. Slack... I was wondering if I just missed it, or there is none, or if so, whether it is internal to Intel?

  • quoteattestation CRD has two copies which are not in sync

    There are two copies of the CRDs:

    config/crd and deployment/crd

    The quoteattestation CRD is not in sync: #31 updated config/crd but not the deployment/crd directory.


Why Frisbee ? Frisbee is a next generation platform designed to unify chaos testing and perfomance benchmarking. We address the key pain points develo

Dec 14, 2022
Open Service Mesh (OSM) is a lightweight, extensible, cloud native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
Open Service Mesh (OSM) is a lightweight, extensible, cloud native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.

Open Service Mesh (OSM) Open Service Mesh (OSM) is a lightweight, extensible, Cloud Native service mesh that allows users to uniformly manage, secure,

Jan 2, 2023
crud is a cobra based CLI utility which helps in scaffolding a simple go based micro-service along with build scripts, api documentation, micro-service documentation and k8s deployment manifests

crud crud is a CLI utility which helps in scaffolding a simple go based micro-service along with build scripts, api documentation, micro-service docum

Nov 29, 2021
A multi-service dev environment for teams on Kubernetes
A multi-service dev environment for teams on Kubernetes

Tilt Kubernetes for Prod, Tilt for Dev Modern apps are made of too many services. They're everywhere and in constant communication. Tilt powers multi-

Jan 5, 2023
Hubble - Network, Service & Security Observability for Kubernetes using eBPF
Hubble - Network, Service & Security Observability for Kubernetes using eBPF

Network, Service & Security Observability for Kubernetes What is Hubble? Getting Started Features Service Dependency Graph Metrics & Monitoring Flow V

Jan 2, 2023
Azure Kubernetes Service (AKS) advanced networking (CNI) address space calculator.

aksip Azure Kubernetes Service (AKS) advanced networking (CNI) address space calculator. Download Download the the latest version from the releases pa

Dec 23, 2022
Expose kubernetes service publicly without an LoadBalancer
Expose kubernetes service publicly without an LoadBalancer

Kunnel Kunnel is short for Kubernetes tunnel, built for exposing Kubernetes service to outside the cluster without LoadBalancer or NodePort. Install B

Dec 1, 2022
Just a dummy Kubernetes Operator, playing with another dummy service

My first operator Just playing/learning to create a K8S operator in go. I will create a dummy operator that creates pods to open a shell inside It is

Dec 16, 2021
Xds - A simple xDS server, distributing Kubernetes service endpoints to clients

xDS Server for gRPC on Kubernetes A simple xDS server, distributing Kubernetes s

Nov 20, 2022