This process installs onto Kubernetes cluster(s) and provisions workloads designated by the Uffizzi interface.

Uffizzi Cloud Resource Controller

This application connects to a Kubernetes (k8s) Cluster to provision Uffizzi users' workloads on their behalf. While it provides a documented REST API for anyone to use, it's most valuable when used with uffizzi_app. Learn more at https://uffizzi.com

Design

The Uffizzi Continuous Previews Engine empowers development teams to conduct feature-level pre-merge testing by automatically deploying branches of application repositories for full-stack and microservices applications based on user-designated triggers. Uffizzi makes these on-demand test environments available for review by key stakeholders (QA, peer review, product designer, product manager, end users, etc.) at a secure Preview URL. The on-demand test environments provisioned by Uffizzi have a purpose-driven life cycle and follow the Continuous Previews methodology - https://cpmanifesto.org - https://github.com/UffizziCloud/Continuous_Previews_Manifesto

Uffizzi's implementation leverages several components as well as public cloud resources, including a Kubernetes Cluster. This controller is a supporting service for uffizzi_app and works in conjunction with Redis and Postgres to provide the CP capability.

This controller runs within the Cluster and accepts authenticated instructions from other Uffizzi components. It then specifies Resources via the Cluster's Kubernetes API.

This controller acts as a smart and secure proxy for uffizzi_app and is designed to restrict required access to the k8s cluster. It is implemented in Golang to leverage the best officially-supported Kubernetes API client.

The controller is required as a uffizzi_app supporting service and serves these purposes:

  1. Communicate deployment instructions from the Uffizzi interface to the designated Kubernetes cluster(s) via the native Golang API client
  2. Provide Kubernetes cluster information back to the Uffizzi interface
  3. Support restricted and secure connection between the Uffizzi interface and the Kubernetes cluster

Example story: New Preview Deployment

  • The main() loop is within cmd/controller/controller.go, which calls setup() and handles exits. This initializes global settings and the Sentry logging, connects to the database, initializes the Kubernetes clients, and starts the HTTP server listening.
  • An HTTP request for a new Deployment arrives and is handled within internal/http/handlers.go. The request contains the new Deployment integer ID.
  • The HTTP handler uses the ID as an argument to call the ApplyDeployment function within internal/domain/deployment.go. This takes a series of steps:
    • It calls several methods from internal/kuber/client.go, which create Kubernetes specifications for each k8s resource (Namespace, Deployment, NetworkPolicy, Service, etc.) and publish them to the Cluster one at a time.
      • This function should return an IP address or hostname, which is added to the data for this Deployment's state.
  • Any errors are then handled and returned to the HTTP client. (A sketch of this flow follows below.)
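
For orientation, here is a much-simplified, hypothetical sketch of that flow in Go. The real handler and ApplyDeployment signatures in internal/http/handlers.go and internal/domain/deployment.go differ; the names and routing below are illustrative only.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strconv"
)

// applyDeployment stands in for ApplyDeployment in internal/domain/deployment.go:
// it would build the k8s specifications (Namespace, Deployment, NetworkPolicy,
// Service, etc.) via internal/kuber/client.go, publish them to the Cluster one
// at a time, and return the resulting IP address or hostname.
func applyDeployment(id int) (string, error) {
	return fmt.Sprintf("deployment-%d.example.invalid", id), nil
}

// handleDeployment mirrors the handler role in internal/http/handlers.go:
// parse the Deployment integer ID, apply it, and return any error to the client.
func handleDeployment(w http.ResponseWriter, r *http.Request) {
	id, err := strconv.Atoi(r.URL.Query().Get("id"))
	if err != nil {
		http.Error(w, "invalid deployment id", http.StatusBadRequest)
		return
	}
	host, err := applyDeployment(id)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	json.NewEncoder(w).Encode(map[string]string{"host": host})
}

func main() {
	http.HandleFunc("/deployments", handleDeployment)
	http.ListenAndServe(":8080", nil)
}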

Dependencies

This controller specifies custom Resources managed by popular open-source controllers.

You'll want these installed within the Cluster managed by this controller.

Configuration

Environment Variables

You can specify these within credentials/variables.env for use with docker-compose and our Makefile. Some of these may have defaults within configs/settings.yml. (An example file follows the list below.)

  • ENV - Which deployment environment we're currently running within. Default: development
  • CONTROLLER_LOGIN - The username for HTTP Basic Authentication
  • CONTROLLER_PASSWORD - The password for HTTP Basic Authentication
  • CONTROLLER_NAMESPACE_NAME_PREFIX - Prefix for provisioned Namespaces. Default: deployment
  • CERT_MANAGER_CLUSTER_ISSUER - The issuer for signing certificates. Possible values:
    • letsencrypt (used by default)
    • zerossl
  • POD_CIDR - IP range to allowlist within NetworkPolicy. Default: 10.24.0.0/14
  • POOL_MACHINE_TOTAL_CPU_MILLICORES - Node CPU resource to divide among Pods. Default: 2000
  • POOL_MACHINE_TOTAL_MEMORY_BYTES - Node memory resource to divide among Pods. Default: 17179869184
  • DEFAULT_AUTOSCALING_CPU_THRESHOLD - Default: 75
  • DEFAULT_AUTOSCALING_CPU_THRESHOLD_EPSILON - Default: 8
  • AUTOSCALING_MAX_PERFORMANCE_REPLICAS - Horizontal Pod Autoscaler configuration. Default: 10
  • AUTOSCALING_MIN_PERFORMANCE_REPLICAS - Horizontal Pod Autoscaler configuration. Default: 1
  • AUTOSCALING_MAX_ENTERPRISE_REPLICAS - Horizontal Pod Autoscaler configuration. Default: 30
  • AUTOSCALING_MIN_ENTERPRISE_REPLICAS - Horizontal Pod Autoscaler configuration. Default: 3
  • STARTUP_PROBE_DELAY_SECONDS - Startup Probe configuration. Default: 10
  • STARTUP_PROBE_FAILURE_THRESHOLD - Startup Probe configuration. Default: 80
  • STARTUP_PROBE_PERIOD_SECONDS - Startup Probe configuration. Default: 15
  • EPHEMERAL_STORAGE_COEFFICIENT - LimitRange configuration. Default: 1.9
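
For example, a minimal credentials/variables.env might look like this. All values here are illustrative; check configs/settings.yml for the real defaults.

ENV=development
CONTROLLER_LOGIN=admin
CONTROLLER_PASSWORD=change-me
CONTROLLER_NAMESPACE_NAME_PREFIX=deployment
CERT_MANAGER_CLUSTER_ISSUER=letsencrypt
POD_CIDR=10.24.0.0/14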

Kubernetes API Server Connection

This process expects to be provided a Kubernetes Service Account within a Kubernetes cluster. You can emulate this with these four pieces of configuration:

  • KUBERNETES_SERVICE_HOST - Hostname (or IP) of the k8s API service
  • KUBERNETES_SERVICE_PORT - TCP port number of the k8s API service (usually 443)
  • /var/run/secrets/kubernetes.io/serviceaccount/token - Authentication token
  • /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - k8s API Server's x509 host certificate

Once you're configured to connect to your cluster (using kubectl et al.), you can get the values for these two environment variables from the output of kubectl cluster-info.

Add those two environment variables to credentials/variables.env.

The authentication token must come from the cluster's cloud provider, e.g. gcloud config config-helper --format="value(credential.access_token)"

The server certificate must also come from the cluster's cloud provider, e.g. gcloud container clusters describe uffizzi-pro-production-gke --zone us-central1-c --project uffizzi-pro-production-gke --format="value(masterAuth.clusterCaCertificate)" | base64 --decode

Write these two values to credentials/token and credentials/ca.crt; the make commands and docker-compose will copy them for you.
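
For example, on GKE you might capture both credentials like this (the cluster name, zone, and project below are illustrative, not real values):

gcloud config config-helper --format="value(credential.access_token)" > credentials/token
gcloud container clusters describe my-cluster --zone us-central1-c --project my-project --format="value(masterAuth.clusterCaCertificate)" | base64 --decode > credentials/ca.crt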

Shell

While developing, we most often run the controller within a shell on our workstations. docker-compose will set up this shell and mount the current working directory within the container so you can use other editors from outside. To log into the Docker container, just run:

make shell

All commands in this "Shell" section should be run inside this shell.

Compile

After making any desired changes, compile the controller:

go install ./cmd/controller/...

Execute

/go/bin/controller

Test Connection to Cluster

Once you've configured access to your k8s Cluster (see above), you can test kubectl within the shell:

kubectl --token=`cat /var/run/secrets/kubernetes.io/serviceaccount/token` --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt get nodes

Tests, Linters

In the Docker shell:

make test
make lint
make fix_lint

External Testing

Once the controller is running on your workstation, you can make HTTP requests to it from outside of the shell.

Ping controller

curl localhost:8080 \
  --user "${CONTROLLER_LOGIN}:${CONTROLLER_PASSWORD}"

Remove all workloads from an existing environment

This will remove the specified Preview's Namespace and all other associated Resources.

curl -X POST localhost:8080/clean \
     --user "${CONTROLLER_LOGIN}:${CONTROLLER_PASSWORD}" \
     -H "Content-Type: application/json" \
     -d '{ "environment_id": 1 }'

Online API Documentation

Available at http://localhost:8080/docs/

Installation within a Cluster

Functional usage within a Kubernetes Cluster is beyond the scope of this document. For more, join us on Slack or contact us at [email protected].

That said, we've included a Kubernetes manifest to help you get started at infrastructure/controller.yaml. Review it and change relevant variables before applying this manifest. You'll also need to install and configure the dependencies identified near the top of this document.

Comments
  • Improve `NetworkPolicy` for customer Namespaces

    Right now we're defining NetworkPolicy Resources that block traffic coming into each Namespace from other Namespaces and Pod IPs. We must additionally define "egress" rules to block a Pod's connections elsewhere. Right now customer workloads can make TCP connections to the k8s Master API, nodes, and other Namespaces that do not have a NetworkPolicy defined.

    Customer workload Pods in Deployments specified by this controller should only be able to connect to themselves, the load balancer, and the Internet. (A sketch of such an egress rule follows.)
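
    As a rough, hypothetical sketch (the selectors and names are illustrative; the except range reuses the POD_CIDR default from above), an egress rule could look like:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: restrict-egress
      namespace: deployment-example
    spec:
      podSelector: {}
      policyTypes:
      - Egress
      egress:
      # Allow DNS lookups so Pods can resolve external hostnames.
      - to:
        - namespaceSelector: {}
        ports:
        - protocol: UDP
          port: 53
      # Allow the Internet but not cluster-internal Pod IP ranges.
      - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
            - 10.24.0.0/14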

  • BUG: Multi-Attach error for volume

    A "floating" (intermittent) error occurs after the following steps:

    1. Create a preview from the CLI (using a compose file with volumes): uffizzi preview create
    2. Wait until the preview is created
    3. Make changes in the preview DB (e.g. add a new user)
    4. Update the preview: uffizzi preview update

    Result: the Preview failed, with the following error in the Event Log (the compose file used to reproduce is below):

    Multi-Attach error for volume "pvc-ee60af29-0a88-42c4-94b9-81c5ccc766d1" Volume is already used by pod(s) app-deployment-4758-858df776cc-bfrhp
    
    services:
      volumes_test_app:
        image: zipofar/uffizzi_test_rails_simple:latest
        volumes:
          - share_db:/db
        deploy:
          resources:
            limits:
              memory: 1000M
    
    volumes:
      share_db:
    
    x-uffizzi-ingress:
      service: volumes_test_app
      port: 3000
    
    x-uffizzi-continuous-preview:
      delete_preview_after: 1h
      tag_pattern: uffizzi_request_*
      delete_preview_when_image_tag_is_updated: true
      deploy_preview_when_image_tag_is_created: true
    
  • Add 404 when namespace not found

    When we find, update, or delete a namespace, we send an error to Sentry if the namespace is not found. To avoid this, we should instead return a 404 in these cases (see the sketch after the test steps).

    HOW TO TEST:

    1. Try to create a deployment. Expected: all should be good.

    2. Try to update a deployment. Expected: all should be good.

    3. Try to delete a deployment. Expected: all should be good.
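
    A hypothetical sketch of the extraction in Go (the helper name and wiring are illustrative; the controller's real error handling may differ):

    package handlers

    import (
        "net/http"

        "github.com/getsentry/sentry-go"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
    )

    // writeError translates Kubernetes "not found" errors into plain HTTP 404s
    // and reports only unexpected errors to Sentry.
    func writeError(w http.ResponseWriter, err error) {
        if apierrors.IsNotFound(err) {
            http.Error(w, "namespace not found", http.StatusNotFound)
            return
        }
        sentry.CaptureException(err)
        http.Error(w, err.Error(), http.StatusInternalServerError)
    }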

  • Update GitHub controller

    Update the Makefile. We need to transfer data from GitLab to GitHub, and the GitHub repo has some differences from GitLab. E.g., in the GitHub controller the command make setup_gke_kube does not work.

  • Add possibility to initialize data for named volume

    For example, we have a compose file:

    version: "3.8"

    x-uffizzi:
      ingress:
        service: nginx
        port: 80

    services:
      nginx:
        image: nginx
        volumes:
          - app_public:/app/public

      app:
        image: app
        volumes:
          - app_public:/app/public

    volumes:
      app_public:

    The app container has data in the directory /app/public, and the nginx container should get this data when it starts. As I see it, the steps are:

    • Start an init container
    • Pull the image which has the named volume
    • Run a container from this image and copy the files to the PVC volume (see the sketch after the lists)

    Possible problems:

    • We do not know the size of the image
    • We do not know the size of the files that should be copied to the volume
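
    As a rough sketch of that approach (the names, images, and PVC claim are hypothetical), an init container could seed the volume like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-with-seeded-volume
    spec:
      initContainers:
      # Run the app image once to copy its /app/public into the PVC-backed volume.
      - name: seed-app-public
        image: app
        command: ["sh", "-c", "cp -a /app/public/. /seeded/"]
        volumeMounts:
        - name: app-public
          mountPath: /seeded
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: app-public
          mountPath: /app/public
      volumes:
      - name: app-public
        persistentVolumeClaim:
          claimName: app-public-pvc
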
  • Increase volume size from 1Gi to 5Gi

    A customer has a 1.9GB database they'd like to import to a persistent volume. GCE virtual disk volumes are only $0.04/month/GiB (so 5GiB is about $0.20/month), so we're going to increase from 1GiB to 5GiB.

    Support thread (internal) https://uffizzi-internal.slack.com/archives/CN4255W79/p1665633562421119?thread_ts=1665501099.686279&cid=CN4255W79

  • Use toolbox Dockerfile from platform

    The docker-compose up command fails locally because of some error in the Dockerfile. Let's use the working toolbox Dockerfile from the platform.

    When we have the new version, we need to build it, tag it v2, and push it to the project's GHCR (it's used in the CI/CD pipeline).

  • Specify third-party certificate for additional subdomains.

    Child of https://github.com/UffizziCloud/uffizzi_platform/issues/239#issuecomment-1244492724

    On our production platform, we're using a purchased wildcard TLS certificate for *.app.uffizzi.com. For customers requiring additional subdomains, we must instead configure cert-manager to provision a new certificate for all subdomains.

    UX described in related ticket https://github.com/UffizziCloud/uffizzi_app/issues/257

    When a customer specifies any number of additional subdomains, our controller should make these additions to the deployment's Ingress resource:

    • Add the annotation cert-manager.io/cluster-issuer: letsencrypt.
    • Add additional rules for each subdomain (a wildcard would probably work here as well).
    • Add to the list of tls.hosts.
    • Add tls.secretName (can be the same as the "root" hostname).

    Example resulting Ingress in YAML:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt
        kubernetes.io/ingress.class: nginx
      labels:
        app: controller
        app.kubernetes.io/managed-by: uffizzi
      name: ingress-1663013659
      namespace: deployment-5713
    spec:
      rules:
      - host: deployment-5713-my-application.app.uffizzi.com
        http:
          paths:
          - backend:
              service:
                name: service-1663013657
                port:
                  number: 80
            path: /
            pathType: Prefix
      - host: foo.deployment-5713-my-application.app.uffizzi.com
        http:
          paths:
          - backend:
              service:
                name: service-1663013657
                port:
                  number: 80
            path: /
            pathType: Prefix
      - host: bar.deployment-5713-my-application.app.uffizzi.com
        http:
          paths:
          - backend:
              service:
                name: service-1663013657
                port:
                  number: 80
            path: /
            pathType: Prefix
      tls:
      - hosts:
        - deployment-5713-my-application.app.uffizzi.com
        - foo.deployment-5713-my-application.app.uffizzi.com
        - bar.deployment-5713-my-application.app.uffizzi.com
        secretName: deployment-5713-my-application.app.uffizzi.com
    

    Note that this change is almost, but not quite, the opposite of changes made earlier this year to enable using the single wildcard certificate. Do not revert those changes, and do NOT use the CERT_MANAGER_CLUSTER_ISSUER environment variable as-is. https://gitlab.com/dualbootpartners/idyl/uffizzi_controller/-/merge_requests/178/diffs

  • Add option to fetch logs for previous container instance.

    Supports https://github.com/UffizziCloud/uffizzi_app/issues/239

    The Golang k8s API client has a Previous boolean option: https://pkg.go.dev/k8s.io/api/core/v1#PodLogOptions

    I think we should expose that option up through the controller's HTTP API: add a previous option, matching the default value of false. (A sketch follows.)
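
    As a hypothetical sketch using client-go (the namespace, pod, and container names are illustrative):

    package main

    import (
        "context"
        "fmt"
        "io"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Assumes we're running in-cluster with a Service Account, as described above.
        config, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // Previous: true requests logs from the prior (restarted) container instance.
        req := clientset.CoreV1().Pods("deployment-5713").GetLogs("app-pod", &corev1.PodLogOptions{
            Container: "app",
            Previous:  true,
        })
        stream, err := req.Stream(context.Background())
        if err != nil {
            panic(err)
        }
        defer stream.Close()

        logs, err := io.ReadAll(stream)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(logs))
    }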
