El Carro: The Oracle Operator for Kubernetes

Run Oracle on Kubernetes with El Carro

El Carro is a new project that offers a way to run Oracle databases in Kubernetes as a portable, open source, community-driven solution with no vendor lock-in. El Carro provides a powerful declarative API for comprehensive and consistent configuration and deployment, as well as for real-time operations and monitoring.

High Level Overview

El Carro helps you with the deployment and management of Oracle database software in Kubernetes. You must have appropriate licensing rights to that software to use it with El Carro (bring your own license, BYOL).

With the current release, you download the El Carro installation bundle, stage the Oracle installation software, create a containerized database image (with or without a seed database), and then create an Instance (known as a CDB in Oracle parlance) and add one or more Databases (known as PDBs).

After the El Carro Instance and Database(s) are created, you can take snapshot-based or RMAN-based backups and get basic monitoring and logging information. Additional database services will be added in future releases.

License Notice

You can use El Carro to automatically provision and manage Oracle Database Express Edition (XE) or Oracle Database Enterprise Edition (EE). In each case, it is your responsibility to ensure that you have appropriate licenses to use any such Oracle software with El Carro.

Please also note that each El Carro “database” will create a pluggable database, which may require licensing of the Oracle Multitenant option.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Quickstart

We recommend starting with the quickstart, but as you become more familiar with El Carro, consider trying more advanced features by following the user guides linked below.

If you have a valid license for Oracle 12c EE and would like to get your Oracle database up and running on Kubernetes, you can follow this quickstart guide.

As an alternative to Oracle 12c EE, you can use Oracle 18c XE, which is free to use, by following the quickstart guide for Oracle 18c XE instead.

If you prefer to run El Carro locally on your personal computer, you can follow the user guide for Oracle on minikube.

Preparation

To prepare the El Carro download and deployment, follow this guide.

Provisioning

El Carro helps you to easily create, scale, and delete Oracle databases.

First, you need to create a containerized database image.

You can optionally create a default Config to set namespace-wide defaults for configuring your databases, following this guide.

Then you can create Instances (known as CDBs in Oracle parlance), following this guide. Afterward, create Databases (known as PDBs) and users following this guide.
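As an illustrative sketch of what the provisioning steps produce (field names modeled on the examples elsewhere in this document; consult the linked guides for the authoritative spec), an Instance and a Database might be declared like this:

```yaml
# Hypothetical minimal manifests; the real specs may differ, see the guides above.
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Instance
metadata:
  name: mydb
spec:
  type: Oracle
  version: "12.2"
  edition: Enterprise
  cdbName: MYDB        # the CDB this Instance manages
---
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Database
metadata:
  name: pdb1
spec:
  instance: mydb       # the Instance (CDB) this Database belongs to
  name: PDB1           # the PDB name inside the CDB
```

Applying both manifests with `kubectl apply -f` would create the CDB instance and a PDB inside it.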

Backup and Recovery

El Carro provides both storage-snapshot-based and Oracle-native RMAN-based backup/restore features to support your database backup and recovery strategy.

After the El Carro Instance and Database(s) are created, you can create storage snapshot based backups, following this guide.

You can also create Oracle native RMAN based backups, following this guide.

To restore from a backup, follow this guide.
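For illustration, a Backup resource might look like the sketch below (hedged: the field names follow the other manifests in this document and may not match the current API exactly; the linked guides are authoritative):

```yaml
# Hypothetical sketch of a storage-snapshot-based backup request.
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Backup
metadata:
  name: mydb-backup
spec:
  instance: mydb     # the Instance to back up
  type: Snapshot     # a storage snapshot based backup; an RMAN-based type also exists
```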

Data Import & Export

El Carro provides data import/export features based on Oracle Data Pump.

To import data to your El Carro database, follow this guide.

To export data from your El Carro database, follow this guide.
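As a sketch of what a Data Pump import request might look like (hypothetical; the actual Import spec may differ, see the import guide):

```yaml
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Import
metadata:
  name: mydb-import
spec:
  instance: mydb                                 # target Instance
  databaseName: PDB1                             # target Database (PDB)
  gcsPath: gs://example-bucket/exports/pdb1.dmp  # Data Pump dump file location (illustrative)
```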

What's More?

El Carro supports more features, with additional ones to be added soon! For more information, check out logging, monitoring, connectivity, the UI, and more.

Contributing

You're very welcome to contribute to the El Carro Project!

We've put together a set of contributing and development guidelines that you can review in this guide.

Support

To report a bug or log a feature request, please open a GitHub issue and follow the guidelines for submitting a bug.

For general questions or community support, we welcome you to join the El Carro community mailing list and ask your question there.

Comments
  • Unable to build operator docker image. stat oracle/pkg/database/common: file does not exist


Describe the bug: Unable to build the operator docker image locally.

    To Reproduce

    cd $PATH_TO_EL_CARRO_REPO
    {
    export REPO="localhost:5000/oracle.db.anthosapis.com"
    export TAG="latest"
    export OPERATOR_IMG="${REPO}/operator:${TAG}"
    docker build -f oracle/Dockerfile -t ${OPERATOR_IMG} .
    docker push ${OPERATOR_IMG}
    }
    
    Sending build context to Docker daemon   4.71MB
    Step 1/19 : FROM docker.io/golang:1.15 as builder
     ---> 40349a2425ef
    Step 2/19 : WORKDIR /build
     ---> Using cache
     ---> b44c2a87f722
    Step 3/19 : COPY go.mod go.mod
     ---> Using cache
     ---> c359cdfe04b9
    Step 4/19 : COPY go.sum go.sum
     ---> Using cache
     ---> 6f6d2902ef22
    Step 5/19 : RUN go mod download
     ---> Using cache
     ---> 8be558325755
    Step 6/19 : COPY common common
     ---> Using cache
     ---> 1dd64c7bfbc5
    Step 7/19 : COPY oracle/main.go oracle/main.go
     ---> Using cache
     ---> 0a79c9d91f73
    Step 8/19 : COPY oracle/version.go oracle/version.go
     ---> Using cache
     ---> a9fbca9b14cf
    Step 9/19 : COPY oracle/api/ oracle/api/
     ---> Using cache
     ---> 123c5e7c856e
    Step 10/19 : COPY oracle/controllers/ oracle/controllers/
     ---> Using cache
     ---> 7c7a1ff96c61
    Step 11/19 : COPY oracle/pkg/agents oracle/pkg/agents
     ---> Using cache
     ---> 9d5ed5ea3f52
    Step 12/19 : COPY oracle/pkg/database/common oracle/pkg/database/common
    COPY failed: file not found in build context or excluded by .dockerignore: stat oracle/pkg/database/common: file does not exist
    

Expected behavior: docker build finishes successfully.

  • Fix bug in testhelpers k8sUpdateWithRetryHelper


In k8sUpdateWithRetryHelper, after updating the object, we make an extra Get to verify that the object has changed. This causes problems in scenarios where the object gets deleted after the update (e.g., by another controller).

For example, in my test I'm using this helper to update an object and remove its finalizer. After the finalizer is removed, the object is deleted, and so the test fails at envtest.go:1102 when it tries to fetch the object again to compare resourceVersions.

    I made a small change to handle this case.

  • Fix: Relink config files only once


    RelinkConfigFiles can sometimes get called by both the dbdaemon and init_oracle. Only one call to RelinkConfigFiles is necessary. Having two calls can lead to race conditions.

    Delete unused parameter for reinitUnseededHost() function.

    b/260762391

    Change-Id: I03031bf980c9b1b95239b81af2029a179519088d

  • [Backup Schedule] Move CronAnything from oracle to common.


As described in go/anthos-postgres-backup-schedule-dd, we should have a general framework for backup schedules. Oracle adopts CronAnything to run the cron backup job. Our current design is to reuse the work in oracle, so the first step is to move CronAnything from oracle/ to common/.

    Bug: b/193256355

  • Add VolumeName in DiskSpec


To enable 1:1 binding between a statically provisioned PV and its PVC.

    Bug: 196033991 Doc: go/ods-postgres-static-pv

    Change-Id: I4df7a517f10cc9c1a05c78c5ae4a1cce59d0de38

  • Instructions and config to run in AWS EKS


I have tested all the steps described in this blog post: https://blog.pythian.com/using-el-carro-operator-on-aws/ I made minor changes from that post to create these instructions, making them easier to follow and using fewer pre-created files.

  • Refactor restore logic into a separate state machine


    • Move everything restore-related to instance_controller_restore.go
    • Simplify code flow and readability
    • Add 2 extra statuses RestorePreparationInProgress / RestorePreparationComplete. This helps keep track of old STS/PVC removal.
• Minor tweaks to functional tests (the delete LRO might be called more than once; this is expected)
• This should fix the '[pvc] is being deleted' bug/flake
  • Allow setting DEFERRED database parameters.


Formerly, attempts to set parameters with ISSYS_MODIFIABLE = DEFERRED resulted in the error "ORA-02096: specified initialization parameter is not modifiable with this option". An example of such a parameter is RECYCLEBIN.

• Prevent the SYS and SYSTEM passwords from leaking


Setting the tracing -x flag in Bash causes all interpolated shell commands to be printed to stdout, including the randomly generated SYS and SYSTEM passwords. This may let readers of image build logs gain elevated access to databases provisioned by El Carro.

This PR temporarily disables bash tracing for the duration of the CDB creation, then resumes it.
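A minimal sketch of the technique (variable names hypothetical; this is not the PR's exact code): pause xtrace while the secret is generated and used, then resume it.

```shell
#!/bin/bash
set -x                                   # build scripts run with tracing on

echo "visible build step"

# Pause tracing without echoing the 'set +x' itself into the log.
{ set +x; } 2>/dev/null
# SYS_PWD is a hypothetical stand-in for the generated SYS/SYSTEM passwords.
SYS_PWD="$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')"
# ... CDB creation would use "$SYS_PWD" here; it never appears in the trace ...
set -x                                   # resume tracing for later steps

echo "tracing resumed"
```

The `{ set +x; } 2>/dev/null` grouping is a common idiom that keeps even the `set +x` line itself out of the trace output.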

  • Fix tag on UI image


    The upstream UI image gcr.io/elcarro/oracle.db.anthosapis.com/ui does not have a latest tag:

    $ curl -s https://gcr.io/v2/elcarro/oracle.db.anthosapis.com/ui/tags/list | jq '.tags'
    [
      "v0.0.0-alpha",
      "v0.1.0-alpha"
    ]
    

    It results in an error when installing the UI as per https://github.com/GoogleCloudPlatform/elcarro-oracle-operator/blob/main/docs/content/monitoring/ui.md

    Failed to pull image "gcr.io/elcarro/oracle.db.anthosapis.com/ui:latest": rpc error: code = NotFound desc = failed to pull and unpack image "gcr.io/elcarro/oracle.db.anthosapis.com/ui:latest": failed to resolve reference "gcr.io/elcarro/oracle.db.anthosapis.com/ui:latest": gcr.io/elcarro/oracle.db.anthosapis.com/ui:latest: not found
    

Following the convention in operator.yaml, here we change the tag to v0.1.0-alpha. With this change, the UI installs successfully:

    $ kubectl describe pod -n ui | tail
     Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
     Events:
       Type    Reason     Age   From               Message
       ----    ------     ----  ----               -------
       Normal  Scheduled  25s   default-scheduler  Successfully assigned ui/ui-659665f8cb-5wlrz to 4633612-svr004
       Normal  Pulling    24s   kubelet            Pulling image "gcr.io/elcarro/oracle.db.anthosapis.com/ui:v0.1.0-alpha"
       Normal  Pulled     23s   kubelet            Successfully pulled image "gcr.io/elcarro/oracle.db.anthosapis.com/ui:v0.1.0-alpha" in 1.341448841s
       Normal  Created    23s   kubelet            Created container ui
       Normal  Started    23s   kubelet            Started container ui
    
  • Plugins support for Import/Export resources


Is your feature request related to a problem? Please describe. Currently only GCP Cloud Storage is supported for import and export of Data Pump files:

    https://github.com/GoogleCloudPlatform/elcarro-oracle-operator/blob/d2aed2814023cc8b672c727b605165d8fdccfffd/oracle/api/v1alpha1/import_types.go#L41

As a result, these features are unusable in enterprise environments with restricted networks and strict rules about data location.

Describe the solution you'd like: I suggest changing the "generic" Import/Export resources to use something like http/https URLs and adding the ability to "load" plugins for different storage backends on different cloud providers or protocols (NFS, CIFS, or any other crazy things).

  • Updated image-type from cos to cos_containerd for GKE version >1.23


  • Security Policy violation Binary Artifacts


    This issue was automatically created by Allstar.

Security Policy Violation: Project is out of compliance with the Binary Artifacts policy: binaries present in source code.

Rule Description: Binary artifacts are an increased security risk in your repository. Binary artifacts cannot be reviewed, allowing the introduction of possibly obsolete or maliciously subverted executables. For more information, see the Security Scorecards documentation for Binary Artifacts.

Remediation Steps: To remediate, remove the generated executable artifacts from the repository.

    Artifacts Found

    • third_party/runtime/libaio.so.1
    • third_party/runtime/libnsl.so.1

    Additional Information This policy is drawn from Security Scorecards, which is a tool that scores a project's adherence to security best practices. You may wish to run a Scorecards scan directly on this repository for more details.


    Allstar has been installed on all Google managed GitHub orgs. Policies are gradually being rolled out and enforced by the GOSST and OSPO teams. Learn more at http://go/allstar

    This issue will auto resolve when the policy is in compliance.

    Issue created by Allstar. See https://github.com/ossf/allstar/ for more information. For questions specific to the repository, please contact the owner or maintainer.

  • Support configuring runtimeClassName for instances


Is your feature request related to a problem? Please describe. If we want to run Oracle containers in a sandboxed runtime like gVisor (https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods), we need additional configuration functionality.

    Describe the solution you'd like It seems like the preferable way to implement this would be to expose an equivalent of pod.spec.runtimeClassName on our instance and propagate it to the statefulset.spec.template.spec.runtimeClassName field.
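A sketch of what the proposed field could look like on the Instance spec (this is the requested change, not an existing API field):

```yaml
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Instance
metadata:
  name: mydb
spec:
  # Proposed field; would be propagated to
  # statefulset.spec.template.spec.runtimeClassName.
  runtimeClassName: gvisor
```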

Describe alternatives you've considered: Another possible option is to implement manual affinity and taint rules. This is possible with #268

    Additional context

  • Allow configuring multi-zone/regional topology constraints


Is your feature request related to a problem? Please describe. We want to be able to modify topology constraints for el-carro resources when running on a multi-zonal cluster. These are defined in the Kubernetes well-known labels/annotations list, available at https://kubernetes.io/docs/reference/labels-annotations-taints

Describe the solution you'd like: The most straightforward way to support this from a user perspective would be to simply apply these well-known annotations directly to the Instance object (as it is the root resource determining compute and disk elements) and have the operator propagate them down to any created pods/PVCs.

    The specific annotations we have in mind are: topology.kubernetes.io/region topology.kubernetes.io/zone
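Sketched usage under the proposed design (the annotations are standard Kubernetes ones; the operator propagating them down to pods/PVCs is the requested behavior, not current functionality):

```yaml
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Instance
metadata:
  name: mydb
  annotations:
    topology.kubernetes.io/region: us-central1      # illustrative values
    topology.kubernetes.io/zone: us-central1-a
```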

Describe alternatives you've considered: There are even more fine-grained topology annotations, targeting nodes/hostnames, which we could consider supporting instead of or in addition to these.

We could also choose to expose these through some portion of the spec instead, but I suspect it's easiest to use if we expose these via annotations, similar to how users would interact with a Pod/Deployment.

    Additional context

  • Add ability to stop & start database instances


    TL;DR

    Ability to scale down the database instance when not in use to save compute resources and scale it back up when needed. Ideally automatically supported by helm upgrade.

    Is your feature request related to a problem? Please describe.

We have an important use case: in a scalable infrastructure environment (e.g. GKE), we want to scale all resources of an application down when they don't need to run, and back up when needed, in order to save expensive compute resources.

The replica count of the database instance's statefulset is not tracked in the Instance CRD. A "helm upgrade" after a scale-down, intended to restore all statefulsets/deployments to the replica counts defined in the helm chart, thus works for all parts of our application except El Carro, which remains scaled down to 0 replicas. Scaling the statefulset manually is an option, but it is not fully supported by the operator either, so it may have negative side effects.

Describe the solution you'd like: A flag instructing the El Carro operator to cleanly stop/start the database instance (and scale the statefulset down/up). If the flag was previously set to stop the instance, a "helm upgrade" should set it back to started and start the database instance pod again.

    The resulting flow might look like:

1. [CREATE] "helm upgrade --install" -> Instance CRD is deployed with "instance.Spec.IsStopped: false"
2. [STOP] Set "instance.Spec.IsStopped: true" to stop the DB
3. [START] "helm upgrade --install" -> sets "instance.Spec.IsStopped: false" (because that's the only difference between the CRD in k8s and in the chart), triggering startup of the DB
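The proposed flag might look like this on the Instance manifest (hypothetical; IsStopped is the name suggested in this request, not an existing field):

```yaml
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Instance
metadata:
  name: mydb
spec:
  isStopped: true   # proposed: cleanly stop the DB and scale its statefulset to 0
```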

    Thank you for considering this request!

    Kind regards, Maximilian Schiefer (for Regnology)

  • PDB initialization SQL scripts


    Is your feature request related to a problem? Please describe. The application needs the PDB to contain special changes performed by SYS, for example, permissions on dictionary objects owned by SYS with GRANT OPTION.

    Describe the solution you'd like The Database resource could contain a reference to a SQL script to be executed on PDB creation, either directly embedded as text or as a reference to a ConfigMap.

    For example, with the script provided via a ConfigMap:

    apiVersion: oracle.db.anthosapis.com/v1alpha1
    kind: Database
    metadata:
      name: mydb
    spec:
      instance: mydb
      name: MY_PDB
      admin_password: ...
      initializationScript:
        configMapRef:
          name: mydb-init-scripts
          key: init.sql
    

Alternatively, with the SQL embedded directly:

      initializationScript:
        sql: |
          GRANT SELECT ON ALL_TABLES TO GPDB_ADMIN WITH GRANT OPTION;
    

    If the script fails, the Database should not reach the Ready state.

    Describe alternatives you've considered

    • Build a seeded image that contains PDB$SEED with the customizations (requires use of undocumented parameters).
    • Build a seeded image that contains a customized PDB for cloning; let El Carro create new PDBs as clones of the custom one.