Juju is a model-driven Operator Lifecycle Manager (OLM). Juju greatly improves the experience of running Kubernetes operators, especially in projects that integrate many operators from different publishers.

Why Juju

A Kubernetes operator is a container that drives the config and operation of a workload. By encapsulating ops code as a reusable container, the operator pattern moves beyond traditional config management to allow much more agile operations for complex cloud workloads.

Shared, open source operators take infrastructure as code to the next level with community-driven ops and integration code. Reuse of ops code improves quality and encourages wider community engagement and contribution. Operators also improve security through consistent automation. Juju operators are a community-driven devsecops approach to open source operations.

Juju implements the Kubernetes operator pattern, but is also a universal OLM that extends the operator pattern to traditional applications (without Kubernetes) on Linux and Windows. Such machine operators can work on bare metal, virtual machines or cloud instances, enabling multi-cloud and hybrid cloud operations. Juju allows you to embrace the operator pattern across both your container and legacy estates. An operator for machine-based environments can share 95% of its code with a Kubernetes operator for the same app.

Juju excels at application integration. Instead of simply focusing on lifecycle management, the Juju OLM provides a rich application graph model that tells operators how to integrate with one another. This dramatically simplifies the operations of large deployments.

A key focus for Juju is to simplify operator design, development and usage. Instead of making very complex operators for specific scenarios, Juju encourages devops to make composable operators, each of which drives a single Docker image, and which can be reused in different settings. Composable operators enable very rich scenarios to be constructed out of simpler operators that do one thing and do it well.

The OLM provides a central mechanism for operator instantiation, configuration, upgrades, integration and administration. The OLM provides a range of operator lifecycle services including leader election and persistent state. Instead of manually deploying and configuring operators, the OLM manages all the operators in a model at the direction of the administrator.
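
For example, a charm built with the Python Operator Framework can consume these lifecycle services directly. A minimal sketch (illustrative only, not a published operator):

    from ops.charm import CharmBase
    from ops.framework import StoredState
    from ops.main import main

    class ExampleCharm(CharmBase):
        """Illustrative charm using OLM-provided lifecycle services."""

        _stored = StoredState()  # state persisted for the charm by Juju

        def __init__(self, *args):
            super().__init__(*args)
            self._stored.set_default(elections_seen=0)
            self.framework.observe(self.on.leader_elected, self._on_leader_elected)

        def _on_leader_elected(self, event):
            # Juju runs the election; at most one unit is leader at a time.
            if self.unit.is_leader():
                self._stored.elections_seen += 1

    if __name__ == "__main__":
        main(ExampleCharm)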

Open Operator Collection

The world's largest collection of operators all use Juju as their OLM. The Charmhub community emphasizes quality, collaboration and consistency. Publish your own operator and share integration code for other operators to connect to your application.

The Open Operator Manifesto outlines the values of the community and describes the ideal behaviour of operators, to shape contributions and discussions.

Multi-cloud and hybrid operations across ARM and x86 infrastructure

The Juju OLM supports AWS, Azure, Google, Oracle, OpenStack, VMware and bare metal machines, as well as any conformant Kubernetes cluster. Integrate operators across clouds, and across machines and containers, just as easily. A single scenario can include applications on Kubernetes, as well as applications on a range of clouds and bare metal instances, all integrated automatically.

Juju operators support multiple CPU architectures. Connect applications on ARM with applications on x86 and take advantage of silicon-specific optimisations. It is good practice for operators to adapt to their environment and accelerate workloads accordingly.
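
For instance, a charm might detect the CPU architecture at runtime and select an optimised build of its workload. A hypothetical sketch (the package names are invented for illustration):

    import platform

    def workload_build() -> str:
        """Pick a workload build suited to the current silicon."""
        arch = platform.machine()
        if arch in ("aarch64", "arm64"):
            return "myapp-neon"     # hypothetical ARM-optimised build
        if arch == "x86_64":
            return "myapp-avx2"     # hypothetical x86 build using AVX2
        return "myapp-generic"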

Pure Python operators

The Python Operator Framework makes it easy to write an operator. The framework handles all the details of communication between integrated operators, so you can focus on your own application lifecycle management.

Code sharing between operator publishers is simplified, making it much faster to collaborate on distributed systems involving components from many different publishers and upstreams. Your operator is a Python event handler. Lifecycle management, configuration and integration are all events delivered to your charm by the framework.
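
A minimal charm skeleton, assuming the ops package and a hypothetical log-level config option, might look like this:

    from ops.charm import CharmBase
    from ops.main import main

    class MyAppCharm(CharmBase):
        def __init__(self, *args):
            super().__init__(*args)
            # Lifecycle, configuration and integration all arrive as events.
            self.framework.observe(self.on.install, self._on_install)
            self.framework.observe(self.on.config_changed, self._on_config_changed)

        def _on_install(self, event):
            pass  # install and configure the workload here

        def _on_config_changed(self, event):
            # Reconfigure the workload with the new settings.
            log_level = self.config.get("log-level")  # hypothetical option

    if __name__ == "__main__":
        main(MyAppCharm)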

Architecture

The Juju client, server and agent are all written in Golang. The standard Juju packaging includes an embedded database for centralised logging and persistence, so there is no need to manage that database separately.

Operators can be written in any language but we do encourage new authors to use the Python Operator Framework for ease of contribution, support and community participation.

Production grade

The Juju server has built-in support for high availability when scaled out to three instances. It can monitor itself and grow additional instances in the event of failure, within predetermined limits. Juju supports backup, restore, and rolling upgrade operations appropriate for large-scale centralised enterprise grade management and operations systems.

Get started

Our community hangs out at the Charmhub discourse which serves as a combination mailing list and web forum. Keep up with the news and get a feel for operator engineering and usage there. Get the Juju CLI on Windows, macOS or Linux with the install instructions and try the tutorials. All you need is a small K8s cluster, or an Ubuntu machine or VM to run MicroK8s.

Read the documentation for a comprehensive reference of commands and usage.

Contributing

Follow our code and contribution guidelines to learn how to make code changes. File bugs in Launchpad, or ask questions on our Freenode IRC channel or on Mattermost.

Comments
  • Rename state methods and other artefacts from service to application

    Branch 3 in the service to application rename adventure. Here we rename state methods and other artefacts, plus some testing factory methods. As a drive-by, this also deletes some old uniter upgrade steps.

    (Review request: http://reviews.vapour.ws/r/4955/)

  • Implemented a cleanup worker for deleting metrics

    There are three main changes in this PR:

    1. Added cleanup and delete functions for metrics to state, allowing metrics to be deleted
    2. Added an API endpoint for calling the CleanupMetrics function
    3. Added a worker that will periodically call the CleanupMetrics function

    The worker is never invoked (except in tests); that will come in a follow-up PR. The CleanupMetrics function currently just returns an error; in a follow-up it will return a list of the UUIDs that were deleted. The periodic pattern is sketched below.
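
    The worker itself is Go code inside Juju; purely to illustrate the pattern, a periodic cleanup loop might look like this (names invented for the sketch):

        import threading

        def start_cleanup_worker(cleanup, interval_seconds, stop_event):
            """Call cleanup() every interval_seconds until stop_event is set."""
            def loop():
                while not stop_event.wait(interval_seconds):
                    cleanup()  # e.g. invoke the CleanupMetrics endpoint
            worker = threading.Thread(target=loop, daemon=True)
            worker.start()
            return worker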

  • Removed race in history pruner tests.

    There was a report of a race in the statushistorypruner tests (https://bugs.launchpad.net/juju-core/+bug/1492095). This removes the race, which consisted of a variable being written and read across multiple routines.

    (Review request: http://reviews.vapour.ws/r/2592/)

  • Validate LXDProfile for charm store/repo

    Description of change

    The following changes validate the LXDProfile of a charm or bundle when it comes from the charm store.

    They also check that the profile is correctly validated when deploying.

    Notes

    Feedback is welcome on where the validateXXX profile function should live; it's currently duplicated, and it would make sense to have it somewhere central for reuse.

    QA steps

    Ensure that you have the feature flag turned on when bootstrapping:

    export JUJU_DEV_FEATURE_FLAGS=lxd-profile
    

    The cases I checked are the following. I'm sure I probably missed one, so if that's the case, let me know and I'll make sure we cover it as well.

    juju deploy ./testcharms/charm-repo/bundle/lxd-profile/bundle.yaml
    juju deploy ./testcharms/charm-repo/bundle/lxd-profile-fail/bundle.yaml
    juju deploy cs:~juju-qa/bionic/lxd-profile-0
    juju deploy cs:~juju-qa/bionic/lxd-profile-fail-0
    

    Documentation changes

    When deploying a bundle with an lxd-profile configuration that isn't valid, it should show an error to the user explaining what went wrong and potentially how to fix it.

    Bug reference

    TBA

  • Opensuse support

    Description of change

    This change adds support for OpenSUSE Leap (the 42 series) to Juju.

    The main reason for the change is that most of our VNFs run on top of OpenSUSE/SLES, and I was doing some hands-on work evaluating Juju as a generic VNF-M. This PR (and another for juju/utils) is the result of that work.

    With these PRs, Juju users are able to deploy OpenSUSE charms in LXD and on manually provisioned OpenSUSE hosts.

    This PR depends on: https://github.com/juju/utils/pull/277

    QA steps

    Apart from the testing that I added, it is possible to deploy an OpenSUSE charm: https://github.com/marcmolla/juju-OpenSUSE/tree/master/charms/test-opensuseleap

    using a modified OpenSUSE image: https://github.com/marcmolla/juju-OpenSUSE/blob/master/lxd-opensuse-42.2-image.md

    I also describe how I tested the software locally: https://github.com/marcmolla/juju-OpenSUSE/blob/master/test-compiled-Juju.md

    For all of this testing we also need another PR to juju/utils (https://github.com/juju/utils/pull/272)

    Documentation changes

    If this PR is accepted, we should include a reference to OpenSUSE in the general docs.

    Bug reference

    N/A

  • Make proxy settings truly global

    Make proxy settings truly global

    Description of change

    Make proxy settings global. Previously the settings were only applied in the 'ubuntu' user's login shell; with this change the settings are global for all users via /etc/profile.d and, on systems using systemd, are also set for services launched through systemd via /etc/systemd/{system,user}.conf.d/juju-proxy.conf (sketched below).
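
    Conceptually, the change amounts to writing the proxy values into system-wide locations instead of one user's shell profile. An illustrative sketch (the real implementation is Go, and the exact file contents may differ):

        from pathlib import Path

        def write_global_proxy(http_proxy: str, https_proxy: str) -> None:
            # Shell environment for all login shells.
            Path("/etc/profile.d/juju-proxy.sh").write_text(
                f'export http_proxy="{http_proxy}"\n'
                f'export https_proxy="{https_proxy}"\n'
            )
            # Environment for services (and user units) launched by systemd.
            conf = (
                "[Manager]\n"
                f'DefaultEnvironment="http_proxy={http_proxy}" '
                f'"https_proxy={https_proxy}"\n'
            )
            for scope in ("system", "user"):
                conf_dir = Path(f"/etc/systemd/{scope}.conf.d")
                conf_dir.mkdir(parents=True, exist_ok=True)
                (conf_dir / "juju-proxy.conf").write_text(conf)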

    QA steps

    1. Run unit tests
    2. Verify that if the http-proxy option is set, it is reflected in the environment; e.g. services running via systemd should have the proper environment variables set.

    Documentation changes

    This change has to be noted in release notes and docs.

    Bug reference

    https://bugs.launchpad.net/juju/+bug/1666353

  • Do not restart the API server when a new server certificate is made; simply replace the cert

    When machine addresses are updated, a new server certificate is made so that connections over those addresses can be made securely. Instead of restarting the API server to use the new certificate, the net listener is modified to replace the current certificate with the new one without stopping and starting the server. While the certificate is being updated, new connections to the server are disallowed.
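
    The locking pattern, sketched in Python (the real listener is Go; this only illustrates how new connections can be held off during the swap):

        import threading

        class CertHolder:
            """Hold the server certificate; block readers during a swap."""

            def __init__(self, cert):
                self._cert = cert
                self._cond = threading.Condition()
                self._updating = False

            def get(self):
                # Called per new connection; waits out an in-progress update.
                with self._cond:
                    while self._updating:
                        self._cond.wait()
                    return self._cert

            def replace(self, new_cert):
                with self._cond:
                    self._updating = True
                # ... generate/validate the new certificate here ...
                with self._cond:
                    self._cert = new_cert
                    self._updating = False
                    self._cond.notify_all()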

    (Review request: http://reviews.vapour.ws/r/667/)

  • state: introduce automatic multi-environment filtering

    For collections that contain data for multiple environments, getCollection now returns a wrapped collection that automatically filters by environment UUID as required. This reduces the risk of unintended data leakage between environments.

    There are some areas where an actual mgo.Collection is required. These now use the new State.getRawCollection method.

    Now that this is handled automatically, this change also removes several TODOs regarding filtering by environment UUID.

    A later branch will remove all the now-unnecessary docID calls that currently exist in state.
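
    The idea, sketched against a pymongo-style collection in Python (the real code wraps mgo collections in Go, and the field name here is assumed):

        class EnvFilteredCollection:
            """Wrap a collection so every query is scoped to one environment."""

            def __init__(self, raw, env_uuid):
                self._raw = raw          # e.g. a pymongo Collection
                self._env_uuid = env_uuid

            def find(self, query=None):
                scoped = dict(query or {})
                scoped["env-uuid"] = self._env_uuid  # assumed field name
                return self._raw.find(scoped)

            def find_one(self, query=None):
                scoped = dict(query or {})
                scoped["env-uuid"] = self._env_uuid
                return self._raw.find_one(scoped)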

    (Review request: http://reviews.vapour.ws/r/551/)

  • Fixed tests involving "file not found" system messages under Windows.

  • update utils dependency

    This should fix https://bugs.launchpad.net/juju-core/+bug/1654528 by making juju use a set of TLS cipher suites that are compatible with the stock GnuTLS suites provided by precise and trusty.

    Note that this branch is targeted at 1.25 and the utils dependency is in the 1.25 branch of that repo.

  • feature: remove uvtool dependency

    follow up to #6580

    @perrito666 @babbageclunk

    This removes uvtool as a dependency, which should allow kvm containers to work on arm64, amd64, and ppc64el. QA has only been done by me on amd64 with vmaas.

    NB: Since merging develop back into this branch, there are some network updates that break kvm containers. When the rest of the network updates are complete and have landed, it should be fine again. In order to test, once you have checked out the branch you will need to undo those changes so you can QA. Do that with git revert -m 1 37a1a3b87fc54dc7391b4027e385ac48722ddf42 --no-commit

    QA:

    1. juju bootstrap vmaas21 kvm/purego --build-agent
    2. juju add-machine
    3. juju add-machine --series trusty
    4. juju deploy ubuntu --to kvm:0
    5. juju deploy mysql --to kvm:0
    6. juju deploy wordpress --to kvm:1
    7. juju deploy postgresql --to kvm:1 --series xenial
    8. juju add-relation wordpress mysql

    Verify that machines 0 and 1 came up with xenial and trusty respectively. Verify that machines 0/kvm/0 came up with xenial and 0/kvm/1 with trusty. Verify that machine 1/kvm/0 came up with trusty and 1/kvm/1 with xenial. Verify the relation was added... why not create a wordpress blog to be sure.

    1. juju remove-application ubuntu
    2. juju remove-application postgresql

    Verify that the disk images for those are gone by running juju ssh 0 (or 1) and looking in /var/lib/juju/kvm/guests to be sure they were removed.

  • [JUJU-2355] CI: Speed up go generate

    The following attempts to speed up the github/generate workflow. The go generate command takes forever to trawl the codebase, so instead we'll use grep to select the go:generate directives and run them manually.

    We then have the option of parallelizing the generate commands.

    Checklist

    • [x] Comments saying why design decisions were made

    QA steps

    See the github action.

  • [JUJU-2353] Allow repl access for all substrates

    The following allows REPL access on all substrates. Currently this only works on LXD; opening it up to the other substrates is advantageous.

    Checklist

    • [x] Comments saying why design decisions were made

    QA steps

    $ make go-install
    $ juju bootstrap aws test
    $ make juju-dqlite-repl
    

    Bug reference

    https://warthogs.atlassian.net/browse/JUJU-2353

  • 2.9 merge into 3.0

    Forward merge of 2.9 into 3.0

    Conflicts:

        api/controller/caasapplicationprovisioner/client.go
        apiserver/backup_test.go
        apiserver/facades/client/backups/create.go
        apiserver/facades/controller/caasapplicationprovisioner/mock_test.go
        caas/kubernetes/provider/application/application.go
        caas/mocks/application_mock.go
        state/backups/db.go
        worker/caasapplicationprovisioner/application_test.go

  • JUJU-2351: Bootstrap to LXD vm

    This set of changes allows bootstrapping to an LXD VM instead of a container. Most of the work was already done, but it wasn't selecting the right images to bootstrap from. Additionally, there are some issues with finding the running status correctly for a VM. The simple solution is to just implement a retry mechanism with enough back-off, but there might be a better approach to this.

    The tests don't currently pass, but future commits will fix those.

    A simple QA to get up and running is the following:

    $ juju bootstrap lxd --constraints virt-type=virtual-machine
    

    We store a different alias for the VM image independently from the container image, as they're different. LXD itself has an API for remote lookup that doesn't require this to be explicit, but we do for the alias. This means that we suffix all VM aliases with /vm to indicate the difference (see the sketch below). For containers, we leave the alias as is, so that operators can reuse existing deployments without a new fetch.
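
    A sketch of that alias scheme (function and names invented for illustration, not the actual Juju code):

        def image_alias(base_alias: str, virt_type: str) -> str:
            """Suffix VM image aliases with /vm; leave container aliases as-is."""
            if virt_type == "virtual-machine":
                return base_alias + "/vm"
            return base_alias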

    Note: I'm not sold on the full "virtual-machine" and "container" naming, but I'm just passing through what LXD expects.

    Checklist

    If an item is not applicable, use ~strikethrough~.

    • [x] Code style: imports ordered, good names, simple structure, etc
    • [x] Comments saying why design decisions were made
    • [x] Go unit tests, with comments saying what you're testing
    • [ ] Integration tests, with comments saying what you're testing
    • [x] doc.go added or updated in changed packages

    QA steps

    $ juju bootstrap lxd --constraints virt-type=virtual-machine
    

    Documentation changes

    @tmihoc This will now deploy to LXD vms via a virt-type constraint.

    Bug reference

    https://warthogs.atlassian.net/browse/JUJU-2351

  • [JUJU-2339] Remove macaroons for no cs on juju

    Remove the CharmStore macaroons as part of the effort to retire the CharmStore from the client entirely.

    Relates to PRs such as #14968, #14985.

    https://warthogs.atlassian.net/browse/JUJU-2230 -- removing the CharmStore entirely from the client is coming up next.

    QA Steps

    As CharmStore charms are not supported, this doesn't change any behavior.

  • Thread through the context.Context

    This is the start of threading context.Context from the rpc methods through to the actual method call. Currently context.Context is optional in the rpcreflect library, yet it seems like a missed opportunity not to use it. With the new database changes, we would want to be able to cancel a sql transaction. Additionally, we could use the context.Context for opentracing to help debug/diagnose issues at runtime.

    The code makes a lot of changes, but it's all very mechanical.
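
    As a rough Python analogue of the Go change (illustrative only), every facade method would accept a context that allows long-running work, such as a database transaction, to be cancelled:

        import threading

        class Context:
            """Minimal cancellation token, standing in for Go's context.Context."""

            def __init__(self):
                self._cancelled = threading.Event()

            def cancel(self):
                self._cancelled.set()

            def cancelled(self) -> bool:
                return self._cancelled.is_set()

        def facade_method(ctx: Context, args):
            # The rpc layer passes ctx through to the real method call, so the
            # work can be abandoned when the request is cancelled.
            if ctx.cancelled():
                raise RuntimeError("request cancelled")
            # ... begin a transaction, checking ctx between steps ...
            return args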

    Checklist

    If an item is not applicable, use ~strikethrough~.

    • [ ] Code style: imports ordered, good names, simple structure, etc
    • [ ] Comments saying why design decisions were made
    • [ ] Go unit tests, with comments saying what you're testing
    • [ ] Integration tests, with comments saying what you're testing
    • [ ] doc.go added or updated in changed packages

