A tool to build, deploy, and release any environment using System Containers.


Bravetools

Bravetools is an end-to-end System Container management utility. It makes it easy to configure, build, and deploy reproducible environments, whether on a single machine or across a large cluster.

Why use Bravetools

Configurable system images have many advantages, but their use has been limited. In our own development practice, we found that existing tools either did not automate the full lifecycle of a System Container or had a steep learning curve. Here are some improvements that our team has noticed when using Bravetools in development and production:

  • Improved Stability. All software and configurations are installed into your images at build-time. Once your image is launched and tested, you can be confident that any environment launched from that image will function properly.

  • No overheads of a VM. Bravetools runs on LXD. LXD uses Linux containers to offer a user experience similar to virtual machines, but without the expensive overhead. You can run single images on a local machine or scale to thousands of compute nodes.

  • Focus on code not infrastructure. Maintaining and configuring infrastructure is difficult! With applications built and deployed using Bravetools, infrastructure and environments are configured just once. Developers can spend more time creating and improving software and less time managing production environments.


Installing Bravetools

Latest stable binary

To get started using Bravetools:

  1. Download a platform-specific binary, rename it to brave, and add it to your PATH variable:

     Operating System   Binary     Version
     Ubuntu             download   release-1.55
     macOS              download   release-1.55
     Windows 8/10       download   release-1.55

  2. Add your user to the lxd group:

sudo usermod --append --groups lxd $USER
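
For example, on Ubuntu, installing the downloaded binary and picking up the new group membership might look like this (a sketch; /usr/local/bin is just one possible location on your PATH):

chmod +x brave
sudo mv brave /usr/local/bin/
newgrp lxd    # start a shell with the new lxd group applied, instead of logging out and back in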

Install from source

Bravetools can be built from source on any platform that supports Go and LXD.

Ubuntu

Minimum Requirements

  • Operating System
    • Ubuntu 18.04 (64-bit)
  • Hardware
    • 2GB of Memory
  • Software
git clone https://github.com/bravetools/bravetools
cd bravetools
make ubuntu

Add your user to the lxd group:

sudo usermod --append --groups lxd $USER

You may also need to install zfsutils:

sudo apt install zfsutils-linux
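
Bravetools also requires LXD on the host (it is built on LXD). If LXD is not already installed, it is typically available as a snap:

sudo snap install lxd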

If this is your first time setting up Bravetools, run brave init to initialise the required profile, storage pool, and LXD bridge.

Linux

Minimum Requirements

git clone https://github.com/bravetools/bravetools
cd bravetools
make linux

Add your user to the lxd group:

sudo usermod --append --groups lxd $USER

Depending on your Linux distribution, you may also need to install zfs tools to enable storage pool management in Bravetools.

If this is your first time setting up Bravetools, run brave init to initialise the required profile, storage pool, and LXD bridge.

Mac OS

Minimum Requirements

  • Operating System
    • MacOS Mojave (64-bit)
  • Hardware
    • 4GB of Memory
  • Software
git clone https://github.com/bravetools/bravetools
cd bravetools
make darwin

If this is your first time setting up Bravetools, run brave init to initialise the required profile, storage pool, and LXD bridge.

Windows

Minimum Requirements

  • Operating System
    • Windows 8 (64-bit)
  • Hardware
    • 8GB of Memory
  • Software
    • Go
    • Multipass
    • BIOS-level hardware virtualization support must be enabled in the BIOS settings.
git clone https://github.com/beringresearch/bravetools
cd bravetools
go build -ldflags="-s -X github.com/bravetools/bravetools/shared.braveVersion=VERSION" -o brave.exe

Where VERSION reflects the latest stable release of Bravetools, e.g. shared.braveVersion=1.53.
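
For example, for release 1.53 the command becomes:

go build -ldflags="-s -X github.com/bravetools/bravetools/shared.braveVersion=1.53" -o brave.exe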

Vagrant

  1. Start Vagrant VM:
cd vagrant
vagrant up
vagrant ssh

// execute inside Vagrant VM
cd $HOME/workspace/src/github.com/bravetools/bravetools
make ubuntu
brave init

Update Bravetools

To update an existing installation of Bravetools for your platform:

git clone https://github.com/bravetools/bravetools
cd bravetools
make [darwin|ubuntu|linux]

Initialise Bravetools

When Bravetools is installed for the first time, it will set up all required components to connect your host to LXD. This is achieved by running:

$ brave init

brave init will:

  • Create ~/.bravetools directory that stores all your local images, configurations, and a live Unit database

On Mac and Windows platforms:

  • Create a new Multipass instance of Ubuntu 18.04
  • Install snap LXD
  • Enable mounting between host and Multipass

On Linux distributions:

  • Set up a new LXD profile brave
  • Create a new LXD bridge bravebr0
  • Create a new storage pool brave-TIMESTAMP

These steps ensure that Bravetools establishes a connection with the LXD server and runs a self-contained LXD environment that doesn't interfere with any existing user profiles and LXD bridges.
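
On Linux hosts you can inspect what brave init created using the standard LXD client, for example (the storage pool name includes a timestamp, so list the pools rather than guessing the exact name):

lxc profile show brave
lxc network show bravebr0
lxc storage list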

Command Reference

Usage:
  brave [command]

Available Commands:
  base        Build a base unit
  build       Build an image from a Bravefile
  configure   Configure local host parameters such as storage
  deploy      Deploy Unit from image
  help        Help about any command
  images      List images
  import      Import a tarball into local Bravetools image repository
  info        Display workspace information
  init        Create a new Bravetools host
  mount       Mount a directory to a Unit
  remove      Remove a Unit or an Image
  start       Start Unit
  stop        Stop Unit
  umount      Unmount <disk> from UNIT
  units       List Units
  version     Show current bravetools version

Flags:
  -h, --help   help for brave

To get help on any specific command, run:

brave COMMAND -h
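
For example, to see the options for the deploy command:

brave deploy -h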

Quick tour

Here's a toy example showing how to create a simple container configuration, add some useful packages to it, and deploy your image as a service.

Configuration instructions are stored in a Bravefile. Let's create a simple Bravefile that uses the Alpine Edge image and installs python3:

$ touch Bravefile

Populate this Bravefile with a basic configuration, adding the python3 package through the apk package manager:

base:
  image: alpine/edge/amd64
  location: public
packages:
  manager: apk
  system:
  - python3
service:
  image: alpine-example-1.0
  name: alpine-example
  docker: "no"
  version: "1.0"
  ip: ""
  ports: []
  resources:
    ram: 4GB
    cpu: 2
    gpu: "no"

To create an image from this configuration, run:

$ brave build

[alpine-example] IMPORT:  alpine/edge/amd64
[alpine-example] RUN:  [apk update]
fetch http://dl-cdn.alpinelinux.org/alpine/edge/main/x86_64/APKINDEX.tar.gz
...

OK: 56 MiB in 30 packages
Exporting image alpine-example
9691e2cf3a58abd4ca411e8085c3117a

List all local images and confirm successful build:

$ brave images

IMAGE                             CREATED   SIZE  HASH                             
alpine-example-1.0                just now  19MB  9691e2cf3a58abd4ca411e8085c3117a

Finally, we can deploy this image as a container:

$ brave deploy

Importing alpine-example-1.0.tar.gz

Confirm that the service is up and running:

$ brave units

NAME            STATUS   IPV4             DISK  PROXY
alpine-example  Running  eth0:10.0.0.117

Because this is just an LXD container, you can access it through the usual lxc exec command:

$ lxc exec alpine-example python3

Python 3.8.6 (default, Oct  5 2020, 00:23:48) 
[GCC 10.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 
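
In the same way you can open an interactive shell inside the container:

$ lxc exec alpine-example /bin/sh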

This is a very basic example - Bravetools makes it easy to create much more complex System Container environments, abstracting away configuration details such as GPU support, Docker integration, and seamless port-forwarding, to name a few. To learn more about using Bravetools, please refer to our Bravetools Documentation.
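
For illustration only, a service section enabling a few of these options might look something like the sketch below - the exact keys and the ports format should be checked against the Bravetools Documentation, since the "host:container" form used here is an assumption:

service:
  image: alpine-example-1.0
  name: alpine-example
  docker: "yes"          # enable Docker inside the unit
  version: "1.0"
  ip: ""
  ports: ["8080:8080"]   # assumed host:container forwarding format
  resources:
    ram: 4GB
    cpu: 2
    gpu: "yes"           # request GPU passthrough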

Build Documentation

Follow the installation instructions for Jekyll on your platform. To serve the documentation locally, run:

cd docs
bundle exec jekyll serve --trace

and point your browser to http://127.0.0.1:4000/bravetools/.

Comments
  • Brave init

    Brave init

    Hi guys! Thanks for sharing this spectacular tool. I hope that it can grow in the future.

    I followed the Quick Tour in the Readme step by step, but it fails because it never mentions the brave init command.

    Once I ran it, the Quick Tour worked like a charm. Another thing is that we (users) don't know what brave init does and why we need to run it first, or how to undo what brave init does later, etc. Also, I checked the https://bravetools.github.io/ docs and you only mention it one time, explaining only this: Create a new Bravetools host. Can you add more info about init?

    BTW, I think that the Readme is a little messy, because the Quick Tour comes first, then the Installation part, and finally the Command Reference. If you ordered it differently, like:

    • Installation
    • Command Reference
    • Quick Tour

    it would be easier to follow the guides.

    Thanks!!

    EDIT: Even in the https://bravetools.github.io/bravetools/intro/quickstart/ you never mention the brave init.

  • Brave compose

    Brave compose

    Add a new brave compose command which reads a brave-compose.yml file and spins up the system of containers defined in it. Defining the system in one place like this makes it much easier to see/manage large deployments made up of multiple containers. In addition, it allows for a higher level of abstraction by thinking of multiple containers as a single deployment.

    Integration with Bravefiles

    brave-compose.yaml files integrate well with the existing Bravefiles, allowing the option for default unit settings to be loaded from bravefiles. This allows for the brave-compose.yaml file to remain lean and provide a high-level overview of the system. Settings specified in the brave-compose.yaml file will override those in the provided Bravefile (if any).

    Optional Build

    Although the focus of the compose tool is on deployment, there is also an optional build flag in the brave-compose.yaml file, allowing users to build entire systems and deploy them in a single command. All units and images from a build will be cleaned up if an issue is encountered, encouraging treating the set of containers as a single entity.
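
    As a purely illustrative sketch (these field names are hypothetical, not a finalised schema), a brave-compose.yaml could look something like:

    services:
      app:
        bravefile: ./app/Bravefile   # unit defaults loaded from this Bravefile
        build: true                  # optionally build the image before deploying
        name: app-unit               # compose settings override the Bravefile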

    Changes to bravetools

    To make these changes I narrowed the scope of the deployment functions InitUnit and Postdeploy to accept shared.Service structs instead of the whole Bravefile. This change mostly has no effect, but one thing does change - currently the Base unit name is stored in the DB in UnitData. After this change, the name of the actual Image the unit was derived from will be stored instead. This actually makes more sense to me - multiple images could be based on the same base unit, so it's more important to store the actual image a unit is deployed from.

    TODO

    • ~~Example script with example compose.yaml file~~
    • ~~Documentation still needs to be added~~
    • ~~Dependencies between units to determine deployment order~~
  • "brave init" fails on new installation (darwin/arm64)

    Describe the bug: I've downloaded the latest build from Github and ran it for the first time. It resulted in the following error:

    ➜  ~ brave init
    2022/11/28 11:50:21 Initialising a new Bravetools configuration
    Host OS:  darwin
    Backend:  multipass
    Storage (GB):  12
    Memory:  4GB
    Network:  10.0.0.1
    2022/11/28 11:50:21 Initialising Bravetools backend
    launch failed: instance "christian" already exists
    

    To Reproduce: Unclear how to reproduce. I didn't have a Bravefile yet.

    Expected behavior: I expected brave to finish the init phase successfully.

    Environment (please complete the following information):

    • Your operating system name and version: Ventura, 13.0.1 (22A400)
    • Bravetools version: 1.56
    • LXD version: don't know
  • Enable brave tools to reuse existing lxd bridges

    Enable brave tools to reuse existing lxd bridges

    If a bridge already exists, brave tools should reuse it - creating multiple bridges on the same host seems to result in unresolved DNS in containers...

  • Address apk update pause

    Address apk update pause

    Small bugfix to address specific apk update issue.

    Running apk update, unlike most other commands, does not close the DataDone channel, leaving bravetools waiting for a prompt from the user before continuing. To address this, a goroutine calls op.Wait and closes a channel to signal completion to the select call instead.

  • Make remote network and storage names configurable in settings

    Make remote network and storage names configurable in settings

    Current remote settings:

    {
        "name": "bravetools",
        "url": "https://192.168.64.51:8443",
        "protocol": "lxd",
        "public": false,
        "profile": "profile_name"
    }
    

    Suggested remote settings:

    {
        "name": "bravetools",
        "url": "https://192.168.64.51:8443",
        "protocol": "lxd",
        "public": false,
        "storage":  "storage_name",
        "network":  "profile_namebr0",
        "profile": "profile_name"
    }
    

    This would allow fine-grained control over remote deployments and wouldn't force users to have identical local and remote configurations.

  • Remote deployment

    Remote deployment

    Enables users to specify the remote to deploy to using the unit name. The first section of the unit name, preceding the colon ":", will be taken as the remote name. If no remote is specified, the default bravetools local remote will be used.

    For example, the following Bravefile snippet will deploy a unit called "example" at the "test" remote:

    service:
        name: test:example
        ...
    

    Also included in this is a way to specify which LXD profile to deploy the unit using, since we can no longer assume the local profile will exist on remotes. This profile can be specified in the Bravefile Service section or using a CLI flag brave deploy --profile [name]. If no profile is specified, the remote's default profile saved at ~/.bravetools/remotes will be used.

  • Command detach

    Command detach

    Currently, if a command does not exit, bravetools will wait indefinitely for it to complete. The usual ways of running a shell command in the background don't seem to work when using bravetools. For example, using sh -c ... & does not detach, nor do various uses of nohup ... &. If a program does not provide a flag to run as a daemon process, it cannot be used from bravetools without freezing the build/deploy.

    Instances of programs that don't exit may include web-servers etc. that are designed to be run in the background. This means that something like starting a web-server using a post-deploy command would not work if it has no --daemon flag, like Flask.

    Adding a new field, "detach", to the Bravefile run command would enable bravetools to handle such cases and any others where running programs in the background is desired. For certain scenarios this could greatly simplify deployment and make it feel less like you are fighting bravetools to get it to do something that would have been easy with a shell script.

    Here's what the new field would look like:

    run:
      - command: flask
        args:
          - run
        detach: true
    
  • Cleanup build

    Cleanup build

    Currently, if a build is interrupted, LXD images imported during the build process are often left on the server. These images will cause a conflict the next time a build is attempted and use up space needlessly in the meantime. It would be nice if bravetools cleaned up after itself.

    These changes aim to ensure that interrupted builds are correctly cleaned up.

    The following steps are taken to ensure this:

    • Image fingerprints needed for the build process are recorded before starting the build. Upon cancellation, all new fingerprints are deleted to roll the system back to its pre-build state.
    • A separate goroutine intercepts SIGINT, sets an abort-build flag, and cancels the context.Context.
    • After every stage of the build, the abort flag is checked.
    • Functions called during the build now accept a context.Context argument and check it at appropriate times to abort the build when it is safe to do so.

    This change does its best to ensure that builds are only cancelled when safe to do so - for example, cancellation is not allowed during image publishing.

    ~~For now the idea to check the diff of the image fingerprints mimics existing behavior for retrieving the fingerprint used in bravetools. I have some ideas on how to improve this and make the fingerprint check more accurate later - this would allow for more granular cleanups of just the images created during the build.~~

    • This is now implemented
  • Enable operating on multiple Units/Images with a brave command

    Enable operating on multiple Units/Images with a brave command

    It would be really convenient to be able to use a single command to control multiple units or images at once. Currently, if I wanted to stop all units to do some debugging and then start them again later, this takes several repetitive commands - one per unit to shut them down, and one per unit to start them again. Being able to do this in just two commands (start and stop) would be much easier.

    brave stop unit1 unit2 unit3
    brave start unit1 unit2 unit3

    vs

    brave stop unit1
    brave stop unit2
    brave stop unit3
    
    brave start unit1
    brave start unit2
    brave start unit3
    

    Despite this tool's focus on turning declarative Bravefiles into systems, I think this would be a nice addition to the usability of the command line client, and would help manage larger systems consisting of multiple units. This feature is present in other clients of similar systems, such as the LXD client (lxc) and the Docker CLI, so I think users will be familiar with it and may even expect it.

  • Accept path to composefile

    Accept path to composefile

    Accept a path to a composefile as well as a path to a directory in the CLI command brave compose - more intuitive for the user.

    Closes https://github.com/bravetools/bravetools/issues/142

  • Device IP not within LXD bridge subnet

    Device IP not within LXD bridge subnet

    Running the current master branch (https://github.com/bravetools/bravetools/commit/2f44c394548a47190b941b9df53e611cfd5c3ece) on Ubuntu Linux, I followed the uninstall instructions and ran brave init.

    Attempting to deploy a unit to 10.0.0.5 results in an error - this used to work before.

    Error message:

    Device IP address "10.0.0.5" not within network "benjaminbr0" subnet
    

    The output of lxc network info benjaminbr0 shows that the bridge IP is 10.131.219.1/24. Running brave deploy --ip 10.131.219.10 works fine.

    I see that there is an option to specify the network bridge IP in brave init, which I never used before - previously I relied on the bridge always being on 10.0.0.1. I suppose that going forward I should remember to set this option to the desired IP on brave init, to avoid having to adjust to the randomly selected bridge IP.

  • Remote Bravetools LXD profile init

    Remote Bravetools LXD profile init

    For the moment Bravetools creates a local LXD profile with resources for itself using Exec commands. A bravetools profile must exist at any deploy location bravetools uses.

    Using the LXD API instead would be easier and also opens up opportunities to create a bravetools-managed profile remotely.

  • brave compose subcommands

    brave compose subcommands

    Currently the brave compose command does a build and deploy of a system. This is convenient in many places, but additional flexibility could be gained by adding separate subcommands that do just one of these steps.

    Suggested subcommands for this would be:

    • brave compose build
    • brave compose deploy

    This is more flexible, as it allows the tool to execute just one step at a time.

  • Brave base builds a new image

    Brave base builds a new image

    Instead of taking a remote image and importing it into bravetools untouched, brave base spins up a container based on that remote image and creates a new image from that container.

    Not only is this unnecessary/slower, it means that the fingerprint will no longer match the remote LXD image as it is actually a snapshot of a different state.

    A side-effect of this is that running brave base multiple times results in images with different fingerprints each time.

    It would be better to copy the remote image into bravetools store unchanged.

  • Path to Bravefile dir vs. Path to Bravefile in brave compose

    Path to Bravefile dir vs. Path to Bravefile in brave compose

    Right now, brave compose expects the path to a Bravefile to be provided under its services section. Should we change this to a path of a directory containing a Bravefile?

Natural-deploy - A natural and simple way to deploy workloads or anything on other machines.

Natural Deploy Its Go way of doing Ansibles: Motivation: Have you ever felt when using ansible or any declarative type of program that is used for dep

Jan 3, 2022
Kubedock is a minimal implementation of the docker api that will orchestrate containers on a Kubernetes cluster, rather than running containers locally.

Kubedock Kubedock is an minimal implementation of the docker api that will orchestrate containers on a kubernetes cluster, rather than running contain

Nov 11, 2022
Open Source runtime scanner for Linux containers (LXD), It performs security audit checks based on CIS Linux containers Benchmark specification

lxd-probe Scan your Linux container runtime !! Lxd-Probe is an open source audit scanner who perform audit check on a linux container manager and outp

Dec 26, 2022
`runenv` create gcloud run deploy `--set-env-vars=` option and export shell environment from yaml file.

runenv runenv create gcloud run deploy --set-env-vars= option and export shell environment from yaml file. Motivation I want to manage Cloud Run envir

Feb 10, 2022
Docker-NodeJS - Creating a CI/CD Environment for Serverless Containers on Google Cloud Run

Creating a CI/CD Environment for Serverless Containers on Google Cloud Run Archi

Jan 8, 2022
A simple Go app and GitHub workflow that shows how to use GitHub Actions to test, build and deploy a Go app to Docker Hub

go-pipeline-demo A repository containing a simple Go app and GitHub workflow that shows how to use GitHub Actions to test, build and deploy a Go app t

Nov 17, 2021
Build and deploy Go applications on Kubernetes

ko: Easy Go Containers ko is a simple, fast container image builder for Go applications. It's ideal for use cases where your image contains a single G

Jan 5, 2023
Use Terraform to build and deploy configurations for Juniper SRX firewalls.

Juniper Terraform - SRX Overview The goal of this project is to provide an example method to interact with Juniper SRX products with Terraform. ?? Ter

Mar 16, 2022
Build and run Docker containers leveraging NVIDIA GPUs

NVIDIA Container Toolkit Introduction The NVIDIA Container Toolkit allows users to build and run GPU accelerated Docker containers. The toolkit includ

Jan 7, 2023
Christmas Hack Day Project: Build an Kubernetes Operator to deploy Camunda Cloud services

Camunda Cloud Operator Christmas Hack Day Project (2021): Build an Kubernetes Operator to deploy Camunda Cloud services Motiviation / Idea We currentl

May 18, 2022
A Go based deployment tool that allows the users to deploy the web application on the server using SSH information and pem file.

A Go based deployment tool that allows the users to deploy the web application on the server using SSH information and pem file. This application is intend for non tecnhincal users they can just open the GUI and given the server details just deploy.

Oct 16, 2021
Deploy, manage, and secure applications and resources across multiple clusters using CloudFormation and Shipa

CloudFormation provider Deploy, secure, and manage applications across multiple clusters using CloudFormation and Shipa. Development environment setup

Feb 12, 2022
The CLI tool glueing Git, Docker, Helm and Kubernetes with any CI system to implement CI/CD and Giterminism

___ werf is an Open Source CLI tool written in Go, designed to simplify and speed up the delivery of applications. To use it, you need to describe the

Jan 4, 2023
This repository is where I'm learning to write a CLI using Go, while learning Go, and experimenting with Docker containers and APIs.

CLI Project This repository contains a CLI project that I've been working on for a while. It's a simple project that I've been utilizing to learn Go,

Dec 12, 2021
General-purpose actions for test and release in Go

go-actions This repository provides general-purpose actions for Go. setup This action runs actions/setup-go with actions/cache. For example, jobs: l

Nov 28, 2021
Flux is a tool for keeping Kubernetes clusters in sync with sources of configuration, and automating updates to configuration when there is new code to deploy.

Flux is a tool for keeping Kubernetes clusters in sync with sources of configuration (like Git repositories), and automating updates to configuration when there is new code to deploy.

Jan 8, 2023
Deploy 2 golang aws lambda functions using serverless framework.

Deploy 2 golang aws lambda functions using serverless framework.

Jan 20, 2022
Bubbly is an open-source platform that gives you confidence in your continuous release process.

Bubbly Bubbly - Release Readiness in a Bubble Bubbly emerged from a need that many lean software teams practicing Continuous Integration and Delivery

Nov 29, 2022
A helm v3 plugin to get values from a previous release

helm-val helm-val is a helm plugin to fetch values from a previous release. Getting started Installation To install the plugin: $ helm plugin install

Dec 11, 2022