DevSpace - The Fastest Developer Tool for Kubernetes ⚡ Automate your deployment workflow with DevSpace and develop software directly inside Kubernetes.

Website • Quickstart • Examples • Documentation • Blog • Twitter

Build Status: Passing • Latest Release • License: Apache-2.0 • Total Downloads (GitHub Releases) • NPM Installs per Month

Join us on Slack!

Client-Only Developer Tool for Cloud-Native Development with Kubernetes

  • Build, test and debug applications directly inside Kubernetes
  • Develop with hot reloading: update your running containers without rebuilding images or restarting them
  • Unify deployment workflows within your team and across dev, staging and production
  • Automate repetitive tasks for image building and deployment

DevSpace Intro

DevSpace Compatibility


⭐️ Do you like DevSpace? Support the project with a star ⭐️


Contents


Why DevSpace?

Building modern, distributed and highly scalable microservices with Kubernetes is hard - and it is even harder for large teams of developers. DevSpace is the next-generation tool for fast cloud-native software development.

Standardize & Version Your Workflows

DevSpace allows you to store all your workflows in one declarative config file: devspace.yaml

  • Codify workflow knowledge about building images, deploying your project and its dependencies etc.
  • Version your workflows together with your code (i.e. you can get any old version up and running with just a single command)
  • Share your workflows with your team mates

Let Everyone on Your Team Deploy to Kubernetes

DevSpace helps your team standardize deployment and development workflows without requiring everyone on your team to become a Kubernetes expert.

  • The DevOps and Kubernetes experts on your team configure DevSpace via devspace.yaml and simply commit it via git
  • When other developers check out the project, they only need to run devspace deploy (which includes image building and the deployment of related projects) to get a running instance of the project
  • The DevSpace configuration is highly dynamic: config variables make it easy to keep one base configuration while still allowing differences among developers (e.g. different sub-domains for testing)
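For example, a config variable could look like this (names here are illustrative; the full vars syntax appears in the configuration examples further down):

```yaml
# Illustrative sketch: a per-developer sub-domain as a config variable
vars:
- name: DEV_SUBDOMAIN                  # referenced as ${DEV_SUBDOMAIN} elsewhere in the config
  question: Which sub-domain should be used for testing?
```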

Giving everyone on your team on-demand access to a Kubernetes cluster is a challenging problem for system administrators and infrastructure managers. If you want to efficiently share dev clusters for your engineering team, take a look at www.loft.sh.


Speed Up Cloud-Native Development

Instead of rebuilding images and redeploying containers, DevSpace allows you to hot reload running containers while you are coding:

  • Simply edit your files with your IDE and see how your application reloads within the running container.
  • The high-performance, bi-directional file synchronization detects code changes immediately and syncs files between your local dev environment and the containers running in Kubernetes
  • Stream logs, connect debuggers or open a container terminal directly from your IDE with just a single command.

Automate Repetitive Tasks

Deploying and debugging services with Kubernetes requires a lot of knowledge and forces you to repeatedly run commands like kubectl get pod and copy pod ids back and forth. Stop wasting time and let DevSpace automate the tedious parts of working with Kubernetes:

  • DevSpace lets you build multiple images in parallel, tag them automatically, and deploy your entire application (including its dependencies) with just a single command
  • Let DevSpace automatically start port-forwarding and log streaming, so you don't have to constantly copy and paste pod ids or run 10 commands to get everything started.

Works with Any Kubernetes Cluster

DevSpace is battle tested with many Kubernetes distributions including:

  • local Kubernetes clusters like minikube, k3s, MicroK8s, kind
  • managed Kubernetes clusters in GKE (Google Cloud), EKS (Amazon Web Service), AKS (Microsoft Azure), Digital Ocean
  • self-managed Kubernetes clusters created with Rancher

DevSpace also lets you switch seamlessly between clusters and namespaces. You can work with a local cluster as long as it is sufficient; if things get more advanced, you need cloud power like GPUs, or you simply want to share a complex system such as Kafka with your team, just tell DevSpace to use a remote cluster by switching your kube-context and continue working.



Architecture & Workflow

DevSpace Workflow

DevSpace runs as a single-binary CLI tool directly on your computer and, ideally, you use it straight from the terminal within your IDE. DevSpace does not require a server-side component, as it communicates directly with your Kubernetes cluster using your kube-context, just like kubectl.


Features

Stop wasting time running the same build and deploy commands over and over again. Let DevSpace automate your workflow and build cloud-native applications directly inside Kubernetes.

Automated Image Building with devspace build
  • Customizable Build Process supporting Docker, kaniko or even custom scripts
  • Parallel Image Building to save time when multiple Dockerfiles have to be built
  • Automatic Image Tagging according to custom tag schema (e.g. using timestamp, commit hash or random strings)
  • Automatic Push to any public or private Docker registry (authorization via docker login my-registry.tld)
  • Automatic Configuration of Pull Secrets within the Kubernetes cluster
  • Smart Caching that skips images which do not need to be rebuilt
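As a small sketch of the tagging feature (registry and image names are placeholders; the tag variables are the pre-defined ones shown in the configuration examples below):

```yaml
# Illustrative: build images in parallel and tag them per commit + timestamp
images:
  app:
    image: my-registry.tld/app
    tags:
    - dev-${DEVSPACE_GIT_COMMIT}-${DEVSPACE_TIMESTAMP}
```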

DevSpace Image Building Process

Automated Deployment with devspace deploy
  • Automatic Image Building for images required in the deployment process
  • Customizable Deployment Process supporting kubectl, helm, kustomize and more
  • Multi-Step Deployments to deploy multiple application components (e.g. 1. webserver, 2. database, 3. cache)
  • Efficient Microservice Deployments by defining dependencies between projects (even across git repositories)
  • Smart Caching that skips deployments which do not need to be redeployed
  • Easy Integration into CI/CD Tools with non-interactive mode

DevSpace Deployment Process

Efficient In-Cluster Development with devspace dev
  • Hot Reloading that updates your running containers without restarting them (whenever you change a line of code)
  • Fast + Reliable File Synchronization to keep all files in sync between your local workspace and your containers
  • Port Forwarding that lets you access services and pods on localhost and allows you to attach debuggers with ease
  • Multi-Container Log Streaming that lets you stream the logs of multiple containers at once (+ color-coded prefix)
  • Terminal Proxy that opens automatically and lets you run commands in your pods directly from your IDE terminal

DevSpace Development Process

Feature-Rich Localhost UI with devspace ui
  • Graphical UI for streaming logs, opening interactive terminals, starting port-forwarding and more
  • Runs 100% on localhost: uses current kube-context, no server-side installation required

DevSpace Localhost UI Demo


Convenience Commands for Kubernetes
  • Quick Pod Selection eliminates the need to copy & paste pod names, namespaces etc. » Shows a "dropdown selector" for pods directly in the CLI when running one of these commands:
    • devspace enter to open an Interactive Terminal Session
    • devspace logs / devspace logs -f for Fast, Real-Time Logs (optionally streaming new logs)
    • devspace sync for quickly starting a Bi-Directional, Real-Time File Synchronization on demand
  • Automatic Issue Analysis via devspace analyze reporting crashed containers, missing endpoints, scheduling errors, ...
  • Fast Deletion of Deployments using devspace purge (deletes all helm charts, manifests etc. defined in the config)
  • Context Management via:
    • devspace use context shows a list of contexts (select to set current kube-context)
    • devspace use namespace shows a list of namespaces (select to set default namespace for current context)
    • devspace remove context shows a list of contexts (select to remove a kube-context)

Powerful Configuration
  • Declarative Configuration File that can be versioned and shared just like the source code of your project (e.g. via git)
  • Config Variables which allow you to parameterize the config and share a unified config file with your team
  • Config Overrides for overriding Dockerfiles or ENTRYPOINTs (e.g. to separate development, staging and production)
  • Hooks for executing custom commands before or after each build and deployment step
  • Multiple Configs for advanced deployment scenarios

Lightweight & Easy to Setup
  • Client-Only Binary (optional plugin for Loft for cluster sharing + multi-tenancy)
  • Standalone Executable for all platforms with no external dependencies and fully written in Golang
  • Automatic Config Generation from existing Dockerfiles, Helm chart or Kubernetes manifests (optional)
  • Automatic Dockerfile Generation (optional)

Loft.sh Plugin for Easy Namespace & Virtual Cluster Provisioning

DevSpace provides a plugin for Loft which allows users to run commands such as devspace create space or devspace create vcluster for creating namespaces and virtual Kubernetes clusters in shared dev clusters.

Loft is a server-side solution for Kubernetes multi-tenancy and efficient cluster sharing which provides:

  • On-Demand Namespace Creation & Isolation with automatic RBAC, network policies, pod security policies etc.
  • Graphical UI for managing clusters, cluster users and user permissions (resource limits etc.)
  • Advanced Permission System that automatically enforces user limits via resource quotas, admission controllers etc.
  • Fully Automatic Context Configuration on the machines of all cluster users with secure access token handling
  • 100% Pure Kubernetes and nothing else! Works with any Kubernetes cluster.

For more information and installation instructions for Loft, see the Loft Documentation


Quickstart

1. Install DevSpace

via NPM
npm install -g devspace
via Yarn
yarn global add devspace
via brew
brew install devspace
via Mac Terminal
# AMD64 / Intel
curl -s -L "https://github.com/loft-sh/devspace/releases/latest" | sed -nE 's!.*"([^"]*devspace-darwin-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o devspace && chmod +x devspace;
sudo install devspace /usr/local/bin;

# ARM64 / Silicon Mac
curl -s -L "https://github.com/loft-sh/devspace/releases/latest" | sed -nE 's!.*"([^"]*devspace-darwin-arm64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o devspace && chmod +x devspace;
sudo install devspace /usr/local/bin;
via Linux Bash
# AMD64
curl -s -L "https://github.com/loft-sh/devspace/releases/latest" | sed -nE 's!.*"([^"]*devspace-linux-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o devspace && chmod +x devspace;
sudo install devspace /usr/local/bin

# ARM64
curl -s -L "https://github.com/loft-sh/devspace/releases/latest" | sed -nE 's!.*"([^"]*devspace-linux-arm64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o devspace && chmod +x devspace;
sudo install devspace /usr/local/bin
via Windows Powershell
md -Force "$Env:APPDATA\devspace"; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12';
Invoke-WebRequest -UseBasicParsing ((Invoke-WebRequest -URI "https://github.com/loft-sh/devspace/releases/latest" -UseBasicParsing).Content -replace "(?ms).*`"([^`"]*devspace-windows-amd64.exe)`".*","https://github.com/`$1") -o $Env:APPDATA\devspace\devspace.exe;
$env:Path += ";" + $Env:APPDATA + "\devspace";
[Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User);

If you get the error that Windows cannot find DevSpace after installing it, restart your computer so that the changes to the PATH variable are applied.
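The macOS and Linux one-liners above scrape GitHub's latest-release page for the platform-specific download URL. The sed extraction can be sanity-checked offline against a canned release link (the version number here is made up):

```shell
# Demonstrate the sed pattern from the install one-liners above
# against a canned HTML snippet instead of a live request
html='<a href="/loft-sh/devspace/releases/download/v5.0.0/devspace-linux-amd64">'
echo "$html" | sed -nE 's!.*"([^"]*devspace-linux-amd64)".*!https://github.com\1!p'
# prints https://github.com/loft-sh/devspace/releases/download/v5.0.0/devspace-linux-amd64
```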


2. Choose a Project

Project Command
Node.js git clone https://github.com/loft-sh/devspace-quickstart-nodejs && cd devspace-quickstart-nodejs
Python git clone https://github.com/loft-sh/devspace-quickstart-python && cd devspace-quickstart-python
Java git clone https://github.com/loft-sh/devspace-quickstart-java && cd devspace-quickstart-java
Ruby git clone https://github.com/loft-sh/devspace-quickstart-ruby && cd devspace-quickstart-ruby
Golang git clone https://github.com/loft-sh/devspace-quickstart-golang && cd devspace-quickstart-golang
PHP git clone https://github.com/loft-sh/devspace-quickstart-php && cd devspace-quickstart-php
ASP.NET git clone https://github.com/loft-sh/devspace-quickstart-asp-dotnet && cd devspace-quickstart-asp-dotnet
Want to use DevSpace with your own project?
cd /path/to/my/project/root

If you are using DevSpace for the first time, we recommend getting started with one of the demo projects listed above.


3. Initialize Your Project

Initializing a project will create the configuration file devspace.yaml which tells DevSpace how to deploy your project.

devspace init

4. Start Development

Tell DevSpace which namespace to use and start the development mode:

devspace use namespace my-namespace  # will be created by DevSpace if it does not exist
devspace dev

As soon as the terminal opens up, you can start your application:

# Node.js
npm start

# Python
python main.py

# Ruby
bundle exec rails server -p 3000 -b 0.0.0.0

# Golang
go run main.go

# Java
mvn package -T 1C -U -Dmaven.test.skip=true  # or: gradle build
java -jar target/.../my.jar

# .NET
dotnet run

# PHP
# Your app should be running already but you can still run:
php ...
composer ...

You can now:

  • Access your application via http://localhost:PORT in your browser
  • Edit your source code files and DevSpace will automatically synchronize them to the containers running in Kubernetes
  • Use a hot reloading tool like nodemon and your application will automatically reload when you edit source code files

5. Open The Development UI

When running devspace dev, DevSpace starts a client-only UI for Kubernetes. You can see that in the output of devspace dev which should contain a log line similar to this one:

#########################################################
[info]   DevSpace UI available at: http://localhost:8090
#########################################################

By default, DevSpace starts the development UI on port 8090 but if the port is already in use, it will use a different port.

You can access the development UI in one of two ways:

  • open the link from your devspace dev logs in the browser, e.g. http://localhost:8090
  • run the command devspace ui (e.g. in a separate terminal parallel to devspace dev)

Once the UI is open in your browser, it will look similar to this screenshot:

DevSpace Localhost UI

Follow this guide to learn more about the functionalities of the DevSpace UI for Kubernetes development.


6. Deploy

Once your project is initialized, you can deploy it, e.g. using a production profile:

devspace deploy -p production

The -p / --profile flag tells DevSpace to apply a certain profile defined in your devspace.yaml. A profile changes the base configuration by, for example, applying config patches. This allows you to have one base configuration and adapt it for different deployment targets and environments (e.g. dev as base config and a profile for production).
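As a sketch, such a profile could look like this (image names are placeholders; the same patch mechanism appears in the configuration examples below):

```yaml
# Illustrative: a 'production' profile that swaps in the production image
profiles:
- name: production
  patches:
  - op: replace
    path: images.default.image
    value: my-registry.tld/my-app-prod
```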

Having issues? Take a look at the Troubleshooting Guides and learn how to fix common issues.


7. Learn more

Follow these links to learn more about how to use DevSpace:

Useful Commands for Development

Command           Important Flags / Notes
devspace dev
Starts the development mode
-b • Rebuild images (force)
-d • Redeploy everything (force)
-i • Interactive mode (overrides ENTRYPOINT with [sleep, 999999] and starts interactive terminal session)
devspace ui
Opens the localhost development UI
devspace open
Opens your application after starting port-forwarding or generating an ingress
devspace enter
Opens a terminal session for a container
devspace enter -- [command]
Runs a command inside a container
devspace logs
Prints the logs of a container
-f • Stream logs (follow/attach)
devspace analyze
Analyzes your namespace for issues
devspace build
Build, tag and push images (no deploy)
-t [TAG] • Use specified [TAG] to tag all images
devspace cleanup images
Deletes old images (locally, built by DevSpace)
This is very useful after you built a lot of images and your local Docker daemon runs out of space (error: no space left on device)
devspace attach
Attaches to a running container
Requires stdin and tty to be true
devspace use namespace [NAME]
Switch to a different namespace
If you do not provide a [NAME], DevSpace will show a selector with a list of available namespaces.
devspace use context [NAME]
Switch to a different kube-context
If you do not provide a [NAME], DevSpace will show a selector with a list of available kube-contexts.

Configuration Examples

You can configure DevSpace with the devspace.yaml configuration file that should be placed within the root directory of your project. The general structure of a devspace.yaml looks like this:

# File: ./devspace.yaml
version: {config-version}

images:                 # DevSpace will build these images in parallel and push them to the respective registries
  {image-a}: ...        # tells DevSpace how to build image-a
  {image-b}: ...        # tells DevSpace how to build image-b
  ...

deployments:            # DevSpace will deploy these [Helm charts | manifests | ... ] one after another
  - {deployment-1}      # could be a Helm chart
  - {deployment-2}      # could be a folder with kubectl manifests
  ...

dev:                    # Special config options for `devspace dev`
  ports: ...            # Configure port-forwarding
  open: ...             # Configure auto-open for opening URLs after starting development mode
  sync: ...             # Configure file synchronization
  terminal: ...         # Customize the terminal to be opened
  logs: ...             # Configure multi-container log streaming
  replacePods: ...      # Replace pods during development mode
  autoReload: ...       # Tells DevSpace when to redeploy (e.g. when a manifest file has been edited)

dependencies:           # Tells DevSpace which related projects should be deployed before deploying this project
  - {dependency-1}      # Could be another git repository
  - {dependency-2}      # Could point to a path on the local filesystem
  ...

vars:                   # Make your config dynamic and easier to share (ask a question if env var is not defined)
  - name: DOMAIN_NAME   # Will be used as ${DOMAIN_NAME} in config
    question: Which hostname should we use for the ingress?

profiles:               # Configure different profiles (e.g. dev, staging, prod, debug-backend)
  - name: debug-backend
    patches:            # Change the config with patches when this profile is active
      - op: replace
        path: images.default.entrypoint
        value: [npm, run, debug]

commands:               # Custom commands: define reusable commands and run them via: devspace run [command-name]
  - name: debug-backend # The best way to share your workflows with other team mates
    command: devspace dev -i --profile=debug-backend

hooks:                  # Customize all workflows using hooks
  - command: echo
    args:
      - "before image building"
    when:
      before:
        images: all
See an example of a devspace.yaml config file
# File: ./devspace.yaml
version: v1beta10

images:
  backend:                              # Key 'backend' = Name of this image
    image: my-registry.tld/image1       # Registry and image name for pushing the image

deployments:
- name: database                        # A deployment for a postgresql database
  kubectl:                              # Deploy using kubectl
    manifests:
    - postgresql/statefulset.yaml
    - postgresql/service.yaml
- name: quickstart-nodejs               # A second deployment
  helm:                                 # Deploy using Helm
    chart:                              # Helm chart to be deployed
      name: component-chart             # DevSpace component chart is a general-purpose Helm chart
      version: 0.1.3
      repo: https://charts.devspace.sh
    values:                             # Override Values for chart (can also be set using valuesFiles option)
      containers:                       # Deploy these containers with this general-purpose Helm chart
      - image: my-registry.tld/image1   # Image of this container
        resources:
          limits:
            cpu: "400m"                 # CPU limit for this container
            memory: "500Mi"             # Memory/RAM limit for this container
      service:                          # Expose this component with a Kubernetes service
        ports:                          # Array of container ports to expose through the service
        - port: 3000                    # Exposes container port 3000 on service port 3000

dev:
  ports:
    imageName: backend
    forward:
    - port: 3000
    - port: 8080
      remotePort: 80
  open:
  - url: http://localhost:3000/login
  sync:
  - imageName: backend
    localSubPath: ./src
  terminal:
    imageName: backend
  replacePods:
  - imageName: backend
    replaceImage: loftsh/javascript:latest

dependencies:
- source:
    git: https://github.com/my-api-server
- source:
    path: ../my-auth-server

The following sections show code snippets with example sections of a devspace.yaml for certain use cases.

Configure Image Building

Build images with Docker
# File: ./devspace.yaml
images:
  auth-server:
    image: dockerhub-username/my-auth-server    # Push to Docker Hub (no registry hostname required) => uses ./Dockerfile by default
    createPullSecret: true                      # Create a Kubernetes pull secret for this image before deploying anything
  webserver:
    image: myregistry.tld/username/my-webserver # Push to private registry
    createPullSecret: true
    dockerfile: ./webserver/Dockerfile          # Build with --dockerfile=./webserver/Dockerfile
    context: ./webserver                        # Build with --context=./webserver
  database:
    image: another-registry.tld/my-image        # Push to another private registry
    createPullSecret: true
    dockerfile: ./db/Dockerfile                 # Build with --dockerfile=./db/Dockerfile
    context: ./db                               # Build with --context=./db
    # The following lines define custom tag schemata for this image
    tags:
    - devspace-${DEVSPACE_GIT_COMMIT}-######

Take a look at the documentation for more information about configuring builds with Docker.

Build images with kaniko (inside a Kubernetes pod)
# File: ./devspace.yaml
images:
  auth-server:
    image: dockerhub-username/my-auth-server    # Push to Docker Hub (no registry hostname required) => uses ./Dockerfile by default
    build:
      kaniko:                                   # Build this image with kaniko
        cache: true                             # Enable caching
        insecure: false                         # Allow kaniko to push to an insecure registry (e.g. self-signed SSL certificate)
  webserver:
    image: myregistry.tld/username/my-webserver # This image will be built using Docker with kaniko as fallback if Docker is not running
    createPullSecret: true
    dockerfile: ./webserver/Dockerfile          # Build with --dockerfile=./webserver/Dockerfile
    context: ./webserver                        # Build with --context=./webserver

Take a look at the documentation for more information about building images with kaniko.

Build images with custom commands and scripts
# File: ./devspace.yaml
images:
  auth-server:
    image: dockerhub-username/my-auth-server    # Push to Docker Hub (no registry hostname required) => uses ./Dockerfile by default
    build:
      custom:
        command: "./scripts/builder"
        args: ["--some-flag", "flag-value"]
        imageFlag: "image"
        onChange: ["./Dockerfile"]
  webserver:
    image: myregistry.tld/username/my-webserver # This image will be built using Docker with kaniko as fallback if Docker is not running
    createPullSecret: true
    dockerfile: ./webserver/Dockerfile          # Build with --dockerfile=./webserver/Dockerfile
    context: ./webserver                        # Build with --context=./webserver

Take a look at the documentation for more information about using custom build scripts.

Configure Deployments

Deploy Component Helm Chart
# File: ./devspace.yaml
deployments:
- name: quickstart-nodejs
  helm:
    componentChart: true
    values:
      containers:
      - image: my-registry.tld/image1
        resources:
          limits:
            cpu: "400m"
            memory: "500Mi"

Take a look at the documentation for more information about deploying components with the DevSpace component chart.

Deploy Helm charts
# File: ./devspace.yaml
deployments:
- name: default
  helm:
    chart:
      name: redis
      version: "6.1.4"
      repo: https://kubernetes-charts.storage.googleapis.com

Take a look at the documentation for more information about deploying Helm charts.

Deploy manifests with kubectl
# File: ./devspace.yaml
deployments:
- name: my-nodejs-app
  kubectl:
    manifests:
    - manifest-folder/
    - some-other-manifest.yaml

Take a look at the documentation for more information about deploying manifests with kubectl.

Deploy manifests with kustomize
# File: ./devspace.yaml
deployments:
- name: my-deployment
  kubectl:
    kustomize: true
    manifests:
    - my-manifests/
    - more-manifests/

Take a look at the documentation for more information about deploying manifests with kustomize.

Define multiple deployments in one project
# File: ./devspace.yaml
deployments:
- name: my-deployment
  kubectl:
    manifests:
    - manifest-folder/
    - some-other-manifest.yaml
- name: my-cache
  helm:
    chart:
      name: redis
      version: "6.1.4"
      repo: https://kubernetes-charts.storage.googleapis.com

DevSpace processes all deployments of a project according to their order in the devspace.yaml. You can combine deployments of different types (e.g. Helm charts and manifests).

Take a look at the documentation to learn more about how DevSpace deploys projects to Kubernetes.

Define dependencies between projects (e.g. to deploy microservices)
# File: ./devspace.yaml
dependencies:
- source:
    git: https://github.com/my-api-server
- source:
    git: https://my-private-git.tld/my-auth-server
- source:
    path: ../my-auth-server
  profile: production

Before deploying a project, DevSpace resolves all dependencies and builds a dependency tree, which is then deployed in a bottom-up fashion, i.e. the project in which you call devspace deploy will be deployed last.

Take a look at the documentation to learn more about how DevSpace deploys dependencies of projects.

Configure Development Mode

Configure code synchronization
# File: ./devspace.yaml
dev:
  sync:
  - localSubPath: ./src # relative to the devspace.yaml
    # Start syncing to the container's current working directory (you can also use absolute paths)
    containerPath: .
    # This tells devspace to select pods that have the following labels
    labelSelector:
      app.kubernetes.io/component: default
      app.kubernetes.io/name: devspace-app
    # Only download changes to these paths, but do not upload any changes (.gitignore syntax)
    uploadExcludePaths:
    - node_modules/
    # Only upload changes to these paths, but do not download any changes (.gitignore syntax)
    downloadExcludePaths:
    - /app/tmp
    # Ignore these paths completely during synchronization (.gitignore syntax)
    excludePaths:
    - Dockerfile
    - logs/

The above example would configure the sync so that:

  • local path ./src will be synchronized to the container's working directory . (specified in the Dockerfile)
  • ./src/node_modules would not be uploaded to the container

Take a look at the documentation to learn more about configuring file synchronization during development.

Redeploy instead of synchronizing code
# File: ./devspace.yaml
dev:
  autoReload:
    paths:
    - ./Dockerfile
    - ./manifests/**

This configuration would tell DevSpace to redeploy your project when the Dockerfile changes or any file within ./manifests.

Take a look at the documentation to learn more about configuring auto-reloading for development.

Advanced Configuration

Use config variables
# File: ./devspace.yaml
images:
  default:
    image: john/image-name
    tags:
    - ${DEVSPACE_GIT_COMMIT}-${DEVSPACE_TIMESTAMP}
    - latest

DevSpace allows you to use certain pre-defined variables to make the configuration more flexible and easier to share with others. Additionally, you can add your own custom variables.

Take a look at the documentation to learn more about using variables for dynamic configuration.
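A custom variable follows the same pattern as the pre-defined ones; when it is not set as an environment variable, DevSpace asks the configured question (a sketch, with an illustrative variable name):

```yaml
# Illustrative: a custom variable that prompts when ${IMAGE_REGISTRY} is unset
vars:
- name: IMAGE_REGISTRY               # used as ${IMAGE_REGISTRY} in the config
  question: Which registry should images be pushed to?
```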

Define config profiles and patches
# File: ./devspace-configs.yaml
images:
  backend:
    image: john/devbackend
  backend-debugger:
    image: john/debugger
deployments:
- name: app-backend
  helm:
    componentChart: true
    values:
      containers:
      - image: john/devbackend
      - image: john/debugger
profiles:
- name: production
  patches:
  - op: replace
    path: images.backend.image
    value: john/prodbackend
  - op: remove
    path: deployments[0].component.containers[1]
  - op: add
    path: deployments[0].component.containers
    value:
      image: john/cache

DevSpace allows you to define different profiles for different use cases (e.g. working on different services in the same project, starting a certain debugging environment) or for different deployment targets (e.g. dev, staging, production).

You can tell DevSpace to switch permanently to another profile using this command: devspace use profile [config-name]

Alternatively, you can temporarily use a different profile for running a single command using the -p / --profile [NAME] flag.

Take a look at the documentation to learn more about using config profiles and patches.

Define hooks
# File: ./devspace.yaml
hooks:
  - command: echo
    args:
      - "before image building"
    when:
      before:
        images: all

The command defined in this hook would be executed before building the images defined in the config.

Take a look at the documentation to learn more about using hooks.
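Hooks can also run on the other side of a step; here is a sketch assuming the symmetric after key (mirroring the before.images example above, not verified against every config version):

```yaml
# Assumption: 'after.deployments' is the counterpart of 'before.images'
hooks:
  - command: echo
    args:
      - "after deploying"
    when:
      after:
        deployments: all
```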



Troubleshooting

My application is not working

Problem

This problem can be caused by many different things.

Solution

There is no single solution for this but here are some steps to troubleshoot this problem:

1. Let DevSpace analyze your deployment

Run this command within your project:

devspace analyze
2. Check your Dockerfile

Make sure your Dockerfile works correctly. Use Google to find the best solutions for creating a Dockerfile for your application (often depends on the framework you are using).

If your pods are crashing, you might have the wrong ENTRYPOINT or something is missing within your containers. A great way to debug this is to start the interactive development mode using:

devspace dev -i

With the interactive mode, DevSpace will override the ENTRYPOINT in your Dockerfile with [sleep, 999999] and open a terminal proxy. That means your containers will definitely start, but only in sleep mode. After the terminal opens, you can run the start command for your application yourself, e.g. npm start.
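If you want this sleep-mode behavior without the -i flag, the same override can be sketched directly in devspace.yaml (the image name is a placeholder; the entrypoint key is the one patched in the profile examples above):

```yaml
# Illustrative: permanently override the ENTRYPOINT for development
images:
  default:
    image: my-registry.tld/my-app
    entrypoint: ["sleep", "999999"]
```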

3. Debug your application with kubectl

Run the following commands to find issues:

# Failing Pods
kubectl get po                  # Look for terminated, crashed or pending pods (restart > 1 is usually not good)
kubectl describe po [POD_NAME]  # Look at the crash reports Kubernetes provides

# Network issues
kubectl get svc                 # See if there is a service for your app
kubectl get ep                  # Make sure every service has endpoints (if not: make sure you are using the right ports in your devspace.yaml and make sure your pods are running)
kubectl get ing                 # Make sure there is an ingress for your app
Docker: Error response from daemon: Get https://[registry]/v2/: x509: certificate has expired or is not yet valid

Problem

This might happen when the VM of your Docker daemon has the wrong date/time.

Solution

Make sure the VM of your Docker daemon has the correct date/time. For Docker Desktop, you can run the following script to fix the issue:

HOST_TIME=$(date -u +"%Y.%m.%d-%H:%M:%S");
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/docker-vm alpine /bin/sh -c "date -s $HOST_TIME"


Contributing

Help us make DevSpace the best tool for developing, deploying and debugging Kubernetes apps.

Join us on Slack!

Reporting Issues

If you find a bug while working with DevSpace, please open an issue on GitHub and let us know what went wrong. We will try to fix it as quickly as we can.

Feedback & Feature Requests

You are more than welcome to open issues in this project to give feedback or suggest new features.

Contributing Code

This project is mainly written in Golang. If you want to contribute code:

  1. Ensure you are running Go version 1.11.4 or greater for Go module support
  2. Set the following environment variables:
    GO111MODULE=on
    GOFLAGS=-mod=vendor
    
  3. Check out the project: git clone https://github.com/loft-sh/devspace && cd devspace
  4. Make changes to the code
  5. Build the project, e.g. via go build -o devspace[.exe]
  6. Evaluate and test your changes: ./devspace [SOME_COMMAND]

See the Contributing Guidelines for more information.


FAQ

What is DevSpace?

DevSpace is an open-source command-line tool that provides everything you need to develop, deploy and debug applications with Docker and Kubernetes. It lets you streamline deployment workflows and share them with your colleagues through a declarative configuration file devspace.yaml.

Is DevSpace free?

YES. DevSpace is open-source and you can use it for free, for both private and commercial projects.

Do I need a Kubernetes cluster to use DevSpace?

Yes. You can use a local cluster such as Docker Desktop Kubernetes, minikube, or kind, or a remote cluster such as GKE, EKS, AKS, RKE (Rancher), or DOKS.

Can I use DevSpace with my existing Kubernetes clusters?

Yes. DevSpace uses your regular kube-context. As long as you can run kubectl commands against a cluster, you can use that cluster with DevSpace as well.

What is a Helm chart?

Helm is the package manager for Kubernetes. Packages in Helm are called Helm charts.
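For example, DevSpace can deploy a Helm chart declared in devspace.yaml (a minimal sketch; the deployment name, chart path, and value override are placeholders, and the exact schema depends on your DevSpace version):

```yaml
deployments:
  - name: my-app          # hypothetical deployment name
    helm:
      chart:
        name: ./chart     # path to a local Helm chart
      values:
        replicaCount: 1   # example value override passed to the chart
```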



You can use DevSpace for any private or commercial project because it is licensed under the Apache 2.0 open-source license.

Owner
Loft Labs, Inc.
Building Dev Tooling That Helps Companies Scale Access To Kubernetes From 10 To 10,000 Engineers
Comments
  • Image build process support for an image dependent on another

    Is your feature request related to a problem? I have an aspnet core solution with many projects. Each project is built into its own docker image. I created a solution-wide dockerfile to replace the individual project dockerfiles as they all do the same thing; copy source, restore deps, build, test, and then publish. Args allow me to define what project is to be published for the required image.

    What I'd like to do is build a base image for the solution that all the project specific images are based from. The base would do all the restore of dependencies, build code, and run tests. The project dockerfile would just publish and do the entrypoint stuff, and it would be based on the base image using an ARG for the tag to use (likely a "latest" based tag as the base may not have changed I suppose).

    This requires a level of build dependency control so that the base can be built first before all other images are built in parallel. So is there a way to inform the image build process to build specific images before others?

    Which solution do you suggest? I don't have one.

    Which alternative solutions exist? None that I know of.

    Additional context

    /kind feature

  • My local directory ended up getting wiped somehow on windows.

    I wasn't paying very close attention to it so I don't have steps to reproduce at this time, but I ran devspace up and started working on something else.

    When I came back to the project, everything except for some directories I had explicitly excluded from downloading had been removed, including my .git directory. Luckily I didn't have anything I was working on in the project elsewhere at the time, but it could have been a lot worse.

    I'm going to attempt to reproduce if I can and I'll share more, but I figured you all should be aware this happened.

    If I had to guess off the top of my head: since windows gets its time differently than linux, perhaps the root folder that was being synced was calculated as older than the directory in the devspace-deployed container, and so it wiped it.

  • Sync Error - upload archive: upload send: EOF

    What happened?
    devspace sync fails with Sync Error on /path/to/files: upstream: apply changes: apply creates: upload archive: upload send: EOF.

    Log showing the upload failures and then the Sync Error:

    [info]   Start syncing
    [done] √ Sync started on /Users/jessebye/repos/pronode <-> . (Pod: need-plantations/pronode-579d65f7dd-wz74v)
    [info]   Upstream - Handling 1 removes
    [info]   Upstream - Upload 1604 create changes (size 10177223)
    [info]   Upstream - Retry upload because of error: EOF
    [info]   Upstream - Upload 1604 create changes (size 10177223)
    [info]   Upstream - Retry upload because of error: EOF
    [info]   Upstream - Upload 1604 create changes (size 10177223)
    [info]   Upstream - Retry upload because of error: EOF
    [info]   Upstream - Upload 1604 create changes (size 10177223)
    [info]   Upstream - Retry upload because of error: EOF
    [info]   Upstream - Upload 1604 create changes (size 10177223)
    [error]  Sync Error on /Users/jessebye/repos/pronode: upstream: apply changes: apply creates: upload archive: upload send: EOF
    

    What did you expect to happen instead?
    Local changes should have been uploaded with no error.

    How can we reproduce the bug? (as minimally and precisely as possible)
    Using devspace version 4.4.0, try to use devspace sync and observe the issue.

    Local Environment:

    • Operating System: mac (Catalina)
    • Deployment method: kubectl apply
    • Devspace version: 4.4.0

    Kubernetes Cluster:

    • Cloud Provider: aws
    • Kubernetes Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-c0eccc", GitCommit:"c0eccca51d7500bb03b2f163dd8d534ffeb2f7a2", GitTreeState:"clean", BuildDate:"2019-12-22T23:14:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

    Anything else we need to know?
    n/a

    /kind bug

  • Feature Request: Ability to set UID/GID after sync

    Is your feature request related to a problem?
    I like the devspace tool since it is fast and not buggy, but it would be nice to have the ability to change the UID/GID attributes of files after sync, to make sure that an application running in Kubernetes as an unprivileged user can access them.

    Which solution do you suggest?

    Add options dev.sync[*].upstreamUser.uid, dev.sync[*].upstreamUser.gid which will instruct sync to change the UID/GID for uploaded files. In addition, please add arguments to a cli version of devspace sync to support this feature too (possibly --upstream-user-uid,--upstream-user-gid).

    Which alternative solutions exist?
    Change the uid/gid manually with kubectl exec -it <pod> -- bash -c 'chown -R <UID>:<GID> <synced_folder>', which breaks the idea of auto sync.

    Additional context

    /kind feature

  • on windows 10, devspace keeps exiting and throwing an error after running fine for a few minutes.

    What happened?

    1. I run devspace up.
    2. devspace deployed and I'm able to work in the container.
    3. After a few minutes I get kicked out of the container and devspace.
       • I can use devspace up again to reconnect and keep working, but it keeps happening.
    4. I see this in the logs: {"level":"error","msg":"Runtime error occurred: error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:61572-\u003e127.0.0.1:61574: write tcp4 127.0.0.1:61572-\u003e127.0.0.1:61574: wsasend: An established connection was aborted by the software in your host machine.","time":"2018-10-31T11:11:26-05:00"}

    What did you expect to happen instead?
    I should be able to keep working in the container however long I need to.

    How can we reproduce the bug? (as minimally and precisely as possible)
    Follow the same steps I did in the "What happened" section.

    Local Environment:

    • Operating System: windows 10
    • Deployment method: helm

    Kubernetes Cluster:

    • Cloud Provider: Baremetal via Rancher 2.0 rancher
    • Kubernetes Version: Client Version: v1.10.2 Server version: v1.11.1

    Anything else we need to know?
    This might be happening during the sync operation, I'm not sure, it seems more stable after everything is synced up. I'll update further when I'm more sure about that.

    /kind bug

  • 'devspace enter' session is interrupted

    Not sure whether this is a bug. The problem is somewhat similar to this issue, but I am not on windows (ubuntu). I use devspace (without helm) and exec into three processes in two containers (once with devspace dev and twice with devspace enter). So: A) one process on container A, B) two processes on container B.

    I lose contact quite often (suddenly, instead of terminal output, I get a prompt), especially with B. The output seems to vary; e.g. last time I got the following in errors.log:

    {"level":"info","msg":"Sync started on /home/usr/api/trex \u003c-\u003e /home/usr (Pod: default/mymarkers-6cb47df99f-rv4ws)","time":"2019-03-27T22:48:43+01:00"}
    {"level":"info","msg":"Opening shell to pod:container \u001b[1;37mmyapp-7cc4dc5c56-4hkvp\u001b[0m:\u001b[1;37mapp\u001b[0m","time":"2019-03-27T22:49:58+01:00"}
    {"level":"fatal","msg":"[Sync] Fatal sync error: \n[Downstream] Stream closed unexpectedly. For more information check .devspace/logs/sync.log","time":"2019-03-28T08:04:50+01:00"}
    

    sync.log:

    {"container":"/home/usr","level":"info","local":"/home/usr/api/trex","msg":"[Upstream] Successfully processed 12 change(s)","pod":"mymarkers-6cb47df99f-rv4ws","time":"2019-03-27T22:48:45+01:00"}
    {"container":"/home/usr","level":"info","local":"/home/usr/api/trex","msg":"[Sync] Sync stopped","pod":"mymarkers-6cb47df99f-rv4ws","time":"2019-03-28T08:04:50+01:00"}
    {"container":"/home/usr","level":"error","local":"/home/usr/api/trex","msg":"Error: \n[Downstream] Stream closed unexpectedly, Stack: \n[Downstream] Stream closed unexpectedly\n/Users/travis/gopath/src/github.com/devspace-cloud/devspace/pkg/devspace/sync/downstream.go:187: \n/Users/travis/gopath/src/github.com/devspace-cloud/devspace/pkg/devspace/sync/downstream.go:114: ","pod":"mymarkers-6cb47df99f-rv4ws","time":"2019-03-28T08:04:50+01:00"}
    

    What is causing this? Is it a loss of internet connection? That would be odd, since when I disconnect my internet and reconnect, it can still continue the sessions, so it seems a slight internet connection disruption should cause no problems.

    And furthermore: when I get the prompt and then devspace enter into the same container and try to restart the service (server), it gives a connection error, as the process seems to still be running in the background. Can I somehow reattach a terminal to the (PID) process?

  • overwrite.yaml not overwriting configuration

    Probably I'm just making a mistake with my overwrite.yaml file somewhere, but I'm not seeing it.

    I'm using Kaniko as the build engine. I'm overwriting my registry auth credentials, but it doesn't seem to be working. The image name I've overwritten isn't working either.

    I even cloned the repo and attempted to debug this a bit (on master branch) and found something strange happening.

    In pkg/devspace/config/configutil/get.go I dumped the config object in several places.

    After line 84, when "overwriteConfigRaw" is first created, the content of that object was indeed my overwrite.yaml.

    But when I check the config object after the merge on line 88 (merging config and overwriteConfig) I see that the config object was not changed at all.

    To help, here is my config file and overwrite file, with some adjustments to make it a little more generic.

    at ./.devspace/config.yaml

    cluster:
      kubeContext: develop
      namespace: devspace
    devSpace:
      deployments:
      - helm:
          chartPath: ./chart
        name: devspace-app
      ports:
      - labelSelector:
          release: devspace
        portMappings:
        - localPort: 80
          remotePort: 7001
      sync:
      - containerPath: /app
        labelSelector:
          release: devspace-app
        localSubPath: ./
        uploadExcludePaths:
        - Dockerfile
        - .devspace/
        - chart/
        - vendor/
        - node_modules/
        - web/uploads/
    images:
      default:
        name: devspace/app
        registry: custom
        build:
          kaniko:
            namespace: devspace
            cache: true
    registries:
      custom:
        url: custom.registry.com
        auth:
          username: overwritethis
          password: andthis
    version: v1alpha1
    

    contents of ./.devspace/overwrite.yaml

    cluster:
      namespace: different-namespace
    images:
      default:
        name: devspace/another-name
    registries:
      custom:
        auth:
          username: RealUserName
          password: realpass
    
  • Add an .devspacerc file

    Is your feature request related to a problem? No

    Which solution do you suggest?
    Adding a .devspacerc file where you can put things like namespace or context so you don't have to type them out on the CLI every time.

    Which alternative solutions exist?
    the global flag --namespace and the --switch-context

    Additional context
    This would be very useful when working in multiple contexts at the same time, so you don't have to constantly type these switches out.

    /kind feature

  • Sync Error: Connection lost to pod for a spring boot project

    Sync Error: Connection lost to pod at container restart for a spring boot project

    DevSpace ideally should have synced the changes while running in dev mode

    Use a spring boot project with gradle to reproduce this issue. Deploy the application in AKS

    Local Environment:

    • DevSpace Version: 5.8.2
    • Operating System: on Local windows, on AKS it is Linux
    • Deployment method: helm
    • Spring boot version : 2.4.2
    • Java version : 11

    Kubernetes Cluster:

    • Cloud Provider: azure
    • Kubernetes Version: Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:38:50Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.10", GitCommit:"41d24ec9c736cf0bdb0de3549d30c676e98eebaf", GitTreeState:"clean", BuildDate:"2021-01-18T09:12:27Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

    Dockerfile contents:

    FROM test-docker.artifactory.local.com/gradle:jdk11-hotspot as build
    WORKDIR /app
    RUN gradle --version
    RUN rm -rf build
    ADD . .
    RUN ./build.sh
    CMD ["./build.sh", "run"]

    ################ dev ################
    FROM testacr.azurecr.io/3rdparty/openjdk/11:adopt-jre-hotspot as dev
    COPY --from=build /app/build/libs/main.jar /app/main.jar
    WORKDIR /app
    EXPOSE 8080
    ENTRYPOINT exec java -jar main.jar

    build.sh contents:

    #!/bin/bash
    rm -f build/libs/*.jar
    gradle clean build -x test --no-daemon
    cp build/libs/*.jar build/libs/main.jar
    if [[ $1 == "run" ]]; then java -jar build/libs/main.jar; fi

    Error message:

    [0:sync:app] Upstream - Restarting container
    [0:sync:app] Error: Sync Error on C:\SpringPoc........: Sync - connection lost to pod dev-namespace/ecompoc-deploy-5f4ddf4b4f-2psdd: command terminated with exit code 137
    [0:sync:app] Sync stopped

    /kind bug

  • Looks like devspace sync skip files

    What happened?
    When I try to remove and create many files in the project, some files are skipped. We have precompiled xsl in our project, so we need to:

    1. delete the already compiled xsl's
    2. run xsltproc and get fresh xsl's

    After this, count the local xsl files:

    $ find xhh/xsl_precompiled/ | wc -l
    1179

    Count the pod xsl files:

    # find xhh/xsl_precompiled/ | wc -l
    78

    How devspace is running: devspace sync --local-path=... --container-path=... --upload-only -e '.git' -e 'python_venv' --verbose

    Then, if I restart devspace sync, the skipped files are successfully synced.

    What did you expect to happen instead?
    number of files after sync in local dev = number of files in pod

    How can we reproduce the bug? (as minimally and precisely as possible)
    need to delete and recreate many files in the project directory in a short time. I think it's possible using git, i.e. rm the project files and then run git reset --hard

    Local Environment:

    • DevSpace Version: 5.7.1
    • Operating System: linux Ubuntu 18.04
    • Deployment method: kubectl apply

    Kubernetes Cluster:

    • Cloud Provider: bare-metal
    • Kubernetes Version: Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

    Anything else we need to know?

    /kind bug

  • Crashes on Windows

    What happened?
    Any devspace command started throwing one of these exceptions

    Exception 0xc0000005 0x0 0x7ff8f61f0fff 0x5fd0000
    PC=0x5fd0000
    
    runtime: unknown pc 0x5fd0000
    stack: frame={sp:0x3d2e820, fp:0x0} stack=[0x0,0x3d2ff00)
    0000000003d2e720:  0000000003d2e768  0000000003d2e790
    0000000003d2e730:  0000000003d2e758  0000000003d2e750
    0000000003d2e740:  0000000003d2e754  0000000003db0000
    0000000003d2e750:  0000000000000000  0000000000000000
    0000000003d2e760:  0000000000000000  0000000000000005
    0000000003d2e770:  0000000003d2e8b8  00007ff8f3b99f38
    0000000003d2e780:  0000000003eb3930  00007ff8f60146ce
    0000000003d2e790:  00007ff8f5ff00e8  0000000003edead0
    0000000003d2e7a0:  00007ff8f3b99f32  0000000003d2e7f0
    0000000003d2e7b0:  004f0044004e0049  0053005c00530057
    0000000003d2e7c0:  0000000003eb3930  0000000000000000
    0000000003d2e7d0:  0000000003ec59d0  006c006400050005
    0000000003d2e7e0:  00007ff8f3b99f32  0000000000000000
    0000000003d2e7f0:  00007ff800000000  00007ff8f5ff00e8
    0000000003d2e800:  0000000000000000  0000000000000000
    0000000003d2e810:  0000000000000001  00007ff8f6013783
    0000000003d2e820: <0000000000000001  0000000000000000
    0000000003d2e830:  0000000000000000  0000000003d2e928
    0000000003d2e840:  0000000000000000  0000000000000000
    0000000003d2e850:  0000000000000000  0000000000000000
    0000000003d2e860:  0000000003edead0  00007ff8f6140f00
    0000000003d2e870:  0000000003eb3930  00007ff8f6143520
    0000000003d2e880:  000000000000097d  00007ff8f2b8ccb8
    0000000003d2e890:  00007ff8f2b8a148  00007ff8f6140f28
    0000000003d2e8a0:  00007ff8f6153d4f  00007ff8f2b80000
    0000000003d2e8b0:  00007ff8f6145b14  00007ff8f6070aa0
    0000000003d2e8c0:  0000000000000000  0000000000000000
    0000000003d2e8d0:  0000000000000000  0000000000000000
    0000000003d2e8e0:  0000000003edf320  0000000000000040
    0000000003d2e8f0:  0000000000000003  00007ff8f615a3f0
    0000000003d2e900:  0000000000000001  0000000003d2eb00
    0000000003d2e910:  0000000003ec59d0  00007ff8f6051448
    runtime: unknown pc 0x5fd0000
    stack: frame={sp:0x3d2e820, fp:0x0} stack=[0x0,0x3d2ff00)
    0000000003d2e720:  0000000003d2e768  0000000003d2e790
    0000000003d2e730:  0000000003d2e758  0000000003d2e750
    0000000003d2e740:  0000000003d2e754  0000000003db0000
    0000000003d2e750:  0000000000000000  0000000000000000
    0000000003d2e760:  0000000000000000  0000000000000005
    0000000003d2e770:  0000000003d2e8b8  00007ff8f3b99f38
    0000000003d2e780:  0000000003eb3930  00007ff8f60146ce
    0000000003d2e790:  00007ff8f5ff00e8  0000000003edead0
    0000000003d2e7a0:  00007ff8f3b99f32  0000000003d2e7f0
    0000000003d2e7b0:  004f0044004e0049  0053005c00530057
    0000000003d2e7c0:  0000000003eb3930  0000000000000000
    0000000003d2e7d0:  0000000003ec59d0  006c006400050005
    0000000003d2e7e0:  00007ff8f3b99f32  0000000000000000
    0000000003d2e7f0:  00007ff800000000  00007ff8f5ff00e8
    0000000003d2e800:  0000000000000000  0000000000000000
    0000000003d2e810:  0000000000000001  00007ff8f6013783
    0000000003d2e820: <0000000000000001  0000000000000000
    0000000003d2e830:  0000000000000000  0000000003d2e928
    0000000003d2e840:  0000000000000000  0000000000000000
    0000000003d2e850:  0000000000000000  0000000000000000
    0000000003d2e860:  0000000003edead0  00007ff8f6140f00
    0000000003d2e870:  0000000003eb3930  00007ff8f6143520
    0000000003d2e880:  000000000000097d  00007ff8f2b8ccb8
    0000000003d2e890:  00007ff8f2b8a148  00007ff8f6140f28
    0000000003d2e8a0:  00007ff8f6153d4f  00007ff8f2b80000
    0000000003d2e8b0:  00007ff8f6145b14  00007ff8f6070aa0
    0000000003d2e8c0:  0000000000000000  0000000000000000
    0000000003d2e8d0:  0000000000000000  0000000000000000
    0000000003d2e8e0:  0000000003edf320  0000000000000040
    0000000003d2e8f0:  0000000000000003  00007ff8f615a3f0
    0000000003d2e900:  0000000000000001  0000000003d2eb00
    0000000003d2e910:  0000000003ec59d0  00007ff8f6051448
    rax     0x7ff8f2b8d85c
    rbx     0x7ff8f2b8d85a
    rcx     0x41
    rdi     0xffffffffffbadd11
    rsi     0x0
    rbp     0x7ff8f2cd9f00
    rsp     0x3d2e820
    r8      0x0
    r9      0x0
    r10     0x0
    r11     0x97c
    r12     0xc000007a
    r13     0x0
    r14     0x7ff8f2b8d85c
    r15     0x7ff8f5ff0000
    rip     0x5fd0000
    rflags  0x10206
    cs      0x33
    fs      0x53
    gs      0x2b
    

    Another one

    Exception 0xc0000005 0x0 0x7ff8f61f0fff 0x70e0000
    PC=0x70e0000
    
    syscall.loadsystemlibrary(0xc000046b00, 0xc00004e300, 0xc00004e300, 0x20)
            /Users/runner/hostedtoolcache/go/1.13.15/x64/src/runtime/syscall_windows.go:136 +0xe7
    syscall.LoadDLL(0x21fe88c, 0xb, 0x43a7b3, 0xc000082000, 0x200000003)
            /Users/runner/hostedtoolcache/go/1.13.15/x64/src/syscall/dll_windows.go:80 +0x15e
    syscall.(*LazyDLL).Load(0xc0000041a0, 0x0, 0x0)
            /Users/runner/hostedtoolcache/go/1.13.15/x64/src/syscall/dll_windows.go:236 +0xbb
    syscall.(*LazyProc).Find(0xc0000a5b00, 0x0, 0x0)
            /Users/runner/hostedtoolcache/go/1.13.15/x64/src/syscall/dll_windows.go:291 +0xbc
    syscall.(*LazyProc).mustFind(0xc0000a5b00)
            /Users/runner/hostedtoolcache/go/1.13.15/x64/src/syscall/dll_windows.go:309 +0x32
    syscall.(*LazyProc).Addr(0xc0000a5b00, 0xc0000cdc28)
            /Users/runner/hostedtoolcache/go/1.13.15/x64/src/syscall/dll_windows.go:318 +0x32
    syscall.GetUserProfileDirectory(0x23c, 0xc0000e6340, 0xc0000cdc18, 0xc0000e6340, 0x0)
            /Users/runner/hostedtoolcache/go/1.13.15/x64/src/syscall/zsyscall_windows.go:1910 +0x38
    syscall.Token.GetUserProfileDirectory(0x23c, 0xc000048420, 0x2c, 0x0, 0x0)
            /Users/runner/hostedtoolcache/go/1.13.15/x64/src/syscall/security_windows.go:368 +0x8a
    os/user.current(0x0, 0x0, 0x0)
            /Users/runner/hostedtoolcache/go/1.13.15/x64/src/os/user/lookup_windows.go:222 +0x1c7
    os/user.Current.func1()
            /Users/runner/hostedtoolcache/go/1.13.15/x64/src/os/user/lookup.go:15 +0x29
    sync.(*Once).doSlow(0x3af7f80, 0x2385038)
            /Users/runner/hostedtoolcache/go/1.13.15/x64/src/sync/once.go:66 +0xea
    sync.(*Once).Do(...)
            /Users/runner/hostedtoolcache/go/1.13.15/x64/src/sync/once.go:57
    os/user.Current(0xc000050a10, 0xd, 0x218da15)
            /Users/runner/hostedtoolcache/go/1.13.15/x64/src/os/user/lookup.go:15 +0xf9
    k8s.io/klog.init.1()
            /Users/runner/work/devspace/devspace/vendor/k8s.io/klog/klog_file.go:58 +0x48
    
    goroutine 6 [chan receive]:
    k8s.io/klog.(*loggingT).flushDaemon(0x3af93a0)
            /Users/runner/work/devspace/devspace/vendor/k8s.io/klog/klog.go:1010 +0x92
    created by k8s.io/klog.init.0
            /Users/runner/work/devspace/devspace/vendor/k8s.io/klog/klog.go:411 +0xdd
    rax     0x7ff8f3625784
    rbx     0x7ff8f3625782
    rcx     0x41
    rdi     0xffffffffffbadd11
    rsi     0x0
    rbp     0x0
    rsp     0x3d2f4f0
    r8      0x0
    r9      0x0
    r10     0x0
    r11     0x97c
    r12     0xc000007a
    r13     0x0
    r14     0x7ff8f3625784
    r15     0x7ff8f5ff0000
    rip     0x70e0000
    rflags  0x10206
    cs      0x33
    fs      0x53
    gs      0x2b
    

    What did you expect to happen instead?
    The command to execute as normal

    How can we reproduce the bug? (as minimally and precisely as possible)
    Run "devspace" in a terminal on Windows.

    Local Environment:

    • DevSpace Version: 5.0.2
    • Operating System: windows
    • Deployment method: helm

    Kubernetes Cluster:

    • Cloud Provider: aws and Docker Desktop
    • Kubernetes Version: Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:18:29Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

    Anything else we need to know?
    It was working just fine for a month and all of a sudden it started throwing the exceptions. I tried to replace the current .exe file with a freshly downloaded from here https://github.com/devspace-cloud/devspace/releases/download/v5.0.2/devspace-windows-amd64.exe but the result is the same. I tried to run it from CMD and Powershell.

    kubectl commands work as expected.

    Docker Desktop details image

    /kind bug

  • localRegistry being used even though my cluster registries work fine

    What happened? I have a local cluster created with k3d, and I used the option --registry-create myregistry.localhost to give it its own registry, which I can push to from my local machine and which the cluster can use to pull container images. Even though docker push works fine from my local machine, when I start my devspace project I receive this output:

    Ensuring image pull secret for registry: myregistry.localhost:5000...
    Couldn't retrieve username for registry myregistry.localhost:5000 from docker store
    Couldn't retrieve password for registry myregistry.localhost:5000 from docker store
    local-registry: Starting Local Image Registry
    local-registry: Port forwarding to local registry started on: 30521 -> 5000
    Ensuring image pull secret for registry: myregistry.localhost:5000...
    Couldn't retrieve username for registry myregistry.localhost:5000 from docker store
    Couldn't retrieve password for registry myregistry.localhost:5000 from docker store
    Ensuring image pull secret for registry: myregistry.localhost:5000...
    

    But as I said, I can docker push from the same machine/environment where I start my devspace project, so I don't understand why devspace says that. Plus, if in my devspace.yaml file I add:

    localRegistry:
      enabled: false
    

    it will still display the Couldn't retrieve... message, but it will actually correctly push to that registry.

    What did you expect to happen instead? I expected not to see these errors, and that DevSpace would not fall back to the localRegistry.

    How can we reproduce the bug? (as minimally and precisely as possible)

    My devspace.yaml:

    version: v2beta1
    ...
    

    Local Environment:

    • DevSpace Version: 6.2.3
    • Operating System: mac m1
    • ARCH of the OS: ARM64 Kubernetes Cluster:
    • Local: k3d
    • Kubernetes Version:
    Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:47:25Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"darwin/arm64"}
    Kustomize Version: v4.5.7
    Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6+k3s1", GitCommit:"418c3fa858b69b12b9cefbcff0526f666a6236b9", GitTreeState:"clean", BuildDate:"2022-04-28T22:16:58Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/arm64"}
    

    Anything else we need to know?

  • chore(deps): bump json5 and babel-core in /ui

    Bumps json5, json5 and babel-core. These dependencies needed to be updated together. Updates json5 from 2.1.2 to 2.2.3

    Release notes

    Sourced from json5's releases.

    v2.2.3

    v2.2.2

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295).

    v2.2.1

    • Fix: Removed dependence on minimist to patch CVE-2021-44906. (#266)

    v2.2.0

    • New: Accurate and documented TypeScript declarations are now included. There is no need to install @types/json5. (#236, #244)

    v2.1.3 [code, diff]

    • Fix: An out of memory bug when parsing numbers has been fixed. (#228, #229)
    Changelog

    Sourced from json5's changelog.

    v2.2.3 [code, diff]

    v2.2.2 [code, diff]

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295).

    v2.2.1 [code, diff]

    • Fix: Removed dependence on minimist to patch CVE-2021-44906. (#266)

    v2.2.0 [code, diff]

    • New: Accurate and documented TypeScript declarations are now included. There is no need to install @types/json5. (#236, #244)

    v2.1.3 [code, diff]

    • Fix: An out of memory bug when parsing numbers has been fixed. (#228, #229)
    Commits
    • c3a7524 2.2.3
    • 94fd06d docs: update CHANGELOG for v2.2.3
    • 3b8cebf docs(security): use GitHub security advisories
    • f0fd9e1 docs: publish a security policy
    • 6a91a05 docs(template): bug -> bug report
    • 14f8cb1 2.2.2
    • 10cc7ca docs: update CHANGELOG for v2.2.2
    • 7774c10 fix: add proto to objects and arrays
    • edde30a Readme: slight tweak to intro
    • 97286f8 Improve example in readme
    • Additional commits viewable in compare view

    Updates json5 from 1.0.1 to 2.2.3

    Release notes

    Sourced from json5's releases.

    v2.2.3

    v2.2.2

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295).

    v2.2.1

    • Fix: Removed dependence on minimist to patch CVE-2021-44906. (#266)

    v2.2.0

    • New: Accurate and documented TypeScript declarations are now included. There is no need to install @types/json5. (#236, #244)

    v2.1.3 [code, diff]

    • Fix: An out of memory bug when parsing numbers has been fixed. (#228, #229)
    Changelog

    Sourced from json5's changelog.

    v2.2.3 [code, diff]

    v2.2.2 [code, diff]

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295).

    v2.2.1 [code, diff]

    • Fix: Removed dependence on minimist to patch CVE-2021-44906. (#266)

    v2.2.0 [code, diff]

    • New: Accurate and documented TypeScript declarations are now included. There is no need to install @types/json5. (#236, #244)

    v2.1.3 [code, diff]

    • Fix: An out of memory bug when parsing numbers has been fixed. (#228, #229)
    Commits
    • c3a7524 2.2.3
    • 94fd06d docs: update CHANGELOG for v2.2.3
    • 3b8cebf docs(security): use GitHub security advisories
    • f0fd9e1 docs: publish a security policy
    • 6a91a05 docs(template): bug -> bug report
    • 14f8cb1 2.2.2
    • 10cc7ca docs: update CHANGELOG for v2.2.2
    • 7774c10 fix: add proto to objects and arrays
    • edde30a Readme: slight tweak to intro
    • 97286f8 Improve example in readme
    • Additional commits viewable in compare view

    Updates babel-core from 6.26.3 to 7.0.0-bridge.0

    Commits

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

  • devspace deploy stuck waiting for dependency due to circular dependency

    devspace deploy stuck waiting for dependency due to circular dependency

    What happened?

    A complex dependency chain causes devspace deploy to get stuck. Running devspace deploy --sequential-dependencies works. My guess is that your dependency resolution code has a bug.

    What did you expect to happen instead?

    Everything is deployed in whatever order devspace chooses. We don't care about the order, just that it happens.

    How can we reproduce the bug?

    We have a rather complex dependency hierarchy; I will try my best to simplify it. We have multiple code repos that depend on each other through their respective devspace.yaml files.

    Files

    name: hn-oppsett
    
    # no dependencies
    ...
    
    name: hn-configuration
    
    dependencies:
      hn-oppsett:
      hn-sts:
    ...
    
    name: hn-sts
    
    dependencies:
      hn-oppsett:
      hn-configuration:
      hn-personvern:
    ...
    
    name: hn-personvern
    
    dependencies:
      hn-oppsett:
      hn-configuration: # <- If i remove this dependency it works
      hn-sts:
    ...
    

    I run devspace deploy in the context of hn-personvern:

    PS C:\...\HN-Personvern> devspace deploy
    info Using namespace 'dev'
    info Using kube context 'kind-kind'
    hn-configuration Skipping dependency hn-oppsett as it was already deployed
    hn-sts Skipping dependency hn-configuration as it was already deployed
    hn-sts Waiting for dependency 'hn-configuration' to finish...                        # <- Problem1
    hn-configuration Waiting for dependency 'hn-oppsett' to finish...
    hn-oppsett <do a lot of custom powershell/bash stuff>
    hn-oppsett <do pullsecret stuff>
    hn-oppsett <deploys a lot of stuff>
    hn-configuration Skipping dependency hn-sts as it was already deployed
    hn-configuration Waiting for dependency 'hn-sts' to finish...                        # <- Problem2 waiting for Problem1
    

    Local Environment:

    • DevSpace Version: 6.2.3
    • Operating System: windows
    • ARCH of the OS: AMD64

    Kubernetes Cluster:
    • Cloud Provider: local kind
    • Kubernetes Version: v1.25.2

    Anything else we need to know?

    The devspace deploy flag --sequential-dependencies is not documented here; I found it while reading your release notes. I suggest you update the docs.
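The hang above is what a concurrent scheduler does when the dependency graph contains a cycle: hn-configuration waits on hn-sts while hn-sts waits on hn-configuration, so neither can ever finish. A minimal sketch of detecting such a cycle with a depth-first search (illustrative only; `deps` mirrors this issue's graph and this is not DevSpace's actual resolver code):

```python
# Illustrative only: `deps` mirrors the dependency graph from this issue;
# this is not DevSpace's actual resolver code.
deps = {
    "hn-oppsett": [],
    "hn-configuration": ["hn-oppsett", "hn-sts"],
    "hn-sts": ["hn-oppsett", "hn-configuration", "hn-personvern"],
    "hn-personvern": ["hn-oppsett", "hn-configuration", "hn-sts"],
}

def find_cycle(graph):
    """Return one dependency cycle as a list (first == last), or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {name: WHITE for name in graph}
    stack = []

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for dep in graph[node]:
            if color[dep] == GRAY:  # back edge: dep is on the current path
                return stack[stack.index(dep):] + [dep]
            if color[dep] == WHITE:
                cycle = visit(dep)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for name in graph:
        if color[name] == WHITE:
            cycle = visit(name)
            if cycle:
                return cycle
    return None

print(find_cycle(deps))
```

A resolver that detects the back edge this way could report the cycle up front instead of having two dependencies wait on each other forever.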

  • Expressions in variable using "$1" do not resolve correctly

    Expressions in variable using "$1" do not resolve correctly

    What happened?

    I am trying to put the expression $( [ $1 == "dev" ] && echo "true" || echo "false" ) inside a variable so I can use the result in various places in my config. But it does not seem to resolve the "$1" correctly, since the expression always resolves to false regardless of whether devspace dev is being called or not.

    What did you expect to happen instead?

    I expect the variable to be resolved correctly.

    How can we reproduce the bug? (as minimally and precisely as possible)

    1. Put the following inside your devspace config
    vars:
      TEST_VAR_EXPRESSION: $( [ $1 == "print" ] && echo "true" || echo "false" )
    
    2. Call devspace print
    3. Inspect that the variable is false

    You can also change the $1 == "print" comparison to any string; it will always return false.

    My devspace.yaml:

    version: v2beta1
    
    vars:
      DEV_MODE: $( [ $1 == "dev" ] && echo "true" || echo "false" )
    
    ...
    deployments:
      my-deployment:
          helm:
            chart:
              name: my/chart
            values:
              debug: ${DEV_MODE}
    ...
    

    Local Environment:

    • DevSpace Version: 6.2.2
    • Operating System: windows
    • ARCH of the OS: AMD64

    Kubernetes Cluster:
    • Cloud Provider: other
    • Kubernetes Version:
      • Client Version: v1.25.0
      • Kustomize Version: v4.5.7
      • Server Version: v1.24.2

    Anything else we need to know?

    Nope.
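For reference, this is how a POSIX shell evaluates that expression depending on whether $1 is populated — a small sketch using subprocess (it assumes an sh on PATH, quotes "$1", and uses the POSIX = comparison rather than the bash-only ==). If $1 is never filled in, the test cannot match and the expression collapses to "false", which is consistent with the behavior reported above:

```python
# Sketch of POSIX shell semantics for the expression (assumes `sh` on PATH).
# Note: "$1" is quoted and compared with `=`; the unquoted bash-style `==`
# from the config above behaves no better when $1 is unset.
import subprocess

def eval_expr(*args):
    expr = '[ "$1" = "dev" ] && echo true || echo false'
    # args after the -c string become $0, $1, ... inside the shell
    out = subprocess.run(["sh", "-c", expr, "sh", *args],
                         capture_output=True, text=True)
    return out.stdout.strip()

print(eval_expr("dev"))  # $1 = "dev" -> true
print(eval_expr())       # $1 unset  -> false
```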

  • Section commands such as devspace run [command] do not work when the k8s cluster is unavailable

    Section commands such as devspace run [command] do not work when the k8s cluster is unavailable

    What happened?

    Section commands such as devspace run [command] do not work when the k8s cluster is unavailable. When the k8s cluster is stopped or unavailable, we can't run devspace commands and get this error:

    fatal error trying to load remote cache from current context and namespace: Get "https://0.0.0.0:49711/api/v1/namespaces/dev/secrets/devspace-cache-sb": dial tcp 0.0.0.0:49711: connect: connection refused

    The latest devspace version where this still works is 6.1.0. Link to a recent fix (https://github.com/loft-sh/devspace/pull/2429) for another issue that requires a working cluster.

    What did you expect to happen instead?

    The command can be run when the cluster is unavailable.

    How can we reproduce the bug? (as minimally and precisely as possible)

    1. Install a version >6.1.0
    2. Stop the local cluster (e.g. k3d)
    3. Try to execute a command, e.g. devspace run [command]

    My devspace.yaml:

    version: v2beta1
    
    require:
      devspace: 6.1.0
    ...
    commands:
      cluster_stop:
        command: |-
          devspace run destroy
          k3d cluster stop dev
        section: environment
      cluster_start:
        command: |-
          k3d cluster start dev
          devspace run dev
        section: environment
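The graceful behavior the reporter asks for can be sketched as a simple fallback (hypothetical helper names, not DevSpace's actual cache code): try the remote cache secret first, and degrade to local state when the cluster is unreachable instead of aborting with a fatal error.

```python
# Hypothetical sketch, not DevSpace's actual cache code: fall back to local
# state when the remote cache secret cannot be fetched.
def load_cache(fetch_remote, local_fallback):
    try:
        return fetch_remote()
    except ConnectionError:
        # cluster stopped/unreachable -> degrade gracefully
        return local_fallback

def remote_down():
    raise ConnectionError("dial tcp 0.0.0.0:49711: connect: connection refused")

print(load_cache(remote_down, {"source": "local"}))
```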
    

    Local Environment:

    • DevSpace Version: >6.1.0
    • Operating System: mac
    • ARCH of the OS: AMD64 | ARM64 | i386

    Kubernetes Cluster:
    • Cloud Provider: other
    • Kubernetes Version: 1.23

    Anything else we need to know?

  • get_flag prints double value if var is defined

    get_flag prints double value if var is defined

    What happened?

    get_flag prints the value of the flag twice.

    What did you expect to happen instead?

    get_flag should print the value of the flag once.

    How can we reproduce the bug? (as minimally and precisely as possible)

    My devspace.yaml:

    version: v2beta1
    name: double-flag
    
    profiles:
      - name: local
      - name: test
    
    pipelines:
      deploy:
        run: |-
          echo $(get_flag "profile")
          echo $(get_flag "force-deploy")
    vars:
      DEVSPACE_FLAGS: --namespace double-flag
    
    $ devspace deploy --profile local --profile test
    local test local test
    false
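One plausible mechanism for the doubled output (a hypothetical sketch, not DevSpace's actual flag handling) is a repeatable --profile flag whose accumulated values get parsed twice — for example once when DEVSPACE_FLAGS is merged in and once from the command line:

```python
# Hypothetical sketch of the doubled-flag behavior, not DevSpace's actual
# parser: an append-style --profile flag parsed over the same argv twice
# accumulates each value twice.
import argparse

def parse_profiles(argv, passes=1):
    parser = argparse.ArgumentParser()
    parser.add_argument("--profile", action="append", default=[])
    ns = parser.parse_args([])  # start from the empty default
    for _ in range(passes):
        ns = parser.parse_args(argv, namespace=ns)  # append accumulates
    return " ".join(ns.profile)

argv = ["--profile", "local", "--profile", "test"]
print(parse_profiles(argv, passes=1))  # local test
print(parse_profiles(argv, passes=2))  # local test local test
```

With two parsing passes the sketch reproduces the "local test local test" output shown above.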
    
Devtron is an open source software delivery workflow for kubernetes written in go.

Devtron is an open source software delivery workflow for kubernetes written in go.

Jan 8, 2023
vcluster - Create fully functional virtual Kubernetes clusters - Each cluster runs inside a Kubernetes namespace and can be started within seconds

Website • Quickstart • Documentation • Blog • Twitter • Slack vcluster - Virtual Clusters For Kubernetes Lightweight & Low-Overhead - Based on k3s, bu

Jan 4, 2023
A helper tool for getting OpenShift/Kubernetes data directly from Etcd.

Etcd helper A helper tool for getting OpenShift/Kubernetes data directly from Etcd. How to build $ go build . Basic Usage This requires setting the f

Dec 10, 2021
provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters

provider-kubernetes provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters ty

Dec 14, 2022
Access your Kubernetes Deployment over the Internet

Kubexpose: Access your Kubernetes Deployment over the Internet Kubexpose makes it easy to access a Kubernetes Deployment over a public URL. It's a Kub

Dec 5, 2022
Simple CLI tool and Kubernetes deployment.

Simple Application A basic example of how to build a naml project. app.go Every project should define an app.go file. The file should implement the De

Dec 21, 2022
A tool to automate some of my tasks in ECS/ECR.

severinoctl A tool to automate some tasks in ECS/ECR. Work in progress... Prerequisites awscli working aws credentials environment AWS_REGION exported

Feb 19, 2022
go-awssh is a developer tool to make your SSH to AWS EC2 instances easy.

Describing Instances/VPCs data, select one or multiple instances, and make connection(s) to selected instances. Caching the response of API calls for 1 day using Tmpfs.

Oct 11, 2021
A reverse engineered github actions compatible self-hosted runner using nektos/act to execute your workflow steps

github-act-runner A reverse engineered github actions compatible self-hosted runner using nektos/act to execute your workflow steps. Unlike the offici

Dec 24, 2022
A best practices Go source project with unit-test and integration test, also use skaffold & helm to automate CI & CD at local to optimize development cycle

Dependencies Docker Go 1.17 MySQL 8.0.25 Bootstrap Run chmod +x start.sh if start.sh script does not have privileged to run Run ./start.sh --bootstrap

Apr 4, 2022
Workflow engine for Kubernetes

What is Argo Workflows? Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflow

Jan 4, 2023
a build tool for Go, with a focus on cross-compiling, packaging and deployment

goxc NOTE: goxc has long been in maintenance mode. Ever since Go1.5 supported simple cross-compilation, this tool lost much of its value. There are st

Dec 9, 2022
A Go based deployment tool that allows the users to deploy the web application on the server using SSH information and pem file.

A Go based deployment tool that allows the users to deploy the web application on the server using SSH information and pem file. This application is intended for non-technical users: they can just open the GUI and, given the server details, deploy.

Oct 16, 2021
Kubernetes Operator for a Cloud-Native OpenVPN Deployment.

Meerkat is a Kubernetes Operator that facilitates the deployment of OpenVPN in a Kubernetes cluster. By leveraging Hashicorp Vault, Meerkat securely manages the underlying PKI.

Jan 4, 2023
Kubernetes workload controller for container image deployment

kube-image-deployer kube-image-deployer is a Kubernetes Controller that watches the Image:Tag of a Docker Registry. It is similar to Keel, but it watches only a single tag and operates more simply. Container, I

Mar 8, 2022
Pega-deploy - Pega deployment on Kubernetes

Pega deployment on Kubernetes This project provides Helm charts and basic exampl

Jan 30, 2022
Super simple deployment tool

Dropship Dropship is a simple tool for installing and updating artifacts from a CDN. Features Automatically performs md5sum checks of artifact that is

Oct 4, 2022
Zdeploy - Deployment file tool with golang

zdeploy 中文 Deployment file tool Transfer deployment files Provide shell/bat exec

Sep 22, 2022