kubedog

Kubedog is a library to watch and follow Kubernetes resources in CI/CD deploy pipelines.

This library is used in the werf CI/CD tool to track resources during the deploy process.

NOTE: Kubedog also includes a CLI; however, it provides only a minimal interface to the library functions. The CLI was created to check library features and for debugging purposes. Currently, we have no plans for further improvement of the CLI.

Table of Contents

• Install kubedog CLI
• Usage
• Community
• License

Install kubedog CLI

With trdl (recommended)

Linux (Bash)

Set up trdl, which will install and update kubedog:

# Add ~/bin to the PATH.
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile
export PATH="$HOME/bin:$PATH"

# Install trdl.
curl -L "https://tuf.trdl.dev/targets/releases/0.1.3/linux-$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/')/bin/trdl" -o /tmp/trdl
mkdir -p ~/bin
install /tmp/trdl ~/bin/trdl

Add kubedog repo to trdl:

trdl add kubedog https://tuf.kubedog.werf.io 1 2cc56abdc649a9699074097ba60206f1299e43b320d6170c40eab552dcb940d9e813a8abf5893ff391d71f0a84b39111ffa6403a3e038b81634a40d29674a531
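
Here the arguments after the repository name are the TUF repository URL, the version of the trusted root role to trust initially (1), and the SHA-512 checksum of that root role.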

Install and activate kubedog:

# Activate kubedog binary in the current shell.
source $(trdl use kubedog 0 stable)

# Check whether kubedog is available now.
kubedog version

# Activate kubedog binary automatically during shell initializations.
echo 'source $(trdl use kubedog 0 stable)' >> ~/.bashrc

macOS (Zsh)

Set up trdl, which will install and update kubedog:

# Add ~/bin to the PATH.
echo 'export PATH=$HOME/bin:$PATH' >> ~/.zprofile
export PATH="$HOME/bin:$PATH"

# Install trdl.
curl -L "https://tuf.trdl.dev/targets/releases/0.1.3/darwin-$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/')/bin/trdl" -o /tmp/trdl
mkdir -p ~/bin
install /tmp/trdl ~/bin/trdl

Add kubedog repo to trdl:

trdl add kubedog https://tuf.kubedog.werf.io 1 2cc56abdc649a9699074097ba60206f1299e43b320d6170c40eab552dcb940d9e813a8abf5893ff391d71f0a84b39111ffa6403a3e038b81634a40d29674a531

Install and activate kubedog:

# Activate kubedog binary in the current shell.
source $(trdl use kubedog 0 stable)

# Check whether kubedog is available now.
kubedog version

# Activate kubedog binary automatically during shell initializations.
echo 'source $(trdl use kubedog 0 stable)' >> ~/.zshrc

Windows (PowerShell)

Set up trdl, which will install and update kubedog:

# Add %USERPROFILE%\bin to the PATH.
[Environment]::SetEnvironmentVariable("Path", "$env:USERPROFILE\bin" + [Environment]::GetEnvironmentVariable("Path", "User"), "User")
$env:Path = "$env:USERPROFILE\bin;$env:Path"

# Install trdl.
mkdir -Force "$env:USERPROFILE\bin"
Invoke-WebRequest -Uri "https://tuf.trdl.dev/targets/releases/0.1.3/windows-amd64/bin/trdl.exe" -OutFile "$env:USERPROFILE\bin\trdl.exe"

Add kubedog repo to trdl:

trdl add kubedog https://tuf.kubedog.werf.io 1 2cc56abdc649a9699074097ba60206f1299e43b320d6170c40eab552dcb940d9e813a8abf5893ff391d71f0a84b39111ffa6403a3e038b81634a40d29674a531

Install and activate kubedog:

# Activate kubedog binary in the current shell.
. $(trdl use kubedog 0 stable)

# Check whether kubedog is available now.
kubedog version

To allow automatic activation of the kubedog binary in new PowerShell sessions, you'll need to allow execution of locally created scripts. Run the following in PowerShell as Administrator:

Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

# Activate kubedog binary automatically during PowerShell initializations.
if (!(Test-Path "$profile")) {
  New-Item -Path "$profile" -Force
}
Add-Content -Path "$profile" -Value '. $(trdl use kubedog 0 stable)'

Alternative binary installation

Linux

Execute in shell:

curl -L "https://tuf.kubedog.werf.io/targets/releases/0.6.1/linux-$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/')/bin/kubedog" -o /tmp/kubedog
sudo install /tmp/kubedog /usr/local/bin/kubedog

macOS

Execute in shell:

curl -L "https://tuf.kubedog.werf.io/targets/releases/0.6.1/darwin-$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/')/bin/kubedog" -o /tmp/kubedog
sudo install /tmp/kubedog /usr/local/bin/kubedog

Windows

Execute in PowerShell:

# Add %USERPROFILE%\bin to the PATH.
[Environment]::SetEnvironmentVariable("Path", "$env:USERPROFILE\bin" + [Environment]::GetEnvironmentVariable("Path", "User"), "User")
$env:Path = "$env:USERPROFILE\bin;$env:Path"

# Install kubedog.
mkdir -Force "$env:USERPROFILE\bin"
Invoke-WebRequest -Uri "https://tuf.kubedog.werf.io/targets/releases/0.6.1/windows-amd64/bin/kubedog.exe" -OutFile "$env:USERPROFILE\bin\kubedog.exe"

Usage
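
Kubedog is primarily consumed as a Go library. Below is a minimal sketch of tracking a Deployment rollout; the package paths and signatures used here (kube.Init, kube.Kubernetes, rollout.TrackDeploymentTillReady, tracker.Options) are assumptions based on the library's public API and may differ between versions, so verify them against the godoc of github.com/werf/kubedog:

package main

import (
	"log"
	"time"

	"github.com/werf/kubedog/pkg/kube"
	"github.com/werf/kubedog/pkg/tracker"
	"github.com/werf/kubedog/pkg/trackers/rollout"
)

func main() {
	// Initialize the shared Kubernetes client (kubeconfig or in-cluster config).
	if err := kube.Init(kube.InitOptions{}); err != nil {
		log.Fatal(err)
	}

	// Follow the Deployment until it becomes ready, printing progress to stdout.
	err := rollout.TrackDeploymentTillReady("mydeploy", "mynamespace", kube.Kubernetes,
		tracker.Options{Timeout: 5 * time.Minute})
	if err != nil {
		log.Fatal(err)
	}
}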

Community

Please feel free to reach out to us via the project's Discussions or werf's Telegram group (there is also one in Russian).

You're also welcome to follow @werf_io to stay informed about all important news, articles, etc.

License

Kubedog is an Open Source project licensed under the Apache License 2.0.

Comments
  • Move Binaries off of Bintray

    Hi!

    We noticed today that the Bintray link for downloading KubeDog (and Bintray as a whole) is going away on May 1st. Is there a plan to migrate the KubeDog binaries off of Bintray to a different platform?

    In the meantime we can install the CLI from source; however, the curl from Bintray was convenient.

    More info on Bintray sunsetting: https://status.bintray.com/ https://jfrog.com/blog/into-the-sunset-bintray-jcenter-gocenter-and-chartcenter/

  • Fail fast when resource readiness probe failed

    Kubedog should treat readiness probe failures as errors and fail the tracking. For now, these failures are only printed to the log.

    Status reports for non-ready controllers should include the statuses of their child Pods.

    Information about readiness probe failures is available in the Pod's status, in the conditions field, as sketched below.
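
    A minimal client-go sketch of reading those conditions (clientset, ns and podName are placeholder names, not kubedog API):

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // Report readiness failures surfaced in the Pod's status conditions.
    func reportReadinessFailures(ctx context.Context, clientset kubernetes.Interface, ns, podName string) error {
    	pod, err := clientset.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	for _, cond := range pod.Status.Conditions {
    		// Ready/ContainersReady turn False while readiness probes are failing.
    		if (cond.Type == corev1.PodReady || cond.Type == corev1.ContainersReady) && cond.Status == corev1.ConditionFalse {
    			fmt.Printf("%s: %s: %s\n", cond.Type, cond.Reason, cond.Message)
    		}
    	}
    	return nil
    }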

  • Option to change the display prefix

    Rather than having the output prefixed with #, I would like to be able to indent the output by some arbitrary number of characters. This would make the output look much better when used as a step within a build pipeline, in my case.

    Would you take a PR making the display prefix configurable when tracking rollouts?

  • feat: add flagger canary support

    Context

    We are using Kubedog in our pipeline to watch rollouts, and now we are moving to Flagger for canary deployments, but Kubedog doesn't support it.

    Proposal

    Since Flagger is a popular operator for Kubernetes, we would like to add support for the Flagger Canary resource. The changes are isolated: we have created a new tracker specific to Canary, so it won't impact other resources.

    My only concern is the Canary-specific default configuration:

    func setDefaultCanarySpecValues(spec *MultitrackSpec) {
    	// Apply the common defaults, then force zero allowed failures:
    	// a canary rollout must stop on the first failure.
    	setDefaultSpecValues(spec)
    	*spec.AllowFailuresCount = 0
    }
    

    In a Canary deployment, we don't want to "keep trying" a rollout: if it fails, it must stop. Also, I don't think it's a good idea to let people set AllowFailuresCount, because it could lead to mistakes like endless retry loops.

  • Error when following a pod

    ERROR: logging before flag.Parse: W1225 16:03:56.419831 28319 reflector.go:270] k8s.io/client-go/tools/watch/informerwatcher.go:110: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old.
    ERROR: logging before flag.Parse: W1225 16:12:36.563961 28319 reflector.go:270] k8s.io/client-go/tools/watch/informerwatcher.go:110: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old.
    ERROR: logging before flag.Parse: W1225 16:31:59.683147 28319 reflector.go:270] k8s.io/client-go/tools/watch/informerwatcher.go:110: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old.
    ERROR: logging before flag.Parse: W1225 16:39:27.725860 28319 reflector.go:270] k8s.io/client-go/tools/watch/informerwatcher.go:110: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old.
    ERROR: logging before flag.Parse: W1225 16:54:45.818893 28319 reflector.go:270] k8s.io/client-go/tools/watch/informerwatcher.go:110: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old.
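
    This warning comes from client-go: the API server expired the watch, and the informer re-lists and re-watches, so it is noisy but usually recoverable. A sketch of one possible mitigation, client-go's RetryWatcher, which resumes watching from the last seen resourceVersion (clientset and ns are placeholder names):

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/watch"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	watchtools "k8s.io/client-go/tools/watch"
    )

    func watchEvents(ctx context.Context, clientset kubernetes.Interface, ns string) error {
    	// List once to obtain a fresh resourceVersion to start watching from.
    	list, err := clientset.CoreV1().Events(ns).List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	lw := &cache.ListWatch{
    		WatchFunc: func(opts metav1.ListOptions) (watch.Interface, error) {
    			return clientset.CoreV1().Events(ns).Watch(ctx, opts)
    		},
    	}
    	// RetryWatcher transparently restarts the watch on transient errors.
    	rw, err := watchtools.NewRetryWatcher(list.ResourceVersion, lw)
    	if err != nil {
    		return err
    	}
    	defer rw.Stop()
    	for ev := range rw.ResultChan() {
    		_ = ev // handle Added/Modified/Deleted events here
    	}
    	return nil
    }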

  • GitHub releases not available for v0.8.0 nor v0.9.0

    GitHub releases are not available for v0.8.0 nor v0.9.0. For example, trying to download the following:

    v0.8.0: macOS arm64

    ... It returns:

    <Error>
    <Code>NoSuchKey</Code>
    <Message>The specified key does not exist.</Message>
    <Details>No such object: kubedog-tuf/targets/releases/0.8.0/darwin-arm64/bin/kubedog</Details>
    </Error>
    
  • Fix Job tracker hangs sometimes

    The issue occurred when deleting a Pod of the Job: the Job tracker starts tracking the newly created Pod, but under some conditions the tracker for the old Pod was not stopped properly.

    Added a Deleted channel to the Pod tracker, and added handling of Pod deletion to the Job, StatefulSet, DaemonSet and Deployment trackers (the pattern is sketched below).

    fixes https://github.com/werf/kubedog/issues/127
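
    A hypothetical sketch of the pattern described above (not kubedog's actual types): the child Pod tracker exposes a Deleted channel, and the parent tracker stops it once the Pod is gone:

    type PodTracker struct {
    	Deleted chan string   // receives the Pod name when the Pod is deleted
    	stop    chan struct{} // signals the tracking goroutine to exit
    }

    // Parent (e.g. Job) tracker loop: stop the old Pod tracker on deletion
    // instead of leaving it running forever.
    func supervise(t *PodTracker) {
    	for {
    		select {
    		case name := <-t.Deleted:
    			_ = name // report the deletion, then stop this tracker
    			close(t.stop)
    			return
    		case <-t.stop:
    			return
    		}
    	}
    }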

  • Kubernetes 1.16 support

    Hello, guys!

    Love your product.

    It looks like kubedog 0.3.3 doesn't support the latest version of Kubernetes 1.16.0, which was released 2 days ago.

    I'm getting this error:

    command:

    kubedog rollout track deployment nginx-ingress-controller -n ingress-nginx
    

    output:

    E0921 00:01:45.823083   18836 reflector.go:131] pkg/mod/k8s.io/client-go@…/tools/cache/reflector.go:99: Failed to list *v1beta1.Deployment: the server could not find the requested resource
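
    (Note: Kubernetes 1.16 removed the extensions/v1beta1 and apps/v1beta1/v1beta2 APIs for Deployment; clients must use apps/v1, which is why listing *v1beta1.Deployment fails here.)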
    
  • chore: release 0.9.6

    :robot: I have created a release *beep* *boop*

    0.9.6 (2022-07-29)

    Bug Fixes

    • generic: ignore jsonpath errs on Condition search (8d88c65)

    This PR was generated with Release Please. See documentation.

  • chore: release 0.9.5

    :robot: I have created a release *beep* *boop*

    0.9.5 (2022-07-26)

    Bug Fixes

    • generic: add logging and don't retry fatal errors on List/Watch (246d454)
    • generic: Condition output was malformed (8c05e40)
    • hide Header if no resources of such type being tracked (232c4ed)

    This PR was generated with Release Please. See documentation.

  • chore: release 0.9.4

    :robot: I have created a release *beep* *boop*

    0.9.4 (2022-07-21)

    Bug Fixes

    • generic-tracker: improve logging + few possible fixes (3524520)

    This PR was generated with Release Please. See documentation.

  • Research & implement new architecture for Deployment tracker

    Problems with the current implementation:

    • unreliable detection of dependent resources (for example, the ReplicaSet of a Deployment);
    • the current tracker codebase is hard to maintain because of a lot of duplicated boilerplate code, which could be generalized;
    • it is hard to create trackers for new resources, also because of all the duplicated boilerplate code.

    The following problems cannot be resolved without such a rework: https://github.com/werf/kubedog/issues/282 https://github.com/werf/kubedog/issues/283

  • Pod tracker hangs when there are problems with pod scheduling

    0/1 nodes are available: 1 Insufficient memory.
    

    An Event describes the error with the Pod, but the Pod tracker does not handle this event.

    Kubedog should somehow react to such events (see the sketch below).
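
    A client-go sketch of surfacing such events (clientset, ns and podName are placeholder names):

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // List scheduling failures recorded as Events for a given Pod.
    func podSchedulingFailures(ctx context.Context, clientset kubernetes.Interface, ns, podName string) error {
    	events, err := clientset.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
    		FieldSelector: fmt.Sprintf("involvedObject.name=%s,reason=FailedScheduling", podName),
    	})
    	if err != nil {
    		return err
    	}
    	for _, ev := range events.Items {
    		fmt.Printf("%s: %s\n", ev.Reason, ev.Message) // e.g. "0/1 nodes are available: 1 Insufficient memory."
    	}
    	return nil
    }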

  • Print container logs in the final error report

    The problem:

    • a container exits with a non-zero code because there is some error in the application;
    • the user sees a "container has exited with non-zero code" message at the end of the multitracker log;
    • the user needs to see the application logs to fix the problem;
    • but the application logs were printed somewhere in the middle of the multitracker output, so the user has to search through it to diagnose the application problem.

    Solution: the multitracker should print the failed application's logs, events and other information at the end of the tracking log, giving the user all the needed information immediately (a fetch of such logs is sketched below).
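
    A client-go sketch of fetching the tail of a failed container's log for such a final report (names are placeholders):

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // Fetch the last lines of a container's log to include in the error report.
    func tailContainerLog(ctx context.Context, clientset kubernetes.Interface, ns, podName, container string) (string, error) {
    	tail := int64(20)
    	req := clientset.CoreV1().Pods(ns).GetLogs(podName, &corev1.PodLogOptions{
    		Container: container,
    		TailLines: &tail,
    	})
    	data, err := req.DoRaw(ctx)
    	if err != nil {
    		return "", err
    	}
    	return string(data), nil
    }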

  • Rework multitracker

    Multitracker should be renamed to something like LoggerMultitracker. Also, implement the common part of the multitracker so that the logging/printing logic resides on top of some base multitracker. Use only the new reworked low-level resource trackers in this new multitracker (see https://github.com/werf/kubedog/issues/285).

  • Rework internal resource trackers

    The problem: there is a lot of repeated code in each tracker in pkg/tracker, so it is:

    1. hard to maintain this code and fix bugs;
    2. hard to add new trackers for custom resources, because a lot of code needs to be copied.

    Solution: rework pkg/tracker so that:

    • it uses the dynamic Kubernetes client (see the sketch after this list);
    • it implements some common base resource with the typical operations needed to implement any concrete resource tracker;
    • it does not mix presentation logic with status-reporting mechanics (the status indicators of the current implementation should not live in the resource trackers, but should instead be implemented at the multitracker level).

    The first step includes:

    • implementing a new framework for building resource trackers;
    • implementing the Deployment tracker on top of that framework;
    • implementing a compatibility adapter for Deployment so that the new tracker can be used in the current version of the multitracker.
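
    A sketch of the dynamic-client approach mentioned above (assumes a *rest.Config in cfg; an illustration, not kubedog's implementation):

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/runtime/schema"
    	"k8s.io/client-go/dynamic"
    	"k8s.io/client-go/rest"
    )

    // One generic watch loop can serve any resource kind, which is what
    // removes the per-tracker boilerplate. For Deployments, pass
    // schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}.
    func watchResource(ctx context.Context, cfg *rest.Config, gvr schema.GroupVersionResource, ns string) error {
    	dyn, err := dynamic.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	w, err := dyn.Resource(gvr).Namespace(ns).Watch(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	defer w.Stop()
    	for ev := range w.ResultChan() {
    		fmt.Println("event:", ev.Type) // inspect ev.Object (unstructured) for status
    	}
    	return nil
    }
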
  • False positive pod error when CrashLoopBackOff has occurred

    1. Successfully deploy the initial version.
    2. Change the app so that the application Pods are crashing.
    3. Leave the app so that the Pods go into a deep CrashLoopBackOff.
    4. Fix the deploy limits and try to redeploy: the first converge will crash.

    Maybe related to https://github.com/werf/werf/issues/1755
