Opsani Ignite for Kubernetes: Evaluate Applications for Optimization


Opsani Ignite analyzes applications running on a Kubernetes cluster in order to identify performance and reliability risks, as well as inefficient configurations. It then identifies specific corrective actions that align the application's configuration with deployment best practices for production environments and may also reduce the application's resource footprint.

CAUTION: Opsani Ignite is a new tool, still in alpha. We appreciate feedback and suggestions.

Download and Install Ignite

To install opsani-ignite, download the binary for your OS (macOS, Linux or Windows) from the latest release and place it somewhere along your shell's path. Check back often as we release updated analysis capabilities frequently; if your version is more than a week old, please see if a newer version is available before using it.
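
For example, on Linux or macOS the install might look like the following; the release URL and asset name are illustrative and depend on the version and platform you pick:

# Illustrative download and install (substitute the actual release asset for your OS)
curl -Lo opsani-ignite https://github.com/opsani/opsani-ignite/releases/latest/download/opsani-ignite-linux-amd64
chmod +x opsani-ignite
sudo mv opsani-ignite /usr/local/bin/
opsani-ignite --help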

Run Ignite

To run Opsani Ignite, you will first want to set up port forwarding to the Prometheus API on your cluster. A typical command looks like this (assuming your Prometheus is called prometheus-server and runs in the prometheus namespace):

kubectl port-forward service/prometheus-server 9090:80 -n prometheus

Once port forwarding is active, run the opsani-ignite executable, providing the URL to the port-forwarded Prometheus API:

opsani-ignite -p http://localhost:9090

Opsani Ignite works in three phases: discovery, analysis and recommendations.

Phase 1: Discovery

On startup, Ignite discovers the applications running on the Kubernetes cluster. By querying your Prometheus monitoring system, Ignite finds all non-system namespaces and the deployment workloads running in them; it then obtains their key settings and metrics.

[Screenshot: discovery]

By default, Ignite looks at the last 7 days of metrics for each application to capture most daily and weekly load and performance variations.
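
To get a feel for the data this phase relies on, you can query the port-forwarded Prometheus API directly. The query below is purely illustrative (it is not necessarily what Ignite runs) and assumes the standard cAdvisor metric names:

# Average per-pod CPU usage over the last 7 days, excluding kube-* namespaces
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=avg_over_time(sum by (namespace, pod) (rate(container_cpu_usage_seconds_total{namespace!~"kube-.*"}[5m]))[7d:1h])'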

Phase 2: Analysis

Ignite analyzes each application, looking at the pods and containers that make up the application in order to uncover specific deviations from best practices for reliable production deployments. It examines important characteristics such as the pod's quality of service (QoS), replica count, resource allocation, usage, limits, and processed load. Ignite then identifies areas requiring attention that either cause or can cause performance and reliability issues.

[Screenshot: analysis]

In addition, Ignite determines whether the application is overprovisioned and has a higher-than-necessary cloud spend. In these cases, it also estimates the likely savings that can be obtained through optimization.
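
You can inspect some of these signals yourself with kubectl. For example, the QoS class that Kubernetes derives from a pod's resource requests and limits is recorded in the pod status; the namespace below is a placeholder:

kubectl get pods -n my-namespace \
  -o custom-columns=NAME:.metadata.name,QOS:.status.qosClass,NODE:.spec.nodeName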

Phase 3: Recommendations

When an application is selected (by pressing Enter in the table of apps), Ignite produces a set of actionable recommendations for improving the efficiency, performance, and reliability of the application. The recommendations fall into several categories, including production best practices (for example, setting resource requests and limits) as well as recommendations for optimal and resilient operation. Applying these recommendations improves the application's performance and efficiency and increases its resilience under load.

[Screenshot: recommendations]
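
As an illustration of the first category, a recommendation to set resource requests and limits can be applied with kubectl; the deployment name, namespace, and values below are placeholders rather than Ignite output:

# Hypothetical example: apply recommended requests and limits to a deployment
kubectl set resources deployment my-app -n my-namespace \
  --requests=cpu=250m,memory=256Mi \
  --limits=cpu=500m,memory=512Mi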

Optimization Recommendations

Opsani Ignite provides analysis and a number of additional recommendations to improve performance, reliability and efficiency.

Best practices call for setting resource requirements so that they meet the application's performance and reliability objectives (typically, latency and error rate service level objectives) while using the assigned resources efficiently to control cloud costs. These values can be discovered manually, often through an onerous and repetitive tuning process.

They can also be identified automatically using optimization services, such as the Opsani optimization-as-a-service tool. Those interested in how continuous optimization can remediate these issues can go to the Opsani website, set up a free trial account, and attach the optimizer to their application. Connecting an application to the optimizer typically takes 10-15 minutes and, in a few hours, produces concrete, tested resource specifications that can be applied using a simple kubectl command.

Interactive, Stdout, or YAML Output

By default, Ignite is a text-based interactive tool (using the fantastic tview package, familiar to those who use the equally magnificent k9s tool). Ignite's command line options can switch the output to a simple stdout text view or to full-detail YAML output that can be used to integrate Ignite into your dashboards and higher-level tools.
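
For example, using the flags listed below, you can print a plain table to stdout or save the full analysis as YAML for other tooling:

# Non-interactive table on stdout
opsani-ignite -p http://localhost:9090 -o table

# Full-detail YAML, saved for dashboards or higher-level tools
opsani-ignite -p http://localhost:9090 -o yaml > ignite-report.yaml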

Command Line Options

Here are Ignite's command line options:

Usage:
  opsani-ignite [<namespace> [<deployment>]] [flags]

Flags:
      --config string           config file (default is $HOME/.opsani-ignite.yaml)
  -p, --prometheus-url string   URI to Prometheus API (typically port-forwarded to localhost using kubectl)
      --start string            Analysis start time, in RFC3339 or relative form (default "-7d")
      --end string              Analysis end time, in RFC3339 or relative form (default "-0d")
      --step string             Time resolution, in relative form (default "1d")
  -o, --output string           Output format (interactive|table|detail|yaml|servo.yaml)
  -b, --hide-blocked            Hide applications that don't meet optimization prerequisites
      --debug                   Display tracing/debug information to stderr
  -q, --quiet                   Suppress warning and info level messages
  -h, --help                    help for opsani-ignite
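
For example, assuming the relative time syntax accepts other day counts (the defaults suggest forms like -7d), a two-week analysis at one-day resolution that hides non-optimizable applications might look like:

opsani-ignite -p http://localhost:9090 --start -14d --end -0d --step 1d -b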


Feedback and Suggestions

The Ignite tool is the result of analyzing thousands of applications as part of our work at Opsani. We released it as an open source tool in order to share our experience and learning with the Kubernetes community and help improve application reliability and efficiency. The source code is available for review and contribution.

We appreciate your feedback. Please send us a few lines about your experience--or, even better--a screenshot 📷 with the results (be they good or not so good) at . Issues and PRs are also a great way to help improve Ignite for everyone.

Troubleshooting

Opsani Ignite records diagnostic information in opsani-ignite.log. You can increase the logging level by adding the --debug option to the command line; running the YAML output option (-o yaml) is also a great way to see the full details.
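
For example, to capture debug-level diagnostics and then review the log:

opsani-ignite -p http://localhost:9090 --debug
tail -n 100 opsani-ignite.log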

Where To Get Help

You can reach out to Opsani technical support at or, for faster response, use the chat bot 💬 on the Opsani website.
