Network Node Manager

network-node-manager is a Kubernetes controller that controls the network configuration of a node to resolve Kubernetes networking issues. By simply deploying and configuring network-node-manager, you can solve Kubernetes network issues that cannot be resolved by Kubernetes itself, or that are only resolved in a later Kubernetes version. The Configuration section below describes the rules network-node-manager provides to address these issues. network-node-manager is based on kubebuilder v2.3.1.

Deploy

network-node-manager supports the following CPU architectures.

  • amd64
  • arm64

Deploy network-node-manager with the command below that matches your kube-proxy mode. After deploying network-node-manager, the "POD_CIDR_IPV4" environment variable must be set. If you use IPv6 services, you must also set the "POD_CIDR_IPV6" environment variable.

iptables proxy mode 
$ kubectl apply -f https://raw.githubusercontent.com/kakao/network-node-manager/master/deploy/network-node-manager_iptables.yml
$ kubectl -n kube-system set env daemonset/network-node-manager POD_CIDR_IPV4=[IPv4 POD CIDR]
$ kubectl -n kube-system set env daemonset/network-node-manager POD_CIDR_IPV6=[IPv6 POD CIDR]

IPVS proxy mode
$ kubectl apply -f https://raw.githubusercontent.com/kakao/network-node-manager/master/deploy/network-node-manager_ipvs.yml
$ kubectl -n kube-system set env daemonset/network-node-manager POD_CIDR_IPV4=[IPv4 POD CIDR]
$ kubectl -n kube-system set env daemonset/network-node-manager POD_CIDR_IPV6=[IPv6 POD CIDR]

Examples:

Example 1
$ kubectl apply -f https://raw.githubusercontent.com/kakao/network-node-manager/master/deploy/network-node-manager_iptables.yml
$ kubectl -n kube-system set env daemonset/network-node-manager POD_CIDR_IPV4="10.244.0.0/16"

Example 2
$ kubectl apply -f https://raw.githubusercontent.com/kakao/network-node-manager/master/deploy/network-node-manager_ipvs.yml
$ kubectl -n kube-system set env daemonset/network-node-manager POD_CIDR_IPV4="192.167.0.0/16"
$ kubectl -n kube-system set env daemonset/network-node-manager POD_CIDR_IPV6="fdbb::0/64"

Configuration

The following configurations control the rules that network-node-manager manages to solve Kubernetes network issues. Check each configuration and its related rule. network-node-manager is configured through environment variables; when an environment variable is changed, network-node-manager is redeployed and the corresponding rule is applied dynamically.
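
You can check the variables that are currently set on the daemonset with kubectl's --list flag:

$ kubectl -n kube-system set env daemonset/network-node-manager --list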

Enable Drop Invalid Packet Rule in INPUT chain

On
$ kubectl -n kube-system set env daemonset/network-node-manager RULE_DROP_INVALID_INPUT_ENABLE=true

Off
$ kubectl -n kube-system set env daemonset/network-node-manager RULE_DROP_INVALID_INPUT_ENABLE=false
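
The controller manages the exact rule internally, but its effect is equivalent to dropping conntrack-invalid packets in the filter table's INPUT chain, roughly like the following illustrative rule (shown for clarity, not the controller's literal rule):

$ iptables -t filter -I INPUT -m conntrack --ctstate INVALID -j DROP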

Enable External-IP to Cluster-IP DNAT Rule

On
$ kubectl -n kube-system set env daemonset/network-node-manager RULE_EXTERNAL_CLUSTER_ENABLE=true

Off
$ kubectl -n kube-system set env daemonset/network-node-manager RULE_EXTERNAL_CLUSTER_ENABLE=false
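
Conceptually, this rule DNATs traffic arriving on a node for a service's external IP to the service's cluster IP, so that the packet is handled like ordinary cluster traffic. An illustrative equivalent with placeholder addresses (not the controller's literal rule):

$ iptables -t nat -A PREROUTING -d 192.0.2.52 -j DNAT --to-destination 10.96.0.10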

How it works?

network-node-manager Architecture

network-node-manager runs on every Kubernetes cluster node in the host network namespace, with network privileges, and manages the node's network configuration. Like a typical Kubernetes controller, it watches Kubernetes objects through the Kubernetes API server and adjusts the node's network configuration accordingly. Currently, network-node-manager only watches service objects.
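
One way to observe this from a node is to look for NAT rules that reference a reconciled service's external IP (placeholder address below):

$ iptables -t nat -S | grep 192.0.2.52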

License

This software is licensed under the Apache 2 license, quoted below.

Copyright 2020 Kakao Corp. http://www.kakaocorp.com

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this project except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Comments
  • nftables support

    This adds a wrapper script for iptables / nftables detection to support Rocky, CentOS, and any other nftables-based distribution. Tested with CentOS 8 and Rocky 8.
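
    A minimal sketch of what such a detection wrapper can look like, assuming the image ships both the iptables-legacy and iptables-nft binaries (an illustration of the common wrapper approach, not the PR's actual script):

    #!/bin/sh
    # Delegate to whichever iptables backend the host actually uses,
    # judged by which backend has more rules installed.
    legacy_rules=$(iptables-legacy-save 2>/dev/null | grep -c '^-')
    nft_rules=$(iptables-nft-save 2>/dev/null | grep -c '^-')
    if [ "${nft_rules:-0}" -gt "${legacy_rules:-0}" ]; then
        exec iptables-nft "$@"
    else
        exec iptables-legacy "$@"
    fi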

  • Support spec.externalIPs and NodePort services

    Currently the controller only supports LoadBalancer services, and only handles the IP addresses found in the .status.loadBalancer.ingress field. The controller should also handle the IP addresses in the .spec.externalIPs field, which can also be used to direct traffic into a cluster.

    Finally, because externalIPs can also be used with NodePort services, I feel that the controller should accept those too.
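
    For reference, the two places where a Service can carry external addresses can be inspected like this (the service name is a placeholder):

    $ kubectl get svc my-svc -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
    $ kubectl get svc my-svc -o jsonpath='{.spec.externalIPs[*]}'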

  • Dual-stack services fail

    If I create a dual-stack LoadBalancer service (say, using Nordix/assign-lb-ip), network-node-manager is confused by this and tries to add IPv4 iptables rules for the IPv6 external IP:

    2021-03-18T23:27:07.341Z	INFO	controllers.Service.reconcile	create iptables rules	{"service": "bootc-system/ingress-nginx-controller", "externalIP": "192.0.2.52", "clusterIP": "10.43.150.86"}
    2021-03-18T23:27:07.441Z	INFO	controllers.Service.reconcile	create iptables rules	{"service": "bootc-system/ingress-nginx-controller", "externalIP": "2001:db8::5", "clusterIP": "10.43.150.86"}
    2021-03-18T23:27:07.445Z	ERROR	controllers.Service.reconcile	iptables v1.8.3 (legacy): host/network `2001:db8::5' not found
    Try `iptables -h' or 'iptables --help' for more information.
    	{"service": "bootc-system/ingress-nginx-controller", "error": "exit status 2"}
    github.com/go-logr/zapr.(*zapLogger).Error
    	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
    github.com/kakao/network-node-manager/pkg/rules.CreateRulesExternalCluster
    	/workspace/pkg/rules/rule_external_cluster.go:314
    github.com/kakao/network-node-manager/controllers.(*ServiceReconciler).Reconcile
    	/workspace/controllers/service_controller.go:208
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
    	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
    	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211
    k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
    k8s.io/apimachinery/pkg/util/wait.JitterUntil
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
    k8s.io/apimachinery/pkg/util/wait.Until
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
    2021-03-18T23:27:07.445Z	ERROR	controller-runtime.controller	Reconciler error	{"controller": "service", "request": "bootc-system/ingress-nginx-controller", "error": "exit status 2"}
    github.com/go-logr/zapr.(*zapLogger).Error
    	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
    	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:258
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
    	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211
    k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
    k8s.io/apimachinery/pkg/util/wait.JitterUntil
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
    k8s.io/apimachinery/pkg/util/wait.Until
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
    

    The service that triggers the above looks like this (redacted for clarity):

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: bootc-system
    spec:
      clusterIP: 10.43.150.86
      clusterIPs:
      - 10.43.150.86
      - 2001:db8:ff::edd5
      externalIPs:
      - 192.0.2.52
      - 2001:db8::5
      externalTrafficPolicy: Local
      ipFamilies:
      - IPv4
      - IPv6
      ipFamilyPolicy: RequireDualStack
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: http
      - name: https
        port: 443
        protocol: TCP
        targetPort: https
      selector:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      sessionAffinity: None
      type: LoadBalancer
    status:
      loadBalancer:
        ingress:
        - ip: 192.0.2.52
        - ip: 2001:db8::5
    

    It seems like the controller needs to look at the clusterIPs field rather than the singular clusterIP field and match the address family for the externalIP it is looking at.
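
    The plural field carries the per-family addresses; for the manifest above, something like the following should print both cluster IPs, one per family:

    $ kubectl -n bootc-system get svc ingress-nginx-controller -o jsonpath='{.spec.clusterIPs[*]}'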

  • allow-shared-ip and externalTrafficPolicy: Local on arm doesn't share IP between IPv4 and IPv6

    I have 4 Services for a deployment: ipv4tcp, ipv4udp, ipv6tcp, and ipv6udp. All use type: LoadBalancer and share their IPs via metallb.universe.tf/allow-shared-ip. But if I set externalTrafficPolicy: Local on one of them (ipv4tcp), it loses its IP. If I then change the other service (ipv4udp) to Local, nothing changes. The same happens for IPv6.

    my deployment

    kind: Service
    apiVersion: v1
    metadata:
      name: piholev4tcp
      namespace: default
      labels:
        k8s-app: pihole
      annotations:
        metallb.universe.tf/allow-shared-ip: piholev4
    spec:
      ports:
        - name: tcp-53-53-rfi69
          protocol: TCP
          port: 53
          targetPort: 53
        - name: tcp-53-53-zfi69
          protocol: TCP
          port: 80
          targetPort: 80
        - name: tcp-443-443-92ld5
          protocol: TCP
          port: 443
          targetPort: 443
      selector:
        k8s-app: pihole
      type: LoadBalancer
      loadBalancerIP: 192.168.178.ip
      externalTrafficPolicy: Cluster
      ipFamily: IPv4
    
  • iptables: use the host's xtables lock, and wait for the lock

    I noticed some odd behaviour when a node is starting afresh and lots of containers are being scheduled: it will often error out and abort with odd iptables error messages. It turns out this is due to not sharing the node's global xtables lock.

    Unfortunately it's not as simple as just passing through the lockfile: by default iptables doesn't wait for the lock and instead just returns an error when the lock is held. So this PR also updates iptables.go to correctly wait for the lock if it is held.
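
    For reference, iptables accepts a -w/--wait flag that blocks until the xtables lock can be acquired instead of exiting with an error, and the lock itself lives at /run/xtables.lock on the host, which is why it must be shared into the container. An illustrative invocation that waits up to 5 seconds for the lock:

    $ iptables -w 5 -t filter -A FORWARD -m conntrack --ctstate INVALID -j DROP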

  • NFT version / Support for CentOS/Rocky 8

    Do you guys plan an NFT version by any chance? Looks like this does not work with CentOS/Rocky:

    2021-08-30T15:17:21.020Z        INFO    controllers.Service.reconcile   create a iptables rule for externalIP to clusterIP      {"service": "41b0a981-d40c-53dd-84b4-7cb2a265647a/public-dns-powerdns-udp", "externalIP": "a.b.c.d", "clusterIP": "10.90.4.210"}
    2021-08-30T15:17:21.022Z        ERROR   controllers.Service.reconcile   iptables v1.8.6 (legacy): Couldn't load target `KUBE-MARK-MASQ':No such file or directory
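
    On distributions that ship both iptables backends, a quick way to check where the kube-proxy chains actually live (the missing KUBE-MARK-MASQ target suggests they are in the nft backend while the controller is calling legacy iptables):

    $ iptables-legacy-save | grep -c KUBE-MARK-MASQ
    $ iptables-nft-save | grep -c KUBE-MARK-MASQ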
    
  • Handle dual-stack services gracefully

    With Kubernetes 1.20, Services may be natively dual-stack. This commit correctly adds IPv4 or IPv6 iptables rules corresponding to each external address found by its family, DNATting to the appropriate family ClusterIP.

    Closes: #10

  • Support for different network setups and UDP?

    Hi,

    Thanks for building this. We have a network setup with our custom CNI plugins where we don't allocate a PodCIDR per node, which means this code

    	// Get Nodes's pod CIDR
    	node := &corev1.Node{}
    	if err := r.Client.Get(ctx, types.NamespacedName{Name: configNodeName}, node); err != nil {
    		logger.Error(err, "failed to get the pod's node info from API server")
    		return ctrl.Result{}, err
    	}
    	podCIDRs := node.Spec.PodCIDRs
    	podCIDRIPv4, podCIDRIPv6 := getPodCIDR(podCIDRs)
    	logger.WithValues("pod CIDR IPV4", podCIDRIPv4).WithValues("pod CIDR IPv6", podCIDRIPv6).Info("pod CIDR")
    

    doesn't work; it ends up with a nil pointer exception. I basically removed the entire block and it works.
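
    A quick way to confirm this situation is to check whether the node object carries a pod CIDR at all; with an IPAM that does not allocate per-node CIDRs, the output is empty (the node name is a placeholder):

    $ kubectl get node my-node -o jsonpath='{.spec.podCIDRs[*]}'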

    My second question: this solves my issue with MetalLB on nodes without local pods, but not for UDP. I can't see from the iptables rules why it wouldn't work, but it doesn't. I have 2 services (with the same IP address through MetalLB) for the same DNS backend, one TCP and one UDP; TCP works, UDP doesn't. Any ideas why?

    Thanks again!
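
    (One possibility worth checking, offered as a guess rather than a confirmed diagnosis: UDP conntrack entries created before the DNAT rule was installed can keep steering packets past the new rule. Existing entries for the shared IP can be inspected on the node:)

    $ conntrack -L -p udp --dst [shared LoadBalancer IP]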

  • Fix typos in the `pkg/iptables` functions

    This fixes typos in the function declaration below and in the code that uses the function.

    https://github.com/kakao/ipvs-node-controller/blob/dfe401c837b837728d944fc39b5d4cec92a8d43e/pkg/iptables/iptables.go#L53
    https://github.com/kakao/ipvs-node-controller/blob/dfe401c837b837728d944fc39b5d4cec92a8d43e/controllers/service_controller.go#L206
    https://github.com/kakao/ipvs-node-controller/blob/dfe401c837b837728d944fc39b5d4cec92a8d43e/controllers/service_controller.go#L212
