inlets-operator

Get public LoadBalancers on your local Kubernetes clusters.

When using a managed Kubernetes engine, you can expose a Service as a "LoadBalancer" and your cloud provider will provision a cloud load-balancer for you, and start routing traffic to the selected service inside your cluster. In other words, you get network ingress to an internal service.

The inlets-operator brings that same experience to your local Kubernetes cluster by provisioning an exit-server on the public cloud and running an inlets server process there.

Once the inlets-operator is installed, any Service of type LoadBalancer will get an IP address, unless you exclude it with an annotation.

kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer

$ kubectl get services -w
NAME               TYPE        CLUSTER-IP        EXTERNAL-IP       PORT(S)   AGE
service/nginx-1    ClusterIP   192.168.226.216   <pending>         80/TCP    78s
service/nginx-1    ClusterIP   192.168.226.216   104.248.163.242   80/TCP    78s

Who is this for?

Your cluster could be running anywhere: on your laptop, in an on-premises datacenter, within a VM, or on a Raspberry Pi. Ingress and LoadBalancers are a core building block of Kubernetes clusters, so Ingress is especially important if you:

  • run a private-cloud or a homelab
  • self-host applications and APIs
  • test and share work with colleagues or clients
  • want to build a realistic environment
  • integrate with webhooks and third-party APIs

There is no need to open a firewall port, set up port-forwarding rules, configure dynamic DNS or use any of the usual hacks. You will get a public IP and it will "just work" for any TCP traffic you may have.

How is it better than other solutions?

  • There are no rate limits for your services when exposed through a self-hosted inlets tunnel
  • You can use your own DNS
  • You can use your own IngressController
  • You can take your IP address with you - wherever you go

Any Service of type LoadBalancer can be exposed within a few seconds.

Since exit-servers are created in your preferred cloud (around a dozen are supported already), you'll only have to pay for the cost of the VM, and where possible, the cheapest plan has already been selected for you. For example with Hetzner (coming soon) that's about 3 EUR / mo, and with DigitalOcean it comes in at around 5 USD - both of these VPSes come with generous bandwidth allowances, global regions and fast network access.

Animation

Watch an animation created by Ivan Velichko

Demo GIF

Video walk-through

In this video walk-through Alex will guide you through creating a Kubernetes cluster on your laptop with KinD, then installing ingress-nginx (an IngressController) and cert-manager. Once the inlets-operator has created a LoadBalancer on the cloud, you'll see a TLS certificate obtained from LetsEncrypt.

Video demo

Try the step-by-step tutorial in the docs

inlets tunnel capabilities

The operator detects Services of type LoadBalancer, and then creates a Tunnel Custom Resource. Its next step is to provision a small VM with a public IP on the public cloud, where it will run the inlets tunnel server. Then an inlets client is deployed as a Pod within your local cluster, which connects to the server and acts like a gateway to your chosen local service.
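
You can watch each of those steps with kubectl. A rough sketch, assuming the nginx-1 Service from the example above (the Tunnel and client deployment names follow the <service>-tunnel pattern used throughout this README):

kubectl get tunnels -A -o wide              # the Tunnel Custom Resources created by the operator
kubectl get tunnel/nginx-1-tunnel -o yaml   # host status and public IP of the exit-server
kubectl logs deploy/nginx-1-tunnel-client   # the in-cluster inlets client connecting out to the server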

Powered by inlets PRO

  • Automatic end-to-end encryption of the control-plane using PKI and TLS
  • Punch out multiple ports such as 80 and 443 over the same tunnel
  • Tunnel any TCP traffic at L4, e.g. Mongo, Postgres, MariaDB, Redis, NATS, SSH and TLS itself.
  • Tunnel an IngressController including TLS termination and LetsEncrypt certs from cert-manager
  • Commercially licensed and supported. For cloud native operators and developers.

Heavily discounted pricing available for personal use.

Status and backlog

Operator cloud host provisioning:

  • Provision VMs/exit-nodes on public cloud: Equinix-Metal, DigitalOcean, Scaleway, GCP, AWS EC2, Linode and Azure

With inlets-pro configured, you get the following additional benefits:

  • Automatic configuration of TLS and encryption using secured websocket wss:// for control-port
  • Tunnel pure TCP traffic
  • Separate data-plane (ports given by Kubernetes) and control-plane (port 8132)

Other features:

  • Automatically update Service type LoadBalancer with a public IP
  • Tunnel L4 TCP traffic
  • In-cluster Role, Dockerfile and YAML files
  • Raspberry Pi / armhf build and YAML file
  • ARM64 (Graviton/Odroid/Equinix-Metal) Dockerfile/build and K8s YAML files
  • Control which services get a LoadBalancer using annotations
  • Garbage collect hosts when Service or CRD is deleted
  • CI with Travis and automated release artifacts
  • One-line installer via arkade: arkade install inlets-operator --help

inlets-operator reference documentation for different cloud providers

Check out the inlets-operator reference documentation to get exit-nodes provisioned on different cloud providers.
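
As a sketch, an installation with arkade and DigitalOcean might look like the command below (the same flags appear in the issues quoted later in this document; the token file path is a placeholder for wherever you saved your API token):

arkade install inlets-operator \
  --provider digitalocean \
  --region lon1 \
  --token-file $HOME/do-access-token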

Get an IP address for your IngressController and LetsEncrypt certificates

Unlike other solutions, this:

  • Integrates directly into Kubernetes
  • Gives you a TCP LoadBalancer, and updates its IP in kubectl get svc
  • Allows you to use any custom DNS you want
  • Works with LetsEncrypt

Example tutorials:

Expose a service with a LoadBalancer

The LoadBalancer type is usually provided by a cloud controller, but when that is not available, you can use the inlets-operator to get a public IP and ingress.

First create a deployment for Nginx.

For Kubernetes 1.17 and lower:

kubectl run nginx-1 --image=nginx --port=80 --restart=Always

For 1.18 and higher:

kubectl apply -f https://raw.githubusercontent.com/inlets/inlets-operator/master/contrib/nginx-sample-deployment.yaml

Now create a service of type LoadBalancer via kubectl expose:

kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer
kubectl get svc

kubectl get tunnel/nginx-1-tunnel -o yaml

kubectl logs deploy/nginx-1-tunnel-client

Check the IP of the LoadBalancer and then access it via the Internet.
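
For example, a quick check from your own machine might look like this (a sketch, assuming the nginx-1 Service above and re-using the jsonpath shown later in this README):

export IP=$(kubectl get svc nginx-1 --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -i http://$IP

Deleting the Service (kubectl delete svc/nginx-1) will also garbage-collect the tunnel and the cloud host, as described in the features list above.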

Annotations, ignoring services and running with other LoadBalancer controllers

By default the operator will create a tunnel for every LoadBalancer service.

There are three ways to override the behaviour:

1) Create LoadBalancers for every service, unless annotated

To ignore a service such as traefik, type in:

kubectl annotate svc/traefik -n kube-system dev.inlets.manage=false

2) Create LoadBalancers for only annotated services

You can also set the operator to ignore services by default and only manage those where the annotation is set to true, by running the operator with the -annotated-only flag. To expose a service such as traefik, type in:

kubectl annotate svc/traefik -n kube-system dev.inlets.manage=true
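
A minimal sketch of how the flag might be passed to the operator process (the argument style matches the operator Deployment quoted in the issues further down; the provider and key path shown here are placeholders):

./inlets-operator \
  -provider=digitalocean \
  -annotated-only \
  -access-key-file=/var/secrets/inlets/inlets-access-key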

3) Create a Tunnel resource for ClusterIP services

Running multiple LoadBalancer controllers together, e.g. the inlets-operator and MetalLB, can cause issues because both will compete against each other when processing the service.

Although the inlets-operator has the -annotated-only flag to filter the services, not all other LoadBalancer controllers have a similar feature.

In this case, the inlets-operator is still able to expose services by using a ClusterIP service with a Tunnel resource instead of a LoadBalancer service.

Example:

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: inlets.inlets.dev/v1alpha1
kind: Tunnel
metadata:
  name: nginx
spec:
  serviceName: nginx
  auth_token: ""

The public IP address of the tunnel is available in the service resource:

$ kubectl get services,tunnel
NAME            TYPE        CLUSTER-IP        EXTERNAL-IP       PORT(S)   AGE
service/nginx   ClusterIP   192.168.226.216   104.248.163.242   80/TCP    78s

NAME                             SERVICE   TUNNEL         HOSTSTATUS   HOSTIP            HOSTID
tunnel.inlets.inlets.dev/nginx   nginx     nginx-client   active       104.248.163.242   214795742

or use a jsonpath to get the value:

kubectl get service nginx --output jsonpath='{.status.loadBalancer.ingress[0].ip}'

Monitor/view logs

The operator deployment is in the kube-system namespace.

kubectl logs deploy/inlets-operator -n kube-system -f

Running on a Raspberry Pi

Use the same commands as described in the section above.

There used to be separate deployment files in the artifacts folder called operator-amd64.yaml and operator-armhf.yaml. Since version 0.2.7, Docker images are built for multiple architectures with the same tag, which means that there is now just one deployment file called operator.yaml that can be used on all supported architectures.
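
If you are installing from the repository's YAML files rather than via arkade or helm, a sketch of applying them looks like this (file names as referenced in this README and in the issues below):

kubectl apply -f ./artifacts/crd.yaml
kubectl apply -f ./artifacts/operator.yaml

You will also need to create the provider access-key secret that the operator reads; the arkade and helm examples quoted later in this document show how that secret is provided.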

Provider Pricing

The host provisioning code used by the inlets-operator is shared with inletsctl; both tools use the configuration in the grid below.

These costs need to be treated as an estimate and will depend on your bandwidth usage and how many hosts you decide to create. You can check your cloud provider's dashboard, API, or CLI at any time to view your exit-nodes. The hosts provided have been chosen because they are the lowest-cost options that the maintainers could find.

Provider                  Price per month   Price per hour   OS image       CPU   Memory   Boot time
Google Compute Engine *   ~$4.28            ~$0.006          Ubuntu 20.04   1     614MB    ~3-15s
Equinix-Metal             ~$360             $0.50            Ubuntu 20.04   1     32GB     ~45-60s
DigitalOcean              $5                ~$0.0068         Ubuntu 18.04   1     1GB      ~20-30s
Scaleway                  5.84€             0.01€            Ubuntu 20.04   2     2GB      3-5m
Amazon EC2                $3.796            $0.0052          Ubuntu 20.04   1     1GB      3-5m
Linode                    $5                $0.0075          Ubuntu 20.04   1     1GB      ~10-30s
Azure                     $4.53             $0.0062          Ubuntu 20.04   1     0.5GB    2-4min
Hetzner                   4.15€             €0.007           Ubuntu 20.04   1     2GB      ~5-10s
  • The first f1-micro instance in a GCP Project (the default instance type for inlets-operator) is free for 720 hrs (30 days) per month

Contributing

Contributions are welcome, see the CONTRIBUTING.md guide.

Similar projects / products and alternatives

  • inlets - L7 HTTP / L4 TCP tunnel which can tunnel any TCP traffic. Secure by default with built-in TLS encryption. Kubernetes-ready with Operator, helm chart, container images and YAML manifests.
  • metallb - open source LoadBalancer for private Kubernetes clusters, no tunnelling.
  • Cloudflare Argo - paid SaaS product from Cloudflare for Cloudflare customers and domains - K8s integration available through Cloudflare DNS and ingress controller
  • ngrok - a popular tunnelling tool, restarts every 7 hours, limits connections per minute, SaaS-only. No K8s integration available

Author / vendor

inlets and the inlets-operator are brought to you by OpenFaaS Ltd and Alex Ellis.

Comments
  • [Feature] Add provisioner for AWS EC2

    Expected Behaviour

    [Feature] Add provisioner for AWS EC2

    Current Behaviour

    Exit nodes can be provisioned to DO/Packet

    Possible Solution

    1. Explore the "provision" package - https://github.com/inlets/inlets-operator/tree/master/pkg/provision

    2. Integrate the AWS SDK for Go

    Example: https://docs.aws.amazon.com/code-samples/latest/catalog/go-ec2-create_instance.go.html

  • Support issue with EC2 provisioner and AWS EC2 Classic

    I'm attempting to install inlets-operator by way of arkade which results in:

    $ kubectl logs -n inlets deploy/inlets-operator
    2020/08/08 19:10:49 Inlets client: inlets/inlets:2.7.3
    2020/08/08 19:10:49 Inlets pro: false
    W0808 19:10:49.685014       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
    I0808 19:10:49.685868       1 controller.go:121] Setting up event handlers
    I0808 19:10:49.685900       1 controller.go:243] Starting Tunnel controller
    I0808 19:10:49.685903       1 controller.go:246] Waiting for informer caches to sync
    I0808 19:10:49.785997       1 controller.go:251] Starting workers
    I0808 19:10:49.786007       1 controller.go:257] Started workers
    2020/08/08 19:10:49 Creating tunnel for nginx-1-tunnel.default
    I0808 19:10:49.789938       1 controller.go:315] Successfully synced 'default/nginx-1'
    2020/08/08 19:10:49 Provisioning started with provider:ec2 host:nginx-1-tunnel
    E0808 19:10:51.798084       1 controller.go:320] error syncing 'default/nginx-1-tunnel': InvalidParameter: The AssociatePublicIpAddress parameter is only supported for VPC launches.
            status code: 400, request id: 7bc05cdf-7eb5-4167-9f44-3616397a40c6, requeuing
    

    Secondary to the above, I had a hard time finding documentation for installing with the AWS provider.

    Expected Behaviour

    Installing inlets-operator via arkade results in no error messages, creates an EC2 instance, and performs whatever other steps constitute a successful install.

    Documentation should be easier to find and should provide clear steps for a successful install and what to do if there's an issue. The documentation should also explicitly detail what will be done in the provider's account.

    Current Behaviour

    Possible Solution

    Steps to Reproduce (for bugs)

    export AWS_PROFILE=default
    kubectl create ns inlets
    arkade install inlets-operator -n inlets \
        -p ec2 \
        -r us-west-2 \
        -z us-west-2a \
        --token-file ~/Downloads/access-key \
        --secret-key-file ~/Downloads/secret-access-key
    

    The arkade version used is 0.6.0.

    where access-key and secret-access-key files just contain the access key and secret access key respectively.

    Context

    Your Environment

    • inlets-operator version, find via kubectl get deploy inlets-operator -o wide

    • Kubernetes distribution i.e. minikube v0.29.0., KinD v0.5.1, Docker Desktop: Bare metal 4 node cluster.

    • Kubernetes version kubectl version: 1.18.6

    • Operating System and version (e.g. Linux, Windows, MacOS): Linux (Ubuntu 20.04)

    • Cloud provisioner: AWS (us-west-2)

  • Provisioning to Hetzner Cloud + some questions

    Hi! This project looks really cool! A few questions if you don't mind:

    • is this for development environments only or would it work for production as well? I am worried about the lb being a single point of failure
    • would it be possible to add support for Hetzner Cloud? It's a very good provider with incredible prices. I use it and love it but they don't offer load balancers yet so I am considering using inlets. I could use a DigitalOcean droplet in Frankfurt for now since added latency would be small. Can the region in DO be set?
    • what about security? Does the lb provisioned have a firewall and things like fail2ban? Is password auth disabled?
    • if I use DigitalOcean for now, can the lb be changed later with possibly no downtime if/when inlets adds support for Hetzner Cloud?

    Thanks a lot in advance!

  • Add scaleway provider

    • [x] I have raised an issue to propose this change.

    This PR closes https://github.com/inlets/inlets-operator/issues/12

    Description

    This pull request adds support for the scaleway provider.

    My next PR will be a refactor of how we handle provisioners in the controller. As said in the issue, it needs some refactoring to stay readable.

    How Has This Been Tested?

    It was tested using minikube. I launched the following commands:

    kubectl apply -f ./artifacts/crd.yaml
    go build && ./inlets-operator \
      --kubeconfig "$HOME/.kube/config" \
      --provider=scaleway \
      --access-key="ACCESS_KEY" --secret-key="SECRET_KEY" \
      --organization-id="ORGA_ID"
    

    And then I launched example commands:

    kubectl run nginx-1 --image=nginx --port=80 --restart=Always
    kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer
    kubectl get svc
    

    This returned:

    ➜ kubectl get svc
    NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
    kubernetes   ClusterIP      10.96.0.1        <none>           443/TCP        5h21m
    nginx-1      LoadBalancer   10.107.150.122   51.158.122.196   80:31914/TCP   2m25s
    

    Then I launched the scaleway dashboard and saw the instance. Going to the provisioned IP showed me the nginx welcome screen.

    Then I launched kubectl delete svc nginx-1. The instance, the volume and the provisioned IP were deleted.

    Here are the logs of the controller during the tests:

    2019/10/28 15:41:14 Inlets client: inlets/inlets:2.6.1
    Creating tunnel nginx-1-tunnel
    2019/10/28 15:41:15 Provisioning host with Scaleway
    2019/10/28 15:41:45 Tunnel exists: nginx-1-tunnel
    2019/10/28 15:41:52 Provisioning call took: 37.604164s
    2019/10/28 15:41:52 Status: provisioning, ID: 109b3074-ef01-4850-a2b3-8b9d3c0456ef, IP:
    2019/10/28 15:41:52 Status: active, ID: 109b3074-ef01-4850-a2b3-8b9d3c0456ef, IP: 51.158.122.196
    2019/10/28 15:41:52 Tunnel exists: nginx-1-tunnel
    2019/10/28 15:42:15 Tunnel exists: nginx-1-tunnel
    2019/10/28 15:42:45 Tunnel exists: nginx-1-tunnel
    2019/10/28 15:43:15 Tunnel exists: nginx-1-tunnel
    2019/10/28 15:43:45 Tunnel exists: nginx-1-tunnel
    2019/10/28 15:44:15 Tunnel exists: nginx-1-tunnel
    2019/10/28 15:44:45 Tunnel exists: nginx-1-tunnel
    2019/10/28 15:44:52 Deleting exit-node: 109b3074-ef01-4850-a2b3-8b9d3c0456ef, ip: 51.158.122.196
    

    How are existing users impacted? What migration steps/scripts do we need?

    Nothing.

    Checklist:

    I have:

    • [x] updated the documentation and/or roadmap (if required)
    • [x] read the CONTRIBUTION guide
    • [x] signed-off my commits with git commit -s
    • [ ] added unit tests
  • Support request for multiple LoadBalancer controllers

    Is it possible to extend the annotations to bypass inlets as the LoadBalancer, flipping this around to create an annotation that the inlets-operator looks for in order to explicitly deploy? Like many, I use MetalLB as the default bare-metal load balancer in a cluster for certain situations. Deploying the inlets-operator would result in two LoadBalancer controllers competing, since MetalLB will always attempt to service type LoadBalancer.

    The inlets-operator is the ideal solution for routing external traffic into a private cluster without having to go the route of a site-to-site VPN and routing traffic. However, I suspect inlets will interrogate all Services of type LoadBalancer and go and open up VMs for each.

    Use case trying to be solved: IP addresses being assigned to internal services/sites, with an API gateway exposed to external internet traffic. inlets is ideal for this, whereas MetalLB is ideal for assigning IPs for internal use (private net).

    DB

  • GCE provisioner does not show when the VM fails to be created

    Using the GCE provisioner, it seems like the tunnel VM sometimes never gets created, and the inlets-operator or events on the Tunnel objects do not give an indication as to what went wrong.

    As I understand it, the provisioning is done in two steps:

    • Step 1: do the Instances.Insert and check that there is no 409 Conflict in case the name clashes;
    • Step 2: on subsequent resyncs, try to fetch the VM using its name, assuming that the previous step worked.

    After looking at how the provisioning is done, I noticed in gce.go that we don't check the status of the operation after creating the operation:

    op, err := p.gceProvisioner.Instances.Insert(
    	host.Additional["projectid"],
    	host.Additional["zone"],
    	instance).Do()

    // This err is only meant for network or failure-related errors;
    // VM provisioning errors are not returned in this error. Instead,
    // we have to look at "op" and wait for the operation to complete.
    if err != nil {
    	return nil, fmt.Errorf("could not provision GCE instance: %s", err)
    }

    if op.HTTPStatusCode == http.StatusConflict {
    	log.Println("Host already exists, status: conflict.")
    }
    

    The "actual" error is never displayed to the user, since we do not keep track of the provisioning operation op. If anything goes wrong, here is what the user sees in the inlets-operator controller logs (see full logs at the bottom of this issue):

    error syncing kube-system/traefik-tunnel: could not get instance: googleapi: Error 404: The resource 'projects/jetstack-mael-valais/zones/europe-west2-b/instances/traefik-tunnel' was not found, notFound, requeuing

    Looking at the GCP audit logs, the reason for this seems to be that the instance name starts with a -:

    (screenshot: invalid parameter error shown in the GCP audit logs when creating the VM)

    I can think of two ways of remediating that:

    • Idea 1 "asynchronously": remember what the operation name is e.g. in some annotation and to use that on each controller sync when the state is "provisioning";
    • Idea 2 "synchronously": just block during while the operation finishes. That means that your controller will block of ~30 seconds, which isn't the "right way" but works fine most of the time, as long the workqueue has multiple goroutines/workers assigned.

    Idea 2 could be done with some polling mechanism such as

    err := wait.Poll(5*time.Second, 300*time.Second, func() (bool, error) {
    	op, err := google.zoneOperationsService.Get(project, zone, name).Do()
    	if err != nil {
    		return false, err
    	}
    	klog.V(6).Infof("Waiting for operation to be completed... (status: %s)", op.Status)
    	if op.Status == "DONE" {
    		if op.Error == nil {
    			return true, nil
    		}
    		var err []error
    		for _, opErr := range op.Error.Errors {
    			err = append(err, fmt.Errorf("%s", *opErr))
    		}
    		return false, fmt.Errorf("the following errors occurred: %+v", err)
    	}
    	return false, nil
    })
    

    The inlets-operator command arguments:

    # kubectl describe pod -l app.kubernetes.io/name=inlets-operator
    Containers:
      inlets-operator:
        Image:         ghcr.io/inlets/inlets-operator:0.12.1
        Command:
          ./inlets-operator
          -provider=gce
          -zone=europe-west2-b
          -region=lon1                     # ooops, reminder of when I used digitalocean
          -access-key-file=/var/secrets/inlets/inlets-access-key
          -license=REDACTED
    

    The inlets-operator logs:

    # kubectl logs -l app.kubernetes.io/name=inlets-operator --tail=-1
    2021/05/19 11:50:24 Operator version: 0.12.1 SHA: b3a96cc192b97afc862087260e97ad3bc2f2491b
    2021/05/19 11:50:24 Inlets client: ghcr.io/inlets/inlets-pro:0.8.3
    2021/05/19 11:50:24 Using inlets PRO.
    W0519 11:50:24.391900       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
    I0519 11:50:24.392512       1 controller.go:121] Setting up event handlers
    I0519 11:50:24.392542       1 controller.go:243] Starting Tunnel controller
    I0519 11:50:24.392546       1 controller.go:246] Waiting for informer caches to sync
    I0519 11:50:24.492658       1 controller.go:251] Starting workers
    I0519 11:50:24.492679       1 controller.go:257] Started workers
    2021/05/19 11:50:24 Creating tunnel for traefik-tunnel.kube-system
    I0519 11:50:24.500790       1 controller.go:315] Successfully synced 'kube-system/traefik'
    2021/05/19 11:50:24 Provisioning started with provider:gce host:traefik-tunnel
    2021/05/19 11:50:25 Creating firewall exists, updating: inlets
    2021/05/19 11:50:27 Provisioning call took: 2.532041s
    2021/05/19 11:50:27 Status (traefik): provisioning, ID: traefik-tunnel|europe-west2-b|jetstack-mael-valais, IP: 
    I0519 11:50:27.046555       1 controller.go:315] Successfully synced 'kube-system/traefik-tunnel'
    I0519 11:50:27.349417       1 controller.go:315] Successfully synced 'kube-system/traefik-tunnel'
    E0519 11:50:54.677483       1 controller.go:320] error syncing 'kube-system/traefik-tunnel': could not get instance: googleapi: Error 404: The resource 'projects/jetstack-mael-valais/zones/europe-west2-b/instances/traefik-tunnel' was not found, notFound, requeuing
    E0519 11:50:54.997725       1 controller.go:320] error syncing 'kube-system/traefik-tunnel': could not get instance: googleapi: Error 404: The resource 'projects/jetstack-mael-valais/zones/europe-west2-b/instances/traefik-tunnel' was not found, notFound, requeuing
    E0519 11:50:55.284851       1 controller.go:320] error syncing 'kube-system/traefik-tunnel': could not get instance: googleapi: Error 404: The resource 'projects/jetstack-mael-valais/zones/europe-west2-b/instances/traefik-tunnel' was not found, notFound, requeuing
    

    The traefik-tunnel object:

    # kubectl describe tunnel -n kube-system traefik-tunnel
    Name:         traefik-tunnel
    Spec:
      auth_token:    foo
      Service Name:  traefik
    Status:
      Host Id:      traefik-tunnel|europe-west2-b|jetstack-mael-valais
      Host Status:  provisioning
    Events:
      Type    Reason  Age                From             Message
      ----    ------  ----               ----             -------
      Normal  Synced  33m (x2 over 33m)  inlets-operator  Tunnel synced successfully
    
  • Support request for network timeout with DigitalOcean API

    I have followed the tutorial and it works correctly and reliably on my Mac, but on my k3s RPi cluster I am getting the error:

    error syncing 'default/nginx-1-tunnel': Post https://api.digitalocean.com/v2/droplets: dial tcp: i/o timeout, requeuing

    Expected Behaviour

    The droplet should be created

    Current Behaviour

    No droplet created

    Steps

    1. Git clone this repo
    2. Install arkade
    2.a Copy my DigitalOcean API key into the file /home/pi/faas-netes/do/key
    3. arkade install inlets-operator --provider digitalocean --region lon1 --token-file /home/pi/faas-netes/do.key
    4. kubectl run nginx-1 --image=arm32v7/nginx --port=80 --restart=Always
    5. kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer

    Other

    The inlets-access-key value didn't look right (even after base64 decoding) so I manually created a secret for inlets-access-key with the value of my API Key from Digital Ocean & deleted the operator pod. Sadly, same error.

  • Support request

    Expected Behaviour

    I am unable to access https content through the proxy. Does this support https proxying? I didn't think this was an inlets-pro feature, but if so, that would answer my questions.

    This could be a misunderstanding on how inlets/inlet-operator works on my part so clarification would be great.

    Current Behaviour

    I have a k3s cluster configured with Traefik, cert-manager, and metal-lb. I used inlets to allow access into the cluster so cert-manager could use http01 to issue the ssl cert. Now if my local DNS is pointing to the local IP (issued by metal-lb) of the Traefik LoadBalancer, I am able to see the site with a valid LetsEncrypt cert.

    For external access I set my DNS provider to point to the IP of the inlets-tunnel on digital ocean. I am able to access the site using http, but not https.

    Possible Solution

    Document the process for configuring LoadBalancers for https access.

    Steps to Reproduce (for bugs)

    1. Install k3s (with --no-deploy servicelb option so I can use metal-lb)

    2. Use helm 3 to update k3s Traefik to enable ssl.

    helm upgrade --reuse-values -f values.yaml -n kube-system traefik stable/traefik
    
    # Traefik values.yaml
    externalTrafficPolicy: Local
    dashboard:
      enabled: true
      domain: traefik.local.lan
      ingress:
        annotations:
          traefik.ingress.kubernetes.io/whitelist-source-range: "192.168.1.0/24,127.0.0.0/8,::1/128"
    ssl:
      enabled: true
      generateTLS: true
    

    3. Install metal-lb using helm 3

    kubectl create namespace metallb-system
    helm install metallb stable/metallb --namespace metallb-system -f values.yml
    
    # metal-lb values.yaml
    configInline:
      address-pools:
        - name: default-ip-space
          protocol: layer2
          addresses:
            - 192.168.1.60-192.168.1.99
    

    4. Install cert manager

    Excluding config for brevity but the cert is issued successfully.

    5. Install inlets using helm 3

    helm repo add inlets https://inlets.github.io/inlets-operator/
    helm repo update
    kubectl apply -f https://raw.githubusercontent.com/inlets/inlets-operator/master/artifacts/crd.yaml
    kubectl create ns inlets
    kubectl create secret -n inlets generic inlets-access-key \
        --from-literal inlets-access-key="CHANGEME"
    helm install inlets-operator inlets/inlets-operator --namespace inlets -f values.yaml
    
    # inlets values.yaml
    provider: "digitalocean"
    region: "nyc1"
    

    6. Deploy an arm version of httpbin for testing.

    # httpbin ingress.yaml
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: httpbin
      namespace: httpbin
      annotations:
        kubernetes.io/ingress.class: traefik
        # ingress.kubernetes.io/ssl-redirect: "true"
        kubernetes.io/tls-acme: "true"
        cert-manager.io/cluster-issuer: letsencrypt-prod
    spec:
      tls:
        - secretName: httpbin-tls-cert
          hosts:
            - httpbin.mydomain.com
      rules:
        - host: httpbin.local.lan
          http:
            paths:
              - backend:
                  serviceName: httpbin
                  servicePort: 80
        - host: httpbin.mydomain.com
          http:
            paths:
              - backend:
                  serviceName: httpbin
                  servicePort: 80
    

    7. Configure DNS

    • Configure local DNS to point to metal-lb IP. httpbin.local.lan -> 192.168.1.60, httpbin.mydomain.com -> 192.168.1.60
    • Configure DNS provider (google domains) to point to digital ocean. httpbin.mydomain.com -> xxx.xxx.xxx.xxx

    Context

    I am trying to enable secure access to services within my cluster. I am mainly looking to safely expose home assistant.

    Your Environment

    • inlets-operator version, find via kubectl get deploy inlets-operator -o wide inlets/inlets-operator:0.6.3

    • Kubernetes distribution i.e. minikube v0.29.0., KinD v0.5.1, Docker Desktop: k3s v1.17.2+k3s1

    • Kubernetes version kubectl version: v1.17.2

    • Operating System and version (e.g. Linux, Windows, MacOS): Raspbian Buster Lite (Raspberry Pi 4)

    • Cloud provisioner: (DigitalOcean.com / Packet.com / etc) DigitalOcean.com

  • Update OS images to Ubuntu 18.04 for provisioners

    Update OS images to Ubuntu 18.04 for provisioners

    Expected Behaviour

    OS images need to be bumped to 18.04 and re-tested for each cloud provisioner

    Current Behaviour

    We have a mix of 16.04 and 18.04. As we saw in https://github.com/inlets/inletsctl/issues/95, some providers are removing their images, given that the release lost support in April 2021.

    Possible Solution

    Find new "code" or "slug" for the Ubuntu 18.04 image.

    Update the code, rebuild, deploy, test creation, and verify that the OS image launched is as expected in the provider's dashboard.

    Context

    Some images are being removed, but all of them are now out of support, so they could become subject to vulnerabilities that will not receive a subsequent fix.

  • Simplify tunnel sync code

    • [ ] I have raised an issue to propose this change.

    Description

    Change if-else statements to make the code easier to read, and also filter services by type before sending them to the work queue.

    How Has This Been Tested?

    This is being tested while I am evaluating inlets-pro in a four-node NUC k8s cluster using k3os, with DigitalOcean as the provider.

    How are existing users impacted? What migration steps/scripts do we need?

    There is no impact or migration needed.

    Checklist:

    I have:

    • [ ] updated the documentation and/or roadmap (if required)
    • [X] read the CONTRIBUTION guide
    • [X] signed-off my commits with git commit -s
    • [ ] added unit tests
  • Add Google Compute Engine as provisioner for exit-node

    Add GCE support for provisioner of exit node in inlets-operator, and update Docs

    Signed-off-by: Utsav Anand [email protected]

    • [] ~I have raised an issue to propose this change.~
    • [x] An issue has been raised to propose this change.

    Description

    This PR adds support for using Google Compute Engine instances as the exit-nodes for inlets-operator.

    How Has This Been Tested?

    (Two screenshots dated 2019-11-23 were attached to the original PR.)

    How are existing users impacted? What migration steps/scripts do we need?

    No user impact

    Checklist:

    I have:

    • [x] updated the documentation and/or roadmap (if required)
    • [x] read the CONTRIBUTION guide
    • [x] signed-off my commits with git commit -s
    • [x] added unit tests
  • Fix issue #151 - Bugfix doc access-key

    • [x] I have raised an issue to propose this change.

    Description

    Add some text to the README to help folks installing using arkade.

    How Has This Been Tested?

    Using k3d on OS X, v1.21.7+k3s1, Ark 0.8.12, ghcr.io/inlets/inlets-operator:0.15.0

    How are existing users impacted? What migration steps/scripts do we need?

    Checklist:

    I have:

    • [x] updated the documentation and/or roadmap (if required)
    • [x] read the CONTRIBUTION guide
    • [x] signed-off my commits with git commit -s
    • [ ] added unit tests
  • bugfix: Docs for installation overlook the access-key

    Expected Behaviour

    Quick issue, followed by a bugfix: the README overlooks the installation step, and if you use ark install inlets-operator you might not have the secret installed properly (the operator refers to the secret by a different name).

    Current Behaviour

    The operator container will fail to load.

    Possible Solution

    updated the README with tips.

    Steps to Reproduce (for bugs)

    Context

    Your Environment

    • inlets-operator version, find via kubectl get deploy inlets-operator -o wide

    • Kubernetes distribution i.e. minikube v0.29.0., KinD v0.5.1, Docker Desktop:

    • Kubernetes version kubectl version:

    • Operating System and version (e.g. Linux, Windows, MacOS):

    • Cloud provisioner:

  • Update build status image to use GitHub Actions #148

    • [ ] I have raised an issue to propose this change.

    Description

    How Has This Been Tested?

    How are existing users impacted? What migration steps/scripts do we need?

    Checklist:

    I have:

    • [ ] updated the documentation and/or roadmap (if required)
    • [x] read the CONTRIBUTION guide
    • [x] signed-off my commits with git commit -s
    • [ ] added unit tests
  • Update build status image to use GitHub Actions

    Expected Behaviour

    Correct build status to be shown

    Current Behaviour

    Shows the old Travis build.

    Possible Solution

    Update the link as per the README of openfaas/faas. Note that it has both a link to the build and an image for the badge.

  • [WIP] Add support to read licence from secret

    This commit adds support for reading the licence from a Kubernetes secret. A Kubernetes secret can be created, which can be passed through the helm chart. The previous flag is still present to provide backward compatibility.

    Signed-off-by: Vivek Singh [email protected]

    • [ ] I have raised an issue to propose this change.

    #67

    Description

    How Has This Been Tested?

    How are existing users impacted? What migration steps/scripts do we need?

    Checklist:

    I have:

    • [ ] updated the documentation and/or roadmap (if required)
    • [x] read the CONTRIBUTION guide
    • [x] signed-off my commits with git commit -s
    • [ ] added unit tests
  • Known issue: connection refused due to IPVS

    Details

    You may run into a known issue where the client deployment for the inlets tunnel says: connection refused

    Why does this happen?

    This is due to the way your Kubernetes cluster or networking driver is configured to use IPVS. In IPVS mode, outgoing traffic is redirected to the node that the pod is running on, instead of being allowed to go to your exit-server.

    Most clusters use iptables, which does not cause this problem.

    If you've installed Calico or configured Cilium in a certain way then it may be using IPVS.

    Possible Solution

    There is a workaround, which is better for production use because the token and IP of the tunnel are deterministic, and the inlets-pro helm chart can be managed through a GitOps approach using Argo or FluxCD.

    • Provision an exit-server using inletsctl, terraform, or manually
    • Then deploy the inlets-pro client using its helm chart

    If anyone has suggestions on how to improve the operator so that when an external-ip is set, it can be compatible with IPVS, I'd love to hear from you here or in some other way.

    Full details: https://inlets.dev/blog/2021/07/08/short-lived-clusters.html

    If you want to carry on using the operator for some reason, edit the service and remove its public IP. You'll be able to see the IPs using kubectl get tunnels -A -o wide

    Steps to Reproduce (for bugs)

    Optionally create a multipass VM, or cloud VM:

    multipass launch --cpus 2 --mem 4G -d 30G --name k3s-server
    multipass exec k3s-server /bin/bash
    
    curl -sLS https://get.arkade.dev | sudo sh
    arkade get k3sup && sudo mv .arkade/bin/k3sup /usr/local/bin/
    
    1. Launch a cluster in IPVS mode: k3sup install --local --k3s-extra-args="--kube-proxy-arg proxy-mode=ipvs"
    2. export KUBECONFIG=$(pwd)/kubeconfig
      1. Or install a networking driver which uses IPVS.
    3. Install IPVS tools: sudo apt update && sudo apt install ipvsadm
    4. Confirm IPVS is running: sudo ipvsadm -ln
    5. Install the inlets-operator
    6. Deploy and expose nginx
    7. Note the logs for the client saying connection refused when trying to connect to the remote IP address on port 8123 on DigitalOcean, Equinix Metal or whatever cloud is being used.
    2021/08/03 10:13:03 Starting TCP client. Version 0.8.8 - 57580545a321dc7549a26e8008999e12cb7161de
    2021/08/03 10:13:03 Licensed to: Zespre Schmidt <[email protected]>, expires: 2 day(s)
    2021/08/03 10:13:03 Upstream server: my-service, for ports: 80
    Error: unable to download CA from remote inlets server for auto-tls: Get "https://165.22.103.96:8123/.well-known/ca.crt": dial tcp 165.22.103.96:8123: connect: connection refused
    

    Note: port 8123 isn't part of the LoadBalancer service, which makes this behaviour even more questionable.

    Context

    A few people have run into this recently, but generally this hasn't been reported by users.
