A scalable overlay networking tool with a focus on performance, simplicity and security

What is Nebula?

Nebula is a scalable overlay networking tool with a focus on performance, simplicity and security. It lets you seamlessly connect computers anywhere in the world. Nebula is portable, and runs on Linux, macOS, Windows, iOS, and Android. It can be used to connect a small number of computers, but is also able to connect tens of thousands of computers.

Nebula incorporates a number of existing concepts like encryption, security groups, certificates, and tunneling, and each of those individual pieces existed before Nebula in various forms. What makes Nebula different from existing offerings is that it brings all of these ideas together, resulting in a sum that is greater than its individual parts.

You can read more about Nebula here.

You can also join the NebulaOSS Slack group here.

Supported Platforms

Desktop and Server

Check the releases page for downloads.

  • Linux - 64 and 32 bit, arm, and others
  • Windows
  • macOS
  • FreeBSD

Mobile

  • iOS
  • Android

Technical Overview

Nebula is a mutually authenticated peer-to-peer software-defined network based on the Noise Protocol Framework. Nebula uses certificates to assert a node's IP address, name, and membership within user-defined groups. Nebula's user-defined groups allow for provider-agnostic traffic filtering between nodes. Discovery nodes allow individual peers to find each other and optionally use UDP hole punching to establish connections from behind most firewalls or NATs. Users can move data between nodes in any number of cloud service providers, datacenters, and endpoints, without needing to maintain a particular addressing scheme.
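
Because a node's IP address, name, and groups are asserted by its certificate, you can inspect any signed certificate to see exactly what it claims. For example, using the laptop.crt generated in the quick-start steps below:

./nebula-cert print -path laptop.crt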

Nebula uses elliptic curve Diffie-Hellman key exchange, and AES-256-GCM in its default configuration.

Nebula was created to provide a mechanism for groups of hosts to communicate securely, even across the internet, while enabling expressive firewall definitions similar in style to cloud security groups.
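
For example, using the firewall rule syntax shown in the example configurations later in this document, this inbound rule admits tcp/443 only from hosts whose certificates carry both the laptop and home groups:

firewall:
  inbound:
    - port: 443
      proto: tcp
      groups:
        - laptop
        - home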

Getting started (quickly)

To set up a Nebula network, you'll need:

1. The Nebula binaries for your specific platform. You'll need nebula-cert and the nebula binary for each platform you use.

2. (Optional, but strongly recommended.) At least one discovery node with a routable IP address, which we call a lighthouse.

Nebula lighthouses allow nodes to find each other, anywhere in the world. A lighthouse is the only node in a Nebula network whose IP should not change. Running a lighthouse requires very few compute resources, and you can easily use the least expensive option from a cloud hosting provider. If you're not sure which provider to use, a number of us have used $5/mo DigitalOcean droplets as lighthouses.

Once you have launched an instance, ensure that Nebula UDP traffic (default port udp/4242) can reach it over the internet.
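
How you open the port depends on your provider and host firewall; on a Linux lighthouse running ufw, for example (an assumption about your tooling), it would be:

sudo ufw allow 4242/udp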

3. A Nebula certificate authority, which will be the root of trust for a particular Nebula network.

./nebula-cert ca -name "Myorganization, Inc"

This will create files named ca.key and ca.crt in the current directory. The ca.key file is the most sensitive file you'll create, because it is the key used to sign the certificates for individual nebula nodes/hosts. Please store this file somewhere safe, preferably with strong encryption.
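
One illustrative option (an assumption about your tooling, not a requirement) is to keep only a symmetrically encrypted copy, for example with gpg:

gpg --symmetric --cipher-algo AES256 ca.key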

4. Nebula host keys and certificates generated from that certificate authority

This assumes you have four nodes, named lighthouse1, laptop, server1, and host3. You can name the nodes anything you'd like, including an FQDN. You'll also need to choose IP addresses and the associated subnet. In this example, we are creating a nebula network that will use 192.168.100.0/24 as its network range. This example also demonstrates nebula groups, which can later be used to define traffic rules in a nebula network.

./nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24"
./nebula-cert sign -name "laptop" -ip "192.168.100.2/24" -groups "laptop,home,ssh"
./nebula-cert sign -name "server1" -ip "192.168.100.9/24" -groups "servers"
./nebula-cert sign -name "host3" -ip "192.168.100.10/24"

5. Configuration files for each host

Download a copy of the nebula example configuration.

  • On the lighthouse node, you'll need to ensure am_lighthouse: true is set.

  • On the individual hosts, ensure the lighthouse is defined properly in the static_host_map section, and is added to the lighthouse hosts section (see the sketch below).
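
For the example network above, the host side of that configuration would look something like this minimal sketch (lighthouse.example.com is a placeholder for your lighthouse's routable IP or DNS name):

static_host_map:
  "192.168.100.1": ["lighthouse.example.com:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"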

6. Copy nebula credentials, configuration, and binaries to each host

For each host, copy the nebula binary to the host, along with config.yaml from step 5, and the files ca.crt, {host}.crt, and {host}.key from step 4.

DO NOT COPY ca.key TO INDIVIDUAL NODES.

7. Run nebula on each host

./nebula -config /path/to/config.yaml
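
Once nebula is running on two or more hosts, you can confirm the overlay is up by pinging another node's nebula IP; for example, from laptop in the example network above:

ping 192.168.100.1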

Building Nebula from source

Download Go and clone this repo. Change to the nebula directory.

To build nebula for all platforms: make all

To build nebula for a specific platform (ex, Windows): make bin-windows

See the Makefile for more details on build targets.

Credits

Nebula was created at Slack Technologies, Inc by Nate Brown and Ryan Huber, with contributions from Oliver Fross, Alan Lam, Wade Simmons, and Lining Wang.

Comments
  • Node outside of LAN can only talk to the lighthouse


    I have a bunch of computers on my LAN with one lighthouse that is accessible from the outside world:

    Lighthouse: 192.168.42.99 (mydomain.com:4242)
    LAN machine 1 (A): 192.168.42.200
    LAN machine 2 (B): 192.168.42.203
    Outside-LAN machine (C): 192.168.42.10

    Using the 192.168.42.0/24 nebula IPs:

    • A, B and lighthouse can ping each other without any issue
    • C can ping the lighthouse but not A nor B
    • A and B can't ping C
    • The lighthouse can ping C

    Lighthouse config:

    # This is the nebula example configuration file. You must edit, at a minimum, the static_host_map, lighthouse, and firewall sections
    # Some options in this file are HUPable, including the pki section. (A HUP will reload credentials from disk without affecting existing tunnels)
    
    # PKI defines the location of credentials for this node. Each of these can also be inlined by using the yaml ": |" syntax.
    pki:
      # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
      ca: /etc/nebula/ca.crt
      cert: /etc/nebula/pihole.crt
      key: /etc/nebula/pihole.key
      #blacklist is a list of certificate fingerprints that we will refuse to talk to
      #blacklist:
      #  - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72
    
    # The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
    # A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
    # The syntax is:
    #   "{nebula ip}": ["{routable ip/dns name}:{routable port}"]
    # Example, if your lighthouse has the nebula IP of 192.168.100.1 and has the real ip address of 100.64.22.11 and runs on port 4242:
    static_host_map:
      "192.168.42.99": ["mydomain.com:4242"]
    
    
    lighthouse:
      # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
      # you have configured to be lighthouses in your network
      am_lighthouse: true
      # serve_dns optionally starts a dns listener that responds to various queries and can even be
      # delegated to for resolution
      # serve_dns: true
      # interval is the number of seconds between updates from this node to a lighthouse.
      # during updates, a node sends information about its current IP addresses to each node.
      interval: 60
      # hosts is a list of lighthouse hosts this node should report to and query from
      # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
      hosts:
              #  - "192.168.42.1"
    
    # Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
    # however using port 0 will dynamically assign a port and is recommended for roaming nodes.
    listen:
      host: 0.0.0.0
      port: 4242
      # Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
      # default is 64, does not support reload
      #batch: 64
      # Configure socket buffers for the udp side (outside), leave unset to use the system defaults. Values will be doubled by the kernel
      # Default is net.core.rmem_default and net.core.wmem_default (/proc/sys/net/core/rmem_default and /proc/sys/net/core/wmem_default)
      # Maximum is limited by memory in the system, SO_RCVBUFFORCE and SO_SNDBUFFORCE is used to avoid having to raise the system wide
      # max, net.core.rmem_max and net.core.wmem_max
      #read_buffer: 10485760
      #write_buffer: 10485760
    
    # Punchy continues to punch inbound/outbound at a regular interval to avoid expiration of firewall nat mappings
    punchy: true
    # punch_back means that a node you are trying to reach will connect back out to you if your hole punching fails
    # this is extremely useful if one node is behind a difficult nat, such as symmetric
    punch_back: true
    
    # Cipher allows you to choose between the available ciphers for your network.
    # IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
    #cipher: chachapoly
    
    # Local range is used to define a hint about the local network range, which speeds up discovering the fastest
    # path to a network adjacent nebula node.
    #local_range: "172.16.0.0/24"
    
    # sshd can expose informational and administrative functions via ssh.
    #sshd:
      # Toggles the feature
      #enabled: true
      # Host and port to listen on, port 22 is not allowed for your safety
      #listen: 127.0.0.1:2222
      # A file containing the ssh host private key to use
      # A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
      #host_key: ./ssh_host_ed25519_key
      # A file containing a list of authorized public keys
      #authorized_users:
        #- user: steeeeve
          # keys can be an array of strings or single string
          #keys:
            #- "ssh public key string"
    
    # Configure the private interface. Note: addr is baked into the nebula certificate
    tun:
      # Name of the device
      dev: nebula1
      # Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
      drop_local_broadcast: false
      # Toggles forwarding of multicast packets
      drop_multicast: false
      # Sets the transmit queue length, if you notice lots of transmit drops on the tun it may help to raise this number. Default is 500
      tx_queue: 500
      # Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
      mtu: 1300
      # Route based MTU overrides, if you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
      routes:
        #- mtu: 8800
        #  route: 10.0.0.0/16
    
    # TODO
    # Configure logging level
    logging:
      # panic, fatal, error, warning, info, or debug. Default is info
      level: info
      # json or text formats currently available. Default is text
      format: text
    
    #stats:
      #type: graphite
      #prefix: nebula
      #protocol: tcp
      #host: 127.0.0.1:9999
      #interval: 10s
    
      #type: prometheus
      #listen: 127.0.0.1:8080
      #path: /metrics
      #namespace: prometheusns
      #subsystem: nebula
      #interval: 10s
    
    # Nebula security group configuration
    firewall:
      conntrack:
        tcp_timeout: 120h
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
    
      # The firewall is default deny. There is no way to write a deny rule.
      # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
      # Logical evaluation is roughly: port AND proto AND ca_sha AND ca_name AND (host OR group OR groups OR cidr)
      # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
      #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
      #   proto: `any`, `tcp`, `udp`, or `icmp`
      #   host: `any` or a literal hostname, ie `test-host`
      #   group: `any` or a literal group name, ie `default-group`
      #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
      #   cidr: a CIDR, `0.0.0.0/0` is any.
      #   ca_name: An issuing CA name
      #   ca_sha: An issuing CA shasum
    
      outbound:
        # Allow all outbound traffic from this node
        - port: any
          proto: any
          host: any
    
      inbound:
        # Allow all traffic between any nebula hosts
        - port: any
          proto: any
          host: any
    
    

    C config:

    # This is the nebula example configuration file. You must edit, at a minimum, the static_host_map, lighthouse, and firewall sections
    # Some options in this file are HUPable, including the pki section. (A HUP will reload credentials from disk without affecting existing tunnels)
    
    # PKI defines the location of credentials for this node. Each of these can also be inlined by using the yaml ": |" syntax.
    pki:
      # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
      ca: /etc/nebula/ca.crt
      cert: /etc/nebula/work.crt
      key: /etc/nebula/work.key
      #blacklist is a list of certificate fingerprints that we will refuse to talk to
      #blacklist:
      #  - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72
    
    # The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
    # A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
    # The syntax is:
    #   "{nebula ip}": ["{routable ip/dns name}:{routable port}"]
    # Example, if your lighthouse has the nebula IP of 192.168.100.1 and has the real ip address of 100.64.22.11 and runs on port 4242:
    static_host_map:
      "192.168.42.99": ["ftpix.com:4242"]
    
    lighthouse:
      # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
      # you have configured to be lighthouses in your network
      am_lighthouse: false
      # serve_dns optionally starts a dns listener that responds to various queries and can even be
      # delegated to for resolution
      #serve_dns: false
      # interval is the number of seconds between updates from this node to a lighthouse.
      # during updates, a node sends information about its current IP addresses to each node.
      interval: 60
      # hosts is a list of lighthouse hosts this node should report to and query from
      # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
      hosts:
        - "192.168.42.99"
    
    # Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
    # however using port 0 will dynamically assign a port and is recommended for roaming nodes.
    listen:
      host: 0.0.0.0
      port: 0
      # Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
      # default is 64, does not support reload
      #batch: 64
      # Configure socket buffers for the udp side (outside), leave unset to use the system defaults. Values will be doubled by the kernel
      # Default is net.core.rmem_default and net.core.wmem_default (/proc/sys/net/core/rmem_default and /proc/sys/net/core/wmem_default)
      # Maximum is limited by memory in the system, SO_RCVBUFFORCE and SO_SNDBUFFORCE is used to avoid having to raise the system wide
      # max, net.core.rmem_max and net.core.wmem_max
      #read_buffer: 10485760
      #write_buffer: 10485760
    
    # Punchy continues to punch inbound/outbound at a regular interval to avoid expiration of firewall nat mappings
    punchy: true
    # punch_back means that a node you are trying to reach will connect back out to you if your hole punching fails
    # this is extremely useful if one node is behind a difficult nat, such as symmetric
    punch_back: true
    
    # Cipher allows you to choose between the available ciphers for your network.
    # IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
    #cipher: chachapoly
    
    # Local range is used to define a hint about the local network range, which speeds up discovering the fastest
    # path to a network adjacent nebula node.
    #local_range: "172.16.0.0/24"
    
    # sshd can expose informational and administrative functions via ssh.
    #sshd:
      # Toggles the feature
      #enabled: true
      # Host and port to listen on, port 22 is not allowed for your safety
      #listen: 127.0.0.1:2222
      # A file containing the ssh host private key to use
      # A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
      #host_key: ./ssh_host_ed25519_key
      # A file containing a list of authorized public keys
      #authorized_users:
        #- user: steeeeve
          # keys can be an array of strings or single string
          #keys:
            #- "ssh public key string"
    
    # Configure the private interface. Note: addr is baked into the nebula certificate
    tun:
      # Name of the device
      dev: nebula1
      # Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
      drop_local_broadcast: false
      # Toggles forwarding of multicast packets
      drop_multicast: false
      # Sets the transmit queue length, if you notice lots of transmit drops on the tun it may help to raise this number. Default is 500
      tx_queue: 500
      # Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
      mtu: 1300
      # Route based MTU overrides, if you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
      routes:
        #- mtu: 8800
        #  route: 10.0.0.0/16
    
    # TODO
    # Configure logging level
    logging:
      # panic, fatal, error, warning, info, or debug. Default is info
      level: info
      # json or text formats currently available. Default is text
      format: text
    
    #stats:
      #type: graphite
      #prefix: nebula
      #protocol: tcp
      #host: 127.0.0.1:9999
      #interval: 10s
    
      #type: prometheus
      #listen: 127.0.0.1:8080
      #path: /metrics
      #namespace: prometheusns
      #subsystem: nebula
      #interval: 10s
    
    # Nebula security group configuration
    firewall:
      conntrack:
        tcp_timeout: 120h
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
    
      # The firewall is default deny. There is no way to write a deny rule.
      # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
      # Logical evaluation is roughly: port AND proto AND ca_sha AND ca_name AND (host OR group OR groups OR cidr)
      # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
      #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
      #   proto: `any`, `tcp`, `udp`, or `icmp`
      #   host: `any` or a literal hostname, ie `test-host`
      #   group: `any` or a literal group name, ie `default-group`
      #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
      #   cidr: a CIDR, `0.0.0.0/0` is any.
      #   ca_name: An issuing CA name
      #   ca_sha: An issuing CA shasum
    
      outbound:
        # Allow all outbound traffic from this node
        - port: any
          proto: any
          host: any
    
      inbound:
        # Allow icmp between any nebula hosts
        - port: any
          proto: icmp
          host: any
    
        # Allow any tcp from any nebula host
        - port: any
          proto: tcp
          host: any
    
        - port: any
          proto: udp
          host: any
    
    

    Logs from C:

    Dec 05 15:55:20 gz-t480 nebula[32698]: time="2019-12-05T15:55:20+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="192.168.1.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:22 gz-t480 nebula[32698]: time="2019-12-05T15:55:22+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="192.168.1.198:52803" vpnIp=192.168.42.198
    Dec 05 15:55:23 gz-t480 nebula[32698]: time="2019-12-05T15:55:23+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="192.168.200.198:52803" vpnIp=192.168.42.198
    Dec 05 15:55:25 gz-t480 nebula[32698]: time="2019-12-05T15:55:25+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.21.0.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:27 gz-t480 nebula[32698]: time="2019-12-05T15:55:27+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.19.0.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:29 gz-t480 nebula[32698]: time="2019-12-05T15:55:29+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.17.0.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:31 gz-t480 nebula[32698]: time="2019-12-05T15:55:31+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.20.0.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:33 gz-t480 nebula[32698]: time="2019-12-05T15:55:33+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.21.0.1:58904" vpnIp=192.168.42.198
    Dec 05 15:55:35 gz-t480 nebula[32698]: time="2019-12-05T15:55:35+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.19.0.1:58904" vpnIp=192.168.42.198
    Dec 05 15:55:38 gz-t480 nebula[32698]: time="2019-12-05T15:55:38+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.17.0.1:58904" vpnIp=192.168.42.198
    Dec 05 15:55:40 gz-t480 nebula[32698]: time="2019-12-05T15:55:40+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.20.0.1:58904" vpnIp=192.168.42.198
    
  • Windows 10: "failed to run 'netsh' to set address: exit status 1"


    Hi,

    I'm trying to set up a simple network, with one lighthouse and one node. The lighthouse should run on Windows 10 (v1903, build 18362.476) while the node runs on macOS (Catalina, 10.15.1).

    I've deployed the certificates and prepared both of the configs. Also installed TAP driver from OpenVPN.

    However, when I start the lighthouse node this error appears:

    D:\nebula>.\nebula.exe --config .\config.yml
    time="2019-11-20T12:47:12+01:00" level=info msg="Firewall rule added" firewallRule="map[caName: caSha: direction:outgoing endPort:0 groups:[] host:any ip:<nil> proto:0 startPort:0]"
    time="2019-11-20T12:47:12+01:00" level=info msg="Firewall rule added" firewallRule="map[caName: caSha: direction:incoming endPort:0 groups:[] host:any ip:<nil> proto:1 startPort:0]"
    time="2019-11-20T12:47:12+01:00" level=info msg="Firewall rule added" firewallRule="map[caName: caSha: direction:incoming endPort:443 groups:[laptop home] host: ip:<nil> proto:6 startPort:443]"
    time="2019-11-20T12:47:12+01:00" level=info msg="Firewall started" firewallHash=3e3f317872f504cec08154d9fb0a726ebc68235d1a5075426317696bdd388336
    time="2019-11-20T12:47:12+01:00" level=info msg="Main HostMap created" network=192.168.178.122/24 preferredRanges="[192.168.178.0/24]"
    time="2019-11-20T12:47:12+01:00" level=fatal msg="failed to run 'netsh' to set address: exit status 1"
    

    Here's the lighthouse config.yml:

    pki:
      ca: D:\\nebula\\ca.crt
      cert: D:\\nebula\\lighthouse1.crt
      key: D:\\nebula\\lighthouse1.key
    
    lighthouse:
      am_lighthouse: true
      interval: 60
    
    listen:
      host: 0.0.0.0
      port: 4242
    
    local_range: "192.168.178.0/24"
    
    handshake_mac:
      key: "MYHANDSHAKE"
      accepted_keys:
        - "MYHANDSHAKE"
    
    tun:
      dev: nebula1
      drop_local_broadcast: false
      drop_multicast: false
      tx_queue: 500
      mtu: 1300
    
    logging:
      level: info
      format: text
    
    # Nebula security group configuration
    firewall:
      conntrack:
        tcp_timeout: 120h
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
    
      outbound:
        - port: any
          proto: any
          host: any
    
      inbound:
        - port: any
          proto: icmp
          host: any
        - port: 443
          proto: tcp
          groups:
            - laptop
            - home
    
  • lighthouse High availability


    Hello What is the correct way to achieve High availability of a nebula network?

    Is it to have 2 lighthouse nodes with the same certificate and to configure all the clients to use those 2 lighthouse nodes?
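
    For illustration, that client-side setup would follow the static_host_map syntax used elsewhere in this document, along these lines (the hostnames are placeholders, and each lighthouse would have its own certificate and nebula IP rather than a shared one):

    static_host_map:
      "192.168.100.1": ["lighthouse1.example.com:4242"]
      "192.168.100.2": ["lighthouse2.example.com:4242"]

    lighthouse:
      am_lighthouse: false
      hosts:
        - "192.168.100.1"
        - "192.168.100.2"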

  • Unbreak building for FreeBSD


    I naively copied the darwin files to unbreak building FreeBSD binaries. The other issue is that the upstream version of the water library doesn't support FreeBSD. There is a fork with added FreeBSD support, https://github.com/yggdrasil-network/water, and a work-in-progress pull request to upstream: https://github.com/songgao/water/pull/37

    After these dirty hacks I'm able to start nebula on FreeBSD hosts but no traffic is passed between them:

    $ sudo ./nebula -config config.yml
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:outgoing endPort:0 groups:[] host:any ip:<nil> proto:0 startPort:0]"
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:incoming endPort:0 groups:[] host:any ip:<nil> proto:1 startPort:0]"
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:incoming endPort:443 groups:[laptop home] host: ip:<nil> proto:6 startPort:443]"
    INFO[0000] Firewall started                              firewallHash=853d3005de969aa0cb1100731e983a740ab4218f89c78189edd389ff5e05ae99
    INFO[0000] Main HostMap created                          network=192.168.100.2/24 preferredRanges="[192.168.0.0/24]"
    INFO[0000] UDP hole punching enabled
    command: ifconfig tap0 192.168.100.2/24 192.168.100.2
    command: ifconfig tap0 mtu 1300
    INFO[0000] Nebula interface is active                    build=dev+20191217111808 interface=tap0 network=192.168.100.2/24
    INFO[0000] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3879127975 remoteIndex=0 udpAddr="188.116.33.203:4242" vpnIp=192.168.100.1
    INFO[0000] Handshake message received                    durationNs=446865780 handshake="map[stage:2 style:ix_psk0]" initiatorIndex=3879127975 remoteIndex=3879127975 responderIndex=834573217 udpAddr="188.116.33.203:4242" vpnIp=192.168.100.1
    

    tap0 interface is configured correctly:

    tap0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1300
            options=80000<LINKSTATE>
            ether 58:9c:fc:10:ff:96
            inet 192.168.100.2 netmask 0xffffff00 broadcast 192.168.100.2
            groups: tap
            media: Ethernet autoselect
            status: active
            nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
            Opened by PID 42831
    
    [email protected] ~/nebula/build/freebsd (support-freebsd*) $ netstat -rn4
    Routing tables
    
    Internet:
    Destination        Gateway            Flags     Netif Expire
    default            192.168.0.2        UGS        igb0
    127.0.0.1          link#5             UH          lo0
    192.168.0.0/24     link#1             U          igb0
    192.168.0.11       link#1             UHS         lo0
    192.168.100.0/24   link#6             U          tap0
    192.168.100.2      link#6             UHS         lo0
    

    There's no response for who-has requests:

    [email protected] ~/nebula/build/freebsd (support-freebsd*) $ sudo tcpdump -i tap0
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on tap0, link-type EN10MB (Ethernet), capture size 262144 bytes
    12:55:38.490465 ARP, Request who-has 192.168.100.1 tell 192.168.100.2, length 28
    12:55:39.532137 ARP, Request who-has 192.168.100.1 tell 192.168.100.2, length 28
    12:55:40.559399 ARP, Request who-has 192.168.100.1 tell 192.168.100.2, length 28
    

    Dropping it here with the hope that someone would be willing to pick up and continue this effort. I was testing on a few-weeks-old CURRENT:

    FreeBSD monster-1 13.0-CURRENT FreeBSD 13.0-CURRENT #5 1b501770dd3-c264495(master): Wed Nov 27 01:35:34 CET 2019 [email protected]:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64

  • Nodes not able to ping each other


    I have a lighthouse in a cloud VM with public IP address; laptop in a home VLAN and workstation in another VLAN.

    lighthouse: 10.13.1.1
    workstation: 10.13.1.3 (LAN IP 10.12.1.6)
    laptop: 10.13.1.4 (LAN IP 10.12.2.221)

    All machines are running the latest version of nebula and flavours of Linux. My configurations are minimal and are as follows:

    My lighthouse config:

    pki:
      ca: /opt/nebula/ca.crt
      cert: /opt/nebula/lighthouse.crt
      key: /opt/nebula/lighthouse.key
    static_host_map:
      "10.13.1.1": ["vm_public_IP:4242"]
    lighthouse:
      am_lighthouse: true
      interval: 60
    listen:
      host: "[::]"
      port: 4242
    punchy:
      punch: true
      respond: true
    cipher: aes
    tun:
      disabled: false
      dev: nebula1
      drop_local_broadcast: false
      drop_multicast: false
      tx_queue: 500
      mtu: 1300
      routes:
      unsafe_routes:
    logging:
      level: info
      format: text
    firewall:
      conntrack:
        tcp_timeout: 12m
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
      outbound:
        - port: any
          proto: any
          host: any
      inbound:
        - port: any
          proto: any
          host: any
    

    My workstation and laptop config (the only difference is the cert and key files):

    pki:
      ca: /home/ewon/nebula/ca.crt
      cert: /home/ewon/nebula/workstation.crt
      key: /home/ewon/nebula/workstation.key
    static_host_map:
      "10.13.1.1": ["vm_public_IP:4242"]
    lighthouse:
      am_lighthouse: false
      interval: 60
      hosts:
        - "10.13.1.1"
    listen:
      host: "[::]"
      port: 0
    punchy:
      punch: true
      respond: true
    cipher: aes
    tun:
      disabled: false
      dev: nebula1
      drop_local_broadcast: false
      drop_multicast: false
      tx_queue: 500
      mtu: 1300
      routes:
      unsafe_routes:
    logging:
      level: info
      format: text
    firewall:
      conntrack:
        tcp_timeout: 12m
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
      outbound:
        - port: any
          proto: any
          host: any
      inbound:
        - port: any
          proto: any
          host: any
    

    I can ping from nodes to lighthouse and vice versa. However, nodes cannot ping each other. If I ping from laptop to workstation, I get the following error messages on laptop:

    ...
    ERRO[0018] Prevented a pending handshake race            certName=workstation fingerprint=5c0f3921e4fc49fc06b34fd2cc58a3242bfb69bde35728ef0219d466fcf0bb2c handshake="map[stage:1 style:ix_psk0]" initiatorIndex=526444663 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=3029730967 udpAddr="10.12.1.6:45286" vpnIp=10.13.1.3
    INFO[0019] Handshake timed out                           durationNs=8720123570 handshake="map[stage:1 style:ix_psk0]" initiatorIndex=250451344 remoteIndex=0 udpAddrs="[home_public_IP:32907 10.12.1.6:45286]" vpnIp=10.13.1.3
    INFO[0020] Handshake message received                    certName=workstation fingerprint=5c0f3921e4fc49fc06b34fd2cc58a3242bfb69bde35728ef0219d466fcf0bb2c handshake="map[stage:1 style:ix_psk0]" initiatorIndex=526444663 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=0 udpAddr="10.12.1.6:45286" vpnIp=10.13.1.3
    INFO[0020] Handshake message sent                        certName=workstation fingerprint=5c0f3921e4fc49fc06b34fd2cc58a3242bfb69bde35728ef0219d466fcf0bb2c handshake="map[stage:2 style:ix_psk0]" initiatorIndex=526444663 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=286921894 sentCachedPackets=0 udpAddr="10.12.1.6:45286" vpnIp=10.13.1.3
    ...                                                              
    

    and on my workstation I have the following "info" messages:

    ...
    INFO[0018] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=526444663 udpAddrs="[home_public_IP:12583 10.12.2.210:59517 10.12.2.211:59517 192.168.122.1:59517]" vpnIp=10.13.1.4
    INFO[0020] Handshake timed out                           durationNs=10182400905 handshake="map[stage:1 style:ix_psk0]" initiatorIndex=526444663 remoteIndex=0 udpAddrs="[home_public_IP:12583 10.12.2.210:59517 10.12.2.211:59517 192.168.122.1:59517]" vpnIp=10.13.1.4
    ...
    

    The only logs on the lighthouse are the following:

    [email protected]:/opt/nebula# ./nebula -config lighthouse.config.yml
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:outgoing endPort:0 groups:[] host:any ip: proto:0 startPort:0]"
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:incoming endPort:0 groups:[] host:any ip: proto:0 startPort:0]"
    INFO[0000] Firewall started                              firewallHash=21716b47a7a140e448077fe66c31b4b42f232e996818d7dd1c6c4991e066dbdb
    INFO[0000] Main HostMap created                          network=10.13.1.1/24 preferredRanges="[]"
    INFO[0000] UDP hole punching enabled
    INFO[0000] Nebula interface is active                    build=1.5.2 interface=nebula1 network=10.13.1.1/24 udpAddr="[::]:4242"
    INFO[0006] Handshake message received                    certName=laptop fingerprint=ea008f0243fbeb44254732ec24fd35fa729f1da67920f5c705ef73dced83a5b8 handshake="map[stage:1 style:ix_psk0]" initiatorIndex=392725562 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=0 udpAddr="vm_public_IP:12583" vpnIp=10.13.1.4
    INFO[0006] Handshake message sent                        certName=laptop fingerprint=ea008f0243fbeb44254732ec24fd35fa729f1da67920f5c705ef73dced83a5b8 handshake="map[stage:2 style:ix_psk0]" initiatorIndex=392725562 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=2000406661 sentCachedPackets=0 udpAddr="vm_public_IP:12583" vpnIp=10.13.1.4
    INFO[0011] Handshake message received                    certName=workstation fingerprint=5c0f3921e4fc49fc06b34fd2cc58a3242bfb69bde35728ef0219d466fcf0bb2c handshake="map[stage:1 style:ix_psk0]" initiatorIndex=792326294 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=0 udpAddr="vm_public_IP:32907" vpnIp=10.13.1.3
    INFO[0011] Handshake message sent                        certName=workstation fingerprint=5c0f3921e4fc49fc06b34fd2cc58a3242bfb69bde35728ef0219d466fcf0bb2c handshake="map[stage:2 style:ix_psk0]" initiatorIndex=792326294 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=154643726 sentCachedPackets=0 udpAddr="vm_public_IP:32907" vpnIp=10.13.1.3
    

    So it seems the nodes can receive requests from each other, but why won't ping (and ssh) work? I made sure that workstation and laptop have no firewall rules in place. There's no YAML syntax error either.

  • Low transmission efficiency on moderate/low host


    Using version: dev+20191224233758

    CPU Model:             Intel Core i7 9xx (Nehalem Class Core i7)
    CPU Cache Size:    4096 KB
    CPU Number:          1 vCPU
    Memory Usage:          208.73 MB / 985.53 MB
    

    Machine info generated by LemonBench. Command: curl -fsL https://ilemonra.in/LemonBenchIntl | bash -s fast

    Using iperf3 for the test:
    server: iperf3 -s
    client: iperf3 -c [ip] -P 10

    The two hosts are located in the same datacenter and both have up to 1 Gbps of bandwidth.

    TCP Raw (direct transmission):

    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   112 MBytes  94.1 Mbits/sec  127             sender
    [  4]   0.00-10.00  sec   111 MBytes  93.4 Mbits/sec                  receiver
    [  6]   0.00-10.00  sec  94.8 MBytes  79.5 Mbits/sec  111             sender
    [  6]   0.00-10.00  sec  94.1 MBytes  78.9 Mbits/sec                  receiver
    [  8]   0.00-10.00  sec  87.6 MBytes  73.5 Mbits/sec  128             sender
    [  8]   0.00-10.00  sec  86.9 MBytes  72.9 Mbits/sec                  receiver
    [ 10]   0.00-10.00  sec  79.2 MBytes  66.4 Mbits/sec  115             sender
    [ 10]   0.00-10.00  sec  78.5 MBytes  65.9 Mbits/sec                  receiver
    [ 12]   0.00-10.00  sec  81.7 MBytes  68.5 Mbits/sec  108             sender
    [ 12]   0.00-10.00  sec  80.8 MBytes  67.7 Mbits/sec                  receiver
    [ 14]   0.00-10.00  sec   130 MBytes   109 Mbits/sec  114             sender
    [ 14]   0.00-10.00  sec   129 MBytes   108 Mbits/sec                  receiver
    [ 16]   0.00-10.00  sec   100 MBytes  84.0 Mbits/sec  117             sender
    [ 16]   0.00-10.00  sec  99.4 MBytes  83.4 Mbits/sec                  receiver
    [ 18]   0.00-10.00  sec  98.1 MBytes  82.3 Mbits/sec   79             sender
    [ 18]   0.00-10.00  sec  97.5 MBytes  81.8 Mbits/sec                  receiver
    [ 20]   0.00-10.00  sec   105 MBytes  88.1 Mbits/sec  137             sender
    [ 20]   0.00-10.00  sec   104 MBytes  87.5 Mbits/sec                  receiver
    [ 22]   0.00-10.00  sec  99.6 MBytes  83.5 Mbits/sec  144             sender
    [ 22]   0.00-10.00  sec  98.6 MBytes  82.7 Mbits/sec                  receiver
    [SUM]   0.00-10.00  sec   989 MBytes   829 Mbits/sec  1180             sender
    [SUM]   0.00-10.00  sec   981 MBytes   823 Mbits/sec                  receiver
    

    UDP Raw: Command: iperf3 -c [IP] -u -b 80M -P 10

    [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
    [  4]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.881 ms  3/39 (7.7%)  
    [  4] Sent 39 datagrams
    [  6]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.886 ms  3/39 (7.7%)  
    [  6] Sent 39 datagrams
    [  8]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.885 ms  3/39 (7.7%)  
    [  8] Sent 39 datagrams
    [ 10]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.875 ms  3/38 (7.9%)  
    [ 10] Sent 38 datagrams
    [ 12]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.841 ms  3/40 (7.5%)  
    [ 12] Sent 40 datagrams
    [ 14]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.772 ms  7/46 (15%)  
    [ 14] Sent 46 datagrams
    [ 16]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.829 ms  7/44 (16%)  
    [ 16] Sent 44 datagrams
    [ 18]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.828 ms  2/37 (5.4%)  
    [ 18] Sent 37 datagrams
    [ 20]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.829 ms  2/37 (5.4%)  
    [ 20] Sent 37 datagrams
    [ 22]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.795 ms  6/43 (14%)  
    [ 22] Sent 43 datagrams
    [SUM]   0.00-10.00  sec   947 MBytes   794 Mbits/sec  0.842 ms  39/402 (9.7%)  
    

    Command: iperf3 -c [IP] -u -b 100M -P 10

    [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
    [  4]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.217 ms  0/43 (0%)  
    [  4] Sent 43 datagrams
    [  6]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.290 ms  54/100 (54%)  
    [  6] Sent 100 datagrams
    [  8]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.400 ms  49/93 (53%)  
    [  8] Sent 93 datagrams
    [ 10]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.326 ms  9/53 (17%)  
    [ 10] Sent 53 datagrams
    [ 12]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.244 ms  0/43 (0%)  
    [ 12] Sent 43 datagrams
    [ 14]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.462 ms  52/97 (54%)  
    [ 14] Sent 97 datagrams
    [ 16]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.358 ms  22/68 (32%)  
    [ 16] Sent 68 datagrams
    [ 18]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.667 ms  123/167 (74%)  
    [ 18] Sent 167 datagrams
    [ 20]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.509 ms  51/96 (53%)  
    [ 20] Sent 96 datagrams
    [ 22]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.238 ms  0/42 (0%)  
    [ 22] Sent 42 datagrams
    [SUM]   0.00-10.00  sec  1.15 GBytes   990 Mbits/sec  1.371 ms  360/802 (45%)  
    

    Wireguard:

    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  40.7 MBytes  34.2 Mbits/sec  166             sender
    [  4]   0.00-10.00  sec  40.3 MBytes  33.8 Mbits/sec                  receiver
    [  6]   0.00-10.00  sec  86.7 MBytes  72.7 Mbits/sec  536             sender
    [  6]   0.00-10.00  sec  85.3 MBytes  71.5 Mbits/sec                  receiver
    [  8]   0.00-10.00  sec  43.7 MBytes  36.7 Mbits/sec  183             sender
    [  8]   0.00-10.00  sec  43.3 MBytes  36.3 Mbits/sec                  receiver
    [ 10]   0.00-10.00  sec  38.2 MBytes  32.0 Mbits/sec  129             sender
    [ 10]   0.00-10.00  sec  37.7 MBytes  31.6 Mbits/sec                  receiver
    [ 12]   0.00-10.00  sec  37.7 MBytes  31.6 Mbits/sec  127             sender
    [ 12]   0.00-10.00  sec  37.3 MBytes  31.3 Mbits/sec                  receiver
    [ 14]   0.00-10.00  sec  38.5 MBytes  32.3 Mbits/sec  125             sender
    [ 14]   0.00-10.00  sec  38.1 MBytes  31.9 Mbits/sec                  receiver
    [ 16]   0.00-10.00  sec  34.2 MBytes  28.7 Mbits/sec  133             sender
    [ 16]   0.00-10.00  sec  33.9 MBytes  28.4 Mbits/sec                  receiver
    [ 18]   0.00-10.00  sec  36.3 MBytes  30.5 Mbits/sec  178             sender
    [ 18]   0.00-10.00  sec  35.9 MBytes  30.1 Mbits/sec                  receiver
    [ 20]   0.00-10.00  sec  33.7 MBytes  28.2 Mbits/sec  104             sender
    [ 20]   0.00-10.00  sec  33.3 MBytes  28.0 Mbits/sec                  receiver
    [ 22]   0.00-10.00  sec  27.8 MBytes  23.4 Mbits/sec   87             sender
    [ 22]   0.00-10.00  sec  27.5 MBytes  23.1 Mbits/sec                  receiver
    [SUM]   0.00-10.00  sec   418 MBytes   350 Mbits/sec  1768             sender
    [SUM]   0.00-10.00  sec   413 MBytes   346 Mbits/sec                  receiver
    

    Nebula (using default):

    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  2.94 MBytes  2.47 Mbits/sec   89             sender
    [  4]   0.00-10.00  sec  2.81 MBytes  2.36 Mbits/sec                  receiver
    [  6]   0.00-10.00  sec  2.40 MBytes  2.02 Mbits/sec   94             sender
    [  6]   0.00-10.00  sec  2.28 MBytes  1.91 Mbits/sec                  receiver
    [  8]   0.00-10.00  sec  2.64 MBytes  2.22 Mbits/sec   71             sender
    [  8]   0.00-10.00  sec  2.51 MBytes  2.10 Mbits/sec                  receiver
    [ 10]   0.00-10.00  sec  2.99 MBytes  2.51 Mbits/sec   85             sender
    [ 10]   0.00-10.00  sec  2.88 MBytes  2.41 Mbits/sec                  receiver
    [ 12]   0.00-10.00  sec  2.30 MBytes  1.93 Mbits/sec   65             sender
    [ 12]   0.00-10.00  sec  2.23 MBytes  1.87 Mbits/sec                  receiver
    [ 14]   0.00-10.00  sec  2.60 MBytes  2.18 Mbits/sec   74             sender
    [ 14]   0.00-10.00  sec  2.48 MBytes  2.08 Mbits/sec                  receiver
    [ 16]   0.00-10.00  sec  2.25 MBytes  1.89 Mbits/sec   84             sender
    [ 16]   0.00-10.00  sec  2.13 MBytes  1.78 Mbits/sec                  receiver
    [ 18]   0.00-10.00  sec  3.00 MBytes  2.51 Mbits/sec   69             sender
    [ 18]   0.00-10.00  sec  2.86 MBytes  2.40 Mbits/sec                  receiver
    [ 20]   0.00-10.00  sec  3.37 MBytes  2.83 Mbits/sec   60             sender
    [ 20]   0.00-10.00  sec  3.26 MBytes  2.73 Mbits/sec                  receiver
    [ 22]   0.00-10.00  sec  3.11 MBytes  2.61 Mbits/sec   73             sender
    [ 22]   0.00-10.00  sec  2.97 MBytes  2.49 Mbits/sec                  receiver
    [SUM]   0.00-10.00  sec  27.6 MBytes  23.2 Mbits/sec  764             sender
    [SUM]   0.00-10.00  sec  26.4 MBytes  22.1 Mbits/sec                  receiver
    
    

    Nebula (using chachapoly):

    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  2.47 MBytes  2.07 Mbits/sec   31             sender
    [  4]   0.00-10.00  sec  2.35 MBytes  1.97 Mbits/sec                  receiver
    [  6]   0.00-10.00  sec  4.07 MBytes  3.42 Mbits/sec   32             sender
    [  6]   0.00-10.00  sec  3.81 MBytes  3.19 Mbits/sec                  receiver
    [  8]   0.00-10.00  sec  4.09 MBytes  3.43 Mbits/sec   32             sender
    [  8]   0.00-10.00  sec  3.93 MBytes  3.30 Mbits/sec                  receiver
    [ 10]   0.00-10.00  sec  4.33 MBytes  3.63 Mbits/sec   33             sender
    [ 10]   0.00-10.00  sec  4.12 MBytes  3.46 Mbits/sec                  receiver
    [ 12]   0.00-10.00  sec  5.71 MBytes  4.79 Mbits/sec   30             sender
    [ 12]   0.00-10.00  sec  5.54 MBytes  4.64 Mbits/sec                  receiver
    [ 14]   0.00-10.00  sec  4.04 MBytes  3.39 Mbits/sec   74             sender
    [ 14]   0.00-10.00  sec  3.88 MBytes  3.25 Mbits/sec                  receiver
    [ 16]   0.00-10.00  sec  2.87 MBytes  2.41 Mbits/sec   42             sender
    [ 16]   0.00-10.00  sec  2.68 MBytes  2.25 Mbits/sec                  receiver
    [ 18]   0.00-10.00  sec  3.27 MBytes  2.74 Mbits/sec   22             sender
    [ 18]   0.00-10.00  sec  3.09 MBytes  2.59 Mbits/sec                  receiver
    [ 20]   0.00-10.00  sec  4.42 MBytes  3.71 Mbits/sec   66             sender
    [ 20]   0.00-10.00  sec  4.26 MBytes  3.57 Mbits/sec                  receiver
    [ 22]   0.00-10.00  sec  1.98 MBytes  1.66 Mbits/sec   34             sender
    [ 22]   0.00-10.00  sec  1.88 MBytes  1.58 Mbits/sec                  receiver
    [SUM]   0.00-10.00  sec  37.3 MBytes  31.2 Mbits/sec  396             sender
    [SUM]   0.00-10.00  sec  35.5 MBytes  29.8 Mbits/sec                  receiver
    
  • Darwin nebula client not finding/receiving udpAddrs for other hosts.


    My lighthouse most definitely knows and has a udpAddr for a remote host (Linux) and my mac host. Trouble is, my mac nebula host doesn't seem to be able to find out about the udpAddr for the other remote host (Linux).

    Here is what I see on my lighthouse:

    Nov 15 10:26:14 hostname nebula[1478]: time="2021-11-15T10:26:14-05:00" level=info msg="Handshake message sent" certName=pc1 fingerprint=2733666f6007c2b508475b269fdb332a240343c49ba36b3973c5ffc31f999151 handshake="map[stage:2 style:ix_psk0]" initiatorIndex=1797597823 issuer=298e1986b9d3e1f20316a9b3389ebb037dcfe465932453e2b25c4f949e3e0217 remoteIndex=0 responderIndex=1842167964 sentCachedPackets=0 udpAddr="<redacted public IP>:48415" vpnIp=192.168.200.3

    And here is what my mac nebula client reports:

    INFO[0868] Handshake timed out durationNs=9382057076 handshake="map[stage:1 style:ix_psk0]" initiatorIndex=788921434 remoteIndex=0 udpAddrs="[]" vpnIp=192.168.200.3

    Now, the lighthouse log absolutely shows both of these separate hosts properly, with different, correct public udpAddrs assigned for each. The other remote host (192.168.200.3) is correctly reporting the udpAddr for the mac client, but the mac client never seems to get or find out about a udpAddr for the remote host (udpAddrs="[]", as seen above).

    Why is this? Is this a bug in the Darwin nebula client that no one has reported yet?

  • Can only access node when node initiates a ping to get tunnel up.


    Hi There,

    I set up a quick PoC with a lighthouse on a server in the cloud.

    I also have a pi4 at my house.

    After setting up the routing, subnets, and unsafe_routes etc. correctly, I notice that on startup of the services, if I initiate a ping to the pi4 from the lighthouse, the handshake fails with a timeout and the ping is never received on the pi4.

    The only way to get it to ping from the lighthouse is to first send a ping out from the pi4, which sets up the handshake; from then on everything works nicely.

    Is there a way to get the pi4 to send a keepalive ping at startup of the service to build the tunnel, without having to ping from the lighthouse first?

    Cheers!

    Jon.
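
    One hypothetical workaround (an assumption, not a documented nebula feature) would be to have the pi4 fire a few pings over the tunnel right after its service starts, e.g. via a systemd drop-in, replacing 192.168.100.1 with the lighthouse's nebula IP:

    # /etc/systemd/system/nebula.service.d/keepalive.conf
    [Service]
    ExecStartPost=/bin/sh -c 'sleep 5; ping -c 3 192.168.100.1 || true'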

  • FATA[0001] no such device


    I am running nebula on a Raspberry Pi 1 Model B+ with sudo ./nebula -config config.yaml, but it says FATA[0001] no such device. Does this mean it failed to create the tun device?

  • openwrtx64 can't run nebula


    I downloaded the file nebula (linux-amd64.tar.gz) to run, but got "not found" errors:

    [email protected]:~# ./etc/nebula/nebula -config /etc/nebula/config.yml
    -ash: ./etc/nebula/nebula: not found
    [email protected]:~# bash ./etc/nebula/nebula -config /etc/nebula/config.yml
    bash: ./etc/nebula/nebula: No such file or directory
    [email protected]:~# /etc/nebula/nebula -config /etc/nebula/config.yml
    -ash: /etc/nebula/nebula: not found
    [email protected]:~# bash /etc/nebula/nebula -config /etc/nebula/config.yml
    /etc/nebula/nebula: /etc/nebula/nebula: cannot execute binary file

  • Fatal Error: TAP driver


    Activate failed: Failed to find the tap device in registry with specified ComponentId 'tap0901', TAP driver may be not installed

    The docs don't mention the need for a TAP driver. Which driver is recommended?

  • Add template systemd service files


    These allow hosts to easily join multiple different Nebula networks.

    Also, I have removed the SyslogIdentifier=nebula option; it's unnecessary because the logs already get tagged nebula by default.
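
    For reference, a templated unit along these lines might look like the following sketch (hypothetical paths and unit name, not necessarily the exact files from this PR; the %i instance name selects which network's config to load):

    # /etc/systemd/system/nebula@.service
    [Unit]
    Description=Nebula overlay network: %i
    Wants=network-online.target
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/nebula -config /etc/nebula/%i/config.yml
    Restart=always

    [Install]
    WantedBy=multi-user.target

    A host could then join two networks with, e.g., systemctl enable --now nebula@work nebula@home.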

  • Reorganize and rework example config.yml


    The included example config.yml, while very useful, is sometimes confusing for new users. This set of proposed patches help clarify the intentions, and also reformat and reorganize the comments to make it easier to read and understand.

    Each patch touches one aspect or one section, so that it is easy to review.

    Please let me know if there is a mistake somewhere, and I will gladly correct it.

    Thank you!

  • Make hostinfo remote atomic, for consistent data access synchronization.


    When working on 746 I noticed that there was inconsistent lock access to the HostInfo.remote pointer.

    Inspired by 728, this PR makes HostInfo.remote an atomicPointer.

  • Punchy only punch on established addrs


    • Punchy writes 1-byte packets to all RemoteList addresses known in its host map. For NAT / Firewall maintenance, it should only send packets to current tunnel addresses.
    • Updated HostMap Punchy to reconfigure itself on config reload. Code updated to always create a Punchy goroutine, but only send packets if GetPunch() returns true.
    • Added a punchy.frequency configuration option that can modify how frequently punchy sends out its 1-byte packets (see the sketch below).
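
    Putting those pieces together, the punchy section might be configured like so (a sketch only; frequency is the new option proposed by this PR, and the value shown is arbitrary):

    punchy:
      punch: true      # keep sending 1-byte NAT/firewall keepalive packets
      respond: true    # punch back if the other side's hole punching fails
      frequency: 10s   # proposed: how often the punch packets are sent
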
  • ๐Ÿ› BUG: Nebula crash on

    ๐Ÿ› BUG: Nebula crash on "Static host address could not be parsed"

    What version of nebula are you using?

    1.6.0

    What operating system are you using?

    Linux

    Describe the Bug

    I use FQDNs for the lighthouse addresses. When Nebula starts, if the network is not yet fully up, DNS resolution fails and Nebula crashes. My expectation would be that Nebula does not crash and instead retries over time, the same way it does when it can resolve the FQDN to an IP but is not able to establish a connection.

    Logs from affected hosts

    -- Boot 6ba68dc9aa554b0094f3a442acb37401 --
    Sep 05 09:22:42 x1.fale.io nebula[1259]: time="2022-09-05T09:22:42+02:00" level=info msg="Firewall rule added" firewallRule="map[caName: caSha: direction:outgoing endPort:0 groups:[] host:any ip: proto:0 startPort:0]"
    Sep 05 09:22:42 x1.fale.io nebula[1259]: time="2022-09-05T09:22:42+02:00" level=info msg="Firewall rule added" firewallRule="map[caName: caSha: direction:incoming endPort:0 groups:[] host:any ip: proto:1 startPort:0]"
    Sep 05 09:22:42 x1.fale.io nebula[1259]: time="2022-09-05T09:22:42+02:00" level=info msg="Firewall rule added" firewallRule="map[caName: caSha: direction:incoming endPort:22 groups:[fale ssh] host: ip: proto:6 startPort:22]"
    Sep 05 09:22:42 x1.fale.io nebula[1259]: time="2022-09-05T09:22:42+02:00" level=info msg="Firewall started" firewallHash=e9fc6276b1f92e11afb6da6c768e57eae28a9aaab95b14daadb95cbce6f432af
    Sep 05 09:22:42 x1.fale.io nebula[1259]: time="2022-09-05T09:22:42+02:00" level=info msg="Main HostMap created" network=192.168.100.32/24 preferredRanges="[]"
    Sep 05 09:22:42 x1.fale.io nebula[1259]: time="2022-09-05T09:22:42+02:00" level=info msg="UDP hole punching enabled"
    Sep 05 09:22:42 x1.fale.io nebula[1259]: time="2022-09-05T09:22:42+02:00" level=error msg="Static host address could not be parsed" error="lookup lh4.fale.io: Temporary failure in name resolution" vpnIp=192.168.100.4
    

    Config files from affected hosts

    ...
    static_host_map:
      192.168.100.1:
        - lh1.fale.io:4242
      192.168.100.2:
        - lh2.fale.io:4242
      192.168.100.3:
        - lh3.fale.io:4242
      192.168.100.4:
        - lh4.fale.io:4242
    
    lighthouse:
      hosts:
        - 192.168.100.1
        - 192.168.100.2
        - 192.168.100.3
        - 192.168.100.4
    ...
    
  • ๐Ÿ› FEATURE: Accept stdin for nebula-cert print

    ๐Ÿ› FEATURE: Accept stdin for nebula-cert print

    What version of nebula are you using?

    1.6.0

    What operating system are you using?

    openSUSE Leap 15.4

    Describe the Bug

    nebula-cert print requires the path to be a file on disk and does not accept - to indicate stdin. It would be nice if nebula-cert supported this, so that the bare certificate doesn't need to be saved to disk.

    This doesn't work.

    yq '.pki.ca | .' < config.yml | nebula-cert print -path -
    Error: unable to read cert; open -: no such file or directory
    

    This works.

    yq '.pki.ca | .' < config.yml > temp.crt
    nebula-cert print -path temp.crt -json | jq '.details.notAfter'
    rm temp.crt
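
    In the meantime, shells with process substitution may offer a workaround, assuming nebula-cert simply opens whatever path it is given (an untested sketch):

    nebula-cert print -path <(yq '.pki.ca | .' < config.yml) -json | jq '.details.notAfter'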
    

    Logs from affected hosts

    N/A

    Config files from affected hosts

    N/A
