A scalable overlay networking tool with a focus on performance, simplicity and security

What is Nebula?

Nebula is a scalable overlay networking tool with a focus on performance, simplicity and security. It lets you seamlessly connect computers anywhere in the world. Nebula is portable and runs on Linux, macOS, Windows, iOS, and Android. It can be used to connect a small number of computers, but it is also able to connect tens of thousands of computers.

Nebula incorporates a number of existing concepts like encryption, security groups, certificates, and tunneling, and each of those individual pieces existed before Nebula in various forms. What makes Nebula different from existing offerings is that it brings all of these ideas together, resulting in a sum that is greater than its individual parts.

You can read more about Nebula here.

You can also join the NebulaOSS Slack group here.

Supported Platforms

Desktop and Server

Check the releases page for downloads

  • Linux - 64 and 32 bit, arm, and others
  • Windows
  • macOS
  • FreeBSD

Mobile

  • iOS
  • Android

Technical Overview

Nebula is a mutually authenticated peer-to-peer software-defined network based on the Noise Protocol Framework. Nebula uses certificates to assert a node's IP address, name, and membership within user-defined groups. Nebula's user-defined groups allow for provider-agnostic traffic filtering between nodes. Discovery nodes allow individual peers to find each other and optionally use UDP hole punching to establish connections from behind most firewalls or NATs. Users can move data between nodes in any number of cloud service providers, datacenters, and endpoints, without needing to maintain a particular addressing scheme.

Nebula uses elliptic curve Diffie-Hellman key exchange, and AES-256-GCM in its default configuration.

Nebula was created to provide a mechanism for groups of hosts to communicate securely, even across the internet, while enabling expressive firewall definitions similar in style to cloud security groups.
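As an illustration of that style (a hedged sketch using the rule syntax shown in the example configs later on this page, not a recommended policy), an inbound rule that admits HTTPS only from hosts whose certificate carries the laptop group would look roughly like:

firewall:
  inbound:
    # allow tcp/443 only from hosts whose certificate includes the "laptop" group
    - port: 443
      proto: tcp
      group: laptop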

Getting started (quickly)

To set up a Nebula network, you'll need:

1. The Nebula binaries for your specific platform. Specifically, you'll need nebula-cert and the nebula binary for each platform you use.

2. (Optional, but you really should.) At least one discovery node with a routable IP address, which we call a lighthouse.

Nebula lighthouses allow nodes to find each other, anywhere in the world. A lighthouse is the only node in a Nebula network whose IP should not change. Running a lighthouse requires very few compute resources, and you can easily use the least expensive option from a cloud hosting provider. If you're not sure which provider to use, a number of us have used $5/mo DigitalOcean droplets as lighthouses.

Once you have launched an instance, ensure that Nebula UDP traffic (default port udp/4242) can reach it over the internet.
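For example, if the lighthouse is a Linux host managed with ufw (an assumption about your setup; adjust for your provider's firewall or security groups), opening the port might look like:

sudo ufw allow 4242/udp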

3. A Nebula certificate authority, which will be the root of trust for a particular Nebula network.

./nebula-cert ca -name "Myorganization, Inc"

This will create files named ca.key and ca.crt in the current directory. The ca.key file is the most sensitive file you'll create, because it is the key used to sign the certificates for individual nebula nodes/hosts. Please store this file somewhere safe, preferably with strong encryption.
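One simple option (an illustration, not the only approach) is to encrypt the key with a passphrase using gpg before archiving it:

gpg --symmetric --cipher-algo AES256 ca.key

This writes an encrypted ca.key.gpg; keep the passphrase in a password manager and remove the plaintext key from machines that don't need it.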

4. Nebula host keys and certificates generated from that certificate authority

This assumes you have four nodes, named lighthouse1, laptop, server1, and host3. You can name the nodes any way you'd like, including by FQDN. You'll also need to choose IP addresses and the associated subnet. In this example, we are creating a nebula network that will use 192.168.100.x/24 as its network range. This example also demonstrates nebula groups, which can later be used to define traffic rules in a nebula network. Each sign command below creates {name}.crt and {name}.key in the current directory.

./nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24"
./nebula-cert sign -name "laptop" -ip "192.168.100.2/24" -groups "laptop,home,ssh"
./nebula-cert sign -name "server1" -ip "192.168.100.9/24" -groups "servers"
./nebula-cert sign -name "host3" -ip "192.168.100.10/24"

5. Configuration files for each host

Download a copy of the nebula example configuration.

  • On the lighthouse node, you'll need to ensure am_lighthouse: true is set.

  • On the individual hosts, ensure the lighthouse is defined properly in the static_host_map section and is added to the lighthouse hosts section, as in the sketch below.
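A minimal sketch of those two sections on a non-lighthouse host, assuming the lighthouse signed in step 4 (nebula IP 192.168.100.1) is reachable at the placeholder public address 198.51.100.10:

static_host_map:
  "192.168.100.1": ["198.51.100.10:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"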

6. Copy nebula credentials, configuration, and binaries to each host

For each host, copy the nebula binary to the host, along with config.yaml from step 5, and the files ca.crt, {host}.crt, and {host}.key from step 4.

DO NOT COPY ca.key TO INDIVIDUAL NODES.
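As a concrete example (assuming the laptop host from step 4 and that you keep nebula files in /etc/nebula on the target machine, which is just a convention):

scp nebula config.yaml ca.crt laptop.crt laptop.key user@laptop:/etc/nebula/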

7. Run nebula on each host

./nebula -config /path/to/config.yaml
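If you want nebula to start at boot on Linux, a minimal systemd unit is one option (a sketch, assuming the binary is installed at /usr/local/bin/nebula and the config at /etc/nebula/config.yaml; both paths are placeholders):

[Unit]
Description=Nebula overlay networking
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/nebula -config /etc/nebula/config.yaml
Restart=always

[Install]
WantedBy=multi-user.target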

Building Nebula from source

Download Go and clone this repo. Change to the nebula directory.

To build nebula for all platforms: make all

To build nebula for a specific platform (ex, Windows): make bin-windows

See the Makefile for more details on build targets

Credits

Nebula was created at Slack Technologies, Inc by Nate Brown and Ryan Huber, with contributions from Oliver Fross, Alan Lam, Wade Simmons, and Lining Wang.

Comments
  • Question: NAT Setup

    I seem to be missing something important. If I set up a mesh of hosts with all direct public IP addresses, it works fine. However, if I have a network with a lighthouse (public IP) and all nodes behind NAT, they will not connect to each other. The lighthouse is able to communicate with all hosts, but hosts are not able to communicate with each other.

    Watching the logs, I see connection attempts to both the NAT public IP and the private IPs.

    I have enabled punchy and punch back, but it does not seem to help.

    Hope it is something simple?

  • Node outside of LAN can only talk to lighthouse

    I have a bunch of computers on my LAN with one lighthouse that is accessible from the outside world:

    • Lighthouse: 192.168.42.99 (mydomain.com:4242)
    • LAN machine 1 (A): 192.168.42.200
    • LAN machine 2 (B): 192.168.42.203
    • Outside-LAN machine (C): 192.168.42.10

    using the 192.168.42.0 IPs:

    • A, B and lighthouse can ping each other without any issue
    • C can ping the lighthouse but not A nor B
    • A and B can't ping C
    • Lighthouse can ping C

    Lighthouse config:

    # This is the nebula example configuration file. You must edit, at a minimum, the static_host_map, lighthouse, and firewall sections
    # Some options in this file are HUPable, including the pki section. (A HUP will reload credentials from disk without affecting existing tunnels)
    
    # PKI defines the location of credentials for this node. Each of these can also be inlined by using the yaml ": |" syntax.
    pki:
      # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
      ca: /etc/nebula/ca.crt
      cert: /etc/nebula/pihole.crt
      key: /etc/nebula/pihole.key
      #blacklist is a list of certificate fingerprints that we will refuse to talk to
      #blacklist:
      #  - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72
    
    # The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
    # A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
    # The syntax is:
    #   "{nebula ip}": ["{routable ip/dns name}:{routable port}"]
    # Example, if your lighthouse has the nebula IP of 192.168.100.1 and has the real ip address of 100.64.22.11 and runs on port 4242:
    static_host_map:
      "192.168.42.99": ["mydomain.com:4242"]
    
    
    lighthouse:
      # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
      # you have configured to be lighthouses in your network
      am_lighthouse: true
      # serve_dns optionally starts a dns listener that responds to various queries and can even be
      # delegated to for resolution
      # serve_dns: true
      # interval is the number of seconds between updates from this node to a lighthouse.
      # during updates, a node sends information about its current IP addresses to each node.
      interval: 60
      # hosts is a list of lighthouse hosts this node should report to and query from
      # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
      hosts:
              #  - "192.168.42.1"
    
    # Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
    # however using port 0 will dynamically assign a port and is recommended for roaming nodes.
    listen:
      host: 0.0.0.0
      port: 4242
      # Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
      # default is 64, does not support reload
      #batch: 64
      # Configure socket buffers for the udp side (outside), leave unset to use the system defaults. Values will be doubled by the kernel
      # Default is net.core.rmem_default and net.core.wmem_default (/proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_default)
      # Maximum is limited by memory in the system, SO_RCVBUFFORCE and SO_SNDBUFFORCE is used to avoid having to raise the system wide
      # max, net.core.rmem_max and net.core.wmem_max
      #read_buffer: 10485760
      #write_buffer: 10485760
    
    # Punchy continues to punch inbound/outbound at a regular interval to avoid expiration of firewall nat mappings
    punchy: true
    # punch_back means that a node you are trying to reach will connect back out to you if your hole punching fails
    # this is extremely useful if one node is behind a difficult nat, such as symmetric
    punch_back: true
    
    # Cipher allows you to choose between the available ciphers for your network.
    # IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
    #cipher: chachapoly
    
    # Local range is used to define a hint about the local network range, which speeds up discovering the fastest
    # path to a network adjacent nebula node.
    #local_range: "172.16.0.0/24"
    
    # sshd can expose informational and administrative functions via ssh this is a
    #sshd:
      # Toggles the feature
      #enabled: true
      # Host and port to listen on, port 22 is not allowed for your safety
      #listen: 127.0.0.1:2222
      # A file containing the ssh host private key to use
      # A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
      #host_key: ./ssh_host_ed25519_key
      # A file containing a list of authorized public keys
      #authorized_users:
        #- user: steeeeve
          # keys can be an array of strings or single string
          #keys:
            #- "ssh public key string"
    
    # Configure the private interface. Note: addr is baked into the nebula certificate
    tun:
      # Name of the device
      dev: nebula1
      # Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
      drop_local_broadcast: false
      # Toggles forwarding of multicast packets
      drop_multicast: false
      # Sets the transmit queue length, if you notice lots of transmit drops on the tun it may help to raise this number. Default is 500
      tx_queue: 500
      # Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
      mtu: 1300
      # Route based MTU overrides, you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
      routes:
        #- mtu: 8800
        #  route: 10.0.0.0/16
    
    # TODO
    # Configure logging level
    logging:
      # panic, fatal, error, warning, info, or debug. Default is info
      level: info
      # json or text formats currently available. Default is text
      format: text
    
    #stats:
      #type: graphite
      #prefix: nebula
      #protocol: tcp
      #host: 127.0.0.1:9999
      #interval: 10s
    
      #type: prometheus
      #listen: 127.0.0.1:8080
      #path: /metrics
      #namespace: prometheusns
      #subsystem: nebula
      #interval: 10s
    
    # Nebula security group configuration
    firewall:
      conntrack:
        tcp_timeout: 120h
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
    
      # The firewall is default deny. There is no way to write a deny rule.
      # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
      # Logical evaluation is roughly: port AND proto AND ca_sha AND ca_name AND (host OR group OR groups OR cidr)
      # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
      #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
      #   proto: `any`, `tcp`, `udp`, or `icmp`
      #   host: `any` or a literal hostname, ie `test-host`
      #   group: `any` or a literal group name, ie `default-group`
      #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
      #   cidr: a CIDR, `0.0.0.0/0` is any.
      #   ca_name: An issuing CA name
      #   ca_sha: An issuing CA shasum
    
      outbound:
        # Allow all outbound traffic from this node
        - port: any
          proto: any
          host: any
    
      inbound:
        # Allow icmp between any nebula hosts
        - port: any
          proto: any
          host: any
    
    

    C config:

    # This is the nebula example configuration file. You must edit, at a minimum, the static_host_map, lighthouse, and firewall sections
    # Some options in this file are HUPable, including the pki section. (A HUP will reload credentials from disk without affecting existing tunnels)
    
    # PKI defines the location of credentials for this node. Each of these can also be inlined by using the yaml ": |" syntax.
    pki:
      # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
      ca: /etc/nebula/ca.crt
      cert: /etc/nebula/work.crt
      key: /etc/nebula/work.key
      #blacklist is a list of certificate fingerprints that we will refuse to talk to
      #blacklist:
      #  - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72
    
    # The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
    # A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
    # The syntax is:
    #   "{nebula ip}": ["{routable ip/dns name}:{routable port}"]
    # Example, if your lighthouse has the nebula IP of 192.168.100.1 and has the real ip address of 100.64.22.11 and runs on port 4242:
    static_host_map:
      "192.168.42.99": ["ftpix.com:4242"]
    
    lighthouse:
      # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
      # you have configured to be lighthouses in your network
      am_lighthouse: false
      # serve_dns optionally starts a dns listener that responds to various queries and can even be
      # delegated to for resolution
      #serve_dns: false
      # interval is the number of seconds between updates from this node to a lighthouse.
      # during updates, a node sends information about its current IP addresses to each node.
      interval: 60
      # hosts is a list of lighthouse hosts this node should report to and query from
      # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
      hosts:
        - "192.168.42.99"
    
    # Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
    # however using port 0 will dynamically assign a port and is recommended for roaming nodes.
    listen:
      host: 0.0.0.0
      port: 0
      # Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
      # default is 64, does not support reload
      #batch: 64
      # Configure socket buffers for the udp side (outside), leave unset to use the system defaults. Values will be doubled by the kernel
      # Default is net.core.rmem_default and net.core.wmem_default (/proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_default)
      # Maximum is limited by memory in the system, SO_RCVBUFFORCE and SO_SNDBUFFORCE is used to avoid having to raise the system wide
      # max, net.core.rmem_max and net.core.wmem_max
      #read_buffer: 10485760
      #write_buffer: 10485760
    
    # Punchy continues to punch inbound/outbound at a regular interval to avoid expiration of firewall nat mappings
    punchy: true
    # punch_back means that a node you are trying to reach will connect back out to you if your hole punching fails
    # this is extremely useful if one node is behind a difficult nat, such as symmetric
    punch_back: true
    
    # Cipher allows you to choose between the available ciphers for your network.
    # IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
    #cipher: chachapoly
    
    # Local range is used to define a hint about the local network range, which speeds up discovering the fastest
    # path to a network adjacent nebula node.
    #local_range: "172.16.0.0/24"
    
    # sshd can expose informational and administrative functions via ssh this is a
    #sshd:
      # Toggles the feature
      #enabled: true
      # Host and port to listen on, port 22 is not allowed for your safety
      #listen: 127.0.0.1:2222
      # A file containing the ssh host private key to use
      # A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
      #host_key: ./ssh_host_ed25519_key
      # A file containing a list of authorized public keys
      #authorized_users:
        #- user: steeeeve
          # keys can be an array of strings or single string
          #keys:
            #- "ssh public key string"
    
    # Configure the private interface. Note: addr is baked into the nebula certificate
    tun:
      # Name of the device
      dev: nebula1
      # Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
      drop_local_broadcast: false
      # Toggles forwarding of multicast packets
      drop_multicast: false
      # Sets the transmit queue length, if you notice lots of transmit drops on the tun it may help to raise this number. Default is 500
      tx_queue: 500
      # Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
      mtu: 1300
      # Route based MTU overrides, you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
      routes:
        #- mtu: 8800
        #  route: 10.0.0.0/16
    
    # TODO
    # Configure logging level
    logging:
      # panic, fatal, error, warning, info, or debug. Default is info
      level: info
      # json or text formats currently available. Default is text
      format: text
    
    #stats:
      #type: graphite
      #prefix: nebula
      #protocol: tcp
      #host: 127.0.0.1:9999
      #interval: 10s
    
      #type: prometheus
      #listen: 127.0.0.1:8080
      #path: /metrics
      #namespace: prometheusns
      #subsystem: nebula
      #interval: 10s
    
    # Nebula security group configuration
    firewall:
      conntrack:
        tcp_timeout: 120h
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
    
      # The firewall is default deny. There is no way to write a deny rule.
      # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
      # Logical evaluation is roughly: port AND proto AND ca_sha AND ca_name AND (host OR group OR groups OR cidr)
      # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
      #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
      #   proto: `any`, `tcp`, `udp`, or `icmp`
      #   host: `any` or a literal hostname, ie `test-host`
      #   group: `any` or a literal group name, ie `default-group`
      #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
      #   cidr: a CIDR, `0.0.0.0/0` is any.
      #   ca_name: An issuing CA name
      #   ca_sha: An issuing CA shasum
    
      outbound:
        # Allow all outbound traffic from this node
        - port: any
          proto: any
          host: any
    
      inbound:
        # Allow icmp between any nebula hosts
        - port: any
          proto: icmp
          host: any
    
        # Allow tcp/443 from any host with BOTH laptop and home group
        - port: any
          proto: tcp
          host: any
    
        - port: any
          proto: udp
          host: any
    
    

    Logs from C:

    Dec 05 15:55:20 gz-t480 nebula[32698]: time="2019-12-05T15:55:20+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="192.168.1.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:22 gz-t480 nebula[32698]: time="2019-12-05T15:55:22+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="192.168.1.198:52803" vpnIp=192.168.42.198
    Dec 05 15:55:23 gz-t480 nebula[32698]: time="2019-12-05T15:55:23+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="192.168.200.198:52803" vpnIp=192.168.42.198
    Dec 05 15:55:25 gz-t480 nebula[32698]: time="2019-12-05T15:55:25+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.21.0.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:27 gz-t480 nebula[32698]: time="2019-12-05T15:55:27+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.19.0.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:29 gz-t480 nebula[32698]: time="2019-12-05T15:55:29+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.17.0.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:31 gz-t480 nebula[32698]: time="2019-12-05T15:55:31+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.20.0.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:33 gz-t480 nebula[32698]: time="2019-12-05T15:55:33+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.21.0.1:58904" vpnIp=192.168.42.198
    Dec 05 15:55:35 gz-t480 nebula[32698]: time="2019-12-05T15:55:35+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.19.0.1:58904" vpnIp=192.168.42.198
    Dec 05 15:55:38 gz-t480 nebula[32698]: time="2019-12-05T15:55:38+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.17.0.1:58904" vpnIp=192.168.42.198
    Dec 05 15:55:40 gz-t480 nebula[32698]: time="2019-12-05T15:55:40+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.20.0.1:58904" vpnIp=192.168.42.198
    
  • Nebula does not reroute through lighthouse if hole punching does not work

    I have a Nebula network with 5 nodes. I have 1 lighthouse node with a public IP and 4 nodes behind NATs. When I first configured the network, I had 3 of the nodes behind NAT A in location A and 1 node behind NAT B in location B. All nodes were able to communicate with each other.

    I physically moved one of the 3 nodes behind NAT A to another location (NAT C location C). This node is now only able to talk to the lighthouse node. It is not able to talk to nodes in location A or B and vice versa.

    I have tried restarting all instances of nebula and the same problem persists.

    Now that I have looked into the issue further, I believe there may be an issue with having another node use the same reverse NAT entry as the lighthouse. The public IP for the unreachable node is: 143.208.168.126.

    When the node registers with the lighthouse, the lighthouse records the external NAT address using these messages:

    INFO[0055] Handshake message received handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3564022980 remoteIndex=0 responderIndex=0 udpAddr="143.208.168.126:42054" vpnIp=10.27.15.4
    INFO[0055] Handshake message sent handshake="map[stage:2 style:ix_psk0]" initiatorIndex=3564022980 remoteIndex=0 responderIndex=4021440982 udpAddr="143.208.168.126:42054" vpnIp=10.27.15.4

    So the lighthouse can reach node 10.27.15.4 using 143.208.168.126:42054

    When I try to ping node 10.27.15.4 from a non-lighthouse node, I get the following messages:

    INFO[0035] Handshake message sent handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1424860728 remoteIndex=0 udpAddr="143.208.168.126:42054" vpnIp=10.27.15.4
    INFO[0038] Handshake message sent handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1424860728 remoteIndex=0 udpAddr="192.168.1.36:4242" vpnIp=10.27.15.4

    So a non-lighthouse node is NOT able to reach 10.27.15.4 using 143.208.168.126:42054.

    I was under the impression that if a nebula node is not able to open a direct connection to another non-lighthouse node, it would use the lighthouse to relay packets. However, I am not seeing this.

    What am I doing wrong??

  • Windows 10: "failed to run 'netsh' to set address: exit status 1"

    Hi,

    I'm trying to setup a simple network, with one lighthouse and one node. The lighthouse should run on Windows 10 (v1903, build 18362.476) while the node runs on macOS (Catalina, 10.15.1).

    I've deployed the certificates and prepared both of the configs. I've also installed the TAP driver from OpenVPN.

    However, when I start the lighthouse node this error appears:

    D:\nebula>.\nebula.exe --config .\config.yml
    time="2019-11-20T12:47:12+01:00" level=info msg="Firewall rule added" firewallRule="map[caName: caSha: direction:outgoing endPort:0 groups:[] host:any ip:<nil> proto:0 startPort:0]"
    time="2019-11-20T12:47:12+01:00" level=info msg="Firewall rule added" firewallRule="map[caName: caSha: direction:incoming endPort:0 groups:[] host:any ip:<nil> proto:1 startPort:0]"
    time="2019-11-20T12:47:12+01:00" level=info msg="Firewall rule added" firewallRule="map[caName: caSha: direction:incoming endPort:443 groups:[laptop home] host: ip:<nil> proto:6 startPort:443]"
    time="2019-11-20T12:47:12+01:00" level=info msg="Firewall started" firewallHash=3e3f317872f504cec08154d9fb0a726ebc68235d1a5075426317696bdd388336
    time="2019-11-20T12:47:12+01:00" level=info msg="Main HostMap created" network=192.168.178.122/24 preferredRanges="[192.168.178.0/24]"
    time="2019-11-20T12:47:12+01:00" level=fatal msg="failed to run 'netsh' to set address: exit status 1"
    

    Here's the lighthouse config.yml:

    pki:
      ca: D:\\nebula\\ca.crt
      cert: D:\\nebula\\lighthouse1.crt
      key: D:\\nebula\\lighthouse1.key
    
    lighthouse:
      am_lighthouse: true
      interval: 60
    
    listen:
      host: 0.0.0.0
      port: 4242
    
    local_range: "192.168.178.0/24"
    
    handshake_mac:
      key: "MYHANDSHAKE"
      accepted_keys:
        - "MYHANDSHAKE"
    
    tun:
      dev: nebula1
      drop_local_broadcast: false
      drop_multicast: false
      tx_queue: 500
      mtu: 1300
    
    logging:
      level: info
      format: text
    
    # Nebula security group configuration
    firewall:
      conntrack:
        tcp_timeout: 120h
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
    
      outbound:
        - port: any
          proto: any
          host: any
    
      inbound:
        - port: any
          proto: icmp
          host: any
        - port: 443
          proto: tcp
          groups:
            - laptop
            - home
    
  • Nodes can see the Lighthouse but they can't see each other

    Hi

    I set up a small network of 3+ nodes. Non-LH nodes can ping the LH, and the LH can ping the nodes, but the nodes can't ping each other.

    This seems to work only for the nodes that are on the same wifi network. Anything from an external node to another external node, or from external to internal, does not work, unless another form of VPN, like WireGuard, is active between the external nodes.

    The LH is behind a router so I port forwarded the default port, this seems to work given that any of the nodes can connect to the LH.

    It is interesting that when I try to ping from one of the external nodes to a node in the home wifi, there is activity on the receiving internal node, but pings are all unsuccessful meaning that the ping just stalls.

    
    time="2019-12-11T14:46:38-06:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=688165208 remoteIndex=0 udpAddr="192.168.0.23:59683" vpnIp=10.x.0.12
    
    time="2019-12-11T14:46:40-06:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=688165208 remoteIndex=0 udpAddr="EXTERNAL-IP:59683" vpnIp=10.x.0.12
    
    time="2019-12-11T14:46:43-06:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=688165208 remoteIndex=0 udpAddr="10.3.0.2:59683" vpnIp=10.x.0.12
    
    
    

    I have all the punch stuff enabled. Am I supposed to forward more ports or port ranges?

    Please bear in mind that in the given situation WG works perfectly, and all the wg nodes can see each other without issues, including all the traffic routing setup. I would like to set up Nebula as a fallback solution, in case one wonders why I am trying to use both.

  • Lighthouse high availability

    Hello. What is the correct way to achieve high availability of a nebula network?

    Is it to have 2 lighthouse nodes with the same certificate and to configure all the clients to use those 2 lighthouse nodes?

  • UDP send buffer errors over nebula interface

    I'm proxying an application via an nginx proxy server to an app server over nebula, and despite a lot of tweaking to sysctl.conf settings and the read/write buffer sizes in the nebula configuration files, I keep getting regular UDP send buffer errors under even the lowest of loads. There are no buffer errors when not proxying over nebula.

    I've tried increasing the following sysctl values. Upping net.ipv4.udp_wmem_min and udp_rmem_min in particular seems to have helped, but only up to a point, beyond which any increases still result in regular send buffer errors.

    relevant section of sysctl.conf:

    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.core.rmem_default = 16777216
    net.core.wmem_default = 16777216
    net.core.optmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    net.ipv4.udp_rmem_min = 16384
    net.ipv4.udp_wmem_min = 16384

    relevant section of nebula config:

    listen:
      host: 0.0.0.0
      port: 4242
      read_buffer: 33554432
      write_buffer: 33554432

    I've tried upping the write buffer in the nebula conf file but increased values seem to have no effect. Any idea what I'm missing?

  • Problem with clients behind the "residential NAT"

    Hello Everyone,

    I'm new to Nebula, so apologies upfront for asking/repeating questions. I have a user behind a "residential NAT", and upon connecting to the overlay network, that user can only ping the Lighthouse host (and vice versa), but not other peers in the network. Is there any configuration tip to overcome this problem?

    I would eventually use Nebula to connect with a few hundred users worldwide, and I cannot ask them to change their network configuration.

  • Documentation question: CAs

    I notice in the config file there is a comment about multiple CAs. Does this mean that you could, in theory, have multiple CAs specified here? If so, would it look like this:

    pki:
      ca: /etc/nebula/ca1.crt /etc/nebula/ca2.crt /etc/nebula/ca3.crt
    

    Or (given the comment above that about inline : |

    pki:
      ca: |
        /etc/nebula/ca1.crt
        /etc/nebula/ca2.crt
        /etc/nebula/ca3.crt
    

    In what sort of context would you imagine having multiple CAs? e.g. a CA per tier (management, prod, non-prod, T&V)?

    As there are also comments about being able to use ca_name and ca_sha in the config file too, does this mean you might want to use a CA per "zone" (management, backup servers, etc.) and use that CA as part of your firewalling?

  • Unbreak building for FreeBSD

    I naively copied the darwin files to unbreak building FreeBSD binaries. The other thing is that the upstream version of the water library doesn't support FreeBSD. There is a fork with FreeBSD support (https://github.com/yggdrasil-network/water) and a work-in-progress pull request to upstream: https://github.com/songgao/water/pull/37

    After these dirty hacks I'm able to start nebula on FreeBSD hosts but no traffic is passed between them:

    $ sudo ./nebula -config config.yml
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:outgoing endPort:0 groups:[] host:a
    ny ip:<nil> proto:0 startPort:0]"
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:incoming endPort:0 groups:[] host:a
    ny ip:<nil> proto:1 startPort:0]"
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:incoming endPort:443 groups:[laptop
     home] host: ip:<nil> proto:6 startPort:443]"
    INFO[0000] Firewall started                              firewallHash=853d3005de969aa0cb1100731e983a740ab4218f89c78189edd389ff5e05ae99
    INFO[0000] Main HostMap created                          network=192.168.100.2/24 preferredRanges="[192.168.0.0/24]"
    INFO[0000] UDP hole punching enabled
    command: ifconfig tap0 192.168.100.2/24 192.168.100.2
    command: ifconfig tap0 mtu 1300
    INFO[0000] Nebula interface is active                    build=dev+20191217111808 interface=tap0 network=192.168.100.2/24
    INFO[0000] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3879127975 remoteIndex=0
     udpAddr="188.116.33.203:4242" vpnIp=192.168.100.1
    INFO[0000] Handshake message received                    durationNs=446865780 handshake="map[stage:2 style:ix_psk0]" initiatorIndex=387
    9127975 remoteIndex=3879127975 responderIndex=834573217 udpAddr="188.116.33.203:4242" vpnIp=192.168.100.1
    

    tap0 interface is configured correctly:

    tap0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1300
            options=80000<LINKSTATE>
            ether 58:9c:fc:10:ff:96
            inet 192.168.100.2 netmask 0xffffff00 broadcast 192.168.100.2
            groups: tap
            media: Ethernet autoselect
            status: active
            nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
            Opened by PID 42831
    
    kwiat@monster-1 ~/nebula/build/freebsd (support-freebsd*) $ netstat -rn4
    Routing tables
    
    Internet:
    Destination        Gateway            Flags     Netif Expire
    default            192.168.0.2        UGS        igb0
    127.0.0.1          link#5             UH          lo0
    192.168.0.0/24     link#1             U          igb0
    192.168.0.11       link#1             UHS         lo0
    192.168.100.0/24   link#6             U          tap0
    192.168.100.2      link#6             UHS         lo0
    

    There's no response for who-has requests:

    kwiat@monster-1 ~/nebula/build/freebsd (support-freebsd*) $ sudo tcpdump -i tap0
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on tap0, link-type EN10MB (Ethernet), capture size 262144 bytes
    12:55:38.490465 ARP, Request who-has 192.168.100.1 tell 192.168.100.2, length 28
    12:55:39.532137 ARP, Request who-has 192.168.100.1 tell 192.168.100.2, length 28
    12:55:40.559399 ARP, Request who-has 192.168.100.1 tell 192.168.100.2, length 28
    

    Dropping it here in the hope that someone is willing to pick up and continue this effort. I was testing on a few-weeks-old CURRENT:

    FreeBSD monster-1 13.0-CURRENT FreeBSD 13.0-CURRENT #5 1b501770dd3-c264495(master): Wed Nov 27 01:35:34 CET 2019 root@monster-1:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64

  • Nodes not able to ping each other

    I have a lighthouse in a cloud VM with public IP address; laptop in a home VLAN and workstation in another VLAN.

    • lighthouse 10.13.1.1
    • workstation 10.13.1.3 (LAN IP 10.12.1.6)
    • laptop 10.13.1.4 (LAN IP 10.12.2.221)

    All machines are running the latest version of nebula and flavours of Linux. My configurations are minimal and are as follows:

    My lighthouse config:

    pki:
      ca: /opt/nebula/ca.crt
      cert: /opt/nebula/lighthouse.crt
      key: /opt/nebula/lighthouse.key
    static_host_map:
      "10.13.1.1": ["vm_public_IP:4242"]
    lighthouse:
      am_lighthouse: true
      interval: 60
    listen:
      host: "[::]"
      port: 4242
    punchy:
      punch: true
      respond: true
    cipher: aes
    tun:
      disabled: false
      dev: nebula1
      drop_local_broadcast: false
      drop_multicast: false
      tx_queue: 500
      mtu: 1300
      routes:
      unsafe_routes:
    logging:
      level: info
      format: text
    firewall:
      conntrack:
        tcp_timeout: 12m
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
      outbound:
        - port: any
          proto: any
          host: any
      inbound:
        - port: any
          proto: any
          host: any
    

    My workstation and laptop config: (the only different part is cert and key files)

    pki:
      ca: /home/ewon/nebula/ca.crt
      cert: /home/ewon/nebula/workstation.crt
      key: /home/ewon/nebula/workstation.key
    static_host_map:
      "10.13.1.1": ["vm_public_IP:4242"]
    lighthouse:
      am_lighthouse: false
      interval: 60
      hosts:
        - "10.13.1.1"
    listen:
      host: "[::]"
      port: 0
    punchy:
      punch: true
      respond: true
    cipher: aes
    tun:
      disabled: false
      dev: nebula1
      drop_local_broadcast: false
      drop_multicast: false
      tx_queue: 500
      mtu: 1300
      routes:
      unsafe_routes:
    logging:
      level: info
      format: text
    firewall:
      conntrack:
        tcp_timeout: 12m
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
      outbound:
        - port: any
          proto: any
          host: any
      inbound:
        - port: any
          proto: any
          host: any
    

    I can ping from nodes to lighthouse and vice versa. However, nodes cannot ping each other. If I ping from laptop to workstation, I get the following error messages on laptop:

    ...
    ERRO[0018] Prevented a pending handshake race            certName=workstation fingerprint=5c0f3921e4fc49fc06b34fd2cc58a3242bfb69bde35728ef0219d466fcf0bb2c handshake="map[stage:1 style:ix_psk0]" initiatorIndex=526444663 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=3029730967 udpAddr="10.12.1.6:45286" vpnIp=10.13.1.3
    INFO[0019] Handshake timed out                           durationNs=8720123570 handshake="map[stage:1 style:ix_psk0]" initiatorIndex=250451344 remoteIndex=0 udpAddrs="[home_public_IP:32907 10.12.1.6:45286]" vpnIp=10.13.1.3
    INFO[0020] Handshake message received                    certName=workstation fingerprint=5c0f3921e4fc49fc06b34fd2cc58a3242bfb69bde35728ef0219d466fcf0bb2c handshake="map[stage:1 style:ix_psk0]" initiatorIndex=526444663 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=0 udpAddr="10.12.1.6:45286" vpnIp=10.13.1.3
    INFO[0020] Handshake message sent                        certName=workstation fingerprint=5c0f3921e4fc49fc06b34fd2cc58a3242bfb69bde35728ef0219d466fcf0bb2c handshake="map[stage:2 style:ix_psk0]" initiatorIndex=526444663 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=286921894 sentCachedPackets=0 udpAddr="10.12.1.6:45286" vpnIp=10.13.1.3
    ...                                                              
    

    and on my workstation I have the following "info" messages:

    ...
    INFO[0018] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=526444663 udpAddrs="[home_public_IP:12583 10.12.2.210:59517 10.12.2.211:59517 192.168.122.1:59517]" vpnIp=10.13.1.4
    INFO[0020] Handshake timed out                           durationNs=10182400905 handshake="map[stage:1 style:ix_psk0]" initiatorIndex=526444663 remoteIndex=0 udpAddrs="[home_public_IP:12583 10.12.2.210:59517 10.12.2.211:59517 192.168.122.1:59517]" vpnIp=10.13.1.4
    ...
    

    The only logs on the lighthouse are the following:

    root@vm:/opt/nebula# ./nebula -config lighthouse.config.yml
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:outgoing endPort:0 groups:[] host:any ip: proto:0 startPort:0]"
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:incoming endPort:0 groups:[] host:any ip: proto:0 startPort:0]"
    INFO[0000] Firewall started                              firewallHash=21716b47a7a140e448077fe66c31b4b42f232e996818d7dd1c6c4991e066dbdb
    INFO[0000] Main HostMap created                          network=10.13.1.1/24 preferredRanges="[]"
    INFO[0000] UDP hole punching enabled
    INFO[0000] Nebula interface is active                    build=1.5.2 interface=nebula1 network=10.13.1.1/24 udpAddr="[::]:4242"
    INFO[0006] Handshake message received                    certName=laptop fingerprint=ea008f0243fbeb44254732ec24fd35fa729f1da67920f5c705ef73dced83a5b8 handshake="map[stage:1 style:ix_psk0]" initiatorIndex=392725562 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=0 udpAddr="vm_public_IP:12583" vpnIp=10.13.1.4
    INFO[0006] Handshake message sent                        certName=laptop fingerprint=ea008f0243fbeb44254732ec24fd35fa729f1da67920f5c705ef73dced83a5b8 handshake="map[stage:2 style:ix_psk0]" initiatorIndex=392725562 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=2000406661 sentCachedPackets=0 udpAddr="vm_public_IP:12583" vpnIp=10.13.1.4
    INFO[0011] Handshake message received                    certName=workstation fingerprint=5c0f3921e4fc49fc06b34fd2cc58a3242bfb69bde35728ef0219d466fcf0bb2c handshake="map[stage:1 style:ix_psk0]" initiatorIndex=792326294 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=0 udpAddr="vm_public_IP:32907" vpnIp=10.13.1.3
    INFO[0011] Handshake message sent                        certName=workstation fingerprint=5c0f3921e4fc49fc06b34fd2cc58a3242bfb69bde35728ef0219d466fcf0bb2c handshake="map[stage:2 style:ix_psk0]" initiatorIndex=792326294 issuer=b197e555563b8a4b5370b16b18dcc3ff4068caf3ec19818e86ff017ea1260845 remoteIndex=0 responderIndex=154643726 sentCachedPackets=0 udpAddr="vm_public_IP:32907" vpnIp=10.13.1.3
    

    So it seems the nodes can receive requests from each other, but why won't ping (and ssh) work? I made sure that the workstation and laptop have no firewall rules in place. There's no YAML syntax error either.

  • Feature Request: ospf over nebula

    What version of nebula are you using?

    1.6.1

    What operating system are you using?

    Linux

    Describe the Bug

    Hello, first I would like to congratulate everyone on the excellent work on nebula. You are defining the future of mesh networking. My question is: is it possible to use routing protocols like OSPF over nebula? I have a scenario where some clients have two internet links configured in failover; I add firewall rules to forward each nebula instance on a node through one of the links, and I would like the nodes to switch over in case of a link failure. I tried OSPF on top of the nebula1 and nebula2 interfaces at both sites, but with no success. Any tips? Or am I really trying something not yet supported? If so, is there any possibility that nebula will support this scenario in the future? To better illustrate my need, I made a small diagram:

    [Flattened ASCII diagram: on each side (node_A and node_B), network <-> ospf <-> interface_nebula1/interface_nebula2 <-> wan1/wan2 <-> firewall, with the two firewalls connected over internet_multiwan.]

    Logs from affected hosts

    No response

    Config files from affected hosts

    No response

  • 🐛 BUG: Nebula attempts Relay connection through same host 3 times

    What version of nebula are you using?

    1.6.1

    What operating system are you using?

    Linux

    Describe the Bug

    A user reported a connection issue over Slack, providing the following logs:

    Dec 19 12:20:55 <hostname> nebula[16244]: time="2022-12-19T12:20:55+01:00" level=info msg="Attempt to relay through hosts" relayIps="[192.168.100.1 192.168.100.1 192.168.100.1]" vpnIp=192.168.100.16
    Dec 19 12:20:55 <hostname> nebula[16244]: time="2022-12-19T12:20:55+01:00" level=info msg="Re-send CreateRelay request" relay=192.168.100.1 vpnIp=192.168.100.16
    Dec 19 12:20:55 <hostname> nebula[16244]: time="2022-12-19T12:20:55+01:00" level=info msg="Re-send CreateRelay request" relay=192.168.100.1 vpnIp=192.168.100.16
    Dec 19 12:20:55 <hostname> nebula[16244]: time="2022-12-19T12:20:55+01:00" level=info msg="Re-send CreateRelay request" relay=192.168.100.1 vpnIp=192.168.100.16
    

    There is one Relay IP address listed (192.168.100.1), but it's listed 3 times. Investigate why the same IP address appears in the Relays list 3 times, and prevent it, in order to limit spurious logs, packets, and tunnels.

    Logs from affected hosts

    No response

    Config files from affected hosts

    No response

  • Ability to set monthly quota on relay

    Hello,

    I love the idea of the new relay feature. It would be great if I could set a quota in the relay's config file and it would stop acting as a relay once it hit that.

    I have a little Linode VM that I use as my Lighthouse and it comes with a free allowance and after that I have to start paying. I would like to use it as a relay for up to e.g. 500 GB a month so that I don't have an unexpected bill.

  • Dns static lookerupper

    Fix issue #745 and #176

    On startup, Nebula adds all the static_host_map entries to its internal RemoteList object for Nebula IP-to-internet-IP translation. If the static_host_map includes a hostname, Nebula attempts a DNS lookup inline for that hostname. That DNS lookup function will return at most 1 address for the entry. If the DNS lookup fails, Nebula exits. The DNS query is never re-executed for the lifetime of the Nebula process.

    This PR introduces the following changes:

    • For static_host_map entries with hostnames, a background goroutine runs in a loop, looking up the DNS entries. If the results are different, it kicks off a RemoteList rebuild, allowing Nebula to consume the newly returned addresses.
    • For static_host_map entries without a hostname (meaning, the list only includes IP addresses), no goroutine is created.
    • Since the DNS lookup is not in the startup path, a DNS lookup error results in a log message, but Nebula continues to run. If the lookup that failed was for the lighthouse, Nebula will not be able to connect to the lighthouse and participate in the Nebula network until the DNS loop runs again and successfully resolves the address. (I made this default to 5 minutes, but maybe we want the loop to run more frequently to cover this case, or make it retry faster if it hit a DNS lookup failure.)
    • hidden config options are included which allow for the configuration of DNS lookup timeouts, the loop cadence, and which networks are supported
    • the DNS function used will return all returned DNS entries for the given network, rather than being limited to 1 IP.
    • the network can exclude ipv6 addresses (which are problematic for lighthouses in particular, as a host connecting to a lighthouse over an IPv6 address prevents the lighthouse from learning the host's IPv4 NAT address)
  • Ipv4 allowed

    Allows nebula to run on IPv4-only hosts.

    Rebased to the latest nebula master.

    Based on work from https://github.com/jilyaluk

    • Tested on Linux CentOS 7 x64, working on both IPv6 and IPv4 hosts
  • emit certificate.expiration_ttl_seconds metric

    This change emits a gauge metric indicating how much time until the certificate expires. The value will go negative once the certificate has expired.

    I welcome feedback on the metric name, perhaps certificate.expiration_seconds would be better?
