Nebula - Getting Started

Open source mesh networking- cool!

Disclaimer

I had no part in the development or implementation of this tool. This blog post explores the f/oss tool as it appears on GitHub, from the perspective of a home lab scenario, and what I've learned from getting it up and running in that scenario.

This guide will use Ubuntu and UFW, so if those aren't your favorites, sorry, but you'll still get the gist.

What Is Nebula

Most likely you already know, or you wouldn't have found your way here. But in the developers' words:

Nebula is a scalable overlay networking tool with a focus on performance, simplicity and security. It lets you seamlessly connect computers anywhere in the world. Nebula is portable, and runs on Linux, OSX, and Windows. (Also: keep this quiet, but we have an early prototype running on iOS). It can be used to connect a small number of computers, but is also able to connect tens of thousands of computers.

Nebula incorporates a number of existing concepts like encryption, security groups, certificates, and tunneling, and each of those individual pieces existed before Nebula in various forms. What makes Nebula different to existing offerings is that it brings all of these ideas together, resulting in a sum that is greater than its individual parts.

The rest of this post will explore getting a few nodes enrolled and talking to one another, based on the public documentation's "Getting Started (quickly)" section.

Getting Started, less quickly, with some commentary and opinions

Planning

Lighthouse

You need a Lighthouse node, something that is reachable via the public internet and will point nodes to one another.

The recommendation on the GitHub page is:

Nebula lighthouses allow nodes to find each other, anywhere in the world. A lighthouse is the only node in a Nebula network whose IP should not change.

Which is sound, and very beneficial if you will have multiple nodes across locations or cloud providers.

This node will need to:

  • Be reachable via the internet
  • Have the Nebula port exposed, default udp/4242
  • Offer console/SSH access for configuration, etc.

Nodes

Any device that will connect to the Nebula network is a node, including the lighthouse; however, the lighthouse's config differs slightly and will be touched on separately.

These nodes can run any supported OS; the Nebula binaries are available via GitHub to download, or you can compile them yourself.

For the sake of illustration, one node and one lighthouse are required, though I'd recommend two nodes to better demonstrate the peer-to-peer capabilities.

Documentation & Planning

Node Organization

A good ole spreadsheet of the IP range you choose will be handy for this. The default network in the example config is 192.168.100.1/24, which could work for you, but it could be anything. I made a spreadsheet with 192.168.100.10–.24 in column A, hostname in column B, and column C for noting any group associations. For the sake of a home setup this was a solid base document to help me keep track of which device is assigned which IP, etc. Knowing these values and associations is handy as you look to reference your infrastructure in the future, especially if it's decentralized and not all directly accessible on your LAN.
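To make that concrete, a hypothetical first few rows of that spreadsheet (the hostnames and group names here are made up for illustration) might look like:

    IP               Hostname      Groups
    192.168.100.1    lighthouse1   lighthouse
    192.168.100.10   node1         home, ssh
    192.168.100.11   node2         home, netdata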

Groups can be used to limit inbound and outbound traffic for nodes. Outbound traffic defaults to any, and inbound is limited to specific ports and protocols. In our examples we will be looking at a few different services: SSH, Netdata, and Filebeat in a later post. Each of these has a particular port/protocol associated with it, and we could make specific rules around each. For illustration in this post we will be sticking with ICMP. An example of an inbound rule allowing ICMP from any Nebula host would look something like:

[gist https://gist.github.com/lucasjhall/0e5baf2abfc0c106430677c407127a89 /]
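If the embedded gist doesn't load, a minimal sketch of that rule, following the firewall section of Nebula's example config, looks roughly like:

    firewall:
      outbound:
        # let this node talk out to anything on the overlay
        - port: any
          proto: any
          host: any
      inbound:
        # allow ping (ICMP) in from any other Nebula host
        - port: any
          proto: icmp
          host: any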

You can look at the example config as well for some other ideas.

Enough Chat, Let's Go

Okay, so we will end up with the following:

  • lighthouse1 (Ubuntu, cloud-based node, routable IP)
  • node1 (Ubuntu, hardware-based node on my internal LAN; in my case a Pi 3B+)
  • CA (it's you, not specifically a device, as this is a mutual-trust environment)
  • A working area, specific to mgmt1 and not (at this point) enrolled in Nebula

We will be taking all of our actions from mgmt1 but will not be installing Nebula on this device for this example.

So that looks like:

[gist https://gist.github.com/lucasjhall/af232532d7609e1e76d23a8a3c7e911a ]
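If the embed doesn't render, the rough shape of it is:

    mgmt1 (working area; CA material lives here; Nebula not installed)
        |
        +--> lighthouse1   (cloud VM, public IP, udp/4242 open)
        +--> node1         (Pi 3B+ on the home LAN)

    lighthouse1 <--- Nebula overlay ---> node1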

Working Area

This will all be on mgmt1: a place to create and manage all the items for the nodes you're enrolling, and also where all the sensitive stuff will live until you deploy and remove it. The ephemeralness of this stuff is up to you.

I made a directory structure that was similar to:

[gist https://gist.github.com/lucasjhall/77e4c3dbfa9a958a3c5de85d6b81459c /]
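Mine looked roughly like this (the layout is just my own; Nebula doesn't require any particular structure):

    nebula/
    ├── binaries/        # nebula and nebula-cert for each OS/arch
    ├── ca/              # ca.crt and (temporarily) ca.key
    ├── lighthouse1/     # everything destined for the lighthouse
    └── node1/           # everything destined for node1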

Let's Setup Our CA

I got the nebula-cert binary needed for the specific OS of my mgmt1 machine and placed it in nebula/binaries/. If all the devices are Ubuntu with the same architecture, then this is all pretty easy.

./nebula-cert ca -name "Myorganization, Inc"

This will create files named ca.key and ca.crt in the current directory. The ca.key file is the most sensitive file you'll create, because it is the key used to sign the certificates for individual nebula nodes/hosts. Please store this file somewhere safe, preferably with strong encryption.
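If you want to sanity-check what you just generated, nebula-cert can print the certificate details back to you:

    ./nebula-cert print -path ca.crt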

The ca.key is the signing key for the CA, so use it when you're creating nodes, but otherwise store it somewhere else: a vault, or a password manager that supports encrypting attached files.

Once the CA is created you can proceed with generating the credentials each node needs. During this command you'll reference the naming schema you planned in the aforementioned spreadsheet.

Lighthouse (on mgmt1)

The docs have this example:

./nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24"

This is telling nebula-cert to create an identity and sign it, where the name is "lighthouse1", the IP will be "192.168.100.1", and it will, by default, be in no groups. As lighthouses can be used simply for pointing nodes to one another, this can be beneficial.

Or you can treat a cloud-based lighthouse as a regular node, run additional services on it, and tag it with any groups that may be applicable.
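If you go that route, groups are added at signing time with the -groups flag. A sketch (the group names here are mine, not anything standard):

    ./nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24" -groups "lighthouse,servers"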

I then placed all the needed pieces into the lighthouse1 directory:

  • cp the ca.crt . (generated in the CA step)
  • lighthouse1.crt
  • lighthouse1.key
  • cp the example config.yaml . (from the public or cloned repo)
  • nebula binary (from GitHub / compiled)
  • nebula.service (from GitHub)

As an aside, I was initially renaming config.yaml to {host}.yaml. This was short-sighted: once you start automating the process via launchctl or systemd, it makes templating more cumbersome, since you'd be changing nebula.service for every host. NOPE. (Unless you're using a config management tool, Chef, Puppet, Salt, et al., but I digress.)

Configure the lighthouse1 Nebula config.yaml per the host config notes, with special attention to:

  • On the lighthouse node, you'll need to ensure am_lighthouse: true is set.
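For reference, the lighthouse-specific parts of config.yaml end up looking something like this. It's a sketch based on the example config; everything else can stay close to the defaults:

    pki:
      ca: /etc/nebula/ca.crt
      cert: /etc/nebula/lighthouse1.crt
      key: /etc/nebula/lighthouse1.key

    # the lighthouse doesn't need to be told where to find itself
    static_host_map: {}

    lighthouse:
      am_lighthouse: true
      interval: 60
      hosts: []

    listen:
      host: 0.0.0.0
      port: 4242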


Lighthouse Host Configuration (on lighthouse1)

With your choice of service provider get a machine up and patched. I chose Ubuntu 18.04 Server because it was what I was most familiar with- you do you.

I accessed my machine via the console, patched, etc. (You may want SSH or some means to transport files to the device)

I then set up UFW to deny all inbound by default, then added a rule to allow 4242, our Nebula port, via a ufw allow 4242/udp. If you're not familiar with UFW you can read more here, or choose the firewall of your choice. Heck, have at iptables yourself, it's your life.
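For completeness, the UFW side of that was roughly (I was on the console; if you manage the box over SSH, allow that before enabling):

    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    # sudo ufw allow ssh        # only if you're managing this box over SSH
    sudo ufw allow 4242/udp
    sudo ufw enable
    sudo ufw status verbose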

You can now move the entirety of nebula/lighthouse1/ to your lighthouse1 machine, an scp -r to ~/ or whatever your preference.
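Something along these lines (the user and hostname are placeholders for your own):

    scp -r nebula/lighthouse1/ ubuntu@lighthouse1.example.com:~/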

From here you could try to start Nebula per the docs, but remember the config points to /etc/nebula/ for PKI items and the like. So let's just put everything where it needs to be. This is where we want all the pieces to end up:

[gist https://gist.github.com/lucasjhall/b75399d35e4951e10a2d75e73e273ab7 /]
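If the embed doesn't load, the target layout I used was roughly as follows (the binary and unit-file paths are my choice, not a Nebula requirement):

    /etc/nebula/
    ├── ca.crt
    ├── config.yaml
    ├── lighthouse1.crt
    └── lighthouse1.key

    /usr/local/bin/nebula                  # the binary
    /etc/systemd/system/nebula.service     # the unit file, pointed at the config above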

Run the binary pointed at the proper config, and if no errors are reported the service has started. At this point our lighthouse is up and running, and we can now configure our client.
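Concretely, that's something like this (assuming the paths above, and that the unit file points at the same config):

    # foreground first, to watch for errors
    sudo /usr/local/bin/nebula -config /etc/nebula/config.yaml

    # then, once happy, hand it to systemd
    sudo systemctl daemon-reload
    sudo systemctl enable --now nebula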

Node1 (on mgmt1)

./nebula-cert sign -name "node1" -ip "192.168.100.2/24"

Configure the node's Nebula config.yaml per the host config notes, with special attention to:

  • am_lighthouse: false
  • static_host_map:, where the key is the lighthouse's Nebula interface IP and the value is its real, reachable IP and port, "192.168.100.1": ["100.64.22.11:4242"]
  • hosts:, where we list the IP of the lighthouse on the Nebula network, 192.168.100.1
  • reference the cloned config for inline notes on the above; a sketch of these settings follows this list
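The node-side sketch, using the same illustrative lighthouse IPs as above:

    pki:
      ca: /etc/nebula/ca.crt
      cert: /etc/nebula/node1.crt
      key: /etc/nebula/node1.key

    # Nebula IP of the lighthouse -> its real, routable IP and port
    static_host_map:
      "192.168.100.1": ["100.64.22.11:4242"]

    lighthouse:
      am_lighthouse: false
      interval: 60
      hosts:
        - "192.168.100.1"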

As before, this is telling nebula-cert to create an identity and sign it, where the name is "node1", the IP will be "192.168.100.2", and it will, by default, be in no groups.

I then placed all the needed pieces into the node1 directory:

  • cp the ca.crt . (generated in the CA step)
  • node1.crt
  • node1.key
  • cp the example config.yaml . (from the public or cloned repo)
  • nebula binary (from GitHub / compiled) (ensure it's for your architecture)
  • nebula.service (from GitHub)

Node1 Configuration (on node1)

Rinse and repeat with the steps in the lighthouse1 section.

I accessed my machine via the console, patched, etc. (You may want SSH or some means to transport files to the device.)

You can now move the entirety of nebula/node1/ to your node1 machine, an scp -r to ~/ or whatever your preference.

From here you could try to start Nebula per the docs, but remember the config points to /etc/nebula/ for PKI items and the like. So let's put everything where it needs to be, mirroring the /etc/nebula/ layout from the lighthouse1 section (with node1.crt and node1.key in place of the lighthouse files).

Run the binary pointed at the proper config, and if no errors are reported the service has started. At this point our node1 is up and running, and we should see some handshakes showing that node1 and the lighthouse are now connected.

Testing

At this point, if you have no errors, you can ping one host from another via the Nebula interface, demonstrating connectivity over neb0 or whatever interface you configured.
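From node1, for example (and the reverse from the lighthouse), using the Nebula IPs we assigned above:

    ping -c 3 192.168.100.1    # node1 -> lighthouse1
    ping -c 3 192.168.100.2    # lighthouse1 -> node1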

Future Posts

  • Explore groups and example services
  • Nebula's built-in sshd

Questions