Bypassing My ISP's Port Blocking

Have you ever wanted to host something at your own house, but your ISP just won't let you? I have had constant issues in my own home environment: I host an external-facing service, it works for a while, and then my ISP blocks the port I am using. This article will teach you how to bypass that, and it only requires a minimal VPS from a hosting provider.

What We Will Be Using

To get started you are going to need an external VPS with Docker and Docker Compose installed. Please use the official documentation for installing these. I personally use Linode for this, as their cheapest shared CPU plan is 5 dollars a month. I set up the standard Ubuntu VM and didn't use any of the marketplace templates for this process. You will also need to download Nebula from GitHub onto your Linode and onto the server on your home network that hosts your services. Nebula is an open-source overlay network from Slack that lets us connect our devices in software and reference them by dedicated private IP addresses. You can find out more about Nebula in Slack's article.
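
If you just want a quick start on a fresh Ubuntu VPS, Docker's convenience script is one way to get both Docker Engine and the Compose plugin installed in one go. Treat this as a rough sketch rather than the recommended path; the official documentation covers the proper package installs for each distro.

# Install Docker Engine and the Compose plugin via Docker's convenience script
curl -fsSL https://get.docker.com | sudo sh

# Confirm both are available
docker --version
docker compose version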

Getting Started

Assuming you have your VPS set up and you are able to SSH into it, let's download the packages from the Nebula GitHub releases page or install them via your package manager.

Arch Linux

sudo pacman -S nebula

Fedora

sudo dnf copr enable jdoss/nebula
sudo dnf install nebula

Other Distros

sudo mkdir /etc/nebula
sudo curl -L https://github.com/slackhq/nebula/releases/download/v1.5.2/nebula-linux-amd64.tar.gz --output /etc/nebula.tar.gz
sudo tar -xvf /etc/nebula.tar.gz --directory /etc/nebula
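
The tarball contains the nebula and nebula-cert binaries. Moving them somewhere on your PATH is optional but convenient, and matches the /usr/local/bin path you will see in the service status output later on; if you do this, drop the leading ./ from the nebula-cert commands further down. The package manager installs already put the binaries on your PATH.

# Optional: put the extracted binaries on your PATH
sudo mv /etc/nebula/nebula /etc/nebula/nebula-cert /usr/local/bin/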

Lighthouse Setup

Now that we have Nebula extracted or installed via our package manager, we need to download a template config for our Linode server. We are going to use the Linode as our lighthouse, which is a discovery node with a publicly routable IP address. You can download the sample config from the Nebula GitHub repository (examples/config.yml). The config is large and can feel overwhelming, but we are only going to focus on a few specific sections.
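
A quick way to grab it is with curl. The raw URL below points at the example config on the master branch of the Nebula repository, so the exact path is an assumption that may change over time.

# Download the sample config into /etc/nebula as config.yaml
sudo curl -L https://raw.githubusercontent.com/slackhq/nebula/master/examples/config.yml --output /etc/nebula/config.yaml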

Setting up your lighthouse config

In this part of the config we want to update the static_host_map, which maps a node's Nebula IP to its real, routable address. The Nebula subnet you use is really up to you; just avoid the one that your local network uses. You will need to change the right-hand side of the entry to match the public IP of your VPS, keeping the UDP port (4242 by default):

["100.64.22.11:4242"]

Update am_lighthouse: from false to true, and remove any IP addresses from the hosts: list, since that list should be empty on a lighthouse.

static_host_map:
  "192.168.100.1": ["100.64.22.11:4242"]


lighthouse:
  # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
  # you have configured to be lighthouses in your network
  am_lighthouse: true
  # serve_dns optionally starts a dns listener that responds to various queries and can even be
  # delegated to for resolution
  #serve_dns: false
  #dns:
    # The DNS host defines the IP to bind the dns listener to. This also allows binding to the nebula node IP.
    #host: 0.0.0.0
    #port: 53
  # interval is the number of seconds between updates from this node to a lighthouse.
  # during updates, a node sends information about its current IP addresses to each node.
  interval: 60
  # hosts is a list of lighthouse hosts this node should report to and query from
  # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
  # IMPORTANT2: THIS SHOULD BE LIGHTHOUSES' NEBULA IPs, NOT LIGHTHOUSES' REAL ROUTABLE IPs
  hosts:
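
One more thing to check on the VPS side: the lighthouse has to be reachable from the internet on the UDP port used in your static_host_map (4242 here). If you run a host firewall such as ufw on your Linode, which is an assumption on my part, open that port; adjust for whatever firewall you actually use.

# Allow Nebula's UDP port through the VPS firewall (ufw example)
sudo ufw allow 4242/udp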

When we generate our certificates you can specify groups for your nodes and use those in the firewall rules found at the bottom of the config, but that goes outside the scope of this setup. For our firewall rules, let's update them to the below.

  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any

  inbound:
    # Allow all inbound traffic between any nebula hosts
    - port: any
      proto: any
      host: any

We have one more edit to make to our config.yaml before we are done with it: let's update this section to match the cert names we will be generating. These paths will be different for your lighthouse and your hosts.

pki:
  # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/lighthouse1.crt
  key: /etc/nebula/lighthouse1.key

Generating our Certificates

Now that we have the config set up, let's create the Certificate Authority for your Nebula network. Run this command, replacing the organization name with whatever you want.

./nebula-cert ca -name "Myorganization, Inc"

Once the Certificate Authority has been created, we can generate the certificates for our nodes. You will need to specify the Nebula IP address for each node in your network, like I have done below. If you want to, you can leave off the -groups part of the second command.

./nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24"
./nebula-cert sign -name "server1" -ip "192.168.100.9/24" -groups "servers"

Turning Nebula into a service

To make things easier, and so we don't have to leave a connection up running Nebula in the foreground, we are going to make a systemd service for Nebula. Run this command to create our service file; feel free to use nano if it is your preferred editor.

sudo vim /etc/systemd/system/nebula.service

We will paste the below into this file.

[Unit]
Description=nebula

[Service]
User=root
WorkingDirectory=/etc/nebula/
# If the service fails to start, use the full path to the nebula binary here
# (for example /usr/local/bin/nebula or /usr/bin/nebula)
ExecStart=nebula -config config.yaml
Restart=always

[Install]
WantedBy=multi-user.target

If you are using vim, press Shift+: then type wq (write and quit) and hit Enter.

We will need to run these commands to reload the systemd daemon so it picks up the new unit file, then enable and start our new service.

sudo systemctl daemon-reload
sudo systemctl enable nebula.service
sudo systemctl start nebula.service

If everything was done correctly, you should be able to use the status command and see output like below.

sudo systemctl status nebula.service
● nebula.service - nebula
     Loaded: loaded (/etc/systemd/system/nebula.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2022-04-23 22:57:49 UTC; 5 days ago
   Main PID: 791 (nebula)
      Tasks: 9 (limit: 1066)
     Memory: 9.6M
     CGroup: /system.slice/nebula.service
             └─791 /usr/local/bin/nebula -config config.yaml

You have now successfully set up your Nebula lighthouse, and we will now set up a node on our local network. To complete this step you will need to copy the following files from your VPS down to your node (an scp example follows the list).

  • server1.crt
  • server1.key
  • ca.crt
  • config.yaml
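
Here is one way to pull those files down with scp, run from the node on your home network. The source paths are assumptions based on where we generated and stored things above (certificates in root's home directory on the VPS, config in /etc/nebula), and <vps-public-ip> is a placeholder for your Linode's public address.

sudo mkdir -p /etc/nebula
# Certificates generated on the VPS (adjust the source paths to wherever you ran nebula-cert)
sudo scp root@<vps-public-ip>:~/server1.crt /etc/nebula/
sudo scp root@<vps-public-ip>:~/server1.key /etc/nebula/
sudo scp root@<vps-public-ip>:/etc/nebula/ca.crt /etc/nebula/
# The lighthouse config makes a good starting point for the node config
sudo scp root@<vps-public-ip>:/etc/nebula/config.yaml /etc/nebula/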

Setting up your Node

Assuming that you are setting up your node on a Linux machine, we can follow the same steps as above to install Nebula.

Arch Linux

sudo pacman -S nebula

Fedora

sudo dnf copr enable jdoss/nebula
sudo dnf install nebula

Other Distros

sudo mkdir /etc/nebula
sudo curl -L https://github.com/slackhq/nebula/releases/download/v1.5.2/nebula-linux-amd64.tar.gz --output /etc/nebula.tar.gz
sudo tar -xvf /etc/nebula.tar.gz --directory /etc/nebula

Setting up your node config

The static_host_map: entry still needs to point at the public IP address of your lighthouse, exactly as it does in the lighthouse config. We will need to set am_lighthouse: back to false, and add the Nebula IP address of your lighthouse to the hosts: list in this section.

static_host_map:
  "192.168.100.1": ["100.64.22.11:4242"]


lighthouse:
  # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
  # you have configured to be lighthouses in your network
  am_lighthouse: false
  # serve_dns optionally starts a dns listener that responds to various queries and can even be
  # delegated to for resolution
  #serve_dns: false
  #dns:
    # The DNS host defines the IP to bind the dns listener to. This also allows binding to the nebula node IP.
    #host: 0.0.0.0
    #port: 53
  # interval is the number of seconds between updates from this node to a lighthouse.
  # during updates, a node sends information about its current IP addresses to each node.
  interval: 60
  # hosts is a list of lighthouse hosts this node should report to and query from
  # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
  # IMPORTANT2: THIS SHOULD BE LIGHTHOUSES' NEBULA IPs, NOT LIGHTHOUSES' REAL ROUTABLE IPs
  hosts:
    - "192.168.100.1"

The final edit needed in our node config is to update the cert and key paths to the names we generated for this node.

pki:
  # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/server1.crt
  key: /etc/nebula/server1.key

Turning Nebula into a service

Just like on the lighthouse, we are going to make a systemd service for Nebula so it runs in the background instead of tying up a terminal. Run this command to create our service file; feel free to use nano if it is your preferred editor.

sudo vim /etc/systemd/system/nebula.service

We will paste the below into this file.

[Unit]
Description=nebula

[Service]
User=root
WorkingDirectory=/etc/nebula/
# If the service fails to start, use the full path to the nebula binary here
# (for example /usr/local/bin/nebula or /usr/bin/nebula)
ExecStart=nebula -config config.yaml
Restart=always

[Install]
WantedBy=multi-user.target

If you are using vim, press Shift+: then type wq (write and quit) and hit Enter.

We will need to run these commands to reload the systemd daemon so it picks up the new unit file, then enable and start our new service.

sudo systemctl daemon-reload
sudo systemctl enable nebula.service
sudo systemctl start nebula.service

If everything was done correctly, you should be able to use the status command and see output like below.

sudo systemctl status nebula.service
● nebula.service - nebula
     Loaded: loaded (/etc/systemd/system/nebula.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2022-04-23 22:57:49 UTC; 5 days ago
   Main PID: 791 (nebula)
      Tasks: 9 (limit: 1066)
     Memory: 9.6M
     CGroup: /system.slice/nebula.service
             └─791 /usr/local/bin/nebula -config config.yaml

Once you have completed this setup, you can test by pinging your lighthouse's Nebula IP from your local machine. As long as it responds, everything is working and you are now able to communicate with your Linode over the Nebula overlay network.

ping 192.168.100.1
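
If the ping fails, a quick sanity check is to confirm the overlay interface actually came up; this assumes you kept the default tun device name (nebula1) from the sample config.

# The interface should exist and carry this machine's Nebula IP (192.168.100.9 in this example)
ip addr show dev nebula1

# Tail the service logs if the interface is missing
sudo journalctl -u nebula.service -f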

Setting Up NPM and Exposing Your Services

Now that we have Nebula set up and have confirmed that we can ping our lighthouse's Nebula IP from our home network, we can install Nginx Proxy Manager (NPM) on our VPS using Docker.

We are just going to cover the basics here, but all info for NPM can be found on their website.

Copy the below and save it as docker-compose.yaml wherever you would like to store it on your VPS. I personally create a folder for my compose files, and a ~/.docker directory in my home directory for the container's data, which is what the volume paths below assume.

version: "3"
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host-port>:<container-port>
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port
      # Add any other Stream port you want to expose
      # - '21:21' # FTP

    # Uncomment the next line if you uncomment anything in the section
    # environment:
      # Uncomment this if you want to change the location of 
      # the SQLite DB file within the container
      # DB_SQLITE_FILE: "/data/database.sqlite"

      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'

    volumes:
      - ~/.docker/data:/data
      - ~/.docker/letsencrypt:/etc/letsencrypt

To start this Docker container we will run one of the below commands. Depending on the version of Docker Compose you have installed, the command differs.

# Legacy Docker Compose
docker-compose up -d

# Newer Docker Compose plugin
docker compose up -d

This compose file will pull the image from Docker Hub and set up the container. Once it is online you can go to the Nebula IP of your VPS on port 81 and access the web console.

http://192.168.100.1:81

From here you can change the default login, and set up the proxy hosts for your domain. All you need to do is point your domain's DNS at your VPS' public IP address, and in the proxy host settings forward that domain to your node's Nebula IP address and the port the service is hosted on. Because the public ports are opened on the VPS rather than on your home connection, and the traffic reaches your home server over the Nebula tunnel, your ISP's port blocking never comes into play.
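
Once your DNS has propagated, you can check the whole chain from anywhere on the internet; service.example.com below is a placeholder for whatever domain you configured in NPM.

# Should return headers from the service on your home server,
# proxied through NPM on the VPS and over the Nebula tunnel
curl -I http://service.example.com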

Nathanial Wilson