Tutorial: Setting up Unifi Controller with docker, reverse proxy and Let's Encrypt

Before we start

Scope of this tutorial

We will be setting up a Unifi controller in a docker container. It’s going to be reachable under a domain of your choice and connections will be protected by a Let’s Encrypt certificate. That is done using a reverse proxy.
Both the reverse proxy that serves your controller and the LE client will run as containers as well.

I am not going to be covering firewall or NAT rules that you might need to set up because that totally depends on your specific environment.

Why docker?

Docker is great for a lot of reasons. Most of them are completely irrelevant for the scope of this tutorial. I don’t use it because containers are more secure than host applications (although they can be, if done right) or because I need to scale my services. The main reason I use it is because it’s super simple to deploy containers with docker and in case something breaks, you just throw the old container away and create a new one.

Preconditions

What you need:

  1. Basic docker knowledge
    You don’t need to know much about docker to simply repeat all the steps in this tutorial, but you will find it very useful to be able to list running containers, start and stop them, read logs and know how docker’s volume and port mapping systems work. If you don’t know what docker is, you can read “Docker overview”.

  2. A linux server
    This can be self-hosted or cloud-hosted, it can be bare-metal or virtualized. Doesn’t matter much, but it’s gotta run docker.

  3. A domain
    You need a domain name that points to your server in order to access the services you run in docker. Let’s Encrypt won’t issue certificates for IP addresses.
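
For reference, the day-to-day docker commands point 1 refers to look like this (the container name is a placeholder; later steps will use real names like nginx-proxy):

```shell
docker ps -a              # list all containers, running and stopped
docker logs my-container  # read a container's logs
docker stop my-container  # stop it
docker start my-container # start it again
```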

I’m gonna be showing this on an Ubuntu 18.04, so depending on your flavor of linux some of the steps shown might vary.

Docker containers used:

  • jwilder/nginx-proxy (the reverse proxy)
  • jrcs/letsencrypt-nginx-proxy-companion (the Let’s Encrypt client)
  • linuxserver/unifi-controller (the Unifi controller)

Check out their documentation for more insight on how they work.

The tutorial

Step 1: Install docker

This heavily depends on your OS and I’m not gonna cover this here, the instructions in the Docker Docs are pretty solid and straightforward.

In the navigation, go to Get Docker -> Docker CE -> Linux and then your version. For your convenience, here is the link to the Ubuntu instructions.

Also I recommend you take the time to follow the “Manage Docker as a non-root user” steps. It’s considered a security best-practice.
After following this, even though you don’t need to be root to start containers anymore, processes in containers still run as root. If this concerns you, check out this article that explains how to run processes in containers as non-root. You can also do this after finishing this tutorial.
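
One way to do that (a minimal sketch, not necessarily the article’s approach; the uid and the alpine image are placeholder choices) is docker’s --user flag, which starts the container process under the given uid:gid instead of root:

```shell
# Run the container's process as uid/gid 1000 instead of root.
# Image and uid are only for illustration.
docker run --rm --user 1000:1000 alpine id -u
# should print: 1000
```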

Step 2: Set up some folders

This can obviously be customized to your liking, but here’s how I do it.

All my docker scripts, which I use to create containers, go into /usr/docker.
All container data goes into /var/docker.

You can create them like this:
$ sudo mkdir /usr/docker
$ sudo mkdir /var/docker

Typically, you’ll only execute these scripts once, and then again whenever a container breaks or you want to modify its ports or volumes, which doesn’t happen a lot. But I find it handy to have them around, so you don’t have to remember all the parameters you used the last time you started a container.
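
To illustrate the pattern with a throwaway path (the real scripts live in /usr/docker and contain the docker run commands shown below), a script is just a file holding the command, marked executable:

```shell
# Throwaway illustration of the script pattern used in this tutorial.
mkdir -p /tmp/docker-scripts
cat > /tmp/docker-scripts/demo <<'EOF'
#!/bin/sh
echo "would run: docker run -d --name demo ..."
EOF
chmod +x /tmp/docker-scripts/demo
/tmp/docker-scripts/demo
```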

If you opted to run processes in containers as another user (non-root), make sure that user has write access to /var/docker (and all other relevant directories).

Step 3: Create the proxy

Now, on to our first container - the reverse proxy. In short, a reverse proxy is a layer between you and a web server (the Unifi controller, in our case). Among other things, it can handle encryption and authorization for multiple underlying services, and the one we’re using here is very easy to set up.

Create the file /usr/docker/nginx-proxy (has to be done as root, so use sudo). This is the script that’ll go inside:

docker run -d -it \
--name nginx-proxy \
--restart unless-stopped \
\
-v /var/run/docker.sock:/tmp/docker.sock:ro \
-v /var/docker/nginx-proxy/certs:/etc/nginx/certs:ro \
-v /var/docker/nginx-proxy/conf.d:/etc/nginx/conf.d \
-v /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d \
-v /usr/share/nginx/html \
-p 192.168.1.100:80:80 \
-p 192.168.1.100:443:443 \
--label com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy \
\
jwilder/nginx-proxy:alpine

The whole thing is one long command (docker run) with a bunch of parameters. If you want to know what each of these do in detail, you can expand the section below to learn about it.

192.168.1.100 is the IP address that the proxy server will bind to. Fill in your own. Or, if you want it to listen on all interfaces, just leave the address out like so: -p 80:80 and -p 443:443.

Detailed breakdown of the docker run command

A \ just before a line break denotes that the command will continue in the next line. You could write this all in one line, but then it would be hard to read and maintain.

docker run will create and start a container (as opposed to docker create, which will only create it).
The -d option means the container is started in detached mode (in the background).
-i and -t are often used in combination (which can also be written as -it); -i keeps the container’s stdin open and -t allocates a pseudo-tty, so you can attach to the container and detach again to see what it’s currently doing.

--name nginx-proxy sets the name of the container, for easier identification and reference.
--restart unless-stopped tells docker to automatically restart your container when it crashes or the system reboots, unless you manually stop it (then it will stay off until you turn it back on). I prefer this over --restart always because it gives you more control.

Lines starting with -v denote volumes. There are multiple ways to use volumes in docker. What we’re doing for the first four is mapping folders on the host filesystem to folders of the container’s filesystem. The syntax is -v /folder/on/host:/folder/on/container[:OPTION]. One of the options, which is used here, is ro which stands for read-only, so the container can’t modify the folder’s contents.
Note that the folders we’re referencing on the host side don’t exist yet. That’s fine: when docker sees that a bind-mounted host folder doesn’t exist, it will create it. One thing to be aware of: unlike named volumes, bind mounts are not pre-populated with the files from the container side of the mapping. The config files that show up in these folders are written by the container itself at runtime, so they’ll appear shortly after the container starts.
The last volume parameter (-v /usr/share/nginx/html) is an anonymous volume. Since we don’t need to modify the files in there ourselves and they don’t need to be persistent across multiple containers, we let docker handle where on the host it’s stored. The path /usr/share/nginx/html is on the container.

Next are the port mappings. The syntax is -p [BIND_ADDRESS:]HOST_PORT:CONTAINER_PORT. In this case, 192.168.1.100 is one of the IP addresses of my server. It makes sense to bind to a specific address when your machine is assigned multiple IP addresses and/or is in multiple vlans. When you’re setting this up on a VPS, omitting the bind address will be just fine (like so: -p 80:80).

The --label part will add a label com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy to the container, which is used by the LE container to identify which container to hook into.

Last is the only required argument for docker run, the name of the container image. Just to recap: at its core, this is the command docker run jwilder/nginx-proxy with a ton of options.


Now that the script is in place, let’s make it executable and run it:
$ sudo chmod +x /usr/docker/nginx-proxy
$ /usr/docker/nginx-proxy

It will pull (download) a bunch of layers and then print out a string of characters, which is the container id. You won’t need to remember that, since we’ve given our container a catchy name to call it by. To get an overview of your current containers, type
$ docker ps -a

It should look something like this. Make sure the status reads Up, otherwise something’s gone wrong.

CONTAINER ID     IMAGE                         COMMAND                  CREATED            STATUS           PORTS                                                   NAMES
ebb8027025c5     jwilder/nginx-proxy:alpine    "/app/docker-entrypo…"   18 seconds ago     Up 17 seconds    192.168.1.100:80->80/tcp, 192.168.1.100:443->443/tcp    nginx-proxy

Ok, your proxy server is up and running. At this point, it doesn’t have anything to proxy, so when you go to http://192.168.1.100 or your domain, you’ll get a 503 error. Let’s fix that.
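
You can also check this from the command line (assuming the proxy is bound to 192.168.1.100; substitute your own address or domain):

```shell
# Print only the HTTP status code; expect 503 while no backend
# with a matching VIRTUAL_HOST is running behind the proxy.
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.1.100/
```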

Step 4: Run the Unifi controller

First, some things to note about the unifi controller. Ubiquiti has a handy table on their site that lists all the ports and what they are used for. We’re keeping it simple here, using only the bare minimum, but most of the other ports can be mapped in the same way (the exception being UDP 1900, but more on that at the end).

The three ports we’ll use are:

  • TCP 8443 for web GUI access
  • TCP 8080 for device and controller communication
  • UDP 3478 for device and controller communication via STUN

In case you wonder why Unifi needs two ports for device and controller communication, that’s a whole other story involving some networking details I might go into in another guide.

The command for this container will go into /usr/docker/unifi:

docker run -d -it \
--name unifi \
--restart unless-stopped \
\
-e "VIRTUAL_HOST=unifi.example.com" \
-e "VIRTUAL_PORT=8443" \
-e "VIRTUAL_PROTO=https" \
-e "LETSENCRYPT_HOST=unifi.example.com" \
-e "LETSENCRYPT_EMAIL=you@example.com" \
\
-v /var/docker/unifi:/config \
\
-p 192.168.5.3:3478:3478/udp \
-p 192.168.5.3:8080:8080 \
\
linuxserver/unifi-controller

Remember, you’ll need to do this as root and don’t forget to make the script executable. Also, use your own domain and email address.

You can see there are five environment variables being set here. These are not actually important for the Unifi controller, but rather our two other containers.

The first three are needed by the nginx proxy:

  • VIRTUAL_HOST
    This tells the proxy under which domain name you would like the unifi controller to be accessible. Of course this domain must point to your server.
  • VIRTUAL_PORT
    This denotes the upstream server port. It defaults to 80, but since the controller’s GUI doesn’t listen on port 80, but on port 8443 instead, that’s what we set it to.
  • VIRTUAL_PROTO
    This configures the protocol that the proxy uses to connect to the upstream server (the controller GUI). It defaults to http, but the Unifi controller only allows incoming https connections. Since you’re only supposed to use this reverse proxy on infrastructure you trust, it doesn’t verify the upstream server’s certificate and thus it doesn’t matter that the controller’s out-of-the-box certificate is self-signed.

The last two provide information for the LE companion (which we’ll set up in the next step):

  • LETSENCRYPT_HOST
    The host (commonName) for the certificate. This should be the same as VIRTUAL_HOST.
  • LETSENCRYPT_EMAIL
    The email address LE will send expiry notices to. You have to specify one.

Next is port mapping. We map TCP port 8080 on the host to port 8080 on the container and UDP port 3478 on the host to port 3478 on the container. Again, you may or may not want to bind to an IP address. Note that we didn’t map port 8443 of the controller to a port on the host. Because we use the proxy, we don’t need to, as we will be accessing our controller using https://unifi.example.com instead of https://192.168.1.100:8443.

Why did I use a different IP address here?

My home network has several VLANs. The “dot 1” net is my normal LAN which all the PCs and phones are connected to. The “dot 5” net on the other hand is sort of a management net. Servers, IPMIs, UPSs, management interfaces of switches and APs are in there. So by binding to an address in the “dot 5” net, we make sure that all the APs (and other unifi devices, but I only use their APs) can communicate with the controller. At the same time, the controller GUI is served by the reverse proxy which is actually bound to the “dot 1” net and can be accessed from there.
Of course you don’t have to do this separation, but if you want to, the option is there.


So now if you execute that script and confirm that the container is up, you can open up your domain in a browser and should see the controller setup wizard (it may still say 503 at first, give it a minute to boot up).

Step 5: Let’s encrypt it

Ok, now for the part you’ve come here for. This is essentially more of the same.

Contents of /usr/docker/nginx-letsencrypt:

docker run -d -it \
--name nginx-letsencrypt \
--restart unless-stopped \
\
--volumes-from nginx-proxy \
-v /var/docker/nginx-proxy/certs:/etc/nginx/certs:rw \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
\
jrcs/letsencrypt-nginx-proxy-companion

Detailed breakdown of the docker run command

--volumes-from looks at all volumes of the nginx-proxy container and adds them to this container as well.
However, in the following line the certificate folder mount is overwritten to make it writable (rw option).


That’s all there is to that one, so save it, make it executable and run it. You can run the following command to look at the logs of the LE container and watch as it goes through all the steps to get the certificate:

$ watch -n 1 docker logs --tail 30 nginx-letsencrypt

You should see something like this:

Creating/renewal unifi.example.com certificates... (unifi.example.com)
2019-01-15 15:15:56,025:INFO:simp_le:1479: Generating new certificate private key
2019-01-15 15:15:58,613:INFO:simp_le:360: Saving key.pem
2019-01-15 15:15:58,613:INFO:simp_le:360: Saving chain.pem
2019-01-15 15:15:58,614:INFO:simp_le:360: Saving fullchain.pem
2019-01-15 15:15:58,614:INFO:simp_le:360: Saving cert.pem
Sleep for 3600s

At this point, the LE companion has installed your certificate and associated data into the /var/docker/nginx-proxy/certs folder on the host (which, if you remember, the proxy can read from) and reloaded the proxy server. So without needing to do anything further, you can refresh your browser window and will be automatically redirected to the https version of the site.
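
If you want to verify the new certificate from a shell (this assumes port 443 is reachable from where you run it, and uses the example domain from above), openssl can print its subject, issuer and validity dates:

```shell
# Connect to the proxy and dump basic info about the certificate it serves.
echo | openssl s_client -connect unifi.example.com:443 -servername unifi.example.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```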

Further information

(I’ll be updating this section if questions keep coming up)

Device auto discovery

Because in this setup all containers are on a docker bridge network and not actually on your LAN, device auto discovery will not work (afaik): your LAN and the docker bridge network are different broadcast domains.
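
As a workaround, you can point devices at the controller manually with set-inform (a sketch; the AP address here is made up, and the default SSH credentials ubnt/ubnt only apply to unadopted devices):

```shell
# SSH into the access point (address is an example)...
ssh ubnt@192.168.5.20
# ...then, at the device's own prompt, tell it where the controller lives:
set-inform http://192.168.5.3:8080/inform
```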


If you have trouble following the tutorial or if you feel I could have explained some things better, please leave a reply so I can fix that.

Interesting!

I’ve always run this when I needed to mess with Unifi settings. I don’t manage any Ubiquiti devices across the internet though. Net=host here allows auto-discovery to happen.

sudo docker run \
    --rm \
    --init \
    -p 8080:8080 \
    -p 8443:8443 \
    -p 3478:3478/udp \
    -p 10001:10001/udp \
    -e TZ='America/Detroit' \
    -v ~/unifi:/unifi \
    --net=host \
    --name unifi \
    jacobalberty/unifi:stable

Haven’t tried it, but it seems reasonable. With --net=host the container shares the host’s network stack, so all network isolation is lost. I generally want to avoid going this way, but if someone needs auto-discovery very badly, this will work.
Also as I understand it, the individual port mappings are unnecessary here since the container directly binds to ports on the host.

That’s a good point and you’re absolutely correct about --net=host and not needing the port mappings.