I am leaning towards using Debian to host my new UniFi OS server (on my XCP-ng hypervisor). Just curious if this is a good choice, or if I should go with something else like Ubuntu.
They say Ubuntu, but since it runs in a Podman container, in theory it should work on Debian as well, or any other distribution that can run Podman. In other words, the only real prerequisite is probably Podman version 4.3.1 or higher: How to install UniFi OS Server on Linux
That said, if you're going to dedicate a VM to it, which I'd recommend, I'd just go with Ubuntu, as advised in their guide.
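If it helps, here is a minimal sketch for checking that your distro's Podman meets the 4.3.1 minimum mentioned in the guide. The installed version is hardcoded below purely for illustration; on a real system you would read it from `podman --version` as the comment shows:

```shell
# Compare an installed Podman version against the 4.3.1 minimum using sort -V.
# "have" is a made-up example value; in practice use:
#   have=$(podman --version | awk '{print $3}')
req="4.3.1"
have="4.9.3"   # example value, not from a real install
if [ "$(printf '%s\n' "$req" "$have" | sort -V | head -n1)" = "$req" ]; then
  echo "Podman $have meets the $req minimum"
else
  echo "Podman $have is too old (need >= $req)"
fi
```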
Debian or Ubuntu: use the one you have knowledge of, as you will have to keep the system updated.
After all, they're not so different, and Ubuntu does have a do-release-upgrade
script, which should make upgrades to a new distro release even easier (at least in theory). So unless you have a compelling reason not to use Ubuntu, I'd go with what Ubiquiti officially supports in this case.
Also, although it should be distro-agnostic in theory thanks to Podman, Ubiquiti seems to support only Ubuntu, likely because that's the only OS they tested it on. However, this also means that most users will probably run it on Ubuntu as well, so if you run into issues on another distribution, getting help may be more difficult.
And if you want official support from Ubiquiti, Ubuntu is obviously the way to go, though I'm not sure whether they actually offer support for self-hosted controllers.
I may do Ubuntu. I was screwing around with Debian 13 yesterday and had to do some workarounds to get MongoDB to work, among other things, and future me doesn't want to deal with those headaches.
Are you sure you're talking about UniFi OS? Because from the link I posted, it looks like everything is provided as containers via Podman, so there should be no need to install MongoDB or any other dependencies anymore.
In case you're talking about the UniFi Controller, here are my notes on how I installed it on Debian 12 (haven't tried it on 13 yet):
Prerequisites:
sudo apt install -y gnupg curl ca-certificates apt-transport-https
Repositories:
echo 'deb [ arch=amd64,arm64 ] https://www.ui.com/downloads/unifi/debian stable ubiquiti' | sudo tee /etc/apt/sources.list.d/100-ubnt-unifi.list
sudo wget -O /etc/apt/trusted.gpg.d/unifi-repo.gpg https://dl.ui.com/unifi/unifi-repo.gpg
curl -fsSL https://www.mongodb.org/static/pgp/server-7.0.asc | \
sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg \
--dearmor
echo "deb [ signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] http://repo.mongodb.org/apt/debian bookworm/mongodb-org/7.0 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list
Installation:
sudo apt update && sudo apt install -y mongodb-org
sudo systemctl enable mongod && sudo systemctl start mongod
sudo apt install -y unifi
Ohhhh, I'm dumb, I was clicking on the wrong links. I installed the Controller, not UniFi OS. Whoooops
Just click the link in my first reply then
Btw, regarding MongoDB: since version 5.0, it requires AVX CPU support. When running in a VM, ensure your host CPU supports AVX (which shouldn't be an issue unless it's really ancient) and that the VM hardware settings actually pass those CPU flags through.
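A quick way to confirm the flag is actually passed through to the guest; this just reads /proc/cpuinfo, so it works on any Linux VM:

```shell
# Check whether the CPU, as seen from inside the VM, advertises AVX.
# MongoDB 5.0+ refuses to start without it.
if grep -qm1 avx /proc/cpuinfo; then
  echo "AVX present: MongoDB 5.0+ should run"
else
  echo "AVX missing: check the VM CPU type (e.g. use 'host' instead of a generic model)"
fi
```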
Although, I'm not even sure whether UniFi OS still uses MongoDB.
It still does. The release notes for the Network application indicate which versions are supported.
But does MongoDB still need to be installed separately on the OS? According to the installation guide I linked in my first reply, you only need to install Podman (on Ubuntu). Then, I assume, the installation script pulls in the containers containing the UniFi OS software and all the necessary dependencies.
If it still uses MongoDB, the AVX requirement of course persists (which, as I said, shouldn't be an issue unless your CPU is older than roughly 2011 or 2012). But thanks to everything being delivered as containers, you no longer have to deal with dependency issues and/or third-party repos (except for the Podman repos), which would clearly be a step forward, imho.
Also, OS version upgrades should become a breeze, as the only OS-related dependency that remains is Podman.
Update: I started with a fresh copy of Debian 13, installed Podman, and it worked super easily. I did follow along with Crosstalk's video too, just to make sure.
Really? Well, I can only hope that you've somehow missed a probably very small "skip" or "create local account" link. Because if a cloud account is actually required and UniFi OS is intended to replace the Controller, I won't be buying my next access point from Ubiquiti.
EDIT:
Ok, you deleted your post, so I guess a UniFi account is not required?
No, you don't install it in the base OS. But I've seen people post that inside their containers they're still using MongoDB 3.6, which is also what they're using on all the UniFi OS appliances (CK, UDM, etc.)
If that is actually true, it would be a downgrade to the controller in that regard.
On the other hand, I always suspected that they only made the Controller compatible with newer versions of MongoDB because it was no longer possible to install such an old, unsupported version on a recent operating system. But now that they deliver everything as container images, I suppose that doesn't matter anymore, and security isn't an issue either, since the DB runs in a rootless container. So I guess it's fine to install this thing on a cloud server and manage a dozen client sites with it.
Seriously though, if they actually based it on 3.6, it's probably to avoid issues when migrating from older Cloud Keys or something, but still, there has to be a better way…
I installed UniFi OS on a new Debian 13 VM (on Proxmox) with no issues. I restored my former (exported) setup (which had been running in a Debian LXC on Proxmox) with no issues. Everything came back up as it had been running on Debian 12. Super easy.
I would like to share my experience so far testing out UniFi OS Server 4.2.23 running in a virtual Ubuntu 22.04 LTS environment on my test XCP-ng server.
Install process went very well and was easy…
Been running the server for a few weeks already with no major issues… I have only hosted test switches/APs in the test environment…
What is very attractive about this setup is the concept of a UniFi control plane that seems to be the shell that runs everything else.
It opens up the possibility for future apps to be installed, like Protect and others… (please make that happen, Ubiquiti)
So far the upgrade process of the Network application seems to work well. However, for those of us used to doing this manually on standard UniFi Network server installs, there is no command-line script (GlennR) or default backup process shown as an option during the upgrade process.
Makes me wonder what happens if the app does not update properly.
I really like the ability to simply choose which upgrade path to take: Stable / Release Candidate / Early Access.
I see a lot of potential for this to be very useful, and while it's still fairly new, I hope to see it mature into a released product.
Would be nice if they specified TCP and UDP for the ports used. Not a major issue, but it would be nice for granularity.
Ports used:
- 3478, 5005, 5514, 6789, 8080, 8444, 8880, 8881, 8882, 9543, 10003, 11443.
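Since the TCP/UDP split isn't documented, here's a cautious sketch that just prints ufw rules for review rather than applying them. The assumptions are mine: ufw as the host firewall, UDP-only for 3478 (which is the standard STUN port), and both protocols for everything else since Ubiquiti doesn't say:

```shell
# Print (rather than execute) ufw allow rules for the UniFi OS Server ports.
# The TCP-vs-UDP split is an assumption, not from Ubiquiti's docs:
# 3478 is STUN and therefore UDP; the rest get both protocols to be safe.
for p in 5005 5514 6789 8080 8444 8880 8881 8882 9543 10003 11443; do
  echo "ufw allow $p/tcp"
  echo "ufw allow $p/udp"
done
echo "ufw allow 3478/udp"
```

Once you've confirmed which ports are actually in use (see the netstat tip later in this thread), you can trim the list and pipe it through `sh` or paste the rules in by hand.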
I tried to install it on a Debian VM on Proxmox, but it didn't work. Can you please describe the process you followed? Maybe that can help.
I am happy to help.
My vanilla Proxmox Debian (13) VM is set with:
- 8 GB memory (which idles around 65% utilization all the time)
- 40 GB disk (happens to be ZFS)
- 4 CPU cores (x86-64-v2-AES)
- OS type: Linux 6.x - 2.6 Kernel (in the Proxmox selection)
I first backed up the UniFi LXC to an x.unf file so I could import it into my fresh OS Server VM. If you need any guidance on how to do this, let me know.
After getting the new VM set up with my preferred Debian (13) settings, users, SSH keys, etc., I connected to it via SSH.
There are good crosstalksolutions.com and virtualizationhowto.com blogs and YouTube videos if you need other explanations, but what I did on the new VM is essentially the following commands over SSH:
- sudo apt update && sudo apt upgrade (get everything up to date)
- sudo apt install wget (so you can download the URL below)
- sudo apt install podman (fully self-contained Docker alternative)
- cd (to get into my home directory)
- wget https://fw-download.ubnt.com/data/unifi-os-server/2f3a-linux-x64-4.3.6-be3b4ae0-6bcd-435d-b893-e93da668b9d0.6-x64 (this is version 4.3.6, the current release as of 3 days ago; older versions are available if you need them for some reason; I originally installed an older one but have since auto-upgraded to this one)
- chmod +x 2f3a-linux-x64-4.3.6-be3b4ae0-6bcd-435d-b893-e93da668b9d0.6-x64 (to make the installer executable; change the file name if you download an older version)
- ./2f3a-linux-x64-4.3.6-be3b4ae0-6bcd-435d-b893-e93da668b9d0.6-x64 (to execute the installation)
If the install ends properly, you will see "UOS Server is running at" along with the HTTPS URL of your VM. Note the port number 11443 after the colon; it is important. Enter something like https://192.168.x.x:11443 (or whatever IP address you assigned or received via DHCP).
There are several ways to get your settings imported from the former LXC into the new VM. To make this as simple as possible, I stopped the old LXC and changed the Proxmox VM settings so the new VM received the exact same IP address as the former LXC.

Most people will want to have the LXC and VM running simultaneously, which requires a site migration initiated from the LXC to the VM (essentially changing the set-inform address on all UniFi devices to the new IP of the VM). I preferred to stop the LXC and reconfigure the Proxmox networking (MAC address) and pfSense DHCP reservation so the new VM received the same address as the former (now stopped) LXC. This way, when I visited the new VM at the URL above (with the 11443 port), I was able to simply import the backed-up LXC site, and the VM came up with all the former site settings. The many UniFi devices did not need to change at all, since they were still talking to the same IP address on the VM as they were on the LXC predecessor.
There are many variables if you use pfSense or a non-UniFi firewall, or HAProxy or another proxy to get fully qualified domain names instead of IP addresses. If you get tripped up on that stuff, let me know and I can help further.
Last Proxmox step. If you use the Proxmox firewall (as I do), then there are a bunch of TCP and UDP ports that need to be opened. For the initial install, I suggest you turn off the firewall on the new VM and let it get fully operational. You can then tighten the Proxmox VM firewall to just the ports you need by running "sudo netstat -tulpn" (or an equivalent CLI command) in the running VM to see exactly what you need to set up. I happen to lock down things in both pfSense (layer 3) and within Proxmox (layer 2) for maximum protection.
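To turn the "see what's listening" step into a plain list of port numbers for the firewall rules, you can filter the output. The sample line below is made up so the pipeline itself can be shown; on the running VM you would feed it real output from `ss -tulpn` (or `netstat -tulpn`) instead:

```shell
# Extract just the listening port number(s) from ss/netstat-style output.
# "sample" is a fabricated line standing in for one row of `ss -tulpn`;
# on a real system: ss -tulpn | grep -oE ':[0-9]+ ' | tr -dc '0-9\n' | sort -un
sample='tcp  LISTEN 0 4096  0.0.0.0:11443  0.0.0.0:*  users:(("uosserver",pid=1234,fd=5))'
echo "$sample" | grep -oE ':[0-9]+ ' | tr -dc '0-9\n'
# prints: 11443
```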
Let me know if you need more help. I am happy to lend a hand.
Tony