Migrating existing Ubiquiti UniFi Controller to Docker in Synology NAS

I am up, running, and migrated. What is the process when a new controller version comes out? Do I check the overview page to see what the latest tag version is? So, for example, is 6.0.27-rc the latest version supported by the Jacob container?
 
It is all clearly explained here:



Keep in mind, there is a big difference between a supported version and a "really stable" version. Stay away from RC releases.
There have been 28 patches for the new 6.x branch since September, which has never happened in UniFi Controller history.
I'm still on 5.x.

Am I missing something? I don't see anything on that page that explains how to upgrade the controller.
 
Are you accessing it via a reverse proxy, or something else?

Yes. I've seen another reference regarding Plex, and it said to go to the custom headers and add a WebSocket header.
I tried that and the error was gone. Does this sound like the appropriate fix?
 
Have you tried it? When I tested it, it worked fine via the reverse proxy. Not sure if there was a WebSocket header or not. Still, it won't hurt to try.
 

Maybe it's because I'm on controller version 6.0.28 and it's a UniFi issue. They had something similar a few years back where they had to implement a fix. For now, if I go to custom headers in the reverse proxy rule and create a WebSocket header, it resolves it.

Are you guys running the controller as root? Other resources have suggested going to the container's environment settings in Docker and changing BIND_PRIV and RUNAS_UID0 to false.
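For reference, a minimal sketch of setting those variables when creating the container from the command line. This assumes the popular jacobalberty/unifi image; the volume path, ports, and tag here are illustrative, not taken from this thread:

```shell
# RUNAS_UID0=false stops the controller process from running as root;
# BIND_PRIV=false skips the capability setup for binding privileged ports
# (the controller's default ports are all above 1024 anyway).
sudo docker run -d --name unifi \
  -e RUNAS_UID0=false \
  -e BIND_PRIV=false \
  -v /volume1/docker/unifi:/unifi \
  -p 8443:8443 -p 8080:8080 -p 3478:3478/udp \
  jacobalberty/unifi:stable
```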
 
There are 3 kinds of known WebSocket errors:
- Caused by an insufficient reverse proxy setup on the user's side; the solution is known and works well. This one is not an error on the Synology side.
- Caused by the Synology Docker GUI app, which I described here somewhere in Q1/2019. For me this is a cosmetic issue, because why use the Synology Docker GUI when you can use the better Portainer environment?
- Caused by UniFi itself (seen in the controller GUI). This issue has been well known since controller v4.x. Everything seems to work well, but the error is still shown in the GUI, and operation of the controller isn't affected. From my observation, whether it appears is random and depends on the controller version.

In all of these cases I can live with it, without any impact on my trust in UniFi or Synology.
 
Is it OK to have data checksums enabled on the Docker shared folder when running a UniFi container? At the bottom of the shared folder options it says enabling data checksums is NOT recommended for hosting databases. Do you think this includes the UniFi Controller?

 
This feature is about copy-on-write safety, which is a great part of Btrfs. But it has an impact on speed.
I have all of my speed-priority containers running on an ext4 filesystem with RAID 1; it's faster than Btrfs RAID 1. The rest of my data (including my other containers) runs on Btrfs everywhere, because of Btrfs's added value. No anomalies found so far.

Official Docker KB source:
Containers performing lots of small writes (this usage pattern matches what happens when you start and stop many containers in a short period of time, as well) can lead to poor use of Btrfs chunks.
Fragmentation is a natural byproduct of copy-on-write filesystems like Btrfs. Many small random writes can compound this issue. Fragmentation can manifest as CPU spikes when using SSDs or head thrashing when using spinning disks.


Of course, when you host small containers just for family usage, you can use Btrfs without worries.
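If you are unsure which filesystem a given shared folder sits on before weighing this trade-off, a quick check over SSH (assuming GNU stat is available, as on DSM; the default path here is just an example):

```shell
#!/bin/sh
# Print the filesystem type backing a path, e.g. your docker shared folder.
# Pass the path as an argument; defaults to the current directory.
TARGET="${1:-.}"
stat -f -c 'filesystem type: %T' "$TARGET"
```

On a Btrfs volume this prints `filesystem type: btrfs`; on ext4 it reports `ext2/ext3`, since the kernel uses one driver name for the ext family.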
 
You are certainly right. I'm not very familiar with IP routing, so I need to dig a bit to see if there is something I can do. Anyway, thanks for your help.

Just answering myself. The correct setup I used to make it work is the following:

My router : 192.168.0.1
The static IP I want to use for my UniFi container : 192.168.0.100
I use link aggregation on my DS1513+ on lan1 and lan2. The aggregated link is named bond0. If you are not using this configuration, you can probably replace bond0 with eth0.

Please ensure that 192.168.0.100 and 192.168.0.101 are not already in use in your network topology and that your DHCP server won't serve those addresses.

Code:
sudo docker network create -d macvlan --gateway=192.168.0.1 --subnet=192.168.0.0/24 --ip-range=192.168.0.100/32 -o parent=bond0 --aux-address 'host=192.168.0.101' UniFi-Network

Please note this part is not persistent and needs to be redone after each reboot (I need to write a script and add it to the DSM Task Scheduler):

sudo ip link add UniFi-Bridge link bond0 type macvlan mode bridge
sudo ip addr add 192.168.0.101/32 dev UniFi-Bridge
sudo ip link set UniFi-Bridge up
sudo ip route add 192.168.0.100/32 dev UniFi-Bridge

Now your container has a static IP, and a route so you can reach it from the synology host and from your lan.
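Using the addresses from the example above, the setup can be sanity-checked from the Synology host (diagnostic commands only; nothing here changes state):

```shell
# The macvlan bridge interface should be up, with the aux address on it
ip addr show dev UniFi-Bridge

# The host should resolve a route to the container through the bridge
ip route get 192.168.0.100

# The container should now answer the host; without the extra macvlan
# bridge this would fail, because a host cannot talk directly to its
# own macvlan child interfaces
ping -c 1 192.168.0.100
```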

Then, you can setup the proxy :

Source:
Description : UniFi
Protocol : https
hostname : unifi.domain.com
port : 443

Destination:
Protocol : https
hostname : 192.168.0.100
port : 443

Custom headers :
X-Real-IP $remote_addr
X-Forwarded-Host $host
X-Forwarded-For $proxy_add_x_forwarded_for
X-Forwarded-Proto $scheme
Upgrade $http_upgrade
Connection "Upgrade"
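DSM's reverse proxy is nginx under the hood, so the headers above correspond roughly to the following nginx directives; this is an illustrative sketch, not DSM's exact generated configuration:

```nginx
server {
    listen 443 ssl;
    server_name unifi.domain.com;

    location / {
        proxy_pass https://192.168.0.100:443;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # These two are what fix the WebSocket errors discussed earlier
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
```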

Now you can access the container at https://unifi.domain.com/

No need to NAT any port number.
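For completeness, attaching the container to the macvlan network created earlier with its fixed address looks roughly like this (the image name and volume path are assumptions based on the jacobalberty/unifi image discussed in this thread):

```shell
# --network/--ip pin the container to 192.168.0.100 on UniFi-Network;
# with macvlan the container's ports are reachable directly on that IP,
# so no -p port mappings are required.
sudo docker run -d --name unifi \
  --network UniFi-Network \
  --ip 192.168.0.100 \
  -v /volume1/docker/unifi:/unifi \
  jacobalberty/unifi:stable
```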
 
I cannot update my previous post anymore, so I'll just add some clarification.

In a Synology cluster (this is my context), the network interface bond0 (for the first link aggregation) is named bond1 in the DSM GUI (yes, it is weird). The bond1 network interface (second link aggregation) is reserved for cluster synchronization and heartbeat, and it is presented in the DSM GUI as bond2.

The command : sudo ip link add UniFi-Bridge link bond0 type macvlan mode bridge
will add a new network interface, which will be presented as "(unknown) UniFi-Bridge" in the DSM GUI.

Now, regarding making the new route persistent after a server reboot or a system update:

you can add a triggered task in the DSM Task Scheduler for the "Boot-up" event that executes a script (as root) like this one:

unifi-bridge.sh

Bash:
#!/bin/sh
# Update network configuration to add a route to the UniFi container

# Create a virtual network interface to allow IP routing to the UniFi container
# Not persistent, has to be redone after each server boot-up


ip link add UniFi-Bridge link bond0 type macvlan mode bridge
ip addr add your_aux_bridge_ip/32 dev UniFi-Bridge        # e.g. 192.168.0.101
ip link set UniFi-Bridge up
ip route add your_container_static_ip/32 dev UniFi-Bridge # e.g. 192.168.0.100
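Since a triggered task can fire again without a reboot in between, a slightly more defensive variant of the same script guards against the interface already existing; the guard is my addition, not part of the original, and the addresses follow the example earlier in the thread:

```shell
#!/bin/sh
# Recreate the macvlan bridge and route only if they are not already present,
# so the script is safe to run repeatedly.
if ! ip link show UniFi-Bridge >/dev/null 2>&1; then
    ip link add UniFi-Bridge link bond0 type macvlan mode bridge
    ip addr add 192.168.0.101/32 dev UniFi-Bridge
    ip link set UniFi-Bridge up
    ip route add 192.168.0.100/32 dev UniFi-Bridge
fi
```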
 
Not sure what you are asking about.

My initial requirements were: set a dedicated static IP for the UniFi container that is not the IP address of the host (DSM), and avoid having to use NAT to access the controller. NATing some ports is sometimes impossible because they are already used by other services (ports 8443 and 8080 are not available on my system).

By re-reading my post, I found a typo in the proxy configuration:

Code:
Destination:
Protocol : https
hostname : 192.168.0.100
port : 8443 (not 443)
 
