The primary answer for you is described in your first thread posted on this forum, here
In general, as you can see, a Network Attached Storage (NAS) device can run more services than just file storage, e.g.:
- Synology-environment-dependent packages or servers (Drive, ...)
- Independent environments running in "containers" via Docker. One such independent environment is the mentioned UniFi controller. You can install the controller directly on a PC (Win, Mac, Lnx), but then, if you need the service to stay available, you have to run the PC 24/7 (or during operating hours). Or you can put the controller into the NAS (Docker) and all your imaginable problems are solved (even for multi-site operation).
Docker is a kind of PaaS virtualization platform. You can read more in the Docker wiki or search YouTube; there are tons of useful documents.
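If you want to get a feel for Docker before touching the NAS, the basic workflow is only a couple of commands (a minimal sketch using Docker's official `hello-world` test image; run it on any machine with Docker installed):

```shell
# Download Docker's tiny self-test image from Docker Hub.
docker pull hello-world

# Run it once; --rm deletes the container again after it exits.
docker run --rm hello-world

# See which images are stored locally and which containers exist.
docker images
docker ps -a
```

The same `pull`/`run` pattern is all you need later for the UniFi controller image, just with ports and a data volume added.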
This is the short story of the NAS's evolution from data storage to a multi-service environment.
The original intention of NAS was to provide additional storage for client devices, accessed over the normal network. That's different from a SAN (Storage Area Network), which uses dedicated network infrastructure to deliver high-speed, secure storage to enterprise servers. A SAN is very focused on the task of fast, dedicated storage, while a NAS competes for network access along with all the other devices.
Running a NAS for file services (AFP, SMB) doesn't take much computing power, but the underlying control software is most often a flavour of Linux and has the ability to do so much more. The first NAS I had came with 256MB RAM and a fairly weak CPU; it worked fine as a NAS, but it was under-powered for running its own admin web interface. So the makers added more oomph to address that ... and now there's spare power to run other stuff, and a marketing advantage over competitors.
.... eventually the NAS has become an off-the-shelf general-purpose server with maintained software releases to address security issues, bugs, and upgrades. This has become an easy alternative for those who can't, or don't want to, build and maintain a server themselves.
Docker is a package/service that enables virtualised 'slim' servers (containers) to run on top of the main OS. Within Docker, multiple containers can be running, and each can have as broad or as focused a functionality as the creator of the container wanted.
Two mandatory prerequisites for VMs in Syno NASs:
... 4GB+ RAM
... specific models only
Docker - no such limitations.
There is no question of which is better (VM vs Docker), because the use of each is driven by its purpose. They are similar from the isolated-environment point of view.
I found a really clear description for you on the web (too lazy to write my own):
Docker is container-based technology, and containers are just user space of the operating system. At the low level, a container is just a set of processes that are isolated from the rest of the system, running from a distinct image that provides all the files necessary to support those processes. It is built for running applications. In Docker, the running containers share the host OS kernel.
A Virtual Machine, on the other hand, is not based on container technology. VMs are made up of the user space plus the kernel space of an operating system. Under VMs, the server hardware is virtualized; each VM has its own operating system (OS) and apps, and shares hardware resources from the host.
VMs & Docker – each comes with benefits and demerits. Under a VM environment, each workload needs a complete OS, but with a container environment multiple workloads can run on one OS. The bigger the OS footprint, the more the environment benefits from containers. This brings further benefits like reduced IT management resources, smaller snapshots, quicker spin-up of apps, reduced & simplified security updates, and less code to transfer when migrating workloads.
However, containers are typically much smaller and faster, which makes them a much better fit for fast development cycles and microservices. The trade-off is that containers don't do true virtualization; you can't run a Windows container on a Linux host, for example.
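You can see that kernel sharing for yourself on any Linux Docker host (a quick check, assuming the small `alpine` image from Docker Hub):

```shell
# The kernel version reported from inside a container...
docker run --rm alpine uname -r

# ...matches the host's own kernel, because containers share the host kernel.
uname -r

# A VM, by contrast, would report whatever kernel its own guest OS boots.
```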
And a specific answer regarding the UniFi controller in Docker:
- you can make a copy of the container in seconds (for testing new versions); it can save your mental health when a UniFi upgrade brings trouble, because you can immediately fall back to the previous version
- you can run several versions (containers) of the controller side by side, also for testing. Perfect for fallback.
- backup/restore of exactly the controller of your choice
- transporting a container from one NAS to another is really easy and fast
- I'm not 100% sure you can install multiple different versions (isolated) of the UniFi controller on a single machine (single OS); I haven't tried it yet (especially on Windows it could be a problem). For 90% of home users it isn't a mandatory feature anyway.
- and I'm not sure how you would upgrade only one chosen controller in a Windows environment
This possible issue was my point when comparing UniFi controller operation between Docker and a VM, as one of the advantages of Docker.
- what is 100% sure: you can use backup/restore of the configuration between different OS platforms, e.g. from Windows to a Raspberry Pi. It works.
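To illustrate the points above (version pinning, fallback, easy backup and transport), the controller can be described in a small `docker-compose.yml`. This is only a sketch: `jacobalberty/unifi` is one popular community image, and the ports and volume path are examples to adapt to your setup; in practice you would pin an exact version tag instead of `latest`.

```yaml
version: "3"
services:
  unifi:
    # Pin an exact version tag here instead of "latest": upgrading is then
    # just a tag change, and falling back is changing the tag back.
    image: jacobalberty/unifi:latest
    restart: unless-stopped
    ports:
      - "8443:8443"   # controller web UI (https)
      - "8080:8080"   # device inform port
    volumes:
      # All controller state lives in this directory, so backup/restore or
      # moving to another NAS is just copying it.
      - ./unifi-data:/unifi
```

Because the configuration and data sit in the mounted `unifi-data` directory, copying that directory plus this file to another NAS and running `docker-compose up -d` there recreates the same controller.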