Question Docker, VM and data security


Hi there,
I own a new DS machine and intend to use it for the following tasks: (private) data storage, personal devices' backups, run Docker with a few containers like PiHole, Bitwarden, RSS, etc., as well as to host a personal website and run a Mail server with a "catch them all" function. That's a fair mix of private data handling and exposure to the WWW. For this reason, and because I am neither an IT specialist, nore a security expert, my first idea was to run Docker on the "real" DSM beside my private data, and let the web hosting and webmail tasks run on a virtual machine.

Then I thought it could be easier to run the web hosting and webmail servers in Docker containers as well, since these are supposed to be sandboxed. That way I could avoid VM management and save some hardware capacity.

Now, I understand the differences between running a proper VM and running containers with Docker, but I couldn't find much info about Docker's security on Synology NASes. Some people consider Docker safe enough to let containers run in the same environment as their private data, but others don't. I found this thread on SynoForum. There is some good advice there, but as a non-specialist I have no idea how to check whether my container is "requiring and exposing privileged mode to the internet" or not. Also, Fredbert wrote:
Some containers take host UID/GID parameters so I'd be careful not to use a NAS admin on the off-chance something does go wrong.
Does it mean that if I install and run a container from my admin account, this process could get admin rights in case of a hack or bug?

What config would you set up if this were your own device and these were your tasks? Would you run all of them in Docker containers, or in one or several VMs?

Thanks a lot for your advice!

I set up most containers with a user-account PUID/PGID. Stick with popular sources (original sources if available)... and high volume downloads... is a decent provider of Docker images.
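As a concrete sketch of the PUID/PGID approach (the image name and volume path below are hypothetical examples, not from this thread):

```shell
# Look up the numeric IDs of the current user over SSH. Do this while
# logged in as a dedicated, non-admin DSM account.
uid=$(id -u)
gid=$(id -g)
echo "PUID=$uid PGID=$gid"

# Hypothetical: pass them to an image that supports PUID/PGID, so files
# the container writes into the mapped volume belong to that restricted
# user rather than to root or an admin account:
#   docker run -d -e PUID="$uid" -e PGID="$gid" \
#     -v /volume1/docker/appdata:/config example/app
```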

But yes... there's a risk there... as there is in using the Synology OS and apps.
You will know when a container needs to run in privileged mode, because the Docker Hub description/example will indicate it. When you create a container using the UI, you have to explicitly enable "execute container using high privilege", which informs you about the risk. Exposing services to the internet from a privileged container is a higher risk than from a normal container - but not higher than running the same service on the host directly.
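For reference, that UI checkbox corresponds to the --privileged flag on the command line; often a single capability is all an image really needs (a sketch with hypothetical image names - do not run these as-is):

```shell
# Full privileged mode: the container's root gets nearly all host
# capabilities. Only enable it when an image's documentation requires it:
#   docker run -d --privileged example/needs-privilege

# Granting one specific capability is usually the smaller risk, e.g.
# NET_ADMIN for VPN clients or other network-manipulating tools:
#   docker run -d --cap-add=NET_ADMIN example/vpn-client
```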

The Docker daemon always runs as root! Though by default, only root is allowed to access the docker CLI command (more accurately, the docker.sock it instruments). Whoever is able to successfully execute docker commands can mount your / folder into a subfolder of a container and change things at will. So make sure to give access to the docker command only to users you trust!
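To illustrate why that access must be guarded - the mount described above takes only one command (hypothetical, do not try this on data you care about):

```shell
# Mount the host's entire / into a container at /host and open a shell:
#   docker run -it --rm -v /:/host alpine sh
# Inside the container, /host is the NAS filesystem with full write
# access: config files, user data, everything. Anyone who can run
# docker commands effectively has root on the host.
```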

Every container started can potentially run processes as root - this depends on how the container image is designed. However, the root user inside a container lacks most of the capabilities that root requires for low-level tasks such as changing network addresses or mounting filesystems. Usually, if a container requires additional capabilities, you will see it in the Docker Hub description.

Many images are designed to start the main process in the container under a normal (restricted) user account. Either they have a fixed UID:GID, or they allow replacing it using ENV variables. This becomes important when you add volumes to your container and want to make sure the container is actually able to read/write in them.
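In practice that means aligning the host folder's ownership with the container's user (a sketch - the path and the 1027:100 IDs are examples matching a hypothetical PUID/PGID):

```shell
# Give the container's restricted user ownership of the mapped host
# folder so it can read and write there:
#   sudo chown -R 1027:100 /volume1/docker/appdata
# Then map the folder into the container with matching IDs:
#   docker run -d -e PUID=1027 -e PGID=100 \
#     -v /volume1/docker/appdata:/config example/app
```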

If a container process is exposed to the internet and someone exploits it, they might potentially manipulate whatever files are in the volumes you mapped into the container and replace e.g. a script or binary with something harmful... but there is no harm beyond that.
Does it mean that if I install and run a container from my admin account, this process could get admin rights in case of a hack or bug?
This was related to how the container interacts with the NAS filesystem. If UID/GID container environment parameters can be set, then these will be used when reading/writing the folders the container has been given access to. As @Telos said, it's safest to use a normal DSM user account and group: it won't affect the container, but it will limit who on the DSM/host OS side can access the files.

Docker itself already runs its processes as 'root': from the SSH command line, check with ps -ef | grep -i docker. It needs privileged access to interact with parts of the host OS. But Docker isolates containers such that they operate within their own constrained environment and have only a restricted set of accesses to the underlying host.
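To check this yourself over SSH (exact output varies by DSM version):

```shell
# List processes and keep only Docker-related lines. The [d] trick stops
# grep from matching its own entry in the process list. On a NAS running
# Docker, the dockerd line shows root in the first column.
ps -ef | grep -i '[d]ocker' || echo "no docker process found"
```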

If you consider that a container is like any other server on your network, then how you configure it dictates how much access it has to your network and connected devices. Take the same care over security as you would for any new device on the network, whether it's a Docker container, a VM image, IoT, PC, Mac, home media device, etc.

Again, as @Telos said, look for official and reputable sources.
Thank you all for your replies!

So, taking all that into account, there isn't more risk in hosting a website in Docker than in a dedicated VM (or did I misunderstand something)? Is that how you would work personally, with Docker for website self-hosting, or would you still set up a dedicated VM? Or maybe even something else?

look for official and reputable sources.
Sure... but that's not always easy to evaluate as a non-expert end user.

Thanks again!


