Docker ".../mounts/shm" ? What? Why?



DS418play, DS202j, DS3623xs+, DSM 7.3.3-25847
Just curious... When running `df -h`, I have several lines reading:
64M  6.3M   58M  10% /volume1/@docker/containers/5aa0312poewfj76efvevb48640d3cskux82jfb6a804ab1777467c/mounts/shm
Three of my 12 running containers (calibre, jackett, sonarr ... notably, NOT radarr) appear like this. When I stop the container, the entry vanishes.
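If you want to check which container such a mount belongs to, the container ID is embedded in the path. A small sketch for pulling it out (the path below uses a made-up sample ID; substitute one from your own `df` output):

```shell
# Extract the container ID from an shm mount path as shown by df.
# Sample path with a made-up ID; use a real one from your own output.
path="/volume1/@docker/containers/0123456789abcdef/mounts/shm"
id=${path#*/containers/}   # drop everything up to and including "containers/"
id=${id%%/*}               # keep only the ID segment
echo "$id"
```

The resulting ID can then be mapped to a container name, e.g. with `docker inspect -f '{{.Name}}' "$id"`.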

What is it about these containers that differs from the others? I'm curious what setting, if any, I may have used that has this effect.

There is nothing to worry about: it's shared memory used for inter-process communication. I highly suggest leaving it alone :)

Update: maybe I am mistaken. Normally every container should have a mount like this for shared memory (the size can be adapted depending on the application's needs):

shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
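For reference, if an application does need more shared memory than the 64 MiB default, the size can be set when the container is created, e.g. via `shm_size` in a docker-compose file (service and image names below are just examples):

```yaml
services:
  app:                  # example service name
    image: nginx        # example image
    shm_size: '256mb'   # raises /dev/shm from the 64 MiB default
```

The same can be done on the command line with `docker run --shm-size=256m`.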

That said, I am not sure what the mount you see is, as shm should be of type tmpfs (i.e. in-memory) and not live on a filesystem.

update2: I thought you meant inside the container. I do have those on the Syno as well, but on none of my other Docker hosts.

update3: I think I understand what it is: it is the mountpoint of shm inside the container's filesystem, so what I wrote in the first line is indeed correct.

You can see that it's the shm of type tmpfs if you check it like this:
mount | grep shm
The output will provide the relevant details:
/dev/shm on /dev/shm type tmpfs (rw,nosuid,nodev,relatime)
shm on /volume2/@docker/containers/df06e86152a9fe9a46ee980c19e53c7c1635ca08ee36ef34ff6b9b2b6ff83855/mounts/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
To clarify... this was run from the NAS root "/", not from within a Docker container.
Why only 3 containers? What distinguishes them that this would be present?
I have no idea ^^ As far as I know, every container has an shm mount; if you enter the containers you will see that each one does. Though, I really have no idea why an additional mountpoint is created in the container's filesystem and the shm is mounted into it.

I looked at my Plex container, which has the mountpoint in its container folder, but inside the container there is no evidence of special treatment that would lead to this.

Does it happen for freshly created containers as well, or is it a behavior of containers that were created a long time ago?

To verify my suspicion I stopped and deleted my watchtower container. It also had an shm mountpoint in its container filesystem. The newly created one does not! The same behavior can be observed if the affected container was created with docker-compose.

Some vague theories about why it exists for some containers:
1. it is something the Syno does after a reboot (I doubt it, but I am also not going to test it)
2. it is something watchtower does when replacing the container after an image update
3. the affected containers were created with a Docker version that actually had that behavior. Since a container is not re-created after updating the Docker version, it still shows the old behavior. My affected containers were created 13 months ago.
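To check theory 3, the creation time of each container can be listed with the standard docker CLI (just a sketch; the guard is only there in case docker is not on the PATH):

```shell
# List each container's name and creation time, to see whether only
# old containers carry the extra shm mountpoint (requires the docker CLI).
if command -v docker >/dev/null 2>&1; then
  docker ps -a --format '{{.Names}}\t{{.CreatedAt}}'
else
  echo "docker CLI not available on this host"
fi
```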
There's no timeline break specific to these containers: radarr and sonarr were created together, but only sonarr shows this behavior. I'll "recreate" it to see if there is a difference. I have 4 containers created through Portainer "Stacks" (all docker-compose); none of those show this.

As you said, it's probably (hopefully) inconsequential 🙂
