Question Moving MariaDB and others to Docker.

I'm fed up with the fact that Synology apps are very rarely updated.
MariaDB is version 10.3.2 whilst the current version is 10.5.4.
When GDPR came into effect in the UK and I contacted Synology about an update I was basically ignored, despite the fact it meant anyone using MariaDB for any business data in the UK would be in breach.
I'm therefore looking to move from any Synology apps I can to Docker versions.
I thought I'd start with MariaDB. Does anyone have any information? I can see people have done it, but they don't state which docker they chose from the registry, nor what environment variables etc. they used.
Any help would be greatly appreciated.
I'm using this for Kodi multiroom so will also be looking to create a headless Kodi install with the aim of automatically updating the database.
Well, that was much less painful than I'd anticipated. I configured users once I could connect to it.

There doesn't appear to be information as to how to update it when a new version is released, is there a way or will I have to backup and rebuild?
Glad it worked out so easily for you :)

I'm not really a MariaDB user, so I can't really tell much about the effect of updating the RDBMS.
Though, as long as the binary files are still compatible between versions and you mapped a volume for the data folder, it's usually as simple as following these instructions.
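As a sketch of that update path (the container name `mariadb`, the host path `/volume1/docker/mariadb` and the password are placeholders, not taken from this thread), the usual recipe with the official image is pull, remove, recreate:

```shell
# Pull the newer image, then recreate the container on top of the same data volume.
docker pull mariadb:10.5
docker stop mariadb
docker rm mariadb            # the data survives because it lives in the mapped host folder
docker run -d --name mariadb \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v /volume1/docker/mariadb:/var/lib/mysql \
  -p 3306:3306 \
  mariadb:10.5
```

On first start against existing data, newer MariaDB versions may additionally want `mysql_upgrade` run; check the image's release notes before jumping several major versions.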

Sidenote: a container is not a "docker"; I never understood why people call it that... I guess this happens when non-experts spread information to beginners. It always gives me an unpleasant feeling, like someone scratching chalk on a blackboard.
I didn't map a volume, I'm not sure how to (well I know how to, but I'm not sure what it would need to be mapped to). I'm very new to all of this and am not sure what I need to map a volume to for it to save the database in an area not tied entirely to the container. Thank you for your advice and help.

In the examples they use something like -v /path/on/host:/path/in/container to bind-mount a host folder (or just a single file) into a container path. The left-hand side before the colon is always a host path OR a named volume (managed by Docker, invisible to the Syno Docker UI!); the right-hand side after the colon is the target path inside the container. Always use the container path as stated in the documentation, unless you know why you want to use a different path. In the case of the official MariaDB image, you are meant to map a volume from the host (the UI has a file/folder picker for that) to the container path /var/lib/mysql.
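For example, the command-line equivalent of that mapping would be something like the following (the host path and password are placeholders; pick your own):

```shell
# Bind-mount a host folder onto the MariaDB data directory,
# so the database files live on the NAS volume rather than inside the container.
docker run -d --name mariadb \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v /volume1/docker/mariadb:/var/lib/mysql \
  mariadb:10.3
```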

This command-line -v option translates to the UI settings in the "Volume" tab. The setting can be configured in the advanced settings when creating the container OR when you edit a stopped container. Be aware, though: if you didn't have a mapping from the beginning and already worked with the container, the container will mount the mapped host folder on top of the folder in the container, which will make the previously existing files invisible to the container.
Ok, I'm back. I've learnt a huge amount since I started this. I've completely rebuilt my containers and have everything purring nicely. I'm now looking at trying to create nightly backups of my databases, keeping a weeks worth of backups, especially whilst I am evolving things at a rate of knots. Does anyone have any advice on how to do this?
My reading suggested Hyper Backup can back up MariaDB when using the package rather than a container. I'm aware of virtual machine snapshots, but not of Docker or database snapshots; I will see what further information I can find. Thank you.
Tbh, my MariaDB volume is mounted to /volume1/Docker . So as it's accessible from within DSM, I just use HyperBackup to backup my /volume1/Docker folder.

The best thing I've seen to make REAL proper backups of your containers is this:

This is how I would do it:
- ui: shutdown container
- ui: export the container configuration
- shell: create an archive of the volumes: tar czvf myarchive.tar.gz folder (I would suggest to create the archives in the parent folder of your volumes)
- copy configuration json and tar.gz file to the new machine
- shell: extract the archive in the same location on the new DS: tar xzvf myarchive.tar.gz
- ui: import the container configuration (if you get an error that the image does not exist, you will need to empty the value for "id:"; if the tag was replaced by Watchtower, you might need to fix the tag in "images:" as well)
- ui: check the configuration of the container, make sure the volumes are properly recognized and/or adjust the path, if you extracted the volume data in a different location.
- start the container
That's about it :)
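The shell part of those steps might look like this (container name, folder and target host are examples, not from this thread):

```shell
docker stop mariadb                          # ui: shutdown container
cd /volume1/docker                           # assumed parent folder of the volume
tar czvf mariadb-backup.tar.gz mariadb       # archive the data folder
scp mariadb-backup.tar.gz admin@new-nas:/volume1/docker/   # copy to the new DS

# ...then on the new DS, after importing the configuration:
cd /volume1/docker && tar xzvf mariadb-backup.tar.gz
docker start mariadb
```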

If you created your container configuration using docker-compose.yml you can skip the steps to export/import the configuration, and the steps are reduced to stopping the container, creating the archive, copying it to the new machine, extracting it and starting your docker-compose deployment.
Snapshot & Replication +1
I am a huge fan of backing up files at rest.

With distributed applications, you never know if messages are on the wire and/or partially processed at the point in time a snapshot is taken. Backups/snapshots of running distributed systems can be quite complicated. Though, since relational databases usually rely on ACID transactions, a corrupt snapshot is unlikely, but a loss of the latest updates might occur.
Thank you all. Shadow's quote of one-eyed-kings guide looks to be the best solution, especially as I can automate the whole thing in a single script as I'm now using docker-compose to build all my machines :)
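For reference, a minimal docker-compose.yml along those lines might look like this (image tag, host path and password are placeholders, not the poster's actual configuration):

```yaml
version: "3"
services:
  mariadb:
    image: mariadb:10.5
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder, use a real secret
    volumes:
      - /volume1/docker/mariadb:/var/lib/mysql
    ports:
      - "3306:3306"
```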

To get the database backed up (even using a dirty backup) I tried hypervault. It doesn't appear to offer the ability to keep only 7 backups.
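A nightly dump with a 7-day retention can also be scripted and run from DSM's Task Scheduler. A sketch, assuming a container named `mariadb` built from the official image (which exposes `MYSQL_ROOT_PASSWORD` inside the container); the backup folder is a placeholder:

```shell
#!/bin/sh
# Nightly backup sketch: dump all databases from the container, keep 7 days of dumps.
# Set BACKUP_DIR to e.g. /volume1/docker/backups on the NAS.
BACKUP_DIR="${BACKUP_DIR:-/tmp/mariadb-backups}"
mkdir -p "$BACKUP_DIR"

# Dump all databases (skipped when docker isn't on PATH, e.g. when dry-running):
if command -v docker >/dev/null 2>&1; then
  docker exec mariadb sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' \
    > "$BACKUP_DIR/all-databases-$(date +%F).sql"
fi

# Rotation: delete dumps older than 7 days.
find "$BACKUP_DIR" -name 'all-databases-*.sql' -mtime +7 -delete
```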
I'm trying to migrate my DSM MariaDB to Docker as well (the linuxserver version), however I have problems connecting to it through phpMyAdmin (also installed in Docker). I get "access denied" every time I try to log in with the root user (and yes, I've specified an environment variable for that password).
Anyone got any ideas what I could check? Logfile doesn't mention any obvious problems as far as I can tell.
How did you start your container? From docker-compose? From the UI, where you perhaps forgot to link the MariaDB server in the configuration of the phpMyAdmin container? You might want to share pictures of the configuration of both containers (the "Links" tab needs to be visible).
I'm only using the Docker UI within DSM. I actually have a link from the phpMyAdmin-container (below). I have no link in the MariaDB-container settings.
