Docker advice please

After originally using third-party community packages when I first got my NAS, I've moved to Docker and think it's great. I'm just after a bit of advice from someone with more experience, please.

I've been using Docker for quite a while now, so I'm comfortable with deploying containers and basic troubleshooting. To date I've just used the built-in Synology UI, but I'm now having some odd issues.

I'm running a SWAG container from linuxserver.io to allow external access to my containers, and this works great. I've just started having an odd issue, though, where I can't deploy it via the Synology Docker UI because it complains the ports it needs are already in use. netstat shows the ports aren't in use, as I run a script at boot to free up 443 and 80. Oddly, I can start the container from within Portainer fine, but it then doesn't show in the Docker UI. I've also run docker container ls -a and, aside from SWAG, nothing is using those ports. It just looks like something in the Docker UI isn't quite up to date. Any ideas how I can fix that?

Second query, which may actually negate the need for an answer to the first: should I move to Docker Compose instead of using the UI? I've seen a fair bit about this and have started to search for containers that extract the Docker Compose from a running container. What benefits does Docker Compose bring, and what's the best way to switch over to using it?

Last question (and thanks if you're still reading this): what's the best way to back up my whole Docker setup? I have all containers using config folders on my NAS, so I can save all of those. If I did move to Docker Compose, would I then just need to save that Compose file, and that's pretty much a full backup?

Cheers, any advice appreciated.
After writing this I decided to have a look round the forum and found someone recommending the following:

Code:
sudo synoservice --restart pkgctl-Docker

That's sorted my first query; SWAG now shows in the Docker UI. I'd restarted my NAS a few times and killed and restarted the Docker package, but that command has now done the trick.

If anyone has any advice on Docker Compose and backups, that would be great. Thanks.
 
If I did move to Docker Compose, would I then just need to save that Compose file, and that's pretty much a full backup?
Correct. That, and all the bind-volume content living on your NAS.
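If you want to script it, a minimal sketch of that backup as a small shell function (the folder layout and paths in the example are assumptions, adjust to your NAS):

```shell
# Sketch: archive a folder holding your docker-compose.yml and
# bind-mounted config dirs into a dated tarball.
# Usage: backup_docker SRC_DIR DEST_DIR
backup_docker() {
  src_dir="$1"
  dest_dir="$2"
  archive="$dest_dir/docker-backup-$(date +%F).tar.gz"
  # -C so archive paths are relative to the source folder
  tar czf "$archive" -C "$src_dir" . && echo "$archive"
}

# Example (hypothetical Synology paths):
# backup_docker /volume1/docker /volume1/backup
```

Run it from Task Scheduler if you want it automated.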

Should I move to Docker Compose instead of using the UI?
Well, yes. You mentioned you use Portainer as well; you can use Compose scripts in the Stacks section of Portainer.

The point is the Synology UI will not give you all the options that you can run and execute with docker run or Compose.
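For example, a docker-compose.yml sketch with a few options the Syno UI doesn't expose (the image is the SWAG one from this thread; the IDs and paths are placeholders to adapt):

```yaml
version: "3"
services:
  swag:
    image: linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN                    # added capability - not settable in the Syno UI
    environment:
      - PUID=1026                    # placeholder user/group IDs
      - PGID=100
    volumes:
      - /volume1/docker/swag:/config # hypothetical bind-mount path
    ports:
      - "443:443"
      - "80:80"
    restart: unless-stopped
```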

Can't deploy this via the Synology Docker UI as it complains the ports needed are already in use.
Then something is using those ports. SWAG is a reverse proxy that will try to bind 80/443, so you will get an error.

Oddly, I can start the container from within Portainer fine, but it then doesn't show in the Docker UI.
This is a common bug. Restarting the Docker package will/should refresh the view.
 
Thanks Rusty, I'll have a look at Docker Compose. I might cheat and use something to extract the Compose from my existing running containers, then see how I can add that into Portainer Stacks.
 
Eventually, people using Docker containers end up using docker-compose :)

People claiming to be too lazy to learn docker-compose are actually the opposite of lazy, as creating and keeping containers updated from the UI or with docker run is cumbersome compared to the one-time effort of creating the container configuration in docker-compose.yml.

Usually you can start off with a docker-compose.yml provided by the image maintainer; if it's not found on Docker Hub, it's usually in the matching GitHub project. Once the docker-compose.yml is configured, you can easily change parts of the configuration and have docker-compose redeploy only the containers that changed.

Updating a container to the latest image version becomes very easy:
If you use a specific image tag: update the image tag and run docker-compose up -d
If you use the latest tag: docker-compose pull && docker-compose up -d

There should be a post in this forum about how to derive docker-compose.yml definitions from running containers. The tool just uses a clever way to combine the output of several docker inspect calls and derive a docker-compose.yml from them.
 
Cheers, is this the post you meant?


I assume I just need to copy that code into a .sh script file and then run it?
 
Yep, that one.

You don't necessarily need to put the command(s) in a bash file.

If you use my while loop, make sure to become root first. I checked the red5d/docker-autocompose image; it still works and actually produces the exact same output as the gerkiz/autocompose image.
 
I tried both containers mentioned in that thread, and while creating the YML for a specific container, I noticed that Synology notifications said the container stopped unexpectedly when I ran the command.

I also noticed that even though the autocompose container (either one) was then shown as not running in the Synology Docker GUI, I could continue to execute the command line and create YML files for additional containers.

Is this expected behavior? ... that the compose container stops, yet "runs" briefly with each successive command.
 
I tried both containers mentioned in that thread, and while creating the YML for a specific container, I noticed that Synology notifications said the container stopped unexpectedly when I ran the command.
Hmm, I get no notifications about unexpectedly stopped containers when I run those containers.

I also noticed that even though the autocompose container (either one) was then shown as not running in the Synology Docker GUI, I could continue to execute the command line and create YML files for additional containers.
If the container is started with the --rm parameter, it is expected that the container does its single task, stops, and gets immediately removed.
that the compose container stops, yet "runs" briefly with each successive command.
That's not how it works ^^ The main process starts quickly, does its job, and finishes... each time you create a container.

Test it yourself: add --name autocompose and see what happens when --rm is not used; run the command twice and check the outcome. Then try it again twice with --rm.

The whole create/run/stop/remove cycle happens so quickly that you barely feel a performance difference compared to a local binary.
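To make the experiment concrete, the commands would look roughly like this (using the gerkiz/autocompose image from above; run over SSH on the Docker host, so treat it as a sketch):

```shell
# Without --rm: the stopped container sticks around, so the second
# run fails with a "name already in use" error.
docker run --name autocompose -v /var/run/docker.sock:/var/run/docker.sock gerkiz/autocompose homeassistant
docker run --name autocompose -v /var/run/docker.sock:/var/run/docker.sock gerkiz/autocompose homeassistant
docker rm autocompose   # clean up the leftover container

# With --rm: the container is removed after each run, so both succeed.
docker run --rm --name autocompose -v /var/run/docker.sock:/var/run/docker.sock gerkiz/autocompose homeassistant
docker run --rm --name autocompose -v /var/run/docker.sock:/var/run/docker.sock gerkiz/autocompose homeassistant
```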
 
Using the command:
Code:
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock gerkiz/autocompose homeassistant
or, repeatedly
Code:
sudo docker run -v /var/run/docker.sock:/var/run/docker.sock gerkiz/autocompose homeassistant
has no effect on the output.

However, if I first start the container (via the Docker GUI) and then run either of those commands, the container stops and the DSM notification appears:
[attached screenshot: DSM notification that the container stopped unexpectedly]


Apart from all that, it's apparent that there is no need to manually start the container using the Synology GUI. Who knew?
 
I've still not got round to moving to Docker Compose instead of the Synology GUI and Portainer, but I have a few questions if anyone can give me a bit of advice, please.

I've now set up Authelia to control access to most of the apps I have running in Docker containers. I've done this alongside linuxserver.io SWAG.

Now I'm looking to expand a bit and add some monitoring/alerting. I've already set up Netdata and it's quite good, but it's maybe a little overkill and complicated for my needs. Can anyone recommend any other monitoring tools? I'm mainly interested in something fairly lightweight, just showing CPU, disk, RAM, etc., across the system and all containers.

In terms of alerting, what I'd really like is a Pushover notification when one of my main containers goes down or isn't responsive, just to allow me to proactively restart it. I've currently got Monitorr set up on my Organizr homepage, but I want something a bit better.

I've seen people using a healthcheck parameter in their Docker Compose, but setting up healthchecks.io seems better to me due to notifications and the fact that it can integrate with Organizr like Monitorr does. I can see there's a linuxserver.io image for healthchecks.io, as well as a free hosted option from them directly. What I have no idea how to do is use either to test whether a container is working. I'm also not sure whether having Authelia for security would prevent any test from the healthchecks.io site via the internet.

Has anyone got healthchecks.io set up, or could anyone recommend another way to test containers and get notifications if there are issues?

Thanks
 
For anyone else interested in using healthchecks, I've managed to get this working now.

I deployed and amended the following script to reflect all my running containers, and scheduled it to run automatically via Synology Task Scheduler.


It took a bit of playing around to get the right links to test and the right curl commands. I've also been unable to test links externally, as I use Authelia to secure my Docker containers, but I figured a health check on the internal link was good enough for me.

After setting this up, I've also added Pushover to healthchecks via their website, so I now get messages to my phone when one of my containers is down.
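For anyone wanting to roll their own, the core of such a check boils down to something like this (the service URL and hcuuid value are placeholders; assumes the hosted hc-ping.com endpoint that healthchecks.io uses):

```shell
HC_BASE="https://hc-ping.com"   # assumption: hosted healthchecks.io ping endpoint

# check_service URL HC_UUID - curl the service's internal URL; on success
# ping the check's UUID, on failure ping its /fail endpoint so an alert fires.
check_service() {
  url="$1"
  hcuuid="$2"
  if curl -fsS -m 10 --retry 3 -o /dev/null "$url"; then
    curl -fsS -m 10 "$HC_BASE/$hcuuid" > /dev/null
  else
    curl -fsS -m 10 "$HC_BASE/$hcuuid/fail" > /dev/null
  fi
}

# Example (placeholder port and UUID):
# check_service "http://localhost:8123" "your-hcuuid-here"
```

One call per container, scheduled via Task Scheduler, matches the setup described above.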
 
Hi gdb19...

So this is nothing related to Docker, just a script running on your Synology...

Any chance you could drop some hints in as to what you did to get these health checks running? Looks really interesting...
 
You might want to add some lightweight system monitoring to the mix:
usually people end up with a metrics database like Prometheus or InfluxDB, plus matching metrics collector agents, and Grafana to visualize them.
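As a rough starting point, a docker-compose.yml sketch of such a stack (the images are the usual ones; paths, ports, and the pinned version are assumptions to adapt):

```yaml
version: "3"
services:
  influxdb:
    image: influxdb:1.8              # 1.x, as suggested elsewhere in this thread
    volumes:
      - /volume1/docker/influxdb:/var/lib/influxdb   # hypothetical path
    ports:
      - "8086:8086"
    restart: unless-stopped

  telegraf:
    image: telegraf
    volumes:
      - /volume1/docker/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro
    depends_on:
      - influxdb                     # telegraf can reach it at http://influxdb:8086
    restart: unless-stopped

  grafana:
    image: grafana/grafana
    volumes:
      - /volume1/docker/grafana:/var/lib/grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
```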

When it comes to docker-compose:
An analogy would be a lumberjack who uses a blunt axe to cut down trees (= the Syno UI). Instead of sharpening the axe (= docker-compose), he carries on with the blunt one, arguing that he has too much work to do and no time to sharpen it, and so keeps cutting in a slower, more cumbersome way. Trust me, you will love working with the sharpened axe once you've taken the time to master sharpening it.
 
Hi gdb19...

So this is nothing related to Docker, just a script running on your Synology...

Any chance you could drop some hints in as to what you did to get these health checks running? Looks really interesting...
It doesn't run in Docker, but it does check for a response from my Docker containers to make sure they're running.

It just needed a bit of setup to change the ports being used and add the healthchecks ID for each check in the hcuuid fields (generated via the healthchecks.io website). I had to tweak some of the curl commands and the links used to check the services, but to be honest that was just a lot of trial and error.

I've attached the script I used, after masking some fields, in case that helps.
You might want to add some lightweight system monitoring to the mix:
usually people end up with a metrics database like Prometheus or InfluxDB, plus matching metrics collector agents, and Grafana to visualize them.

When it comes to docker-compose:
An analogy would be a lumberjack who uses a blunt axe to cut down trees (= the Syno UI). Instead of sharpening the axe (= docker-compose), he carries on with the blunt one, arguing that he has too much work to do and no time to sharpen it, and so keeps cutting in a slower, more cumbersome way. Trust me, you will love working with the sharpened axe once you've taken the time to master sharpening it.
Cheers. I do fully intend to look at Docker Compose; it's next on the list, and I've already used the red5d/docker-autocompose container to generate a Compose file for all my running containers, so I probably just need to adapt that.

I have got Netdata installed, but I'd seen mention of Prometheus and Grafana, so I'll check them out too.

Cheers
 
You might want to add some lightweight system monitoring to the mix:
usually people end up with a metrics database like Prometheus or InfluxDB, plus matching metrics collector agents, and Grafana to visualize them.

When it comes to docker-compose:
An analogy would be a lumberjack who uses a blunt axe to cut down trees (= the Syno UI). Instead of sharpening the axe (= docker-compose), he carries on with the blunt one, arguing that he has too much work to do and no time to sharpen it, and so keeps cutting in a slower, more cumbersome way. Trust me, you will love working with the sharpened axe once you took the time to master sharpening it.
I've been messing around with InfluxDB and Telegraf, but I can't seem to get Telegraf to connect to the database.

If anyone has a guide on how to set this up, I'd really appreciate it. I have checked one link from here, but it's based on an older version of Influx.


Cheers
 
I've been messing around with InfluxDB and Telegraf, but I can't seem to get Telegraf to connect to the database.

If anyone has a guide on how to set this up, I'd really appreciate it. I have checked one link from here, but it's based on an older version of Influx.


Cheers
I have literally just started working on a rewrite of that article to accommodate the new Influx v2 changes. I should post it in a few days.

What problems do you have exactly?
 
To be honest, I think I've been trying to cobble things together from too many different guides. At the moment I have InfluxDB up and running OK, but I can't get Telegraf to connect to the database.

I think I'm probably best off just clearing the containers and taking a look at your updated post, as I'm sure that will get it working.
Thanks
 
InfluxDB 2.x uses the query language Flux, while InfluxDB 1.x uses the query language InfluxQL. The two are completely different, and existing InfluxQL queries need to be migrated to Flux.

I tested it yesterday while turning Rusty's blog post into a docker-compose.yml. It was quite straightforward, except when it came to registering the InfluxDB in Grafana, and of course the surprise when I imported pre-existing dashboards and found that none of them actually uses Flux queries under the hood. A dashboard usually has many panels, each panel has one or more metrics, and each query needs to be migrated from InfluxQL to Flux to make it work.

That said, for the sake of comfort I would advise staying with InfluxDB 1.x for the time being :)
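To give a feel for the difference, the same 5-minute CPU mean in both languages (the measurement, field, and bucket names here are assumptions based on a typical Telegraf setup):

```
-- InfluxQL (InfluxDB 1.x)
SELECT mean("usage_idle") FROM "cpu"
  WHERE time > now() - 1h GROUP BY time(5m)

// Flux (InfluxDB 2.x)
from(bucket: "telegraf")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_idle")
  |> aggregateWindow(every: 5m, fn: mean)
```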
 
InfluxDB 2.x uses the query language Flux, while InfluxDB 1.x uses the query language InfluxQL. The two are completely different, and existing InfluxQL queries need to be migrated to Flux.

I tested it yesterday while turning Rusty's blog post into a docker-compose.yml. It was quite straightforward, except when it came to registering the InfluxDB in Grafana, and of course the surprise when I imported pre-existing dashboards and found that none of them actually uses Flux queries under the hood. A dashboard usually has many panels, each panel has one or more metrics, and each query needs to be migrated from InfluxQL to Flux to make it work.

That said, for the sake of comfort I would advise staying with InfluxDB 1.x for the time being :)
I have this task on my list, to update my article for the Influx 1 > 2 migration, but I haven't gotten around to it yet. I need to fork my setup as a test, pull it through an Influx migration process, and see if and how the dashboards behave.
 
