From Synology to Docker

Health check? Please elaborate... Hadn't spotted that.
fpm: true ??
I did think asking for a simple docker compose was a little simple for you.. But hey, who am I to argue?
 
:)

This is what my compose looks like. You can ignore the deploy node and the element underneath it (which is a YAML anchor).

Code:
  app:
    image: ${ORGANIZR_IMAGE}
    deploy:
      <<: *default-deploy
    environment:
      PUID: ${ORGANIZR_ENV_PUID}
      PGID: ${ORGANIZR_ENV_PGID}
      TZ: ${ORGANIZR_ENV_TZ}
      fpm: 'true'
      branch: v2-master
    networks:
      private: {}
    ports:
      - published: 8082
        target: 80
        protocol: tcp
        mode: ingress
    volumes:
      - type: volume
        source: config
        target: /config
    healthcheck:
      test: "curl -f http://localhost:80 || exit 1"
      start_period: 10s
      interval: 10s
      timeout: 5s
      retries: 5
The variables are filled in by envsubst on the fly and the rendered file is sent to docker stack deploy -c - organizr. This is part of my "non-Syno deployment", which uses Makefiles and envsubst and is a modularized deployment library. Due to the lack of make and envsubst on Syno, it is not portable out of the box.

The healthcheck does a simple curl to the embedded webserver and restarts the container if it's stale.

I don't remember why fpm is set to true. I've been using Organizr for ages now and never touched the config since. It's just working.
 
I just looked at the Dockerhub description. This is why fpm is set to true in my configuration:
  • -e fpm Used to enable php to use the socket rather than TCP, often increases performance. Valid values are (comma separated): true, false
I understood this as: nginx uses a file-based socket to talk to fpm, rather than a TCP connection over the network stack.
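In nginx terms the difference would look roughly like this (an illustrative snippet, not Organizr's actual config; the socket path is an assumption):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    # TCP variant: requests to PHP-FPM go through the loopback network stack
    #fastcgi_pass 127.0.0.1:9000;
    # Unix-socket variant (what fpm=true selects): a file-based socket,
    # skipping the TCP overhead entirely
    fastcgi_pass unix:/var/run/php-fpm.sock;
}
```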
 
Nice, a good portion of that went over my head. ;)
I've only recently started looking at .env, so I'm still getting my head around it, but as far as I can work out I could specify:
USER1=0010
USERGROUP1=100
USER2=0
USERGROUP2=101

In the .env
and then call the first 2 values in all containers that need to use USER1, and the second 2 values for any containers using the second user account... Is that right, @one-eyed-king
If I am correct, could I then use:
SSD=/volume2/docker
HDD=/volume1
and substitute that in so:
- /volume2/docker/piwigo:/config
becomes
- ${SSD}/piwigo:/config
Could I even go one step further and use:
- ${SSD}/$containername:/config

Am I getting that right?
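For reference, that pattern would look like this (names here are illustrative; note that compose only expands variables actually defined in .env or the shell, so a per-container name variable would have to be declared there too):

```yaml
# .env (hypothetical):
#   SSD=/volume2/docker
#   CONTAINERNAME=piwigo
services:
  piwigo:
    volumes:
      - ${SSD}/${CONTAINERNAME}:/config   # renders as /volume2/docker/piwigo:/config
```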

@kiriak Thank you, I also looked into Lychee. But Piwigo met my needs better, though couldn't now tell you why.
That said as I configure my .env further and with the simplicity of docker-composes I can always spin another up again and find out why.

@one-eyed-king Which version of docker-compose are you using? I tried to upgrade mine recently and it broke as it had a new dependency I didn't have on my Synology. Just wondering if you've solved the issue or if I need to do a bit of work on it when I've got my .env sorted a bit further. :)
 
Honestly, I don't know. The last time I used an .env file was about 5 years ago. I am not entirely sure whether they can be used all around the compose.yml or just in specific elements. If they can, your example looks valid.

I just checked it:
Code:
me@dsm:~/test-env$ cat .env
image=myimage
port=80
volume=/volume
environment=env

Code:
me@dsm:~/test-env$ cat docker-compose.yml
version: '2.4'
services:
  test:
    image: ${image}
    environment:
      myenv: ${environment}
    ports:
    - ${port}:80
    volumes:
    - ${volume}:/test

output
Code:
me@dsm:~/test-env$ docker-compose config
services:
  test:
    environment:
      myenv: env
    image: myimage
    ports:
    - 80:80/tcp
    volumes:
    - /volume:/test:rw
version: '2.4'
Though, it is invalid for top-level entries, like service names, volume names and network names.

You can easily test it: make the change and use docker-compose config to see how the rendered compose.yml would look.

The snippet from my previous post uses the 3.7 schema, which would also work on Syno's Docker Engine (see: Compose file versions and upgrading). Though I use the long syntax where possible (as it is more descriptive) and swarm-specific elements (actually just ports.mode and deploy are swarm-specific). I only use docker-compose on the Syno. It's a pity that the env variables are thrown away in swarm mode deployments, which basically makes swarm mode unusable on Syno.

The long mode syntax for ports and volumes works with docker-compose deployments as well:
Code:
me@dsm:~/test-env$ cat .env
image=dockersuccess/docker-demo
port=8844
volume=/var/run/docker.sock
environment=test

Code:
me@dsm:~/test-env$ cat docker-compose.yml
version: '3.7'
services:
  test:
    image: ${image}
    environment:
      myenv: ${environment}
    ports:
      - published: ${port}
        target: 8080
        protocol: tcp
    volumes:
      - type: bind
        source: ${volume}
        target: /var/run/docker.sock

output:
Code:
me@dsm:~/test-env$ docker-compose config
services:
  test:
    environment:
      myenv: test
    image: dockersuccess/docker-demo
    ports:
    - protocol: tcp
      published: 8844
      target: 8080
    volumes:
    - source: /var/run/docker.sock
      target: /var/run/docker.sock
      type: bind
version: '3.7'

If you run docker-compose up and open dsm:8844, you can see the docker-demo app.
 
@one-eyed-king Which version of docker-compose are you using? I tried to upgrade mine recently and it broke as it had a new dependency I didn't have on my Synology. Just wondering if you've solved the issue or if I need to do a bit of work on it when I've got my .env sorted a bit further. :)
I realize I understood you wrong. I am using the docker-compose binary provided by the docker package.

I assume you installed the package "Python Modules" and tried sudo pip install docker-compose, which fails due to an expected file being a symlink instead of a file?

You can always pull and use the static binary (I used to do that back in the days when docker-compose was not provided by the docker package)
Code:
sudo curl -L "https://github.com/docker/compose/releases/download/1.28.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Be aware that the symlink for docker-compose (which the docker package creates on package start) will be replaced by the downloaded binary.

I tried a couple of commands: all of them succeeded.
 
@one-eyed-king Cool, as I've said, my plan is to leave Synology next time I rebuild my NAS.
Too much relies on them.
Part of the reason I started my Syno to Docker was because every time PhotoStation updated it stopped my wife being able to access our photos.
Plus other things just don't work as they should (still haven't solved SNMP), hardware transcoding is lousy,
Docker is out of date.
I contacted them when GDPR came into effect in the UK and politely pointed out that anyone using the Syno package of MariaDB for any business records was in breach of GDPR, as it was out of date. I was informed I needed to raise a feature request... Yes, enterprise-level systems requiring GDPR-compliant databases in the UK, and that's a feature request... Hmm...

Anyway back on topic. I thought I had pulled the static binary... Maybe not. Thank you again I'll have another look. But no, didn't use sudo pip. :)
 
Ok, so sftpgo cleared the sftp server.
I'm locking down any ports I don't have to have open.
Everything is now behind a reverse proxy with https.
I've consolidated my docker-composes into 1 extremely long docker compose with environment variables.
I've also stopped using the host network to allow containers to talk to each other.
I even had to clone and modify a container using a dockerfile (it was the only way to be able to get a usable UID/GID into sftpgo). :D
Now.... I did want to set up a mail server.... but I can't see why I actually need to.

I know I've seen somewhere on the forums that there's a way of saving web pages for later reading, but as I can't remember what it's called I can't seem to find it.
I was also wondering about a self hosted list of websites, (similar to organizr but only for external websites, if I put them all into there I'd get lost ;) )

Maybe a self hosted chess server? So I can play games against friends. :D

I did also consider something to pull in rss feeds so I can see quickly and easily if there's something new happening in the world that I'm interested in.

I'm also more than happy to share docker-compose snippets should anyone be interested.. I don't use the syno UI for containers anymore.

Oh and anything else (list of what I'm running a few posts above) people now can't live without?
 
I did also consider something to pull in rss feeds so I can see quickly and easily if there's something new happening in the world that I'm interested in
TTRSS works great

Oh and anything else (list of what I'm running a few posts above) people now can't live without?
Standard Notes and Bookstack (wiki) come to mind
 
I'm also more than happy to share docker-compose snippets should anyone be interested.. I don't use the syno UI for containers anymore.
Just put it in the Resources section. I'm sure many users will find it useful.


I did also consider something to pull in rss feeds so I can see quickly and easily if there's something new happening in the world that I'm interested in.
Maybe something like this: Docker - FreshRSS: A free, self-hostable news feed aggregator
 
I even had to clone and modify a container using a dockerfile (it was the only way to be able to get a usable UID/GID into sftpgo). :D
Now.... I did want to set up a mail server.... but I can't see why I actually need to.

Did you try to run the container with user: ${UID}:${GID}? This declaration allows Docker to replace the IDs for the first declared USER, which is USER 1000:1000 for drakkan/sftpgo. Though, the original declaration already has IDs instead of a username:groupname, so I am not 100% certain the `user:` declaration really works with this image.
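As a sketch, the override in question would look like this (the IDs are placeholders for your actual UID/GID; whether the image honors it is exactly the open question here):

```yaml
services:
  sftpgo:
    image: drakkan/sftpgo
    # Overrides the image's built-in USER 1000:1000 at container start
    user: "1026:100"
```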

Hmm, SFTP behind a reverse proxy? Afaik SFTP piggybacks on SSH, while FTPS would be SSL/TLS-wrapped FTP. The first one should be ideal for containers, as it only needs a single port, while the latter would require a control port and a huge range for the PASV data ports, which would really suck with containers. Glad you found a solution!

Congrats on managing your endeavor so well so far :)
 
@SynoMan I'll have a look at that as well.
@one-eyed-king yes, I did try that, it didn't work. Yes, it's drakkan/sftpgo I'm using, and I got the information from a thread in his issues (wrong PC for the exact thread). True, the SFTP isn't behind the reverse proxy (though I may put the web interface behind it); it and VPN are the only extra ports I have open.
 
So, FreshRSS (linuxserver have deprecated their tt-rss, which shows how old the first post in the "what do you run" thread is) and LinkAce (which was a pig).
Is it sad that I can't bear to look at the Syno UI when making containers?
 
Is it sad that I can't bear to look at the Syno UI when making containers?
Welcome to the club :ROFLMAO:
I have moved to Portainer to manage my containers. Just yesterday I migrated from creating my containers with docker create/run to docker-compose, and today I tested my compose.yml files directly in Portainer. So far so good.
 
... and I can't even bear to look at portainer :ROFLMAO:

I have moved to portainer to manage my containers and just yesterday I have migrated creating my containers with docker create/run to docker-compose and today tested my compose.yml files directly in Portainer and so far so good
So creating stacks is NOT broken anymore? If so, this would be really, really helpful for those being allergic to the command line.
 
So creating stacks is NOT broken anymore? If so, this would be really, really helpful for those being allergic to the command line.
Seems like it's not broken, at least for what I have tried. There are still 2 containers out of 10 I'm running from the CLI (adguard still via docker run and portainer via docker-compose).



All the stacks were created directly in Portainer (except portainer itself) from the compose.yml files I had created yesterday in the CLI.

 
