Docker Swarm Ingress not working on one out of two synology nodes?

Hi folks, I am running Docker package 18.09.0-0513.
I have a DS1815+ and a DS918+.

When I set up a swarm, the ingress network works fine on one of the nodes but not on the other (the DS1815+).
I have tried re-installing Docker.
I have put the service container on one node and then on the other node, and I can still only access it via the DS918+.
I have checked whether anything else is camping on the published port on the DS1815+ - it isn't.
The firewall is turned off on both nodes.
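
For anyone wanting to check the same things, a rough sketch of the checks (the port numbers are the standard swarm control/data-plane ports, nothing Synology-specific):

Code:
# confirm both nodes show up and are Active/Ready
docker node ls
# verify the swarm ports are listening on each node:
# 2377/tcp (cluster management, managers only), 7946/tcp+udp (node gossip), 4789/udp (overlay/ingress data path)
sudo netstat -tulpn | grep -E ':2377|:7946|:4789'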

I am stumped.

Does anyone else have a multi-node Synology swarm working?
 
Just out of curiosity, did you check whether declared environment variables are still available in the containers created by the swarm tasks? In my experience they are not. I can confirm that ingress works fine on a single machine, though.

With ingress, the published ports are not owned by the container; they are owned by the service.
While docker ps doesn't show them, docker service ls should.
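
For example (the service name is a placeholder):

Code:
# published ports show up on the service, not in docker ps
docker service ls
docker service inspect --format '{{ json .Endpoint.Ports }}' my_service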

Sorry, I lack a second machine to perform tests. Given the situation with the "dropped" environment variables, though, I consider swarm mode basically broken in Synology's Docker.
 
Hi, checking whether the new package finally fixed the env issue is on my to-do list; I will get to it later today.
The env issue is why I never bothered with Portainer when I first installed it a year or so ago.

I got distracted by this issue because this time I decided to set up Portainer using a stack instead of a docker run command line. I spent five hours trying to figure out why the ports were not publishing on the master where Portainer was installed, which forced me to learn what ingress does - I had never grokked that one can reach a published port on any node. So imagine my surprise when I accessed the Portainer UI via the port on the worker node and reached the container on the master node.

I have even uninstalled Docker on the DS1815+, rebooted and re-installed, and removed all my custom networks like my macvlan, etc. - and I am stumped :-(

I will report back on the env vars.
 
Been there, done that -> still broken.
Crud :-(

What do you think the solution is here? I know Docker can run nested, but I have never tried it. Can one run something like MicroK8s in a Docker container and get full features, or is it really best to give up on Synology Docker and run something like a Debian Linux VM with Docker / MicroK8s installed? I need access to things like macvlan for Pi-hole and home automation software that needs to work with Avahi...
 
Your requirements suggest swarm was never the right fit. Even though it is possible to span a macvlan across cluster nodes, unlike docker-compose, swarm does not allow you to assign fixed IPs. The same is true for assigning devices, using privileged mode and many other "low level" details.

Syno bare metal:
I tried to install k8s with kubeadm and Minikube, which failed due to missing kernel modules. I didn't try to run a container-based k8s distro like Kind (which I wasn't aware existed back then), nor k8s in DinD, as the missing kernel modules still apply. I found a blog post where someone successfully installed k3s on a non-x86_64 machine, which should be portable to x86_64, though it will require you to compile some kernel modules...

VM:
A VM should be just about fine, though be aware that an idle Kubernetes node will consume a fair amount of CPU time and RAM. My homelab consists of a 3-node ESXi cluster with Xeon E3-1275v5 CPUs, and the baseline for the manager nodes (when almost nothing is deployed) is around 700-800 MHz and around 1.5 GiB RAM. The 1275v5 has roughly 5x the oomph compared to the 918+. Swarm puts way(!) less stress on the resources of its nodes.

Distro:
I have seen Kubernetes on Docker Enterprise, Rancher, OpenShift and IBM Cloud Private... I call them Kubernetes+ distros, as they are basically Kubernetes plus some extensions and modifications that make your life easier and harder at the same time (you get a lot of stuff out of the box, but other stuff that usually works with vanilla Kubernetes just doesn't work or needs to be solved entirely differently). I honestly became a fan of vanilla k8s with kubeadm, though in the end it's a matter of taste. I have never tried MicroK8s.
 
Thanks for the thoughtful reply. I was already thinking about physical nodes for the Pi-holes; this is another reason to consider that. macvlan is a nice-to-have.

For other services, while the cluster doesn't allow fixed IPs for the containers, if you have two nodes A and B with static IP addresses and you use the ingress network, you effectively have static IPs for all services.
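
In other words, the routing mesh means either node answers for a published port (IPs and port are placeholders):

Code:
# the task may run on only one node, but the ingress mesh answers on both
curl http://<node-A-ip>:9000
curl http://<node-B-ip>:9000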

Wow on trying to install anything bare metal on Synology, that's brave :)

I got Debian installed with MicroK8s - and yeah, it seems to run quite heavy, around 2 GB of RAM doing nothing. It didn't get me much further than I had got previously with Rancher and Docker Desktop: the Kubernetes dashboard, which is useless for actually doing anything (I am sure it is great in an enterprise-grade solution when you want a summary of everything).

I won't delete the VM, but for home use I don't see much upside in learning the k8s command line and its equivalent of YAML. After installing everything I just have a very heavyweight Docker environment, and given I don't do Docker or k8s for my day job, it seems too much. The whole point of moving away from Windows hardware and virtualization to a NAS was to have 'less' at home :)

Next up: Docker on Debian with Portainer.

Oh, and because I am a glutton for punishment, I tried logging the issue with Synology again.
 
For other services, while the cluster doesn't allow fixed IPs for the containers, if you have two nodes A and B with static IP addresses and you use the ingress network, you effectively have static IPs for all services.
Honestly, using published ports is still the most elegant and reliable way. If you run multiple Docker VMs you can still use swarm and run services as swarm services, or mix them with "plain" containers. In my swarm environment I use keepalived to introduce a failover IP, which my router forwards incoming traffic to. Whenever one of the nodes becomes unavailable, the failover IP is re-assigned to one of the other active machines.

I would really love to see Synology replace their own UI with Portainer and switch from their customized Docker Engine to a vanilla version...

Basic stuff is actually not that complicated on k8s. A docker-compose.yml translates into Deployments [or StatefulSets or DaemonSets] (~= docker service declarations), Services (~= docker published ports) and PersistentVolumeClaims (~= docker volumes). There are two projects I am aware of that take docker-compose.yml files as input, translate them into the required k8s manifests and deploy those into the cluster.
 
Thanks, I saw keepalived mentioned elsewhere; I will go find a good tutorial.

For your swarm, how do you handle state that you want each swarm node / replicated container to access? Are you using a clustered file system of some sort? I keep seeing Gluster mentioned...

I totally agree with you about what Synology should do wrt Docker - I remember trying to file bugs, them telling me 'no, Docker makes the package', and having to push back hard with 'no, it's your package, you compile it', etc.
On k8s - I don't want to have to learn two formats. If swarm can do all I need for a small 2-node cluster with, say, no more than 10 to 15 containers, I am good. Now, if I were doing k8s at work (our DevOps team uses it for a globally scalable service) then I would take the time. Guess I am just lazy :)
 
For your swarm, how do you handle state that you want each swarm node / replicated container to access? Are you using a clustered file system of some sort? I keep seeing Gluster mentioned...
I use StorageOS, which is container-native storage. It is not supported under plain Docker or swarm, but it still works more or less reliably. It takes care of replication and "brings the container to the node" (= native NVMe/SSD speed). It requires an etcd cluster to store its configuration. You can get a developer license which can handle up to 500 GB of total storage capacity. It is just that Docker Engine updates have become a real pita, and a 10 Gbit network helps a lot for the initial syncing of the replicas *cough*. Though, you can also create Docker volumes that point to NFSv4 shares on your Syno to have storage accessible to all nodes...

I totally agree with you about what Synology should do wrt Docker - I remember trying to file bugs, them telling me 'no, Docker makes the package', and having to push back hard with 'no, it's your package, you compile it', etc.
Been there, done that. Same result. The Synology UI and the modifications to Docker that make DDSM even possible are pretty much custom extensions. The same is true for "editing" a container - vanilla Docker doesn't provide an API for that...

Even with such a small setup you will want to have 3 manager nodes. Swarm uses Raft, which requires floor(n/2)+1 nodes for quorum. Thus 3 manager nodes can compensate for the outage of 1 node (quorum is floor(3/2)+1 = 2, and 2 managers remain), while 2 manager nodes cannot compensate for the loss of either one (quorum is floor(2/2)+1 = 2, so both must be up). You will always want an odd number of manager nodes to ensure that quorum votes can have a majority.
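
To check where you stand (the hostname is a placeholder):

Code:
# MANAGER STATUS shows Leader/Reachable for managers, blank for workers
docker node ls
# if you add a third node, promote it so you end up with an odd number of managers
docker node promote <hostname>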
 
Though, you can also create Docker volumes that point to NFSv4 shares on your Syno to have storage accessible to all nodes...

This is where my mind was going. My goal is to cope with the outage of any one Synology unit.

I was hoping (in my stumbling-around, not-knowing-what-I-am-talking-about way) to:
  • Have one Debian VM per Synology node running Docker to act as my Docker host platform.
  • Have one shared folder per Synology that maps either into the Debian VM Docker host or directly into the containers (though I don't want to modify off-the-shelf containers to support NFS inside them if I can help it).
  • Somehow replicate between the two NFS shares on the two Synology nodes:
    • for example, Portainer's /data directory would be on both shares, so as the single portainer_portainer container moves around it always sees the same data, and the container would just see it as /data.
I am reading up on Gluster and Ceph. I could run one of these in the Debian Docker host VMs, but I don't see how they would access the Synology host - it looks like they expect block storage.

--edit--
Or how about iSCSI mapped into the Debian Docker host, 1:1, so Synology node A would have one iSCSI target mapped into the Debian A VM running Docker, and likewise for node B?

I assume this would give each VM-based Docker host its own block device I could use for Ceph?
 
Have one shared folder per Synology that maps either into the Debian VM Docker host or directly into the containers (though I don't want to modify off-the-shelf containers to support NFS inside them if I can help it)

You don't have to modify images or do any sort of monkey business with your containers. Instead of using bind-mounts, you simply declare a Docker volume that points to a remote NFS share. This can be done from the CLI (a sketch follows) or in docker-compose.yml declarations. A container will not notice the difference, except you get the "copy existing data into the volume folder on create" behaviour as a bonus. A few images depend on it... which I consider bad design.
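
The CLI variant looks roughly like this (address and export path are just examples; adjust them to your share):

Code:
# create a named volume backed by an NFSv4 share
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.200.19,nfsvers=4 \
  --opt device=:/volume1/docker/my_app/config \
  my_volume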

Ceph and Gluster are no lightweights on the hardware. Each of them consists of several components that need to run with more than one instance.

I am not sure that NFS replication will be anything other than rsync, which will not play nicely with locked files. Imagine a replication of database files that can't be synced because the files are locked.

I have to leave the iSCSI topic to someone else. I have never configured it myself... never had to.
 
Thanks for the advice, I will go the NFS route now (though I did get an iSCSI target mapped too!).
I like the visibility of being able to see the NFS contents in File Station vs. the blindness of an iSCSI target LUN.

I now have my Debian Docker host VM with /mnt/remotenfs configured to point to an NFS share on the Synology NAS. I have a one-node swarm, and Portainer stores its /data in a bind mount to /mnt/remotenfs/portainer/data.

That all works. Thanks for your help.

I can create a new thread for any issues once I have my second node up - or if I get it working, maybe I will post a how-to...
 
I strongly suggest considering volumes bound to a remote share over OS-level mounted remote shares.

The volume declaration in a docker-compose.yml looks like this:

Code:
volumes:
  my_volume:
    driver_opts:
      type: nfs
      o: addr=192.168.200.19,nfsvers=4
      device: :/volume1/docker/my_app/config

Just use my_volume:/path/in/container on your service to use the volume. NFSv4 is preferred and runs more stably than v3!

Why? If the NFS share goes stale, the container will remedy the stale volume situation itself. In the long run, OS-level mounted remote shares will eventually go stale and cause downtime of your containerized service.
 
Yes, I am using NFSv4 on my Synology because that's what it defaulted to and I know no better (I had never used NFS before today). Does this approach store the volume as a blob on the NFS share or as a file structure? I will tear down the swarm and try your approach later today. This is my fstab entry, does it look right?
Code:
192.168.1.38:/volume1/vmNFS /mnt/remotenfs nfs rw,async,hard,intr,noexec 0 0
It works, in that Portainer is using it... but I know fstab has lots of options...

Question on keepalived - I don't understand why that is needed in a Docker container, or at all. It seems to use IPVS, and ipvsadm seems to allow one to directly set up a single IP for two hosts - I am sure I am missing something as to why...
 
Oh, I see! The light went on.

If I use that syntax in the stack file, I don't need to mount the NFS folder via fstab on the Docker host.
(And I note the volume isn't a blob :) )

Though if I do mount via fstab, I could still put a volume directly in the mounted NFS on the host, right?
(I ask because I am thinking about how to set up a VIP between the two Docker host VMs to have a replicated file system - if I use your solution I have a single point of failure: the IP of one Synology unit.)
 
I strongly suggest considering volumes bound to a remote share over OS-level mounted remote shares.

The volume declaration in a docker-compose.yml looks like this:

Code:
volumes:
  my_volume:
    driver_opts:
      type: nfs
      o: addr=192.168.200.19,nfsvers=4
      device: :/volume1/docker/my_app/config

Just use my_volume:/path/in/container on your service to use the volume. NFSv4 is preferred and runs more stably than v3!

Why? If the NFS share goes stale, the container will remedy the stale volume situation itself. In the long run, OS-level mounted remote shares will eventually go stale and cause downtime of your containerized service.

When I use this syntax the container starts, it populates the data directory, and I can get to the web interface.
Then the stack job restarts with a task error of "task: non-zero exit (1)" - and this loops infinitely.

Code:
# docker stack ps portainer
ID                  NAME                                        IMAGE                        NODE                DESIRED STATE       CURRENT STATE            ERROR                       PORTS
kn66dhf9bqb2        portainer_agent.bzyyspjprxy6qizftmin3re3r   portainer/agent:latest       docker01            Running             Running 42 seconds ago                              
mtpwz6q7mt98        portainer_portainer.1                       portainer/portainer:latest   docker01            Running             Running 28 seconds ago                              
gx1k2hiw8q9f         \_ portainer_portainer.1                   portainer/portainer:latest   docker01            Shutdown            Failed 37 seconds ago    "task: non-zero exit (1)"


Any ideas why? The bind mount works perfectly.
This is the relevant part of the stack file:

Code:
  portainer:
    image: portainer/portainer
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    ports:
      - "9000:9000/tcp"
      - "8000:8000/tcp"
    volumes:
      - portainer_data:/data
     
    networks:
      - agent_network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

networks:
  agent_network:
    driver: overlay
    attachable: true

volumes:
  portainer_data:
    driver_opts:
      type: nfs
      o: addr=192.168.1.38,nfsvers=4
      device: :/volume1/vmNFS/portainer/data
 
@one-eyed-king
OMFG, I think I am about 'to give back' :)

I fixed it when I realized it was unreliability in the NFS connection, so I replicated my fstab driver settings in the opts section as follows, and (fingers crossed) it seems to have fixed the issue so far... I can't tell you what the settings do, other than that I copied them from an article that said these were the best options to use in fstab...

Code:
volumes:
  portainer_data:
    driver_opts:
      type: nfs
      o: nfsvers=4,addr=192.168.1.38,rw,async,hard,intr,noexec
      device: :/volume1/vmNFS/portainer/data
 
Though if I do mount via fstab, I could still put a volume directly in the mounted NFS on the host, right?
You can mount a share as often as you want, regardless of whether it is OS-managed via /etc/fstab, mounted manually via the mount command, or managed by Docker volumes :)

Docker does use the mount command in the background when using Docker volumes. It transparently mounts the share into /var/lib/docker/volumes/portainer_data/_data.
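
You can verify this yourself (note that the volume is only mounted while a container uses it, and with a stack the volume name gets the stack prefix):

Code:
# where the engine mounted the volume
docker volume inspect --format '{{ .Mountpoint }}' portainer_data
# list the active NFSv4 mounts on the docker host
findmnt -t nfs4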

When I use this syntax the container starts, it populates the data directory, and I can get to the web interface.
Then the stack job restarts with a task error of "task: non-zero exit (1)" - and this loops infinitely.

You need to make sure that the UID/GID inside the container matches the owner of the shared folder. I have actually been using my remote volumes with those options for years. Surprisingly, though, I can see nothing in your options that would affect the permissions. Still, it's always good to keep an eye on matching permissions.
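
A quick way to compare the two (container name and path are placeholders):

Code:
# UID/GID the main process runs as inside the container
docker exec <container> id
# numeric owner of the exported folder, checked on the Synology side
ls -n /volume1/vmNFS/portainer/data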

Question on keepalived - I don't understand why that is needed in a Docker container, or at all. It seems to use IPVS, and ipvsadm seems to allow one to directly set up a single IP for two hosts - I am sure I am missing something as to why...

Keepalived is not necessarily required. It provides a failover IP, aka a virtual IP. I have 3 master nodes, which also run keepalived (the snap package, because the OS package was too old) to have a single IP that is the target of my router's port forwarding rules. If any one of my nodes is down, the cluster is still reachable from the internet :)
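
A minimal sketch of such a setup (interface, priority and the virtual IP are placeholders; give each node its own priority):

Code:
# write a minimal keepalived config; the highest-priority live node holds the VIP
cat <<'EOF' | sudo tee /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.200/24
    }
}
EOF
sudo systemctl restart keepalived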


I run Traefik as a globally deployed service for reverse proxy operations. It lets you use labels to declare the reverse proxy rules and can take care of Let's Encrypt certificates. For plain containers the labels are put on the service level; for swarm containers, on the deployment level. It is a pity that in Traefik 2 they removed support for storing certificates in etcd. That feature allowed sharing a certificate among nodes without needing to create a certificate per node or keep it on a share. This is why I still use Traefik 1.7.x.
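
For reference, the label-based rules look roughly like this with Traefik 1.x label names (service name, network, image and hostname are placeholders):

Code:
# in swarm mode, Traefik 1.x reads the labels from the service (deploy level)
docker service create --name whoami \
  --network proxy_net \
  --label traefik.enable=true \
  --label traefik.port=80 \
  --label 'traefik.frontend.rule=Host:whoami.example.com' \
  containous/whoami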
 
