Solved: Portainer connection to endpoint down when restarting the container

Hello,

Yesterday I decided to try Portainer to see what it can do and whether I can use it to manage my containers instead of the DSM UI, and I successfully installed it on my DS918+. It seems to work properly until I restart the container.
Whenever I stop/start the container, it is no longer able to connect to my endpoint. I have to destroy and recreate the container each time to get it to connect to the endpoint again.

Below is the command I use to create the container:
Code:
docker create --name=portainer --restart=always -p 9000:9000 -v /volume3/docker/portainer-data:/data -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer:latest
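Then I just start it (docker create only creates the container, it doesn't run it):
Code:
docker start portainer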

And this is the error I see in the container's logs when I try to connect to the endpoint after a restart:
Code:
http error: Unable to proxy the request via the Docker socket (err=context canceled) (code=500)
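
I haven't dug much further yet; I guess the next thing to check would be whether the Docker socket itself still answers from the NAS shell, something like this (just an idea, assuming curl is available on the DSM shell):
Code:
# query the Docker daemon directly over the unix socket
curl --unix-socket /var/run/docker.sock http://localhost/version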

Any hints or ideas? :)
Thanks
 
The only thing I have in addition is the PID parameter, set like this: --pid=host. Apart from that, everything is the same. I'm running the latest version of Portainer.

Looking at this error, it seems like a timeout. How many containers are you running? Is your 918+ under any load?
 
Hi Rusty,

I currently have 2 containers running (other than Portainer) and the 918+ isn't stressed at all :) there's still plenty of CPU and RAM headroom.

I will test again by stopping/starting the container and check the exact error messages I get when I'm not able to connect to the endpoint. The weird thing is that even without any modification to the container's configuration, a restart seems to break access to the Docker daemon: the endpoint shows as "Down" and I have to recreate the container to get it to show "Up" again. Since I have to destroy the container, its logs do not persist, so right now I have no trace of the exact errors.
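
Next time it happens I'll try to save the logs before destroying it, something along these lines (assuming SSH access; the path is just where I keep my Docker data):
Code:
# dump the container logs and configuration to files before removing it
docker logs portainer > /volume3/docker/portainer-logs.txt 2>&1
docker inspect portainer > /volume3/docker/portainer-inspect.json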
 
From what I've experienced, you can use all Docker options from the CLI, but the Syno UI strips away all unsupported options once a container is edited using the UI. On the CLI you can use absolute paths from the host, while the UI expects paths to be on a share. The error you see is caused by this behaviour.
 

OK, so if I have this type of container, I should never use the UI to do anything with it?
So once it's running fine, just never touch it in the UI? :)

I thought that once it was created with the right parameters and options, simply stopping and starting it from the DSM UI would not hurt ... I was wrong :cautious:

Well then, I'd better keep Portainer running and manage the containers from there ... I plan to test AdGuard as well, which is why I installed Portainer in the first place :)
 
I have no experience regarding what happens during a shutdown of the NAS. It is hard to tell whether the UI will "normalize" the container settings on a shutdown or restart.

I moved everything to a Swarm cluster a long time ago. The only containers remaining on the NAS are NZBGet and Plex, which are not affected by this situation.

Rusty, can you pitch in?
 
Well, personally, I've never had the UI strip parameters that aren't supported there (but work via CLI, of course). They can't be configured via the UI, but they can be via CLI. I have a large number of containers running, most of them defined via CLI or Portainer, and they all keep working fine even if I edit some parameters via the Syno UI (the ones that it lets me modify).

So not sure what to say here.
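
If you want to double-check whether DSM actually changed anything after a UI edit, docker inspect on the host shows the effective settings, for example (just a sketch):
Code:
# full effective configuration of the container
docker inspect portainer

# or only the parts DSM is most likely to touch (mounts and restart policy)
docker inspect --format '{{json .Mounts}}' portainer
docker inspect --format '{{json .HostConfig.RestartPolicy}}' portainer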

AdGuard on my end was configured via CLI/Portainer, but there is nothing for it that can't be configured via the Syno UI.
 
I tried Portainer mostly because I saw on the "DoH (DNS over HTTPS) w/ pihole in docker on DSM" thread that I needed to create a macvlan network for the container, and also pass the IP and macvlan configuration to it, which cannot be done in the Syno UI. Portainer provides a UI that bypasses those limitations.
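
For reference, the macvlan part from that thread is done on the CLI roughly like this (a sketch; the subnet, gateway, parent interface and IP are examples and depend on your LAN):
Code:
# create a macvlan network attached to the NAS's LAN interface
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  macvlan-net

# run the container on that network with a fixed IP
docker run -d --name=pihole --network=macvlan-net --ip=192.168.1.250 pihole/pihole:latest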
 
There is no real reason to do that for AdGuard. DoH works fine for me without macvlan.

How did you manage to map ports 53 and 67 to your AdGuard container?
 
I mapped them using the host network rather than the bridge network.

I don't run any other DNS service either.
So I need to switch from bridge to host for the network part in order not to bother with macvlan?
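Something like this, if I understand correctly (just a sketch; assuming the official adguard/adguardhome image and example data paths)?
Code:
# host networking: no -p mappings needed, the container binds its ports (53, 67, etc.) directly on the NAS
docker run -d --name=adguard \
  --network=host \
  -v /volume3/docker/adguard/work:/opt/adguardhome/work \
  -v /volume3/docker/adguard/conf:/opt/adguardhome/conf \
  adguard/adguardhome:latest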
 
Back to the initial topic, @Rusty do you think adding --pid=host could solve the issue with the container restart?
 
In certain cases you want your container to share the host's process namespace, basically allowing processes within the container to see all of the processes on the system. In that case you add the PID parameter. But I see no reason to use it here, considering that Portainer only relies on docker.sock. Still, you could try it.
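
If you want to test it, it would just be your create command with the extra flag, something like:
Code:
docker create --name=portainer --restart=always --pid=host -p 9000:9000 -v /volume3/docker/portainer-data:/data -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer:latest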
 
Well, I confirmed today that when I stop/start the Portainer container in the DSM Docker UI, once I get back to the Portainer UI it shows the endpoint as "down".
I tried to refresh the endpoints list, but it still showed as down.
I decided to destroy the Portainer container and recreate it, and after that the endpoint showed as "up" again.
I can live with this ... just a few commands to get it running again, so I won't bother ...
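For anyone hitting the same thing, the few commands are basically (same create command as in my first post):
Code:
docker stop portainer
docker rm portainer
docker create --name=portainer --restart=always -p 9000:9000 -v /volume3/docker/portainer-data:/data -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer:latest
docker start portainer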

Marking the question as solved.
Thanks ;)
 
I have no experience regarding what happens during a shutdown of the NAS. It is hard to tell whether the UI will "normalize" the container settings on a shutdown or restart.
Oh, and for reference, I had a little incident where I "accidentally pulled the cord" of my DS918+ while doing some cleanup around where it is plugged in :cautious: and only noticed a few minutes later that it had all stopped ...
I plugged it back in, turned it on, and ... guess what ... everything was working as if nothing had happened ...
 
I did the update to the latest version of DSM today and restarted the NAS, and every Docker container started up again. No problem with it.
OK, but you did not manually stop and start Portainer from the DSM Docker UI, did you?
You just updated DSM, clicked restart on the NAS, and let it do its thing...
As I said in a previous post, I had my NAS shut down by pulling the cord, and once it got back online after plugging the cord back in, everything was OK: all containers up and running, Portainer able to connect to the endpoint.
But when I stopped and started Portainer manually within the DSM Docker UI ... nope ... it wouldn't connect to the endpoint.
 
