A Portainer Agent Primer

Telos submitted a new resource:

A Portainer Agent Primer - Manage all your docker instances from a single host machine

If you have docker running on multiple machines as I do, you'll find Portainer Agent an easy “enhancement” to Portainer which will enable you to use a single Portainer instance to manage your docker instances.

Here's how... Let's say you have docker running on a Raspberry Pi (RPi) and two Synology NAS (NAS1, NAS2)

I'm going to use the RPi to host my Portainer container using an HTTP connection. To set up Portainer, running a simple command is all that is required. In this case, execute the...

Read more about this resource...
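The setup command is truncated in the preview above. For context, a typical single-host Portainer CE server over HTTP on port 9000 looks roughly like this — a sketch of the commonly documented defaults, not necessarily the exact command from the full guide (the image tag, ports, and volume name here are the conventional ones; prepend sudo if your user is not in the docker group):

```shell
# Sketch only: standard single-node Portainer CE server, HTTP on port 9000.
# portainer_data is the conventional named volume for Portainer's own state.
# Guarded so the snippet is a no-op on hosts without a working Docker.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker run -d \
    -p 9000:9000 \
    --name portainer \
    --restart=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    portainer/portainer-ce
else
  echo "docker not available here; command shown for reference only"
fi
```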
 
just a small addition:
If anyone happens to use this guide to connect to a Portainer server/agent over the Internet, I will save the effort. At that point, you would join a growing group of users with an exposed Portainer API (currently tens of thousands), for which "active" WAN users would reward you by auditing your containers' environment.

To drive the written warning home, here is one real Portainer instance exposed over the Internet, even over the unencrypted port 9000:
[screenshot: exposed Portainer instance reachable over HTTP port 9000]


There is a solution called Portainer EDGE Agent, which is designed to manage container environments connected to the public Internet:
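For completeness, the general shape of an Edge Agent deployment is below. This is a sketch based on Portainer's Edge Agent documentation: the Edge Agent dials out to the Portainer server, so nothing on the agent host needs to be exposed inbound. The `<edge-id>`/`<edge-key>` values are hypothetical placeholders — real ones are generated by the Portainer UI when you add an Edge environment, and your Portainer version's docs may list additional flags:

```shell
# Sketch only: Portainer Edge Agent. EDGE=1, EDGE_ID and EDGE_KEY are the
# documented environment variables; the <...> values are placeholders, not
# real credentials. Guarded so the snippet is a no-op without Docker.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker run -d \
    --name portainer_edge_agent \
    --restart=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_agent_data:/data \
    -e EDGE=1 \
    -e EDGE_ID="<edge-id>" \
    -e EDGE_KEY="<edge-key>" \
    portainer/agent
else
  echo "docker not available here; command shown for reference only"
fi
```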
 
Telos you might want to redo the /var/lib/docker/volumes part. It needs to point to the directory where the volumes are actually stored, not to a directory that is merely a convention.

You might want to replace your command with this one, which dynamically determines the Docker data root directory on the host:
Code:
sudo docker run -d \
  -p 9001:9001 \
  --name portainer_agent \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(sudo docker info --format '{{.DockerRootDir}}')/volumes":/var/lib/docker/volumes \
  portainer/agent

This way it will survive a DSM update, because it points to the actual volumes folder that Docker uses.
 
The output of that substitution might differ between hosts — not everyone has the Docker package installed on volume1. I would advise keeping it dynamic so it is a "one command fits all" solution. The dynamic version can be run on any Docker host and will always interpolate the correct path.
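If you want to see what the substitution resolves to before running the agent command, a quick check looks like this (the fallback path is only for illustration on hosts where Docker is not available):

```shell
# Print the data root Docker reports, then the derived named-volumes path.
# The /var/lib/docker fallback below is illustrative only, for hosts
# where Docker is not installed or not reachable.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  root=$(docker info --format '{{.DockerRootDir}}')
else
  root=/var/lib/docker
fi
echo "Docker data root: ${root}"
echo "Named volumes:    ${root}/volumes"
```

On DSM the root typically sits under the volume where the Docker package was installed (for example somewhere under /volume1), while a stock Linux host reports /var/lib/docker — which is exactly why hard-coding the path breaks on Synology.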

update: your text changes in the tutorial are hilarious, I love them 🤣
 
Worrisome: when I upgraded TrueNAS SCALE on 1/2 January, I found that all the environment settings I had maintained for Kubernetes management via Portainer (TLS secured) had completely disappeared from TrueNAS SCALE.
Since then, I only do a clean installation and test what is really fixed compared to the old one. Exactly as they promised: any changes made through the CLI will not be persistent. Never.

 
I will note that Portainer Agent is not quite a true remote tool in this version. For example, if I create a stack through an agent endpoint, the stack/compose file will not appear when you log on directly to the remote Portainer instance. Also... when logging on directly to the remote Portainer instance, remotely created stacks appear as "Limited"

[screenshot: remotely created stack listed as "Limited"]


And the Editor tab does not exist...

[screenshot: stack details with no Editor tab]


... while the compose text exists only on the Portainer instance in which it was created.
 
... while the compose text exists only on the Portainer instance in which it was created.
But doesn't it make sense though?

Actually it works the way I would have expected: objects managed from a Portainer instance are known to that instance, and of course the outcome gets provisioned in the target environment. Portainer would need to use some sort of consensus and perform a quorum on each change to maintain a distributed state. That is not something you implement light-headedly over WAN connections.
 
I am not sure I can follow the RDP comparison. The RDP client is on your host and presents drives and devices to the RDP server. With Portainer, you instruct a different server, then connect to it and expect it to have the same details available as the other Portainer... I mean, you could raise a feature request @Portainer. It might be a good use case for a new feature :)
 
I ran up against another "nuance" of using Portainer Agent today (OK, "pros"... you may just want to mark this thread "read"). But this is new to me, and it may help others...

After "successfully" deploying Diun on my RPi, I decided to do the same for my Docker host on my NAS.

Since I was already logged into the RPi Portainer, which "sees" my NAS docker network, I added a Diun stack to the NAS via the RPI agent endpoint.

However... the nice folks at Microsoft didn't like that (I use the same MSFT email account for both Diun instances) and shot me this nice error message...
Code:
Mon, 18 Jul 2022 17:57:03 EDT ERR Mail notification failed error="gomail: could not send email 1: 432 4.3.2 Concurrent connections limit exceeded. Visit https://aka.ms/concurrent_sending for more information.
Since I was struck clueless, I checked out the link provided, and came away just as clueless as when I arrived.

So... out of curiosity, I deleted the stack I had created via the Portainer Agent endpoint, and opened Portainer directly on the NAS. Using the same compose file, I launched the container and .... no errors. Was that coincidental? I'm unsure, but now I have Diun running on both systems without complaint from the nice Microsoft peeps.

If anyone can explain this ELI5, please do so. Thanks!

 
Since I was already logged into the RPi Portainer, which "sees" my NAS docker network, I added a Diun stack to the NAS via the RPI agent endpoint.
Not quite sure what this means, but I have to say that I deploy all my agents from a single Portainer UI to all my hosts and never had a similar situation. So bottom line I do have 4 hosts and each of them has its own DIUN instance on it, if that makes sense.
 
I deploy all my agents from a single Portainer UI to all my hosts
That was my intent, but it seemed to run afoul of Microsoft's email restrictions. Initially I planned to use a different email account for the second Diun instance, but after launching the stack on the second host (with same diun.yml file) I was surprised to see no errors logged.

Maybe this is happenstance, or an agent artifact, I don't know. Good to hear Pushover didn't have this issue for you. I considered Pushover, but wasn't ready to add another app to my work processes. Maybe that will change when I discover the utility of that app in my life.
 
Maybe that will change when I discover the utility of that app in my life
I use it for everything that supports it. I don't have a single notification going over email — just push or webhooks. All my DSM notifications (from all of my units) come via push; Plex, Tautulli, Sonarr, Radarr, downloads, etc. all arrive over push. ;)
 
The concurrent throttling issue was most likely just a coincidence, though probably less coincidental if you have multiple Diun instances generating a mail notification for the same images at the same time. I would be surprised if a mail-sending program kept the SMTP connection open all the time.

I doubt this is related to Portainer or Docker in general — it just happened to be the tooling in which the concurrency throttling was triggered.
 
I doubt this is related to Portainer or Docker in general — it just happened to be the tooling in which the concurrency throttling was triggered.
Later today or tomorrow I will try again and see what happens. I thought that maybe Diun isn't releasing the SMTP connection (however, @Rusty isn't seeing this).
 
I use it for everything that supports it. Don't have a single notification over email, just push or webhooks.
Pushover sounds like my next "project". Just curious (and OT)... if I were to send my notifications from this forum to Pushover, it appears that would require changing my SynoForum account email to [email protected]... is that correct?
 
