Radarr/Sonarr: Indexers not found

No go...

    root@NAS:~# sudo rm $(docker info --format '{{.DockerRootDir}}')/network/files/local-kv.db
    -ash: docker: command not found
    rm: cannot remove ‘/network/files/local-kv.db’: No such file or directory
 
You did sudo to root with sudo -i, didn't you?

Yes, with and without sudo, and from my admin account. Docker is stopped in Package Center.

I've tried to locate the kv file but can't discern the path... saw a reference that it's in "/var/lib/docker/network/files/local-kv.db", but apparently not on Synology.
 
oh, stupid me. When the Docker package is stopped, the symlink to the docker command is removed. This is a Synology speciality: commands of stopped packages become unavailable.

The kv file is in /volume?/@docker/network/files/local-kv.db. Make sure to replace ? with the number of the volume the package is installed on.
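
E.g., something like this (a sketch, assuming the package sits on volume1; adjust the number to yours):

    # with the Docker package stopped in Package Center:
    $ sudo mv /volume1/@docker/network/files/local-kv.db /volume1/@docker/network/files/local-kv.db.bak
    # moving it to .bak instead of rm lets you restore the old network state if needed;
    # start the package again and Docker recreates the kv store on launch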
 
OK... No success.

Renamed the kv file, restarted Docker. Tested a few simple containers (not connected), and they worked. However, VaultWarden will not sync. Tried to get radarr and jackett to talk, but no connection. Also, the Docker GUI container terminal windows still fail with "socket closed" popup windows.

So... I am rebooting the NAS, hoping to solve the VaultWarden issue (I don't need to expand the problem beyond radarr/sonarr/jackett). I'm doubtful this will help, so if VaultWarden sync continues to fail, I'll likely return to the former db file.

Appreciate you sticking with me here.
 
Something is seriously wrong with your Docker installation. The only problem I am aware of is the "socket closed" popup. Everything else is indeed very odd.

Your default network has all required flags set to true and your firewall is turned off. There is no reason container traffic should not be able to reach another container via its published host port... and yet it does not work. Crazy...

I am not really sure what you mean by the VaultWarden sync, but I assume you refer to a custom reverse proxy rule that forwards a specific path location to port 3012.

Something is messing around with the network stack/iptables rules (I have not touched a single iptables rule for Docker in the last 6-7 years... never!)

The only thing I can think of leading to such weird behavior would be a second device using the same IP and thus causing an IP collision. If your router is also the switch your Syno is directly attached to, you might want to check its logs for oddities.

I am afraid there is not much more I can think of.
 
It's bizarre indeed.
Just a follow-up question... If I uninstall Docker, reboot and reinstall Docker (probably drop back one release), will I lose all my containers and images? Or will they be there when the reinstalled version is run?

This is an example of why I want/need a Synology bare-metal backup solution... so I can roll back my NAS to a time when all was working well. Synology comes up so short here. I'll likely move radarr/sonarr/jackett/Plex to a NUC, and leave the NAS for bulk storage... and maybe a Pi for Bitwarden and AdguardHome.

My only hope now seems to be that DSM7 might completely redo my networking and "fix" whatever is broken here. /rant
 
If I uninstall Docker, reboot and reinstall Docker (probably drop back one release), will I lose all my containers and images?
I am afraid the Docker data root folder will be wiped clean, so neither images nor containers will be present.
Though you could easily write yourself a bash script to save/load the images and export/import the containers' filesystems.
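
A rough sketch of what I mean (image and container names are just examples):

    # save an image to a tarball, load it back after the reinstall
    $ docker save -o jackett-image.tar linuxserver/jackett
    $ docker load -i jackett-image.tar

    # export a container's filesystem and re-import it as a new image
    $ docker export jackett -o jackett-fs.tar
    $ docker import jackett-fs.tar jackett:restored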

That's what I like about docker-compose configurations: since the volume mapping is configured in the compose file, it will pull the image(s), create the container(s) with the declared volumes, and run them as if nothing ever happened.
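
For illustration, a minimal compose file could look like this (image, port and path are just examples; adjust to your setup):

    version: "3"
    services:
      radarr:
        image: linuxserver/radarr
        ports:
          - "7878:7878"
        volumes:
          # bind mount: the state lives on the NAS and survives the container
          - /volume1/docker/radarr:/config

Running docker-compose up -d in the folder containing this file pulls the image and recreates the container as declared.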

My only hope now seems to be that DSM7 might completely redo my networking and "fix" whatever is broken here. /rant
That's a bold bet :) Hope it works out just fine!
 
I'd love to be able to add something useful but that's highly unlikely.

Regarding starting from scratch though, isn't that similar to what has to happen when migrating to a new NAS? The Docker package and its images and containers do not [currently] get backed up and recreated during restore. Any local folders used to map to container volumes were restored [obvs] but not the Docker environment itself.

I'm looking at this from the Docker GUI in DSM but what I did was export each container configuration, not the contents, from the source NAS. Then on the new NAS I downloaded the right image versions and then imported the saved configurations. So maybe something similar could be done to save from losing everything?
 
So maybe something similar could be done to save from losing everything?
Great thought. I did the full backup of various containers, now I'll export just the configs and see how things go. The main one I want to preserve is Bitwarden (VaultWarden), and I've already exported my passwords (I wonder if that includes OTP keys), and have a backup in Keepass just in case things go sideways.
 
If you export/import the configuration and keep the volume, your containers should work as before.

Remember: container state is composed of the container configuration, the container filesystem and the persistent data in volumes. Typically the state of the container filesystem is irrelevant (if the image declares volumes for each "important" folder, and you don't configure the application to write files outside those folders). As long as you used the volumes as suggested by the Docker Hub description of your images, you should be good.

You still might want to check whether you accidentally ended up using "real" volumes (i.e. not the bind-mount type of volume mapping the UI provides). From what I remember, Portainer has a volumes section. Volumes with a readable, non-random name are candidates you want to preserve; it is highly likely they hold persistent state you will want to keep. The ones with random alphanumeric names are anonymous volumes, created when a folder is declared as a volume in the Dockerfile but no volume mapping is declared for it in the container config (I hope this makes sense) - it is still likely they hold persistent state you want to preserve, though they might also just contain irrelevant stuff...
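
A quick way to check from the shell (the volume name here is illustrative):

    # named volumes show a readable name; anonymous ones a long random hash
    $ docker volume ls
    # DRIVER    VOLUME NAME
    # local     portainer_data   <- named, worth preserving
    # local     9f2c1e0a...      <- anonymous, inspect before discarding
    $ docker volume inspect portainer_data   # shows the mountpoint on disk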

fredbert's approach is more convenient than the "save/load" and "export/import" approach I was writing about: it is close to what docker-compose brings to the table, except for the automatic image pull.

Good luck!
 
From the reaction, maybe I didn't have such a duff idea :)

Re. image versions: if you have been using ":latest" (which has pros and cons, but let's not dwell on that now) and you find you need a particular version, then the exported config can be edited and the tag string changed to what you need.

My learning point was an early container I set up for PostgreSQL. It went from latest = v12 to latest = v13 and I'm no DB admin. I rolled back with this edit:

    from... "image" : "postgres:latest",
    to...   "image" : "postgres:12",

Then it worked. My container is now pegged on the latest point release of v12.
 
From the reaction, maybe I didn't have such a duff idea :)
nah, all good. The idea is more user friendly than mine :) Though, both of our ideas are less user friendly than docker-compose...

My response was mainly about making sure that there is no "hidden" persistent data that might get lost irreversibly when deleting the Docker package.

Everyone uses Docker in a different way and with a different level of knowledge. Some start by easing in with copy/pasting docker run commands or docker-compose.yml files that they find somewhere, and miss that those might use "real" volumes instead of bind mounts. For an experienced user it's obvious to take care of all necessary volume mappings.
 
Volumes with a readable, non-random name are candidates you want to preserve; it is highly likely they hold persistent state you will want to keep. The ones with random alphanumeric names are anonymous volumes, created when a folder is declared as a volume in the Dockerfile but no volume mapping is declared for it in the container config
All volumes are random alphanumeric except for Portainer, which probably needs updating.

After backing up all configs, my plan is to manually delete all containers and images, stop and delete docker, rename the shared folder "docker" (presumably it will be recreated when Docker is reinstalled), reboot, reinstall Docker, reboot and then attempt jackett and radarr from scratch. If that doesn't work, I'll just 😭 in my 🍺.
 
Installed Docker (not the newest version, but close). Now rebooting. I let the uninstallation delete everything (after cloning the "docker" folder) and then checked to be certain the /volume1/@appstore/Docker and /volume1/@docker folders were gone.

Will restore (import) a simple container and see what is involved... then VaultWarden. Wanting to make sure permissions are correct on the individual config folders. Fingers crossed.
 
Back from the dark side of the moon... and things are no different.

I'm wondering if I tweaked something under Control Panel that is doing this. Could a DNS issue prevent the NAS from knowing where to send 172.17.0.4:1811, for example... or some blocking mechanism? The firewall is unchecked (presumably disabled), but my connection attempts between containers just time out... while connections from containers to the outside 'net seem fine. Frustrating.
 
Bummer!

Requests to IP addresses do not involve DNS. Thus, I'd be surprised if DNS is responsible.

But it involves routing and/or iptables magic, which themselves require kernel modules loaded/unloaded by the Docker package. After restarting the Docker package (or the NAS itself), the modules should be loaded and ready to perform the Docker network magic.

The behavior appears to be similar to what happens if you disable the masquerading option when creating a Docker network.
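
If you want to rule the modules out, you could check whether the usual suspects are loaded (a rough check; exact module names vary by kernel version):

    $ lsmod | grep -E 'iptable_nat|nf_nat|br_netfilter|veth'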
 
When it comes to iptables, I'm well over my head. But the one thing that stands out to me is that up until a few weeks ago I had AdGuardHome (AGH) running via macvlan. When I decided to abandon that approach and run AGH with "host", my anal side wanted to remove the macvlan network, and the AdGuard bridge I created along with it.

Having found no understandable way of removing those two network entries, I noticed that Portainer had a remove-network capability. I ticked the checkboxes for those two and clicked remove, and they obligingly disappeared. I restarted the NAS to confirm, and when I didn't see them, I presumed all was well.

But now I wonder if Portainer left collateral damage in doing so.

Requests to IP addresses do not involve DNS. Thus, I'd be surprised if DNS is responsible.
Since DNS has no bearing, is there something to check in my router that might interfere with passing the HTTP request to the docker container? How does the request to 172.17.x.x get routed?

My NAS network settings have a gateway IP input, which I have set to my 192.168.1.1 value.
 
But now I wonder if Portainer left collateral damage in doing so.
It shouldn't. Highly unlikely it did. Though, your Docker installation acts weird as hell, so I am not going to say it's impossible.

You can use the netshoot container to troubleshoot container connections:

https://github.com/nicolaka/netshoot said:
  • Container's Network Namespace: If you're having networking issues with your application's container, you can launch netshoot with that container's network namespace like this:
    $ docker run -it --net container:<container_name> nicolaka/netshoot
  • Host's Network Namespace: If you think the networking issue is on the host itself, you can launch netshoot with that host's network namespace:
    $ docker run -it --net host nicolaka/netshoot

The image is full of network troubleshooting tools. Traceroute should help you identify the path your packets are taking.
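For example, to probe jackett from radarr's network namespace (container name, IP and port are placeholders for your setup):

    $ docker run -it --net container:radarr nicolaka/netshoot
    # inside netshoot:
    $ traceroute 172.17.0.4
    $ nc -vz 172.17.0.4 9117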
How does the request to 172.17.x.x get routed?
🤔 I am not entirely sure.

Since the docker0 interface is the gateway (172.17.0.1) for the 172.17.0.0/16 subnet, your DS should be able to talk to any container IP in this range without routing.

When it comes to traffic directed from outside to the container, the job is done by iptables and something else. I assume this something else is source NAT (SNAT), which is the reason why you see the bridge gateway IP as the "client real IP" in a container. In response packets, the bridge gateway IP should be replaced back with the original client IP.

When it comes to traffic created in the container and sent to the outside, I guess container traffic leaves the default bridge network masqueraded over a host interface (which is what you prevent if the firewall is enabled and the bridge network is not allowed to communicate with the host interface) and then uses the host's default gateway to reach out to the world. This mechanism seems broken for you.

Note: after observing how the Docker bridge networks behave, these are the conclusions I came up with. I have no idea how accurate they are.
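
If you want to peek at what the package set up, the NAT rules are visible from the host (read-only; this just lists them):

    # per-container DNAT rules for published ports
    $ sudo iptables -t nat -L DOCKER -n -v
    # MASQUERADE rules for the bridge subnets
    $ sudo iptables -t nat -L POSTROUTING -n -v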
 
Thanks, I'll look into that.

Any thoughts on whether a System Configuration restore might bring things back? I have quite a load of configuration restores in Hyper Backup.
