Radarr/Sonarr: Indexers not found

NAS
DS418play, DS202j, DS3623xs+, DSM 8.025847-beta
My sonarr and radarr containers recently lost connectivity to other Docker containers (indexers, torrent client). Coincidence or not, this happened while I was setting up a new AdGuard Home (AGH) instance.

AGH was the first container I set up with the host network (as opposed to bridge), so my suspicion is that using host networking broke inter-container connectivity.

Here's what I tried so far...
  1. Removing all AdguardHome container/image/DNS remnants (to the best of my recollection).
  2. Stopping Jellyfin (which also uses the Host network).
  3. Dropping the firewall.
  4. Refreshing the docker IP tables per Rusty's post in a previous thread I had a few months ago.
  5. Restarting Docker.
  6. Restarting the NAS.
  7. Creating new jackett and radarr containers to test indexing (using identical PGID/PUID).
  8. Reinstalling Docker
  9. Restarting the NAS again.
My radarr log, when setting up a new indexer, gave this:
[v3.1.1.4954] System.Net.WebException: The operation has timed out.: 'http://192.168.96.10:9117/api/v2.0/indexers/rarbg/results/torznab/api?t=caps&apikey=(removed) ---> System.Net.WebException: The operation has timed out.

FWIW, the radarr/sonarr containers do have external connectivity (e.g., search works).

Thanks for any insights!
 
I don't have as much experience with Docker - but I'll ask the very obvious question. Did anything in the environment change prior to the connectivity issue? What happens if you take down AdGuard?
 
Assumptions:
- ip 192.168.96.10 = nas ip
- port 9117 = published port on the nas
- you use {nas-ip}:{published port on the nas} inside a container to access the indexer-container and not a hostname/containername

If you address the indexer container by its IP, it is highly unlikely that AGH is responsible for the problem, as no DNS name resolution takes place.

Are jackett/radarr running in a bridge network? If so, in the default bridge network or in a custom bridge network? Are those allowed in the Syno firewall? A simple check is to fire up a terminal in one of the containers and use wget/curl to access the http URL you had in your logs. What happens if you use this http URL in a browser on your computer?

Note: while the default bridge network is easy to use, it lacks service discovery. Container-to-container communication through it requires either the legacy container-linking mechanism, or bypassing the container network entirely and accessing the target container via its published host port. A user-defined network, on the other hand, provides service discovery, which lets you use the service or container name of any container within the same network.
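As a rough sketch (the network and container names below are made-up examples, not taken from this thread), moving two existing containers onto a user-defined bridge looks like this:

```shell
# Create a user-defined bridge network; "mediabridge" is an example name.
docker network create mediabridge

# Attach the already-running containers to it (names are assumptions).
docker network connect mediabridge jackett
docker network connect mediabridge radarr

# From inside radarr, jackett is now reachable by container name on its
# internal port; no published host port is needed for this path.
docker exec radarr curl -s http://jackett:9117/
```

Radarr's indexer URL could then point at http://jackett:9117 instead of the NAS IP.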
 
Did anything in the environment change prior to the connectivity issue? What happens if you take down AdGuard?
Not that I recognized. I updated my router firmware alongside getting AGH up, but that is all.
If you address the indexer container by its IP, it is highly unlikely that AGH is responsible for the problem, as no DNS name resolution takes place.
Yes. 9117 is both the host and container port (192.168.96.10 is the NAS IP).
Are the jacket/radarr running in a bridge network? If so, in the default bridge network or in a custom bridge network?
I don't know what a custom bridge is, so I'm guessing default. From the docker web UI (sonarr):
[screenshot: sonarr container network settings in the docker web UI]

From Portainer:
[screenshot: network details in Portainer]

Are those allowed in the syno firewall?
Firewall is disabled now. Before disabling it, All Interfaces included:
[screenshot: firewall rules]


What happens if you use this http url in a browser on your computer?
192.168.96.10 refused to connect. (OK Not Good!)

A simple check is to fire up a terminal in one of the containers and use wget/curl to access the http url you had in your logs.
Gotta try this, but the http connection failure is worrisome.

A custom network on the other hand provides service discovery, which allows to use the service or container name of a container within the same network.
Link to doing so?

One last note... I just recalled another thing I did around the time of the connection loss. Before I set up AGH, I removed the remnants of an older AGH instance, which involved a macvlan network. To delete the macvlan, I used Portainer, selected the AdGuard network and AdGuard bridge, and selected "delete". I don't know if doing this somehow broke the default bridge network. Is that possible?
 
You indeed do use the default bridge network. Your firewall rules match up to the bridge's subnet, so firewall rules can't be the issue.

Instead of custom bridge network, I meant to write user-defined bridge network. You can add a new network of type bridge, give it a unique name, and you should be good; all other options while creating a network are optional. I neither use Portainer, nor do I really use the Synology docker UI. Good old command line (preferably with docker-compose) it is for me :)
A little google-fu on "portainer create user defined bridge network" should return some useful hits... though you don't really need them, as creating a bridge network is quite straightforward if you ignore the optional parts.

192.168.96.10 refused to connect. (OK Not Good!)
This one actually is a showstopper. Either the configuration for the published port 9117 went missing (which is hard to imagine), or the application inside the container is not responding (which is more likely). You might want to check the container logs for warn and error entries.

Docker networks are independent of each other (unless you have a config-only network, which is used to create a network instance from it; the created network would depend on the config network). Apart from that, a macvlan network is not related to any other docker network. I'd be surprised if deleting the macvlan caused the problem.
 
What does this look like with a URL

Code:
bash http://some.url/here
It seems that something is broken. Years ago I remember getting a terminal window by default. Not so now with several containers I've checked.
 
I just went to SSH to open a terminal in radarr... here's what I did under sudo -i

docker exec -it c6537d63bb8d bash

then

curl ifconfig.me

then I got

(6) Could not resolve host ifconfig.me

[I also tried http and https in the curl command with the same result]

Stymied?
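A few quick checks can separate a DNS-only failure from a general egress failure. This is only a sketch: the container name is an example, and slim images may lack curl or the ip tool (busybox wget and `route -n` are common fallbacks):

```shell
docker exec -it radarr bash          # "radarr" is an example container name

cat /etc/resolv.conf                 # which DNS server the container is using
ip route                             # default gateway (default bridge is usually 172.17.0.1)
curl -sv http://1.1.1.1/             # raw IP: tests routing without DNS
curl -sv http://ifconfig.me/         # hostname: tests DNS plus routing
```

If the raw-IP request works but the hostname fails, the problem is DNS resolution; if both fail, the bridge network itself is broken.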
 
Why ifconfig.me? The intention was to test container-to-container access, wasn't it?
I assumed http://some.url/here was just a temporary placeholder for http://192.168.96.10:9117/api/v2.0/indexers/rarbg/results/torznab/api?t=caps&apikey=(removed), of course with the real apikey.
 
I had already deleted jackett container/image, and hadn't got around to recreating a new container. Which I can't do right now, so the simple website check seemed convenient in the meantime.

It connected from the NAS SSH terminal curl command, so I expected it to connect using the radarr terminal. Since it didn't, that seemed odd.

I'll get to the jackett test tomorrow, but I'm 99% certain it will fail. Maybe this is some odd DNS issue... I don't know. Sorry for the distraction.
 
Since both containers seem to be in the default bridge, it shouldn't matter whether you perform the test from jackett or radarr, as long as jackett is not the indexer itself :D

I have no idea what jackett actually is... I assume it's a download client like radarr, sonarr and such.


I looked Jackett up; it IS your indexer. Thus, a test from inside either radarr or sonarr to jackett should be fine.
Though, did you ever think about creating a user-defined bridge network, running all these containers inside this network, and using the service or container names for container-to-container communication (within the bridge network)? As all these services serve a single purpose, it would make sense to run them in the same docker-compose stack...
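A minimal sketch of such a stack, written from the command line. The image names are the common linuxserver.io ones and the ports are the defaults; both are assumptions, not taken from this setup. Services in one compose file share a default network and resolve each other by service name:

```shell
# Write a minimal docker-compose.yml (image tags and ports are assumptions).
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  jackett:
    image: linuxserver/jackett
    ports:
      - "9117:9117"   # still published so the web UI is reachable from the LAN
  radarr:
    image: linuxserver/radarr
    ports:
      - "7878:7878"
EOF

docker-compose up -d
# Inside radarr, the indexer URL can then simply be http://jackett:9117
```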
 
did you ever think about creating a user-defined bridge network, running all these containers inside this network, and using the service or container names for container-to-container communication (within the bridge network)?
I have to figure this out. I created an additional "bridge" network yesterday and moved the containers to the new network. But I must have done something wrong, because my other containers, notably vaultwarden, quit responding, so I undid all that quite quickly.

I'm half wondering if I should delete the docker package and then reinstall it. But I'm concerned that Synology won't execute a full uninstall, and the reinstallation will be plagued with remnants of my current docker installation.

I'm a bit behind today as I'm hurriedly getting sonarr/jackett/qbittorrent running on an ancient Win machine, so I can get caught up on missed downloads. That, and making full backups of all my primary containers should I have to reinstall docker, or (shudder) DSM.

Before that, I will get the curl URL tested on radarr. Thanks again.
 
Freshly created jackett and radarr containers... SSH to radarr and the curl fails...

Code:
root@NAS:~# docker exec -it b802f5075124 bash
root@radarr:/# curl http://192.168.1.10:9117/api/v2.0/indexers/1337x/results/torznab/api?t=caps&apikey=4r64999yijjlhoregeg8x83uyk8n66du
[1] 456
root@radarr:/# curl: (7) Failed to connect to 192.168.1.10 port 9117: Connection timed out
curl: (7) Failed to connect to 192.168.1.10 port 9117: Connection timed out
^C
root@radarr:/# curl: (28) Failed to connect to 192.168.1.10 port 9117: Connection timed out

Suggestion?
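One side note on the pasted output, independent of the connectivity problem: the "[1] 456" job line shows that the unquoted & in the URL backgrounded curl, so the apikey parameter never reached it. Quoting the URL keeps the query string intact (KEY below is a placeholder):

```shell
# "&" is a shell control operator; unquoted, everything after it becomes a
# separate background job. Quoting preserves the full query string:
url='http://192.168.1.10:9117/api/v2.0/indexers/1337x/results/torznab/api?t=caps&apikey=KEY'
echo "$url"
# curl "$url"    # run the real request like this, with the URL quoted
```

The timeout itself is a separate problem; even the truncated t=caps request failed to connect.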
 
So the container itself is not able to access the other container.

I understood the containers are connected to the default bridge network. As such, I am curious about the options set on the network. Can you execute sudo docker network inspect bridge --format '{{json .Options}}' and compare it with this:

{
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
}
Note: the output above is piped through jq, which I think I downloaded myself and copied to /bin/jq... your output will be on a single line.
https://docs.docker.com/engine/reference/commandline/network_create/#bridge-driver-options said:
com.docker.network.bridge.name: bridge name to be used when creating the Linux bridge
com.docker.network.bridge.enable_ip_masquerade (--ip-masq): enable IP masquerading
com.docker.network.bridge.enable_icc (--icc): enable or disable inter-container connectivity
com.docker.network.bridge.host_binding_ipv4 (--ip): default IP when binding container ports
com.docker.network.driver.mtu (--mtu): set the containers' network MTU

If your Options look like mine, can you do another test with the firewall disabled? Just for the sake of ruling out the firewall as the source of the problem.
 
Hmm, something really seems to be messed up.

I'd suggest you prepare everything for a new installation, but instead of wiping the docker package, just stop it and delete the database file that stores the networks, then restart the docker package again.

Use this command to delete the local database for the docker networks:
Code:
sudo rm $(docker info --format '{{.DockerRootDir}}')/network/files/local-kv.db

When restarting docker, the default networks should be re-created. Though, you will need to recreate the snappass_master network (which looks like it was created from a docker-compose.yml inside a folder called snappass_master).
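Sketched out as a sequence, the steps above look like this. Note the assumptions: `synoservice --stop pkgctl-Docker` is the DSM 6 way to stop the package (DSM 7 uses `synopkg`), and the docker root is usually /volume1/@docker; verify both on your system first:

```shell
# Capture the docker root while the daemon is still running.
root="$(docker info --format '{{.DockerRootDir}}')"

sudo synoservice --stop pkgctl-Docker        # stop the Docker package (DSM 6 syntax)
sudo rm "$root/network/files/local-kv.db"    # delete the local network database
sudo synoservice --start pkgctl-Docker       # default networks are re-created on start

docker network ls                            # bridge/host/none should be back
```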
 
