Synology + Docker + VPN + Transmission (+ LunaSea)

OBJECTIVE
Route Docker traffic through a VPN connection; ideally all Docker traffic, but I will settle for just the Transmission traffic.

ENVIRONMENT
• Mac desktop
• Variety of iOS devices
• Existing VPN services
• Router remains an Apple Time Capsule, which does not support VPN at the router level.

CURRENT STATE
• Apps are up and running except Transmission. Transmission is stood up but not yet connected.
• OpenVPN certificate downloaded and available, with the needed credentials.

EXPERIMENTS
• Transmission solo (stood up, partially connected to Sonarr)
• transmission-openvpn ~ had it installed but could not get it configured properly.
• Synology VPN Server

QUESTIONS
• How can you test? The tests I am finding hit the IP of the router (which is not protected) rather than testing Docker itself. For example, even with the VPN-VPNUL_CA_BT interface on, the IP shown is my actual ISP IP.
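One way to test Docker itself rather than the router: compare the public IP seen from inside a container with the one seen from the NAS shell. A sketch only; the container name transmission-stack2 and the echo-IP service ifconfig.me are assumptions, swap in your own:

```shell
# On the NAS, the live checks would be (requires curl inside the container):
#   curl -s https://ifconfig.me                                  # NAS egress IP
#   docker exec transmission-stack2 curl -s https://ifconfig.me  # container egress IP

# Helper that interprets the two results:
compare_ips() {
  nas_ip="$1"
  container_ip="$2"
  if [ "$nas_ip" = "$container_ip" ]; then
    echo "LEAK: container exits via the ISP"
  else
    echo "OK: container exits elsewhere (presumably the VPN)"
  fi
}

# Placeholder addresses for illustration:
compare_ips "203.0.113.10" "203.0.113.10"   # prints the LEAK line
compare_ips "203.0.113.10" "198.51.100.7"   # prints the OK line
```

If the container's curl reports the ISP address, the tunnel is not in use regardless of what the NAS-level VPN interface shows.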

--
Created a separate post so title remained on target @Rusty.
Looking through Gluetun now @Telos (and yes, my VPN is listed as supported, yea!)
 
Preference would be the entire LAN, but there are hardware limitations to adding a VPN to the router. All devices on the LAN (except Synology/Docker) run a VPN locally... need to get Docker behind a VPN.

And the end state includes LunaSea working and connected to the LunaSea iOS app.

-----

If the VPN is always on for all of Synology, that is the preference, and that appears to be the case when looking at Network > Network Interface. I want to ensure ZERO Docker activity if the VPN happens to be disabled (e.g., after a reboot).
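On the ZERO-activity requirement: if Gluetun ends up being the route, it documents a built-in firewall that drops any container traffic not going through the tunnel, which is exactly this kill-switch behavior. A sketch only; the variable name is from memory of the Gluetun wiki, so verify it there:

```yaml
# hypothetical fragment for the gluetun service's environment section
environment:
  - FIREWALL=on   # reportedly the default; blocks traffic outside the VPN tunnel
```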
 
Preference would be entire LAN
One way to do this is to configure a VPN client on your NAS and push all your devices towards the Internet via the NAS. In this case, though, your NAS and its services will have an outside-access problem. That can be sorted if you, for example, configure a separate device in your LAN to be the VPN client and redirect traffic towards that specific device (a VDSM VM, for example).

need to get Docker behind vpn
By this, I guess you mean all Docker containers? If so, you can configure one VPN docker container and use it to push all the other containers towards the net via that specific one. More info and examples in this article - qBittorrent via VPN docker container running on Synology NAS
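The pattern from that article, reduced to a minimal sketch (the service names and the single port here are illustrative, not taken from the article):

```yaml
services:
  vpn:
    image: qmcgaw/gluetun        # any VPN client image can play this role
    cap_add:
      - NET_ADMIN                # the tunnel needs this capability
    ports:
      - 9091:9091                # publish the client app's port on the VPN container
  torrentclient:
    image: linuxserver/transmission:latest
    network_mode: service:vpn    # share the VPN container's network stack
```

Within one compose file the reference is service:&lt;name&gt;; for a container from another stack it becomes container:&lt;name&gt;.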
 
Back after a few hiccups and disruptions.

CURRENT STATE
• Services are connected and running with Transmission.
• Two different VPNs were set up in Synology > Docker > Network. Both vanished... not sure how... need to set this up again.
• Actually, those are running under DSM 7.0 > Control Panel > Network (see image below).
• It would be nice if the VPN were always on, with all Docker traffic directed through the VPN and all traffic cut off if there is no VPN.

NEXT STEPS
• Get the VPN active with the Docker services
• Get LunaSea connected and working.

QUESTIONS
• How do you get the IP of Docker or a Docker service?
Various tutorials point to different areas, like the router, which is not the situation here; the router is not an option.
• Will the VPN solution used impact the way to get LunaSea working?

OTHER
• Configure one VPN docker container (OK, got it, and presume it is on the same network as the services). @Rusty, how do you use it to push all the other containers toward the net via that specific one? Yes, read the qBittorrent post (only need port 8080, or a single port of choice, covered?). Thoughts on Gluetun? Thoughts on setting up a VPN network in Docker and directing the other network through it (maybe too complicated, not worth it)? Thoughts on Transmission+VPN?

 

How do you get the IP of Docker or a Docker service?
All services running in Docker that use either the host or bridge network use the IP address of their host, in this case your NAS. If you run with a macvlan type of network, then you can configure a dedicated IP address for that specific Docker container, but that has its own limitations.

So basically, if you want to access some container, you access it at your NAS IP address and the custom port that you have configured.

Will the VPN solution used impact the way to get LunaSea working?
If we are talking about those containers reaching the Internet through a VPN tunnel, you will still be able to connect to them as normal. You have simply changed their Internet gateway by turning it into a VPN.

Configure one VPN docker container (OK, got it, and presume it is on the same network as the services)
Yes, on the same network as any other service that needs to utilize it.

How do you use it to push all the other containers toward the net via that specific one?
By using the following docker-compose parameter for the container in question:

Code:
network_mode: container:<nameOfVPNcontainer>

This way your container will use the VPN container as its "network".

Also more details on the matter here: qBittorrent via VPN docker container running on Synology NAS under the "STEP02 - adding your non-VPN container to use your VPN container network" heading

Nothing special. A bundled VPN and torrent client in one. Plenty of those out there.

Thoughts on Transmission+VPN?
Same as the previous

Thoughts on setting up a VPN network in Docker and directing the other network through it (maybe too complicated, not worth it)?
I would say, not worth it. Once you get the container up and running, it's a matter of adding more ports and building the target container with:
Code:
network_mode: container:<nameOfVPNcontainer>
 
Currently giving this a go... configuration not correct. It will either deploy as expected but not be reachable, or get stuck in starting mode.
It's a very simple container with not much to do here. Send the logs and maybe we can detect the problem. Also, send the compose you are using (if there are major changes) just to be clear whether there are some obvious "mistakes".
 
In the compose below, everything but the final gluetun service appears to be working as expected. Obvious mistakes are very possible. In terms of the gluetun compose, I have been experimenting, such as adding in the PUID/PGID, the default: {} for the global network (and related experiments), etc.
OpenVPN file resides in docker > media-center-config > gluetun

Code:
version: "2.4"
services:

  sonarr:
    image: linuxserver/sonarr:latest
    restart: always
    container_name: sonarr-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/sonarr:/media-store/sonarr:rw
      - /volume1/docker/media-center-config/sonarr:/config:rw
    networks:
      default: {}
    ports:
      - 8989:8989

  radarr:
    image: linuxserver/radarr:latest
    restart: always
    container_name: radarr-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/radarr:/media-store/radarr:rw
      - /volume1/docker/media-center-config/radarr:/config:rw
    networks:
      default: {}
    ports:
      - 7878:7878

  jackett:
    image: linuxserver/jackett:latest
    restart: always
    container_name: jackett-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/jackett:/media-store/jackett:rw
      - /volume1/docker/media-center-config/jackett:/config:rw
    networks:
      default: {}
    ports:
      - 9117:9117

  lidarr:
    image: linuxserver/lidarr:latest
    restart: always
    container_name: lidarr-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/lidarr:/media-store/lidarr:rw
      - /volume1/docker/media-center-config/lidarr:/config:rw
    networks:
      default: {}
    ports:
      - 8686:8686

  couchpotato:
    image: linuxserver/couchpotato:latest
    restart: always
    container_name: couchpotato-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/lidarr:/media-store/couchpotato:rw
      - /volume1/docker/media-center-config/couchpotato:/config:rw
    networks:
      default: {}
    ports:
      - 5050:5050

  transmission:
    image: linuxserver/transmission:latest
    restart: always
    container_name: transmission-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/raw:/media-store/raw:rw
      - /volume1/docker/media-center-config/transmission:/config:rw
    networks:
      default: {}
    ports:
      - 9091:9091
     
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    #cap_add:
     # - NET_ADMIN
    #network_mode: bridge
    ports:
      - 8888:8888/tcp # HTTP proxy
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
      - 8000:8000/tcp # Built-in HTTP control server
    # command:
    volumes:
      - /volume1/docker/media-center-config/gluetun:/gluetun:rw
    networks:
      default: {}
    environment:
      # More variables are available, see the Wiki table
      - PGID=100
      - PUID=1032
      - OPENVPN_USER=[redacted]
      - OPENVPN_PASSWORD=[redacted]
      - VPNSP=[redacted]
      - VPN_TYPE=openvpn

    restart: always     
 

networks:
  default:
    name: media-stack2
 
Everything 'except' gluetun.

Gluetun itself either launches as expected (when not altered in any way) but cannot be connected to, or it stalls in starting mode.
 
Ok but now I have a question. Why is that VPN container part of this stack? It's not being used by any other container as a gateway, so why keep it in there? Also, does it work as a standalone container?
Intention: Once up and running, the other services in this same stack would be running through its VPN.
Tweak: I have since added "network_mode: container:<gluetun>" to each service (added below the image).
Have not tried it as a stand-alone container, thinking it had to be part of the same stack... will try.


----
Individual container (obtained as new stack)


- These errors vanish quickly; how can you go back to view them?
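On the vanishing errors: Docker keeps a container's log after it stops or crash-loops, so startup errors can be read back any time with docker logs (DSM's Docker UI also shows them under the container's Details > Log tab). A tiny sketch that just composes the command string to run over SSH on the NAS:

```shell
# Build the "read back the last N log lines" command for a container;
# run the printed command on the NAS (add -f to follow the log live).
logs_cmd() {
  printf 'sudo docker logs --tail %s %s\n' "$1" "$2"
}

logs_cmd 100 gluetun   # -> sudo docker logs --tail 100 gluetun
```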
 
Have not tried as a stand-alone container thinking it had to be part of the same container
If it's a member of the same stack, then the proper usage is this:

network_mode: service:<gluetun>

If you are running it as a separate container, then you need to A) add all the ports from the client container(s) to your VPN container and delete them from the client ones, and B) run with the following setup:

network_mode: container:<gluetun>
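Sketched out, that separate-container variant would look roughly like this (names and the single port are illustrative):

```yaml
gluetun:                        # standalone VPN container owns every published port
  image: qmcgaw/gluetun
  cap_add:
    - NET_ADMIN
  ports:
    - 9091:9091                 # Transmission's UI port, moved here from the client

transmission:                   # client rides the VPN container's network stack
  image: linuxserver/transmission:latest
  network_mode: container:gluetun   # no ports: and no networks: of its own
```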
 
No intention of running as a separate container unless recommended.

Changed the line in gluetun as suggested (probably the wrong way, mind you) and am getting an error... this is in the one stack, multiple services.

Code:
gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    network_mode: service:<gluetun>
    ports:
      - 8888:8888/tcp # HTTP proxy
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
      - 8000:8000/tcp # Built-in HTTP control server
    # command:
    volumes:
      - /volume1/docker/media-center-config/gluetun:/gluetun:rw




 

No intention of running as a separate container unless recommended.
Ok, not gonna bother then

Changed the line in gluetun as suggested (probably the wrong way, mind you) and am getting an error... this is in the one stack, multiple services.
If one container will be a network for all the others, then remove the networks section from all the remaining containers if you are using network_mode.
 
Ok, not gonna bother then


If one container will be a network for all the others, then remove the networks section from all the remaining containers if you are using network_mode.
Removed the code below... same error: networks cannot be combined. Also tried a version removing the networks: default: {} from each service... same error.

networks:
  default:
    name: media-stack2

Code:
version: "2.4"
services:

  sonarr:
    image: linuxserver/sonarr:latest
    network_mode: container:<gluetun>
    restart: always
    container_name: sonarr-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/sonarr:/media-store/sonarr:rw
      - /volume1/docker/media-center-config/sonarr:/config:rw
    networks:
      default: {}
    ports:
      - 8989:8989

  radarr:
    image: linuxserver/radarr:latest
    network_mode: container:<gluetun>
    restart: always
    container_name: radarr-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/radarr:/media-store/radarr:rw
      - /volume1/docker/media-center-config/radarr:/config:rw
    networks:
      default: {}
    ports:
      - 7878:7878

  jackett:
    image: linuxserver/jackett:latest
    network_mode: container:<gluetun>
    restart: always
    container_name: jackett-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/jackett:/media-store/jackett:rw
      - /volume1/docker/media-center-config/jackett:/config:rw
    networks:
      default: {}
    ports:
      - 9117:9117

  lidarr:
    image: linuxserver/lidarr:latest
    network_mode: container:<gluetun>    
    restart: always
    container_name: lidarr-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/lidarr:/media-store/lidarr:rw
      - /volume1/docker/media-center-config/lidarr:/config:rw
    networks:
      default: {}
    ports:
      - 8686:8686

  couchpotato:
    image: linuxserver/couchpotato:latest
    network_mode: container:<gluetun>
    restart: always
    container_name: couchpotato-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/lidarr:/media-store/couchpotato:rw
      - /volume1/docker/media-center-config/couchpotato:/config:rw
    networks:
      default: {}
    ports:
      - 5050:5050

  transmission:
    image: linuxserver/transmission:latest
    network_mode: container:<gluetun>
    restart: always
    container_name: transmission-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/raw:/media-store/raw:rw
      - /volume1/docker/media-center-config/transmission:/config:rw
    networks:
      default: {}
    ports:
      - 9091:9091
      
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    #cap_add:
    #  - NET_ADMIN
    network_mode: service:<gluetun>
    ports:
      - 8888:8888/tcp # HTTP proxy
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
      - 8000:8000/tcp # Built-in HTTP control server
    # command:
    volumes:
      - /volume1/docker/media-center-config/gluetun:/gluetun:rw
    environment:
      # More variables are available, see the Wiki table
      - OPENVPN_USER=xxx
      - OPENVPN_PASSWORD=xxx
      - VPNSP=xxx
      - VPN_TYPE=openvpn
      - PGID=100
      - PUID=1032
      # Timezone for accurate logs times
      - TZ=America/New_York
    restart: always      
 
You haven't read it carefully. I wrote: from all the containers.
See above, also ran a version w/o the networks in each service.
Now a different error...


Code:
version: "2.4"
services:

  sonarr:
    image: linuxserver/sonarr:latest
    network_mode: container:<gluetun>
    restart: always
    container_name: sonarr-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/sonarr:/media-store/sonarr:rw
      - /volume1/docker/media-center-config/sonarr:/config:rw
    ports:
      - 8989:8989

  radarr:
    image: linuxserver/radarr:latest
    network_mode: container:<gluetun>
    restart: always
    container_name: radarr-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/radarr:/media-store/radarr:rw
      - /volume1/docker/media-center-config/radarr:/config:rw
    ports:
      - 7878:7878

  jackett:
    image: linuxserver/jackett:latest
    network_mode: container:<gluetun>
    restart: always
    container_name: jackett-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/jackett:/media-store/jackett:rw
      - /volume1/docker/media-center-config/jackett:/config:rw
    ports:
      - 9117:9117

  lidarr:
    image: linuxserver/lidarr:latest
    network_mode: container:<gluetun>   
    restart: always
    container_name: lidarr-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/lidarr:/media-store/lidarr:rw
      - /volume1/docker/media-center-config/lidarr:/config:rw
    ports:
      - 8686:8686

  couchpotato:
    image: linuxserver/couchpotato:latest
    network_mode: container:<gluetun>
    restart: always
    container_name: couchpotato-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/lidarr:/media-store/couchpotato:rw
      - /volume1/docker/media-center-config/couchpotato:/config:rw
    ports:
      - 5050:5050

  transmission:
    image: linuxserver/transmission:latest
    network_mode: container:<gluetun>
    restart: always
    container_name: transmission-stack2
    environment:
      - PGID=100
      - PUID=1032
    volumes:
      - /volumeUSB1/usbshare/raw:/media-store/raw:rw
      - /volume1/docker/media-center-config/transmission:/config:rw
    ports:
      - 9091:9091
     
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    #cap_add:
    #  - NET_ADMIN
    network_mode: service:<gluetun>
    ports:
      - 8888:8888/tcp # HTTP proxy
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
      - 8000:8000/tcp # Built-in HTTP control server
    # command:
    volumes:
      - /volume1/docker/media-center-config/gluetun:/gluetun:rw
    environment:
      # More variables are available, see the Wiki table
      - OPENVPN_USER=xxx
      - OPENVPN_PASSWORD=xxx
      - VPNSP=xxx
      - VPN_TYPE=openvpn
      - PGID=100
      - PUID=1032
      # Timezone for accurate logs times
      - TZ=America/New_York
    restart: always     

 

This last error is not the same.

Two problems: the client container blocks need to use network_mode: service:nameOfVPNcontainer, and the VPN container block needs to use network_mode: bridge.
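Folding those two fixes into the stack from earlier in the thread, the shape would presumably be (trimmed to one client; NET_ADMIN un-commented, client ports moved onto gluetun, networks: sections gone):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    network_mode: bridge            # the VPN container itself stays on bridge
    cap_add:
      - NET_ADMIN                   # must not be commented out
    ports:
      - 9091:9091                   # Transmission's port now lives here
      - 8888:8888/tcp               # HTTP proxy

  transmission:
    image: linuxserver/transmission:latest
    network_mode: service:gluetun   # literal service name, no angle brackets
    # no ports: and no networks: sections on client containers
```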
 
