Please Help: Issue with Nginx Proxy and SSL Certificate!

Hi all, I've been trying to install Nginx Proxy Manager and having major difficulties getting NPM set up with Let's Encrypt. I have provided pictures of my error messages, and I have also copied and pasted the text for your ease of reference near the bottom of this post.

My current set-up is below…this is BEFORE introducing Nginx Proxy Manager into the equation. So this is the baseline.
  • I have Synology's internal Reverse Proxy already working beautifully with the following applications: Jellyfin, Bitwarden
  • I am using a Synology DDNS domain name. Assume the domain name is [apple.synology.me]. Assume the subdomains for these 2 applications are: [bit.apple.synology.me] and [jel.apple.synology.me]
  • Using Synology's "Certificates" manager in the Control Panel, I have one Let's Encrypt (LE) certificate set up for both applications above. The LE domain name is [apple.synology.me] and for the subject alternative name (SAN) I have put down *.apple.synology.me, because there is a note in Synology that a wildcard is accepted.
  • The LE certificate is already mapped to these two services I set up using Synology's internal reverse proxy in the "Applications Portal" section of the control panel.
  • This setup has worked for me thus far. However, I know that Nginx Proxy Manager (NPM) (or Caddy, or Traefik) provides additional customization. In particular, one niggling issue I have is that for Bitwarden, if I want to hide the admin panel (/admin), I cannot do that using Synology's built-in Reverse Proxy. So I wanted to use NPM to have more flexibility to do more things.
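For reference, the kind of rule I'm hoping to add is something like the snippet below. This is just a sketch based on standard nginx syntax (not tested; in NPM it would presumably go into the proxy host's "Custom Nginx Configuration" box):

```nginx
# Hide Bitwarden's admin panel from the outside world.
location /admin {
    return 404;
}
```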
My issue is below:
  • Using Portainer, I have installed NPM and have it working (I'm using the often recommended JC21 version). I was not able to get NPM set up with the MariaDB database, so I just installed the SQLite version, and it's working fine and I can log in.
  • As Synology by default uses ports 80/443 for its own reverse proxy, I used different ports for NPM. Assume I used 8882/6443 for 80/443 respectively. The admin panel is on 8181.
  • Using my Asus Merlin Router, I port forwarded the external port 443 to internal port 6443 and likewise 80->8882. So I believe the router should be sending data directly to NPM.
  • In the NPM container (I’m using Portainer’s GUI to manage it), NPM has its own network (nginx_app_1). This was made automatically when I installed NPM using a docker compose file online.
  • Before I create any proxy hosts in NPM I wanted to have SSL certs added. Using the “SSL Certificates” section of NPM there are two options:
    • 1) Add SSL certificate from Let’s Encrypt, OR;
    • 2) Use ‘Custom’ to import my existing SSL certificate for [apple.synology.me]
With either option above, I am facing major issues that I have not been able to resolve, and I am looking for your help. I have more details on the errors below, and the pictures in the attachments at the bottom show these errors.

In Option 1, when I try to request an SSL certificate for the [apple.synology.me] domain, it doesn't work. I get an "Internal Error" message with the following error message in a red box. I've marked XXX to remove any personal info.
Error: Command failed: /opt/certbot/bin/certbot certonly --non-interactive --config "/etc/letsencrypt.ini" --cert-name "npm-30" --agree-tos --email "[email protected]" --preferred-challenges "dns,http" --domains "apple.synology.me"
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator webroot, Installer None

An unexpected error occurred:
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='acme-v02.api.letsencrypt.org', port=443): Max retries exceeded with url: /directory (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')) Please see the logfiles in /var/log/letsencrypt for more details.
at ChildProcess.exithandler (node:child_process:326:12)
at ChildProcess.emit (node:events:369:20)
at maybeClose (node:internal/child_process:1067:16)
at Process.ChildProcess._handle.onexit (node:internal/child_process:301:5)

Can you please tell me why this is? Is it because NPM can't request an SSL cert for a Synology DDNS address? Is it because my Synology already has an SSL cert for the exact same domain [apple.synology.me] and I have to delete this first?

The NPM error log shows two kinds of errors, with multiple iterations of each. They look like one of the two below:
  • 2021/05/15 11:54:51 [error] 265#265: *6 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: nginxproxymanager, request: "GET /api/ HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "127.0.0.1:81"
  • 2021/05/15 15:18:41 [error] 21252#21252: *7746 upstream timed out (110: Connection timed out) while connecting to upstream, client: 192.168.16.1, server: bit.apple.synology.me, request: "POST /identity/connect/token HTTP/2.0", upstream: "http://192.168.50.67:5005/identity/connect/token", host: "bit.apple.synology.me"
In Option 2, when I try to import SSL certs from Synology, I first export the SSL cert from Synology. Synology provides me 3 files: 1) cert.pem; 2) chain.pem; 3) privkey.pem. Then I add a "Custom" certificate and do the following: for the name, it's "Bitwarden"; for the "Certificate Key" I import "privkey.pem"; for the "Certificate" I import "chain.pem". I do not import the third file, "cert.pem", into the "Intermediate Certificate" setting.

When I do this, the settings get saved as an SSL cert and then I would make a proxy host and use the SSL certificate I just created in this step.

So now when I create a proxy host for [bit.apple.synology.me], I have the following settings: bit.apple.synology.me is the source; the destination is my NAS IP:[PortNumber]. The SSL cert I choose in the dropdown is the one I imported from Synology. I turn on the following options: "Force SSL", "HTTP/2 Support", "HSTS Enabled" and "HSTS Subdomains". Once I do that, the status of the proxy is "Offline" in red. When I hover over it, the error message is as follows:

error: command failed: /usr/sbin/nginx -t -g "error_log off;" nginx: [emerg] SSL_CTX_use_PrivateKey ("/data/custom_ssl/npm-26/privkey.pem") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) nginx: configuration file /etc/nginx/nginx.conf test failed

Can someone please help me with these errors? I've tried my best to read as many sources as possible, but now I am stuck. I want to avoid command line work as much as possible; I have Portainer installed and can do any work in the containers through it, if that works.

I would like to get SSL certs working with NPM so I can stop using Synology Reverse Proxy. I'd greatly appreciate your help.
 

Attachments

  • Admin Issues Edited.png
  • Admin Issues Edited 2.png
  • Admin Issues Edited 3.jpg
You haven't done anything wrong with your setup. The problem you are having is with the LE cert configuration that certbot uses to issue your cert.

Is it because NPM can’t request an SSL cert for a synology DDNS address?
That's one reason, yes. You are not the root owner of that domain so a 3rd party will not be able to issue a cert using any LE container with certbot.
Is it because on my Synology I already have an SSL cert for the exact same domain [apple.synology.me] and I have to delete this first?
No, this is not related.

If you look at the error,

Code:
 --config "/etc/letsencrypt.ini" --cert-name "npm-30" --agree-tos --email "[email protected]" --preferred-challenges "dns,http"

Here is the mention of the letsencrypt.ini file. To get to it, you will have to make a change to your NPM compose file and add one volume bind first, so you can see the files on your NAS.

Code:
    volumes:
      - ./data:/data
      - /your/local/nas/location:/etc/letsencrypt

This will mount the /etc/letsencrypt location. Inside it, you will find the INI file that needs to be edited. One of the parameters is the email address, so if you already have that set, great; the next bit is the challenge mode.
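In context, the whole service block would look something like this. This is only a sketch: the image tag is the common jc21 one, the ports are the ones from your post, and the paths would need adjusting to your setup:

```yaml
version: "3"
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "8882:80"    # HTTP (router forwards external 80 here)
      - "6443:443"   # HTTPS (router forwards external 443 here)
      - "8181:81"    # NPM admin UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt   # new bind: exposes letsencrypt.ini on the NAS
```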

It is listed as DNS and then HTTP. You will not be able to use the DNS challenge, because you don't control the root domain that you want your cert running on (this is why it only works from DSM), so it should drop down to the HTTP challenge.

To have that going, you will need port 80/443 configured and forwarded from your router to your container. If that is not the problem, LE should be able to generate certs.
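To see what the HTTP challenge actually needs, here is a tiny local simulation of the round trip. Port 8123 and the file names are invented for the demo; the real LE validator always dials port 80 on your public IP, which is why the forward matters:

```shell
# Local simulation of the HTTP-01 round trip (demo port and file names).
mkdir -p webroot/.well-known/acme-challenge
echo "token-contents" > webroot/.well-known/acme-challenge/test-token

# Stand-in for NPM serving the challenge directory:
( cd webroot && exec python3 -m http.server 8123 >/dev/null 2>&1 ) &
echo $! > server.pid
sleep 1

# LE's validator does the moral equivalent of this fetch; if the router
# does not forward port 80 through to NPM, this is the step that fails.
curl -s http://127.0.0.1:8123/.well-known/acme-challenge/test-token

kill "$(cat server.pid)" 2>/dev/null
```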

So far, this was just an explanation, but you should look into the /var/log/letsencrypt/letsencrypt.log file (using Portainer's "console" icon for the NPM container) to check for more details.

If you look at what you have shared here, you can see the current problem, and that is that you have hit an LE limit: ConnectionError: HTTPSConnectionPool(host='acme-v02.api.letsencrypt.org', port=443): Max retries exceeded.

You can read more on limits on the LE site and in their FAQ. It is also visible that the method in use was not the DNS challenge but rather HTTP. So the real question now is: why did it fail so many times? I'm guessing the answer is in the LE logs sometime before the exceeded number of attempts.

My guess is port forward and "visibility" on port 80/443, but I could be wrong.

In Option 2, when I try to import SSL certs from Synology, I first export the SSL cert from Synology. Synology provides me 3 files: 1) cert.pem; 2) chain.pem; 3) privkey.pem. Then I add “Custom” certificate and do the following: For the name its “Bitwarden” For the “Certificate Key” I import “privkey.pem”. For the “Certificate” I import “chain.pem”. I do not import the third file – “cert.pem” into the “Intermediate Certificate” setting.
Regarding this: again, you have done it correctly. I'm still not 100% sure why you get any errors on this front, but it looks like there is some problem with a mismatched key value (again, not sure why).

My recommendation would be to try switching to a custom domain for your specific needs and go for a wildcard certificate. You will solve all your problems with that method. Yes, it will cost you to run your own domain name, but I think $10-20/year (or two) is nothing to have your own domain name and your own wildcard cert.

In that case, you could then set up your own LE generation (even using your own separate LE container), or it should work via NPM as well. The point is that LE Synology certs will still only be possible on the DSM side, and by the looks of it, exporting them and importing them into NPM is giving you problems.

I haven't tried to run Syno domain certs via NPM, so I can't say if this error is expected, but I would suggest trying to move away from them. That will be one less layer keeping you tied to the Syno brand, considering that you are obviously pushing for a full Docker setup (and that is fine).

More info on running your own wildcard via LE (using your own custom domain!): Let's Encrypt + Docker = wildcard certs
 
Then I add “Custom” certificate and do the following: For the name its “Bitwarden” For the “Certificate Key” I import “privkey.pem”. For the “Certificate” I import “chain.pem”. I do not import the third file – “cert.pem” into the “Intermediate Certificate” setting.
privkey.pem = private key of the certificate.
cert.pem = public key of the certificate; it must belong to the same certificate and is used to verify the identity of the server and to exchange a static secret for the session, using asymmetric encryption which can only be decrypted with privkey.pem (as such, only understood by the server that has the matching privkey.pem).
chain.pem = intermediate certificates, without the cert.pem.

Sometimes there is a fullchain.pem, which includes all entries from chain.pem and cert.pem. If a server application does not provide an extra field for the chain.pem, you typically need to use the fullchain.pem as the certificate. I hope this makes sense.

Please do import cert.pem as the certificate and chain.pem as the intermediate certificate. A chain of trust can only be verified if the chain from the machine-specific certificate and all the intermediate certificates (if any exist at all) up to the root certificate is "complete".
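If you want to verify which files belong together, openssl can show you: a certificate and key match exactly when they carry the same public key. A quick sketch (the throwaway key/cert generated here are only for the demo; point the last commands at your exported Synology files instead):

```shell
# Demo only: create a throwaway key + self-signed cert that match.
openssl req -x509 -newkey rsa:2048 -keyout privkey.pem -out cert.pem \
  -days 1 -nodes -subj "/CN=demo.example" 2>/dev/null

# Extract the public key from each side and compare them.
cert_pub=$(openssl x509 -in cert.pem -pubkey -noout)
key_pub=$(openssl pkey -in privkey.pem -pubout 2>/dev/null)

if [ "$cert_pub" = "$key_pub" ]; then
  echo "match"
else
  echo "mismatch (this is what triggers X509_check_private_key errors)"
fi

# A fullchain.pem, when an application asks for one, is simply
# cert.pem followed by chain.pem:
#   cat cert.pem chain.pem > fullchain.pem
```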
 
privkey.pem = privat key of the certificate.
cert.pem = public key of the certificate, must belong to the same certificate and is used to verify the identity of the server and to exchange a static secret for the session, using asymetric encryption which can only be decrypted with the privkey.pem (=as such only understood by the server that has the matching privkey.pem)
chain.pem = intermediate certificates without the cert.pem

Sometimes there is a fullchain.pem, which includes all entries from chain.pem and cert.pem. If a server application, does not provide an extra field for the chain.pem, you typicaly need to use the fullchain.pem as certificate. I hope this makes sense.

Please do import cert.pem as certificate and the chain.pem as intermediate certificate. A chain of trust can only be verified if the chain from the machine specific certificate and all the intermediate certificates (if any exist at all) up to the root certificate are "complete".
This fixed it. Thanks.
 
Thanks so much guys, @one-eyed-king!! I followed your explanation and now in Nginx Proxy Manager I have a green "online" status showing for the two services I have set up; assume [bitw.apple.synology.me] and [jelly.apple.synology.me] respectively.

But now I'm getting another issue: a 504 Gateway Time-out error. I'm trying to figure out what the problem is step by step...
  • In Portainer, the Nginx container is showing a 'healthy' green status and there are no errors in the container logs.
  • My router configuration hasn't changed from before, so external port 443 -> internal port 6443, and 80 -> 8882 ...so Nginx should be receiving the traffic.
  • In one rule for Synology's firewall I have ports 6443 and 8882 opened only to my source IP address (my default gateway). In a separate firewall rule I have ports 80 and 443 opened as well.
  • When I type in my bitwarden instance address by IP address, I can access it just fine [192.168.xx.xx]:5005, the problem is only with the DDNS address [bit.apple.synology.me].
  • When I created the Nginx container using Docker Compose in Portainer's Stacks section, it automatically created a docker network "nginx_default". My Jellyfin service is currently set on the 'host' docker network, and Bitwarden is on the 'bridge' network. Do these different networks matter here?

Related questions:
  1. Why am I getting a 504 Gateway Time-out error? Something must be wrong with the setup. Do I need to edit the Nginx config file with something? If so, what should it be? When I get these 504 Gateway Time-out errors, the padlock shows up for both sites, which means the SSL should be working, right? I also see the favicon show up for Jellyfin and Bitwarden, but other than that it's a blank page with 'openresty' under the 504 Gateway Time-out message. I have copied the error messages I saw in the NPM error log below.
2021/05/16 10:42:26 [error] 9766#9766: *39418 upstream timed out (110: Connection timed out) while connecting to upstream, client: 192.168.16.1, server: bitw.apple.synology.me, request: "GET / HTTP/2.0", upstream: "http://192.168.50.83:5005/", host: "bitw.apple.synology.me"
2021/05/16 10:42:39 [error] 9766#9766: *39418 upstream timed out (110: Connection timed out) while connecting to upstream, client: 192.168.16.1, server: jelly.apple.synology.me, request: "GET / HTTP/2.0", upstream: "http://192.168.50.83:8096/", host: "jelly.apple.synology.me"
2021/05/16 10:43:56 [error] 9766#9766: *39418 upstream timed out (110: Connection timed out) while connecting to upstream, client: 192.168.16.1, server: bitw.apple.synology.me, request: "GET /favicon.ico HTTP/2.0", upstream: "http://192.168.50.83:5005/favicon.ico", host: "bitw.apple.synology.me", referrer: "https://bitw.apple.synology.me/"
2021/05/16 10:44:09 [error] 9766#9766: *39418 upstream timed out (110: Connection timed out) while connecting to upstream, client: 192.168.16.1, server: jelly.apple.synology.me, request: "GET /favicon.ico HTTP/2.0", upstream: "http://192.168.50.83:8096/favicon.ico", host: "jelly.apple.synology.me", referrer: "https://jelly.apple.synology.me/"
  1a) Note that I am using the SQLite database version... I don't think that has anything to do with this error (which seems networking related), but just wanted to remind folks here in case it does have some effect. I've copied my config file below. I have just XXXX'd out the RSA and public keys.
{
  "database": {
    "engine": "knex-native",
    "knex": {
      "client": "sqlite3",
      "connection": {
        "filename": "/data/database.sqlite"
      }
    }
  },
  "jwt": {
    "key": "-----BEGIN RSA PRIVATE KEY-----\XXXXXXXc=\n-----END RSA PRIVATE KEY-----",
    "pub": "-----BEGIN PUBLIC KEY-----\XXXXXX\n-----END PUBLIC KEY-----"
  }
}

2. Does the 504 Gateway Time-out error have to do with the docker network setup? Do I need my Jellyfin and Bitwarden apps on the same docker network as NPM (which in this case is "nginx_default")?

Some unrelated questions:
  1. As I have NPM set up on ports 6443 and 8882, does my Synology still need ports 443 and 80 open? Or can I close them up? (They are open right now.) As my router is forwarding 443/80 traffic to 6443/8882 as described above, I was thinking that 443/80 on the NAS don't need to be open, and I like to close down as many ports as possible.
  2. This question is for my reference and understanding, but I hope it will help others too. Are there any disadvantages in getting NPM to use SQLite as the database? I have seen online that so many people have problems getting NPM to work with MariaDB, all kinds of gateway errors. SQLite seemed to solve it for me. Are there any security-related implications of using SQLite vs a proper database? Just want to know if I made the right trade-off...
 

Attachments

  • NginxProxyedited.jpg
In one rule for Synology's firewall I have ports 6443 and 8882 opened only to my Source IP address (my default gateway). In a separate firewall rule I have ports 80 and 443 opened to.
When I type in my bitwarden instance address by IP address, I can access it just fine [192.168.xx.xx]:5005, the problem is only with the DDNS address [bit.apple.synology.me].
I'd be surprised if this works. If you forward a port from the router to your NAS, the original source IP is retained. You will need to allow 0.0.0.0/0 as the source IP.

[update]: this is only true if the container is in host or macvlan mode. For bridge networks, the Syno firewall (if enabled) needs to allow the IP range of the bridge network NPM is running in as the source IP range.

When I created the Nginx container using Docker Compose in Portainer's Stack section, it automatically created a docker network "nginx_default". My Jellyfin service is currently set on the 'host' docker network. And bitwarden is on the 'bridge' network. Do thes

Judging by your screenshots, I assume the IP 192.168.50.83 belongs to the DS and the ports are the host ports of a port mapping. If this is true, you are not using any private container networks from NPM to the target containers. As long as the host port of the port mapping actually exists and is wired to the correct container port, it should work.
As I have NPM set up on ports 6443 and 8882 based on my setup, does my Synology still need ports 443 and 80 open? Or can I close them up? (They are open right now). As my router is forwarding 443/80 traffic to 6443/8882 as described above, I was thinking that 443/80 on the NAS dont need to be open and I like to close down as many ports as possible.
If you are not using Web Station or any other reverse proxy rules, you can block these ports, or at least limit the source IPs to your local LAN. But then again: if they are not accessible from the internet, nothing from outside your LAN should be able to access them anyway.
 
I'd be surprised if this works. If you forward a port from the router to your nas, the original source ip is retained. You will need to allow 0.0.0.0/0 as source ip.

[update]: this is only true if the container is in host or macvlan mode. For bridge networks, the syno firewall (if enabled) need to allow the bridge networks ip range, which npm is running in, as source ip range.
It seems I mixed two topics. The 0.0.0.0/0 (or a country-specific range) as the source IP is still true for incoming WAN traffic being forwarded from the router to the NAS (and therefore for access to the NPM container).

The "update" addresses the reverse-proxied traffic from NPM to the target service. Though I hadn't thought it through before posting :) Depending on whether you use host/macvlan, the source IP will become the host or NPM container IP. If the container is in a bridge network, the IP of the bridge network's gateway (or, to be safe, the whole subnet it is in) needs to be allowed in the Syno firewall.
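To find the exact range to allow, you can ask Docker for the network's subnet and gateway (network name "nginx_default" taken from your post; adjust if yours differs):

```shell
# Print the subnet and gateway of the bridge network NPM sits in.
# Whatever this prints is what the Syno firewall rule needs to allow.
docker network inspect nginx_default \
  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
```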

I hope my update didn't lead to any confusion.
 
Yes just to close the loop on this for everyone else's benefit (for those who may come across this in the future). Many thanks to @Rusty and @one-eyed-king for their help on this.

1) For my first problem: the issue with Let's Encrypt (LE) certificates is that LE is finicky with NPM as of right now. As @Rusty pointed out, if you want to use LE certificates for synology.me domains, you cannot automatically request certificates from Let's Encrypt using NPM. Based on my first post above, out of the TWO options in NPM for using SSL certificates, you must (as far as I know and have learned thus far) import your Let's Encrypt certificate for your synology.me domain from your Synology NAS. This means you can only use option 2 when using a synology.me domain.

Using the “SSL Certificates” section of NPM there are two options:
  • 1) Add SSL certificate from Let’s Encrypt, OR;
  • 2) Use ‘Custom’ to import my existing SSL certificate for [apple.synology.me]

After that, when you export your SSL LE certificate from Synology, you will get a zip file with 3 files inside: privkey.pem, cert.pem, and chain.pem. The thing to keep in mind is importing the correct LE certificate file into the correct section in NPM. @one-eyed-king did a fantastic job explaining this part above, and I quote it here:
import cert.pem as certificate and the chain.pem as intermediate certificate. A chain of trust can only be verified if the chain from the machine specific certificate and all the intermediate certificates (if any exist at all) up to the root certificate are "complete".
The privkey.pem file, which he didn't mention, gets imported into the 'Certificate Key' section of NPM.

2) My second problem, the 504 Gateway Time-out error, was because of my Synology's firewall settings. I had the Syno firewall really locked down; I had only allowed access to web applications (i.e. those on different ports) from my NAS IP.

The problem with this is that when you create NPM (using Portainer and a common docker compose file found online), it automatically creates the nginx_default network, which you can see in the attached screenshot, and this has its own IP address and gateway.

[screenshot: 1621295340076.png, showing the nginx_default network's IP address and gateway]

Thanks to @one-eyed-king, who made me realize that the way to solve the gateway problem (in my case) was to go into my Synology firewall settings and add the gateway IP address listed there (192.168.16.1) as an entry in my firewall rules, giving it permission to access all web applications (in this case Jellyfin and Bitwarden).

After that, the web applications both started working with my subdomain names!

Hope this helps others :)
 
