
Tutorial Synology Reverse Proxy under the hood


pid /run/nginx.pid ... Investigation

Where I found nginx.conf (excluding Docker sources):
./etc.defaults/syslog-ng/patterndb.d/nginx.conf
./etc/syslog-ng/patterndb.d/nginx.conf
./etc/nginx/nginx.conf
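
A list like the above can be produced with something along these lines (just a sketch, run as root; the exact search command may differ):
Bash:
# sketch: search the whole filesystem for nginx.conf, dropping Docker-related hits
cd / && find . -name nginx.conf 2>/dev/null | grep -v docker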

Only this one, ./usr/share/init/nginx.conf, contains "pid /run/nginx.pid":
Bash:
exec /usr/bin/nginx $startArg -g 'pid /run/nginx.pid; daemon on; master_process on;'

and there is just a single place with the nginx.pid file:
Bash:
/var/run/

so
Bash:
ps -ef | grep nginx
responds:
Bash:
root     11577 28405  0 16:30 pts/16   00:00:00 grep --color=auto nginx
root     17213     1  0 16:06 ?        00:00:00 nginx: master process /usr/bin/nginx -g pid /run/nginx.pid; daemon on; master_process on;
http     30367 17213  0 16:08 ?        00:00:00 nginx: worker process
http     30368 17213  0 16:08 ?        00:00:00 nginx: worker process
http     30369 17213  0 16:08 ?        00:00:00 nginx: worker process
http     30370 17213  0 16:08 ?        00:00:00 nginx: worker process

So 17213 is the PID of the nginx master process.

Bash:
kill -9 17213
then again
Bash:
ps -ef | grep nginx
...to check whether any nginx process is still running or port 80 is still occupied - the next killing candidate
Bash:
netstat -tulpn | grep 80
Bash:
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      30367/nginx: worker
tcp6       0      0 :::80                   :::*                    LISTEN      30367/nginx: worker
tcp6       0      0 fe80::211:32ff:fe2:3260 :::*                    LISTEN      -
tcp6       0      0 fe80::211:32ff:fe2:3260 :::*                    LISTEN      -
udp        0      0 127.0.0.1:161           0.0.0.0:*                           12805/snmpd
udp6       0      0 fe80::211:32ff:fe21:123 :::*                                12623/ntpd
udp6       0      0 fe80::211:32ff:fe21:123 :::*                                12623/ntpd
killed
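
If a worker outlives the master and keeps the port occupied, it can be removed like this (a sketch; the PID is the one reported by netstat above, or match the remaining processes by name):
Bash:
# sketch: kill the leftover worker found by netstat, or every remaining nginx worker at once
kill -9 30367
pkill -9 -f 'nginx: worker'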

finally:
Bash:
synoservice --restart nginx

still the same sh.t
-- post merged: --

I can't figure out this error:
Bash:
1202#1202: signal process started
2021/11/04 02:30:23 [error] 1202#1202: open() "/run/nginx.pid" failed (2: No such file or directory)

both the directory and the file exist
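
A quick sanity check (just a sketch) to confirm that /run and /var/run really are the same place and that the pid file is readable:
Bash:
# sketch: on most Linux systems /run and /var/run point to the same directory
ls -ld /run /var/run
ls -l /run/nginx.pid
cat /run/nginx.pid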
 
Seems like you are running DSM7?

If so, can you check the output of sudo systemctl status nginx?

Do you have manually added conf files? If so, rename them and add a suffix to their filename like .bak to make the nginx.conf.run (the one actually used) ignore those files.

Also you might want to temporarily move /etc/nginx/sites-enabled/server.ReverseProxy.conf somewhere else to make sure to start with a minimal valid configuration

Then retest the config (in DSM7: sudo nginx -t -c /etc/nginx/nginx.conf.run) and restart the service (in DSM7: sudo systemctl restart nginx), then check the status again.
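
Put together as one block (DSM7 only, a sketch of the sequence just described):
Bash:
# DSM7: validate the config actually in use, restart, then check the status
sudo nginx -t -c /etc/nginx/nginx.conf.run
sudo systemctl restart nginx
sudo systemctl status nginx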
 
Bash:
synoservice -status nginx
service [nginx] status=[error]
required upstart job:
[nginx] is stop.
=======================================

Bash:
synoservice -enable nginx
service [nginx] start failed, synoerr=[0x0000]

Bash:
start nginx
Response:
start: Job failed to start

still in an error state when using the hard-enable, restart, or reload options.
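
At that point the logs are the next place to look - a sketch, assuming default log locations on DSM6 (adjust the paths if yours differ):
Bash:
# sketch: look for the reason of the failed start in the nginx and system logs (paths assumed)
tail -n 50 /var/log/nginx/error.log
grep -i nginx /var/log/messages | tail -n 20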

Check of listening services for nginx and port 443:
Bash:
netstat -tulpn | grep -E 'nginx|443'
is as expected
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 30368/nginx: worker

And this is the direct connection established from WAN via https://WAN-IP:NAS-PORT (port forwarding by the router):
Bash:
netstat -plant | grep NAS-PORT
tcp 0 0 NAS-IP:NAS-PORT 192.168.1.1:port ESTABLISHED 30369/nginx: worker

It seems that just the RP part of nginx is broken, totally.
 
no no, this is a 6.x version here
thx for the hint, already tested, as you can see
;)
-- post merged: --

DSM6? DSM7?
6
-- post merged: --

Also you might want to temporarily move /etc/nginx/sites-enabled/server.ReverseProxy.conf somewhere else to make sure to start with a minimal valid configuration

no manually created files there. There is just a single file:
synowstransfer.conf

contains:
Bash:
server {
    listen 5357 default_server;
    listen [::]:5357 default_server;


    location / {
            proxy_pass http://unix:/tmp/synowstransfer.sock;
    }
}
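
The DSM6 counterpart of the config test mentioned above would be (a sketch; /etc/nginx/nginx.conf is the entry point here):
Bash:
# DSM6: validate the full nginx configuration, including everything pulled in via include
sudo nginx -t -c /etc/nginx/nginx.conf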
 
The synowstransfer configuration exists on DSM7 as well: slightly different name, same content.

The RP rules are in /etc/nginx/app.d/server.ReverseProxy.conf in DSM6. I don't recall if there is a naming scheme for loading it (like in conf.d) or whether it simply loads everything in the folder (like in sites-enabled). I assume you already tried to see what happens if you move the file out and restart everything -> that's what made you write "seems that just the RP part of nginx is broken, totally", right?
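
Whether app.d is loaded wholesale or by a naming scheme can be checked directly in the main config (a sketch):
Bash:
# sketch: show which include directives pull in app.d and sites-enabled
grep -nE 'include .*(app\.d|sites-enabled)' /etc/nginx/nginx.conf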
 
this is my latest attempt to recreate the RP in DSM 6.2.4-25556 Update 2 (last available version)

1. I deleted the entire contents of the RP UI.

2. /etc/nginx/app.d/server.ReverseProxy.conf .... is empty

3. /var/run/nginx.pid ... is available and contains just the PID of the nginx master process = 17230 ... OK

4. /usr/bin/nginx ... is available as the main command for:
exec /usr/bin/nginx $startArg -g 'pid /run/nginx.pid; daemon on; master_process on;'

5. ps -ef | grep nginx ..... root PID 17230 is running, including several child PIDs for the http workers ... OK

6. /etc/nginx/sites-enabled/synowstransfer.conf ... contains only the WS setup, which is OK

7. synoservice -status nginx ...... everything is now OK
Service [nginx] status=[enable]
required upstart job:
[nginx] is start.
=======================================

8.
Bash:
netstat -tulpn |grep 443
tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 27843/docker-proxy
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 17230/nginx: master
seems to be ok

9.
Bash:
netstat -plant | grep Main-DSM-Port-for-HTTPS
tcp 0 0 0.0.0.0:PORT 0.0.0.0:* LISTEN 17230/nginx: master
great ... it's working

10. NAS RESTART
time for 4th :coffee: ristretto today

11.
Bash:
synoservice -status nginx
...... everything is now OK

12.
Bash:
ps -ef | grep nginx
... root PID 17230 is running, including several child PIDs for the http workers

13. So let's prepare the first RP record for the first candidate - DSM
Source: https, FQDN, 443, HSTS, HTTP/2
Destination: https, localhost, port
done

14. /etc/nginx/app.d/server.ReverseProxy.conf
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name FQDN; # valid

    ssl_certificate /usr/syno/etc/certificate/ReverseProxy/xxxxx/fullchain.pem; # checked, this is my valid cert
    ssl_certificate_key /usr/syno/etc/certificate/ReverseProxy/xxxxx/privkey.pem; # checked, this is my valid key

    add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload" always;

    location / {
        proxy_connect_timeout 60;
        proxy_read_timeout 60;
        proxy_send_timeout 60;
        proxy_intercept_errors off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade; # custom WS from UI
        proxy_set_header Connection $connection_upgrade; # custom WS from UI
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://localhost:NASPORT; # valid
    }

    error_page 403 404 500 502 503 504 @error_page;

    location @error_page {
        root /usr/syno/share/nginx;
        rewrite (.*) /error.html break;
        allow all;
    }
}
...as expected, no errors or misconfiguration there

15.
Bash:
curl -i https://fqdn
works .. this is my site

16.
Bash:
curl -i http://fqdn
HTTP/1.1 301 Moved Permanently
Location: https://fqdn
Content-Length: 0
Date: Fri, 05 Nov 2021 07:39:45 GMT
Server: Server
redirection also works as expected

17. LAST CHECK from the BROWSER

I'm back to life!!!

So I will create the rest of the RP records and run several tests on them.

I learned something new again.
 
So, another problem has been discovered - the port defined in the new RP Destination record is redirected to localhost:443 and not to the port defined in the RP record.
EDIT:
observed for Docker targets only
the config contains the right proxy_pass directive:
proxy_pass https://localhost:CONTAINERPORT
ofc, CONTAINERPORT is the container port published on localhost
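
One way to narrow this down (a sketch): hit the published container port directly on the NAS, bypassing the RP, and see whether the redirect to :443 is issued by the container itself.
Bash:
# sketch: follow redirects and show headers; CONTAINERPORT is the published port from the RP rule
curl -ikL https://localhost:CONTAINERPORT/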
 
For the bridge Docker network destination, specified by 172.17.0.0/16 & gateway 172.17.0.1 in Docker Network:

The RP config doesn't work with the direct local IP of the Docker container target:
proxy_pass https://172.17.0.X:CONTAINERPORT

in reality, the DSM host:443 target is reached instead

so routing to the Docker network doesn't work; it must be managed by nginx somewhere (because the DSM static routing setup doesn't work in this case)

EDIT

a ping from the DSM host to the bridge Docker network destination is working,
for each address in the 172.17.0.0/16 subnet
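
To double-check the routing side of this (a sketch, using standard tooling):
Bash:
# sketch: confirm the host has a route to the bridge network and check what Docker thinks the bridge is
ip route | grep 172.17
docker network inspect bridge | grep -E 'Subnet|Gateway'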
 
So I checked the nginx logs - I have now connected the nginx access/error logs via SFTP into my PowerQuery data kingdom (added value of this issue).



1. When I open a site (http referrer) through the RP service, I can see the same behaviour from the nginx side in the access log:
- nginx registers the requests from remote access (identified by my router IP, ofc) with HTTP 200 status = success
- then nginx finishes these requests without error, even though they are routed to a port different from 443 in the RP settings

2. This behaviour is confirmed by the absence of records in the nginx error log

still no evidence of the root cause
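
For live-watching what the RP logs for the affected FQDN, something like this works (a sketch; the access log path is assumed):
Bash:
# sketch: follow the access log and filter for the FQDN of the broken RP rule
tail -f /var/log/nginx/access.log | grep fqdn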
 
I'm back to life!!!
That's great news! Congratz! (I think :))

So, another problem has been discovered - the port defined in the new RP Destination record is redirected to localhost:443 and not to the port defined in the RP record.
Is this the chain? WAN-https -> RP-https (NAS) -> container-http

I have seen containerized applications with enforced https that responded with a redirect, which will make your browser move to that url.

You might want to investigate the behavior with curl -ivL $PUBLIC_URL

RP config doesn't work with direct local IP from the Docker container target:
proxy_pass https://172.17.0.X:CONTAINERPORT
Please do not use container IPs; use the published port on the NAS itself. Container IPs are ephemeral and will change when you update the image.
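
The published ports to use instead can be listed per container (a sketch):
Bash:
# sketch: show the host->container port mappings; use the host side in proxy_pass
docker ps --format 'table {{.Names}}\t{{.Ports}}'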
 
thx, mate
RE: container IP (it was just a check for this case), never used and never will be used by me. ;) I like my mental health.

RE: curl -ivL fqdn:
# curl -ivL https://fqdn
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
> GET / HTTP/2
> Host: fqdn
> User-Agent: curl/7.54.0
> Accept: */*
under the rules of the Handshake protocol:

enum {
    hello_request(0), client_hello(1), server_hello(2),
    certificate(11), server_key_exchange (12),
    certificate_request(13), server_hello_done(14),
    certificate_verify(15), client_key_exchange(16),
    finished(20), (255)
} HandshakeType;

struct {
    HandshakeType msg_type;    /* handshake type */
    uint24 length;             /* bytes in message */
    select (HandshakeType) {
        case hello_request:       HelloRequest;
        case client_hello:        ClientHello;
        case server_hello:        ServerHello;
        case certificate:         Certificate;
        case server_key_exchange: ServerKeyExchange;
        case certificate_request: CertificateRequest;
        case server_hello_done:   ServerHelloDone;
        case certificate_verify:  CertificateVerify;
        case client_key_exchange: ClientKeyExchange;
        case finished:            Finished;
    } body;
} Handshake;

except for the steps:
certificate_request(13)
certificate_verify(15)
which is OK in this case (same results for my sites hosted on different sites/services)
-- post merged: --

Is this the chain? WAN-https -> RP-https (NAS) -> container-http

in some cases yes
 
Yupp, the TLS handshake looks fine.

I was hoping for details between the TLS information and the document itself that might shed light on what sort of 301/302-level redirects take place on the URLs that cause a "wrong redirect". Just in case, to check whether the containerized application itself gets "creative" in some undesired ways *cough*
 
I've run out of ideas,
but thx for the support again. This needs to wait for Syno 2nd level support, because it is out of range of the 1st level.
But it will take 2 weeks:
- because someone needs to understand that this issue is not about "please shut down your NAS"
- I have to explain to them that I will not send them any DAT export from DSM (= an entire NAS dump), just the exact files
- and persistently insist that I'm asking for 2nd level + assisted TeamViewer (under my control), w/o direct access to the NAS for some virtual person.

last time I dealt with the strange behaviour of SynoDrive.
 
Practically, the only thing they should need are the files located in /etc/nginx (and the targets of the symlinks in that folder).

Actually, as I understood that the problem is limited to RP rules, /etc/nginx/app.d/server.ReverseProxy.conf should be everything they need. Worst case, /etc/nginx/nginx.conf and all files declared via include directives in it as well.

The truth must be somewhere in those files.

Just FYI and probably unrelated, but who knows: on 30.09.21, a Let's Encrypt intermediate CA expiration caused validation errors on the LE chain of trust on systems with OpenSSL < v1.1.0 (check with: openssl version). LE uses two trust chains, and with older OpenSSL EACH of them needs to be valid for the validation to succeed. In OpenSSL >= 1.1.0 it is sufficient that at least one of the chains can be validated. With older OpenSSL versions the validation will fail if the expired chain still exists in the "truststore", even when a valid chain exists. Neither Docker containers nor Synology packages are affected by this, as they provide their own private OpenSSL version, which usually is > 1.1.0.
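
Both points can be checked quickly (a sketch; fqdn is a placeholder for the real hostname):
Bash:
# sketch: local openssl version, plus the certificate the RP actually serves for the FQDN
openssl version
openssl s_client -connect fqdn:443 -servername fqdn < /dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates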
 
Re: the sources of the problem - my concern is about routing within DSM.
Because ping is working (tested): I can ping from the host interface into a Docker container IP, incl. the Docker gateway. So the internal Docker setup could be affected
(the Docker daemon, or even a damaged docker-proxy). I don't use a proxy, but for some faulty reason it could happen.
The docker-proxy listener is watching port 8443.
omg, I need to check both of them. Thx for the kicking.
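
A quick way to check both at once (a sketch, same tools as used earlier in this thread):
Bash:
# sketch: confirm the Docker daemon responds and see which ports docker-proxy is forwarding
docker info > /dev/null && echo "docker daemon OK"
ps -ef | grep docker-proxy
netstat -tulpn | grep docker-proxy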

Re: LE: reasonable, but I don't have LE. I have a wildcard cert.
 
