nginx volume mount and layer 4 reverse proxying

Can you repaste the template file as code? When I'm not at work I use my couch laptop, which has a 12.5" screen with a miserable resolution.

Make sure to add the nginx container to the other containers' network as well. And keep in mind: localhost in a container is local from the container's perspective, not the host's!

The location snippets I provided expect the same "path" to be present at the target. Though you wanted to use subdomains, which will make your whole reverse proxy experience way more pleasant than path-based reverse proxying (especially if you re-map paths). Also, Portainer always had an odd effect for me in the past: the reverse proxy worked reliably until the Portainer container got restarted - in order for it to work again I had to restart the nginx container. This makes absolutely no sense to me, but this is how it was.
 
I've built out my nginx.conf as attached. The http {} block works, as that is just proxying to my backend web apps on layer 7. Quite happy with that. Now, one of my biggest goals of setting up nginx (as stated in my first post) is to also try the layer 4 proxying to make my Synology VPN Plus available through this nginx, so my web apps and VPN are both available on port 443 via the internet, each on a separate subdomain. With some googling I discovered that TCP traffic needs to be specified in a separate stream {} block. But... apparently the servers in the stream {} block and the http {} block cannot listen on the same port unless they listen on separate IP addresses. I did not expect to run into that issue, as I initially saw that it's possible to create as many 'virtual servers' in the http {} block as you want, all listening on 0.0.0.0:443.

I was planning to forward port 443 on my router to the docker container's IP address. How should I work around this problem...?
 

Attachments

  • nginx.conf.txt
    13.6 KB
For sure you need to use a stream block to intercept the whole traffic on TCP level and use preread to determine which domain should be forwarded to which target. You can add an additional server block (or http block, not sure from the top of my head) which listens on a different port, which can be used as the target in the preread block. Since forwarding from preread to the specific server blocks is handled strictly inside the container, it does not require additionally mapped (aka published) ports for the container.

This way you have a single "dispatcher" that does TCP passthrough to all target services, including one or more servers in the same nginx container that listen on different ports (or the same port !=443 with different server_names) and can be used for http reverse proxying based on subdomains and/or paths.

I hope this makes sense :)

Update: Actually I would not perform TLS termination on both the dispatcher level and the target servers. You will break the TLS context if you try to do both. Even though it's more boilerplate, I moved my TLS definitions into the target server definitions.

I took some parts of your config to create an example snippet:
Code:
...
stream {

  map $ssl_preread_server_name $name {
      vpn.mydomain.net router3.mydomain.net:1195;
      #portainer.mydomain.net portainer;
     #nas3.mydomain.net nas3;
      default                default }

# Add upstreams for portainer and nas if default doesn't handle it. On second
# thought: creating an upstream for a single target doesn't make much sense.
# You should be able to safely replace the second default in the preread
# block with localhost:8443.
upstream default {
    server localhost:8443;
}

  server {
    listen 443 ssl;
    proxy_pass $targetBackend;
    ssl_preread on;
  }
}

...
http
{
    map $http_upgrade $connection_upgrade
    {
        default upgrade;
        ''      close;
    }
    ...
    server
    {
        # set DNS resolver as Docker internal DNS
        resolver 192.168.1.194 valid=10s;
        resolver_timeout 5s;
        server_name portainer.mydomain.net;

        # NGINX listener (at the container level)
        listen 8443 ssl;

        # Supported HTTPS protocols
        ssl_protocols TLSv1.2;

        # SSL Certificate components (bind mount from host)
        ssl_certificate /certs/certificate.crt;
        ssl_certificate_key /certs/certificate.key;
        location /
        {
            set $target http://192.168.1.193:9000;
            proxy_pass $target;
        }
    }
    server
    {
        # set DNS resolver as Docker internal DNS
        resolver 192.168.1.194 valid=10s;
        resolver_timeout 5s;

        server_name nas3.mydomain.net;

        # NGINX listener (at the container level)
        listen 8443 ssl;
        #listen 80;

        # Supported HTTPS protocols
        ssl_protocols TLSv1.2;

        # SSL Certificate components (bind mount from host)
        ssl_certificate /certs/certificate.crt;
        ssl_certificate_key /certs/certificate.key;
        location /
        {
            set $target https://192.168.1.193:5001;
            proxy_pass $target;
        }
    }
    ...
}

The idea is to only specify those subdomains that use TCP passthrough to other hosts or direct https/http services. Use the default upstream to catch all other subdomains, which listen on the same port but use different server_names. Not sure if it works out of the box or needs some further fine-tuning, but it's worth trying :)
 
The idea is to only specify those subdomains that use TCP passthrough to other hosts or direct https/http services. Use the default upstream to catch all other subdomains, which listen on the same port but use different server_names. Not sure if it works out of the box or needs some further fine-tuning, but it's worth trying :)

This idea is just the solution I was hoping to find. I thought for a moment I might need a second nginx server: the first one doing the TCP stream for VPN and rerouting everything else to the second container, which would hold the configs for http/https routes. Definitely gonna give this a try. Thank you!
 
The examples I took from your code at first gave me this error:

Code:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: error: IPv6 listen already enabled
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2020/10/04 21:27:26 [emerg] 1#1: unknown "targetbackend" variable
nginx: [emerg] unknown "targetbackend" variable

It was also complaining about an unexpected '}'. I think because there was no ; after the default default line.

It was also complaining that no certs were defined.

So I ended up with these modifications to get rid of the errors:

Code:
stream {
  map $ssl_preread_server_name $name {
      vpn.vlnet.nl router3.vlnet.nl:1195;
      #portainer.vlnet.nl portainer;
     #nas3.vlnet.nl nas3;
      default                default; }
# Add upstreams for portainer and nas if default doesn't handle it. On second
# thought: creating an upstream for a single target doesn't make much sense.
# You should be able to safely replace the second default in the preread
# block with localhost:8443.
upstream default {
    server localhost:8443;
}
  server {
    listen 443 ssl;
    proxy_pass router3.vlnet.nl:1195;
    ssl_preread on;
    ssl_certificate /certs/certificate.crt;
    ssl_certificate_key /certs/certificate.key;
  }
}

In the http {} block I changed port 443 to 8443.

But now with everything I'm getting:

(screenshots: TLS certificate errors in the browser)
 
I just drafted the example to illustrate the idea and highlight what needs to be changed in the configuration. It is far away from being a complete configuration.

Your screenshots indicate that the TLS chain of trust can't be verified - either the cert mismatches the domain in the URL, or there is something off with the intermediate chain. If you do TCP passthrough, the TLS certificate is expected at the target location (!) (what I did). If you offload TLS on the reverse proxy (how you changed it), the backends should expect plain TCP or HTTP instead of HTTPS.

Can you do me a favor and create a streamlined version of your config, similar to what I tried to do? Let's start with a minimum viable configuration that does what you need. Once we get there you can go crazy and add everything else :)
 
Ok, will try to do that.

Can I ask you for one piece of general advice: for the layer 4 proxying in the stream {} block, do you generally recommend SSL passthrough, i.e. not offloading TLS?

I think in my case it's better for the VPN. I think the backend (the Synology RT2600AC router) expects SSL traffic to come in, and it obviously also has the certificate installed. I did not intend to offload TLS in the stream {} block.

In the http {} block I obviously do want TLS offloading, as not everything in my backend runs HTTPS.
 
Can I ask you for one piece of general advice: for the layer 4 proxying in the stream {} block, do you generally recommend SSL passthrough, i.e. not offloading TLS?
It really depends on the use case.

Of course it would be more comfortable to perform TLS offloading at one point and use TCP/HTTP for all targets. If this is working for all your applications, then carry on. But I am afraid mixing offloading and passthrough based on the domain name won't be possible. This is why I usually prefer passthrough.
 
Can you do me a favor and create a streamlined version of your config, similar to what I tried to do? Let's start with a minimum viable configuration that does what you need. Once we get there you can go crazy and add everything else :)

I have been going crazy already for a while. The first thing I wanted to do is test whether Synology VPN Plus actually works through nginx.

This is the stream block:
Code:
stream {
  map $ssl_preread_server_name $name {
      vpn.vlnet.nl router3;
      #portainer.vlnet.nl portainer;
     #nas3.vlnet.nl nas3;
      default                default;
    }
# Add upstreams for portainer and nas if default doesn't handle it. On second
# thought: creating an upstream for a single target doesn't make much sense.
# You should be able to safely replace the second default in the preread
# block with localhost:8443.
upstream default {
    server localhost:8443;
}
upstream router3 {
    server router3.vlnet.nl:1195;
}
  server {
    listen 443;
    ssl_preread on;
    proxy_pass $name;
  }
}

And this works to a certain extent. The VPN page of the router shows in the browser. But when starting a VPN connection, it fails. Also on my Android phone:

(screenshot: VPN connection error on Android)


Apparently nginx is not able to pass the VPN packets properly to my RT2600AC router. Or the RT2600AC doesn't 'fall' for it. Or something on network layer 3 is even required... Disappointed. But even though this was one of my main goals, I'm still happy with having this nginx docker container setup.

So because this doesn't work, I removed the entire stream {} block, as it was useless anyway. This gives me the ability to set all the servers under the http {} block to listen on port 443 and, in addition, make use of proxy_set_header X-Forwarded-For $remote_addr. This header couldn't be used before, because the backends would then see the connection coming in from '127.0.0.1' - obviously because the layer 7 http traffic was re-routed within nginx to itself due to the stream {} block.



I know OpenVPN does seem to work through nginx, but due to the lack of routing capabilities on Synology routers I would like to use the TAP interface. But apparently that doesn't work on non-rooted Android and iOS devices. Here again I'm wondering whether I should have gone for Ubiquiti network gear, if that gives better routing possibilities when using OpenVPN TUN.

Anyway, attached is what I have now in nginx.conf, and it all works!

Next thing I'm going to try out is getting my MailPlus server behind nginx:
NGINX Docs | Configuring NGINX as a Mail Proxy Server. There they speak about a mail module, which I'm hoping is already included in the Docker image...
 

Attachments

  • nginx.conf.txt
    24.9 KB
The whole VPN thing is limited to solutions that encapsulate everything in TLS traffic - the whole VPN control and VPN payload traffic. It does not work if the VPN payload is transferred in a dedicated binary format, which I am afraid might be the situation. It really just works for "pure" VPN over TLS. You could use wireshark to make sure this was the problem - though, it strongly smells like it is.

Having the streams redirected with the proxy IP as origin sucks - I hadn't had this one on my plate. Seems I didn't pay attention to this in the past. You for sure want the X-Forwarded headers; many applications can leverage them to get details about the initial protocol and source IP.

Great that you found your way to nginx and its configuration options. I told you it's not that hard. Though, expressing the correct configuration can be a repeating challenge (less so if you stick with subdomain forwarding of the location /).
 
Been looking through nginx documentation and googling around. Am I right to assume that if I want nginx to forward normal HTTPS 443 traffic to a backend WITHOUT terminating SSL, I have to use a stream {} block instead of an http {} block?

The reason I'm asking this:
As this nginx container is now 'internet-facing' for incoming 443/80 from my router, CalDAV and CardDAV traffic also flows through this container if the user is outside the internal network (WiFi). Inside the internal network, DNS points the devices via CNAME directly to the NAS running the CalDAV and CardDAV packages, and the reverse proxy on the NAS sends the traffic to the correct ports on the NAS. The (Android) smartphone then goes over to 3G/4G/5G when leaving the house (as Android does not have native CalDAV/CardDAV support, the app DAVx5 is used). As soon as the app DAVx5 tries to sync, it has to go through this nginx reverse proxy. Inside the nginx reverse proxy I'm using the exact same cert files (although copied into the container folder) that are imported in the NAS. Still, every DAV client going through this nginx container is showing this cert error:

(screenshot: certificate error shown by the DAV client)


Here are the related server blocks:
Code:
 server
    {
        # set DNS resolver as Docker internal DNS
        resolver 192.168.1.194 valid=10s;
        resolver_timeout 5s;
        # Pass header info to the target service
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        
        proxy_buffering off;
        # Connection upgrade to HTTP1.1
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        server_name carddav.vlnet.nl;
        # NGINX listener (at the container level)
        listen 443 ssl;
        #listen 80;
        # Supported HTTPS protocols
        ssl_protocols TLSv1.2;
        # SSL Certificate components (bind mount from host)
        ssl_certificate /certs/certificate.crt;
        ssl_certificate_key /certs/certificate.key;
        location / 
        {
            set $target https://192.168.1.193:8443;
            proxy_pass $target;
        }
    }

Code:
server
    {
        # set DNS resolver as Docker internal DNS
        resolver 192.168.1.194 valid=10s;
        resolver_timeout 5s;
        # Pass header info to the target service
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        
        proxy_buffering off;
        # Connection upgrade to HTTP1.1
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        server_name calendar.vlnet.nl;
        # NGINX listener (at the container level)
        listen 443 ssl;
        #listen 80;
        # Supported HTTPS protocols
        # ssl_protocols TLSv1.2;
        # SSL Certificate components (bind mount from host)
        ssl_certificate /certs/certificate.crt;
        ssl_certificate_key /certs/certificate.key;
        location / 
        {
            set $target https://192.168.1.193:20003;
            proxy_pass $target;
        }
    }

Because DAV works fine through the Synology reverse proxy, I took a deep dive into /var/tmp/nginx/ReverseProxy.tmp. There I found that Synology apparently creates and stores separate .pem cert files for every entry. So I copied those cert files into my nginx container, made the changes in nginx.conf and restarted the container. But I'm still getting the same cert errors on the DAV client devices... I'm a bit lost.
 
Been looking through nginx documentation and googling around. Am I right to assume that if I want nginx to forward normal HTTPS 443 traffic to a backend WITHOUT terminating SSL, I have to use a stream {} block instead of an http {} block?
Depends on the use case: if you need SSL passthrough from client to server, then the stream module is your only option, though you could use the http module to terminate SSL and create a new SSL context in proxy_pass (to be fair, you can terminate SSL and rewrap it into a new SSL context with the stream module as well). I usually head either for SSL passthrough, or offload SSL termination to the proxy and use HTTP from the reverse proxy to the service.

With SSL passthrough, you need to make sure that a valid certificate for your public URL is available at the target service. With SSL offloading, you need to make sure that a valid certificate for your public URL is available in the reverse proxy. I don't know from the top of my head if nginx even bothers to verify a target service's SSL certificate for proxy_pass. Some reverse proxies do, some don't - it makes life easier if they don't ;)

I troubleshooted a reverse proxy problem between an F5 LTM load balancer and an Apache reverse proxy today. The F5 has the public interface, does SSL two-way auth and proxies to the HTTPS Apache reverse proxy, which again talks plain HTTP to the target service on the same host. Took me some time to figure out that the cipher suite the F5 used for the backend communication was distinct from the cipher suite the hardened Apache server accepted... Why am I mentioning it: those types of issues are hard to troubleshoot, as they leave no trail in the logs and no sort of useful feedback in the browser. Believe me when I say life is easier if a reverse proxy does not verify the target service's SSL certificate (or at least allows disabling the verification).
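For what it's worth, nginx's http proxy module does not verify the upstream certificate unless told to. A hedged sketch of the relevant directives, reusing names from the snippets above (the CA bundle path is an assumption):

Code:
location /
{
    set $target https://192.168.1.193:5001;
    proxy_pass $target;
    # off is the default: nginx accepts whatever certificate the backend presents
    proxy_ssl_verify off;
    # to enforce verification instead, something like:
    #proxy_ssl_verify on;
    #proxy_ssl_trusted_certificate /certs/ca-chain.pem;
    #proxy_ssl_name nas3.mydomain.net;
}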



As soon as the app DAVx5 tries to sync, it has to go through this nginx reverse proxy. Inside the nginx reverse proxy I'm using the exact same cert files (although copied into the container folder) that are imported in the NAS. Still, every DAV client going through this nginx container is showing this cert error:
Did you need to import a client certificate in your Android DAV app? That would indicate two-way SSL auth, which is only going to work with SSL passthrough.


Having the streams redirected with the proxy IP as origin sucks - I hadn't had this one on my plate. Seems I didn't pay attention to this in the past.
Actually the stream module has a directive to retain the client IP in the server block: proxy_protocol. Though you would have to enable it in the http module for your target servers (in your nginx.conf) as well.
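As a sketch of what that could look like (untested; the ports and the idea of trusting only the local dispatcher are assumptions based on the snippets earlier in the thread):

Code:
stream {
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $name;
        # prepend the PROXY protocol header so the client IP survives
        proxy_protocol on;
    }
}
http {
    server {
        # accept the PROXY protocol header on the internal listener ...
        listen 8443 ssl proxy_protocol;
        # ... but only trust it when it comes from the dispatcher itself
        set_real_ip_from 127.0.0.1;
        real_ip_header proxy_protocol;
        ...
    }
}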

I think you will like this slide deck: TCP and UDP Load Balancing with NGINX: Tips and Tricks

Because DAV works fine through the Synology reverse proxy, I took a deep dive into /var/tmp/nginx/ReverseProxy.tmp. There I found that Synology apparently creates and stores separate .pem cert files for every entry. So I copied those cert files into my nginx container, made the changes in nginx.conf and restarted the container. But I'm still getting the same cert errors on the DAV client devices... I'm a bit lost.
I am actually unclear on whether this requires two-way auth. It would be helpful to know which cert is used in the reverse proxy, which on the device, and which one actually gets shown in the Android app.
 
OK, I think I solved it. Apart from the Android DAV sync apps, I also use the CalDav Synchroniser plugin for Outlook for desktop/laptop use. This was also whining about cert errors, on the CardDAV feature only.

I noticed in a debug report that the docker host was visible as the backend. So somehow it saw it was connecting to 192.168.1.193 (docker host), as stated in the proxy_pass in the nginx config. I made a number of further changes, but I don't know which one solved it:

- Created a DNS A record for 192.168.1.193 and changed proxy_pass to use an FQDN instead of an IP address
- Changed proxy_pass to use HTTP instead of HTTPS
- Disabled the automatic redirect from HTTP to HTTPS in the NAS DSM settings (this is done by the NGINX container anyway)
- Additionally, the Synology CardDAV package also has its own HTTP to HTTPS redirect setting. Also disabled.
(screenshot: Synology CardDAV package HTTP to HTTPS redirect setting)

- Copied /usr/syno/etc/certificate/ReverseProxy/4cfeb8a5-396c-4dda-8416-2df353461c16/fullchain.pem (and its key) into the NGINX container and configured those files in the http {} block for CardDAV (attempted this before, with no luck)

One of these things fixed the cert errors, but I'm not exactly sure which..

Code:
server
    {
        # set DNS resolver as Docker internal DNS
        resolver 192.168.1.194 valid=10s;
        resolver_timeout 5s;
        # Pass header info to the target service
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        
        proxy_buffering off;
        # Connection upgrade to HTTP1.1
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        server_name carddav.vlnet.nl;
        # NGINX listener (at the container level)
        listen 443 ssl;
        #listen 80;
        # Supported HTTPS protocols
        ssl_protocols TLSv1.2;
        # SSL Certificate components (bind mount from host)
        ssl_certificate /certs/carddav/fullchain.pem;
        ssl_certificate_key /certs/carddav/privkey.pem;
        location / 
        {
            set $target http://dockerhost1.vlnet.nl:8442;
            proxy_pass $target;
        }
    }

Did you pass your exam?
 
I am still in exam preparations... I started to learn for the AWS architect exams 2 years ago; then project demands kept me from following up on it. In the last two years AWS added so many new services that I basically had to start from the beginning...

I am glad reverse proxying is now working for you! If you were still desperate, I would've offered a remote sharing session to sort things out.

I assume what solved your issue is offloading SSL in the reverse proxy using the expected target certificates and then speaking plain HTTP to the backend systems. The URL you use to access an HTTPS service must always match the Common Name (CN) or one of the Subject Alternative Names (SAN), otherwise the certificate will fail the validation of the certificate chain. Of course your A record is also part of the solution, as it is required to match the certificate's CN or one of the SANs.
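The CN/SAN matching rule can be illustrated with a small, hypothetical Python sketch of RFC 6125-style hostname matching. The domain names are just examples from this thread; real clients should rely on their TLS library's built-in verification rather than a hand-rolled check:

```python
# Hypothetical sketch of how a TLS client matches a hostname against a
# certificate's SAN entries (RFC 6125 style). For illustration only.

def hostname_matches(hostname: str, pattern: str) -> bool:
    """A wildcard may only be the entire leftmost label; it never spans dots."""
    host_labels = hostname.lower().split(".")
    pat_labels = pattern.lower().split(".")
    if len(host_labels) != len(pat_labels):
        return False
    head_pat, rest_pat = pat_labels[0], pat_labels[1:]
    head_host, rest_host = host_labels[0], host_labels[1:]
    if head_pat != "*" and head_pat != head_host:
        return False
    return rest_pat == rest_host

def cert_matches(hostname: str, sans: list[str]) -> bool:
    """True if any SAN entry covers the requested hostname."""
    return any(hostname_matches(hostname, san) for san in sans)

# A wildcard cert covers exactly one extra label, not the bare domain:
# cert_matches("carddav.vlnet.nl", ["*.vlnet.nl"]) is True
# cert_matches("vlnet.nl", ["*.vlnet.nl"]) is False
```

This is why a cert for the bare domain alone will not validate a subdomain, and why the A record pointing at the proxy must carry a name that appears in the cert.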
 
I am glad reverse proxying is now working for you! If you were still desperate, I would've offered a remote sharing session to sort things out.
Thank you! I kinda got the hang of it, at least in the http {} blocks. For example, I've also managed to get it working with other domains: when a user comes in at domain2.nl, nginx will redirect that user to www.domain2.com using a return 301 instead of proxy_pass.
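Such a redirect server block could look roughly like this (the domains and cert paths are placeholders, not the actual config):

Code:
server {
    listen 443 ssl;
    server_name domain2.nl www.domain2.nl;
    ssl_certificate /certs/domain2/fullchain.pem;
    ssl_certificate_key /certs/domain2/privkey.pem;
    # permanent redirect, preserving the requested path
    return 301 https://www.domain2.com$request_uri;
}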

I will play with stream {} blocks at a later time for things like e-mail related traffic. Right now I've thrown myself into another project: installing MailCow on a Synology NAS.

I assume what solved your issue is offloading SSL in the reverse proxy using the expected target certificates and then speaking plain HTTP to the backend systems. The URL you use to access an HTTPS service must always match the Common Name (CN) or one of the Subject Alternative Names (SAN), otherwise the certificate will fail the validation of the certificate chain. Of course your A record is also part of the solution, as it is required to match the certificate's CN or one of the SANs.
For some reason, in the Android DAVx5 app the problem just creeps back after a couple of hours... On desktops with the Outlook CalDAV/CardDAV plugin it works fine.

I'm going to contact the app support of DAVx5. Since I'm a paying customer, they should reply...

I am still in exam preparations... I started to learn for the AWS architect exams 2 years ago; then project demands kept me from following up on it. In the last two years AWS added so many new services that I basically had to start from the beginning...
Good luck! I also need to get into AWS (and Azure) one day. I did have a training on both and was surprised by the fact that you can also easily deploy Docker containers in both cloud environments. :)
Ye I'm tired of being just a workplace engineer....
 
Good luck! I also need to get into AWS (and Azure) one day. I did have a training on both and was surprised by the fact that you can also easily deploy Docker containers in both cloud environments. :)
Thanks, mate! I must admit that I fell in love with the managed Kubernetes service EKS on AWS. You get a resilient Kubernetes control plane for cheap and can configure a cluster autoscaler to deploy as many compute nodes as your workloads require. You only have to worry about your deployment descriptors; everything else is taken care of. Azure offers the same with AKS.

Ye I'm tired of being just a workplace engineer....
In my company we discuss our personal development plans during the annual performance review. I usually depict why I feel I have outgrown my current tasks (= bring up a problem) and follow up with a perspective of where I see myself going and the value this will provide to the company and me (= draft a solution).

I hope eventually you will get to a position you will love!
 
As I mentioned before, I'm exploring the e-mail proxy possibilities of nginx. Also, from this topic you can get the idea that I'm currently running 2 mail servers inside my network, working next to each other (sharing a domain, using aliases with internal relay domain tricks :)) (topic needs updating).


For the nginx configuration, I only intend to let it handle inbound IMAP (SSL/TLS) and SMTP traffic from the Internet. I do not plan to use stream {} blocks for this, but the actual mail functionality available. The challenge is that, according to the official documentation, nginx wants to use an HTTP auth server.

SMTP (25):
This is for incoming e-mail traffic. After searching and searching, it seems pretty straightforward if all e-mails can just go to one backend.

This works:

Code:
http {

# e-mail auth server test
    server {
        listen 127.0.0.1:8008;
        server_name _;
        access_log /var/log/nginx/localhost.access_log main;
        error_log /var/log/nginx/localhost.error_log info;
        root /var/www/localhost/htdocs;
        location ~ .php$ {
                add_header Auth-Server 192.168.1.5;
                add_header Auth-Port 25;
                return 200;
        }
    }
}
mail {
        server_name _;
        auth_http localhost:8008/auth-smtppass.php;
       
        # auth_http http://dockerhost1.vlnet.nl:80/mail/auth.php;
        proxy_pass_error_message on;
        imap_capabilities  "IMAP4rev1"  "UIDPLUS";
       
        server {
                listen 25;
                protocol smtp;
                timeout 5s;
                proxy on;
                xclient off;
                smtp_auth none;
        }
}

And on the mail server I can see the incoming e-mail traffic coming from this nginx reverse proxy. I have to mention I'm not entirely sure this is really beneficial, as it makes me wonder whether blacklist checks still work this way.


IMAP (993):
This is obviously for devices that are outside the home network. So, for example, I walk out the door with my phone, lose my WiFi signal, and I would still want my phone to be able to fetch my e-mails via 4G (could of course use VPN...).

I've found this can be used to great benefit in multi-mail-server environments, as the HTTP auth server that nginx talks to can return which backend to connect to based on the user logging in. This can be done with a CGI or PHP script, as shown in an example here.

Ok, that's great, if it even works. So to find out, I tried to start very simple. Like with the SMTP 25 traffic, I did something very basic and non-dynamic: I tried to forward all IMAP requests to the mail server where my mailbox is, like so:

Code:
http {
        log_format main
                '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '"$gzip_ratio"';

                server {
                listen 8009;
                server_name _;
                access_log /var/log/nginx/localhost.access_log main;
                error_log /var/log/nginx/localhost.error_log info;
                root /var/www/localhost/htdocs;
                location ~ .php$ {
                        add_header Auth-Server 192.168.1.193;
                        add_header Auth-Port 993;
                        return 200;
                }
        }
}
mail {
        server_name _;
        auth_http localhost:8009/auth-smtppass.php;
        proxy_pass_error_message on;
        ssl_certificate     /certs/vlnet/fullchain.pem;
        ssl_certificate_key /certs/vlnet/privkey.pem;
        ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers         HIGH:!aNULL:!MD5;
        ssl_session_cache   shared:TLSSSL:16m;
        ssl_session_timeout 10m;
        imap_capabilities  "IMAP4rev1"  "UIDPLUS";
        starttls  on; ## enable STARTTLS for all mail servers
       
        server {
                listen 587 ssl;
                protocol smtp;
                timeout 5s;
                proxy on;
                xclient off;
                smtp_auth none;
        }
        server {
                listen 465 ssl;
                protocol smtp;
                timeout 5s;
                proxy on;
                xclient off;
                smtp_auth none;
        }
        server {
                listen 993 ssl;
                protocol imap;
                timeout 5s;
                proxy on;
                xclient off;
        }
}

I then forwarded port 993 on my router to the nginx docker container (macvlan) and switched off WiFi on my phone, but no...

In the nginx docker container, I see these errors:

Code:
2020/11/07 23:13:21 [error] 21#21: *1 auth http server 192.168.1.193:80 did not send server or port while in http auth state, client: 77.63.127.194, server: 0.0.0.0:993, login: "yuri"
2020/11/07 23:13:22 [error] 21#21: *3 auth http server 192.168.1.193:80 did not send server or port while in http auth state, client: 77.63.127.194, server: 0.0.0.0:993, login: "yuri"
2020/11/07 23:13:47 [error] 21#21: *5 auth http server 192.168.1.193:80 did not send server or port while in http auth state, client: 77.63.127.194, server: 0.0.0.0:993, login: "yuri"
2020/11/07 23:13:48 [error] 21#21: *7 auth http server 192.168.1.193:80 did not send server or port while in http auth state, client: 77.63.127.194, server: 0.0.0.0:993, login: "yuri"
2020/11/07 23:13:49 [error] 21#21: *9 auth http server 192.168.1.193:80 did not send server or port while in http auth state, client: 77.63.127.194, server: 0.0.0.0:993, login: "yuri"
2020/11/07 23:13:49 [error] 21#21: *11 auth http server 192.168.1.193:80 did not send server or port while in http auth state, client: 77.63.127.194, server: 0.0.0.0:993, login: "yuri"
2020/11/07 23:13:50 [error] 21#21: *13 auth http server 192.168.1.193:80 did not send server or port while in http auth state, client: 77.63.127.194, server: 0.0.0.0:993, login: "yuri"
2020/11/07 23:13:51 [error] 21#21: *15 auth http server 192.168.1.193:80 did not send server or port while in http auth state, client: 77.63.127.194, server: 0.0.0.0:993, login: "yuri"
2020/11/07 23:13:52 [error] 21#21: *17 auth http server 192.168.1.193:80 did not send server or port while in http auth state, client: 77.63.127.194, server: 0.0.0.0:993, login: "yuri"
2020/11/07 23:13:52 [error] 21#21: *19 auth http server 192.168.1.193:80 did not send server or port while in http auth state, client: 77.63.127.194, server: 0.0.0.0:993, login: "yuri"
2020/11/07 23:17:31 [error] 21#21: *21 auth http server 192.168.1.193:80 did not send server or port while in http auth state, client: 77.63.127.194, server: 0.0.0.0:993, login: "yuri"
2020/11/07 23:17:32 [error] 21#21: *23 auth http server 192.168.1.193:80 did not send server or port while in http auth state, client: 77.63.127.194, server: 0.0.0.0:993, login: "yuri"

So such a simple instruction to send back the headers that nginx wants, yet somehow it doesn't work...

Maybe a long shot to ask, but does anyone have any tips?
There is not much information to be found about this on the internet.
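For comparison with the PHP approach above, here is a minimal, hypothetical Python responder for the documented nginx mail auth protocol. nginx sends the login in Auth-User/Auth-Pass request headers and expects Auth-Status, Auth-Server and Auth-Port back; Auth-Status: OK is required in addition to the server and port headers, so one thing worth checking is whether that header is actually being emitted. The backend address and port below are assumptions for illustration:

```python
# Hypothetical minimal auth_http responder for the nginx mail proxy.
# The backend address (192.168.1.193:993) is an assumption for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKEND_ADDR = "192.168.1.193"  # assumed IMAP backend
BACKEND_PORT = "993"

class AuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real responder would validate Auth-User / Auth-Pass here.
        self.send_response(200)
        # Auth-Status: OK is mandatory; nginx also needs Auth-Server
        # and Auth-Port to know where to proxy the mail session.
        self.send_header("Auth-Status", "OK")
        self.send_header("Auth-Server", BACKEND_ADDR)
        self.send_header("Auth-Port", BACKEND_PORT)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

def serve(port: int = 8009) -> HTTPServer:
    """Bind the responder; call serve_forever() on the result to run it."""
    return HTTPServer(("127.0.0.1", port), AuthHandler)
```

Pointed at by auth_http, this answers every login with the same backend, which is what the static PHP snippet above tries to do; comparing the headers both actually send (e.g. with curl -v against the auth URL) might show what nginx is missing.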
 

SynoForum.com is an unofficial Synology forum for NAS owners and enthusiasts.