Question Fail2ban for Docker Containers

Hi Everyone,

I want to secure the Docker containers on our DS918+ against brute-force attacks using Fail2ban (running as a Docker container). Fail2ban is installed, we set up Bitwarden using Rusty's tutorial (much appreciated), and Fail2ban now acts on repeated failed Bitwarden login attempts.

Now I would like to add our Plex and Odoo logs to the Fail2ban filter and jail configuration, but once added, the container goes into an endless restart loop.

Any advice appreciated.
 
Can you share what you did so far and how you did it?

The git repo holds the configuration items required to set up fail2ban properly. Do the expected folders and the files in them exist on the host path (which must be a subfolder of a share!) that is used as the source for the volume target /data?

Did you correct the host side of the volume mappings to match the folder paths on your host?

Note: the files in jail.d have an ignoreip setting, which should cover your home LAN (the Docker networks are already covered by 172.0.0.0/8 - an ugly solution, but it works :) ) to ensure your LAN devices are never blocked.

You need to make sure that f2b is able to access the log files of the applications you want to protect, that the filters in filter.d are configured to detect failed logins (the conf files for bitwarden and bitwarden-admin already exist in the repo), and that matching conf files exist in jail.d to define the constraints of a block. Shouldn't be impossible to configure ^^
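A minimal sketch of what such a jail file could look like - the container name, host path, filter name and log path below are placeholders you would need to adapt:

Bash:
# If the container keeps restarting, its log usually names the offending filter/jail file
sudo docker logs fail2ban

# Hypothetical jail file, written to the host side of the /data volume
cat << 'EOF' > /volume1/docker/fail2ban/jail.d/myapp.conf
[myapp]
enabled  = true
filter   = myapp
logpath  = /data/myapp/access.log
maxretry = 4
findtime = 600
bantime  = 3600
ignoreip = 192.168.0.0/24 172.0.0.0/8
EOF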
 
I followed the steps of sosandroid/docker-fail2ban-synology, but I can still get access. Even when I manually ban my IP address (I tested via a VPN connection from my desktop to an external server and banned that IP), I can still reach the site/pages.
I've tried crazymax's way too (chain=DOCKER-USER instead of INPUT, and even the 'legacy' FORWARD), but I still get a response every time. The iptables rules seem OK, right?
Bash:
admin@DS1621:~$ sudo iptables -S | grep f2b
-N f2b-authelia
-N f2b-nginx-proxy-manager
-A INPUT -p tcp -j f2b-nginx-proxy-manager
-A INPUT -p tcp -j f2b-authelia
-A f2b-authelia -s 213.152.188.22/32 -j DROP
-A f2b-authelia -j RETURN
-A f2b-nginx-proxy-manager -s 213.152.188.22/32 -j DROP
-A f2b-nginx-proxy-manager -j RETURN
I don't know what f2b-nginx-proxy-manager and f2b-authelia refer to. I know I used name=authelia (as that's the jail name in f2b) and it adds f2b- in front of it. If f2b-authelia is supposed to be the container name or hostname... then it won't work, because the container is just 'authelia'.
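For anyone checking the same thing: traffic to bridged containers traverses the FORWARD/DOCKER-USER chains rather than INPUT, so it is worth verifying where the f2b rules actually land and whether they ever match anything (a sketch, run on the NAS):

Bash:
# Rules in INPUT only affect traffic addressed to the host itself;
# traffic forwarded to a bridge-networked container goes through DOCKER-USER/FORWARD
sudo iptables -S DOCKER-USER
sudo iptables -S FORWARD | head

# Packet/byte counters show whether a DROP rule is ever hit
sudo iptables -L f2b-authelia -v -n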
 
You might find some hints in Step 9 of fail2ban for vaultwarden.
As one-eyed-king mentioned, your ignore ranges need to be correct. Otherwise it is mainly setting up the banning periods.
Thanks Silverj! I started from scratch following your guide and, although I use hosted Bitwarden, I even installed Vaultwarden as a proof of concept to get it working. Vaultwarden works like a charm, no problem. But I don't know why I cannot get fail2ban to block.
The ban works perfectly: four wrong passwords and <bam> it's banned. That is: in iptables. I tried expanding the ports (though the action says allports, in iptables I see -A INPUT -p tcp -m multiport --dports 80,8080,443,8443 -j f2b-vaultwarden - isn't that multiport?).
Anyway, I expanded the list of ports so that the ports of Nginx Proxy Manager are also there, but nope, the banned IP address still gets through. So I tinkered with DOCKER-USER and FORWARD, but alas...
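For reference, this is roughly how a manual ban and the counter check can be done (a sketch - the container name fail2ban and the jail name vaultwarden are assumptions):

Bash:
# Ban a test address by hand
sudo docker exec fail2ban fail2ban-client set vaultwarden banip 213.152.188.22

# Retry from the banned address, then check whether the DROP rule counted any packets
sudo iptables -L f2b-vaultwarden -v -n

# Unban it again when done
sudo docker exec fail2ban fail2ban-client set vaultwarden unbanip 213.152.188.22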

I'm sure I followed everything you, Rusty and SOSandroid wrote, which made me wonder: what if there's something different in my hardware - that's extremely likely, right? But then it would have to be a kind of difference that's "just me" and not you, Rusty, SOSandroid and a lot of other folks... So what about dynamic link aggregation? I have a balance-tcp bond, so the NAS is not using eth0 but ovs_bond0. Could that be it? If so, how do I fix it? If not... what else can I do?
 
So what about dynamic link aggregation? I have a balance-tcp bond, so the NAS is not using eth0 but ovs_bond0. Could that be it? If so, how do I fix it? If not... what else can I do?
Hi Bogey...

Alas, in my small family I have no need for link aggregation, so I don't know, but it does seem like a likely candidate for the problem.
Could it be the definition of the network in Docker? With a macvlan network in Docker you can specify the parent network (--parent), but I have not seen this option for a bridge network.
You will need one of the Docker experts here to answer I'm afraid (there are some). I'm certainly interested in an answer...
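For what it's worth, pinning a macvlan network to a specific parent interface looks roughly like this (a sketch only - the subnet values are placeholders, and whether DSM accepts ovs_bond0 as a parent is exactly the open question):

Bash:
# Hypothetical macvlan network bound to the bond interface instead of eth0
sudo docker network create -d macvlan \
  --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
  -o parent=ovs_bond0 \
  macvlan_bond0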
 
@one-eyed-king or @Rusty would you happen to know how iptables works in relation to eth0 vs ovs_bond0? Could that be the key to getting Fail2ban working on my DS1621+?
I haven't used f2b at all, so I really can't say much on the matter as I haven't even tested it. I will say that bond vs. single interface is probably not the problem here, as you are still running everything on a single IP address in the end. It's not load-balanced (with multiple IPs).

My guess off the top of my head would be NPM (the reverse proxy). If you are running it in bridge mode, it might not detect your client IPs as it should, and all your clients get the IP address of the NPM container itself.

Again, just an idea, considering that NPM and all the services running behind it will not see the proper client IP unless NPM is in host network mode.
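An easy way to check that is to look at which client address actually ends up in the proxy host access log (a sketch - the container name npm and the log file number are assumptions):

Bash:
# If every request shows a docker bridge address instead of the real client IP,
# fail2ban ends up banning the wrong address
sudo docker exec npm tail -n 20 /data/logs/proxy_host-4.log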
 
Somehow I cannot wrap my head around that. If the logs show the client's real IP and fail2ban puts the right rule in place, then why isn't the traffic dropped? I mean, isn't iptables ahead of NPM in the chain?

So I tested some more - read on, it gets better (promise!).
First: I replaced NPM with Synology's reverse proxy: no change, still no block. OK, that was to be expected, as the real IP wasn't logged.
But a manual ban didn't work either. I fixed the real-IP logging, but as expected the ban still didn't work (if a manual ban doesn't work, why would this, right?).
Second: I wanted to see whether it's a Docker problem, or whether a Web Station site isn't blocked either... Damn! Again it didn't make any difference... but then I found something 'weird' in the logs: the client address showed a Cloudflare IP address?
So first I switched off Proxy mode, but alas. Then I used a different domain with a different DNS.
Holy cloudfart Batman!!! Traffic was blocked: "This site can’t be reached"
So it turns out that Cloudflare does something that makes the Synology give the traffic a pass. Obviously I'm not a one-eyed king, but I'm not totally blind either ;) - I think Cloudflare acts as a proxy (even with proxy mode off), traffic is presented with their IP address, and that does not match the drop rule.
I found out that fail2ban offers a ban action called cloudflare, which adds a rule at Cloudflare that blocks the traffic - and that works!
The only thing I still wonder about is: how can one block traffic locally if it is going through Cloudflare? I think it's by using real_ip_header CF-Connecting-IP; instead of real_ip_header X-Forwarded-For;.
If I just add it in /etc/nginx/sites-enabled/cloudflare_realip.conf I get an error:
nginx: [emerg] "real_ip_header" directive is duplicate in /etc/nginx/sites-enabled/cloudflare_realip.conf:3
Ideas? Suggestions?
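For the record, a quick way to locate the existing directive before adding another copy (run on the Synology itself; nginx only allows real_ip_header once per context):

Bash:
# Find every place the directive is already set
grep -Rn "real_ip_header" /etc/nginx/

# Re-check the configuration after editing
sudo nginx -t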
 
As far as I remember, Cloudflare acts like a caching reverse proxy. This would explain the behaviour you see. Have you ever noticed that if you resolve the IP for one of your CF domain entries, the resolved IP is different from the target IP of the A record?

Typically, reverse proxies either enrich header information, like the real IP, out of the box, or require some sort of manual configuration to achieve this.

You will need to find out whether CF injects the headers on its own or provides a setting to activate the injection, then make sure that NPM retains or renames the header so that the target application receives a header it is actually able to make sense of, and finally hope that the application logs this detail in its logs.
 
Have you ever noticed that if you resolve the IP for one of your CF domain entries, the resolved IP is different from the target IP of the A record?
No, can't say I have.

Typically, reverse proxies either enrich header information, like the real IP, out of the box, or require some sort of manual configuration to achieve this.
You're correct. On their website they write:
The original visitor IP address appears in an appended HTTP header called CF-Connecting-IP.

You will need to find out whether CF injects the headers on its own or provides a setting to activate the injection, then make sure that NPM retains or renames the header so that the target application receives a header it is actually able to make sense of, and finally hope that the application logs this detail in its logs.
They describe that here. However, I lack the skills to pull that off. So for me it's either going through CF and banning on their side, or not going through CF and using the standard setup. For fail2ban I just add two actions: iptables-multiport and cloudflare. Banning and unbanning work, blocking and unblocking too. Would I like to have it all handled locally? Sure, but for now I'm happy. :)
Thank you for the help and input, one-eyed-king, Rusty and silverj. (y)
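For anyone wanting to replicate the two-action setup, the jail could look roughly like this (a sketch - jail name, log path, ports and the Cloudflare credentials are placeholders; the cloudflare action needs your CF account e-mail and API token):

Bash:
cat << 'EOF' > /volume1/docker/fail2ban/jail.d/vaultwarden.conf
[vaultwarden]
enabled  = true
filter   = vaultwarden
logpath  = /data/vaultwarden/vaultwarden.log
maxretry = 4
action   = iptables-multiport[name=vaultwarden, port="80,8080,443,8443", protocol=tcp]
           cloudflare[cfuser="you@example.com", cftoken="YOUR_CF_API_TOKEN"]
EOF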
 
After seeing this blog post, I realise you already posted parts of the solution in your previous response. You should run both grep lines inside the NPM container to see whether the bundled nginx was compiled with the http_realip_module enabled.
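Something along these lines (a sketch, assuming the container is called npm):

Bash:
# Prints the module name if the nginx inside the container was built with it
sudo docker exec npm nginx -V 2>&1 | grep -o http_realip_module

# Same check against the DSM-built nginx on the host
nginx -V 2>&1 | grep -o http_realip_module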

I assume you already applied the settings from the link you provided in your last response. Your previous response indicated that the real_ip_header directive is used twice across a complementary set of config files.

I won't be able to dig deeper into the topic, as I am not running NPM myself, and I have no idea how its nginx.conf is configured and what subfolders it includes along the way.
 
After seeing this blog post, I realise you already posted parts of the solution in your previous response. You should run both grep lines inside the NPM container to see whether the bundled nginx was compiled with the http_realip_module enabled.
[Screenshot: grep output from inside the NPM container, showing http_realip_module in the nginx build options]

and on Synology the results are blank.

I assume you already applied the settings from the link you provided in your last response. Your previous response indicated that the real_ip_header directive is used twice across a complementary set of config files.
That was on Synology's reverse proxy, where I put the directive in a file: /etc/nginx/sites-enabled/cloudflare_realip.conf
In NPM (Docker) I just put it in the Advanced tab of the proxy host settings, and that works.

I won't be able to dig deeper into the topic, as I am not running NPM myself, and I have no idea how its nginx.conf is configured and what subfolders it includes along the way.

NPM has the ability to include different custom configuration snippets in different places.
Custom configuration snippet files are supposed to be put in /data/nginx/custom
The /data folder is mapped in Docker:
volumes:
  - /volume1/docker/npm/data:/data
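So a custom server-level snippet would, for example, end up at /volume1/docker/npm/data/nginx/custom/server_proxy.conf on the host (a sketch - the filename matches the custom include at the bottom of the proxy host config further down, and the directives are the real-ip ones discussed above):

Bash:
# Hypothetical custom snippet, picked up by the server_proxy include of every proxy host
cat << 'EOF' > /volume1/docker/npm/data/nginx/custom/server_proxy.conf
set_real_ip_from 172.23.0.0/16;
real_ip_header CF-Connecting-IP;
real_ip_recursive on;
EOF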

Tree:
Code:
└── nginx
    ├── dead_host
    ├── default_host
    │   └── site.conf
    ├── default_www
    ├── dummycert.pem
    ├── dummykey.pem
    ├── proxy_host
    │   ├── 11.conf
    │   ├── 12.conf
    │   ├── 1.conf
    │   ├── 3.conf
    │   ├── 4.conf
    │   ├── 5.conf
    │   ├── 6.conf
    │   ├── 7.conf
    │   └── 9.conf
    ├── redirection_host
    ├── stream
    └── temp
The proxy_host files are the entries the user added via the GUI. The contents are like this:
# ------------------------------------------------------------
# vaultwarden.example.com
# ------------------------------------------------------------
server {
    set $forward_scheme http;
    set $server "192.168.0.100";
    set $port 1025;
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443;

    server_name vaultwarden.example.com;

    # Custom SSL
    ssl_certificate /data/custom_ssl/npm-5/fullchain.pem;
    ssl_certificate_key /data/custom_ssl/npm-5/privkey.pem;

    # Block Exploits
    include conf.d/include/block-exploits.conf;

    # HSTS (ngx_http_headers_module is required) (63072000 seconds = 2 years)
    add_header Strict-Transport-Security "max-age=63072000; preload" always;

    # Force SSL
    include conf.d/include/force-ssl.conf;

    access_log /data/logs/proxy_host-4.log proxy;

    set_real_ip_from 172.23.0.0/16;
    #real_ip_header CF-Connecting-IP;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;

    location / {
        # HSTS (ngx_http_headers_module is required) (63072000 seconds = 2 years)
        add_header Strict-Transport-Security "max-age=63072000; preload" always;

        # Proxy!
        include conf.d/include/proxy.conf;
    }

    # Custom
    include /data/nginx/custom/server_proxy[.]conf;
}
The custom settings I added via the Advanced tab of the GUI are the set_real_ip_from / real_ip_header block above (shown in green in the original post).
The other settings can be manipulated via the GUI (see attachments):

The Docker container has /etc/nginx/nginx.conf; among other things it contains:
# Includes files with directives to load dynamic modules.
include /etc/nginx/modules/*.conf;
...but there is no modules folder.
I've attached /etc/nginx/nginx.conf

I hope this helps.
 

Attachments

  • npm-gui1.png
  • npm-gui2.png
  • npm-gui3.png
  • npm-gui4.png
  • nginx.conf.txt
The response from NPM indicates that its nginx is built with the module enabled: that's good :)
The response from the built-in Syno RP, not so much: that one will not work...

Apart from that: every "include" directive "imports" an additional configuration snippet. The main nginx.conf and all included files together make up your nginx configuration.

You could try to include this configuration (taken from a link in one of your posts) in the main nginx.conf, or put it in a separate file and include that from the main nginx.conf:
Code:
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 103.22.200.0/22;
set_real_ip_from 103.31.4.0/22;
set_real_ip_from 104.16.0.0/13;
set_real_ip_from 104.24.0.0/14;
set_real_ip_from 108.162.192.0/18;
set_real_ip_from 131.0.72.0/22;
set_real_ip_from 141.101.64.0/18;
set_real_ip_from 162.158.0.0/15;
set_real_ip_from 172.64.0.0/13;
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 188.114.96.0/20;
set_real_ip_from 190.93.240.0/20;
set_real_ip_from 197.234.240.0/22;
set_real_ip_from 198.41.128.0/17;
set_real_ip_from 2400:cb00::/32;
set_real_ip_from 2606:4700::/32;
set_real_ip_from 2803:f800::/32;
set_real_ip_from 2405:b500::/32;
set_real_ip_from 2405:8100::/32;
set_real_ip_from 2c0f:f248::/32;
set_real_ip_from 2a06:98c0::/29;

#use any of the following two

real_ip_header CF-Connecting-IP;
#real_ip_header X-Forwarded-For;

Then restart nginx (or the NPM container itself) and give it a try.
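The restart and a quick verification could look like this (a sketch - container and jail names are assumptions):

Bash:
# Reload nginx inside the container, or simply restart the container
sudo docker exec npm nginx -s reload
# sudo docker restart npm

# After the next failed logins, the jail should list the real client IP as banned
sudo docker exec fail2ban fail2ban-client status vaultwarden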

Though, shouldn't CF also have a "DNS Only" mode instead of "Proxy" (as indicated here: Identifying subdomains compatible with Cloudflare's proxy)? Wouldn't that solve your problem at hand without even having to use the set_real_ip_from directive?
 
