BitWarden - self hosted password manager using vaultwarden/server image

Yes. You can import any cert as long as you have the fullchain (or just a cert) and private key for it. There is no reason why you can't export it from DSM and import it as a "custom" option.


If you are asking while the cert is still on the DSM side, just use the certificate UI and expand the certificate dropdown menu. It will list all the services that use it, including all your Docker services, as those are almost certainly running via the internal reverse proxy and will therefore show up in the certificate menu.

If you need a hand, let me know here, in PM or on my private chat (link is on the site where the article is, top right corner).
Hey Rusty, so I have attempted to deploy the docker-compose text found here via Portainer Stacks (my first time!!), but modified it a little bit to include it in an existing docker network (see below). I figured I would keep it all together since I had already allowed the network 172.17.0.0/16 on my Synology Firewall. Question: is it necessary for NPM to share the same network as my vaultwarden instance?

The problem I am facing now, which appears to be very common for many users, is that I am getting a Bad Gateway message on the initial login screen. Perhaps it has something to do with how I configured my network? I'll continue digging on GitHub!

Code:
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    network_mode: "bridge"
    restart: unless-stopped
    ports:
      - '4480:80'
      - '81:81'
      - '44443:443'
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "npm"
      DB_MYSQL_NAME: "npm"
    volumes:
      - /volume1/docker/nginxproxymanager/data:/data
      - /volume1/docker/nginxproxymanager/letsencrypt:/etc/letsencrypt
    depends_on:
      - db

  db:
    image: 'jc21/mariadb-aria:latest'
    network_mode: "bridge"
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
    volumes:
      - /volume1/docker/nginxproxymanager/data/mysql:/var/lib/mysql
 
Via the built-in one (the new Login Portal), the only control you can add is an Access Control Profile to allow or deny access to it.

Multiple rules can be added (same principle as with the firewall: they apply top to bottom).

Personally I have not tried it since DSM 7, as I stopped using the built-in one, but if you can configure it to block the /admin element of BW it should work just fine. I have it configured via NPM the same way, meaning that access to the admin element is only allowed from the internal subnet.
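For reference, in NPM that kind of restriction can be done with a custom location for /admin on the Vaultwarden proxy host and a couple of nginx directives in that location's custom configuration. A rough sketch (the subnet is just an example, adjust it to your own LAN range):

Code:
# custom location "/admin" on the Vaultwarden proxy host
allow 192.168.1.0/24;   # internal subnet only (example range)
deny all;               # everyone else gets a 403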

Do you know of any iOS DDNS apps that run on the iPhone (or some other method) where I can apply a DDNS name to my phone? Then on my router firewall I could add that specific DDNS name to firewall rules. Mobile phones are tough since they bounce between different networks and grab different IP addresses.
 
Hey Rusty, so I have attempted to deploy the docker-compose text found here via Portainer Stacks (my first time!!), but modified it a little bit to include it in an existing docker network (see below). I figured I would keep it all together since I had already allowed the network 172.17.0.0/16 on my Synology Firewall. Question: is it necessary for NPM to share the same network as my vaultwarden instance?

The problem I am facing now, which appears to be very common for many users, is that I am getting a Bad Gateway message on the initial login screen. Perhaps it has something to do with how I configured my network? I'll continue digging on GitHub!

So nevermind! I think I might be all set! 🥳

My solution: I took down the stack, deleted everything and started from scratch. My docker-compose "file" remained essentially the same as above, except I removed the "network_mode" lines. This resulted in Docker auto-generating its own docker network (172.23.0.1/16, in my case) for this stack. I then added this Docker network (actually 172.23.0.1/30, since there are only two containers) to the firewall to allow all ports (though I probably only need to allow the few ports used in the stack).

This eliminated the "Bad Gateway" message. I proceeded to upload the pre-existing Let's Encrypt SSL cert from the Synology Security > Certificate page. However, I ran into an issue here. Not sure why, but it deemed it a self-signed cert and I was not able to navigate to my pages. For now, I have generated a new Lets Encrypt cert for each of my subdomains. Is there a way to do a wildcard SSL cert via NPM?
 
Do you know of any iOS DDNS apps that run on the iPhone (or some other method) where I can apply a DDNS name to my phone? Then on my router firewall I could add that specific DDNS name to firewall rules. Mobile phones are tough since they bounce between different networks and grab different IP addresses.
Uff not sure I get what you are trying to do here?

So nevermind! I think I might be all set!
Very sorry for the late reply, but yesterday I had a very long work day.

I was about to say that the network element needs to be changed, as the network subnet was probably the issue here, but glad you figured it out. I need to remove that network_mode from the guide anyway.
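For reference, the only change needed is dropping the two network_mode lines. Compose then creates a dedicated bridge network for the stack, so the app container can reach the db container by its service name. A sketch with the same images, ports and paths as in the post above:

Code:
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    # no network_mode - the stack gets its own bridge network,
    # so DB_MYSQL_HOST: "db" resolves to the db container
    restart: unless-stopped
    ports:
      - '4480:80'
      - '81:81'
      - '44443:443'
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "npm"
      DB_MYSQL_NAME: "npm"
    volumes:
      - /volume1/docker/nginxproxymanager/data:/data
      - /volume1/docker/nginxproxymanager/letsencrypt:/etc/letsencrypt
    depends_on:
      - db

  db:
    image: 'jc21/mariadb-aria:latest'
    # no network_mode here either
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
    volumes:
      - /volume1/docker/nginxproxymanager/data/mysql:/var/lib/mysql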

Is there a way to do a wildcard SSL cert via NPM?
Yes, there is, but it will only work via a DNS validation challenge, and that will only work if your domain's DNS provider is one like Cloudflare, Google, or similar. The certificate can then be requested using the *.domain.com format (notice the *), and it will generate the wildcard cert.

If you need a hand with this let me know here or in private

Again, glad you figured it out!
 
Yes, there is, but it will only work via a DNS validation challenge, and that will only work if your domain's DNS provider is one like Cloudflare, Google, or similar. The certificate can then be requested using the *.domain.com format (notice the *), and it will generate the wildcard cert.

If you need a hand with this let me know here or in private

Again, glad you figured it out!

So I have since obtained a domain and set Cloudflare as my authoritative nameserver. I think I have everything mostly set up, except I am running into a few issues which I'm hoping you may have some insight on:
  • I have created a number of subdomains for the various web services, including Synology DSM (dsm.mydomain.com). Most of them seem to work perfectly fine, but Synology DSM is very slow and sometimes doesn't even load. Why just DSM?
  • I have a subdomain for external access to Nginx Proxy Manager and one for Portainer. Both services are sitting behind an Access List HTTP basic authentication prompt. Once I get past the Access List and onto their respective sign-in pages and type in my credentials, I get "Unauthorized" with NPM and "Unable to retrieve server settings and status" with Portainer. Does it have something to do with HTTP vs HTTPS?
  • I was auto-blocked by DSM when I was trying to authenticate and sync my Synology Calendar (CalDAV) and Contacts (CardDAV) to my Thunderbird mail client. My only solution was to turn off auto-block. Under Security > Protection > Allow/Block List, I have since put the NPM Docker IP address in the allow list...though not positive if that is working because I was still having trouble authenticating in Thunderbird.
A few other questions I was wondering about, which may be related to the issues I'm having above....
  1. If I created a Let's Encrypt SSL cert from NPM using a DNS challenge with Cloudflare (I used the instructions found here), is this effectively the Origin CA certificate that encrypts traffic between Cloudflare and my web server? Or do I need to create a separate one in the Cloudflare dashboard under SSL/TLS > Origin Server? If so, does that need to be added to Synology DSM certificates? Or would I use the certificate in NPM?
  2. Is it better to have a CNAME record for each of my subdomains? Or is a single CNAME for my wildcard (*.mydomain.com) enough?
  3. On the Synology DSM side, I have added the IPv4 Ranges from Cloudflare under "Trusted Proxies". Does it help if I keep DoS Protection enabled? Would you recommend disabling it since Cloudflare already offers DoS protection?

Thanks in advance for any suggestions!
 
effectively the Origin CA certificate that encrypts traffic between Cloudflare and my web server?
Correct

Or do I need to create a separate one in the Cloudflare dashboard under SSL/TLS > Origin Server? If so, does that need to be added to Synology DSM certificates?
No need

Is it better to have a CNAME record for each of my subdomains? Or is a single CNAME for my wildcard (*.mydomain.com) enough?
You can have a single wildcard record and that will work, but with a free CF account you will not be able to use the proxy setting for that particular record (orange cloud). That means that your public IP will be exposed, but it will still work, so it's up to you if you want to use CF proxy protection or not.

If you do want proxy protection, then you will have to make a separate host A/CNAME record for each subdomain and activate the proxy setting on it.

On the Synology DSM side, I have added the IPv4 Ranges from Cloudflare under "Trusted Proxies". Does it help if I keep DoS Protection enabled? Would you recommend disabling it since Cloudflare already offers DoS protection?
Haven't tested it, so from my point of view you will have to test it and see if it works better or the same.

Most of them seem to work perfectly fine, but Synology DSM is very slow and sometimes doesn't even load. Why just DSM?
Is the record proxied on CF or not?


I was auto-blocked by DSM when I was trying to authenticate and sync my Synology Calendar (CalDAV) and Contacts (CardDAV) to my Thunderbird mail client.
Again, it might depend first on whether the record is proxied or not on the CF side. Check that to begin with.
 
You can have a single wildcard record and that will work, but with a free CF account you will not be able to use the proxy setting for that particular record (orange cloud). That means that your public IP will be exposed, but it will still work, so it's up to you if you want to use CF proxy protection or not.

If you do want proxy protection, then you will have to make a separate host A/CNAME record for each subdomain and activate the proxy setting on it.

Unless I misunderstand your comment, this is not accurate:

[Screenshot: Cloudflare DNS showing a proxied wildcard "A" record]
 
I was auto-blocked by DSM when I was trying to authenticate and sync my Synology Calendar (CalDAV) and Contacts (CardDAV) to my Thunderbird mail client.
That's quite odd. Try entering the URL into a browser. If the URL is correct, you should be met with a login window where you would enter your NAS credentials. If that window does not appear, it suggests an error in your URL formation.
 
Thanks for your responses. Here is my CF DNS:

[Screenshot: Cloudflare DNS records]


Ideally, I want everything proxied. @Telos, I noticed your screenshot above shows an 'A' type for your wildcard. Does it make a difference that I am using CNAME?

When I navigate to my DSM subdomain in a web browser, it takes very long to respond. When it finally does, after 30 seconds or so, I begin to see the favicon, but the browser screen turns blank white:

[Screenshot: blank white page when loading the DSM subdomain]


Also, I realized we are getting in the weeds around Cloudflare and DSM and have strayed from the topic of BitWarden. Happy to create a new post with my questions elsewhere!

Thanks again @Rusty @Telos !
 
I noticed your screenshot above shows an ‘A’ type for your wildcard. Does it make a difference that I am using CNAME?
I'm not the biggest expert here, but when using * with a CNAME, the DNS record expects a domain as the value (= *.domain.com pointing to another name). By using it as an A record, it is a wildcard.

My guess is that Cloudflare is "smart enough" to know that the domain, *.domain.com, is actually a wildcard... but it's simpler to show it as an A record.

However... if you use an "A" record, you need to enter an IP for the content field. If you don't have a fixed IP (and I don't), your IP updater will need to update the wildcard IP in addition to the root domain name.
 
To extend on Telos' response: a wildcard domain is scoped to a single subdomain level, e.g. *.domain.com will only cover x.domain.com, but not y.x.domain.com, which would require a *.x.domain.com entry.

The * subdomain is a wildcard domain by convention. I have yet to see a DNS provider that doesn't support it that way. It doesn't really matter whether the wildcard entry points to an A and/or AAAA record or a CNAME record. For instance, if you have a dyndns domain configured in your router that updates your WAN IPv4, then the wildcard domain could be a CNAME to the dyndns domain.
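As a rough illustration of that last setup (hypothetical names and a documentation IP, assuming the router keeps the dyndns hostname up to date):

Code:
mynas.dyndns.example    A      203.0.113.10           ; kept current by the router
*.domain.com            CNAME  mynas.dyndns.example   ; covers vault.domain.com, dsm.domain.com, ...
*.x.domain.com          CNAME  mynas.dyndns.example   ; needed separately to cover y.x.domain.com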
 
Hi! Today I found out that there is something wrong with my Bitwarden (Vaultwarden) in Docker. Until now, everything worked perfectly for several years. But today I wanted to add a new password, and a red warning box with some HTML errors popped up (see screenshot below).

[Screenshot: red warning box with HTML errors in the Bitwarden client]


OK, I opened my Portainer (version 2.15.1) and saw that vaultwarden is not running but has the "created" status.

[Screenshot: Portainer showing the vaultwarden container with the "created" status]


I don't know if there is anything wrong with vaultwarden or with Portainer (other containers in my docker are running correctly). Maybe vaultwarden was updated automatically a few days ago (I use ouroboros for auto-updating all my containers), and this last update made a mess...

I checked the files in my docker/bitwarden folder and it looks like this:

[Screenshot: contents of the docker/bitwarden folder]


Which means none of those files are newer than the vaultwarden created date that I see in Portainer.


NOW: can anyone here please help me fix it without losing my password database (there is not only my user account in bitwarden, but also three more user accounts - my family members - and I really would not like to lose everything).
I tried to open the web portal of my bitwarden/vaultwarden service via the local IP+port, but it is not running. I also tried opening it via the custom subdomain, but that does not work either (I have the reverse proxy correctly configured in my DSM, as well as a valid LE SSL certificate for this subdomain); page not found:

[Screenshot: error page when opening the web vault via the subdomain]


Again, everything was working perfectly a few days ago, but now such a problem... :(

I will be very grateful for anyone's help with it.


PS: I have Snapshots configured for the whole docker folder (1 snapshot per day, at 8:20 am) and also a Hyper Backup task for the docker folder (daily at 6:15). So if there is no solution, then I could try to recover from backup. But I would rather first try to find out what the problem is and how to correct it in a normal way, just to avoid running into it again later.

PS2: Looks like vaultwarden was updated automatically on Oct 15 at 3:24 from an image just one day older; see the screenshots below from Portainer. If I click the START button, it does not work and returns this error:

[Screenshots: Portainer error when starting the container, plus the container and image details]
 
But today I wanted to add a new password, and a red warning box with some HTML errors popped up (see screenshot below).
Add where? From the embedded UI? From a browser plugin? How do we reproduce your steps?

I am using the most recent alpine tag (released 4 days ago) and it can be used from the embedded UI or through a browser plugin without any issues.

OK, I opened my Portainer and saw that vaultwarden is not running but has the "created" status.
Did you try to start the created container? If so, are there any errors in the container logs?

Which means none of those files are newer than the vaultwarden created date that I see in Portainer.
The modification date changes when a file is created or updated; it just tells you when something last changed. The creation date of db.sqlite and your certificates is from 2019. The -shm and -wal files are temporary files.

A 405 error usually means the HTTP access method (GET, PUT, POST, DELETE, ...) is not supported, which could be caused by a problem in the proxy settings (though I would be surprised if the built-in syno-rp even allows you to configure that). The client application (embedded UI, browser plugin, desktop app, mobile app) specifies the HTTP method it uses for each request it sends to the backend; this is nothing you can influence. Just make sure to use recent versions of the client applications.
 
I have just done the update to 1.26 (didn't have time over the weekend) with 0 issues.

Looking at the git changelog, there are no breaking changes, but there is one active bug for users running an outside DB (not your case).

Tested creating, editing and deleting an item with no issues at all.

 
Add where? From the embedded UI? From a browser plugin? How do we reproduce your steps?
Add the password via the browser plugin. Now I am sure that it's because the vaultwarden container is not running.
I am using the most recent alpine tag (released 4 days ago) and it can be used from the embedded UI or through a browser plugin without any issues.


Did you try to start the created container? If so, are there any errors in the container logs?
Yes, but as I said, it does not want to start. When I click the START button in Portainer, it gives me a "Failure - Request failed with status 400" error.
The modification date changes when a file is created or updated; it just tells you when something last changed. The creation date of db.sqlite and your certificates is from 2019. The -shm and -wal files are temporary files.

A 405 error usually means the HTTP access method (GET, PUT, POST, DELETE, ...) is not supported, which could be caused by a problem in the proxy settings (though I would be surprised if the built-in syno-rp even allows you to configure that). The client application (embedded UI, browser plugin, desktop app, mobile app) specifies the HTTP method it uses for each request it sends to the backend; this is nothing you can influence. Just make sure to use recent versions of the client applications.
 
Add the password via the browser plugin. Now I am sure that it's because the vaultwarden container is not running.

Yes, but as I said, it does not want to start. When I click the START button in Portainer, it gives me a "Failure - Request failed with status 400" error.
Can you try making a new container and connecting it to the existing volume content? Have you tried that?

Also, if you still have the old "latest" image, revert back to it and see if the container will boot up.
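A rough sketch of what a fresh container on top of the existing data could look like (the host path is based on your docker/bitwarden folder, and the host port and container name are just examples - adjust them to your setup):

Code:
version: '3'
services:
  vaultwarden:
    image: vaultwarden/server:latest     # or pin an older tag here to roll back
    container_name: vaultwarden-test     # example name, to avoid clashing with the broken container
    restart: unless-stopped
    ports:
      - '5151:80'                        # example host port - pick any free one
    volumes:
      - /volume1/docker/bitwarden:/data  # existing data folder, so the vault database is reused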
 
