Bitwarden - self-hosted password manager using the vaultwarden/server image


I periodically go into Docker and export the container into a subdirectory in my Docker shared folder. I then use Hyper Backup to back up the entire Docker shared folder (which includes all the data, settings, etc. for all my Docker containers/applications) to another NAS every two hours.

I just exported my containers to the Docker shared folder; I had never included that before. Currently I run snapshots on the Docker shared folder every 12 hours, and Hyper Backup every 24 hours off-site.
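For reference, a rough command-line equivalent of that export step (a sketch only; the container name and destination folder are placeholders, so adjust them to your own setup):
Code:
#!/bin/sh
# Export the running container's filesystem into a dated tar archive.
# "vaultwarden" and the destination path are placeholders; adjust to your setup.
docker export -o /volume1/docker/backups/vaultwarden-$(date +%F).tar vaultwarden
Note that docker export only captures the container's own filesystem, not bind-mounted folders, so the mapped data directory still needs to be covered by the Hyper Backup job.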
 
How are you guys backing up Bitwarden (Vaultwarden image)? Is it safe to use Hyper Backup or Snapshot Replication to take snapshots considering it's an SQLite3 database? Or are you doing backups through the admin page's "Backup database" option?
I’m using Snapshot Replication to replicate the Docker shared folder, and also Hyper Backup to back up to another NAS. On top of that, I’m running a daily MariaDB database dump via script to a local backup folder and to a USB thumb drive.

In case of a disaster and something happens to my docker folder I still have the database safely on a thumb drive.

Don’t know if it is bulletproof, but it works for me.
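A minimal sketch of what such a dump script can look like (the container name, database name, credentials and paths below are all placeholders; adapt them to your own stack):
Code:
#!/bin/sh
# Daily dump of the Vaultwarden database from a MariaDB container.
DB_PASSWORD='change-me'                 # placeholder credentials
BACKUP_DIR=/volume1/docker/backups      # local backup folder
STAMP=$(date +%F)
docker exec mariadb mysqldump -u vaultwarden -p"$DB_PASSWORD" vaultwarden \
  > "$BACKUP_DIR/vaultwarden-$STAMP.sql"
# Second copy to the USB thumb drive (the DSM mount point varies, e.g. /volumeUSB1/usbshare).
cp "$BACKUP_DIR/vaultwarden-$STAMP.sql" /volumeUSB1/usbshare/
Run daily from Task Scheduler (or cron), this gives a plain-text dump that is independent of the snapshot and Hyper Backup jobs.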
 
I do pretty much the same, except that all my compose files are in git repos and my volumes are stored on "Portworx" (only recommended for k8s/swarm cluster setups; it adds no value for a single Docker node), which always creates an additional replica per volume. From time to time I "remove" all my deployments, create consistent backups of my volumes and redeploy everything again.
 
How are you guys backing up Bitwarden (Vaultwarden image)? Is it safe to use Hyper Backup or Snapshot Replication to take snapshots considering it's an SQLite3 database? Or are you doing backups through the admin page's "Backup database" option?
Snapshot works just fine for me
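If you want a copy that is guaranteed to be consistent regardless of snapshot timing, SQLite's online backup command is another option. A minimal sketch, assuming the Vaultwarden data folder is mapped to /volume1/docker/vaultwarden and the sqlite3 CLI is available on the NAS (it may need to be installed separately):
Code:
#!/bin/sh
# Consistent copy of the SQLite database using SQLite's online backup;
# safe to run while Vaultwarden is up. Both paths are placeholders.
sqlite3 /volume1/docker/vaultwarden/db.sqlite3 \
  ".backup '/volume1/docker/backups/vaultwarden-db-$(date +%F).sqlite3'"
This is comparable to the admin page's "Backup database" option, just scriptable and easier to schedule.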
 
Hi, since updating to DSM 7 (now on version 7.1-42661 Update 4), the websocket Live Sync feature stopped working (as mentioned here). I have since removed the custom RP .conf file, set my Reverse Proxy back up with Synology's native RP, and have tried following what is suggested on this page. If I understood correctly, it appears some folks got it to work without having to use the separate Nginx Proxy Manager.

When navigating to my web vault on Firefox and hitting F12, I see a 502 Bad Gateway message along with three error messages: 1) "Firefox can’t establish a connection to the server at wss://bw.[mydomain].synology.me/notifications/hub?access_token=eyJ0e[...]hX6aoA. WebSocketTransport.js:99:44"; 2) "Error: Failed to start the connection: Error: There was an error with the transport. Utils.js:218:39"; and 3) "Error: There was an error with the transport. consoleLog.service.ts:53:16".

Does anyone have any ideas what my issue might be? I've been running in circles for the past few days now and am getting close to giving it up :( Any insights or suggestions would be much appreciated!
 
Try:

  1. Enter the DSM control panel.
  2. Navigate to Network → General → Advanced Settings
  3. Make sure ‘Enable Multiple Gateways’ is unchecked.
 
Pardon my ignorance, but what is "Enable multiple gateways" useful for? When would one use it?
One case would be to use an outgoing VPN (a client connection to a commercial VPN provider) while still allowing incoming DDNS access to some self-hosted services. This will not work in all cases, but it will for most.
 
Try:

  1. Enter the DSM control panel.
  2. Navigate to Network → General → Advanced Settings
  3. Make sure ‘Enable Multiple Gateways’ is unchecked.
Hey Rusty, thanks for the quick reply! It looks like this option is already unchecked.

My Reverse Proxy Rules > Destination > Protocol is HTTP, and my Custom Headers show both $http_upgrade and $connection_upgrade. Should those be removed, since the script apparently generates those parameters in my websocket.locations.Vaultwarden file? Is the idea that websocket should be querying the server using http/clear text, or https? The GET message shows it is using wss:// as opposed to ws://. Is that the encrypted version of websocket, kind of like https vs http?

Does it make a difference if I use the host IP address vs localhost vs 127.0.0.1 in my RP configs?
 
looks like this option is already unchecked
OK, that's expected, as it is disabled by default.

My Reverse Proxy Rules > Destination > Protocol is HTTP, and my Custom Headers show both $http_upgrade and $connection_upgrade. Should those be removed, since the script apparently generates those parameters in my websocket.locations.Vaultwarden file?
Correct. Using a custom reverse proxy, or in this case a script to generate the RP host file, means that you should not have anything in the Synology reverse proxy interface. It could be a reason for the clash.

Does it make a difference if I use the host IP address vs localhost vs 127.0.0.1 in my RP configs?
If this is a Docker instance of VW, then its localhost is not the same as the NAS localhost. localhost vs 127.0.0.1 will be the same, but localhost vs your NAS IP (in the case of Docker vs a bare-metal NAS) will not be the same.
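One quick sanity check is to test the websocket port directly from the NAS shell, to see what the reverse proxy should actually be pointing at. A sketch, assuming the notifications port is published as 3012 on the host (host and port are placeholders for your own mapping):
Code:
# Ask the websocket endpoint for a protocol upgrade; an HTTP 101 (or even a 4xx)
# response means the port is reachable, "connection refused" means it is not.
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  http://127.0.0.1:3012/notifications/hub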
 
I removed the Custom Headers from the DSM Reverse Proxy GUI and re-ran the script. When looking at my logs by running:
Code:
docker logs vaultwarden-server | grep -i websockets
, it appears that the websocket had been running, but then stopped working about an hour ago (probably when I removed those Custom Headers), and is now running again.
 
I am not sure if I am missing something, but websocket support (aka live sync?) for vaultwarden (formerly known as bitwarden_rs) has always required reverse proxy rules with location mappings that proxy a specific path to port 3012. This cannot be done through the UI alone and either requires manual reverse proxy rules or the use of a reverse proxy that actually allows creating such rules, like Nginx Proxy Manager.

@Rusty wrote a blog post about it: Bitwarden - LiveSync feature
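To illustrate what those location mappings boil down to in nginx terms, here is a rough sketch (the server name, upstream address and the 8080/3012 ports are assumptions for a typical Vaultwarden setup, not the exact output of the script):
Code:
server {
    listen 443 ssl;
    server_name bw.example.synology.me;

    # Regular web vault and API traffic.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # WebSocket notifications (live sync) go to the separate websocket port.
    location /notifications/hub {
        proxy_pass http://127.0.0.1:3012;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # The negotiate call stays on the normal HTTP port.
    location /notifications/hub/negotiate {
        proxy_pass http://127.0.0.1:8080;
    }
}
The key part is that /notifications/hub needs the Upgrade/Connection headers and a different upstream port than the rest of the vault, which is exactly what the plain DSM reverse proxy UI cannot express on its own.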
 
I am not sure if I am missing something, but websocket support (aka live sync?) for vaultwarden (formerly known as bitwarden_rs) has always required reverse proxy rules with location mappings that proxy a specific path to port 3012.
This is correct, but as far as I have seen in the script (though I haven't deep-dived into it), it sets up exactly that, just outside the regular built-in Synology RP UI, considering that this addition requires multiple paths for the VW instance to run and the default RP can't handle it.

In any event, I have it running using NPM (as my main reverse proxy), and really haven't had a single hiccup since setting it up. The truth is that this shift was mainly motivated by DSM 7, but also by some more complex reverse proxy needs that the default instance did not provide. Also, the default one is much slower than NPM.
 
I removed the Custom Headers from the DSM Reverse Proxy GUI and re-ran the script. When looking at my logs by running:
Code:
docker logs vaultwarden-server | grep -i websockets
, it appears that the websocket had been running, but then stopped working about an hour ago (probably when I removed those Custom Headers), and is now running again.
So is it running in the end or not?
 
So is it running in the end or not?
Unfortunately it is not, despite the logs stating that the websocket server started. It seems like there is something else going on which is preventing my browser from connecting to the server. I have tried Microsoft Edge too with no success. :(

Perhaps my next step is to try NPM. I actually did attempt to install and use it per your instructions here, but I hit a roadblock with importing existing SSL certs. Is it possible to import the Let’s Encrypt certs that DSM set up for me? And how would I go about identifying where they are and which ones apply to each Docker service I host?
 
Is it possible to import the Let’s Encrypt certs that DSM set up for me?
Yes. You can import any cert as long as you have the full chain (or just the cert) and the private key for it. There is no reason why you can't export it from DSM and import it as a "custom" option.

And how would I go about identifying where they are and which ones apply to each docker service I host?
If you are asking while the cert is still on the DSM side, then just use the certificate UI and expand the certificate dropdown menu. It will list all the services that use it, including all your Docker services, as those are almost certainly running via the internal reverse proxy.
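Once exported, a quick way to check which domain(s) a given certificate file actually covers is to inspect it with openssl (the file name cert.pem is a placeholder for whatever the DSM export gives you):
Code:
# Print the subject, validity dates and the alternative names of an exported certificate
# so you can match it to the right Docker service / reverse proxy entry.
openssl x509 -in cert.pem -noout -subject -dates
openssl x509 -in cert.pem -noout -text | grep -A1 "Subject Alternative Name"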

If you need a hand, let me know here, in PM or on my private chat (link is on the site where the article is, top right corner).
 
@Rusty, how could we add extra auth in front of the admin endpoint if using Synology's built-in reverse proxy? (As discussed here: "Bypass admin page security" should not be overrideable by admin-page · Issue #2761 · dani-garcia/vaultwarden)

Or can this only be done if you use one of the other reverse proxy services, such as Nginx Proxy Manager? (As discussed here: NGINX proxy manager)
Via the built-in one (the new Login Portal), the only thing you can add is an Access Control Profile to control access to it (allow/deny).

Multiple rules can be added (same principle as with the firewall; they apply top to bottom).

Personally, I have not tried it since DSM 7, as I stopped using the built-in one, but if you can configure it to block the /admin element of BW it should work just fine. I have it configured via NPM the same way, meaning that access to the admin page is only allowed from the internal subnet.
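For anyone wanting to replicate that in NPM, it roughly comes down to a custom location for /admin with allow/deny rules (a sketch only; the subnet and the upstream address/port are placeholders for illustration):
Code:
# Custom location in NPM (or any nginx config) restricting the admin page to the LAN.
location /admin {
    allow 192.168.1.0/24;   # internal subnet (placeholder)
    deny all;               # everyone else gets a 403
    proxy_pass http://127.0.0.1:8080;
}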
 
