Well, this falls into a general Docker category, so we will make it a separate resource.

You might want to add this to the tutorial so Docker noobs like myself won't be lost.
I have migrated from KeePass to Bitwarden_RS without any issue.

Exporting from KeePass and importing the KeePass XML into Bitwarden_RS gives an error.
How many entries do you have? Or is it too big (which I would really doubt)?
Well, at least you got it all imported. I definitely agree that this limit should be raised; it's not like there is a huge amount of heavy data involved.

Thanks for the help. I tried most of the above.
Apparently it's a size thing. I have more than 300 records, and I had to split them into sets of around 100 entries to make the import work. Still a pretty lousy import algorithm if you ask me, that it simply quits with an unexpected error...
And I'm on a DS718+ with 8 GB RAM; you would say that's more than decent enough.
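The chunking workaround described above can be scripted instead of done by hand. A minimal sketch in Python, assuming the vault was exported to CSV; the 100-entry chunk size comes from this thread's trial and error, not from any documented limit:

```python
import csv
import os

def split_csv(src: str, chunk_size: int = 100) -> list:
    """Split an exported CSV into smaller files, repeating the header
    row in each part, so every part stays small enough to import."""
    with open(src, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(reader)

    base, ext = os.path.splitext(src)
    parts = []
    # Write chunks of `chunk_size` data rows, each with its own header.
    for n, start in enumerate(range(0, len(rows), chunk_size), start=1):
        part = f"{base}.part{n}{ext}"
        with open(part, "w", newline="", encoding="utf-8") as out:
            writer = csv.writer(out)
            writer.writerow(header)
            writer.writerows(rows[start:start + chunk_size])
        parts.append(part)
    return parts
```

Each resulting `export.partN.csv` file can then be imported one at a time through the web vault's import page.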
Looks solid, m8. Keep in mind that locking down your BW is also important. Have you locked down signups as well? Just in case anyone stumbles upon your BW URL, they shouldn't be able to sign up and create a vault.

Hi all, thank you so much for this fantastic thread and all the information in it. It's amazing and I have learnt a lot. I have managed to configure BW on my Syno and everything is working perfectly. Still, there is a voice in the back of my head telling me that exposing my NAS to the Internet is risky. These are the countermeasures that I have in place:
How secure is my setup? I know that it cannot be 100% safe, but is it safe "enough"? Is there something else I can do?
- HTTPS connection to BW using a xxx.synology.me:xxxx domain with a non-standard port and a Let's Encrypt SSL certificate
- ISP router and Google WiFi with port forwarding of only the port needed for BW
- Reverse proxy set up in Syno
- The BW Docker container runs as a user that only has access to the local Docker shared folder
- Firewall rules to allow all LAN traffic, allow traffic to the BW port only from local-country IPs, and deny everything else
- Admin account disabled, SSH disabled.
thanks!
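On the signup lockdown mentioned above: in the bitwardenrs/server image this is controlled by environment variables. A minimal sketch as a Compose file; the container name, host port, and data path are placeholders matching this thread's examples, not requirements:

```yaml
version: "3"
services:
  bitwarden:
    image: bitwardenrs/server:latest
    restart: always
    environment:
      # Reject new account registration on the web vault
      - SIGNUPS_ALLOWED=false
      # Also block organization invitations from creating accounts
      - INVITATIONS_ALLOWED=false
    volumes:
      - /volume1/docker/bitwarden:/data
    ports:
      - "8080:80"
```

With signups disabled, existing vaults keep working; only the registration form is closed off.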
In short, you can use this tutorial the same way, just with the new image. The point is that mprasil has moved development to the bitwardenrs image, but underneath it's the same code.

Hi, I am using the mprasil/bitwarden version (which I installed a couple of months ago with the help of your tutorial in my DSM Docker). Three persons (users) including me have their own local (DSM) accounts and personal Bitwarden vaults here, and two of them have 2FA turned on.
Now I would like to install the bitwardenrs version. My questions are:
1) Is there any easy way just to "update" from mprasil to bitwardenrs, or will I have to do a completely new installation?
2) Should I first export the vault as a JSON file (one file for every single user) and later import it again, or not? My database is mapped to /volume1/docker/bitwarden (I can see these files there: db.sqlite3, db.sqlite3-shm, db.sqlite3-wal, rsa_key.der, rsa_key.pem, rsa_key.pub.der). Is this OK, and if I configure the new bitwardenrs container the same way as the previous one, will it automatically reconnect to this database?
3) Should I first turn off (disable) the 2FA?
Thanks, and sorry for my questions. I am not very familiar with Docker or Linux, but your tutorial is so well done that even someone like me was able to get everything working without problems.
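Since the thread notes it is the same code underneath, the usual approach is to reuse the existing data folder with the new image rather than export/import. A sketch of the steps; the container name and port mapping are examples, and you should back up /volume1/docker/bitwarden first:

```
# Stop and remove the old mprasil/bitwarden container (name is an example)
docker stop bitwarden && docker rm bitwarden

# Pull the renamed image
docker pull bitwardenrs/server:latest

# Start it against the SAME data folder; the sqlite database,
# RSA keys, and 2FA settings in /data are picked up as-is
docker run -d --name bitwarden \
  -v /volume1/docker/bitwarden:/data \
  -p 8080:80 \
  bitwardenrs/server:latest
```

Because the database format is unchanged, a JSON export is only a safety net, and 2FA should not need to be disabled; still, keeping a copy of the data folder before switching costs nothing.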
Can you explain in detail what you did and where?

I followed the update to enable LiveSync. However, after removing the DSM Reverse Proxy config, modifying the custom reverse proxy .conf file, making changes to the ports (outside I run it on port 4545), copying the file over, and restarting nginx, I lost all other connectivity to my DS1511+. I had to remove the custom reverse proxy config file and restart nginx. Wondering what I did wrong?
This is my .conf file:
server {
    listen 4545 ssl;
    listen [::]:4545 ssl;

    server_name mydomain.net;

    ssl_certificate /usr/syno/etc/certificate/system/default/fullchain.pem;
    ssl_certificate_key /usr/syno/etc/certificate/system/default/privkey.pem;

    location / {
        proxy_connect_timeout 60;
        proxy_read_timeout 60;
        proxy_send_timeout 60;
        proxy_intercept_errors off;
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://192.168.1.20:4545;
    }

    location /notifications/hub {
        proxy_pass http://192.168.1.20:3012;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /notifications/hub/negotiate {
        proxy_pass http://192.168.1.20:80;
    }

    error_page 403 404 500 502 503 504 @error_page;

    location @error_page {
        root /usr/syno/share/nginx;
        rewrite (.*) /error.html break;
        allow all;
    }
}
If 192.168.1.20 is your Syno IP, then you could put 127.0.0.1 instead and set the ports in your nginx conf file to match the local ports.

Look at the proxy_pass http://192.168.1.20:4545; declaration in location /: nginx will use itself as an upstream, but with http instead of https, and thus cause a protocol mismatch. Actually, the problem is not the protocol; it is that you introduce a loop. This can't be correct.

Also, change mydomain.net to the domain you have actually set up for this file.
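The loop above is broken by pointing proxy_pass at the port the Docker container publishes, not at nginx's own listen port. A sketch, assuming the container maps host port 8080 to the web vault (8080 and the domain are placeholders):

```nginx
server {
    # nginx terminates HTTPS for outside clients on 4545
    listen 4545 ssl;
    server_name bitwarden.example.net;  # placeholder domain

    location / {
        # Must NOT be 4545: nginx itself listens there, so proxying to
        # 4545 would send requests back to this very server block.
        # Forward to the container's published HTTP port instead.
        proxy_pass http://127.0.0.1:8080;
    }
}
```

The outside port (4545) and the container's port (8080 here) have to differ when both live on the same host, otherwise every request re-enters nginx.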