First time NAS user. Allocation of hard drive space, how to configure RAID, do I need to stay logged in?

NAS: DS423+
Operating system: Windows
I intend to use my new NAS primarily as file storage for editing video files. My plan is to configure the 4 drives in RAID, with two drives for storage and the other two for simultaneous backups. I will also periodically back these up manually to an external drive. I do not want or need internet access.

I've set up the NAS (4-bay with 4x 20TB drives) and created a Storage Pool (SHR). The total capacity shown is 54.5TB. I guess the 54.5TB is three drives' worth - but 5.5TB sounds like a lot for an operating system. The Storage Pool is 52.4TB, another 2.1TB less.
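
(For anyone puzzling over the same numbers: a plausible reading, assuming DSM displays binary units (TiB) while drives are sold in decimal TB, and that SHR over four equal disks leaves three disks usable. The ~4% reservation below is an estimate, not a Synology-documented figure.)

```python
# Sketch: where 80TB of raw disk plausibly goes, assuming DSM shows binary
# (TiB) units labelled "TB". Reserved percentage is a rough illustration.

TB = 10**12   # marketing terabyte (decimal), how drives are sold
TiB = 2**40   # binary tebibyte, what DSM appears to display

raw_usable = 3 * 20 * TB              # SHR over 4 equal disks ~= 3 disks usable
pool_shown = raw_usable / TiB         # ~54.57 -> matches the 54.5TB pool figure
print(f"pool:   {pool_shown:.1f} TiB")

# The further drop to ~52.4 would then be filesystem/system reservation, not a
# 5.5TB operating system; DSM itself lives on a small partition on every disk.
volume_shown = pool_shown * (1 - 0.04)   # assumed ~4% reservation
print(f"volume: {volume_shown:.1f} TiB")
```

On that reading, the "missing" 5.5TB is mostly a units artifact rather than an operating system.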

Do I need to change the RAID configuration to accomplish what I want? How do I configure the RAID setup?

Do I need to stay logged in at all times to either the NAS or the Synology website? When I logged out I received an email notification that "Your Synology Account was signed out" with the message, "for security reasons, your Synology Account (my email address) automatically signed out from (my username)".

This is an initial setup so I have no data to lose and can install or reconfigure anything at this point.

Charles
 
In case you have 4 basic disks: they are completely separate. If one fails, you lose the data on that disk, and you can replace it with any disk, smaller or larger. The system will run as usual (the system is installed on all disks), but with a warning that a disk has failed.

There is one exception: upon setup, by default, the first disk will contain your own installed "packages" (apps). You can change the package installation location to another disk (but you cannot move packages that are already installed), and if that disk fails, you will lose the functionality that came with those apps. So make sure you Hyper Backup these packages.

Btrfs has advantages, but also some disadvantages: it takes about a 6% overhead in disk space and is slightly slower than ext4. Take care not to switch on checksums if your disks carry heavily changing data, as in a Docker or database application, since calculating and writing checksums will hurt performance.
Good to know this is how it works. Thanks.

I hate to say it, but what you want would mean you'd have only about 16TB of actual storage.

Here is what I think you want to do.

Drives 1 & 2 set as SHR (for redundancy and growth potential) as Storage Pool 1. This is your main storage area.

Drives 3 & 4 as SHR as Storage Pool 2. This would be used for backing up your main storage area.

That means your main pool would effectively show only around 16TB of actual storage available. You can do a Hyper Backup of storage pool 1 to storage pool 2. It's important that they are on different physical pools. Why?

Well, from time to time Synology 'upgrades' things like the filesystem (e.g. from ext4 to Btrfs, or from an unencrypted pool to an encrypted one), and the ONLY thing you can do is nuke your pool and rebuild it. By putting a backup on a separate pool, you can pave your main storage pool 1 and rebuild from storage pool 2.

Also, your backup probably won't need as many drives as your main pool, as Hyper Backups tend to be of limited data and magically compress on the backup side.

Here's the problem: if your NAS only has 4 slots, you've basically turned 4 drives into just one drive of useful space. Sure, you have a lot more redundancy on your main pool and backup pool, but this is pretty limiting.

For a lot of my data I like to have 4 drives for my main pool and 2 drives for a backup pool, so for my needs that is the minimum size of NAS. But even in such a NAS, you only get effectively 3 drives' worth of actual storage space because of all the backup and redundancy.

Things get a little better with an 8-bay, as you can often get away with 6 drives for the main pool and 2 for the backup, in which case you effectively get about 5 drives of actual storage.
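
(A rough way to compare the layouts above, assuming SHR with one-disk redundancy behaves like RAID 5 - n-1 equal disks usable - and that DSM displays binary units. `usable_tib` is a made-up helper for illustration.)

```python
# Rough usable-space comparison of the layouts discussed above.
# Assumes SHR-1 over equal disks ~= RAID 5 (n-1 usable), binary display units.

def usable_tib(n_disks: int, disk_tb: float, redundancy: int = 1) -> float:
    """Usable space in TiB for an SHR/RAID-5-style pool of equal disks."""
    return (n_disks - redundancy) * disk_tb * 10**12 / 2**40

# 2 + 2 split (mirror-like pools): 1 usable disk on the main pool -> ~18.2 TiB
# raw, i.e. the "around 16TB" ballpark above once overhead is subtracted.
print(f"2+2 split, main pool : {usable_tib(2, 20):.1f} TiB")

# All 4 disks in one SHR pool: 3 usable disks -> ~54.6 TiB raw,
# the ~51-52TB range quoted later after overhead.
print(f"4-disk SHR pool      : {usable_tib(4, 20):.1f} TiB")

# 6 + 2 in an 8-bay: ~5 disks of usable space on the main pool.
print(f"6+2 split, main pool : {usable_tib(6, 20):.1f} TiB")
```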

Not sure any of that will make sense, but it's offered in the spirit of being helpful, and covers things I wish I'd realized when I started with Synology.
Thanks, this is very useful to know and helps me in my decision to change my original strategy from using SHR to not using it at all.

Based on all the above comments I'm going to remove SHR entirely, use the 4 drives as plain storage, and back them up externally as I would with any PC. As zombiephysicist pointed out, the amount of actual storage you get in a 4-bay NAS using SHR is very limited. Given that it cost me over a thousand dollars just to buy the hard drives, because I wanted to maximise my storage space, it's ridiculous to end up with 20TB to 50TB of storage depending on the setup. My reason for buying the NAS was not to set up a redundancy system (although I considered it as an option until I learned more about it here); it was to create a very large storage system with internal access from my home computers.
 
Well, if you are going to external storage, then I would suggest using SHR across all 4 drives. That will give you 3 drives of actual storage and 1 for redundancy, plus the ability to grow later. So with 20TB drives you'd get around 51TB of usable space, and any one of those drives could die and you could recover. Also, when 30TB or 40TB drives come out, you could upgrade your space over time.

The only problem is that if you have 51TB of SHR storage space available, even if you get a single 24TB drive (about 22TB usable), you obviously cannot back up all of the data. This may be just fine for a while if you are not using all the space, and you can worry about it later when you exceed the capacity of the single drive. That is what I would likely do. Having a 4-drive RAID with no redundancy is begging to lose data, IMO. The above would be a decent compromise solution.

Good luck and let us know what you end up doing!
 
Good argument for staying with SHR. I suppose I could always offload data once I get close to 50TB. I bought 4x 20TB drives because that was the maximum usable for my model according to the Synology website. When 30TB or 40TB drives become available I'd have to buy a new NAS as well as the HDDs, which at that level will be very expensive.

I'll think everything over and decide what to do. I'll definitely report back after I've used the NAS for a bit.
 
Synology just isn't updating the compatible-drive lists for their older models. I can confirm on 5 different Synology units that they can use the 24TB Exos drives just fine, with no issues.
 
Good to know, thanks.

One last point I'd like to make, at the risk of stirring up controversy. I've been working with computers since the 1980s, and aside from the one time there was a class action lawsuit over failed hard drives when one of mine failed (was it WD?), I have never had a hard drive fail - ever. I still have drives that are 20-plus years old and still work - lots of them. I know the common wisdom is that all hard drives will fail eventually, but so far not in my lifetime.

On one of the NAS YouTube videos, the comment was made that if a hard drive is going to fail, it's going to fail in the first week or two; after that there's hardly any chance of it failing. So unless NAS drives are manufactured differently, or the Synology system is more susceptible to corrupting content or causing failures, in this day and age the chances of one drive failing are minuscule, and the chances of multiple drives failing at the same time are practically nil. And if you are doing regular backups, which you should always be doing anyway, you should be safe having multiple hard drives of data without layers of redundancy - unless, of course, as we discussed above, you are running a business where you cannot afford any downtime.
 
Speaking from experience, if you explained this on Reddit you'd be downvoted into oblivion.

RAID does nothing for unintended file deletion, file corruption, malware encryption and a host of other problems that are far more likely to occur than a drive failure. Depending on the size of the volume, a RAID rebuild can take as long as, or much longer than, restoring from a backup. Rebuilding a RAID volume also stresses the remaining drives.

RAID is a useful technology and has its place, but it's important to understand the trade-offs.
 
I disagree with the claim that drives rarely fail, from first-hand knowledge. A NAS working 24 hours a day is different. I've had drives start to go a year in, and 3 years in. Glad you haven't experienced it.

But think of it this way: the more drives you buy, the more points of failure you have, and the greater your chance of a dud. If you just combine them as RAID 0 to get the most space and speed, you increase the odds not only of experiencing a failure, but of one failed drive effectively killing the data on all the other drives. That's why you want something like SHR or RAID 5 - to give you some warning so you can replace the failing drive.

Often drives do not just "die" - they start throwing errors, which often gives you plenty of time to replace the drive, and the RAID will rebuild itself.
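
(To put rough numbers on the "more drives, more points of failure" point: a minimal sketch assuming independent drive failures at an illustrative 1.5% annual failure rate - an assumption, not a measured figure.)

```python
# Sketch: how striping without redundancy multiplies risk, assuming
# independent drive failures at an illustrative annual failure rate (AFR).

afr = 0.015   # assumed per-drive annual failure rate (1.5%, for illustration)
n = 4         # drives in the array

# RAID 0: any single failure loses the whole array.
p_raid0_loss = 1 - (1 - afr) ** n
print(f"RAID 0, P(data loss in a year)   : {p_raid0_loss:.1%}")   # ~5.9%

# SHR/RAID 5: one failure is survivable; data loss needs a second failure
# before the rebuild finishes, so the yearly risk is far smaller (this
# ignores rebuild stress and correlated failures).
print(f"Single drive, P(failure in a year): {afr:.1%}")
```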
 
I am also a person who has been working with computers since the '80s - the early '80s in fact, before hard drives even existed in the PC market. IOW, I've been working with hard drives since they came into being. And yes, they can be very reliable, but they do fail. You are welcome to google "backblaze hard drive stats" to see proof of said failures, specifically for drives running 24/7 in a data center environment. As you noted, regular backups are an important part of a good risk-minimization strategy. What RAID buys you is the convenience of not having to do a restoration, and a second level of protection in case your backup has issues. I feel much more comfortable having RAID, automated backups that run weekly, and manual backups that I run periodically and take offsite.
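
(A back-of-the-envelope feel for what Backblaze-style failure rates imply over a NAS's lifetime; the 1% AFR here is an illustrative round number, not a quoted statistic.)

```python
# Back-of-the-envelope: chance of seeing at least one drive failure over the
# life of a 4-bay NAS, assuming independent failures at an illustrative AFR.

afr = 0.01    # assumed 1% annual failure rate per drive (for illustration)
drives = 4
years = 5

p_none = (1 - afr) ** (drives * years)   # no failures across all drive-years
print(f"P(at least one failure in {years} years): {1 - p_none:.0%}")  # ~18%
```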
 
On one of the NAS YouTube videos, the comment was made that if a hard drive is going to fail, it's going to fail in the first week or two; after that there's hardly any chance of it failing. So unless NAS drives are manufactured differently, or the Synology system is more susceptible to corrupting content or causing failures, in this day and age the chances of one drive failing are minuscule, and the chances of multiple drives failing at the same time are practically nil.
Nice discussion for the lounge.
You are very fortunate. Most of us are not, and many of us lost data as a result of disk failure without proper backup or redundancy. I am one of those who had two disks fail at the same time (a power spike or whatever).
And that was even before ransomware.
At a certain point, statistics say that you may be hit as well, and I really hope your backup strategy is OK and your data will survive.
I will step out of the topic, as I am a beta guy that needs facts and figures.
Good luck
 
With quality NAS/enterprise-rated drives, multiple drives failing at the same time should be very rare, but it can happen, hence why it is always important to have external backups (RAID is not a backup...).
 
From what I've read the most common situation where multiple drives fail within a short period of time is when you get a number of drives from the same build lot and there was an issue during the manufacturing of that lot. Some IT admins will purposely source a set of drives from multiple vendors or in different shipments to try to avoid that possibility. But with random drives it is rare to see multiple drive failures within a short period of time.
 
Can't disagree with the logic behind this, and have seen a few posts detailing that scenario.
 
You are welcome to google "backblaze hard drive stats" to see proof of said failures, specifically for drives running 24/7 in a data center environment.

As you noted, the problem with data center statistics is that the drives used in data centers are being hammered 24/7 by multiple users. That's one of the reasons I'd never buy refurb drives, regardless of hours. The hours in use don't show the level of use. Most SOHO or personal use won't be anywhere near that level.
 
Speaking from experience, if you explained this on Reddit you'd be downvoted into oblivion.
Ha ha, good thing I'm not on Reddit, ;-)

RAID does nothing for unintended file deletion, file corruption, malware encryption and a host of other problems that are far more likely to occur than a drive failure.
Exactly my point. These are more likely to affect data without good constant backups than a failed drive.

You are welcome to google "backblaze hard drive stats" to see proof of said failures, specifically for drives running 24/7 in a data center environment.
Thank you for clarifying this. So having drives run 24/7 makes them more susceptible to failure than under normal home use, where computers are turned off at night and backup drives are only plugged in when they are backing up. Good to know. Which brings up the debate: should you run a NAS 24/7 if you are shutting everything down at night, so that it is in effect running for no reason?
 
Which brings up the debate: should you run a NAS 24/7 if you are shutting everything down at night, so that it is in effect running for no reason?
There is not really a debate here: there is no evidence whatsoever that running 24/7 has an impact on lifetime (other than some ancient posts about 1990s HDDs); disks are made to run and to start up.
There is evidence that powered-on devices consume energy, and that an active device is a possible target for malware/hackers.

So switch it off when you do not use it. 8 hours a day without power (23W at 0.40€/kWh) saves about 140€ (or $) over 5 years on a 4-bay NAS - a free disk after 5 years.
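
(Checking that arithmetic with the figures quoted above - 23W avoided for 8 hours a day at 0.40€/kWh; both numbers vary by model and country.)

```python
# Verify the savings estimate above: 23W idle draw avoided 8h/day, 0.40 EUR/kWh.

watts = 23            # quoted idle draw of a 4-bay NAS (varies by model/disks)
hours_off_per_day = 8
eur_per_kwh = 0.40    # quoted tariff; varies by country

kwh_saved = watts / 1000 * hours_off_per_day * 365 * 5
print(f"~{kwh_saved:.0f} kWh over 5 years -> ~{kwh_saved * eur_per_kwh:.0f} EUR")
# ~336 kWh -> ~134 EUR, i.e. roughly the 140 EUR figure above.
```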
 
Thank you for clarifying this. So having drives run 24/7 makes them more susceptible to failure than under normal home use, where computers are turned off at night and backup drives are only plugged in when they are backing up. Good to know. Which brings up the debate: should you run a NAS 24/7 if you are shutting everything down at night, so that it is in effect running for no reason?

Sort of. My point was that most home users don't exercise their hard drives anywhere near what a data center does, so relying on stats based on data center experience isn't really useful.

There are two components to drive life: the platter bearing/motor, and the servos that drive the heads. As long as the drive is kept at the spec operating temperature, spinning the platter doesn't create much wear. I run my NASes 24/7 (one of which is over 10 years old, used for backups) because nights are when my device backups occur and updates are downloaded/applied, and my security cameras record 24/7. My primary computer is also on 24/7, as my work hours are irregular and I don't particularly want to wait for the NAS to spin up.

There is some data that suggests powering up is when most failures occur. I wouldn't be particularly worried about that, or about the additional electricity used, but YMMV.
 
You need to use the power down button or initiate a shutdown from the web interface. Removing power is not a good idea.
The power button is what I meant; I'm not sure what you mean by "removing power." I looked it up and see that the power button is OK as well as the web interface: "If you press and hold the button for approximately 4 seconds until the LED blinks and you hear a beep sound, your NAS will gracefully shut down."

You can set a shutdown/startup schedule in the scheduler, e.g. 23:00 off, 08:00 on
This is a good idea too, most of the time, but I'd want to turn this off if I wanted it shut down for a longer period of time.
 
