
NAS Compares: Are SMR Hard Drives still 'BAD' in 2024?


View: https://www.youtube.com/watch?v=iRInPe13G0o

NAS Compares Video

- - -

Check out the FREE NAS advice section on nascompares.com
 
I'm now wondering what Seagate's new Mozaic HAMR drives are like, compatibility-wise.
 
I'm old school. No matter what others, or the manual, say: not only no SMR, but ALL drives in any RAID must be the same model, each with the same firmware.
That approach gives the best "starting point" for the data.
 
No arguments there. However, there's no way of knowing what firmware you'll get with the drives you purchase; all I can say is buy boxed retail drives from a known good supplier.
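For anyone who wants to verify what they actually received, here is a minimal sketch that reports each drive's model and firmware revision. It assumes smartmontools is installed, the script runs with root privileges, the drives are SATA, and the /dev/sdX names are examples to be adjusted for your bays:

```python
#!/usr/bin/env python3
"""Report model and firmware of each drive so you can see what you got.
Assumes smartmontools, root privileges, and SATA drives (smartctl -i
prints 'Device Model' / 'Firmware Version' for those)."""
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # example bays

def drive_info(dev: str) -> dict:
    out = subprocess.run(["smartctl", "-i", dev],
                         capture_output=True, text=True).stdout
    info = {}
    for line in out.splitlines():
        if ":" in line:
            key, _, val = line.partition(":")
            info[key.strip()] = val.strip()
    return info

seen = set()
for dev in DEVICES:
    info = drive_info(dev)
    model = info.get("Device Model", "unknown")
    fw = info.get("Firmware Version", "unknown")
    seen.add((model, fw))
    print(f"{dev}: model={model} firmware={fw}")

print("uniform array" if len(seen) == 1 else "WARNING: mixed models/firmware")
```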
 
Yes! If necessary, call. A good reseller will at least be able to determine that the drives came from the same shipment.
It's worth it for me!
Now, if you are doing a large array, 10 drives or so, you should be able to specify the batch; for 2-4 drives, maybe only the same shipment. If you have a good rapport with your reseller, it's better, obviously.
Another trick I've done: if you need 2 drives, order 3. Check and verify them, and return one if necessary. Or, what I do: test all of them for 3-4 hours, then pull one. The pulled drive is tested externally with the manufacturer's software, and once verified OK it sits on the shelf as a good spare for the future, known to be truly identical. Do the same for power supplies if multiple NASes are the same model. Since after the warranty is over YOU are tech support, you may as well stack the deck in your favor!!!
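As a rough illustration of that test-then-shelve routine, here is a hedged sketch that kicks off a long SMART self-test on every candidate drive and, once the tests have finished hours later, prints each drive's self-test log and health verdict. It assumes smartmontools and root privileges; the device names are illustrative:

```python
#!/usr/bin/env python3
"""Burn-in sketch: start a long SMART self-test on every drive,
then review results afterwards with --check."""
import subprocess
import sys

DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]  # e.g. the 3 ordered when you need 2

def run(args: list) -> str:
    return subprocess.run(args, capture_output=True, text=True).stdout

if "--check" in sys.argv:
    # After the tests finish (hours later), review the log and health verdict.
    for dev in DEVICES:
        print(f"--- {dev} ---")
        print(run(["smartctl", "-l", "selftest", dev]))  # self-test log
        print(run(["smartctl", "-H", dev]))              # overall health
else:
    for dev in DEVICES:
        print(f"starting long self-test on {dev}")
        print(run(["smartctl", "-t", "long", dev]))
```

Run it once to start the tests, then again with --check after they complete.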
 
Boxed retail and warranty are as much as I care about. I don't generally have the money up front to buy a large quantity of large drives. Differing firmware on retail drives has never presented any issue for me, and I am on my 3rd NAS in 9 years.
 
As long as we are both comfortable with our decisions... that's all that matters!
The first RAID I was involved with was in 1982-83, at work. At home, the mid-1990s. It was a RAID, but with the drives acting in multiple sets of pairs: luminance on one drive, chroma and other signals on the other. Bad frames were mapped out, with individual frames locked out manually either in the FAT or in a PROM.
 

I have a somewhat different philosophy, particularly with SHR. I do NOT want drives from the same batch, as they will tend to fail at the same time. And the last thing I want is failures around the same time, since my RAIDs, at least, can only handle one drive failure at a time.

And it hasn't been theoretical for me: in the past I've had several drives from the same batch die around the same time. Luckily I got my data copied off in time.

Now, since my drives tend not to all be from the same batch, when they start throwing bad blocks or some other problem, it tends to happen one drive at a time, which makes for easy replacement.

As always, YMMV.
 
I'd rather have all my eggs in one CMR basket.
If a drive is bad, hopefully it shows before the warranty is up. I use a brand I trust.
Your plan is to have multiple production runs in service simultaneously. Doesn't that mean a greater failure rate, due to the multiple versions?
 

I agree on the brand you can trust and on CMR, but I don't agree on multiple versions.

However, drives with the same MTBF are mathematically more likely to fail together than drives with staggered MTBFs. That suggests that, over time, staggering the ages of the drives you use can help reduce the likelihood of multiple simultaneous failures, versus putting a whole batch into service at the same time.

If you have a 5-drive RAID and, let's say, the drives are identically made, you have a much higher chance of more than one drive deciding to fail at almost exactly the same time. And for a RAID with single-drive fault tolerance, that would be bad.

Whereas if all the drives are from different lots, the odds of them having an identical defect go down. Ergo, the odds of more than one failure at the same time go down.
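To make the intuition concrete, here is a hedged Monte Carlo sketch. Every number in it is an assumption for illustration, not a real drive statistic. It estimates how often a second drive in a 5-bay, single-parity array fails inside the rebuild window of the first failure, for same-batch versus mixed-lot drives:

```python
import random

DRIVES = 5
REBUILD_WINDOW = 24.0   # hours to rebuild after the first failure (assumed)
MEAN_LIFE = 30_000.0    # mean drive lifetime in hours (assumed)
LOT_SPREAD = 0.30       # lot-to-lot variation in mean lifetime (assumed)
UNIT_SPREAD = 0.05      # drive-to-drive variation within a lot (assumed)
TRIALS = 100_000

def lifetimes(same_batch: bool) -> list:
    """Draw one array's worth of drive lifetimes, in hours."""
    if same_batch:
        lot = random.gauss(MEAN_LIFE, MEAN_LIFE * LOT_SPREAD)  # one shared lot
        return [random.gauss(lot, MEAN_LIFE * UNIT_SPREAD) for _ in range(DRIVES)]
    # Mixed lots: every drive draws its own lot mean independently.
    return [random.gauss(random.gauss(MEAN_LIFE, MEAN_LIFE * LOT_SPREAD),
                         MEAN_LIFE * UNIT_SPREAD) for _ in range(DRIVES)]

def double_failure_rate(same_batch: bool) -> float:
    hits = 0
    for _ in range(TRIALS):
        first, second = sorted(lifetimes(same_batch))[:2]
        if second - first <= REBUILD_WINDOW:  # 2nd death during the rebuild
            hits += 1
    return hits / TRIALS

print(f"same batch: {double_failure_rate(True):.2%} lose a 2nd drive mid-rebuild")
print(f"mixed lots: {double_failure_rate(False):.2%} lose a 2nd drive mid-rebuild")
```

Under these made-up numbers, the same-batch case loses a second drive mid-rebuild several times more often, purely because the lifetimes cluster; real-world correlation from shared defects or a shared environment would only widen the gap.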

Here is a real-world example (and I had something similar happen):

I'm not saying there is a right or wrong here; but having nearly lost a LOT of data to a simultaneous failure of multiple identical drives from a single batch, I have vowed never to do that again. From one of the posts I linked to above:

The big danger in a homogeneous array is that you might suffer some rapid-fire failures that exceed your redundancy. This can and does happen to people, but it's pretty unusual. I had one client who had a bunch of the (terrible, awful) Seagate 1.5TB drives and they had built a large array out of RAIDZ1 for backups, and it got to the point where they were praying resilver operations would finish on one drive before the next failed.

I had a near-data-death experience just like the one above, where one drive outright died and another from the same batch lasted barely long enough for me to copy off the data before it died permanently. I'm talking within hours of one another. And if they had both died, that's it: all the data would have gone POOF.

Having drives from different batches, I have never again suffered a simultaneous multi-drive failure. I don't even care if the drives die within the same week or month; just some reasonable separation in time, rather than minutes or hours apart, so you can slap in a replacement drive and rebuild the RAID.

That said, perhaps my not having had another simultaneous failure has nothing to do with using different batches of drives and was just good luck, and perhaps my multi-drive failure on a single batch of drives was simple bad luck.

I'm not telling you to change your preference; it's just one more variable to consider. Everyone's use cases differ, and having matching drives can have other benefits that you may care about more than I care about the failure risk. No wrong or right answers, it just depends on your use case.
 
Last year I retired a number of 1TB drives that had started out life, with custom firmware, in a TV station video server for many years. The server was retired and sat on a shelf, and a bunch of the drives were given to me as it was being trashed. I used them in a video RAID for 4-5 more years at home, then they sat on a shelf for 2 years, then I put them in a 720+. Synology didn't like the firmware ('Use it Anyway'), but only a couple of bad blocks were reported in the 2 years they were used before being swapped for SSDs.
I've always been partial to Seagates, based on work use. These old-as-the-hills 4x 1TB drives are still in a drawer, but I doubt I'll ever need them again. Like the Energizer Bunny, they just keep on going!
The firmware was custom written back in the late 90s.
Yes, I did have a series of 3 identical early WD Red drives die within a 2-year period. If I hadn't had redundant data storage it would have been more of a concern. As it was, I added new Seagate drives one by one, and upgraded drive capacity when finished. And yes, I tested the removed drives with the manufacturer's software to confirm they were dying.
Redundant storage is worth the expense! If a drive dies, I have a spare. If an array dies, the data is stored in multiple locations: just rebuild the array and copy back. Spare drives, redundant storage, spare power supplies, the SSDs added to each NAS on eSATA connections... all of the above let me stay oblivious to failures. Once a drive acts up, I'll react to it. In the meantime: ignorance is bliss! 😉
 
All of that simply highlights the need to keep at least a secondary full backup of your data, in case an unexpected critical failure leads to catastrophic loss.
 

I very much agree, and I do. That said, for me it's a once-bitten kind of thing: I don't need that level of excitement, or toil, if I can do something easy to avoid it. And with data, people can have all kinds of bad luck; I cannot say my anecdotal experiences should be extrapolated from, and they are far from definitive. But they are my experiences, and I act accordingly. I share them in case they prove useful for others, but maybe they won't. As always, YMMV.
 
I get it completely; unfortunately, we all know that s*** happens.
 
Sh** happens. But it all depends on your need for, and the required speed of, continuing your daily workload via some type of backup solution. By backup I don't mean fetching a backup from some location and waiting X hours for it to restore.
I mean: you discover a data location is bad, and you simply proceed with your workload, getting the same info from an alternate location within 5-10 seconds, so your work continues undelayed from the new location. The repair of the original faulty location can then be done later, at your leisure.
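In code terms, that "carry on in seconds" idea is just an ordered-fallback read. Here is a minimal sketch, where both mount points are hypothetical examples of a primary share and a mirror kept in sync:

```python
from pathlib import Path

# Hypothetical mounts: a primary NAS share and a synced mirror.
LOCATIONS = [Path("/mnt/nas-primary"), Path("/mnt/nas-mirror")]

def open_project(relpath: str):
    """Open a file from the first location that answers; fall back instantly."""
    last_err = None
    for root in LOCATIONS:
        try:
            return (root / relpath).open("rb")
        except OSError as err:
            last_err = err  # unreachable or bad: try the next copy
    raise last_err          # every location failed

# Usage: work continues from the mirror if the primary is down.
# f = open_project("video/edit-2024/timeline.prproj")
```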
 
