Seagate 8 TB disk warnings

NAS: DS918+, DS1019+, DS1819+
Operating system: Windows
Mobile operating system: Android
Hello everyone. One of my disks is reporting a rising number of bad sectors, and the graph in Storage Manager shows the same increase, yet IronWolf Health says everything is fine (Normal status). How should I interpret this discrepancy?
 
I can’t speak to leaks, but I see your analogy. I have no statistics on this, and you may well be right. It would be interesting to see whether Backblaze or anyone else keeps statistics on bad block growth; as far as I know they only publish overall drive failure statistics.

That said, my totally unscientific stats: I’ve had 6 drives out of roughly 50 that reported bad blocks. Four of them stabilized after several months and have gone on to be just fine (they are now over 7 years old and still running). Two of them kept adding bad blocks at a roughly linear rate for longer than I was comfortable with, so I pulled them. I can’t tell whether they would have failed or stabilized, but after the seventh report (I think past the 6-month mark) with the counts still climbing, I had had enough and replaced them.

Of course that is highly anecdotal and statistically irrelevant; it’s just my experience. But it does show that drives are not like leaks, at least in some cases, and can “fix themselves”. That is the whole point of why modern drives ship with spare blocks and literally have functions to remap bad ones (i.e. to fix their own leaks).
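As a concrete illustration of that remapping bookkeeping, here is a minimal sketch of how you might read the relevant SMART counters with smartmontools. It assumes smartctl is installed and that /dev/sda is the right device path (a placeholder; Synology boxes may expose drives under different names), and it needs root:

```python
#!/usr/bin/env python3
"""Rough sketch: print the SMART counters behind "bad block" reports."""
import subprocess

DEVICE = "/dev/sda"  # placeholder -- adjust for your NAS/PC

# Attributes commonly watched for grown defects on ATA drives:
#   5   Reallocated_Sector_Ct   - sectors already remapped to spares
#   197 Current_Pending_Sector  - sectors waiting to be remapped
#   198 Offline_Uncorrectable   - sectors the drive could not recover
WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True, check=False).stdout

for line in out.splitlines():
    if any(name in line for name in WATCH):
        # Attribute name is the second column, RAW_VALUE is the last column.
        fields = line.split()
        print(fields[1], "=", fields[-1])
```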

Having suffered water damage in the home, I dearly wish water pipes had a similar capability!
-- post merged: --



By the way, how old is this drive? How many hours of use does it have? Does it make any clicking noises, or do you have a way to see or hear whether it randomly spins down and up?

If you hear any clicking or excessive, unexplained spin-ups/downs, replace the drive immediately.
4512 bad blocks today
46331 hours running time
It makes no unusual noise; the disk is from 2017.
 

80 (the first report is basically irrelevant, as it is usually undercounted and unrepresentative) to 4232 over 3 months, then to 4336 (an increase of 104 over, say, a few days?), then to 4512 (an increase of 176 over, say, a couple of days). So across these three data points the graph is somewhat accelerating.

The good news is that the bad block count is still increasing very little in absolute terms. The bad news is that the rate of increase, i.e. the curve, is going up, and it’s a pretty old drive. Given its age, I would definitely have a new drive on hand. Your drive’s runtime is about 5.3 years: not ancient, but not new either. I think most people would swap the drive at this point, and that’s the reasonable action.
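For anyone who wants to redo the arithmetic, here is a quick sketch using the numbers reported in this thread (the time intervals between reports are only rough guesses, so the "rate" is approximate):

```python
# Bad block totals as reported in this thread, oldest first.
counts = [80, 4232, 4336, 4512]
deltas = [b - a for a, b in zip(counts, counts[1:])]
print(deltas)                 # [4152, 104, 176] -> the recent steps are growing faster

# Runtime in years from the reported power-on hours.
hours = 46331
print(round(hours / (24 * 365), 1), "years")   # ~5.3 years
```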

If the rest of the RAID 5 array is running OK, I personally would wait a couple more days to see if the rate of increase dies down a bit. But I have a higher risk tolerance than most.

The other good news is that the system seems to be working well, so you should have plenty of time to get a replacement drive. I don’t think the drive is in danger of failing imminently. At the rate it’s going, including the acceleration, I think you’d have over a week before things get dire, even if the bad block count doesn’t decelerate.

For now you are still dealing with very, very low bad block counts, which is good. But given how old the drive is, and that the count seems to be accelerating rather than decelerating, I would err on the side of caution and replace it. Others might tempt fate and wait until the count gets bad in absolute terms, say 20,000 bad blocks, but that’s beyond even my tolerance for risk. For me, if the next report shows a jump substantially larger than the last 176 blocks (say an increase of 1,000) or the total goes over 10,000, I would replace.
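If you wanted to write that personal rule of thumb down, it might look like the tiny check below. The 1,000-block jump and 10,000-block total are just my own comfort limits from the paragraph above, not a Seagate or Synology recommendation, and the 5,600 figure is a made-up future reading for illustration:

```python
def should_replace(prev_total: int, new_total: int,
                   jump_limit: int = 1000, total_limit: int = 10_000) -> bool:
    """Personal rule of thumb: replace if one report jumps by ~1,000+ blocks
    or the running total passes ~10,000. Not a vendor recommendation."""
    return (new_total - prev_total) >= jump_limit or new_total >= total_limit

print(should_replace(4336, 4512))   # False: +176, total 4512
print(should_replace(4512, 5600))   # True: a hypothetical +1088 jump
```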

Have you gotten the new drive replacement yet?
 
I have a new one ordered, arriving tomorrow or Tuesday. I returned the disk from Amazon to the seller because it was refurbished and came with only a 4-month guarantee from Seagate.
 
Thought this was interesting and related:


“Since December 2022, Proxmox VE has sent around 497 notification e-mails about this drive. ZFS Scrub events, Pending sectors, increases in offline uncorrectable sectors, and error count increases all triggered SMART data notifications. Still, the drive continued on in its ZFS RAID 1 mirror.

Of course, without additional layers of redundancy, we would not have let this drive sit in a degraded state for almost fifteen months. It was also just a fun one to watch and see what would happen. In the end, this shucked 8TB WD drive lasted 6.83 years in service before being retired. It will have contributed to multiple STH articles during its lifespan.”

Like I said, it’s more about the trajectory of the bad blocks over time. They tend to settle down, and such drives can continue to be used for quite some time. Of my 6 drives that reported bad blocks, 4 settled down and went on to run for 3 more years after those reports, with no additional bad blocks since. The trajectory is key. But as always, YMMV.
 
FWIW... Synology says...
Thanks! Today I got another message from the Syno that the bad sectors have risen again, but the disk is still shown as healthy. I have the spare disk at hand. I could wait to replace it, but what happens if another disk suddenly goes bad as well? It’s a 4-disk RAID 5.
5424 bad blocks today, so it keeps going. Replace now?
 

Wow that went up about 900 more bad blocks since last time. That rate is accelerating. No bueno.

If another drive dies on top of this one, you lose all your data. (Hopefully you have a backup; you should, and you should make sure it’s up to date.)
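To spell out the RAID 5 math behind that warning (assuming four equal 8 TB members, as in this thread): usable space is one drive less than the total, and the array survives exactly one failed member, so a second failure while this drive is degraded or rebuilding takes the whole volume with it. A minimal sketch:

```python
# RAID 5 with n equal drives: usable capacity is (n - 1) drives' worth,
# and the array tolerates exactly one failed member.
n_drives, drive_tb = 4, 8                 # 4 x 8 TB, as in this thread
usable_tb = (n_drives - 1) * drive_tb
print(f"{usable_tb} TB usable, survives 1 drive failure")   # 24 TB usable
```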

You could probably wait for one more bad report if you are morbidly curious (and your data is fully backed up). But the rate of increase in reported bad blocks is itself growing, which is neither a good nor an encouraging sign. If you do not have a full, current backup and you are not confident in the remaining 3 drives, you should replace this drive now.

If you are confident in the other 3 drives and have a backup, and you are morbidly curious to see whether and how the drive dies, I would be curious to see its progression too.

I personally have “run this experiment” several times, because I find it educational to see how these drives perform or die in these edge cases. But I always had full (and multiple) up-to-date backups and didn’t mind doing a full restore.

If this is important, at-risk data, you should not “run the experiment” and should instead move to secure the data now.

If the rate of increase continues as is, your next bad block report would put you near 10,000 bad blocks. That is probably still within the drive’s spare capacity, but I start to get nervous when it goes above that.
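That “near 10,000” comes from a crude extrapolation: if the per-report increase keeps growing by roughly the same factor as it did between the last two reports, the next total lands around 10,000. A back-of-the-envelope sketch, not a prediction:

```python
# Increases between the last three bad block reports in this thread.
deltas = [104, 176, 912]
growth = deltas[-1] / deltas[-2]     # ~5.2x between the last two steps
next_delta = deltas[-1] * growth     # ~4,700 more bad blocks if it keeps accelerating
print(round(5424 + next_delta))      # ~10150, i.e. "near 10,000"
```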

Anyway, thanks for keeping us up to date and good luck with how it turns out!
 
