Seagate 8 TB disk warnings

NAS: DS 918+, DS1019+, DS1819+ · Operating system: Windows · Mobile operating system: Android
Hello everyone. One of the disks reports that its number of bad sectors is increasing; this can also be seen in the graph in Storage Manager. IronWolf Health, however, indicates that everything is fine (Normal status). How should I view/assess this discrepancy?
 
I can’t speak to leaks, but I see your analogy. I have no statistics on this and you may well be right. It might be interesting to see whether Backblaze or anyone else keeps statistics on bad-block growth; as far as I know they only keep total drive-failure statistics.

That said, my totally unscientific stats: I’ve had 6 drives out of probably 50 that reported bad blocks. 4 of them stabilized after several months and have gone on to be just fine (they are now over 7 years old and still running). On the other 2, the block counts kept increasing linearly for too long for my comfort, so I pulled them. I can’t tell whether they would have failed or stabilized, but after the 7th report (I think over about 6 months) with the counts continuously adding more bad blocks, I had had enough and replaced them.

Of course that is highly anecdotal and statistically irrelevant, but it’s my experience. It does show that drives are not like leaks, at least in some cases: they can “fix themselves,” which is the entire point of why modern drives ship with spare blocks and literally have functions to remap bad blocks (i.e. to fix their own leaks).
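As an aside, those remapping counters are visible in the drive’s SMART attributes. A minimal sketch, assuming the `smartctl` tool from smartmontools (`smartctl -A /dev/sdX`); the sample output below is illustrative, not taken from the drive in this thread:

```python
# Sketch: pull the attributes behind "bad block" counts out of
# `smartctl -A` output. SAMPLE is illustrative text, not real data
# from the drive discussed here.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   099   099   010    Pre-fail  Always       -       4512
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       16
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Always       -       16
"""

def smart_raw_values(text):
    """Map attribute name -> raw value for the sector-health attributes."""
    watch = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
             "Offline_Uncorrectable"}
    out = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 10 and parts[1] in watch:
            out[parts[1]] = int(parts[9])  # RAW_VALUE column
    return out

print(smart_raw_values(SAMPLE))
# {'Reallocated_Sector_Ct': 4512, 'Current_Pending_Sector': 16,
#  'Offline_Uncorrectable': 16}
```

Attribute 5 (reallocated sectors) is the remap count; 197/198 are sectors still waiting to be remapped or given up on.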

Having suffered water damage in the home, I dearly wish water pipes had a similar capability!


Btw, how old is this drive? How many hours of use does it have? Does it exhibit any clicking noise, or do you have a way to see or hear whether it randomly spins down and up?

If you hear any clicking or hear excessive unexplained spin up/downs, seek to replace the drive immediately.
4512 bad blocks today
46331 hours running time
It makes no specific noise, the disk is from 2017
 

From 80 (the first report is basically irrelevant, as it is usually undercounted and unrepresentative) to 4232 over 3 months, then to 4336 (an increase of 104 in, say, a few days?), then 4512 (an increase of 176 in a couple of days). So the trend over 3 data points is somewhat accelerating.
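The arithmetic on those reports is easy to check; a small sketch using the counts quoted above (the dates are only approximate, per the post):

```python
# Bad-block readings from the thread, in report order.
counts = [4232, 4336, 4512]  # the initial "80" is discarded as unrepresentative

# Increase between consecutive reports.
deltas = [b - a for a, b in zip(counts, counts[1:])]
print(deltas)  # [104, 176]

# Growth is "accelerating" when each delta exceeds the previous one.
accelerating = all(b > a for a, b in zip(deltas, deltas[1:]))
print(accelerating)  # True
```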

The good news is that the bad block count is increasing very little. The bad news is that the rate of increase, i.e. the curve, is going up, and it’s a pretty old drive. Your drive runtime is about 5.3 years: not ancient, but not new either. Given its age, I would definitely have a new drive on hand. I think most people would swap the drive at this point, and that’s the reasonable action.

If the rest of the RAID 5 array is running OK, I personally would wait a couple more days to see if the rate of increase dies down a bit. But I have a higher risk tolerance than most.

The other good news is the system seems to be working well and you should have plenty of time to get a new drive for replacement. I don’t think the drive is in danger of failing imminently. At the rate it’s going including acceleration, I think you’d have over a week before things get dire, if the bad block count doesn’t decelerate.

For now, you are still dealing with very, very low bad block counts, which is good. But given how old the drive is, and that the bad block count seems to be accelerating rather than decelerating, I would err on the side of caution and replace. Others might tempt fate and wait for the count to get bad in absolute terms, say 20,000 bad blocks, but that’s beyond even my tolerance for risk. For me, if the next report shows an increase substantially above the last jump of 176 blocks (say, an increase of 1,000 blocks), or the total count goes over 10,000, I would replace.
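That rule of thumb can be sketched in a few lines; note the thresholds are my personal comfort levels, not a vendor recommendation:

```python
def should_replace(prev_count, new_count, jump_limit=1000, total_limit=10_000):
    """One poster's rule of thumb, not a vendor recommendation:
    replace when a single report jumps by more than jump_limit blocks,
    or when the running total passes total_limit."""
    return (new_count - prev_count) > jump_limit or new_count > total_limit

print(should_replace(4336, 4512))  # False: +176, still under 10,000
print(should_replace(4512, 5600))  # True: a jump of more than 1,000 blocks
```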

Have you gotten the new drive replacement yet?
 
I have a new one ordered, arriving tomorrow or Tuesday. I returned the Amazon disk to the seller because it was refurbished and carried only a 4-month warranty from Seagate.
 
Thought this was interesting and related:


“Since December 2022, Proxmox VE has sent around 497 notification e-mails about this drive. ZFS Scrub events, Pending sectors, increases in offline uncorrectable sectors, and error count increases all triggered SMART data notifications. Still, the drive continued on in its ZFS RAID 1 mirror.

Of course, without additional layers of redundancy, we would not have let this drive sit in a degraded state for almost fifteen months. It was also just a fun one to watch and see what would happen. In the end, this shucked 8TB WD drive lasted 6.83 years in service before being retired. It will have contributed to multiple STH articles during its lifespan.”

Like I said, it’s more about the trajectory of the bad blocks over time. They tend to settle down, and the drives can continue to be used for quite some time. Of the 6 of my drives that reported bad blocks, 4 settled down and have been in use for 3 years since those reports, with no additional bad blocks. The trajectory is key. But as always, YMMV.
 
FWIW... Synology says...
The operating system will identify and mark bad sectors on a drive to skip them in the future. You can continue to use the drive as long as it is in a healthy status.
 
Thanks! Today I got another message from the Syno that the bad sectors have been rising, but the disk is still in healthy status. I have the spare disk at hand. I can wait to replace it, but what happens if another one suddenly goes bad as well? It’s a 4-disk RAID 5.
5424 bad blocks today, so it keeps going. Replace now?
 

Wow that went up about 900 more bad blocks since last time. That rate is accelerating. No bueno.

If another drive dies, you lose all your data. (Hopefully you have a backup—you should and you should make sure it’s up to date).

You could probably wait for one more bad report if you are morbidly curious (and your data is fully backed up). But the rate of increase in reported bad blocks is growing, which is neither a good nor an encouraging sign. If you do not have a full, good backup and you are not confident in the remaining three drives, you should replace this drive now.

If you are confident in the other three drives and have a backup, and you are morbidly curious to see whether the drive dies, I would be curious to see its progression too.

I personally have “run this experiment” several times because I find it educational to see how these drives perform or die in these edge cases. But I always had a full (multiple) and timely backup and didn’t mind doing a full restore.

If this is important, at-risk data, you should not “run the experiment”; move to secure the data now.

If the rate of increase continues as is, your next bad block report would put you near 10,000 bad blocks. Still within tolerance of likely spares, but I start to get nervous when I go above that.

Anyway, thanks for keeping us up to date and good luck with how it turns out!
 
I have replaced the drive, and the pool is now repairing itself with the new disk. I tested the old disk and the bad-block count is huge, so replacing it was the right call.
 