4512 bad blocks today
I can’t speak on leaks, but I see your analogy. I have no statistics on this and you may well be right. It might be interesting to see whether Backblaze or anyone else keeps statistics on bad block growth; as far as I know they only publish total drive failure statistics.
That said, my totally unscientific stats: I’ve had 6 drives out of probably 50 report bad blocks. 4 of them stabilized after several months and have gone on to be just fine (they are now over 7 years old and still running). For the other 2, the block counts kept increasing linearly for longer than I was comfortable with, so I pulled them. I can’t tell whether they would have failed or stabilized, but after the 7th report (I think over about 6 months) with the counts still climbing, I had had enough and replaced them.
Of course that is highly anecdotal and statistically insignificant, just my experience. But it does show that drives are not like leaks, at least in some cases, and can “fix themselves”; that is the entire point of modern drives shipping with spare blocks and having built-in functions to remap bad blocks (i.e. to fix their own leaks).
Having suffered water damage in the home, I dearly wish water pipes had a similar capability!
Btw, how old is this drive, and how many hours of use does it have? Does it exhibit any clicking noise, or do you have a way to see or hear whether it randomly spins down and spins up?
If you hear any clicking, or notice excessive unexplained spin-ups and spin-downs, replace the drive immediately.
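If you want to pull those numbers yourself rather than wait for the next DSM report, the raw SMART counters can be read over SSH. Here is a minimal sketch, assuming smartmontools and Python 3 are available and the drive is /dev/sda; the device path, and whether your particular drive exposes these attribute IDs, are assumptions you’d have to check on your own box:
[CODE]
import subprocess

# SMART attributes worth watching on a drive that is growing bad blocks:
#   5  Reallocated_Sector_Ct     9  Power_On_Hours
# 197  Current_Pending_Sector  198  Offline_Uncorrectable
WATCHED = {5: "Reallocated_Sector_Ct", 9: "Power_On_Hours",
           197: "Current_Pending_Sector", 198: "Offline_Uncorrectable"}

def smart_raw_values(device="/dev/sda"):
    """Return {attribute_id: raw_value} parsed from `smartctl -A <device>`."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    values = {}
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows look like:
        # ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(fields) >= 10 and fields[0].isdigit() and int(fields[0]) in WATCHED:
            attr_id = int(fields[0])
            raw = fields[9]  # first token of RAW_VALUE is the count itself
            values[attr_id] = int(raw) if raw.isdigit() else raw
    return values

if __name__ == "__main__":
    for attr_id, raw in sorted(smart_raw_values().items()):
        print(f"{attr_id:>3} {WATCHED[attr_id]:<24} {raw}")
[/CODE]
Attribute 9 gives the power-on hours, 5 counts sectors already remapped to spares, and 197/198 count the sectors the drive is still worried about; logging these once a day is what gives you the growth curve discussed below.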
4512 bad blocks today
46331 hours running time
It makes no specific noise, the disk is from 2017
I have a new one ordered, which is arriving tomorrow or Tuesday. The Amazon disk I returned to the seller because it was refurbished and carried only a 4-month guarantee from Seagate.
So it has gone from 80 (the first report is basically irrelevant, as it is usually undercounted and unrepresentative) to 4232 over 3 months, then to 4336 (an increase of 104 in, say, a few days?), then to 4512 (an increase of 176 in, say, a couple of days). The graph over those 3 data points is somewhat accelerating.
The good news is that the bad block count is increasing very little in absolute terms. The bad news is that the rate of increase, i.e. the curve, is going up, and it’s a fairly old drive. Considering its age, I would definitely have a new drive on hand. Your drive’s runtime is about 5.3 years: not ancient, but not new either. I think most people would swap the drive at this point, and that’s the reasonable action.
If the rest of the RAID 5 array is running OK, I personally would wait a couple more days to see if the rate of increase dies down a bit. But I have a higher risk tolerance than most.
The other good news is that the system seems to be working well, so you should have plenty of time to get a replacement drive. I don’t think the drive is in danger of failing imminently. At the rate it’s going, including the acceleration, I think you’d have over a week before things get dire, even if the bad block count doesn’t decelerate.
For now, you are still dealing with very low bad block counts, which is good. But given how old the drive is, and that the bad block count seems to be accelerating rather than decelerating, I would err on the side of caution and replace it. Others might tempt fate and wait for the count to get bad in absolute terms, say up to 20,000 bad blocks, but that’s beyond even my tolerance for risk. For me, if the next report showed the increase jumping substantially beyond the last 176 blocks (say an increase of 1,000 blocks), or the total count going over 10,000, I would replace.
Have you gotten the new drive replacement yet?
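To make the “is it accelerating?” call less of a gut feeling, you can log each report and compute blocks per day between reports. A rough sketch of that bookkeeping is below; the counts are the ones from this thread, but the dates are placeholders I made up purely for illustration, and the 1,000-block jump / 10,000-total thresholds are just the rule of thumb from my post above:
[CODE]
from datetime import date

# (report date, bad block count). The counts come from this thread;
# the dates are made-up placeholders for illustration only.
reports = [
    (date(2024, 1, 1), 80),
    (date(2024, 4, 1), 4232),
    (date(2024, 4, 5), 4336),
    (date(2024, 4, 8), 4512),
]

# Rule-of-thumb thresholds from the post above.
MAX_JUMP = 1000    # replace if a single report jumps by more than this
MAX_TOTAL = 10000  # replace if the total count goes over this

for (d0, c0), (d1, c1) in zip(reports, reports[1:]):
    days = max((d1 - d0).days, 1)
    delta = c1 - c0
    print(f"{d1}: +{delta} blocks over {days} day(s) -> {delta / days:.1f} blocks/day")

latest_delta = reports[-1][1] - reports[-2][1]
if latest_delta > MAX_JUMP or reports[-1][1] > MAX_TOTAL:
    print("Past the comfort threshold -- replace the drive.")
else:
    print("Below the threshold -- keep watching, but have a spare on hand.")
[/CODE]
If the blocks-per-day figure keeps climbing from one interval to the next, the failure is accelerating and it’s time to swap the drive rather than wait it out.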
“Since December 2022, Proxmox VE has sent around 497 notification e-mails about this drive. ZFS Scrub events, Pending sectors, increases in offline uncorrectable sectors, and error count increases all triggered SMART data notifications. Still, the drive continued on in its ZFS RAID 1 mirror.
…
Of course, without additional layers of redundancy, we would not have let this drive sit in a degraded state for almost fifteen months. It was also just a fun one to watch and see what would happen. In the end, this shucked 8TB WD drive lasted 6.83 years in service before being retired. It will have contributed to multiple STH articles during its lifespan.”
Thanks! Today I got another message from the Syno saying the bad sectors have been rising, but the disk is still reported as healthy. I have the spare disk at hand. I can wait to replace it, but what happens if another disk suddenly goes bad as well? It’s a 4-disk RAID 5.
FWIW... Synology says...
5424 bad blocks today, so it keeps going. Replace now?