Failing #1 Drive

Hello all, I have run into an issue with my DS1815+. The number 1 drive is failing, and it just happens to be the youngest drive in the pool too. :mad: I installed it in February, I think. My question is: I recall, way back when my DS1511's #1 drive failed, that the OS is on that drive and it isn't plug-and-play like the rest of the drives are, so I'm seeking advice on the best practice for replacing it. A Google search turned up nothing specific to replacing drive 1. Maybe I'm overthinking it?

My second question: I won't be able to swap this drive out until the 10th at the earliest. I have my IP cams recording to a separate pool on the same NAS. Is there any danger in leaving it alone until then? What should I do to prepare for a total failure? Shut off 2FA? Anything else?

And lastly, after reading a bit, apparently there's been an issue with these newer drives? I was unaware of this, but I did notice that the two newest drives (bays 1 and 2), WD WD101EFAX-68LDBN0, are running 15-20 degrees hotter than the rest of the group (WD100EFAX-68LHPNO).

Thanks in advance for any assistance.
There's no panic to replace a failing drive... presuming redundancy is in place. I run drives to failure (with a cold spare on the shelf).

Repair is straightforward... presuming redundancy: replace the failed drive with a good one, then "repair".

If you're replacing the drive before it fails outright, it's good practice on btrfs-formatted volumes to run a data scrub just beforehand.
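For anyone curious, the supported route for a scrub is Storage Manager in the DSM GUI, but if you're comfortable with SSH it can also be driven from the command line. A minimal sketch, assuming btrfs-progs is present and /volume1 is the usual DSM mount point (adjust for your volume):

```shell
# Start a data scrub on the btrfs volume (reads all data, verifies checksums)
sudo btrfs scrub start /volume1

# Check progress and whether any checksum errors were found/corrected
sudo btrfs scrub status /volume1
```

A scrub on a large volume can take many hours, so kick it off well before the planned swap.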
A DSM system partition is created on all disks; it's part of the storage 'overhead' you lose from each drive. You should be able to replace any disk in a data-protected storage pool (i.e., not pools 'without data protection', such as Basic, RAID 0, JBOD, or SHR with 0 disks of protection) and the NAS should keep working and rebuild the storage pool.
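You can see this for yourself over SSH: the DSM system partition is a small RAID 1 mirror spanning every drive, which is why any single bay, including bay 1, can be replaced. A sketch, assuming the typical DSM layout where md0 is the system mirror (device names can vary by model):

```shell
# List all software RAID arrays; md0 (system) and md1 (swap) usually span
# every disk, while higher-numbered md devices hold your data pools
cat /proc/mdstat

# Show the members and health of the system mirror
sudo mdadm --detail /dev/md0
```

If md0 shows a member on every disk, the OS survives losing any one of them.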

There is another point, which you didn't mention, relating to where packages install their applications and data. When you have only one volume, i.e. volume1, Package Center installs all packages there, not in the DSM partition on all disks. Once you add another volume you get the option to choose where new packages are installed, but there is no GUI way to move already-installed packages between volumes. If you've added more disks, created a new storage pool, set up a volume on it, and then want to remove the original volume1 [, its storage pool, or its drives], it becomes a pain: those packages can't be migrated officially without an HB backup and restore to the new volume. There are unofficial SSH command-line ways, but I haven't used them.

Maybe you were remembering volume1 and not disk [bay] 1?

Running with a failing disk means there's either less or no data protection for the affected storage pool. DSM should still run, but it's likely that the Surveillance Station package is installed on and runs from volume1 (the at-risk volume?) even if the recordings etc. go to a separate volume.

Doing a reset via the pin-hole performs various tasks depending on the type of reset, such as resetting the network configuration and resetting passwords (2FA will be disabled).

As for the WD Red drives: it seems your originals are the older WD Red 10TB, and the new drives you purchased are the same drive now branded WD Red Plus 10TB. WD now has a smaller range called plain 'Red' that uses SMR technology to read/write data to the platters. The older generation called 'Red' are now called 'Red Plus' and continue to use CMR technology. Not sure why your newer drives run hotter.
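If you want to confirm exactly which model each bay holds and watch those temperatures, smartctl is available over SSH on DSM. A hedged sketch; the device name is an assumption (depending on model it may be /dev/sda, /dev/sdb, ... or /dev/sata1, /dev/sata2, ...):

```shell
# Identify the drive in a given bay (model string shows EFAX variant, firmware, etc.)
sudo smartctl -i /dev/sata1 | grep -i 'model'

# Read the SMART attributes and pull out the temperature (usually attribute 194)
sudo smartctl -A /dev/sata1 | grep -i 'temperature'
```

Repeat for each device node to compare the old and new drives side by side.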
fredbert, wow, thanks for all that info! I should've mentioned I'm using SHR with one-disk fault tolerance on Pool/Volume 1. I'm only running Basic on Pool/Volume 2, with one hard drive recording IP cameras from a Vivotek/Vast 2 server. I was completely unaware of how packages are installed when there are two volumes; I must have overlooked that when creating the new volume. That is good info. I'm also using my old ds1511 as a full backup via HB, so I'm not really worried about losing data. More worried about losing time if I have to completely reconfigure the ds1815. I do have the config backed up too. I was just thinking drive 1 was different from the other drives because the OS is on it, and that I might do something to cause a complete "re-do". Like I said, maybe I'm overthinking it. I've already ordered two older-model HDs. Not impressed with the new ones at all!
Thanks all. I'm feeling a little better now. Telos, I'm planning on switching to btrfs on my next upgrade. I've been eyeing the ds1821+. Maybe I'll score one and some 16TB HDs for a great deal sometime in the near future. The ds1815 is doing well for the most part, so I'm not in any rush. Now if the ten-year-old ds1511 goes to NAS heaven, I'll get more serious. :oops:
