My story of downgrading from 5x4TB HDD SHR2 to 4x6TB HDD SHR2 without rebuilding/starting over

well, where do i start besides at the beginning of the idea. sorry for the long lead-in to the actual details; some might like the backstory, others can skip to the TLDR section.

primary nas ds1819+ - bought in oct-2020
secondary nas ds1815+ - bought in jan-2015

the idea really started when i looked at the uptime of the hdds in my secondary nas (ds1815+). they were sitting around 47k to 53k power-on hours, which is about right since i bought the ds1815+ back in 2015. i had bought 5x wd purple drives (5400 rpm) since they were rated for 24/7 use (and really cheap) and i was only running a 1gbe network at the time. i've only had 1 purple drive die in 5+ yrs; i replaced it in 2019 with a 1x wd red 3tb (5400 rpm) drive, which led me to replace the remaining 4x3tb purple drives shortly after with 4x3tb wd red drives (5400 rpm). the next part of the plan is to move the old 4tb drives down to the ds1815+ from the ds1819+.

i recently upgraded the home network in 2020 with unifi (moving from the synology mesh system, which had replaced an orbi setup; both of those systems are great at their price points, the orbi around $350 a few years back, the synology around $500 for the main unit and 2 satellites). the unifi setup is now a mix of a 500/20 cable modem isp connection, 1gbe gear and some 10gbe devices (the ds1819+, a dedicated emby pc/server) plus my main pc. (if you want a good cheap 10gbe card, look into the qnap one: they have drivers for a lot of operating systems outside of qnap os, unlike the synology nic cards, and the qnap cards are really cheap and, in my opinion, better than the asus ones.)

when i set up an iperf server in a docker container and did some transfer testing, i could only average about 300-500, which i deduced was down to the 5400 rpm hdds. not really a bad thing for what i'm using the primary ds1819+ for, but it wouldn't hurt to upgrade to 7200 rpm drives. the only bad part was that while doing the research i realized the nas hdd market had completely changed from 5 yrs ago due to the SMR vs CMR issue.
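to see why the spinning disks, and not the 10gbe link, end up being the ceiling for single-stream transfers, here's a rough back-of-envelope sketch. the per-drive rate is my own assumption for a typical 5400 rpm nas drive, not a measured spec for any particular model:

```python
# rough back-of-envelope: can one 5400 rpm drive fill a 10GbE pipe?
# the per-drive rate below is an assumed typical sequential speed,
# not a measured spec for any particular WD model.
GBE10_MB_S = 10_000 / 8      # 10 Gb/s link -> 1250 MB/s raw
HDD_5400_MB_S = 130          # assumed mid-range 5400 rpm sequential rate

def link_utilisation(drive_mb_s: float, link_mb_s: float = GBE10_MB_S) -> float:
    """Fraction of the link a single sequential stream can fill."""
    return min(drive_mb_s, link_mb_s) / link_mb_s

print(f"10GbE ceiling: {GBE10_MB_S:.0f} MB/s")
print(f"one 5400 rpm drive fills ~{link_utilisation(HDD_5400_MB_S):.0%} of it")
```

a single slow drive only fills a small fraction of the pipe, which is why multi-drive arrays and/or 7200 rpm disks matter once you go past 1gbe.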

after a few months of on-and-off research it all boiled down to either wd red pro or wd gold drives. it was a juggle deciding whether the gold's 2.5m hrs mtbf and 550tb/yr workload rating was worth the cost (gold had eclipsed the price of red pro by now, compared to last year when red pro cost more than gold), or whether to settle for the red pro's 1m hrs mtbf and 300tb/yr workload rating. in the end i stumbled across a double newegg sale on wd red pro at $175, down from $195, down from $229, ending in a few days. after hemming and hawing for a couple of days i finally broke down and hit buy on a cart of 5x6tb drives at $175 each. unknown to me, it was a one-time-use $20-off-per-drive-in-your-cart deal, so when i went back to buy a 6th drive as a cold spare the deal was gone and the drives shot back up to $195 each, which led me to my unique situation.
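for anyone weighing the same decision, the sale math works out like this (just cost per raw TB of the prices mentioned above, nothing fancy):

```python
# cost-per-TB comparison for the prices mentioned in the story
def cost_per_tb(price_usd: float, capacity_tb: float) -> float:
    """Dollars per raw terabyte; no redundancy overhead factored in."""
    return price_usd / capacity_tb

for label, price in [("double sale", 175), ("single sale", 195), ("list", 229)]:
    print(f"6TB red pro @ ${price}: ${cost_per_tb(price, 6):.2f}/TB ({label})")
```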

i researched how to slim down my current config from 5x4tb hdds (plus 1 cold spare/6th hdd) to 4x6tb (plus 1 cold spare, hence the above issue of not being able to buy a 6th $175 hdd). across all the reddit threads, synology forums and the synoforums, i came to the conclusion that i couldn't slim everything down without one of the following:

1. wipe everything, restart with only 4 hdds and restore the data from the backups

not looking forward to this, as i have a lot of config invested in the current setup: the unifi network vlans (main, iot and guest portal), external access control for the emby server and unifi controller access, snapshots, hyper backup to the ds1815+, and pc backups via synology active backup for business.

2. replace one 4tb hdd with a 6tb drive and rebuild the shr2 array 5 times over (which would tax the heck out of the array, so i didn't want to do that unless it was a last resort)

3. ask someone if the following could be done...

use the 3 empty bays on the ds1819+, build a 3x6tb shr storage pool/volume, move everything to that, and then add a 4th 6tb hdd to convert the shr to shr2. one thing that did come up was how i would do this if dsm was on the bay 1 hdd, since that was how i built the system from day one: installed 1 hdd, loaded dsm onto it, then added 2 other hdds to build an shr1 array (i later added the 4th and 5th hdds down the road to convert it to shr2).
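for anyone checking the math on option 3: with equal-size drives, shr is effectively raid5 and shr2 is effectively raid6, so a quick sketch (my own helper, not a synology tool; mixed drive sizes need synology's raid calculator) shows the temporary 3x6tb shr pool has the same usable space as the old 5x4tb shr2 pool, which is why the data fits during the move:

```python
# usable-capacity sketch for equal-size drives, where SHR behaves like
# RAID5 (1-drive redundancy) and SHR2 like RAID6 (2-drive redundancy).
# mixed drive sizes are more involved; this covers only the simple case.
def shr_usable_tb(n_drives: int, size_tb: float, redundancy: int = 1) -> float:
    """Usable TB for an all-same-size SHR (redundancy=1) or SHR2 (=2) pool."""
    if n_drives <= redundancy:
        raise ValueError("not enough drives for that redundancy level")
    return (n_drives - redundancy) * size_tb

print(shr_usable_tb(5, 4, redundancy=2))  # old pool:  5x4TB SHR2 -> 12.0 TB
print(shr_usable_tb(3, 6, redundancy=1))  # temp pool: 3x6TB SHR  -> 12.0 TB
print(shr_usable_tb(4, 6, redundancy=2))  # final:     4x6TB SHR2 -> 12.0 TB
```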

so i posted a thread on the synoforums with my question, and a very helpful person replied with the exact answer i was hoping for ...

ds1819+ move volume1 to volume2 question -- many thanks to user "EAZ1964" for the reply.

now that i had the info i needed, i just had to wait for the newegg shipment to arrive and start the process detailed in the next section.

TLDR the junk story
-------------------
01. hdds arrived 3pm 25-feb-21
02. minor admin stuff (record s/n and warranty expiration date in excel tracker)
03. installed 2x6tb into bays 1 & 2
04. shut down and uninstalled non-critical apps (snapshots, hyper backup jobs, antivirus; everything except pi-hole)
05. went into storage manager > storage pool > configuration and set the raid resync speed to custom 600/500 (this speeds up parity checks, rebuilds, etc. a lot, to the point it only took about ~32hrs for the ds1819+ to convert from shr to shr2 in phase 2)
06. shut down the emby server (the only major device that might impede the resync; everything else just needed access to pi-hole)
07. verified backups were all good (i have 2 external usb3 hdds i save backups to, on top of the ds1815+) and disconnected them to ensure no corruption from the ds1819+
08. went into storage manager and created storage pool 2 with the 2x6tb hdds from bays 1 & 2 (why i did this instead of also adding the 3rd hdd from bay 3, i have no idea; total brain fart)
09. waited about 18hrs for this parity check to finish (it would have been a few days if i had left the resync speed at its defaults)
10. built volume 2 as shr1 with the 2x6tb
11. here is the stupid part: added the 3rd 6tb hdd to storage pool 2/volume 2, which kicked off another ~8hrs of parity checking that probably would have been less if i had just done it in step 08
12. parity checking all complete, 3x6tb in shr1
13. started moving the biggest shared folder from volume 1 to volume 2; 2.5tb took about 5-6hrs (started before bed at 12am, woke up around 6am and it was almost done)
14. moved the other shared folders over the course of about 4-5hrs (total moved was about 4tb of data)
15. shut down docker/pi-hole, moved the docker folder, started docker/pi-hole and it immediately crash-looped
16. looked into the log files; the config was still pointing at volume 1 even though the folder was now on volume 2
17. tried re-doing the pi-hole setup, still crashed. blew out the container, then the image, and tried to re-download the image from the repository, got "failed to query server" ??? wtf
18. removed docker and deleted the docker shared folder to re-install from scratch, as i didn't have time to figure out what to change so pi-hole would look at volume 2 instead of volume 1 (this was stupid, since the whole point of storing pi-hole settings in the docker folder is so you can rebuild without losing all the pi-hole configs)
19. realized pi-hole itself was the problem: i have my unifi network force all dns lookups to the pi-hole server on the synology nas, and pi-hole wasn't running since it was looking for its brains on volume 1 instead of volume 2
20. changed the setting in unifi network to point dns lookups at 1.1.1.1; docker still couldn't find the repository. hmmm, stumped. walked away, took a smoke break, and a brick upside the stupid noggin fixed the issue
21. realized that i had manually set the "preferred dns server" in the synology's network settings to point at the pi-hole server. added 1.1.1.1 as the alternative server and could now get docker to query the repository (i removed this alternative dns server later)
22. created the docker folder on volume 2
23. re-installed pi-hole; back up and running
24. after all folders were moved to volume 2 on storage pool 2, verified all shares were accessible and the folders looked right
25. deleted volume 1, then storage pool 1 (surprisingly this was done in under 20 secs for both)
26. added 4th 6tb drive to ds1819+
27. changed the resync speed back to the middle check box (faster resync), no longer set to custom
28. went into storage manager and started the conversion from shr1 to shr2 with the 4th 6tb drive; the parity check is going slower, but i'm not too concerned with this part as i now have a working volume 2
29. i may bump the resync speed back up to 600/500 before i go to bed and then change it back once i wake up in the morning
30. pulled bays 5-8 from ds1819+ (4tb red drives)
31. installed 3x4tb hdd into ds1815+
32. opened storage manager on the ds1815+ and created volume 2 using the 3x4tb drives (not making that mistake twice: last time i created storage pool 2 first and then volume 2, which is backwards and makes it 2 steps; creating the volume first builds the storage pool from the 3 hdds automatically and saves a lot of time)
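the move speeds in steps 13-14 line up with roughly single-drive sequential throughput. a quick bit of hedged arithmetic on the reported numbers (decimal units assumed, 1 TB = 1,000,000 MB; my own helper, nothing synology-specific):

```python
# sanity check on the reported move speed: 2.5TB in roughly 5-6 hours.
# decimal units assumed (1 TB = 1,000,000 MB).
def avg_rate_mb_s(tb_moved: float, hours: float) -> float:
    """Average transfer rate in MB/s for a move of tb_moved TB over hours."""
    return tb_moved * 1_000_000 / (hours * 3600)

rate = avg_rate_mb_s(2.5, 5.5)               # ~126 MB/s, about one-drive speed
print(f"observed average: {rate:.0f} MB/s")
print(f"4TB total at that rate: {4_000_000 / rate / 3600:.1f} hours")
```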

now that everything is in phase 2 of the rebuild, i'm happy i was able to downgrade from a 5-hdd shr2 to a 4-hdd shr2 setup and keep 4 bays open for doing this process again if i run out of space and need to upgrade the 6tb drives to 8tb drives and move to a new volume. since i had a lopsided bay allocation i had to hodgepodge an shr1 setup first instead of just building a new 4-hdd shr2 volume from the get-go, which probably would have saved about a day's worth of waiting around.

anyway, that's the end of the story. hope you folks liked it, and hopefully it will help someone who runs into this same issue or something along these lines.

thanks for everyone's help in getting to this nirvana.

attached is a pic of my 2 synology nas units; the top is the ds1819+ and the bottom is the ds1815+, which is in the process of building volume 2 / storage pool 2.

edit 1 : fixed typos and spelling.
 

