Hi all,
Long time lurker, first time poster! Hoping to get some advice on how best to transition my current NAS.
A bit of context on my setup:
- I recently purchased a server rack and am looking to move everything over to be rack-mounted. The rack itself is 22” deep.
- I currently have a Synology DS3622xs+ in an SHR2 configuration (previously migrated from a DS1819+). Current capacity is 164 TB, with around 120 TB used. All drives are 18 TB.
- I’m looking to migrate to a 24-bay rack-mounted NAS, both to increase my total storage and to clean up my rack a bit (the DS3622 currently sits on top of the rack).
However, the best path forward is a bit tricky given that my current volume is configured as SHR2. If at all possible, I’m hoping to find a way to migrate the SHR2 volume without having to completely wipe all 12 drives and build a new RAID array from scratch.
So, with all that in mind, my question is the following: what is/isn’t possible when it comes to accessing an SHR2 volume from a non-Synology device? I think there are 3 possible scenarios:
- Storage array is essentially RAID 6, and can be used/expanded as RAID 6 in a new machine – if I understand correctly, an SHR2 volume where all drives are the same size is laid out the same way as a standard RAID 6 array. Is that correct? If it is, would I be able to swap the drives into a non-Synology machine and have the RAID not only recognized, but also expandable?
- One additional wrinkle: the drives weren’t always all the same size. I’ve included the output of pvs, lvs, and cat /proc/mdstat at the end of this post in case it’s helpful.
- Storage array can be mounted, but not expanded – I could also imagine a scenario where moving the drives to a new machine gets them recognized as RAID 6, BUT, due to how SHR2 works, I would lose the ability to expand the array. Not the end of the world, as I could presumably keep the old Synology volume around as-is and create a new array/volume for future expansion.
- Storage array can be recovered (e.g., using this guide + Ubuntu 18.04), but not mounted long-term – my suspicion is that this is the most likely scenario. If that’s the case, I will probably end up buying some new drives, creating a new RAID array in the new chassis, copying the data over from the Synology (while the old drives are still in the DS3622), formatting the old drives, and then adding them to the rack chassis to expand the new array further. (A rough sketch of what the recover/mount route looks like is below, after this list.)
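For reference, here is my rough sketch of what the “mount it on a non-Synology box” route would look like in practice. It assumes a Debian/Ubuntu-style live environment; the array, volume group, and volume names (md2–md6, vg1, volume_2) come from the outputs at the end of this post, /mnt/volume_2 is just a mount point I made up, and the filesystem is whatever DSM originally created (btrfs or ext4). I also assume I’d want to remove the SSD cache in DSM first, since alloc_cache_1 in the lvs output suggests a cache is bound to the volume. So treat this as an outline rather than a tested procedure:

Bash:
# Tooling for Linux software RAID and LVM (package names assume Debian/Ubuntu).
sudo apt-get update
sudo apt-get install -y mdadm lvm2

# Assemble every md array found on the attached disks.
sudo mdadm --assemble --scan

# Activate the volume group that spans md2-md6 and confirm volume_2 shows up.
sudo vgchange -ay vg1
sudo lvs vg1

# Mount the data volume read-only first; mount should autodetect btrfs vs ext4.
sudo mkdir -p /mnt/volume_2
sudo mount -o ro /dev/vg1/volume_2 /mnt/volume_2

If something like that mounts cleanly, then at minimum scenario 2 seems plausible. Whether the array stays expandable is a separate question, since (as I understand it) SHR2 grows by adding more md arrays under LVM rather than reshaping a single RAID 6.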
Am I thinking about the options here correctly? Is there anything I’m missing or not thinking of? Any advice would be greatly appreciated!
Cheers
LVS Output:
Bash:
  LV                    VG               Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  alloc_cache_1         shared_cache_vg1 -wi-ao---- 750.00g
  syno_vg_reserved_area shared_cache_vg1 -wi-a-----  12.00m
  syno_vg_reserved_area vg1              -wi-a-----  12.00m
  volume_2              vg1              -wi-ao---- 163.67t
PVS Output:
Bash:
  PV       VG               Fmt  Attr PSize   PFree
  /dev/md2 vg1              lvm2 a--    9.10t       0
  /dev/md3 vg1              lvm2 a--   54.57t       0
  /dev/md4 vg1              lvm2 a--   27.25t       0
  /dev/md5 vg1              lvm2 a--   54.57t       0
  /dev/md6 vg1              lvm2 a--   18.18t 548.00m
  /dev/md7 shared_cache_vg1 lvm2 a--  931.51g 181.50g
cat /proc/mdstat Output:
Note: data scrubbing is currently running on the storage pool
Bash:
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md6 : active raid6 sdc9[11] sdd9[0] sdi9[7] sdj9[8] sdk9[9] sdl9[10] sdh9[6] sdg9[5] sdf9[4] sde9[3] sdb9[2] sda9[1]
      19524426240 blocks super 1.2 level 6, 64k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]

md3 : active raid6 sdc7[14] sde7[6] sdi7[10] sdj7[11] sdk7[12] sdl7[13] sdh7[9] sdg7[8] sdb7[5] sda7[4] sdd7[3] sdf7[7]
      58594024320 blocks super 1.2 level 6, 64k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]
      [===================>.]  resync = 95.3% (5585611116/5859402432) finish=39.9min speed=114327K/sec

md5 : active raid6 sdc8[12] sdi8[8] sdj8[9] sdk8[10] sdl8[11] sdh8[7] sdg8[6] sdf8[5] sde8[4] sdb8[3] sda8[2] sdd8[1]
      58594345600 blocks super 1.2 level 6, 64k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]

md2 : active raid6 sdc6[16] sda6[6] sdi6[12] sdj6[13] sdk6[14] sdl6[15] sdh6[11] sdg6[10] sdd6[5] sdf6[9] sde6[8] sdb6[7]
      9767429120 blocks super 1.2 level 6, 64k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]

md4 : active raid6 sdc5[18] sdi5[14] sdj5[15] sdk5[16] sdl5[17] sdh5[13] sdg5[12] sde5[10] sdf5[11] sdb5[9] sda5[8] sdd5[7]
      29254354560 blocks super 1.2 level 6, 64k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]

md7 : active raid1 nvme1n1p1[0] nvme0n1p1[1]
      976757952 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdc2[2] sda2[0] sdj2[11] sdk2[10] sdl2[9] sdi2[8] sdh2[7] sdg2[6] sdf2[5] sde2[4] sdd2[3] sdb2[1]
      2097088 blocks [12/12] [UUUUUUUUUUUU]

md0 : active raid1 sdc1[2] sda1[0] sdj1[11] sdk1[10] sdl1[9] sdi1[8] sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdb1[1]
      2490176 blocks [12/12] [UUUUUUUUUUUU]
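For completeness, here is the quick sanity check I was planning to run over SSH on the DS3622 (and again later on the new machine) to confirm the data arrays are standard md RAID 6 with ordinary 1.2 superblocks; the md device and member partition names below are just examples taken from the mdstat output above:

Bash:
# Array-level view of one of the data arrays: raid level, chunk size, member count.
sudo mdadm --detail /dev/md3

# Superblock on one member partition; "Version : 1.2" and "Raid Level : raid6"
# are what a generic Linux mdadm would assemble from.
sudo mdadm --examine /dev/sda7

# How LVM stitches the assembled arrays together into volume_2.
sudo pvs
sudo lvdisplay /dev/vg1/volume_2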