Slow Raid10 read speed DS1621+

Do you have any insight into a comparison between RAID 6 and SHR 2? From what I've heard, SHR 2 is a custom solution, while RAID 6 is a "standard" type of RAID, which should be more easily recoverable in case of any issues.
I am mostly interested in a performance and reliability comparison between the two.

SHR2 is not really a custom RAID, just an appliqué on top of standard Linux software RAID*. The satisfying thing is that Synology ironed out some underlying issues with standard RAID, giving themselves a cleaner sheet to work with. As such, SHR/SHR2 arguably surpasses the level of trust of standard RAID. It is much more than a feature for running dissimilar drive sizes (which, like you, I don't make use of, but it's nice to have I guess).

It's pretty similar with Btrfs, which on a Synology isn't actually the genuine (and rather wobbly) Btrfs at all. It is another appliqué that provides an ingenious integrated layer. Synology Btrfs + SHR + snapshots makes a system that is considerably greater and more reliable than the base (and rather old) technologies.
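
If you're curious, you can see that layering for yourself over SSH. A rough sketch (assuming a default DSM install with an SHR/Btrfs pool; device and volume-group names can differ between units and pool types):

cat /proc/mdstat                  # the standard Linux md arrays SHR builds from the drive partitions
sudo pvs && sudo lvs              # the LVM layer SHR uses to glue the md devices into one pool
sudo btrfs filesystem show        # the Btrfs filesystem sitting on top of that logical volume

Nothing in there is exotic - it's md + LVM + Btrfs, just assembled and patched by Synology.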

Another way of looking at it: you are paying a lot of money for fairly weak Synology hardware. The actual money is spent on Synology's software development, which has proven to be rather good and is the real reason to prefer them.

[* Somewhere a Synology developer just passed-out, along with their fanboys..]
 
What would be the reason for not supporting SHR on RS, FS, and DS*xs units? If SHR improves reliability, wouldn't business users of high-performance NAS units be interested in having it? Unless Synology considers SHR less stable than standard RAID.
 
Because Synology's official position is SHR for SOHO, conventional RAID for pro/business.
 
And what is the official reasoning behind it?
Nobody here is Synology, so we can only guess.

- First has to be market segmentation, or at least segmentation as Synology sees it
- Second has to be technical issues - once you hit the Xeon architecture with dedicated network hardware we can only guess at the additional hurdles

Note that some RS products do allow SHR / Btrfs, including my own systems.
 
When I asked once during a Q&A session, the reason given was performance, plus “we don’t see businesses mixing different size drives as part of a single volume”. Meaning performance again.

I would also say, best to ask them directly.
 
Gents,
what about using one defined sample for testing purposes:
- a bunch of data that matches the data you actually want to run on this storage pool, or the most frequent transfers to/from the source/target computer
- keeping the folder structure (including deep nesting)
- keeping the file diversity (a lot of small files, or just a few large files)

... so that the testing environment stays the same from the transferred/received data point of view.
Then you can use this sample for plenty of tests (SMB, NFS, command-line transfers with specific options, ...).
Tools like CrystalDiskMark will never give you the same sample as your real data architecture or transfer environment.
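
For the measurement itself, a timed copy of that sample is enough. A rough sketch from a Linux/Mac client (the paths and mount point are placeholders - adjust them to your own sample folder and share):

# copy the representative sample to an SMB or NFS mount of the NAS and time it
time rsync -a --stats ~/sample-data/ /mnt/nas-test/sample-data/

# run the same command against an NFS mount vs an SMB mount to compare protocols
# with exactly the same file mix

Running the identical sample (same folder depth, same small/large file mix) against each protocol is what makes the numbers comparable.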

Re Syno RAID, there are plenty of posts from my side here about how it is glued together (Btrfs over device-mapper). Btrfs itself has never had trustworthy RAID5/6 support, and Btrfs RAID10 never outperformed Btrfs RAID0 for me (meaning in real tests, not with a tool like CrystalDiskMark).
You can read some test results here:
But bear in mind that it is not Syno HW or the same Linux kernel.

PS: You can't expect serious support from Syno support.
 
Guys, do you have any suggestions as to which "mode" to choose for the storage pool - single volume or multiple volume support? Synology provides no documentation on the actual performance difference. Do you know of any reliable comparisons? Is there any other significance to having multiple volumes, e.g. when it comes to RAID rebuild, stability, or reliability? I still need to choose between RAID 10 and RAID 6 for my 6-HDD setup, and then choose the storage pool mode.
 
Try RAID 10 first; it only takes a couple of minutes to ”build”, then test the performance.
 
@Spanksky My results from DS1821+ RAID 10 configuration, using 6 Ironwolf 12 TB drives are about:
- 700 MB/s read
- 700 MB/s write

I copied a 20 GB file back and forth between the NAS and my Windows 10 PC. The only setting I changed before the 6-drive test was enabling 9k jumbo frames on my PC and on the NAS. I changed that after first testing the 4-drive configuration I migrated from the previous NAS; with 4 drives, the results were around 400 MB/s, and jumbo frames made no difference in that test.
 
Thank you for sharing.

Sounds in line with what I got. Are you trying RAID 6 or SHR2?
 
RAID 6. On DSM 6.2.4.
The speed of creation is horrible. Looks like it will be several days...
I may want to go back to RAID 10, at the expense of 11 TB...
-- post merged: --

Does anyone know what pairs of bays (numbers) should not fail at the same time when running RAID 10 on 6 or 8 bays? Or is it not pairs but triplets?
 
You will be running 3 pairs of mirrored drives. A critical failure would be if, for example, drives 1 and 2 fail at the same time.
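
If you want to confirm which slots actually hold the mirror copies rather than assume it, you can check the md layout over SSH. A sketch (assuming the data array is /dev/md2, as on typical single-pool DSM setups; the mapping of sdX names to physical bays can vary):

cat /proc/mdstat                 # shows the raid10 array and its member partitions
sudo mdadm --detail /dev/md2     # shows the layout (typically near=2) and the RaidDevice order

With the default near=2 layout, mirror copies sit on adjacent RaidDevice slots (0/1, 2/3, 4/5), so the --detail output tells you which members must not fail together.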
 
Should I assume that it is always bays:
- 1 & 2
- 3 & 4
- 5 & 6
- 7 & 8
that should not fail simultaneously?
-- post merged: --

I would actually imagine that it is a mirror of 3 striped drives. This is where the performance increase would be coming from.

You are right. This article confirms this:
-- post merged: --

Given the above, if I had 4 HDDs (1, 2, 3, 4) from the previous setup and then added 2 new HDDs (5, 6) - same model and firmware, but apparently different physical construction - would it be recommended to arrange them like this:
- 1, 2, 3, 4, 5, 6
or to mix them a bit, so that the new ones are not paired together, like this:
- 1, 2, 3, 5, 4, 6
?
I am concerned about a given batch having a higher probability of failure if it was negatively affected in production or transportation, which the second ordering should in theory address.
-- post merged: --

Or is the first ordering better in this case, because drives from the same production batch would be more closely matched in performance?
 
I don't know what best practice is regarding different "types" of HDDs and how they interact in a mirrored pair. Just looking at it from a logical point of view, I would mix the different types of drives across the mirrored pairs, just in case there is a defect in that specific batch. But even then, the probability of them failing at the same time is most likely negligible.

I'm sure the order you mentioned is correct, but if you shut down your NAS and mix up all the drives afterwards, the array will still work. So a specific order is not guaranteed to stay intact unless you see to it yourself (and don't shuffle the drives after creating the array).
 
So, after 1.5 days of building RAID 6, it finally finished. 6 Ironwolf 12 TB drives, single 10 GbE network card.
My results are somewhat less consistent compared to RAID 10.

When copying a 20 GB file to the NAS, I am getting 700-900 MB/s, oscillating between these values in a couple of gentle waves, as seen in the speed graph of the Windows file copy dialog. Disk utilization on the NAS reaches 97%.

However, when copying the file back to the PC, my PC seems to struggle. One of the 12 CPU threads hits 100% usage, the progress in the file copy dialog is very jerky and jumpy, and the whole Windows Explorer is affected, presumably by the CPU load. Disk utilization on the NAS reaches only 50%. The effective speed is around 400-500 MB/s, but Windows has trouble measuring it accurately, sometimes showing crazy readings in the 2+ GB/s range.
I turned off jumbo frames, but it did not help.

Any ideas what the problem could be?
-- post merged: --

I reverted the driver of my Asus 10 GbE network card from 3.1.6 to 2.2.3.0, and the problem with copying files from the NAS to my PC is gone!

When copying files from the NAS to the PC, the results settle at 560-580 MB/s after starting a bit lower, around 400 MB/s.

Compared to RAID 10, where I had 700 MB/s both ways, the slower read speed from the NAS is a bit disappointing.
 
So, next I updated DSM to the latest 6.2.4 patch version 5. SMB was restarted as part of the update.
Now the file copy speed from NAS to the PC is around and above 850 MB/s. I don't understand what is happening :D

Interestingly, DSM's Resource Monitor registers much lower network speeds than what I see in Windows.
 
Start the test by eliminating SMB, your network and your PC from the equation.

SSH into your NAS, run sudo -i (for root access) and type: hdparm -t /dev/md2

This will give you the raw volume read performance. Read only, though.
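
For a rough write-side number (since hdparm only tests reads), a dd run directly on the NAS works too. A sketch - the volume path is just an assumption, adjust it to your own volume, and note that zero-filled data can give inflated results if Btrfs compression is enabled:

# write a 10 GiB test file and include the final flush in the timing
time sh -c 'dd if=/dev/zero of=/volume1/ddtest.bin bs=1M count=10240 && sync'

# clean up afterwards
rm /volume1/ddtest.bin

Divide 10240 MB by the elapsed seconds reported by time to get the sustained write rate.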
 
/dev/md2:
Timing buffered disk reads: 2692 MB in 3.00 seconds = 896.96 MB/sec
That should be your goal then.

To test your network, have you tried iperf? You can add it to Docker on the NAS (super simple to set up) and then run iperf from your PC against your NAS IP. This will give you the raw network performance between your PC and the NAS.
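
If it helps, a minimal sketch of that route (the image name is just a commonly used community iperf3 image, not something Synology ships - any iperf3 server will do):

# on the NAS, start an iperf3 server in Docker
sudo docker run -d --name iperf3-server -p 5201:5201 networkstatic/iperf3 -s

# on the PC (replace 192.168.1.50 with your NAS IP)
iperf3 -c 192.168.1.50        # PC -> NAS direction
iperf3 -c 192.168.1.50 -R     # reverse: NAS -> PC, the direction that was slow here

If both directions are close to line rate, the network and the PC's NIC are off the hook and the bottleneck is in SMB or the disks.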
-- post merged: --

Btw, are your Ironwolfs 5400 or 7200 rpm?
 
