Slow Raid10 read speed DS1621+


@Spanksky Please contact Synology support for comment on this. They must know the hardware limitations. I would love to hear what they say. This would be the deciding factor in whether I buy a DS1621+ or a DS1621xs+, or stay with my 918+.
 
If your need is for a system to perform close to RAID 10 theoretical limits then you need some basics in place for your next purchase:
  • A system with an I/O backplane & lanes capable of accessing all required drives simultaneously
  • A minimum of SAS connectivity and drives
  • A host operating system that resides on a volume independent of the array
  • A host operating system that caches to a volume independent of the array
  • A hardware RAID management system
  • A workload matched at or below the raw I/O limits
  • Sufficient RAM for a cache capable of the task
  • A manufacturer that publishes RAID 10 performance to benchmark against
You have purchased a basic consumer Linux box with a shared backplane, software RAID only, limited to single read or write tasks (i.e. not both simultaneously), SATA only, with the OS and cache mirrored across all the drives of the storage array, and an SMB package that resides on a single volume.
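To put rough numbers on "theoretical limits", here is a back-of-the-envelope sketch. It is idealized (perfect striping, no controller, filesystem, cache or network overhead) and the 230 MB/s per-drive figure is only illustrative:

# Idealized RAID 10 sequential throughput estimate. Illustrative only:
# assumes perfect striping and that every mirror member contributes to reads;
# a single sequential stream on Linux md RAID 10 often reads only one member
# per pair, which roughly halves the read figure.

def raid10_estimate(drives: int, drive_mbps: float) -> tuple:
    """Return an idealized (read_mbps, write_mbps) estimate for a RAID 10 array."""
    pairs = drives // 2
    read = drives * drive_mbps   # best case: all members serve reads
    write = pairs * drive_mbps   # each write lands on both members of a pair
    return read, write

# 10 Gbit/s = 1250 MB/s raw; roughly 93% of that is usable after protocol overhead.
TEN_GBE_USABLE = 10_000 / 8 * 0.93

for n in (4, 6):
    r, w = raid10_estimate(n, drive_mbps=230)   # 230 MB/s is an illustrative figure
    print(f"{n} drives: read ~{r:.0f} MB/s, write ~{w:.0f} MB/s, "
          f"10GbE usable ceiling ~{TEN_GBE_USABLE:.0f} MB/s")

In practice the md RAID 10 read path and caching knock these numbers down considerably, which is exactly what this thread is chasing.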

I understand your desire for higher performance, as I share that requirement. It costs money to achieve it.
How would you explain this part (from the test that provanguard linked):

I think we need to agree to disagree.
[Screenshot: CrystalDiskMark result for the 6-drive RAID 10 over 10GbE]


@Spanksky Please contact Synology support for comment on this. They must know the hardware limitations. I would love to hear what they say. This would be the deciding factor in whether I buy a DS1621+ or a DS1621xs+, or stay with my 918+.

To be honest, I'm liking the flexibility of several volumes on my NAS; it doesn't break everything as soon as I try to make changes.

I'll be sticking with Raid0 that is snapshotted to a Basic or Raid1 volume. I also added an NVMe volume that I use for all Docker containers and installed apps.

If I contacted Synology I would have to re-create the Raid10 volume, and that would break my entire setup.
 
How would you explain this part (from the test that provanguard linked):

I think we need to agree to disagree.

I'm not sure we disagree. That image shows that a sequential test of 1 GB at QD1 for those 6 drives in RAID 10 (effectively a single-user test) produces 805 MB/s read and 630 MB/s write over 10GbE. The caching, drive speed, drive speed at the write point (fast when empty, slower as the drive fills towards the spindle) and network variables are not known, but the figures look in the ballpark to me for 3+3 drives on a Plus series.
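As a rough sanity check on those figures (illustrative arithmetic only, not a measurement):

# Sanity check on the posted 6-drive RAID 10 numbers over 10GbE.
# Real results depend on caching, drive zone (outer vs inner tracks),
# the md layout and network overhead.

read_total, write_total = 805.0, 630.0   # MB/s from the posted screenshot
pairs = 3                                # 6 drives = 3 mirrored pairs

# Data is striped over 3 pairs; both members of a pair write the same data.
print(f"write per drive     : ~{write_total / pairs:.0f} MB/s")
# Assuming only one member per pair services a QD1 sequential read.
print(f"read per mirror pair: ~{read_total / pairs:.0f} MB/s")
print(f"10GbE raw limit     : ~{10_000 / 8:.0f} MB/s")   # before protocol overhead

Roughly 210 MB/s of writes per drive and 270 MB/s of reads per mirror pair are both within reach of a modern 7200 rpm NAS drive on its outer tracks, which is why the numbers look plausible for 3+3 drives.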

☕
 
@Spanksky Any news from Synology?
Hello,

Thank you for contacting Synology.

Please check the performance page below; these are speeds for Plus models using RAID 5 in an optimal setting.

Performance | Synology Inc.

Unfortunately, there has been no testing for RAID 10.

I would recommend performing a SMART test on the disks in the NAS.

HDD/SSD | DSM - Synology Knowledge Center

I would also recommend checking the speed with a PC connected directly to the NAS via Ethernet.

Best Regards,

As they only test RAID 5 for the 1621+, and the sequential speed they "guarantee" with 6 drives easily saturates a 10 Gbit connection, falling back to RAID 5 is at least a workable solution.
 
Thanks for the update. It's quite disappointing, IMO, that Synology support is not able to provide reference performance results for all supported RAID types, including RAID 10. I wonder if QNAP does; they have NAS models based on the same CPU.
 

If you're considering 6 drives as you stated, you can always fall back to Raid5 (if you have the same issues with Raid10) and still easily saturate a 10 Gbit connection. That would be my solution at least (right now I don't need that much storage, though).
 
From what I read, RAID 5 is bad from a reliability point of view; people recommend either RAID 10 or RAID 6. But I cannot find any reviews of the DS1621+ or DS1821+ (the latter is something I have started considering, given the nearly identical price) with a 10 GbE connection, loaded with 6 HDDs in RAID 10. There are some reviews with RAID 6, which seems to perform quite fast.
I still have some thinking to do about the setup. From a simple calculation of data growth, I guess my current setup should last at least 2 years, by which time there should be a newer generation of 6 or 8 bay NASes, e.g. a DS1624+.
 

I will be selling some of my large-capacity drives and purchasing 4 new lower-capacity ones, purely for Raid0+Raid1 with replication.

Before I set this up I will test Raid10 with the 4 identical drives to see if there is a difference. My Exos drives are of different capacities and firmware versions; I don't know if that could be the issue (even though Raid10 treated all drives according to the lowest-capacity one).
 
I ordered a DS1821+ in the end. I have 6 HDDs in total and will try to set up RAID6 or RAID10. Fingers crossed that the speed, and most importantly the long-term reliability of the setup, will be great :)
 
After some testing I still can't get the Raid10 read speed higher than half of the theoretical maximum.

These are 4 new identical drives (each benchmarked individually at 228 MB/s read/write) in Raid10.
[Screenshot: CrystalDiskMark result, 4 identical drives in Raid10]


In the meantime I realized that testing with CrystalDiskMark at 1 GiB is fairly useless for array throughput, as RAM and the drives' built-in caches do a lot of the work.
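A crude way to take the caches mostly out of the picture is to time a sequential read of a file several times larger than the NAS's RAM over the share; a minimal sketch (the share path and file are hypothetical, and the file should not have been read recently on the client either):

# Time a sequential read over SMB to estimate sustained array throughput.
# Use a file several times larger than the NAS RAM (4 GB here) so RAM and
# drive caches cannot hide the real disk speed.
import time

PATH = r"\\nas\share\testfile_16GiB.bin"   # hypothetical share and test file
CHUNK = 8 * 1024 * 1024                    # read in 8 MiB chunks

total = 0
start = time.perf_counter()
with open(PATH, "rb", buffering=0) as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start

print(f"{total / 1e6 / elapsed:.0f} MB/s over {total / 2**30:.1f} GiB")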

8 GiB is a bit more realistic for my 4 GB RAM NAS, Raid10:
[Screenshot: 8 GiB CrystalDiskMark result, Raid10]


Same 8 GiB test with same drives in Raid0:
[Screenshot: 8 GiB CrystalDiskMark result, same drives in Raid0]


I might test Raid5 just for the sake of it, to see if the read speed is 3x for 4 drives; unfortunately the build time for the array is 5 hours.
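For reference, these are the idealized sequential multipliers I would expect for 4 drives, as a sanity check against the screenshots above (real md/SHR behaviour, especially RAID 10 reads at low queue depth, can fall well short of this):

# Idealized sequential scaling for a 4-drive array, as multiples of the
# single-drive speed. Real md/SHR behaviour can fall short of these numbers,
# especially RAID 10 reads for a single stream at low queue depth.
SINGLE_DRIVE = 228  # MB/s, per-drive benchmark figure

expected = {
    # level: (read multiplier, write multiplier)
    "Raid0":  (4, 4),
    "Raid10": (4, 2),   # best case; often closer to 2x read for one stream
    "Raid5":  (3, 3),   # 3 data drives; parity costs little on large sequential I/O
    "Raid1":  (1, 1),   # per 2-drive mirror; reads may improve with parallel streams
}

for level, (r, w) in expected.items():
    print(f"{level:7s} read ~{r * SINGLE_DRIVE} MB/s, write ~{w * SINGLE_DRIVE} MB/s")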
 
I created an SHR array with 4 drives and got the following result:
[Screenshot: CrystalDiskMark result, 4-drive SHR array]


I get higher performance with SHR than with Raid10, which in itself is quite indicative that Raid10 is broken, at least on DSM 7.0.1-42218 Update 2 and the DS1621+.

I have sent my findings to Synology through my support ticket.

Now back to my Raid0+Raid1 setup :)
 
I'd aim at SHR (SHR2, hopefully) on Btrfs, rather than regular RAID. Synology has it nailed down and there is no reason to use regular RAID if your NAS is capable of the Synology enhancements.

☕
Do you have any insight into how RAID 6 compares with SHR 2? From what I heard, SHR 2 is a custom solution, while RAID 6 should be a "standard" type of RAID, which should be more easily recoverable in case of any issues.
I am interested mostly in a performance and reliability comparison between the two; I don't care that much about features related to disk sizes. I assume that RAID 6 can also be enlarged by replacing all disks one by one with larger ones.

I created an SHR array with 4 drives and got the following result:
[Screenshot: CrystalDiskMark result, 4-drive SHR array]

I get higher performance with SHR than with Raid10, which in itself is quite indicative that Raid10 is broken, at least on DSM 7.0.1-42218 Update 2 and the DS1621+.

I have sent my findings to Synology through my support ticket.

Now back to my Raid0+Raid1 setup :)
I would imagine that you would get higher speeds when going from 4 identical disks to 6.
Have you tried tweaking settings on the network cards on the PC, e.g. jumbo frames set to 9k? I don't know if similar settings are available on the NAS.
 
6 drives would definitely improve performance. I just wanted to check whether the performance scales with the number of drives, and it does with SHR, Raid0 and Raid1, but not with Raid10.

Network performance is fine with this setup, considering I'm getting close to 900 MB/s with Raid0 and over 1000 MB/s from cache (when testing with CrystalDiskMark at 1 GiB).

iperf between my PC and the NAS shows 8.7 Gbit/s (≈1090 MB/s); the reason it's not higher is some Cat5e cable that I need to replace.
 
