Slow Raid10 read speed DS1621+

Hi all,
Thought I might be able to get some help here regarding my slow read speeds with RAID10 and DS1621+.

I have 4 drives; benchmarking each of them through DSM 7, I get a read throughput of 230-240 MB/s (write is close to this speed too).

After building a RAID10 storage pool and testing with hdparm I get this result:
root@NAS:~# hdparm -t /dev/md2
/dev/md2:
Timing buffered disk reads: 1342 MB in 3.01 seconds = 445.47 MB/sec

(Real-life read through SMB is similar, about 460 MB/s.) One strange thing is that I get the same write speed as read speed through SMB.
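Side note for anyone wanting to cross-check hdparm with a different tool: a rough dd equivalent, assuming the dd build on DSM supports direct I/O, would be:

root@NAS:~# dd if=/dev/md2 of=/dev/null bs=1M count=4096 iflag=direct

(iflag=direct bypasses the page cache so cached data doesn't inflate the number; drop it if dd complains about the flag.)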

I have a 10Gbit setup for my main PC and the DS1621+.
 
Are you sure? I'm getting the same read and write performance on a RAID10 setup. Never seen that before.

It should be reaching around 800 MB/s in sequential reads, considering each individual drive benchmarks at 230 MB/s (4 × 230 MB/s ≈ 920 MB/s raw, before overhead).
 
I'm never sure but in my experience that looks about right.

I understand the disappointment though as we take it for granted that the scaling of RAID10 gives us multiples of the number of spindles as sure as night follows day.

Then we remember the dedicated RAID hardware, the HBAs, and the duplex/bandwidth of SAS, etc., and gently nod in agreement that the additional expense of a real server does add meaningful levels of performance to RAID10. It makes a SATA box running software RAID look a little sad at times.

In better news, when used with SHR/Btrfs and faster drives, these AMD units can fly along. I can get 1,200 MB/s sequential R/W from my 21+, and without the 10GbE bottleneck I'd expect even more, as I can see the backplane moving over 3,000 MB/s at times.


I don't really agree that there is not enough raw performance in a DS1621+ to serve one user at the speeds I'm expecting. You are talking about hardware expected to serve hundreds of users.

In any case, I did some testing and got the following:

  • 1 drive, no RAID: 230 MB/s read, 230 MB/s write
  • 2-drive RAID 1: 460 MB/s read, 230 MB/s write (theoretical for 2 drives: 2× read, 1× write)
  • 3-drive RAID 0: 690 MB/s read, 690 MB/s write (theoretical for 3 drives: 3× read, 3× write)
  • 4-drive RAID 10: 460 MB/s read, 460 MB/s write (theoretical for 4 drives: 4× read, 2× write)

The only RAID array that does not perform according to theoretical expectations is the RAID 10 array. And the fact that it delivers exactly 2× rather than 4× read performance suggests a bug, an incorrect setting, or similar.
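One thing that might be worth checking, though this is an assumption on my part rather than something confirmed for DSM: Linux md RAID10 defaults to the "near" layout, and with a near layout a single sequential read stream typically scales to roughly half of what the "far" layout delivers (far reads more like a pure stripe). The layout in use shows up in mdadm:

root@NAS:~# mdadm --detail /dev/md2 | grep -i layout

If that reports something like "near=2", then 2× rather than 4× sequential read would be consistent with how md's default layout behaves, rather than pointing to a DSM bug.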

I'm also quite certain that, for example, RAID 0 with 4 drives would scale accordingly to 920/920 MB/s; unfortunately I need one of the drives for a temporary backup, so I couldn't try it.
 
Anybody else running RAID 10 on a DS1621+ or DS1821+?

How is your scaling on RAID 10? Please see my results above (scaling for RAID 1 and RAID 0 is more or less perfect against theoretical, while RAID 10 reads at only 50% of theoretical).
 
Is this really true? I have been thinking about getting a DS1621+ and running RAID10 with 6 IronWolf 12TB drives. Would I see only 460 MB/s transfer speeds over SMB? I currently get 250 MB/s with a DS918+, 4 drives in RAID10, and a 2.5GbE USB adapter. It would seem like a meaningless upgrade from a performance point of view.
How does the DS1621xs+ compare to the DS1621+ in RAID10?

@Spanksky Have you tried Synology support to confirm this?
 
By the way, the previous Intel-based version of this form factor, the DS1618+, has been reported to reach 1,200 MB/s read and 600 MB/s write in RAID10, according to this review:
 
I doubt there is an issue with raw power; the CPU is at most 10% utilisation during these tests.

I can only attribute it to some kind of bug or incompatible HW combination. My drives are Exos 16s and on the approved list.

As written above, I use a 3-drive RAID0 now with a periodic snapshot to a fourth drive. That gives me R/W at 690 MB/s.

A week ago I also installed an M.2 drive that I formatted as a volume. Its SMB performance maxes out my 10Gbit network at about 1,100 MB/s.
 
Ok, then you are hitting the limit of read+writes that the volume can support.

☕
Agreed, but it's still an issue considering that the same drives and NAS unit perform according to theoretical expectations for RAID0 but at only half of theoretical for RAID10.
 
I have been reading some benchmarks for the DS1621+ and the xs+, and some of them showed that having an NVMe cache in the NAS can actually lower performance drastically compared to having only HDDs.

@Spanksky Are your tests doing reads and writes separately or simultaneously?
 
There was no NVMe cache during the tests. I focused all tests on read performance, as write was performing according to expectations.

See my first post; it states what command I used for the read performance test.
 
@Spanksky Would you be able to test just copying large files to and from the NAS using SMB and a Windows PC with a multi-gig Ethernet card and an NVMe drive? Is this how you performed your tests?
 
My SMB performance tests gave the same result as using hdparm on the command line. I just wanted to remove any dependencies on the SMB protocol and network performance; that's why I focused on hdparm.

Right now I have a 3-drive RAID0 and SMB is performing accordingly (I did the same test with RAID10, and there was still an issue with read performance, as hdparm indicated).

I attached an image of my RAID0 setup during a 10GB file transfer. While transferring this file, the NAS CPU usage is at 15% (I stated 10% earlier). The NVMe in the PC is rated at 3,500 MB/s, so it's not bottlenecking the test.
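As a side note, a raw TCP throughput test takes the disks out of the picture entirely when splitting the variables. This assumes iperf3 is available on both ends (it isn't part of stock DSM, so it would need to be added, e.g. via a container):

root@NAS:~# iperf3 -s

Then on the PC, where <NAS address> is a placeholder:

iperf3 -c <NAS address> -t 10

If that shows the full ~10Gbit, the network and cabling are off the suspect list.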
 


If your need is for a system that performs close to RAID 10 theoretical limits, then you need some basics in place for your next purchase:
  • A system with an I/O backplane & lanes capable of accessing all required drives simultaneously
  • A minimum of SAS connectivity and drives
  • A host operating system that resides on a volume independent of the array
  • A host operating system that caches to a volume independent of the array
  • A hardware RAID management system
  • A workload matched at or below the raw I/O limits
  • Sufficient RAM for a cache capable of the task
  • A manufacturer that publishes RAID 10 performance to benchmark against
You have purchased a basic consumer Linux box with a shared backplane, software RAID only, limited to single read or write tasks (i.e. not both simultaneously), SATA only, with an OS and cache mirrored across all the drives of the storage array, and an SMB package that resides on a single volume.

I understand your desire for higher performance, as I share that requirement. It costs money to achieve it.
 
