LAN file share between NAS and Desktop (running on M.2 NVMe) - a consideration

So it's time to share my experiences with M.2 NVMe disks in my desktops.
I would like to open a discussion with you about your experiences.

First, some basics:
"Non-Volatile Memory Express” = NVMe, is an open standard developed to allow modern SSDs to operate at the range of current physical barriers (PCIe revision, memory cells, ...).
It allows this kind of memory to operate as an SSD directly through the PCIe interface rather (better) than through SATA interface and being limited by the slower SATA speeds.
Just to be sure M.2 standard (without NVMe) is just "relatively" normal SSD based on SATA interface only. Then don't be surprised when you order M.2 and you will expect M.2 NVMe performance (or slot) !!
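If you're not sure which kind of M.2 drive you actually got, on Linux a quick check looks like this (a sketch; device names will differ on your system):

  lsblk -d -o NAME,TRAN,MODEL
  # NVMe drives show up as nvme0n1 etc. with TRAN=nvme;
  # a SATA-based M.2 drive appears as a normal sdX device with TRAN=sata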

Test environments:
Comp1: W10 Home, MB Asus ROG Maximus X Hero with Z370 chipset, 32GB DDR4-2666, M.2 NVMe Samsung 970 EVO Plus (250GB), 1x 1Gbps (onboard) + Intel Pro PCI 1Gbps card with LAG
Comp2: W10 Pro, MB Gigabyte Designare with Z390 chipset, 64GB DDR4-2666, M.2 NVMe Samsung 970 EVO (500GB), 2x 1Gbps (onboard) with LAG
Network: UniFi 48-port switch with LAG enabled on both sides (computers and NAS), jumbo frames, non-blocking throughput: 70 Gbps, switching capacity: 140 Gbps
Wires: Cat 6a
NAS: DS1813+ with one bond with LAG (2x 1G LAN), jumbo frames, target volume based on 2x HDD in RAID1 (Seagate Constellation CS, ST1000NC001-1DY162) ... HDD benchmark (hdparm -t /dev/sda and /dev/sdb) = 157.78 MB/s
SMB and iSCSI for file transfers
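For reference, the NAS disk figures above come from simple sequential tests run on the NAS itself; roughly like this (a sketch - run as root, device names and paths are examples, and the dd test writes a real 1GB file):

  # sequential read of each RAID1 member
  hdparm -t /dev/sda
  hdparm -t /dev/sdb
  # rough sequential write test to the target volume
  dd if=/dev/zero of=/volume1/test.bin bs=1M count=1024 conv=fdatasync
  rm /volume1/test.bin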

Reason for the test:
Some of you may ask yourselves: "Why the hell are you using the DS1813+, without 10Gbps and without fast SSDs (MLC or SLC), for such a test of M.2 NVMe disks on the other side?"
Because I would like to push back on the common habit of basing a NAS purchase decision on the number of NAS bays only.

Short summary:
I wasn't able to achieve transfer performance between the NVMe disks and the target volume in the NAS anywhere near NVMe speed. What a surprise :cool: , exactly as expected.
Reasons:
- I also have 2x 10Gbps SFP ports in my switch, but the bottleneck is still on the NAS side (no 10Gbps card); even with a 2x LAG for LAN it provides max 2Gbps only (far below NVMe speed)
- The NAS disk group operation model and the disks used. Even if I purchased SSDs (MLC) for the RAID1, the performance would cover just a fraction of the enormous speed of the M.2 NVMe, so it would be a wasted investment. ... And to be sure: when you operate SHR with 3+ disks, or RAID5, or RAID6, you can absolutely forget about the same speed as RAID1 or SHR with 2 disks.

Idea - 10Gbps NIC in the NAS (purchase enabler or nice-to-have)
When I purchase my next NAS (and I will), I need to take into consideration that a 10Gbps NAS interface for LAN can only be utilized with:
- min. 3+ fast SSDs (MLC or SLC) in a RAID0 group. Reason: when a single disk has e.g. 500MB/s write performance (basic disk, no RAID), then you can theoretically utilize the NIC (network interface card/controller/adapter) in your NAS up to 4,000Mbps. But that's just 40% of your 10Gbps network environment. And every RAID level decreases the performance of the basic disk, especially in a NAS without an independent disk controller, which is the case for Synology and others in a similar range. (See the sketch after this list.)
Even worse - try to imagine how much it decreases when you use SHR with 3+ disks (RAID5) or SHR-2 (RAID6) ... a hard impact on your final disk group performance.
Btw, there is also the "smart" consideration of using identical disks with identical performance for such RAID5/6 or SHR/SHR-2 configurations, because when you use different disks, your performance drops to the slowest of them.
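The arithmetic behind that 40% figure, as a quick shell sketch (the 500MB/s per-disk write speed is an illustrative assumption, and real RAID0 scaling is less than linear):

  DISK_MBPS=500                             # assumed single-SSD sequential write, MB/s
  NUM_DISKS=3
  ARRAY_MBPS=$((DISK_MBPS * NUM_DISKS))     # ideal RAID0 scaling
  echo "single disk: $((DISK_MBPS * 8)) Mbps"    # 4000 Mbps = 40% of 10GbE
  echo "3-disk RAID0: $((ARRAY_MBPS * 8)) Mbps"  # 12000 Mbps - enough to saturate 10GbE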

Then there is the question - do I really need such 10Gbps network speed for the NAS?
YES - when the target volume in the NAS is RAID0 (mentioned above) = 3+ disks
NO - when I have just RAID1/5/6 or the SHR equivalent. Of course the same is valid for a basic disk or JBOD.

So where is the point of using such a fast M.2 NVMe disk? As mentioned many times in threads here - in desktop computing services: a photo editing station (desktop), a video editing station (desktop), game 3D engine operation, or statistical algorithm computing. For all of them you need really fast CPU-RAM-disk data transfers. Then you need a fast NAS swap environment, e.g. for Adobe Lr.
In this case the LAN speed is a minor problem; to be sure, use at minimum a 2Gbps LAG for file transfers between NAS and computer.
2Gbps = 256MB/s (quick math below) ... which is:
- 10 RAW pictures per second for post-processing in Adobe Lr (depending on camera specification)
- 7 seconds of 4K/60fps RAW footage, or up to 60 seconds with H.265
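Where those numbers come from, as a rough sketch (the ~25MB RAW file size is an assumption; protocol overhead is ignored):

  LINK_MBIT=2000                            # 2x 1Gbps LAG
  LINK_MBPS=$((LINK_MBIT * 128 / 1000))     # 256 MB/s, using the 1Gbps ~ 128MB/s convention
  RAW_MB=25                                 # assumed size of one RAW photo
  echo "$((LINK_MBPS / RAW_MB)) RAW photos per second"   # ~10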

Btw, for photographers it is better to use this kind of scenario:
- a USB 3.0 SD card reader connected directly to the NAS for the initial data transfer from SD card to NAS (example below)
- for the rest of the operations (library management, editing, ...) use the network connection to the NAS
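A hedged example of that initial import, run on the NAS itself (paths are illustrative - on Synology, USB devices typically mount under /volumeUSB1/usbshare, but check your own mount point first):

  # copy new photos from the SD card (USB reader) to the photo share
  rsync -avh --progress /volumeUSB1/usbshare/DCIM/ /volume1/photo/incoming/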

And lastly, for Mark from NAScompares:
- for a Plex server you can forget about a 10Gbps NIC
It's simple - there is still one mandatory bottleneck = the TV's NIC, which is mostly only about 100Mbps.

In the next stage I will prepare some screenshots.
 
Nice write-up. Just ordered a pair of WD Blue 500GB SSDs (SATA) for my 718+ (and the same for the 918+). I want to see how these units perform with full SSD storage. Atm I have the 718+ running with a single SSD and the NAS is flying, but just as a POC.

Regarding those high-end speeds: they will not show up until I go full 10G LAN. That might happen at some point later this year or next year.
 
Thx M8, good luck with those tests. I'm really satisfied with the small 718+. In my case I have the 718+ for primary backup purposes, so I like HDDs more than SSDs :cool: . And both the 718 and 918 are too small (in bays) for my main usage in other cases.
Of course, for a 2x1G LAG the WD Blue 500GB SSD is a useful solution. Post some test results here.

An end-to-end 10G network topology including all appliances makes sense only when the NAS provides:
- RAID0 with fast SSDs
or
- M.2 NVMe for primary storage purposes (not for cache only)

Then there is the question - chicken or egg?
Or build a better operational architecture for desktop computing (my case).
 
Then there is the question - chicken or egg?
Indeed

And both the 718 and 918 are too small (in bays) for my main usage in other cases.
True, and I agree. I'm moving them both to SSD just to get fast services running on them; main storage is what 48 bays of RS units are for ;)
 
I am running 10GbE - from what I understand of the SSD cache vs. 10GbE deliberation, caching doesn't really help much with larger files; in my case I have large ISOs, so 10GbE was the more impactful solution.
 
I am running 10GbE - from what I understand of the SSD cache vs. 10GbE deliberation, caching doesn't really help much with larger files; in my case I have large ISOs, so 10GbE was the more impactful solution.

The cache has an impact on some processes, not on entire transfers.
Even when you run one of the (4) Synology PCIe expansions for 10GbE operation, there is still a bottleneck in your volumes (except the RAID0 scenario mentioned above), because the NAS disk speed is the bottleneck.
Another one is the opposite side - the source of the data transfer to the NAS - is that also 10GbE?
Is there a RAID0 of 3x fast SSDs, or a single NVMe?
 
Agree about the disk array being a possible bottleneck for transfer speeds. The ideal solution here might be the rumored expansion card that supersedes the E10G18-T1, supporting both 10GbE and SSD caching.
 
The ideal solution here might be the rumored expansion card that supersedes the E10G18-T1, supporting both 10GbE and SSD caching.
Re the "ideal solution" mentioned:
there is no expansion Ethernet card that will accelerate your slow disks, even with 100TB of cache (which is total bulls..t), because that reflects a misunderstanding of cache principles.

To be sure, a cache:
is a component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere.
 
I do not have SSD caching set up on my 1819+, but I can tell you that running a ~40TB SHR-1 array, 10GbE with jumbo frames enabled, HGST Deskstars @ 7200 RPM, the Gigabit transfer limit of ~112MB/s has easily doubled and at times tripled when transferring files from a 10GbE client, through a 10GbE switch, to/from the NAS.
 
Similar here - 231MB/s with the 2x1G LAG in my setup (above),
but only for big files (>2GB), e.g. CSV exports in my case.
For a 5GB batch of small files (docs, ...) the speed is under 40% of the possible utilization, which is normal for such a transfer.
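For tests like these it also helps to separate raw network throughput from disk and SMB overhead; a quick sanity check with iperf3, as a sketch (assuming iperf3 is installed on both ends; the hostname is an example):

  # on the NAS:
  iperf3 -s
  # on the desktop (4 parallel streams, 30 seconds):
  iperf3 -c nas.local -P 4 -t 30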

Finally, glad to see that another HDD NAS user is here :)
 
