So it's time to share my experiences with M.2 NVMe disks in my desktops.
I would also like to open a discussion about your experiences.
First, some basics:
"Non-Volatile Memory Express” = NVMe, is an open standard developed to allow modern SSDs to operate at the range of current physical barriers (PCIe revision, memory cells, ...).
It allows this kind of memory to operate as an SSD directly through the PCIe interface rather (better) than through SATA interface and being limited by the slower SATA speeds.
Just to be sure M.2 standard (without NVMe) is just "relatively" normal SSD based on SATA interface only. Then don't be surprised when you order M.2 and you will expect M.2 NVMe performance (or slot) !!
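Just for illustration, here is a quick Python sketch of the theoretical interface ceilings (published line rates minus encoding overhead; real drives land somewhat below these numbers):

```python
# Back-of-envelope interface ceilings in MB/s (decimal units).
# These are line rates minus encoding overhead; real drives land below them.

def sata3_ceiling():
    # SATA III: 6 Gbps line rate, 8b/10b encoding -> 10 line bits per data byte
    return 6000 / 10                    # ~600 MB/s

def pcie3_x4_ceiling():
    # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding, x4 link on a typical M.2 slot
    per_lane = 8000 * (128 / 130) / 8   # ~985 MB/s per lane
    return 4 * per_lane                 # ~3938 MB/s

print(f"M.2 SATA ceiling: ~{sata3_ceiling():.0f} MB/s")
print(f"M.2 NVMe ceiling: ~{pcie3_x4_ceiling():.0f} MB/s")
```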
Test environments:
Comp1: W10 Home, MB Asus ROG Maximus X Hero with Z370 chipset, 32GB DDR4-2666, M.2 NVMe Samsung 970 EVO Plus (250GB), 1x 1Gbps (onboard) + Intel Pro PCI 1Gbps card, with LAG
Comp2: W10 Pro, MB Gigabyte Designare with Z390 chipset, 64GB DDR4-2666, M.2 NVMe Samsung 970 EVO (500GB), 2x 1Gbps (onboard) with LAG
Network: Unifi 48 switch with LAG enabled on both sides (computers and NAS), jumbo frames; non-blocking throughput: 70 Gbps, switching capacity: 140 Gbps
Wires: Cat 6a
NAS: DS1813+ with one bond (LAG over 2x 1Gbps LAN), jumbo frames, target volume based on 2x HDD in RAID1 (Seagate Constellation CS, ST1000NC001-1DY162) ... HDD benchmark (hdparm -t /dev/sda and /dev/sdb) = 157.78 MB/sec
SMB, iSCSI for file transfers
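Before the results, it helps to line up the rough ceiling of every hop in the chain; the lowest one wins. A minimal sketch (the NVMe figure is Samsung's rated sequential read, the HDD figure is the hdparm result above):

```python
# Approximate sequential ceiling of each hop in the test chain (MB/s, decimal).
# The LAG figure assumes multiple parallel streams; a single SMB/iSCSI session
# normally hashes onto one 1Gbps link, so one transfer sees ~125 MB/s.
chain = {
    "NVMe 970 EVO (desktop)": 3400,   # rated sequential read
    "2x1Gbps LAG (network)":   250,   # 2 Gbps / 8
    "RAID1 of 2 HDDs (NAS)":   158,   # hdparm -t result above
}

bottleneck = min(chain, key=chain.get)
print(f"End-to-end ceiling: ~{chain[bottleneck]} MB/s, set by {bottleneck}")
```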
Reason for the test:
Some of you may ask yourselves: "Why the hell are you using a DS1813+ without 10Gbps and without fast SSDs (MLC or SLC) for such a test of M.2 NVMe disks on the other side?"
Because I would like to disprove the idea, behind many purchase decisions, that a NAS should be chosen by the number of bays only.
Short summary:
I wasn't able to achieve transfer performance between the NVMe disks and the target volume in the NAS anywhere near NVMe speed. What a surprise, as was expected.
Reasons:
- I also have 2x 10Gbps SFP+ ports in my switch, but the bottleneck is still on the NAS side (no 10Gbps card); even with a 2x LAG for LAN it provides max 2Gbps only (far below NVMe).
- The NAS disk group operation model and the disks used. Even if I bought SSDs (MLC) for the RAID1, their performance would use only a fraction of the enormous speed of M.2 NVMe, so it would be a wasted investment. ... And to be sure: when you run SHR with 3+ disks, RAID5, or RAID6, you can absolutely forget about the same speed as RAID1 or two-disk SHR.
Idea - a 10Gbps NIC in the NAS (purchase enabler, or just nice to have?)
When I purchase my next NAS (and I will), I need to take into consideration that I can utilize a 10Gbps LAN interface on the NAS only with:
- min. 3+ fast SSDs (MLC or SLC) in a RAID0 group. Reason: when a single disk delivers e.g. 500MBps write performance (basic, no RAID), then theoretically it can feed the NIC (network interface card/controller/adapter) in your NAS up to 4,000Mbps. But that's just 40% of your 10Gbps network environment. And every RAID level costs some of the raw disk performance, especially in a NAS without an independent disk controller, which is the case with Synology and others in a similar range. See the sketch after this list.
Even worse - try to imagine the decrease when you use SHR with 3+ disks (RAID5) or SHR-2 (RAID6) ... a hard impact on your final disk group performance.
Btw, there is also the "smart" consideration of using identical disks with identical performance in such RAID5/6 (SHR/SHR-2) configurations, because when you mix different disks, performance drops to the slowest of them.
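To put numbers on the RAID0 case, a minimal sketch; the 500MBps per-disk write comes from above, while the ~10% software-RAID overhead is my own illustrative assumption:

```python
# How many SSDs in RAID0 to saturate a 10Gbps NIC (decimal units)?
NIC_MB_S = 10_000 / 8     # 10 Gbps -> 1250 MB/s
PER_DISK_MB_S = 500       # single-SSD sequential write, no RAID
OVERHEAD = 0.90           # assumed: software RAID0 keeps ~90% of raw speed

for disks in range(1, 5):
    throughput = disks * PER_DISK_MB_S * OVERHEAD
    pct = 100 * min(throughput, NIC_MB_S) / NIC_MB_S
    print(f"{disks} disk(s): ~{throughput:.0f} MB/s -> {pct:.0f}% of 10Gbps")
```

Under these assumptions, one disk fills ~36% of the link and it takes three disks to saturate it, which is where the "min. 3+" above comes from.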
Then there is the question: do I really need such 10Gbps network speed for the NAS?
YES - when the target volume in the NAS is the RAID0 group mentioned above, i.e. 3+ disks
NO - when I have just RAID1/5/6 or the SHR equivalent. Of course, the same applies to a basic disk or JBOD.
So where is the point of such a fast M.2 NVMe disk? As mentioned many times in threads here: in desktop workloads - a photo editing station, a video editing station, game 3D engine operation, or statistical algorithm computing. All of them need really fast CPU-RAM-disk data transfers, and then a fast scratch/swap environment on the NAS, e.g. for Adobe Lr.
In that case the LAN speed is the minor problem; to be sure, use at minimum a 2Gbps LAG for file transfers between the NAS and the computer.
2Gbps = 256MBps ... which is:
- 10 RAW pictures per second for post-processing in Adobe Lr (depending on camera specification)
- 7 seconds of 4K/60fps RAW recording, or up to 60 seconds with H.265
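A minimal sketch of the photo math; the ~25MB RAW file size is my assumption for illustration, as the real size depends on the camera:

```python
# Rough transfer math for a 2Gbps LAG.
LINK_MB_S = 256      # 2 Gbps expressed as ~256 MBps, as above
RAW_PHOTO_MB = 25    # assumed average RAW file size; camera-dependent

photos_per_second = LINK_MB_S / RAW_PHOTO_MB
print(f"~{photos_per_second:.0f} RAW photos per second over the LAG")
# For video, divide LINK_MB_S by your codec's data rate in MB/s to get
# how many seconds of footage move per second of transfer.
```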
Btw, for photographers this kind of scenario works better:
- a USB 3.0 SD card reader connected directly to the NAS for the initial data transfer from the SD card to the NAS
- the network connection to the NAS for the rest of the operations (library management, editing, ...)
And one last point, for Mark from NAScompares:
- for a PLEX server you can forget about a 10Gbps NIC.
It's simple - there is still one mandatory bottleneck: the TV's NIC, which is commonly only about 100Mbps.
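For scale, a quick comparison against that 100Mbps TV port; the bitrates here are ballpark figures of mine, not measurements:

```python
# Does a 100 Mbps TV NIC even get stressed by typical Plex streams?
TV_NIC_MBPS = 100
streams = {                      # ballpark bitrates, not measurements
    "1080p Blu-ray remux": 35,   # Mbps
    "4K HEVC stream":      50,   # Mbps
}
for name, mbps in streams.items():
    print(f"{name}: {mbps} Mbps = {100 * mbps / TV_NIC_MBPS:.0f}% of the TV's NIC")
# Even several simultaneous streams fit comfortably inside 1-2 Gbps at the NAS.
```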
In the next stage I will prepare some screenshots.