7.2 Link aggregation question

A question on an upcoming 7.2 feature that I have no background on:
Link Aggregation...

I have one computer (a Supermicro server running Windows 7 64-bit) that has 2x GbE ports. Let's assume one port's IP is 192.168.1.101; the other is not yet configured.
My switch is unmanaged.
The experimental DS720+ NAS is at 192.168.1.156, and only one GbE connection is configured; the other is not yet configured.
All four computers (only one with 2x GbE ports) connect via the unmanaged switch. Two of the three NASes connect to the unmanaged switch; one NAS has a single GbE connection to the router (RT2600ac).
No VLANs are in use; everybody is on the same subnet.

With the upcoming 7.2 on the NAS, can the one computer that does have 2x GbE ports, connected to the unmanaged switch, get a link-aggregated ~2 Gb connection to the DS720+ running 7.2, if both GbE ports on each are connected to the unmanaged switch?

IF SO, then: how do all the other computers with only a 1 GbE connection still connect to the DS720+ without link aggregation, as they do now? Is link aggregation an all-or-nothing connection, or can it be shared with just one NAS?
And on the Supermicro: can it still connect to the other NASes without link aggregation?

I've looked at this off and on for months and cannot make sense of how it could work.

So I ask here...
 
Do you have an issue with the '.' key on your keyboard? It seems to be getting stuck.

Which devices are you considering using LAG on: just the DS720+, or both the NAS and the Supermicro server? And what new LAG feature is there in DSM 7.2? Hasn't this been around for a long time in DSM?

If you are just going to do it on the NAS, then each client connection will access the NAS via one port or the other, managed by the NAS. In effect, two clients could each have access via a separate port, while three would have to share the ports; the LAG type dictates how they are shared.

Since you have an unmanaged switch, you have to select Adaptive Load Balancing (or Balance-SLB, if you have vSwitch enabled). In this situation the three client connections cannot evenly share the aggregated 2 Gbps; instead, two of the connections will share one of the 1 GbE ports. For NAS that don't have those two options, you need a managed switch that supports LACP (802.3ad). When setting up LACP, some switches ask you to define the hash algorithm that manages the data flows (my TP-Link does; my Netgear does not):
[screenshot: switch LACP hash algorithm settings]
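To picture why a single flow can't exceed 1 Gbps over a LAG, here's a minimal Python sketch of the kind of transmit-hash decision involved. The MACs, IPs, and port numbers below are made up, and real implementations (such as the Linux bonding driver's xmit_hash_policy) are more involved:

Code:
NUM_PORTS = 2  # two 1 GbE ports in the bond

def layer2_hash(src_mac: str, dst_mac: str) -> int:
    """Hash only on MAC addresses: everything between the same
    two machines always leaves on the same physical port."""
    s = int(src_mac.split(":")[-1], 16)
    d = int(dst_mac.split(":")[-1], 16)
    return (s ^ d) % NUM_PORTS

def layer3_4_hash(src_ip: str, dst_ip: str,
                  src_port: int, dst_port: int) -> int:
    """Hash on IPs and TCP/UDP ports: different connections can
    land on different links, but one flow still uses one link."""
    s = sum(int(octet) for octet in src_ip.split("."))
    d = sum(int(octet) for octet in dst_ip.split("."))
    return (s ^ d ^ src_port ^ dst_port) % NUM_PORTS

# Three 1 GbE clients opening SMB (port 445) sessions to the NAS:
for i, client_ip in enumerate(["192.168.1.101",
                               "192.168.1.102",
                               "192.168.1.103"]):
    chosen = layer3_4_hash(client_ip, "192.168.1.156", 50000 + i, 445)
    print(f"{client_ip} -> NAS port {chosen}")

Whatever the policy, each flow is pinned to one physical link, which is why two clients can each get a full port while a third has to share.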


If you access the NAS a lot using SMB then I wouldn't use LAG, as it doesn't play well with SMB multichannel, and that's in the SMB Service beta.
 
If you access the NAS a lot using SMB then I wouldn't use LAG, as it doesn't play well with SMB multichannel, and that's in the SMB Service beta.
I think the question here was about multichannel support, but I could be wrong. Bottom line: for multichannel you don't configure LAG at all; rather, you have both LAN adapters as separate interfaces in the same subnet.

The other end (a PC, for example) has to support SMB multichannel as well; it has to be a compatible x86 device, as M1/M2 Macs are not supported as of yet, and NAS-to-NAS via File Station also does not work.

Also, communication needs to go via SMB using the network name of the NAS, not a specific IP address (this includes the LAG one as well), and of course the SMB multichannel settings need to be configured.
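To make the "separate interfaces, same subnet" point concrete, here's a small Python sketch; the second address (.157) is purely hypothetical:

Code:
import ipaddress

# Two NAS LAN ports left as standalone interfaces (no bond/LAG),
# both in the same subnet. The .157 address is made up.
lan1 = ipaddress.ip_interface("192.168.1.156/24")
lan2 = ipaddress.ip_interface("192.168.1.157/24")

# SMB multichannel wants the adapters separate but co-resident:
assert lan1.network == lan2.network, "adapters must share a subnet"
print(f"both ports in {lan1.network}; the client can open a channel to each")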

Everything else in terms of LAG is the same in 7.2 as before, as @fredbert already stated.
 
Bottom line: for multichannel you don't configure LAG at all; rather, you have both LAN adapters as separate interfaces in the same subnet.
I can confirm this. My DS1520+ is currently configured, all 1 GbE:
  • LAN 1 + 2 as Balance-SLB LAG with 192.168.A.A, as default interface.
  • LAN 3 standalone interface with 192.168.A.B.
  • LAN 4 standalone interface with 192.168.A.C.
Mac Mini M1:
  • 10GbE port with 192.168.A.X, to 10 GbE switch port, as primary interface.
  • Thunderbolt 1GbE adapter (via TB4-TB2 adapter) with 192.168.A.Y to 1 GbE switch port.
  • WiFi 'ax' connection with 192.168.A.Z, with variable transmit rate.
    • Regardless of whether the Mac's WiFi is the primary interface with a high transmit rate or not, I cannot get WiFi to participate in multichannel.
The Mac's two Ethernet interfaces create active 1 Gbps connections to 192.168.A.A and .A.B. So this shows that the LAG itself doesn't get two connections; the second channel lands on the standalone interface instead.

Code:
       id         client IF             server IF   state                     server ip                 port   speed
========================================================================================================================
M    4562  en0    (Ethernet)  14         9          [session active       ]   192.168.A.B              445    1.0 Gb
ALT  4563  en6    (Ethernet)  30         8          [session active       ]   192.168.A.A              445    1.0 Gb
ALT  4564  en1    (wifi    )  11        10          [session inactive     ]   192.168.A.C              445    102.9 Mb

And here's a speed test; it varies, but I'm happy with anything better than the usual ~100 MB/s single-port performance:

[screenshot: SMB multichannel speed test result]
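For a rough sense of what "better than ~100 MB/s" should mean here, a quick back-of-the-envelope in Python (the ~7% protocol-overhead figure is an approximation, not a measured value):

Code:
# Rough SMB throughput ceilings over 1 GbE, before and after
# multichannel; the 7% framing/protocol overhead is approximate.
line_rate_bps = 1_000_000_000
per_link_MBps = line_rate_bps * (1 - 0.07) / 8 / 1e6
print(f"one 1 GbE link     : ~{per_link_MBps:.0f} MB/s")      # ~116
print(f"two-channel ceiling: ~{2 * per_link_MBps:.0f} MB/s")  # ~233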
 
In the end, it's not all that difficult a scenario. What you'd need:

  • A NAS model supporting a 10 GbE card, and the card itself
  • A minimum of Cat 6a network cabling
  • A switch that supports 10 GbE
  • 10 GbE network adapters for any wired PCs or Macs that you want faster speeds on

Note: performance will vary. I have found that spindle-to-spindle transfers are roughly 2x to 3x faster than 1 GbE. The fastest speeds come from using SSDs across the board on clients and NAS, which can work depending on what your use and storage needs are.
 
The expense of 10 GbE is out of the question (though all cables and the patch panel here are Cat 6). But I agree: if speed were needed, 10 GbE makes the most sense.
My only interest in 7.2 is LAG; none of the other 7.2 features interests me.
PS: SSDs are in all computers now; some have 2x in RAID 0 for fast temporary storage.
 
Does anyone know why ASi Macs don't play well with SMB Multichannel?

My Windows server (10 GbE) has no issue with multichannel, connecting here to my rustic ARM-powered RS217:

[screenshot: Windows SMB multichannel connections to the RS217]


With my macOS equivalent (10GbE) to the same RS217:

[screenshot: macOS SMB connection to the same RS217]


macOS shows 'Multichannel On' and reports both interfaces (10.0.1.40-41) on the RS217 as connected:

Code:
rob@Smaug ~ % sudo smbutil multichannel -a
Password:
Session: /Volumes/iMazing Apple iOS Backups
Info: Setup Time: 2023-04-18 10:51:22, Multichannel ON: no, Reconnect Count: 0
    Total RX Bytes: 3270363756, Total TX Bytes: 12678161210
       id         client IF             server IF   state                     server ip                 port   speed
========================================================================================================================
M     182     N/A                          N/A      [session active       ]   10.0.1.44                 445    N/A
Session: /Volumes/Spare Shared Folder 1
Info: Setup Time: 2023-04-25 13:04:57, Multichannel ON: yes, Reconnect Count: 0
    Total RX Bytes: 5245472684, Total TX Bytes: 9744198676
       id         client IF             server IF   state                     server ip                 port   speed
========================================================================================================================
M     338  en6    (Ethernet)  14         2          [session active       ]   10.0.1.40                 445    1.0 Gb
Server NIC:
    name: NA, idx: 3, type:    NA, speed 1.0 Gb, state idle
        ip_addr: 10.0.1.41
Server NIC:
    name: NA, idx: 2, type:    NA, speed 1.0 Gb, state connected
        ip_addr: 10.0.1.40
Client NIC:
    name: en6, idx: 14, type: wired, speed 10.0 Gb, state connected
        ip_addr: 10.0.1.16

rob@Smaug ~ %
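
If you'd rather not eyeball that dump every time, a small Python wrapper around the same smbutil command can summarise it. This is just a sketch (macOS only, needs sudo, and assumes at least one mounted SMB share, as above):

Code:
import subprocess

# Run the same command shown above and report, per mounted share,
# whether SMB multichannel was actually negotiated.
out = subprocess.run(
    ["sudo", "smbutil", "multichannel", "-a"],
    capture_output=True, text=True, check=True,
).stdout

share = None
for raw in out.splitlines():
    line = raw.strip()
    if line.startswith("Session:"):
        share = line.split("Session:", 1)[1].strip()
    elif "Multichannel ON:" in line and share:
        status = line.split("Multichannel ON:", 1)[1].split(",")[0].strip()
        print(f"{share}: multichannel {status}")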

❔

☕
 
Unless you live in a mansion, or run an RF-spewing industrial workshop in every room, the humble Cat5e is easily enough for 10 GbE. It's also easier to work with and terminate, and you can get more cables through any given hole.

☕
Cat 6a is the minimum cabling standard for 10 GbE. The humble Cat5e will max out at ~110 MB/s.
 
Cat 6a is the minimum cabling standard for 10 GbE. The humble Cat5e will max out at ~110 MB/s.
You are mixing standards with actual capability. If you don't have a challenging environment and/or aren't running at the maximum permissible length, then the humble Cat5e will not 'max out' at 110 MB/s. It's just a passive connection with identical wiring; just where do you think the extra data will disappear to?

[Just Robbie typing, from a largish UK home with 10 GbE networking over Cat5e, with extensive experience of RF propagation and exploitation, who has actively contributed to the standards bodies and is a previous member of the frequency allocation committee...]
 
My Cat 5e, longest run:

2.00m stranded patch to wall plate
27.00m solid conductors to wall plate
4.00m stranded patch to rear patch panel
0.75m stranded patch to front patch panel
+ 0.10m stranded patch to switch
_______
33.85m Total (inc 10 interconnects)

[screenshot: speed test over the 33.85 m Cat5e run]


No point quoting random internet sources without mentioning what the actual standards are, what test conditions were used, or why those conditions were chosen. The actual copper paths are identical; the only things changed for the higher standards are physical separation and twist.

☕
 
In 2013, when we built the house, I ran wires to boxes installed in the rooms: Cat 6 for the Ethernet runs, Cat 5e for phones, quad-shielded RG6 for all RF runs, and a couple of HDMI feeds from the edit system to the theater.
When the cable company installed, I wanted them to use my cable. They balked until they tested my quad RG6, which tested better than their own, so they installed on my cables.
With the exception of the buried-in-dry-pipe runs to the garage (~130'), which work fine at GbE speeds now, my longest Cat 6 run is ~45'-50' in the house, so I'm good for 10 GbE if I get new gear. The 48-port patch panel is also Cat 6 rated.
So I have good runs for the future.
Unfortunately, no current gear has a spare slot (Supermicro) for a 10 GbE card; all the other computers' Ethernet is 1 GbE, as are the switches, routers, modem, and NASes. And with Synology blocking USB 2.5/5 GbE Ethernet adapters, the obvious 'simple' upgrade is off the table. Long before this, I got around the NAS bandwidth issue with 2x SSDs in RAID 0 for fast temporary storage on one laptop and the Supermicro, so at present I have no need to feed video to the edit system from a NAS.
But a speedup for storing large .iso files generated on the Supermicro to a NAS: THAT would certainly 'be nice'.
So, with the present limitations, though I might try LAG if it can be done, I'll definitely watch 10 GbE from the sidelines, as it would be incredibly $$$$$ and I'm not willing to replace nearly everything.
 
But a speedup for storing large .iso files generated on the Supermicro to a NAS: THAT would certainly 'be nice'.
This is why I did the upgrade: a 10 GbE NIC for one PC, an unmanaged switch with two multi-gig ports, and the NAS add-on NIC, at a cost of ~$500 or so. I intend to replace my current switch with one that has five 10 GbE ports later this year. Costs keep dropping (as they should...).
 
