
Jumbo frames Brocade ICX6610

Homelab Dec 5, 2024

A quick post on enabling jumbo frames on the Brocade ICX6610 switch and setting up a separate storage network between my Proxmox nodes and my TrueNAS VM.

Enabling jumbo frames

On the switch

On the Brocade switch you first have to enable jumbo frames globally and reload the switch for the setting to take effect.

enable
conf t
jumbo
exit
wri mem
reload

Then you can create virtual (ve) interfaces per VLAN, each with its own MTU value.

enable
conf t
vlan 10
router-interface ve 10
interface ve 10
ip address 10.33.10.9/24
ip mtu 1500
exit
wri mem

Normal MTU example

enable
conf t
vlan 80
router-interface ve 80
interface ve 80
ip address 10.33.80.9/24
ip mtu 9216
exit
wri mem

Jumbo MTU example
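To verify that jumbo frames actually pass end-to-end, a ping with the Don't Fragment flag set is a quick check; the payload has to leave room for the 20-byte IPv4 header and the 8-byte ICMP header. A small sketch of that arithmetic:

```python
# Largest ICMP echo payload that still fits in a single frame:
# MTU minus 20 bytes IPv4 header minus 8 bytes ICMP header.
def max_ping_payload(mtu: int) -> int:
    return mtu - 20 - 8

print(max_ping_payload(9216))  # 9188 on the jumbo VLAN
print(max_ping_payload(1500))  # 1472 on a standard VLAN
```

With Linux iputils ping, something like `ping -M do -s 9188 10.33.80.10` should succeed on the jumbo subnet, while anything larger should fail with a "message too long" error instead of being silently fragmented.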

In Proxmox

  • On each node go to the System > Network tab
  • Select the interface that needs jumbo frames and click the edit button
    • Select Advanced and enter the MTU value 9216
    • Press OK
  • Create or edit the bridge, e.g. vmbr2, attached to that interface
    • Select Advanced and enter the MTU value 9216
    • Press OK
  • Click Apply Configuration
  • Go to the Datacenter > SDN > Zones tab
  • Add a vlan zone
    • Give it a name, e.g. san
    • Select the bridge vmbr2
    • Enter MTU value 9216
    • Select the nodes
    • Press OK
  • Go to the Datacenter > SDN > VNets tab
  • Click Create button
    • Give it a name, e.g. san
    • Alias: Storage Devices
    • Zone: san
    • Tag: 80
    • Press OK
  • Create a subnet
    • Subnet: 10.33.80.0/24
    • Gateway: 10.33.80.9
  • Go to the Datacenter > SDN and press Apply
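For reference, the GUI steps above boil down to plain ifupdown2 configuration in /etc/network/interfaces on each node. A sketch of what it might end up looking like (the NIC name enp5s0 is an assumption, substitute your own):

```
auto enp5s0
iface enp5s0 inet manual
        mtu 9216

auto vmbr2
iface vmbr2 inet manual
        bridge-ports enp5s0
        bridge-stp off
        bridge-fd 0
        mtu 9216
```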

In TrueNAS

  • Select the VM
  • Go to the Hardware tab
  • Add a Network Device
    • Bridge: san
    • Model: VirtIO
    • Select Advanced and enter MTU: 1 (in Proxmox an MTU value of 1 makes the virtual NIC inherit the bridge's MTU)
    • Press OK
  • Open TrueNAS webui
  • Go to Network
  • Edit the new interface
    • MTU: 9216 (this is the maximum value TrueNAS Scale allows)
    • IP Address: 10.33.80.10/24
    • Press Save
  • Press Test Changes
  • Press Save Changes
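After applying everything, it's worth confirming the MTU actually stuck on every hop (node NIC, bridge, VM NIC). On any Linux box, including a Proxmox node or a TrueNAS shell, the kernel exposes the value under /sys; a tiny sketch:

```python
from pathlib import Path

def iface_mtu(name: str) -> int:
    """Read the configured MTU of a network interface from sysfs."""
    return int(Path(f"/sys/class/net/{name}/mtu").read_text())

print(iface_mtu("lo"))  # e.g. 65536 on a typical Linux host
```

`ip link show` gives you the same numbers; the point is to check each hop, because one forgotten 1500 anywhere in the chain silently caps the whole path.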

Does it matter?

All this trouble, but does it make a difference? In theory it should, but does it still matter in 2024? Subnet 10.33.80.0/24 uses jumbo frames with an MTU of 9216, while subnet 10.33.30.0/24 uses the default MTU of 1500.
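Before looking at measurements, a back-of-the-envelope estimate. Every Ethernet frame carries fixed wire overhead (8 bytes preamble/SFD, 14 bytes MAC header, 4 bytes FCS, 12 bytes inter-frame gap), plus 20 bytes IPv4 and 20 bytes TCP headers per packet (ignoring TCP options like timestamps). The theoretical gain from jumbo frames is then just a ratio of payload efficiencies:

```python
WIRE_OVERHEAD = 8 + 14 + 4 + 12   # preamble/SFD + MAC header + FCS + inter-frame gap
IP_TCP_HEADERS = 20 + 20          # IPv4 + TCP, no options

def tcp_efficiency(mtu: int) -> float:
    """Fraction of time on the wire spent carrying TCP payload."""
    payload = mtu - IP_TCP_HEADERS
    return payload / (mtu + WIRE_OVERHEAD)

gain = (tcp_efficiency(9216) / tcp_efficiency(1500) - 1) * 100
print(f"{tcp_efficiency(1500):.2%}  {tcp_efficiency(9216):.2%}  +{gain:.1f}%")
```

So on paper, roughly a 4.5% throughput gain, which is in the same ballpark as the iperf numbers below.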

Iperf

In iperf3 I see 9.85 versus 9.35 Gbits/s on the receiver, a difference of about 5.3%.

-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 10.33.80.117, port 42088
[  5] local 10.33.80.116 port 5201 connected to 10.33.80.117 port 42094
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.15 GBytes  9.84 Gbits/sec
[  5]   1.00-2.00   sec  1.15 GBytes  9.85 Gbits/sec
[  5]   2.00-3.00   sec  1.15 GBytes  9.86 Gbits/sec
[  5]   3.00-4.00   sec  1.15 GBytes  9.86 Gbits/sec
[  5]   4.00-5.00   sec  1.15 GBytes  9.86 Gbits/sec
[  5]   5.00-6.00   sec  1.15 GBytes  9.86 Gbits/sec
[  5]   6.00-7.00   sec  1.15 GBytes  9.86 Gbits/sec
[  5]   7.00-8.00   sec  1.15 GBytes  9.85 Gbits/sec
[  5]   8.00-9.00   sec  1.15 GBytes  9.84 Gbits/sec
[  5]   9.00-10.00  sec  1.15 GBytes  9.85 Gbits/sec
[  5]  10.00-10.00  sec   403 KBytes  9.51 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  11.5 GBytes  9.85 Gbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------
Accepted connection from 10.33.30.117, port 45260
[  5] local 10.33.30.116 port 5201 connected to 10.33.30.117 port 45272
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.08 GBytes  9.27 Gbits/sec
[  5]   1.00-2.00   sec  1.09 GBytes  9.36 Gbits/sec
[  5]   2.00-3.00   sec  1.09 GBytes  9.35 Gbits/sec
[  5]   3.00-4.00   sec  1.09 GBytes  9.36 Gbits/sec
[  5]   4.00-5.00   sec  1.09 GBytes  9.35 Gbits/sec
[  5]   5.00-6.00   sec  1.09 GBytes  9.36 Gbits/sec
[  5]   6.00-7.00   sec  1.09 GBytes  9.36 Gbits/sec
[  5]   7.00-8.00   sec  1.09 GBytes  9.35 Gbits/sec
[  5]   8.00-9.00   sec  1.09 GBytes  9.35 Gbits/sec
[  5]   9.00-10.00  sec  1.09 GBytes  9.36 Gbits/sec
[  5]  10.00-10.00  sec   189 KBytes  6.18 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  10.9 GBytes  9.35 Gbits/sec                  receiver
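The relative difference between the two receiver averages works out as:

```python
jumbo, standard = 9.85, 9.35   # receiver averages in Gbit/s from the two runs above
diff_pct = (jumbo - standard) / standard * 100
print(f"{diff_pct:.1f}%")  # 5.3%
```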

dd and netcat

On the receiving machine run:

netcat -l -p 9919 > /dev/null

From the sender, using bash's built-in /dev/tcp redirection:

dd if=/dev/zero bs=4M count=1024 > /dev/tcp/10.33.80.116/9919

Pushing this raw data sequentially with dd I see no difference between the two subnets.

root@jpl-proxmox7:~# dd if=/dev/zero bs=4M count=1024 > /dev/tcp/10.33.30.116/9919
1024+0 records in
1024+0 records out
4294967296 bytes (4,3 GB, 4,0 GiB) copied, 3,98586 s, 1,1 GB/s
root@jpl-proxmox7:~# dd if=/dev/zero bs=4M count=1024 > /dev/tcp/10.33.30.116/9919
1024+0 records in
1024+0 records out
4294967296 bytes (4,3 GB, 4,0 GiB) copied, 3,75894 s, 1,1 GB/s
root@jpl-proxmox7:~# dd if=/dev/zero bs=4M count=1024 > /dev/tcp/10.33.30.116/9919
1024+0 records in
1024+0 records out
4294967296 bytes (4,3 GB, 4,0 GiB) copied, 3,95569 s, 1,1 GB/s

root@jpl-proxmox7:~# dd if=/dev/zero bs=4M count=1024 > /dev/tcp/10.33.80.116/9919
1024+0 records in
1024+0 records out
4294967296 bytes (4,3 GB, 4,0 GiB) copied, 3,9944 s, 1,1 GB/s
root@jpl-proxmox7:~# dd if=/dev/zero bs=4M count=1024 > /dev/tcp/10.33.80.116/9919
1024+0 records in
1024+0 records out
4294967296 bytes (4,3 GB, 4,0 GiB) copied, 3,90774 s, 1,1 GB/s
root@jpl-proxmox7:~# dd if=/dev/zero bs=4M count=1024 > /dev/tcp/10.33.80.116/9919
1024+0 records in
1024+0 records out
4294967296 bytes (4,3 GB, 4,0 GiB) copied, 4,12261 s, 1,0 GB/s
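One reason the dd test comes out flat: converting those rates shows the pipeline itself tops out below what the link can carry. Taking the first run as an example:

```python
bytes_copied = 4294967296   # 4 GiB, from the dd output above
seconds = 3.98586           # first 10.33.30.x run
gbits = bytes_copied * 8 / seconds / 1e9
print(f"{gbits:.2f} Gbit/s")  # ~8.62
```

At roughly 8.6 Gbit/s the single-stream dd/netcat pipeline is the bottleneck, not the network, so the MTU difference never gets a chance to show up.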
