Problem

If you have jumbo frames configured on BrickStor and the storage becomes unavailable, or clients or hosts periodically lose their connections to the BrickStor, use the steps below to verify that jumbo frames are configured end to end between the hosts and the storage.

Solution

  1. Verify Jumbo Frames are configured on BrickStor by checking that the expected network VNIC has an MTU of 9000.
    # dladm show-linkprop -p mtu data0
    LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
    data0        mtu             rw   9000           9000           1500-9000
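    If the VALUE column does not show 9000, the MTU can be raised with dladm. This is a minimal sketch, assuming the VNIC is named data0 and that the underlying physical link and switch ports already allow an MTU of 9000; the link may need to be taken down before the change is accepted.
    # dladm set-linkprop -p mtu=9000 data0
    # dladm show-linkprop -p mtu data0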
  2. Connect to the ESX/ESXi host using an SSH session. (You will need to enable SSH on the ESX host first.)
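    If you already have local console (DCUI / ESXi Shell) access, one way to enable and start SSH is with vim-cmd; enabling the SSH service through the vSphere Client works as well. A sketch:
    # vim-cmd hostsvc/enable_ssh
    # vim-cmd hostsvc/start_ssh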

  3. Verify the MTU size is set to 9000 (jumbo frames) on your ESX physical NICs and vmkernel port; look for MTU 9000 in the output.
    Verify MTU size on host
    # esxcfg-nics -l
    Name    PCI           Driver      Link Speed     Duplex MAC Address       MTU    Description
    vmnic0  0000:02:00.00 e1000       Up   1000Mbps  Full   xx:xx:xx:xx:xx:xx 9000   Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
    vmnic1  0000:02:01.00 e1000       Up   1000Mbps  Full   xx:xx:xx:xx:xx:xx 9000   Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
    # esxcfg-vmknic -l
    Interface  Port Group/DVPort   IP Family IP Address   Netmask       Broadcast     MAC Address        MTU   TSO MSS  Enabled  Type
    vmk1       iSCSI               IPv4      10.10.10.10  255.255.255.0 10.10.10.255  XX:XX:XX:XX:XX:XX  9000  65535    true     STATIC
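    If either MTU is not 9000, it can be raised on the standard vSwitch and on the vmkernel port with esxcli. This is a sketch assuming a standard vSwitch named vSwitch1 and the iSCSI vmkernel port vmk1 shown above; on a distributed switch the MTU is changed on the switch itself in vCenter.
    # esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    # esxcli network ip interface set --interface-name=vmk1 --mtu=9000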
  4. In the command shell of the ESX host, run the ping command to make sure you can ping the storage:
    # vmkping x.x.x.x

    where x.x.x.x is the hostname or IP of the BrickStor.

    If you can't ping the storage from the host, go to the BrickStor and verify you can ping the host from the BrickStor. If you cannot ping in either direction, there is an underlying network problem that needs to be resolved first.
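    From the BrickStor side, a successful ping back to the host's vmkernel IP looks similar to this (a sketch using the example vmkernel address above):
    # ping 10.10.10.10
    10.10.10.10 is alive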
  5. Now run the vmkping command with the -s and -d options to ensure that jumbo frame packets are not being fragmented.
    # vmkping -d -s 8972 x.x.x.x
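
    The 8972-byte payload size comes from the jumbo MTU minus the protocol headers: 9000 (MTU) - 20 (IPv4 header) - 8 (ICMP header) = 8972 bytes of ICMP data. A reply to this don't-fragment ping confirms that a full-size jumbo frame crosses the path without being fragmented.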

    Note: If you have more than one vmkernel port on the same network (such as a heartbeat vmkernel port for iSCSI), then all vmkernel ports on that network must also be configured with Jumbo Frames (MTU: 9000). If other vmkernel ports on the same network have a lower MTU, the vmkping command will fail with the -s 8972 option. The -d option sets the DF (Don't Fragment) bit on the IPv4 packet.
  6. In ESXi 5.1 and later, you can specify which vmkernel port to use for outgoing ICMP traffic with the -I option:

    # vmkping -I vmkX x.x.x.x
    A successful ping response is similar to:
     
    # vmkping 10.1.2.1
    PING server(10.1.2.1): 56 data bytes
    64 bytes from 10.1.2.1: icmp_seq=0 ttl=64 time=10.245 ms
    64 bytes from 10.1.2.1: icmp_seq=1 ttl=64 time=0.935 ms
    64 bytes from 10.1.2.1: icmp_seq=2 ttl=64 time=0.926 ms
    --- server ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 0.926/4.035/10.245 ms

    An unsuccessful ping response is similar to:
     
    # vmkping 10.1.2.1
    PING server (10.1.2.1) 56(84) bytes of data.
    --- server ping statistics ---
    3 packets transmitted, 0 received, 100% packet loss, time 3017ms
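
    The options above can be combined, for example to send a full-size, don't-fragment ping out of a specific vmkernel port (a sketch assuming the iSCSI port is vmk1):
    # vmkping -I vmk1 -d -s 8972 x.x.x.x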

What to review if you are not seeing 100% successful pings

  • There may be switches between the host and the storage that do not have jumbo frames enabled on every port in the path; jumbo frames must be enabled end to end.
  • If you see intermittent ping success, this might also indicate that incompatible NICs are teamed on the VMotion port. Either team compatible NICs or set one of the NICs to standby; the current teaming policy can be reviewed with the esxcli commands shown below.
  • If you do not see a response when pinging by the hostname of the server, ping the IP address instead. Pinging the IP address lets you determine whether the problem is caused by hostname resolution. If you are testing connectivity to another VMkernel port on another server, remember to use the VMkernel port's IP address, because the server's hostname usually resolves to the service console address on the remote server.
  • In vSphere 5.5, VXLAN has its own vmkernel networking stack; therefore, ping connectivity testing between two different vmknics on the transport VLAN must be done from the ESXi console with either of these commands:

    Testing VXLAN in ESX 5.5 and Later
    # vmkping ++netstack=vxlan <vmknic IP> -d -s <packet size> 
    
    # esxcli network diag ping --netstack=vxlan --host <vmknic IP> --df --size=<packet size>
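
  • The host-side vSwitch MTU and NIC teaming policy can also be reviewed from the ESXi shell. This is a sketch assuming a standard vSwitch named vSwitch1 carries the storage vmkernel port:

    # esxcli network vswitch standard list --vswitch-name=vSwitch1
    # esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1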


    For more information on fixing or configuring the ESX host, see these VMware Knowledge Base articles:

  • Testing VMkernel network connectivity with the vmkping command (1003728)

  • iSCSI and Jumbo Frames configuration on ESX/ESXi (1007654)
