5

I have a Linux VM running on KVM with virtio-net, and I want to check the link speed. How can I do that?

What I tried so far:

# ethtool eth0
Settings for eth0:
    Link detected: yes

It seems that ethtool does not support virtio-net (yet?). I have version 3.16-1 from Debian Jessie; does ethtool support it in a newer version? It seems version 6 is the newest one.

 # cat /sys/class/net/eth0/speed
cat: /sys/class/net/eth0/speed: Invalid argument


 # lspci | grep -iE --color 'network|ethernet'
00:12.0 Ethernet controller: Red Hat, Inc Virtio network device

  # lshw -class network
  *-network
       description: Ethernet interface
       product: Virtio network device
       vendor: Red Hat, Inc
       physical id: 12
       bus info: pci@0000:00:12.0
       logical name: eth0
       version: 00
       serial: 4e:ff:a8:bf:61:12
       width: 32 bits
       clock: 33MHz
       capabilities: msix bus_master cap_list rom ethernet physical
       configuration: broadcast=yes driver=virtio_net driverversion=1.0.0 ip=172.30.2.152 latency=0 link=yes multicast=yes
       resources: irq:10 ioport:c080(size=32) memory:febf2000-febf2fff memory:febe0000-febeffff

I found one link that describes the problem in the Red Hat KB, but unfortunately I do not have a subscription to read it.

ddio

3 Answers

14

Virtio is a paravirtualized driver, which means the OS and driver are aware that it's not a physical device. The driver is really an API between the guest and the hypervisor, so its speed is totally disconnected from any physical device or Ethernet standard.

This is a good thing, as it is faster than having the hypervisor pretend to be a physical device and apply an arbitrary "link speed" concept to the traffic.

The VM just dumps frames onto a bus and it's the host's job to deal with the physical devices; there is no need for the VM to know or care what the link speed of the host's physical devices is.

One of the advantages of this is that when packets are moving between two VMs on the same host, they can be sent as fast as the host's CPU can move them from one set of memory to another; setting a "link speed" here would just impose an unneeded speed limit.

This also allows the host to do adaptor teaming and spread traffic across multiple links without every VM needing to be explicitly configured to get the full bandwidth of the setup.

If you want to know how fast you can actually transfer data from your VM to another location, you need to do actual throughput tests with tools like iperf.
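A minimal sketch with classic iperf (the prompts and the address 192.0.2.10 are just placeholders for your own machines):

remote$ iperf -s                  # listen on the far end
vm$ iperf -c 192.0.2.10 -t 10     # transmit from the VM for 10 seconds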

slm
Nath
2

To expand on this a bit, because I too ran into this recently and was semi-confused by the lack of speed details when running ethtool in a VM:

$ ethtool eth0
Settings for eth0:
    Link detected: yes

When I looked at the lshw output:

$ lshw -class network -short
H/W path            Device      Class          Description
==========================================================
/0/100/3                        network        Virtio network device
/0/100/3/0          eth0        network        Ethernet interface

This tells us that the device driver being used by this VM is virtualized. In this case the VM is running on KVM, so it uses the virtio_* drivers for all its interactions with the "hardware".

$ lsmod | grep virt
virtio_rng             13019  0
virtio_balloon         13864  0
virtio_net             28096  0
virtio_console         28066  1
virtio_scsi            18453  2
virtio_pci             22913  0
virtio_ring            22746  6 virtio_net,virtio_pci,virtio_rng,virtio_balloon,virtio_console,virtio_scsi
virtio                 14959  6 virtio_net,virtio_pci,virtio_rng,virtio_balloon,virtio_console,virtio_scsi

These kernel modules are available to certain OSes (Linux, BSD, and Windows). With these drivers installed in your VM, the kernel in your VM has special access to the underlying hardware through the kernel that's running on your hypervisor.
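As a quick cross-check (assuming the interface is still called eth0), ethtool -i reports which driver the interface is bound to, which is another easy way to spot virtio; exact values will differ per VM:

$ ethtool -i eth0
driver: virtio_net
version: 1.0.0
bus-info: 0000:00:03.0
...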

Remember that there are two distinct types of hypervisors; ESX/vSphere is considered type-1. A reminder on the types:

  • Type-1, native or bare-metal hypervisors
  • Type-2 or hosted hypervisors

KVM is more akin to a type-2 hypervisor, but it has elements, such as virtio_*, that make it behave and perform more like a type-1, by exposing the hypervisor's underlying Linux kernel to the VMs in such a way that they get semi-direct access to it.

The speed of my NIC?

Given you're running on a paravirtualized hypervisor, you have to go onto the actual hypervisor and run ethtool against the physical NIC to find out its theoretical speed. In lieu of that, from inside the guest you can only find out by using something like iperf to benchmark the NIC under load, and experimentally determine what the NIC's speed appears to be.
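On the hypervisor that check would look something like this (em1 is only a placeholder for whichever physical NIC or bond backs the VM's bridge, and 1000Mb/s is just an example value):

host$ ethtool em1 | grep -i speed
    Speed: 1000Mb/s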

For example, here I have two servers that are running on two different hypervisors. First install iperf on both servers:

$ sudo yum install iperf

Then run iperf as a server on the host1 VM:

host1$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------ 

Then on a client VM, host2:

host2$ iperf -c 192.168.100.25
------------------------------------------------------------
Client connecting to 192.168.100.25, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.100.101 port 55854 connected with 192.168.100.25 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  10.0 GBytes  8.60 Gbits/sec

On host1's output you'll see this:

$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.100.25 port 5001 connected with 192.168.100.101 port 55854
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  10.0 GBytes  8.60 Gbits/sec

Here we can see that the NIC was able to reach 8.60 Gbits/sec.
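If a single TCP stream doesn't saturate the path, iperf can also run several streams in parallel and for longer (the stream count and duration here are arbitrary):

host2$ iperf -c 192.168.100.25 -P 4 -t 30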

slm
1

Cumulus Networks has upstreamed a patch that allows setting the speed reported by the virtio_net driver. It is useful in network simulations with their Cumulus VX VM product.

The patch is in Ubuntu Xenial; I'm not sure if it is in Fedora today.

More details can be found here: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1581132

Another reference is my blog post about why having a speed setting on the virtio driver is useful: http://linuxsimba.com/network-bonds-vagrant-libvirt
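With a kernel that carries the patch, the guest itself can set an advisory speed/duplex and ethtool then reports it afterwards (a sketch; the 10000 value is arbitrary, and an unpatched driver will simply refuse the command):

# ethtool -s eth0 speed 10000 duplex full
# ethtool eth0 | grep -i speed
    Speed: 10000Mb/s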

linuxsimba