TL;DR
(1) Only the first 2 VFs are passed through to the VM, and (2) no traffic reaches the VM.
Setup
- Host is Ubuntu 16.04
- Intel 82599 (supports SR-IOV) attached via PCIe
- Driver: `ixgbe`
- Guest VM is Ubuntu 16.10
- Using libvirt on KVM as the hypervisor
Process
Trying to utilize the SR-IOV functionality. Setting `sriov_numvfs` to 4 on each NIC yields 4 VFs per NIC, as sketched below. The VM is then started and connected to both NICs on the Intel 82599, and a traffic generator is used to test the setup.
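For reference, a minimal sketch of the VF-enabling step (the PF interface names are placeholders; take the real ones from `ip link`):

```bash
# Enable 4 VFs on each PF via sysfs (PF interface names are placeholders).
echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs
echo 4 > /sys/class/net/enp3s0f1/device/sriov_numvfs

# Confirm all 8 VFs appeared on the PCI bus.
lspci | grep -i "Virtual Function"
```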
Issue
Prior to running the VM, the `ixgbe` driver creates 8 more links (one per VF) on the host; all are visible in `ip link` and are in state DOWN. After VM activation, only 2 VFs (the first VF of each NIC, i.e. function 0) are passed through to the VM.
Before VM activation
- Checking `lspci` on the host shows both NICs and all VFs on the PCI bus.
- Checking `ip link` on the host shows all 8 links created by the driver (state DOWN, with assigned MAC addresses) and both card NICs (exact commands sketched below).
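For completeness, the host-side checks boil down to roughly this (PF name is a placeholder):

```bash
lspci | grep -i 82599         # PFs and VFs on the PCI bus
ip link show                  # 8 VF links, state DOWN, MACs assigned
ip link show enp3s0f0         # PF view also lists per-VF "vf 0..3" entries
```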
After VM activation
- Checking `lspci` on the host shows both NICs and all VFs on the PCI bus.
- Checking `lspci` in the guest shows only the 2 transferred VFs.
- Checking `ip link` on the host shows the remaining 6 links created by the driver (with assigned MAC addresses), both card NICs, and all VFs (with assigned MAC addresses).
- Checking `ip link` in the guest shows the 2 NICs connected to the VFs (MAC addresses correct and matching the hardware).
- Checking libvirt on the host (`virsh net-dumpxml` on both NICs' networks) shows all 8 VFs sorted and attached to the VM; a sketch of such a network definition follows below.
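For context, the networks were defined along the lines of the pool-based `hostdev` forwarding mode from the libvirt docs; a sketch, with the network name and PF device as placeholders:

```bash
# Sketch of one of the two VF-pool networks (one per PF).
cat > sriov-net-0.xml <<'EOF'
<network>
  <name>sriov-net-0</name>
  <forward mode='hostdev' managed='yes'>
    <pf dev='enp3s0f0'/>   <!-- PF; libvirt derives the VF pool from it -->
  </forward>
</network>
EOF
virsh net-define sriov-net-0.xml
virsh net-start sriov-net-0
virsh net-dumpxml sriov-net-0   # shows the VFs in the pool
```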
But: no traffic reaches the VM, while traffic from the VM to the outside works.
Any ideas?
Working on it
1
Trying to bypass the driver's automated process, following this link, the VM is started with two bridged networks to the 2 card NICs. The VM comes up normally and there is traffic from both NICs. Next, the new device is added using the `virsh attach-device` command (sketched below), and the command executes successfully. At first the XML file contains only the PCI address of the VF. No apparent change is evident in the VM: nothing in `ip link`, nothing in `lspci`. The `--config` flag was raised, so the state is checked again after a reboot, and again nothing. Next, the PCI address of the NIC (the PF) is added explicitly, and the VF MAC address is also specified explicitly. After `virsh attach-device` with the explicit parameters: still nothing.
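A sketch of the attach flow described above, in the standard libvirt `hostdev` interface form (the guest name, VF PCI address, and MAC are placeholders taken from `lspci`/`ip link`):

```bash
cat > vf-hostdev.xml <<'EOF'
<interface type='hostdev' managed='yes'>
  <mac address='52:54:00:6d:90:02'/>            <!-- placeholder VF MAC -->
  <source>
    <!-- placeholder VF PCI address, e.g. 0000:03:10.0 from lspci -->
    <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
</interface>
EOF
virsh attach-device guest-vm vf-hostdev.xml --config
```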
2
Going basic, following this link, the PCI device is detached from the host manually and injected into the VM (a sketch follows below). The end result is that the PCIe card is not a vHBA and consequently not NPIV-compatible (see here), and an error message notifies of this accordingly.
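A sketch of the manual detach/inject flow (the node-device name is a placeholder derived from the VF's PCI address):

```bash
virsh nodedev-list | grep pci_0000_03     # locate the VF node device
virsh nodedev-detach pci_0000_03_10_0     # unbind the VF from the host
virsh nodedev-dumpxml pci_0000_03_10_0    # verify which driver holds it
# ...attach it to the guest, and hand it back to the host afterwards:
virsh nodedev-reattach pci_0000_03_10_0
```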
3
Another approach is using the `passthrough` forwarding mode, as described here (sketched below). This is not a desired mode of work, since it intentionally allows only one vNIC access to one NIC at a time (and the whole purpose is utilizing the SR-IOV functionality). The behavior is similar to the `hostdev` forwarding mode: if the NIC name is stated in the `pf` directive it works like a basic bridge, and if the VF name is stated in the `pf` directive there's nothing.
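For reference, a sketch of such a passthrough network; the libvirt docs use an `<interface dev=.../>` element inside the `<forward>` block, and the device name here is a placeholder:

```bash
cat > passthrough-net.xml <<'EOF'
<network>
  <name>passthrough-net</name>
  <forward mode='passthrough'>
    <interface dev='enp3s0f0'/>   <!-- placeholder; a VF link name here did nothing -->
  </forward>
</network>
EOF
virsh net-define passthrough-net.xml
virsh net-start passthrough-net
```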
4
Similar to the passthrough approach, there is the MACvTap approach, described here, here and here (sketched below). This is not applicable: the `ixgbe` driver sets the VF link names, so they are treated differently. There is no option to specify the names of the VFs as interfaces, and stating the interface name results in passing the interface similarly to the `passthrough` forwarding mode. This might result from the driver version, the kernel version, the libvirt version, or some combination of them.
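A sketch of the MACvTap (type `direct`) attachment that was attempted (guest and device names are placeholders):

```bash
cat > macvtap-if.xml <<'EOF'
<interface type='direct'>
  <source dev='enp3s0f0' mode='bridge'/>   <!-- placeholder device name -->
  <model type='virtio'/>
</interface>
EOF
virsh attach-device guest-vm macvtap-if.xml --config
```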
5
Changing SFPs doesn't seem to help either. Several different models were tried, none of which worked with the Intel card, except for one that did receive power and brought the link up (it was visible on the PCIe bus), but it was not detected by the `ixgbe` driver or any other kernel module, and no interfaces were created.
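One untested assumption worth noting: by default `ixgbe` rejects SFP+ modules it does not recognize, which would match the "link up but no driver detection" symptom. Reloading the module with `allow_unsupported_sfp` might change that:

```bash
# Warning: removing ixgbe drops every interface the driver owns on the host.
modprobe -r ixgbe
modprobe ixgbe allow_unsupported_sfp=1
dmesg | grep -i sfp    # check whether the module is accepted now
```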