Brief description: the host is Windows Server 2016 Datacenter, Build 14393 (UEFI), running the Hyper-V role. SR-IOV and Intel VT are enabled. 4x Intel® Optane™ SSD 900P 280GB PCIe NVMe 3.0 drives are attached to the host. To check DDA support, I ran the PowerShell survey script, and it reports that DDA is supported. An Ubuntu Server 16.04 (Gen 2) VM is deployed, and all required packages are installed in it. I then passed those 4x Optane SSDs through to the Ubuntu Server VM via DDA and applied the additional settings -LowMemoryMappedIoSpace 1GB -HighMemoryMappedIoSpace 4GB.
So the VM has the 4x NVMe drives connected via passthrough. Reboot/shutdown/power-on of the Ubuntu VM works without a problem, and there are no connectivity issues between the drives and the VM.
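For reference, the passthrough was set up along these lines (a minimal sketch of the usual DDA steps; the VM name and the device instance ID below are placeholders, not my actual values):

    $vmName = 'UbuntuSrv'   # placeholder VM name

    # DDA needs the automatic stop action set to TurnOff,
    # plus extra MMIO space reserved for the assigned devices.
    Set-VM -VMName $vmName -AutomaticStopAction TurnOff
    Set-VM -VMName $vmName -LowMemoryMappedIoSpace 1GB -HighMemoryMappedIoSpace 4GB

    # Repeated for each of the 4x Optane SSDs: get the device's location path,
    # disable it on the host, dismount it, and assign it to the VM.
    $instanceId   = '<NVMe controller instance id>'   # placeholder
    $locationPath = (Get-PnpDeviceProperty -InstanceId $instanceId `
                     -KeyName DEVPKEY_Device_LocationPaths).Data[0]
    Disable-PnpDevice -InstanceId $instanceId -Confirm:$false
    Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
    Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName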
The problem case: when the Windows Server host reboots, the Ubuntu Server VM fails to boot afterwards.
From what I found, I assume the NUMA node configuration is the problem: the VM's boot fails because some of the passed-through NVMe SSDs sit on a different NUMA node. For information, 2x SSDs are attached to NUMA node 0 and the other 2x SSDs to NUMA node 1.
The VM runs on NUMA node 1, and its boot fails; it boots only when the NVMe SSDs attached to NUMA node 0 are disconnected.
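For completeness, this is roughly how the NUMA placement of each drive can be checked on the host before the devices are dismounted for DDA (a sketch; the friendly-name filter is only an assumption about how the 900P controllers enumerate and may need adjusting):

    # List the host's NUMA nodes (processor/memory layout).
    Get-VMHostNumaNode

    # Report the NUMA node each NVMe controller is attached to.
    Get-PnpDevice -PresentOnly |
        Where-Object { $_.FriendlyName -like '*NVM Express*' -or $_.FriendlyName -like '*Optane*' } |
        ForEach-Object {
            [pscustomobject]@{
                Device   = $_.FriendlyName
                NumaNode = (Get-PnpDeviceProperty -InstanceId $_.InstanceId `
                            -KeyName DEVPKEY_Device_Numa_Node).Data
            }
        }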