Use the virtio interface instead. It's a paravirtualized device exposed by the host to the guest's kernel. There's no emulated network adapter in the guest anymore, which reduces driver overhead significantly. Support for it has been included in all Linux kernels ⩾ 2.6.25.
So-called "full virtualization" is a nice feature because it allows you to run any operating system virtualized. However, it's slow because the hypervisor has to emulate actual physical devices such as RTL8139 network cards . This emulation is both complicated and inefficient.
Virtio is a virtualization standard for network and disk device drivers where just the guest's device driver "knows" it is running in a virtual environment, and cooperates with the hypervisor. This enables guests to get high performance network and disk operations, and gives most of the performance benefits of paravirtualization.
https://wiki.libvirt.org/page/Virtio
See also the virtio tag.
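As a concrete illustration, under QEMU/KVM you can request a virtio NIC directly on the command line. This is only a minimal sketch; `guest.img` and `br0` are placeholders for your disk image and a host bridge that qemu-bridge-helper is allowed to use:

```
# Plug a virtio-net NIC into the guest instead of an emulated card such as the RTL8139.
qemu-system-x86_64 -m 2048 \
  -drive file=guest.img,format=qcow2 \
  -netdev bridge,id=net0,br=br0 \
  -device virtio-net-pci,netdev=net0
```

With libvirt, the same choice is expressed by `<model type='virtio'/>` inside the guest's `<interface>` definition, as described on the wiki page linked above.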
I don't know the exact name for virtio in VMware, but it looks like it's VMXNET 3:
VMXNET 3: The VMXNET 3 adapter is the next generation of a paravirtualized NIC designed for performance, and is not related to VMXNET or VMXNET 2. It offers all the features available in VMXNET 2, and adds several new features like multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. For information about the performance of VMXNET 3, see Performance Evaluation of VMXNET3 Virtual Network Device.
Choosing a network adapter for your virtual machine (1001805)
You can read VMware's Performance Evaluation of VMXNET3 Virtual Network Device.
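In practice you normally just pick "VMXNET 3" as the adapter type when adding the NIC in the vSphere or Workstation UI. If you edit the .vmx file directly, the adapter model is controlled by the virtualDev key; a minimal sketch for the first NIC (ethernet0), assuming the guest has the vmxnet3 driver available (via VMware Tools or a recent Linux kernel):

```
# .vmx snippet: use the paravirtualized VMXNET 3 adapter for the first virtual NIC
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
```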
For more information, read VirtualBox's recommendations on improving performance, many of which apply to VMware as well:
VirtualBox provides a variety of virtual network adapters that can be "attached" to the host's network in a number of ways. Depending on which types of adapters and attachments are used the network performance will be different. Performance-wise the virtio network adapter is preferable over Intel PRO/1000 emulated adapters, which are preferred over PCNet family of adapters. Both virtio and Intel PRO/1000 adapters enjoy the benefit of segmentation and checksum offloading. Segmentation offloading is essential for high performance as it allows for less context switches, dramatically increasing the sizes of packets that cross VM/host boundary.
Three attachment types: internal, bridged and host-only, have nearly identical performance, the internal type being a little bit faster and using less CPU cycles as the packets never reach the host's network stack. The NAT attachment is the slowest (and safest) of all attachment types as it provides network address translation. The generic driver attachment is special and cannot be considered as an alternative to other attachment types.
The number of CPUs assigned to VM does not improve network performance and in some cases may hurt it due to increased concurrency in the guest.
Here is the short summary of things to check in order to improve network performance:
- Whenever possible use virtio network adapter, otherwise use one of Intel PRO/1000 adapters;
- Use bridged attachment instead of NAT;
- Make sure segmentation offloading is enabled in the guest OS. Usually it will be enabled by default. You can check and modify the offloading settings using the ethtool command in Linux guests (see the sketch after this list).
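A rough sketch of both the host-side and guest-side steps; the VM name "myvm" and the interface names eth0/enp0s3 are placeholders, not anything prescribed by the sources above:

```
# Host: switch the VM's first NIC to the virtio model and bridge it to the host's eth0
# (the VM must be powered off for VBoxManage modifyvm to take effect).
VBoxManage modifyvm "myvm" --nictype1 virtio --nic1 bridged --bridgeadapter1 eth0

# Guest (Linux): list the current offload settings of the guest NIC...
ethtool -k enp0s3
# ...and re-enable TCP/generic segmentation offload if it has been turned off.
sudo ethtool -K enp0s3 tso on gso on
```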
Perform a full, detailed analysis of network traffic on the VM's network adapter using a third-party tool such as Wireshark. To do this, a promiscuous mode policy needs to be set on the VM's network adapter. This mode is only available for the following attachment types: NAT Network, Bridged Adapter, Internal Network and Host-only Adapter.
To set up a promiscuous mode policy, either select one from the drop-down list in the Network Settings dialog for the network adapter, or use the command-line tool VBoxManage; for details, refer to Section 8.8, “VBoxManage modifyvm”.
Promiscuous mode policies are:
- deny (the default), which hides any traffic not intended for this VM's network adapter;
- allow-vms, which hides all host traffic from this VM's network adapter but allows it to see traffic from/to other VMs;
- allow-all, which removes all restrictions: this VM's network adapter sees all traffic.
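For example, to capture everything with Wireshark you could switch the first adapter of a VM to allow-all from the command line. A minimal sketch, where "myvm" is just a placeholder VM name:

```
# Let the VM's first NIC see all traffic on its network (the default policy is deny);
# revert with --nicpromisc1 deny once the capture is finished.
VBoxManage modifyvm "myvm" --nicpromisc1 allow-all
```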