When you create or configure a virtual machine, you can specify the network adapter (NIC) type.
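As a concrete illustration, the vSphere API exposes each adapter type as a separate virtual device class (for example vim.vm.device.VirtualE1000 or vim.vm.device.VirtualVmxnet3), so picking a type is largely a matter of which class you instantiate. The following is a minimal pyVmomi sketch, not the only way to do it; it assumes you already have a connected session, a VirtualMachine object vm, and a vim.Network object network to attach to.

```python
from pyVmomi import vim

def add_nic(vm, network, adapter_cls=vim.vm.device.VirtualVmxnet3):
    """Add a NIC of the chosen adapter class to an existing VM (sketch)."""
    nic = adapter_cls()                            # e.g. VirtualE1000, VirtualVmxnet3
    nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
    nic.backing.network = network                  # the port group / network to use
    nic.backing.deviceName = network.name
    nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
    nic.connectable.startConnected = True

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    change.device = nic

    # Reconfigure the VM with the new device; returns a vim.Task you can monitor.
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))
```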
Vlance: Emulated version of the AMD 79C970 PCnet32 LANCE NIC, an older 10 Mbps NIC with drivers available in most 32-bit guest operating systems except Windows Vista and later. A virtual machine configured with this network adapter can use its network immediately.
Flexible: Identifies itself as a Vlance adapter when a virtual machine boots, but initializes itself and functions as either a Vlance or a VMXNET adapter, depending on which driver initializes it. Without VMware Tools installed, it runs in Vlance mode; once you install VMware Tools, it changes to the high-performance VMXNET adapter.
E1000: Emulated version of the Intel 82545EM Gigabit Ethernet NIC, with drivers available in most newer guest operating systems, including Windows XP and later and Linux versions 2.4.19 and later.
VMXNET: Optimized for performance in a virtual machine and has no physical counterpart. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET network adapter available.
Enhanced VMXNET (VMXNET 2): Based on the VMXNET adapter but provides high-performance features commonly used on modern networks, such as jumbo frames and hardware offloads. VMXNET 2 (Enhanced) is available only for some guest operating systems on ESX/ESXi 3.5 and later.
VMXNET 3: Next generation of a paravirtualized NIC designed for performance. VMXNET 3 offers all the features available in VMXNET 2 and adds several new features, such as multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. VMXNET 3 is not related to VMXNET or VMXNET 2.
SR-IOV (Single Root I/O Virtualization): Supported by vSphere 5.1 and later. SR-IOV is a specification that allows a single Peripheral Component Interconnect Express (PCIe) physical device under a single root port to appear as multiple separate physical devices to the hypervisor or the guest operating system.
SR-IOV uses physical functions (PFs) and virtual functions (VFs) to manage global functions for the SR-IOV devices. PFs are full PCIe functions that include the SR-IOV Extended Capability, which is used to configure and manage the SR-IOV functionality. It is possible to configure or control PCIe devices using PFs, and the PF has full ability to move data in and out of the device. VFs are lightweight PCIe functions that contain all the resources necessary for data movement but have a carefully minimized set of configuration resources.
SR-IOV-enabled PCIe devices present multiple instances of themselves to the guest OS instance and hypervisor. The number of virtual functions presented depends on the device. For SR-IOV-enabled PCIe devices to function, you must have the appropriate BIOS and hardware support, as well as SR-IOV support in the guest driver or hypervisor instance.
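From the management side, the vSphere API reports per-device SR-IOV state on each host as HostSriovInfo entries within the PCI passthrough configuration. The sketch below (again pyVmomi, assuming a connected session and a HostSystem object host) simply walks that list and prints which devices are SR-IOV capable and how many virtual functions they expose; it is an illustration of the PF/VF model described above, not a complete tool.

```python
from pyVmomi import vim

def report_sriov(host):
    """Print SR-IOV state and VF counts for a host's PCI devices (sketch)."""
    for info in host.config.pciPassthruInfo or []:
        # SR-IOV capable devices are reported as SriovInfo objects.
        if isinstance(info, vim.host.SriovInfo):
            print(f"{info.id}: enabled={info.sriovEnabled}, "
                  f"VFs={info.numVirtualFunction}/"
                  f"{info.maxVirtualFunctionSupported}")
```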
SR-IOV Architecture: [SR-IOV architecture diagram. Image: VMware]
You should choose this adapter type if you are running a latency-sensitive workload and need a high-performance adapter. However, check whether your guest operating system supports this adapter type, because only a limited set of operating systems do.
Also, SR-IOV is not compatible with a number of virtualization features, such as vMotion, DRS, and FT, so those features will be unavailable. See the VMware KB article on SR-IOV for details.
You can also check the VMware KB article on which adapter you should choose for a virtual machine.
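If you want to see which adapter types your existing virtual machines are already using, the same device classes can be read back from a VM's configuration. A minimal sketch, assuming a connected pyVmomi session and a VirtualMachine object vm:

```python
from pyVmomi import vim

def list_nic_types(vm):
    """Return (label, adapter class name) for each NIC in a VM (sketch)."""
    nics = []
    for dev in vm.config.hardware.device:
        # All adapter types (E1000, VMXNET 3, SR-IOV, ...) derive from VirtualEthernetCard.
        if isinstance(dev, vim.vm.device.VirtualEthernetCard):
            nics.append((dev.deviceInfo.label, type(dev).__name__))
    return nics
```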