As you probably already know, there are basically two different schools in the virtualization camp: paravirtualization, where the guest OS is modified to cooperate with the hypervisor, and full (hardware-assisted) virtualization, or HVM, where the guest runs unmodified.
The two approaches are vastly different: the former requires extensive kernel modifications on both the guest and host OSes but gives you maximum performance, as both kernels are virtualization-aware and optimized for the typical workload they experience. The latter approach is totally transparent to the guest OS and often does not require many kernel-level changes on the host side but, as the guest OS is not virtualization-aware, it generally has lower performance.
So it appears that you have to make a conscious choice between performance and guest OS compatibility: the paravirtualized approach prioritizes performance, while the HVM one prioritizes compatibility. However, in this case it is possible to have the best of both worlds: by using paravirtualized guest device drivers in an otherwise HVM environment, you can have both compatibility and performance.
In short, a paravirtualized device driver is a limited, targeted form of paravirtualization, useful when running specific guest OSes for which paravirtualized drivers are available. While being largely transparent to the guest OS (you simply need to install a driver), such drivers relieve the virtualizer from emulating a real physical device (which is a complex operation, as it must emulate registers, ports, memory, etc.), replacing the emulation with much cheaper host-side calls. The KVM-based framework for writing paravirtualized drivers is called VirtIO.
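To make the distinction concrete, here is a sketch of how the two setups can be requested on a QEMU/KVM command line. The image path, memory size, and networking backend are hypothetical examples, not the configuration used in the tests below:

```shell
# Fully virtualized (emulated) devices: IDE disk controller + Intel e1000 NIC
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/var/lib/libvirt/images/guest.img,if=ide \
    -net nic,model=e1000 -net user

# Paravirtualized VirtIO devices: virtio-blk disk + virtio-net NIC
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/var/lib/libvirt/images/guest.img,if=virtio \
    -net nic,model=virtio -net user
```

The guest-visible difference is just the device model; with `if=virtio` and `model=virtio` the guest must have VirtIO drivers installed, as described below.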
Things are much more complex than this, of course. Anyway, in this article I am not going to explain in detail how a paravirtualized driver works, but to measure the performance implications of using one. Being a targeted form of paravirtualization that requires guest-specific drivers, VirtIO is obviously restricted to the areas where it matters most, so the disk and network subsystems are the prime candidates for paravirtualized drivers. Let's see if, and how, both Linux (CentOS 6 x86-64) and Windows (Win2012R2 x64) are affected by that paravirtualized goodness.
All tests ran on a Dell D620 laptop. The complete system specifications are:
On the guest side, we have:
The VirtIO paravirtualized drivers are already included in the standard Linux kernel, so for the CentOS guest no special action or installation was needed. On the Windows guest, I installed the VirtIO disk and network drivers from the virtio-0.1-74.iso package.
For quick disk benchmarks, I used dd on the Linux side and ATTO on the Windows one. To put additional strain on the guest disk subsystem and the host virtualizer, I ran all disk tests against a ramdisk: in this manner I was sure that any differences were not masked by the slow mechanical disk. Networking speed was measured with the same tool on both VMs: iperf, version 2.0.5.
Host CPU load was measured using mpstat.
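For reference, the network and CPU measurements boil down to invocations along these lines (the guest IP address and the test duration are hypothetical examples):

```shell
iperf -s                           # iperf 2.x server, run on the receiving side
iperf -c 192.168.122.100 -t 30     # client side: 30-second TCP throughput test
mpstat -P ALL 5                    # on the host: CPU load sampled every 5 seconds
```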
Ok, let's see the numbers...
The first graph shows CentOS 6 guest disk speed with and without the paravirtualized driver:
Native performance is included for reference only. We can see that the paravirtualized disk driver provides a good speedup versus the standard virtualized IDE controller. Anyway, both approaches remain far behind the native scores.
Net speed now:
In this case the paravirtualized network driver makes a huge difference: while it can't touch native speed, it is way ahead of the virtualized E1000 NIC. The RTL8139 was benchmarked out of pure curiosity, and it shows a strange behavior: while its output speed is in line with the emulated NIC's rated speed (100 Mb/s), its input speed is much higher (~400 Mb/s). Strange, but true.
While host CPU load is lower with the fully virtualized NICs, that is only because they deliver much lower performance. In other words, the Mb/s per CPU-load ratio is much higher with the paravirtualized network driver.
Let's see if the Windows guest has some surprises for us. Disk benchmark first:
This time, the fully virtualized IDE driver is far behind the paravirtualized one. In other words: always install the paravirtualized drivers when dealing with Windows guests.
Network, please:
The paravirtualized driver continues to be much better than the fully virtualized NICs.
It is obvious that the paravirtualized drivers are an important piece of the KVM ecosystem. While the fully virtualized drivers are quite efficient and remain the only way to support a large variety of guest OSes, you should really use a paravirtualized driver whenever one is available for your guest virtual machine.
Obviously, performance is only part of the equation, stability being even more important. Anyway, I found the current VirtIO driver release very stable, at least with the tested guests.
In short: when possible, use the VirtIO paravirtualized drivers!
Original: http://www.cnblogs.com/popsuper1982/p/3814841.html