Bench speed on KVM vs OpenVZ

Back to KVM vs OpenVZ.
I installed instances on two servers, one with each virtualization type.
When running `bench new-site sitename --install-app erpnext`, I found (by writing a script that runs the bench command and logs the timing) that the bench run on KVM is slower than on OpenVZ.
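For anyone who wants to reproduce this, a minimal timing wrapper along those lines (the function name and log format here are my own illustration, not the exact script I used) could look like:

```shell
# timecmd: run any command, append its exit status and elapsed seconds
# to a log file. Usage: timecmd <logfile> <command> [args...]
timecmd() {
  log="$1"; shift
  start=$(date +%s)
  "$@"
  status=$?
  end=$(date +%s)
  echo "$(date -u +%FT%TZ) cmd='$*' exit=$status elapsed=$((end - start))s" >> "$log"
  return $status
}
```

For example, `timecmd bench.log bench new-site sitename --install-app erpnext` on each server, then compare the `elapsed=` values in the logs.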

KVM:

CPU: 4x
RAM: 8GB
SSD: 100GB

OpenVZ:

CPU: 3x
RAM: 3GB
SSD: 40GB

Yes, the comparison is not really apples-to-apples, and this is not a real benchmark, but the results were (on average):
KVM: ± 290-300 secs
OVZ: ± 130-140 secs

I tried both the same and different providers.
When I checked with the provider, they said maybe the code is more optimized to run on OVZ.
Can anyone confirm that ERPNext on OpenVZ is faster than on KVM?
Or does the KVM provider perhaps just have bad hardware?
Thank you.

Well, your test was an interesting run, but it didn’t take into account the “virtualization” differences between the two server types and how each of them interacts with multiple client server instances.

KVM systems promise to deliver a specific amount of memory, CPU resources, and disk space to the client instance. OpenVZ systems “appear” to promise the same things; however, in reality they are far more sensitive to the load from other clients on the host than KVM systems are.

For example, if you are running an ERPNext system on an OpenVZ host service and everything is going well, you may be lured into a false sense of a good service level. However, if that hosting company also sells another client an instance on the same hardware that your ERPNext system is hosted on, you will start to notice some delays in how your system works. In an OpenVZ environment, there is very little in the way of throttling controls to stop any single virtual server from eating up all of the system resources available on the hardware. So, if the other unknown client’s server instance sharing your hardware suddenly starts some complex database crunching, your ERPNext instance could come to a complete halt. I have actually experienced this, and it is not pleasant. Likewise, if you started some large report in ERPNext that required maybe a year of data to be cataloged, you could potentially shut down someone’s website being hosted on the same hardware.
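A rough way to see this noisy-neighbour effect from inside a Linux guest (my own quick check, not a formal diagnostic) is CPU “steal” time, i.e. the time the hypervisor spent running other guests instead of yours:

```shell
# The "cpu" line of /proc/stat holds cumulative jiffies; after the "cpu"
# label the fields are: user nice system idle iowait irq softirq steal ...
# so "steal" is field 9. Sample it twice, one second apart.
s1=$(awk '/^cpu / {print $9}' /proc/stat)
sleep 1
s2=$(awk '/^cpu / {print $9}' /proc/stat)
echo "steal ticks in the last second: $((s2 - s1))"

# OpenVZ guests additionally expose per-container limits; a growing
# failcnt column here means the container hit a resource ceiling.
cat /proc/user_beancounters 2>/dev/null || echo "(not an OpenVZ container)"
```

Consistently non-zero steal usually means other guests are competing for the host’s CPUs.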

KVM systems prevent much of this kind of abuse. They reserve a fixed minimum amount of resources for each virtualized server on the same hardware. While it is still possible to experience some slow responses on KVM servers, it is rarely a case of being completely immobilized.

Both systems allocate the hardware resources to the highest demand instances, but only KVM systems will assure you of at least having some level of functionality even during times of highest demand on the hardware.

OpenVZ virtualization has improved some over the past 3 years but it is still not the best choice if you are leasing the virtualization services from a VPS provider.

The OpenVZ vs KVM debate is not the only factor to consider when looking at performance. Sometimes it is a matter of the distance to the server that can make or break a performance benchmark. There are actually far too many parameters that directly affect performance to even begin to compare them here.

Do some more research on the pros and cons of the virtualization methods, as well as the other factors that directly affect performance, before you go all in on any given provider’s promises.

Hope this helps, and as always… Your mileage may vary :sunglasses:

BKM


Thanks @bkm for the insight.

I understand now regarding the difference between OVZ and KVM. And you confirm that OVZ can be the fastest (on a very good day) or a crippled one (on the bad days). This happened to two of my sites. And yes, the KVM instance never stopped, while the OVZ one I need to restart over and over; sometimes it can't even build frappe :smiley:
And I realize my observation (not even a proper test) was far from giving a true result.

It just struck me that the speed difference is that large (almost 2.5x).

And yes, I have decided to use KVM for production because stability is more important here (the speed is still tolerable). And I think it was your past post about this virtualization that affected my decision :smiley:

Again… thanks for the insight.