LXD crushes KVM in density and speed

  • LXD achieves 14.5 times greater density than KVM
  • LXD launches instances 94% faster than KVM
  • LXD delivers 57% lower latency than KVM

LXD is the container-based hypervisor led by Canonical. Today, Canonical published benchmarks showing that LXD runs guest machines 14.5 times more densely and with 57% less latency than KVM.

The container-based LXD is a dramatic improvement on traditional virtualisation and is particularly valuable for large hosting environments. Web applications, for example, can be hosted on a fraction of the hardware with LXD compared to KVM, resulting in substantial long-term savings for large organisations.

Latency-sensitive workloads like voice or video transcoding showed 57% less latency under LXD than KVM, making LXD an important new tool in the move to network function virtualisation in telecommunications and media, and in the convergence of cloud and high performance computing.

Mark Shuttleworth announced the results at the OpenStack Developer Summit in Vancouver, Canada, saying “LXD crushes traditional virtualisation for common enterprise environments, where density and raw performance are the primary concerns. Canonical is taking containers to the level of a full hypervisor, with guarantees of CPU, RAM, I/O and latency backed by silicon and the latest Ubuntu kernels.”

The introduction of containers in Linux by the LinuxContainers.org project, led by Canonical, has sparked a series of disruptions such as Docker for application distribution, culminating in the recent introduction by Canonical of LXD, which behaves exactly like a full hypervisor but eliminates the overhead of virtualisation or machine emulation. While LXD is only suitable for Linux workloads, the majority of guests in OpenStack environments are Linux, making LXD a compelling choice for private clouds where efficiency is highly valued.

Early adopters include institutions with many Linux virtual machines running common code, such as Tomcat applications under low load. LXD offers much higher density than KVM because the underlying hypervisor can consolidate common processes more efficiently. Both LXD’s density and its improved latency and quality of service come from the fact that a single kernel manages all of the workload processes.

Ubuntu is the most popular platform for large-scale KVM virtualisation and the most widely used platform for production OpenStack deployments. “We will of course continue to improve KVM in Ubuntu, but we are extremely excited to enable LXD alongside it for guests where raw performance, density or latency are of particular importance,” said Mark Baker, product manager for OpenStack at Canonical.

The testing

The target platform for this analysis was an Intel server running Ubuntu 14.04 LTS. The testing involved launching as many guest instances as possible with competing hypervisor technologies, LXD and KVM.

Density

In the density test, an automated framework continually launched instances while checking hypervisor resources and stopped when resources were depleted. The same test was used for LXD and KVM; only the command line tool used to launch the images was different.
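
To make the methodology concrete, here is a minimal sketch in Python of what such a launch-until-full loop might look like, driven through the lxc command line client. The image alias, guest naming scheme and memory threshold are illustrative assumptions rather than the exact harness behind the published numbers; pointing the same loop at the equivalent KVM launch command produces the comparison test.

    #!/usr/bin/env python3
    # Density test sketch: keep launching LXD guests until free memory
    # drops below a threshold. Illustrative only; not the exact harness
    # used for the published results.
    import subprocess

    MIN_FREE_KB = 512 * 1024  # stop when roughly 512 MB remains (assumed threshold)

    def mem_available_kb():
        # Read MemAvailable (falling back to MemFree) from /proc/meminfo, in kB.
        fields = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                fields[key] = int(value.split()[0])
        return fields.get("MemAvailable", fields.get("MemFree", 0))

    count = 0
    while mem_available_kb() > MIN_FREE_KB:
        name = "guest-{}".format(count)
        # Launch one more container; substitute the KVM tooling here
        # to run the same loop against the other hypervisor.
        subprocess.run(["lxc", "launch", "ubuntu:14.04", name], check=True)
        count += 1

    print("Launched {} guests before hitting the memory threshold".format(count))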

The server with 16GB of RAM was able to launch 37 KVM guests, and 536 identical LXD guests. Each guest was a full Ubuntu system that was able to respond on the network. While LXD cannot magically create additional CPU resources, it can use memory much more efficiently than KVM. For idle or low load workloads, this gives a density improvement of 1450%, or nearly 15 times more density than KVM.

Containers utilise resources more efficiently at steady state after booting. As a result, dramatically more instances can be packed onto a single server, providing significant cost benefits.

Speed

Not only did the test show that LXD could launch and sustain 14.5 times as many guests as KVM, it also starkly highlighted the difference in startup performance between the two technologies. The full 536 guests started with LXD in substantially less time than it took KVM to launch its 37 guests. On average, LXD guests started in 1.5 seconds, while KVM guests took 25 seconds to start.
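
As a rough illustration of how per-guest startup time can be measured, the sketch below times a handful of lxc launch calls and reports the average. The guest names, image and count are assumptions, and a fuller harness would also wait for each guest to respond on the network before stopping the clock.

    #!/usr/bin/env python3
    # Startup-time sketch: time a few guest launches and report the average.
    # Illustrative only; swap the launch command for the KVM equivalent to
    # produce the comparison figure.
    import subprocess
    import time

    N = 5  # number of guests to time (arbitrary, for illustration)
    durations = []

    for i in range(N):
        name = "speed-guest-{}".format(i)
        start = time.monotonic()
        subprocess.run(["lxc", "launch", "ubuntu:14.04", name], check=True)
        durations.append(time.monotonic() - start)

    print("average launch time over {} guests: {:.2f} s".format(N, sum(durations) / N))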

Latency

LXD’s container approach lets performance-critical applications run at bare-metal performance while retaining the isolation of workloads and the ability to support a wide range of Linux operating systems as guests. Without the emulation of a virtual machine, LXD avoids the scheduling latencies and other performance hazards often found in virtualisation. Using a sample 0MQ workload, testing showed 57% less latency for guests under LXD than under KVM.
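
For readers who want to run a comparable measurement, the sketch below is a simple 0MQ REQ/REP ping-pong that reports average round-trip time. It is a stand-in for the kind of latency-sensitive workload described here, not the benchmark behind the published 57% figure, and the endpoint, message size and message count are assumptions. Run the server role inside an LXD guest, then inside a KVM guest, and compare the figures reported by the client.

    #!/usr/bin/env python3
    # 0MQ round-trip latency sketch (REQ/REP ping-pong) using pyzmq.
    # Illustrative stand-in for a latency-sensitive workload.
    import sys
    import time
    import zmq

    ENDPOINT = "tcp://127.0.0.1:5555"  # point this at the guest under test
    MESSAGES = 10000
    PAYLOAD = b"x" * 64

    def server():
        # Echo server: run this inside the LXD or KVM guest.
        sock = zmq.Context().socket(zmq.REP)
        sock.bind(ENDPOINT)
        while True:
            sock.send(sock.recv())

    def client():
        # Client: measures the average round-trip time to the echo server.
        sock = zmq.Context().socket(zmq.REQ)
        sock.connect(ENDPOINT)
        start = time.monotonic()
        for _ in range(MESSAGES):
            sock.send(PAYLOAD)
            sock.recv()
        elapsed = time.monotonic() - start
        print("average round trip: {:.1f} microseconds".format(elapsed / MESSAGES * 1e6))

    if __name__ == "__main__":
        server() if "server" in sys.argv else client()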

More information about LXD can be found at www.ubuntu.com/cloud/tools/lxd.

Canonical is exhibiting at OpenStack Summit, Vancouver. Visit booth P3 for further details and to meet the team.
