

Xen Project @ FOSDEM’14: an Event Report

As usual, the first weekend of February (1st & 2nd Feb this year) is FOSDEM weekend. Taking place at “ULB Solbosch Campus, Brussels, Belgium, Europe, Earth”, FOSDEM is the Open Source event of the year. At least for Europe: the website claims that FOSDEM hosts 5,000+ geeks and hackers and 512 lectures!

But it doesn’t stop here: the main wireless network provided (essid FOSDEM) was IPv6 only… as announced in the opening keynote (the video is online already!). In fact, it took me a while to figure out why my Android phone could connect but not get any IP?!

Like last year, I went to FOSDEM to see all the cool stuff and to help man the Xen Project booth, although this time I also had my own talk to deliver. This was only my second FOSDEM, so this may still be my “newbie-ness” talking, but it is really hard to describe how big and amazing the event is. Even more so if you consider it is entirely run and organized by volunteers. And members of the Xen Project helped organize parts of FOSDEM beyond our booth: Ian Jackson and Tim Mackey were Video Volunteers and Lars Kurth helped organize a DevRoom.

It is amazing to have the chance to choose from such a huge number of super high quality talks and presentations. It is great to see what the most thriving Open Source projects have to show off at their booths in the exposition area, and to collect some gadgets. Being one of those, The Xen Project was giving away T-Shirts and some other gadgets for free (some projects use FOSDEM to raise funds). And that, like last year, has been quite a success!

T-Shirts apart, this year we decided we really wanted to do our best to show everyone what Xen is really capable of. That is why we decided to invite community members to host demos. It was an experiment, and it has been a great success! Basically, we invited Xen-related projects, and companies that use Xen for their solutions and products, to submit a request for an exclusive slot at the booth, during which they would show what they do to FOSDEM attendees. I’m calling it a success because we were fully booked and able to show demos of the following projects: Xen on an Android tablet (a SAMSUNG NEXUS 10), Xen Orchestra, Mirage OS, OSv, Qubes OS and ClickOS. I think we really should do this again next year. I ran the Qubes OS demo myself, and it was very pleasant to see people appreciating the high level of isolation and security this very special Linux distribution provides. It leverages some of the most advanced Xen features, such as stub and driver domains (a couple of people were amazed when I showed them how untrusted PDF conversion works in Qubes! :-D ). The presence of two Cloud Operating Systems, MirageOS and OSv (both running on top of Xen, of course), also raised quite some interest. Many attendees were impressed by the responsiveness of SAMSUNG’s solution for virtualizing the GPU in their dual-Android tablet demo, and by the performance and super quick boot times of ClickOS. Others liked the clean and effective interface of Xen Orchestra; one person even commented that Xen Orchestra looks more impressive than what VMware provides.

It was great to have the chance to talk with many people who know and use Xen happily and fruitfully, and who were willing to acknowledge all our efforts, technical and not. And of course, to grab a free T-Shirt! Having done pretty much the same last year allows me to run a comparison: there is little doubt that there were more people interested in Xen, and much more awareness that the project is doing well and actually expanding (e.g. into embedded and automotive). It was also good, as a developer, to get the chance to talk to users (of any kind) and other members of the Xen community, like the volunteers showing the demos. For example, I had a great discussion with developers from Samsung about helping them upstream some (at least the kernel and Xen parts) of the incredible work they have done with GPU virtualization on the NEXUS 10.


Even when I wasn’t at the booth, there were many chances to hear about Xen in the various talks given in the Virtualization, BSD and Automotive devrooms, as announced in this post from some weeks ago. Oh, speaking of the talks, another pretty awesome fact: this year everything – I mean, every single talk – has been recorded, and videos will be available online soon.

Most of the action, as far as the Xen Project was concerned, happened in the Virtualization & IaaS devroom. Let’s not forget that members of the Xen Project helped record these talks. A big thank you to them, and also a big thank you to the whole FOSDEM video team. Check out the streams on the FOSDEM website (some are available already). Of course, videos and slides for the Xen talks will be available on the usual aggregators (vimeo and slideshare).


All in all, I personally very much enjoyed being at FOSDEM. As for The Xen Project, I think we did a really good job of teaming up with all the members of the community and presenting ourselves to current and prospective users in very good shape. Can’t wait for next year!

Posted in Community, Events, Xen Hypervisor.



Linux 3.14 and PVH

Linux v3.14 will sport a new mode in which the Linux kernel can run, thanks to Mukesh Rathor (Oracle).

Called ‘ParaVirtualized Hardware,’ it allows the guest to utilize many hardware features – while at the same time having no emulated devices. It is the next step in PV evolution, and it is pretty fantastic.

Here is a great blog post that explains the background and history in detail:
The Paravirtualization Spectrum, Part 2: From poles to a spectrum.

The short description is that Xen guests can run as HVM or PV. PV is a mode where the kernel lets the hypervisor program page-tables, segments, etc. With EPT/NPT capabilities in current processors, the overhead of doing this in an HVM (Hardware Virtual Machine) container is much lower than the hypervisor doing it for us. In short, we let a PV guest run without doing page-table, segment, syscall, etc updates through the hypervisor – instead it is all done within the guest container.

It is a hybrid PV – hence the ‘PVH’ name – a PV guest within an HVM container.

The benefits of this – less code to maintain, faster syscall performance (no context switches into the hypervisor), fewer traps on various operations, etc. – are, in short, better and faster response.

The code going in will allow users to use it with Xen 4.4 (which as of Jan 2014 is in RC3). From the standpoint of what the guest does compared to normal PV, it is almost no different, except that it reports itself as an HVM guest (without any PCI devices) and is much faster. Naturally, since we are still working through the kinks of the PVH ABI, there is nothing set in stone.

That means the next version of Xen might not even run with this version of Linux PVH (or vice-versa). It is very experimental and unstable. Naturally we want to make this production quality, and we are working furiously toward that goal.

It helps us immensely if users also try it out so we can track bugs and issues we have not even thought of.

HOW TO USE IT

The only things needed to make this work as PVH are:

  • Get the latest version of Xen and compile/install it. See http://wiki.xen.org/wiki/Compiling_Xen_From_Source for details or http://wiki.xen.org/wiki/Xen_4.4_RC3_test_instructions
  • Get the latest version of Linux; see http://wiki.xenproject.org/wiki/Mainline_Linux_Kernel_Configs#Configuring_the_Kernel for details. The steps are:

    cd $HOME
    git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    cd linux
  • Compile with CONFIG_XEN_PVH=y
    • Based on your current distro.
      cp /boot/config-`uname -r` $HOME/linux/.config
      make menuconfig
      Where one will select Processor type and features ---> Linux guest support ---> Support for running as a PVH guest (NEW)
    • Or from scratch:
      make localmodconfig

      You should see:
      Support for running as a PVH guest (XEN_PVH) [N/y] (NEW)

      In case you missed it:

      make menuconfig
      Processor type and features ---> Linux guest support --->
      Xen guest support (which will now show you:)
      Support for running as a PVH guest (NEW)
    • If you prefer to edit .config, these should be enabled:

      CONFIG_HYPERVISOR_GUEST=y
      CONFIG_PARAVIRT=y
      CONFIG_PARAVIRT_GUEST=y
      CONFIG_PARAVIRT_SPINLOCKS=y
      CONFIG_XEN=y
      CONFIG_XEN_PVH=y

      You will also have to enable the block, network drivers, console, etc which are in different submenus.
    • Install it. Usually doing:

      make modules_install && make install

      is sufficient. It should generate the initramfs.cpio.gz and the kernel and stash them in the /boot directory.
  • Launch it with ‘pvh=1‘ in your guest config (for example):
    extra="console=hvc0 debug kgdboc=hvc0 nokgdbroundup initcall_debug debug"
    kernel="/boot/vmlinuz-3.13+"
    ramdisk="/boot/initramfs-3.13+.cpio.gz"
    memory=1024
    vcpus=4
    name="pvh"
    vif = [ 'mac=00:0F:4B:00:00:68' ]
    vfb = [ 'vnc=1, vnclisten=0.0.0.0,vncunused=1']
    disk=['phy:/dev/sdb1,xvda,w']
    pvh=1
    on_reboot="preserve"
    on_crash="preserve"
    on_poweroff="preserve

    using ‘xl’. Xend ‘xm’ does not have PVH support.
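Assuming the config above is saved as, say, pvh.cfg (any filename works), creating and reaching the guest is the usual xl workflow:

    xl create pvh.cfg
    xl console pvh      # "pvh" is the name= set in the config above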

It will boot up as a normal PV guest, but ‘xen-detect’ will report it as an HVM guest.
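A quick way to sanity-check the result from inside the guest (a sketch; the exact kernel log strings vary between versions):

    # xen-detect ships with the Xen tools; on a PVH guest it should report
    # an HVM context, even though the kernel booted through the PV entry path.
    xen-detect
    # The kernel log also records how the guest was brought up.
    dmesg | grep -i xen | head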

Items that have not been tested extensively or at all:

  • Migration (xl save && xl restore for example).
  • 32-bit guests (won’t even present you with a CONFIG_XEN_PVH option)
  • PCI passthrough
  • Running it in dom0 mode (as the patches for that are not yet in Xen upstream). If you want to try that, you can merge/pull Mukesh’s branch:

    cd $HOME/xen
    git pull git://oss.oracle.com/git/mrathor/xen.git dom0pvh-v7

    And use the dom0pvh=1 boot parameter on the Xen command line. Remember to recompile and install the new version of Xen. The patches in Linux do not contain the necessary code to set up guests.
  • Memory ballooning
  • Multiple VBDs, NICs, etc.

Things that are broken:

  • CPUID filtering. There is no filtering done at all, which means that certain cpuid flags are exposed to the guest. The x2apic flag will cause a crash if the NMI handler is invoked. The APERF flag will cause inferior scheduling decisions.
  • Does not work with AMD hardware.
  • Does not work with 32-bit guests.

If you encounter errors, please email the following (note that the guest config above has on_reboot="preserve" and on_crash="preserve", which you should keep in your guest config so that the memory of the crashed guest is retained):

  • xl dmesg
  • xl list
  • xenctx -s $HOME/linux/System.map -f -a -C <domain id>
    [xenctx is sometimes found in /usr/lib/xen/bin/xenctx ]
  • The console output from the guest
  • Anything else you can think of.

to [email protected]

Stash away your vmlinux file (it is too big to send via email) – as we might need it later on.

That is it!

Thank you!

Posted in Linux, Xen Development, Xen Hypervisor.


Improved Xen support in FreeBSD

As most FreeBSD users already know, FreeBSD 10 has just been released, and we expect this to be a very good release regarding Xen support. It brings many improvements, including several performance and stability enhancements that we expect will greatly please and interest users. Many bug fixes have gone in as well, but the following description focuses only on the new features.

New vector callback

Previous releases of FreeBSD used an IRQ interrupt as the callback mechanism for Xen event channels. While it’s easier to set up, using an IRQ interrupt doesn’t allow injecting events into specific CPUs, which basically limits the use of event channels in disk and network drivers. Also, all interrupts were delivered to a single CPU (CPU#0), preventing proper interrupt balancing between CPUs.

With the introduction of the vector callback, events can now be delivered to any CPU, allowing FreeBSD to have specific per-CPU interrupts for PV timers and PV IPIs, and to balance the others across the several CPUs usually available to a domain.

PV timers

Thanks to the introduction of the vector callback, we can now make use of the Xen PV timer, which is implemented as a per-CPU single-shot timer. This alone doesn’t seem like a great benefit, but it allows FreeBSD to avoid making use of the emulated timers, greatly reducing the emulation overhead and the cost of unnecessary VMEXITs.

PV IPIs

As with PV timers, the introduction of the vector callback allows FreeBSD to get rid of the bare metal IPI implementation, and instead route IPIs through event channels. Again, this allows us to get rid of the emulation overhead and unnecessary VMEXITS, providing better performance.

PV disk devices

FLUSH/BARRIER support has been added recently, together with a couple of fixes that allow FreeBSD to run with a CDROM driver under XenServer (which was quite a pain for XenServer users).

Support for migration

Migration support has been reworked to handle the fact that timers and IPIs are now also paravirtualized, so it doesn’t break with these new features.

Merge of the XENHVM config into GENERIC

In one of the most interesting improvements from a user/admin point of view (and something similar to what the pvops Linux kernel already does), the GENERIC kernel on i386 and amd64 now includes full Xen PVHVM support, so there’s no need to compile a Xen-specific kernel. When run as a Xen guest, the kernel will detect the available Xen features and automatically make use of them in order to obtain the best possible performance.
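To see what the kernel detected, a quick look at the boot messages and the virtualization sysctl is enough (a sketch; the exact wording of the messages differs between FreeBSD versions):

    # On a FreeBSD 10 guest running under Xen
    sysctl kern.vm_guest    # should report "xen" when running as a Xen guest
    dmesg | grep -i xen     # lists the PV features and devices that were found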

This work has been done jointly by Spectra Logic and Citrix.

Posted in Announcements, Community, Xen Development, Xen Support.



libvirt support for Xen’s new libxenlight toolstack

Originally posted on my blog, here.

Xen has had a long history in libvirt.  In fact, it was the first hypervisor supported by libvirt.  I’ve witnessed an incredible evolution of libvirt over the years and now not only does it support managing many hypervisors such as Xen, KVM/QEMU, LXC, VirtualBox, hyper-v, ESX, etc., but it also supports managing a wide range of host subsystems used in a virtualized environment such as storage pools and volumes, networks, network interfaces, etc.  It has really become the swiss army knife of virtualization management on Linux, and Xen has been along for the entire ride.

libvirt supports multiple hypervisors via a hypervisor driver interface, which is defined in $LIBVIRT_ROOT/src/driver.h – see struct _virDriver.  libvirt’s virDomain* APIs map to functions in the hypervisor driver interface, which are implemented by the various hypervisor drivers.  The drivers are located under $LIBVIRT_ROOT/src/<hypervisor-name>.  Typically, each driver has a $LIBVIRT_ROOT/src/<hypervisor-name>/<hypervisor-name>_driver.c file which defines a static instance of virDriver and fills in the functions it implements.  As an example, see the definition of libxlDriver in $LIBVIRT_ROOT/src/libxl/libxl_driver.c, the first few lines of which are

static virDriver libxlDriver = {
    .no = VIR_DRV_LIBXL,
    .name = "xenlight",
    .connectOpen = libxlConnectOpen, /* 0.9.0 */
    .connectClose = libxlConnectClose, /* 0.9.0 */
    .connectGetType = libxlConnectGetType, /* 0.9.0 */
    ...
};

The original Xen hypervisor driver is implemented using a variety of Xen tools: xend, xm, xenstore, and the hypervisor domctrl and sysctrl interfaces.  All of these “sub-drivers” are controlled by an “uber driver” known simply as the “xen driver”, which resides in $LIBVIRT_ROOT/src/xen/.  When an API in the hypervisor driver is called on a Xen system, e.g. virDomainCreateXML, it makes its way to the xen driver, which funnels the request to the most appropriate sub-driver.  In most cases, this is the xend sub-driver, although the other sub-drivers are used for some APIs.  And IIRC, there are a few APIs for which the xen driver will iterate over the sub-drivers until the function succeeds.  I like to refer to this xen driver, and its collection of sub-drivers, as the “legacy Xen driver”.  Due to its heavy reliance on xend, and xend’s deprecation in the Xen community, the legacy driver became just that – legacy.  With the introduction of libxenlight (aka libxl), libvirt needed a new driver for Xen.

In 2011 I had a bit of free time to work on a hypervisor driver for libxl, committing the initial driver in 2b84e445.  As mentioned above, this driver resides in $LIBVIRT_ROOT/src/libxl/.  Subsequent work by SUSE, Univention, Redhat, Citrix, Ubuntu, and other community contributors has resulted in a quite functional libvirt driver for the libxl toolstack.

The libxl driver only supports Xen >= 4.2.  The legacy Xen driver should be used on earlier versions of Xen, or on installations where the xend toolstack is used.  In fact, if xend is running, the libxl driver won’t even load.  So if you want to use the libxl driver but have xend running, xend must be shut down, followed by a restart of libvirtd, to load the libxl driver.  Note that if xend is not running, the legacy Xen driver will not load.
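In practice, the switch looks roughly like this (a sketch; service names and init systems vary by distribution):

    # Stop xend so that the libxl driver is allowed to load, then restart libvirtd
    service xend stop
    service libvirtd restart
    # Subsequent virsh commands should now be served by the libxl driver
    virsh list --all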

Currently, there are a few differences between the libxl driver and the legacy Xen driver.  First, the libxl driver is clueless about domains created by other libxl applications such as xl.  ‘virsh list’ will not show domains created with ‘xl create …’.  This is not the case with the legacy Xen driver, which is just a broker to xend.  Any domains managed by xend are also manageable with the legacy Xen driver.  Users of the legacy Xen driver in libvirt are probably well aware that `virsh list' will show domains defined with `xm new ...' or created with `xm create ...', and might be a bit surprised to find this is not the case with the libxl driver.  But this could be addressed by implementing functionality similar to the `qemu-attach' capability supported by the QEMU driver, which allows “importing” a QEMU instance created directly with e.g. `qemu -m 1024 -smp ...'.  Contributions are warmly welcomed if this functionality is important to you :-) .

A second difference between the libxl and legacy Xen drivers is related to the first one.  xend is the stateful service in the legacy stack, maintaining the state of defined and running domains.  As a result, the legacy libvirt Xen driver is stateless, generally forwarding requests to xend and allowing xend to maintain state.  In the new stack, however, libxl is stateless.  Therefore, the libvirt libxl driver itself must now maintain the state of all domains.  An interesting side effect of this is losing all your domains when upgrading from libvirt+xend to libvirt+libxl.  For a smooth upgrade, all running domains should be shut down and their libvirt domXML configuration exported for post-upgrade import into the libvirt libxl driver.  For example, sketched as a shell loop:

for dom in $(virsh list --name); do
    virsh dumpxml "$dom" > "$dom.xml"
    virsh shutdown "$dom"
done
# perform the xend -> libxl upgrade, then restart libvirtd
for f in *.xml; do
    virsh define "$f"
done
It may also be possible to import xend managed domains after upgrading to libxl.  On most installations, the configuration of xend managed domains is stored in /var/lib/xend/domains/<dom-uuid>/config.sxp.  Since the legacy Xen driver already supports parsing SXP, this code could be used to read any existing xend managed domains and import them into libvirt.  I will need to investigate the feasibility of this approach, and report any findings in a future blog post.

The last (known) difference between the drivers is the handling of domain0.  The legacy Xen driver handles domain0 as any other domain.  The libxl driver currently treats domain0 as part of the host, thus e.g. it is not shown in `virsh list'.  This behavior is similar to the QEMU driver, but is not necessarily correct.  After all, domain0 is just another domain in Xen, which can have devices attached and detached, memory ballooned, etc., and should probably be handled as such by the libvirt libxl driver.  Contributions welcomed!

Otherwise, the libxl driver should behave the same as the legacy Xen driver, making xend to libxl upgrades quite painless, outside of the statefulness issue discussed above. Any other differences between the legacy Xen driver and the libxl driver are bugs – or missing features.  After all, the goal of libvirt is to insulate users from underlying churn in hypervisor-specific tools.

At the time of this writing, the important features missing in the libxl driver relative to the legacy Xen driver are PCI passthrough and migration.  Chunyan Liu has provided patches for both of these features (here and here), the first of which is close to being committed upstream, IMO.

The libxl driver is also in need of improved parallelization.  Currently, long running operations such as create, save, restore, core dump, etc. lock the driver, blocking other operations, even those that simply get state.  I have some initial patches that introduce job support in the libxl driver, similar to the QEMU driver.  These patches allow classifying driver operations into jobs that modify state, and thus block any other operations on the domain, and jobs that can run concurrently.  Bamvor Jian Zhang is working on a patch series to make use of libxl’s asynchronous variants of these long running operations.  Together, these patch sets will greatly improve parallelism in the libxl driver, which is certainly important in, for example, cloud environments where many virtual machine instances can be started in parallel.

Beyond these sorely needed features and improvements, there is quite a bit of work required to reach feature parity with the QEMU driver, where it makes sense.  The hypervisor driver interface currently supports 193 functions, 186 of which are implemented in the QEMU driver.  By contrast, only 86 functions are implemented in the libxl driver.  To be fair, quite a few of the unimplemented functions don’t apply to Xen and will never be implemented.  Nonetheless, for any enthusiastic volunteers, there is quite a bit of work to be done in the libvirt libxl driver.

Although I thoroughly enjoy working on libvirt and have a healthy respect for the upstream community, my available time to work on upstream libvirt is limited.  Currently, I’m the primary maintainer of the Xen drivers, so my limited availability is a bottleneck.  Other libvirt maintainers review and commit Xen stuff, but their primary focus is on the rapid development of other hypervisor drivers and host subsystems.  I’m always looking for help not only with the implementation of new features, but also with reviewing and testing patches from other contributors.  If you are part of the greater Xen ecosystem, consider lending a hand with improving Xen support in libvirt!

Posted in Community, Uncategorized, Xen Development.


First Xen Project 4.4 Test Day on Monday, January 20

Release time is approaching, so Xen Project Test Days have arrived!

On Monday, January 20, we are holding a Test Day for Xen 4.4 Release Candidate 2.

Xen Project Test Day is your opportunity to work with code which is targeted for the next release, ensure that new features work well, and verify that the new code can be integrated successfully into your environment.  This is the first of a few Test Days for the 4.4 release, scheduled to occur at roughly 2 week intervals.

General Information about Test Days can be found here:
http://wiki.xenproject.org/wiki/Xen_Test_Days

and specific instructions for this Test Day are located here:
http://wiki.xenproject.org/wiki/Xen_4.4_RC2_test_instructions

XEN 4.4 FEATURE DEVELOPERS:

If you have a new feature which is cooked and ready for testing in RC2, we need to know about it and how to test it. Either edit the instructions page or send me a few lines describing the feature and how it should be tested.

Right now, RC2 is labelled a general test (e.g., “Does Xen compile, install, and do the things Xen normally does?”). We don’t have any specific tests of new functionality identified. If you have something new which needs testing in RC2, we need to know about it.

EVERYONE:

Please join us on Monday, January 20, and help make sure the next release of Xen is the best one yet!

Posted in Announcements, Community, Events.


Xen Related Talks @ FOSDEM 2014

Going to FOSDEM’14? Well, then you want to check out the schedule of the Virtualization & IaaS devroom, and make sure you do not miss the talks about Xen. There are 4 of them, and they will provide some details about new and interesting use cases for virtualization, like in embedded systems of various kinds (from phones and tablets to network middleboxes), and about new features in the upcoming Xen release, such as PVH, and how to put them to good use.

Here are the talks, in some more detail:
- Dual-Android on Nexus 10 using XEN, on Saturday morning
- High Performance Network Function Virtualization with ClickOS, on Saturday afternoon
- Virtualization in Android based and embedded systems, on Sunday morning
- How we ported FreeBSD to PVH, on Sunday afternoon

There actually is more: a talk called Porting FreeBSD on Xen on ARM, in the BSD devroom, and one about MirageOS in the miscellaneous Main track, but the schedule for them has not been announced yet.

Last but certainly not least, there will be a Xen Project booth, where you can meet the members of the Xen community as well as enjoy some other, soon to be revealed, activities. Some of my colleagues from Citrix and I will be in Brussels, and we will definitely spend some time at the booth, so come and visit us. The booth will be in building K, on level 1.

Read more here: http://xenproject.org/about/events.html

Edit:

The schedule for the FreeBSD and MirageOS talks has been announced. Here it is:
- Porting FreeBSD on Xen on ARM will be given on Saturday early afternoon (15:00), in the BSD devroom
- MirageOS: compiling functional library operating systems will happen on Sunday at 13:00, in the misc main track

Also, there is another Xen related talk, in the Automotive development devroom: Xen on ARM: Virtualization for the Automotive industry, on Sunday morning (11:45).

Posted in Announcements, Events, Partner Announcements, Xen Hypervisor.



2013 : A Year to Remember

2013 has been a year of changes for the Xen Community. I wanted to share my five personal highlights of the year. But before I do this, I wanted to thank everyone who contributed to the Xen Project in 2013 and the years before. Open Source is about bringing together technology and people: without your contributions, the Xen Project would not be a thriving and growing open source project.

Xen Project joins Linux Foundation

The biggest community story of 2013, was the move of Xen to the Linux Foundation in April. For me, this journey started in December 2011, when I won in-principle agreement from Citrix to find a neutral, non-profit home for Xen. This took longer than I hoped: even when the decision was made to become a Linux Foundation Collaborative project, it took many months of hard work to get everything off the ground. Was it worth it? The answer is a definite yes: besides all the buzz and media interest in April 2013, interest in and usage of Xen has increased in the remainder of 2013. The Xen Project became a first class citizen within the open source community, which it was not really before.

[Chart: wiki page visits]

Monthly visits by users to the Xen Project wiki doubled after moving Xen to the Linux Foundation.

Of course, the ripples of this change will be felt for many years to come. Some of them are covered in the other 4 highlights of 2013. I personally believe that the Xen Project Advisory Board (which is made up of 14 major companies that fund the project) will have a positive impact on the community going forward. This will become apparent next year, when initiatives that are funded by the Advisory Board – such as an independently hosted test infrastructure, more coordinated marketing and PR, growing the Xen talent pool and many others – kick into gear.

Developer Community Growth


Besides growth in website visits, we have also seen a marked increase in developer list conversations in 2013.

In 2013, we also saw significant growth of our developer community. This growth shows in a number of different metrics, such as conversations on the developer list, the number of contributors to the project (an increase of 11% compared to 2012), as well as an increase in patches submitted. This means that in 2014 we will have to look at some challenges associated with this growth: for example, developer list traffic in November 2013 was beyond 4500 messages (compared to 2700 in January 2013). Too much for many of our developers.

Shorter Release Cycles

Another notable change, which started in late 2012, was a reduction of the release cadence for the Xen Hypervisor and a better approach to release planning. I wanted to thank George Dunlap – our Xen Release coordinator – for driving these changes. Let’s look at release times since Xen 4.0: it took 11 months to develop Xen 4.1, 18 months to develop Xen 4.2, 10 months to develop Xen 4.3, and 6 or 7 months for Xen 4.4 (planned for release in February 2014). The goal is to release Xen twice a year, while increasing the number of features that go into each Xen release. If you look at the list of planned Xen 4.4 features, we are well on track to achieving this goal.

Innovation, Innovation, Innovation

In 2013 the Xen Project started to innovate in many different technology areas. This is reflected in the many presentations that were given at the Xen Project Developer Summit. Besides the usual improvements to performance and scalability, I wanted to pick out some personal highlights.

  • Xen Project Kicks *aaS: Xen 4.3 saw some real advances in cloud security, very timely given that cloud and internet security was a very hot topic in 2013. Next year, we will look at making many of these features easier to use and integrate them better into Linux distros.
  • Another notable change is PVH guest support (coming to Xen 4.4 for Linux and FreeBSD). PVH combines the best elements of HVM and PV into a mode which allows Xen to take advantage of many of the hardware virtualization features without needing to emulate an entire physical server. This will allow for increased efficiency, as well as reduced footprint in Linux, FreeBSD and other operating systems. A special thank you to Mukesh Rathor from Oracle, who developed this groundbreaking technology.
  • Of course, we also have to mention Xen on ARM support, which first appeared in Xen 4.3 and will be hardened for Xen 4.4. This support is helping to expand Xen Hypervisor usage into new market segments. In October 2013, we saw the first prototypes of Android running on top of Xen at remarkable speed. But more on this later. A special thank you to Stefano Stabellini and Ian Campbell for driving this effort.
  • Support for VMware guests: just before Xmas, Verizon posted a patch series for review that will allow users to run Linux, Windows and other guest images that were built for VMware products, unchanged, within Xen. These features will not make it into Xen 4.4, but should be available later in 2014.
  • Intel and Samsung showed groundbreaking work in GPU virtualization at the last Xen Project Developer Summit, which has the potential to extend Xen into new market segments.

Of course, not all of these innovations will make it into Xen 4.4: some will appear in Xen 4.5 or later.

New Frontiers of Virtualization

One of the things which surprised me personally is that we are seeing the Xen Hypervisor adopted in many new (and unexpected) market segments. Examples are: Automotive and In-Vehicle Infotainment, Mobile Use-cases, Network Function Virtualization, Set-Top Boxes and other Embedded Applications. This will, without doubt, be a theme of 2014. It may seem counter-intuitive, but I believe that expanding the use of Xen to new frontiers will create benefits and opportunities in server virtualization and cloud computing. It also proves that Xen is an extremely flexible platform that can be customized for many different applications.

In any case, thank you all for making 2013 an exceptional year!

And a Happy New Year to all of you!

Posted in Community.


Where Would You Like to See the Next Xen Project User Summit Held?

In 2013, we held the first major Xen event aimed specifically at users: the Xen Project User Summit. In 2014, we want to do it again — but where and when?

The Xen Project wants to hold its second Xen Project User Summit.  We’d like to hold it somewhere which is accessible by a large percentage of our user community.  And we’d like to schedule it at a time which makes sense, possibly in coordination with some existing conference.

We need your help to pick the time and place.  Give us your preferences in a very quick 2 minute survey found here:

https://www.surveymonkey.com/s/YJQCHJ6

It’s very quick and easy to do.  And you may just find that the next User Summit is too convenient for you to pass up.

Posted in Announcements, Community, Events, Xen Summit.



What is the ARINC653 Scheduler?

The Xen ARINC 653 scheduler is a real time scheduler that has been in Xen since 4.1.0.  It is a cyclic executive scheduler with a specific usage in mind, so unless one has aviation experience they are unlikely to have ever encountered it.

The scheduler was created and is currently maintained by DornerWorks.

Background

The primary goal of the ARINC 653 specification [1] is the isolation or partitioning of domains.  The specification goes out of its way to prevent one domain from adversely affecting any other domain, and this goal extends to any contended resource, including but not limited to I/O bandwidth, CPU caching, branch prediction buffers, and CPU execution time.

This isolation is important in aviation because it allows applications at different levels of certification (e.g. Autopilot – Level A Criticality, In-Flight Entertainment – Level E Criticality, etc…) to be run in different partitions (domains) on the same platform.  Historically to maintain this isolation each application had its own separate computer and operating system, in what was called a federated system.  Integrated Modular Avionics (IMA) systems were created to allow multiple applications to run on the same hardware.  In turn, the ARINC653 specification was created to standardize an Operating System for these platforms.  While it is called an operating system and could be implemented as such, it can also be implemented as a hypervisor running multiple virtual machines as partitions.  Since the transition from federated to IMA systems in avionics closely mirrors the transition to virtualized servers in the IT sector, the latter implementation seems more natural.

Beyond aviation, an ARINC 653 scheduler can be used where temporal isolation of domains is a top priority, or in security environments with indistinguishability requirements, since a malicious domain should be unable to extract information through a timing side-channel.  In other applications, the use of an ARINC 653 scheduler would not be recommended due to the reduced performance.

Scheduling Algorithm

The ARINC 653 scheduler in Xen provides the groundwork for the temporal isolation of domains from each other. The domain scheduling algorithm itself is fairly simple: a fixed, predetermined list of domains is repeatedly scheduled with a fixed periodicity, resulting in a complete and, most importantly, predictable schedule. The overall period of the scheduler is known as a major frame, while the individual domain execution windows in the schedule are known as minor frames.

[Figure: a major frame divided into minor frames]

As an example, suppose we have 3 domains with periods of 5, 6, and 10 ms and worst-case running times of 1 ms, 2 ms, and 3 ms respectively.  The major frame is set to the least common multiple of these periods (30 ms), and minor frames are selected so that the period, runtime, and deadline constraints are met.  One resulting schedule is shown below, though there are other possibilities.

[Figure: one possible schedule for the example above]
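For concreteness, here is one 30 ms major frame that satisfies all of the constraints above (worked out by hand for illustration; the idle gaps and the ordering could be arranged differently). Domains 1, 2 and 3 are the ones with periods of 5, 6 and 10 ms respectively, and all times are in ms:

    0-1   domain 1        10-11  domain 1        20-21  domain 1
    1-3   domain 2        11-14  domain 3        21-24  domain 3
    3-6   domain 3        14-16  domain 2        24-25  idle
    6-7   domain 1        16-17  domain 1        25-26  domain 1
    7-9   domain 2        17-18  idle            26-28  domain 2
    9-10  idle            18-20  domain 2        28-30  idle

Each domain receives its worst-case running time once in every one of its periods, and the whole pattern simply repeats every 30 ms.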

The ARINC 653 scheduler is only concerned with the scheduling of domains. The scheduling of real-time processes within a domain is performed by that domain’s process scheduler.  In a compliant ARINC 653 system, these processes are scheduled using a fixed priority scheduling algorithm, but if ARINC 653 compliance is not a concern any other process scheduling method may be used.

Using the Scheduler

Directions for using the scheduler can be found on the Xen wiki at ARINC653 Scheduler. When using the scheduler, the most obvious effect will be that the CPU usage and execution windows for each domain are fixed, regardless of whether the domain is performing any work.
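As a minimal sketch of what that looks like (assuming a GRUB 2 setup; the wiki page above has the authoritative instructions), the scheduler is selected on the hypervisor command line and the choice can be verified from dom0:

    # In /etc/default/grub, add the scheduler to the Xen command line, e.g.
    #   GRUB_CMDLINE_XEN="sched=arinc653"
    # then regenerate the grub config and reboot. Afterwards:
    xl info | grep xen_scheduler    # should report arinc653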

Currently multicore operation of the scheduler is not supported.  Extending the scheduling algorithm to multiple cores is trivial, but the isolation of domains in a multicore system requires a number of mitigation techniques not required in single-core systems.[2]

References

[1] ARINC Specification 653P1-3, “Avionics Application Software Standard Interface Part 1 – Required Services” November 15, 2010

[2] EASA.2011/6 MULCORS – Use of Multicore Processors in airborne systems

Posted in Xen Development.



Announcing the 1.0 release of Mirage OS

We’re very pleased to announce the release of Mirage OS 1.0. This is the first major release of Mirage OS and represents several years of development, testing and community building. You can get started by following the install instructions and creating your own webserver to host a static website! Also check out the release notes and download page.

What is Mirage OS and why is it important?

Most applications that run in the cloud are not optimized to do so. They inherently carry assumptions about the underlying operating system with them, including vulnerabilities and bloat.

Compartmentalization of large servers into smaller ‘virtual machines’ has enabled many new businesses to get started and achieve scale. This has been great for new services but many of those virtual machines are single-purpose and yet they contain largely complete operating systems which typically run single applications like web-servers, load balancers, databases, mail servers and similar services. This means a large part of the footprint is unused and unnecessary, which is both costly due to resource usage (RAM, disk space etc) and a security risk due to the increased complexity of the system and the larger attack surface.

[Diagram: a typical cloud application stack compared with a Mirage OS unikernel]

On the left, you see a typical application stack run in the cloud today. Cloud Operating systems such as MirageOS remove the Operating System and replace it with a Language Runtime that is designed to cooperate with the Hypervisor.

Mirage OS is a Cloud Operating System which represents an approach where only the necessary components of the operating system are included and compiled along with the application into a ‘unikernel’. This results in highly efficient and extremely lean ‘appliances’, with the same or better functionality but a much smaller footprint and attack surface. These appliances can be deployed directly to the cloud and embedded devices, with the benefits of reduced costs and increased security and scalability.

Some example use cases for Mirage OS include: (1) A lean webserver: for example, the openmirage.org website is about 1MB including all content, boots in about 1 second and is hosted on Amazon EC2. (2) Middle-box applications such as small OpenFlow switches for tenants in a cloud provider. (3) Easy reuse of the same code and toolchain that create cloud appliances to target space- and memory-constrained ARM devices.

How does Mirage OS work?

Mirage OS works by treating the Xen hypervisor as a stable hardware platform and using libraries to provide the services and protocols we expect from a typical operating system, e.g. a networking stack. Application code is developed in the high-level functional programming language OCaml on a desktop OS such as Linux or Mac OS X, and compiled into a fully standalone, specialized unikernel. These unikernels run directly on the Xen hypervisor APIs. Since Xen powers most public clouds such as Amazon EC2, Rackspace Cloud, and many others, Mirage OS lets your servers run more cheaply, securely and faster on those services.
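To give an idea of the developer workflow (a rough sketch based on the install instructions linked above; the exact commands and flags are from memory of the 1.0-era tooling and may differ, so treat the invocations below as assumptions):

    # Install the Mirage tooling through OPAM, then build an appliance for the
    # Xen target instead of the default Unix target.
    opam install mirage
    cd my-unikernel           # hypothetical project directory containing config.ml
    mirage configure --xen
    mirage build              # produces a standalone kernel image bootable by Xen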

Mirage OS is implemented in the OCaml language, with 50+ libraries which map directly to operating system constructs when compiled for production deployment. The goal is to make it as easy as possible to create Mirage OS appliances and to ensure that all the things found in a typical operating system stack are still available to the developer. Mirage OS includes clean-slate functional implementations of protocols ranging from TCP/IP, DNS, SSH, OpenFlow (switch/controller), HTTP and XMPP to Xen Project inter-VM transports. Since everything is written in a single high-level language, it is easier to work with those libraries directly. This approach guarantees the best possible performance of Mirage OS on the Xen Hypervisor without needing to support the thousands of device drivers found in a traditional OS.

[Chart: BIND 9 vs. Mirage OS throughput comparison]

Performance comparison of Bind 9 vs. a DNS server written in Mirage OS.

An example of a Mirage OS appliance is a DNS server; the chart above shows a comparison with one of the most widely deployed DNS servers on the internet, BIND 9. As you can see, the Mirage OS appliance outperforms BIND 9, and in addition the Mirage OS VM is less than 200kB in size, compared to over 450MB for the BIND VM. Moreover, the traditional VM contains 4-5 times more lines of code than the Mirage implementation, and lines of code are often considered correlated with attack surface. More detail about this comparison and others can be found in the associated ASPLOS paper.

For the DNS appliance above, the application code was written using OCaml and compiled with the relevant Mirage OS libraries. To take full advantage of Mirage OS it is necessary to design and construct applications using OCaml, which provides a number of additional benefits such as type-safety. For those new to OCaml, there are some excellent resources to get started with the language, including a new book from O’Reilly and a range of tutorials on the revamped OCaml website.

We look forward to the exciting wave of innovation that Mirage OS will unleash including more resilient and lean software as well as increased developer productivity.

Posted in Announcements.
