Using OpenStack images on XenServer – Fedora 22, CentOS 7

For a long time, I’ve been using kickstart scripts (on GitHub) to set up Fedora and CentOS virtual machines on a XenServer host. In the last year or so, the trend of cloud computing has led distributions to release prebuilt “cloud” images in OpenStack-compatible qcow2 or raw disk format, which happen to be broadly compatible with hypervisors. Fedora Cloud’s introduction with F21 prompted me to look into ways of using cloud-init/cloud-config without an entire private cloud infrastructure.

It should no longer be necessary to use a kickstart just to install a new VM, because the distributions’ prebuilt images work well on XenServer after a short conversion process.

(Kickstart scripts remain useful for customizing an image, of course; they are often the mechanism with which Linux distros build such images.)

What are prebuilt images?

When I say “prebuilt images”, I mean VM hard disk files released by the Linux distribution. For instance, Fedora 22’s Cloud Base and Atomic Host images are provided in qcow2 and xz’d raw files:

Fedora 22 Cloud Base and Atomic Host images

These releases are designed to work in actual cloud infrastructure—meaning a compute hypervisor (usually KVM), a metadata service that supplies configuration like hostname and networking at boot time, and some APIs that can programmatically affect the virtual machine’s behaviour and configuration. OpenStack is the leading example.

But OpenStack is overkill when you’re just virtualizing a handful of VMs. You don’t need a private cloud when you’re not running a cluster or spinning up machines programmatically. That’s exactly why I found myself running XenServer.

Nonetheless, unless you’re using Xen full paravirtualization (which there are now good reasons to avoid), these images should work broadly across all major hypervisors (QEMU-KVM, VirtualBox, Xen PVHVM, VMware, etc.) with minor format tweaks.

How to convert a prebuilt image for use in XenServer

Broadly, there are three steps in the process, the first of which is most important:

  1. Convert qcow2 disk image to VHD.
  2. Import VHD in XenCenter.
  3. Customize imported machine and convert to template.

You can optionally also export the template to an XVA file.
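
If you want that optional export, the xe CLI’s template-export subcommand should do it; a quick sketch (looking up the template UUID by its name-label, with both names illustrative):

$ TEMPLATE=$(xe template-list name-label='Fedora-Cloud-Base-22-20150521' --minimal)
$ xe template-export template-uuid=$TEMPLATE filename=Fedora-Cloud-Base-22-20150521.xva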

1. Convert qcow2 to VHD

The qemu-img utility can do this. Use your package manager of choice to install (e.g. yum install qemu-img or dnf install qemu-img on F22+). You should do this on another Linux machine (even a VM is okay), because messing with the Xen dom0 is not recommended.

Locate your downloaded *.qcow2 file, which might look something like Fedora-Cloud-Base-22-20150521.x86_64.qcow2. If it’s compressed, like CentOS-Atomic-Host-7.1.2-GenericCloud.qcow2.xz, decompress it first.
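
The decompression step is a one-liner (xz is in the xz package on Fedora/CentOS; -k keeps the compressed original around):

$ xz -d -k CentOS-Atomic-Host-7.1.2-GenericCloud.qcow2.xz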

Use the command $ qemu-img convert -f qcow2 -O vpc [input file] [output file] to do the conversion (“vpc” is QEMU’s name for the Virtual PC/VHD format). For example,

$ qemu-img convert -f qcow2 -O vpc Fedora-Cloud-Base-22-20150521.x86_64.qcow2 Fedora-Cloud-Base-22-20150521.x86_64.vhd

2. Import the new VHD

If you have XenCenter installed on Windows, use the File -> Import… option to load the VHD. Follow the prompts to set up the VM’s CPU, memory, storage, and networking allocations.

Manual import on command line

Ugh, not using the UI? That means a whole lot more work to import. Are you sure about this???

If you do not have access to XenCenter, it’s a more involved process.

Transfer the newly converted disk image to the hypervisor dom0, such as by copying it into a shared storage location (e.g. an NFS image library); from there, you can use xe vdi-import to load the VHD.
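
For the transfer step itself, plain scp to the dom0 is enough (the hostname and destination path here are hypothetical):

$ scp Fedora-Cloud-Base-22-20150521.x86_64.vhd root@xenserver.example.com:/var/tmp/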

First, get the size of the disk image with $ qemu-img info [VHD file]. Note the size in bytes.

$ qemu-img info Fedora-Cloud-Base-22-20150521.x86_64.vhd
image: Fedora-Cloud-Base-22-20150521.x86_64.vhd
file format: vpc
virtual size: 3.0G (3221471232 bytes)
disk size: 516M
cluster_size: 2097152

Create a VDI in XenServer using the command line tool to hold this new data:

# set SIZE to size in bytes, e.g.
$ SIZE=3221471232
# set SR to the UUID of a storage repository in which to store the VDI
$ SR=$(xe sr-list name-label='NFS virtual disk storage' --minimal)
$ UUID=$(xe vdi-create name-label=Fedora-Cloud-Base-22-20150521.x86_64 virtual-size=$SIZE sr-uuid=$SR type=user)

Then load the VHD:

$ xe vdi-import uuid=$UUID filename=Fedora-Cloud-Base-22-20150521.x86_64.vhd format=vhd --progress

If all has gone well, you get output to the effect of

[|] ######################################################> (100% ETA 00:00:00) 
Total time: 00:00:24

You can check that it’s there by doing

$ xe vdi-list uuid=$UUID

It’s time to make a VM (important: it must be PVHVM) to which to attach this VHD. You’ll need to create the CD drive, set up networking, etc., all on the command line. The CD drive should be loaded with a cloud-init/cloud-config datasource ISO; a sketch of building one follows the vm-cd-add command below. (Aren’t you regretting not using the GUI now?)

$ VM=$(xe vm-install new-name-label=Fedora-Cloud-Base-22-20150521 template='Other install media')
# make an optical drive, which you might need for cloud-init
$ xe vm-cd-add cd-name='cloud-init-example.iso' vm=$VM device=3
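
That cloud-init-example.iso is a placeholder name. One way to build such a datasource is cloud-init’s NoCloud seed format: an ISO labelled cidata containing user-data and meta-data files. A minimal sketch (the instance name and SSH key are illustrative), after which the ISO must be copied into an ISO storage repository so that vm-cd-add can find it by name:

# meta-data: minimal instance identity
$ printf 'instance-id: fedora-cloud-01\nlocal-hostname: fedora-cloud-01\n' > meta-data
# user-data: a cloud-config that authorizes an SSH key
$ printf '#cloud-config\nssh_authorized_keys:\n  - ssh-rsa AAAA... user@example\n' > user-data
# the cidata volume label is what cloud-init's NoCloud datasource looks for
$ genisoimage -output cloud-init-example.iso -volid cidata -joliet -rock user-data meta-data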

# get the list of networks and their UUIDs; select one
$ xe network-list
# the following line is an example
$ xe vif-create network-uuid=b4187ad6-916e-d1d4-90a7-2b7f1353bca2 vm-uuid=$VM device=0

Now, create the virtual block device (VBD) that associates the VHD disk image with the VM.

$ VBD=$(xe vbd-create vm-uuid=$VM device=0 vdi-uuid=$UUID bootable=true mode=RW type=Disk)


The VM is now ready, either to be booted or to be stored as a template! (You’ll likely want to adjust its CPU and RAM first; a rough sketch follows, though tuning is outside the scope of this guide.)
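
Assuming the 2 vCPU / 2 GB sizing used elsewhere in these posts, that adjustment and first boot could look like this (memory limits are given in bytes):

# 2 vCPUs
$ xe vm-param-set uuid=$VM VCPUs-max=2 VCPUs-at-startup=2
# 2 GiB of RAM, with all four limits pinned to the same value for simplicity
$ xe vm-memory-limits-set uuid=$VM static-min=2147483648 dynamic-min=2147483648 dynamic-max=2147483648 static-max=2147483648
# boot it
$ xe vm-start uuid=$VM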

3. Customize and convert to template

I like to convert the now-ready VM to a template before using it for anything. This makes it a lot easier to deploy from this point onward. It’s also helpful to tweak the default CPU/memory parameters if desired.

When it’s ready, you can select a halted VM, and choose VM -> Convert to Template… in XenCenter. The equivalent for the xe CLI is something I haven’t figured out yet; the process might require taking a snapshot, and copying the snapshot to become a template.
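
That said, one approach that should work (a sketch I haven’t verified against every XenServer version): every VM record has an is-a-template parameter, and setting it on a halted VM converts that VM into a template in place.

$ xe vm-shutdown uuid=$VM
$ xe vm-param-set uuid=$VM is-a-template=true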


Installing a Puppet master on CentOS 7 with nginx and Unicorn

Puppet master node successful test

I was experimenting with configuration management tools, and wanted to set up a Puppet master node for my virtualized machines.

It is unfortunate that most guides out there today are tailored specifically for Ubuntu, or rely on prepackaged DEBs that do all the work (which, obviously, don’t really help on CentOS/Fedora/RedHat). This guide on DigitalOcean for setting up a Puppet master on Ubuntu 14.04 is pretty solid, but my own preferences are for CentOS and Fedora. Furthermore, I have completely migrated to nginx on all my servers, though many deployment guides for Puppet still use Apache and Passenger. And the closest match I could find, a guide for CentOS 6 with nginx and Unicorn, relied on SysVinit and God, both of which are unnecessary now that systemd has been adopted.

(If you’re not as picky, just use Foreman Installer. It’ll take care of everything… even on CentOS 7.)

This guide will use:

  • CentOS 7 (at the time of writing, latest release)
    • systemd
  • nginx 1.7.x (mainline, from official nginx repository)
  • Unicorn
  • Puppet open source 3.7.x

Continue reading “Installing a Puppet master on CentOS 7 with nginx and Unicorn”

PVHVM CentOS 7 on XenServer

In this post:

  1. Benchmarks
  2. Prebuilt image
  3. Kickstart script

Following my previous post on running CentOS 7 and Ubuntu 14.04 as fully-paravirtualized guests on XenServer, I ran some benchmarks to compare the relative performance of fully-paravirtualized (henceforth abbreviated PV) domUs against HVM guests using paravirt drivers and interrupts/timers (henceforth PVHVM).

The performance difference between the two types has been studied for some time. Once upon a time, PV was undoubtedly faster, free of the overhead associated with full hardware emulation. But with the hardware virtualization features that processors have offered for the last few years, PVHVM (which pairs that hardware support with a Linux kernel that recognizes it is running as a virtual guest) has surpassed PV performance in most arenas.

Benchmarks

Benchmarks have severe limitations. The statistics here are only meant to be compared relatively among themselves—between the PV and PVHVM guests running exactly the same specs and software. It would be a futile exercise to compare them against VMs running on other servers, which might have better SANs, lighter workloads, or faster CPUs and RAM. The specific test profiles in the Phoronix software are also based on outdated versions of Apache httpd and nginx, which makes them unreliable for assessing real-world performance.

Some of the relevant comparisons:

It’s worth noting that CentOS 7 with a 3.10 kernel performed poorly compared to other distributions: both Fedora 20 (kernel 3.15) and Ubuntu 14.04 (kernel 3.13) outperformed CentOS on web-serving workloads (not shown). But the evidence showed fairly conclusively that PVHVM performed better than PV across all of the operating systems tested.

Prebuilt image

Update (2017-04-28): Because these images are now out of date and insecure, the .xva images have been deleted. You should instead use the distribution’s latest cloud images in .qcow2 format, converted for XenServer.

To that end, I’d like to offer a prebuilt CentOS 7.0.1406 image that runs in PVHVM on XenServer. You should feel free to choose between this and the PV version from my previous post, depending on your needs. (If you need to accommodate higher density on your server, you might be better off with PV. Run benchmarks yourself to decide what you should use.)

As before, you can decompress (xz -d ___.xva.xz or use your GUI of choice) then import through XenCenter (File – Import…) or the command line (xe vm-import filename=___.xva); a concrete command sequence follows the image details below.

This image is provided with no guarantees. Please let me know in the comments if you find an issue with it.

  • CentOS 7.0.1406 (as of 2014-07-31)
    Filename: centos-7.0.1406-20140731-pvhvm-template.xva.xz
    Size: 325 MB xz-compressed; 1.4 GB decompressed
    Specs: 2 vCPUs, 2 GB RAM, 8 GB disk without swap, installed software, with XenServer Tools 6.4.93
    SHA256 hash: c3ef221ae886cea4c3be09996d0cb2049dc2ac8f10dd5323f85beee25ce9d4cd
    MD5 hash: 44583aa3cdbf1db1c62b2db05530ce6f
    Username: centos
    Password: Asdfqwerty
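
Concretely, the verify/decompress/import sequence for this image would look like the following (assuming the hashes above were computed over the compressed file):

$ echo 'c3ef221ae886cea4c3be09996d0cb2049dc2ac8f10dd5323f85beee25ce9d4cd  centos-7.0.1406-20140731-pvhvm-template.xva.xz' | sha256sum -c -
$ xz -d centos-7.0.1406-20140731-pvhvm-template.xva.xz
$ xe vm-import filename=centos-7.0.1406-20140731-pvhvm-template.xva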

Kickstart script

A PVHVM system requires no special accommodations when installing, except that UEFI and GPT are not certain to be supported. Merely select the “Other install media” option in XenCenter, and use a standard installer ISO/DVD. Do NOT use any of the CentOS or RHEL templates! Those will create PV guests.

An automated kickstart like the one used to create the image above may help you build a generic template. Hit <Tab> at the CentOS DVD menu and append a ks=__ parameter to use a kickstart script hosted at an HTTP location.
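
For example, the amended boot line might end up looking like this (the rest of the line is whatever the DVD menu already provides; as noted elsewhere in these posts, you may need to host the script on your own HTTP server if the installer cannot fetch it from GitHub):

> vmlinuz initrd=initrd.img ... ks=https://github.com/frederickding/xenserver-kickstart/raw/develop/centos-7.0/cent70-server-pvhvm.ks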

The image above was built with the cent70-server-pvhvm.ks script at revision e278f2a8139fb624bc2cdcd9a80d8b51b7910de3, embedded below. If there are any updates to this script, they will be added to the develop branch on GitHub. You can also edit it yourself before deploying.

[github file=/frederickding/xenserver-kickstart/blob/e278f2a8139fb624bc2cdcd9a80d8b51b7910de3/centos-7.0/cent70-server-pvhvm.ks][/github]

Did this help you?

If you were able to use this image or the kickstart, I’d appreciate a brief comment to let me know it worked for you. I’d hope that the bandwidth costs are going to good use!

Paravirtualized CentOS 7 and Ubuntu 14.04 on XenServer

Update (2015-10-21): since the images below have not been updated since July 2014, I highly recommend that you no longer use them; instead, you should use a kickstart script to install from the latest packages. Also, as I’ve noted in the comments, PVHVM will likely perform better overall.

Update (2017-04-28): the .xva files have now been deleted; instead, see this new guide for converting .qcow2 cloud images for use with XenServer.

One of the most frequently visited blog posts on my site is a guide to installing paravirtualized Fedora 20 on XenServer using an automated kickstart file. With the recent releases of RedHat Enterprise Linux 7 (and the corresponding CentOS 7 — versioned at 7.0.1406) and Ubuntu 14.04 LTS “Trusty Tahr”, as well as prerelease versions of the next iteration of XenServer, I thought it was time to revisit this matter and show you the scripts for optimized paravirtualized guests running the newest versions of CentOS and Ubuntu.

Table of Contents

  1. Prebuilt images
  2. OpenStack gripes
  3. XenServer version differences
  4. Kickstart scripts
    1. CentOS 7
    2. Ubuntu 14.04
  5. Installation instructions
    1. CentOS 7
    2. Ubuntu 14.04

Prebuilt images for the lazy

If you’re lazy, you can skip the process and download prebuilt XenServer images that you can decompress (xz -d ___.xva.xz or use your GUI of choice) then import through XenCenter (File – Import…) or the command line (xe vm-import filename=___.xva). These images do not have XenServer Tools installed, because you should install them yourself using the tools that match your XenServer version.

These images are provided with no guarantees. Please let me know (comments below are fine) if you find an issue with them.

  • CentOS 7.0.1406 (as of 2014-07-16)
    Filename: centos-7.0.1406-20140716-template.xva.xz
    Size: 322 MB xz-compressed; 1.6 GB decompressed
    Specs: 2 vCPUs, 2 GB RAM, 8 GB disk without swap, installed software
    SHA256 hash: ab69ee14476120f88ac2f404d7584ebb29f9b38bdf624f1ae123bb45a9f1ed94
    MD5 hash: 91e3ce39790b0251f1a1fdfec2d9bef0
    Username: centos
    Password: Asdfqwerty
  • Ubuntu 14.04 LTS (as of 2014-07-16)
    Filename: ubuntu-14.04-20140716-template.xva.xz
    Size: 549 MB xz-compressed; 1.9 GB decompressed
    Specs: 2 vCPUs, 2 GB RAM, 8 GB disk including 1 GB swap, installed software
    SHA256 hash: 1c691324d4e851df9131b6d3e4a081da3a6aee35959ed3defc7f831ead9b40f2
    MD5 hash: e2ed6cfb629f916b9af047a05f8a192d
    Username: ubuntu
    Password: Asdfqwerty

Side note on OpenStack

It’s true that private cloud IaaS tools like OpenStack have been growing in popularity, and increasingly, vendors are distributing cloud images suitable for OpenStack (see Fedora Cloud images). My instructions in the rest of this blog post won’t help you build images for an IaaS platform. You might as well just get the vendor cloud images if you’re going to be using OpenStack.

You can skip down to the next heading if you don’t want to read about my experiences with OpenStack.

OpenStack isn’t right for everyone

I recently tested OpenStack + KVM on an HP baremetal server with 12 physical cores and 48 GB of RAM. Despite the simplified installation process enabled by RedHat, it didn’t fit my needs, and I went back to XenServer. OpenStack was a mismatch for my use case and also has a few infrastructural problems; hopefully someone reading this will be able to tell me if I’m out of my mind or if these are actually legitimate concerns:

  • Size of deployment. Even though it can be used on a single baremetal server, OpenStack is optimal for deployments involving larger private clouds with many servers. When working with a single host, the complexity wasn’t worth my time. This is where admins need to judge whether they fall on the virtualization side or the cloud side of a very blurry line.
  • Complex networking. Networking in OpenStack using Neutron follows an EC2 model with floating IPs, though there are various “flat” options that will more simply bridge virtual networks. The floating IP model is poorly suited to situations when the public Internet-routable network has an existing external DHCP infrastructure, and no IPs or IP ranges can be reserved.
  • Abstraction. From what I could tell, there were ridiculous levels of abstraction. On a single-host node that hosts the block storage service (Cinder) as well as the virtualization host (Nova), an LVM logical volume created by Cinder would be shared as an iSCSI target, mounted by the same machine, and only then exposed to qemu-kvm by the Nova compute service.
  • Resource overhead. The way that packstack deployed the software on a CentOS 7 server placed OpenStack—compute service (Nova), block storage (Cinder), object storage (Swift), image storage (Glance), networking (Neutron), identity service (Keystone), and control panel (Horizon)—and all its dependency components—MariaDB, RabbitMQ, memcache, Apache httpd, KVM hypervisor, Open vSwitch, and whatever else I’m forgetting—on the nonvirtualized baremetal operating system. That’s a ton of services, and attack surface, for the host… And the worst part: because each of those programs realized that the server has 48 GB of physical RAM, they all helped themselves to as much as they could grab. MariaDB was configured automatically with huge memory buffers; RabbitMQ seemed to claim more than 3 GB of virtual memory. By the time any virtual guests had been started up, the baremetal system was reporting at least 7-9 GB of used RAM!

That’s when I had enough. Technical benefits of KVM aside, and management capabilities of OpenStack aside, I decided to move firmly back into virtualization territory. XenServer’s minimal dom0 design and light overhead was much more suitable for my needs.

Note your XenServer version

XenServer Creedence requires no fixes

XenServer Creedence alpha 4—the most recent prerelease version that I am using—uses a newer Xen hypervisor and bundled tools. Consequently, it seems to have a patched version of pygrub that can read the CentOS 7 grub.cfg, which uses the keywords linux16 and initrd16, and which is no longer affected by the same parsing bugs on default="${next_entry}" that necessitated the fixes at the end of the post-installation script.
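
For the curious, the constructs in a stock CentOS 7 grub.cfg that trip up the older pygrub look roughly like this (an illustrative excerpt, not a complete file; the kernel version is an example):

set default="${next_entry}"
menuentry 'CentOS Linux (3.10.0-123.el7.x86_64) 7 (Core)' {
    linux16 /vmlinuz-3.10.0-123.el7.x86_64 root=/dev/mapper/centos-root ro
    initrd16 /initramfs-3.10.0-123.el7.x86_64.img
}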

Fixes needed by XenServer 6.2

However, XenServer 6.2 cannot handle the out-of-box installation (ext4 /boot partition, GPT, etc) under paravirtualization without additional customization. Kickstart scripts are still the easiest way to ensure that the guests are bootable out of the box, by predefining a working partition scheme, selecting a minimal package set, fixing the bootloader script, and generalizing the installation.

Additionally, XenServer 6.2 lacks a compatible built-in template for Ubuntu 14.04. Thus, it cannot use netboot to install 14.04; you must use the 14.04 server ISO image to install.

The scripts to do it yourself

CentOS 7

I determined that the true minimal @core installation is too minimal for typical needs (it doesn’t come with bind-utils, lsof, zip, etc) so this image is installed with the @base group. About 456 packages are included.

[github file="/frederickding/xenserver-kickstart/blob/develop/centos-7.0/cent70-server.ks"]

Ubuntu 14.04

[github file="/frederickding/xenserver-kickstart/blob/develop/ubuntu-14.04/trusty-server.ks"]

The process to do it yourself

CentOS 7

  1. Use the CentOS 6 template for a baseline.
    XenCenter add VM wizard - CentOS 6 template
  2. Give your VM a name. (screenshot)
  3. IMPORTANT: Boot up a CentOS 7 installer with parameters. You can use the netboot ISO, or boot directly from an HTTP mirror (e.g. http://mirror.rackspace.com/CentOS/7.0.1406/os/x86_64/). This is also the screen where you specify the boot parameters:
    console=hvc0 utf8 nogpt noipv6 ks=https://github.com/frederickding/xenserver-kickstart/raw/develop/centos-7.0/cent70-server.ks
    Note: you may have to host the kickstart script on your own HTTP server, since occasional issues, possibly SSL-related, have been observed with netboot installers being unable to fetch the raw file through GitHub.
    Installing CentOS 7 with a URL to boot from
  4. Set a host server. (screenshot)
  5. Assign vCPUs and RAM; Anaconda demands around 1 GB of memory when no swap partition is defined. (screenshot)
  6. Create a primary disk for the guest. Realistically, you need only 1-2 GB for the base installation, but XenServer may force you to set a minimum of 8 GB. No matter what size you set here, the kickstart script will make the root partition fill the free space. (screenshot)
  7. IMPORTANT: Configure networking for the guest. It’s critical that this works out of the box (i.e. DHCP), since the script asks Anaconda to download packages from the HTTP repositories. (screenshot)
  8. Finish the wizard and boot up the VM.
    Ready to boot up the installer
  9. The VM will boot into the CentOS 7 installer, which will run without interaction until it completes.
  10. Press <Enter> to halt the machine. At this point, you can remove the ISO (if any).
  11. Boot up the VM. It should go right into the login screen on the command line — from which you can do further configuration as needed.
    Paravirtualized CentOS 7 installed in XenServer

Ubuntu 14.04

As mentioned above, this process will differ slightly if you are on XenServer 6.2 or older.

  1. On XenServer Creedence: Use the Ubuntu 14.04 template.
    XenCenter add VM wizard - Ubuntu 14.04 template
    On XenServer 6.2 or older: Use the Ubuntu 12.04 template for a baseline.
    XenCenter add VM wizard - Ubuntu 12.04 template
  2. Give your VM a name. (screenshot)
  3. IMPORTANT: On any version of XenServer: Boot up the 14.04 server ISO installer with parameters. You cannot use the netboot ISO.
    On XenServer Creedence only: You can boot from an HTTP mirror, such as http://us.archive.ubuntu.com/ubuntu/.
    Installing Ubuntu 14.04 requires the server ISO
    This is also the screen where you specify the boot parameters: append ks=https://github.com/frederickding/xenserver-kickstart/raw/develop/ubuntu-14.04/trusty-server.ks to the existing parameters line.
    Note: you may have to host the kickstart script on your own HTTP server, since issues, possibly SSL-related, have been observed with netboot installers being unable to fetch the raw file through GitHub.
  4. Set a host server.
  5. Assign vCPUs and RAM.
  6. Create a primary disk for the guest. Realistically, you need only about 2 GB for the base installation, but XenServer may force you to set a minimum of 8 GB. No matter what size you set here, the kickstart script will make the root partition fill the free space.
  7. IMPORTANT: Configure networking for the guest. It’s critical that this works out of the box (i.e. DHCP), since the script asks the installer to download packages from online repositories.
  8. Finish the wizard and boot up the VM.
  9. The VM will boot into the Ubuntu installer, which will run without interaction until it completes.

    Note: if you are warned that Grub is not being installed, you can nevertheless safely proceed with the installation.

  10. Press <Enter> to halt the machine. At this point, you can remove the ISO (if any).
  11. Boot up the VM. It should go right into the login screen on the command line — from which you can do further configuration as needed, such as installing XenServer Tools.

Final thoughts

I recognize that these instructions require the use of a Windows program—XenCenter. I have not tried to conduct this installation using command line tools only. If you are a user without access to a Windows machine from which to run XenCenter, you can nevertheless deploy the kickstart-built XVA images above using nothing more than 2 or 3 commands on the dom0, sketched below. If anyone can come up with a process to run through a kickstart-scripted installation using the xe shell tools, please feel free to share in the comments below.
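
Those 2 or 3 commands amount to (using the CentOS image above as an example):

$ xz -d centos-7.0.1406-20140716-template.xva.xz
$ xe vm-import filename=centos-7.0.1406-20140716-template.xva
# vm-import prints the new VM's UUID, which you can use to start it
$ xe vm-start uuid=<UUID printed by vm-import>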

I hope this has helped! I welcome your feedback.