
So why won’t EBS work for Windows images in UEC/Eucalyptus 1.6?

March 14, 2010

I’ve been trying to find an answer to this one for a little while, on and off.  The problem isn’t actually with Eucalyptus itself, as it can be reproduced under a plain Ubuntu libvirt / KVM setup.  Basically, start a SCSI Windows image using virt-manager, then try to attach a second disk using virsh, with something like:

virsh attach-device <my domain> <path to my disk snippet>

The disk snippet is of the format:

<disk type='file' device='disk'>
<source file='<path to disk image>'/>
<target dev='sdb' bus='scsi'/>
</disk>

This basically replicates the process Eucalyptus follows ‘under the hood’ when attaching an EBS volume.

In the above, the disk attaches fine, but isn’t recognised in the Windows guest, even after a SCSI bus rescan.  The closest you can get to the device being recognised is to “Add/Remove hardware”, which leaves you with a second SCSI controller, albeit a broken one.

The answer comes in Daniel Berrange’s blog entry, in the section “Disk controllers and drive addressing”:

At around the same time that I was looking at static PCI addressing, Wolfgang Mauerer, was proposing a way to fix the SCSI disk hotplug support in libvirt. The problem here was that at boot time, if you listed 4 SCSI disks, you would get 1 SCSI controller with 4 SCSI disks attached. Meanwhile if you tried hotplugging 4 SCSI disks, you’d get 4 SCSI controllers each with 1 disk attached. In other words we were faking SCSI disk hotplug, by attaching entirely new SCSI controllers.

So there’s the explanation: libvirt is “hotplugging” by adding additional SCSI controllers.  Whilst Linux can cope with this sort of abuse, Windows can’t, and hence fails to add the device.  The necessary fix is detailed in this patch series.
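To make the fix concrete, here’s a rough sketch of what a controller-aware hotplug looks like in the domain XML once a patched libvirt is in place: the disk carries an explicit address on the existing controller instead of libvirt spawning a new one. The file path and address values below are placeholders, not taken from the patch series itself:

```xml
<!-- Sketch: the <address> element pins the hotplugged disk to the
     existing SCSI controller (controller 0) at the next free unit. -->
<disk type='file' device='disk'>
  <source file='/path/to/volume.img'/>
  <target dev='sdb' bus='scsi'/>
  <address type='drive' controller='0' bus='0' unit='1'/>
</disk>
```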

I guess the solution for Ubuntu users is to either:

  • Put together an updated .deb for Karmic with the required patches applied
  • Wait until Lucid is released, which includes a more recent version of the Libvirt 0.7.x series

Running Windows on Eucalyptus (Improved)

October 14, 2009

I’ve previously written about running Windows on Eucalyptus here, using a method that involved using parts of the Windows boot chain to create a Eucalyptus ramdisk.  This works well enough, but can be quite time-consuming to set up.

I’ve since come up with a better way, using only open source components, which means I can make it all freely available!  And in addition, there’s a perl script included to create randomised Windows access credentials.  Read on…

Creating the image

As before, you need to create a KVM base image of the Windows variant you’re wanting to install.  I haven’t tried a Xen equivalent, but there’s no reason that it shouldn’t work also.  Vital prerequisites are:

  • The virtual disk of the base image MUST be configured as SCSI, as this is what Eucalyptus expects.
  • The disk should have a minimum of two partitions: the first contains the operating system installation; the second can be as small as you wish, and is present to instruct Eucalyptus not to try to run Linux disk operations on the image (thanks to EtienneG for this tip; see comments on the previous post).
  • Ensure that Remote Desktop is enabled, or you won’t be able to access the running image.
  • If using Eucalyptus 1.6 (perhaps as part of the UEC), you’ll need to install the e1000 network driver to the image.  Instructions for this are here.
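To make the SCSI requirement concrete, here’s a sketch of what the base image’s disk definition should look like in the libvirt domain XML (the image path is a placeholder):

```xml
<!-- The critical part is bus='scsi'; a bus='ide' or bus='virtio' disk
     won't match what Eucalyptus expects when it launches the image. -->
<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/winxp.img'/>
  <target dev='sda' bus='scsi'/>
</disk>
```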

Installing the random credential generator

Download and extract the win-euca-blobs tarball, and copy the AWS directory to the root of the Windows image C:\ drive (if you wish to put this elsewhere, you’ll need to modify the path in AWS\set-admin-password.bat).

Next, install a perl runtime onto the image (I’ve used the ActiveState one found here, which is free, and seems to work well).

Finally, add AWS\set-admin-password.bat as a startup script for the virtual image, following the instructions found on this page.  You can test the script by adding the following lines to the libvirt xml of your image, in the <devices> section:

<serial type='file'>
<source path='/tmp/console.log'/>
<target port='1'/>
</serial>

On the next startup of the VM, something like the following should appear in /tmp/console.log:

** Access Credentials **

Username: administrator
Password: ef11kr4o

I’ve tested the above script on Windows XP, and I’d imagine it should also work just fine on Server 2003 and Windows 2000.  Beyond that, your mileage may vary.  And, if anyone more Windows-savvy would like to re-implement the script in .vbs or similar and contribute it, I’ll gladly update the posting!
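For anyone curious what the credential step boils down to, here’s a minimal shell sketch of the same idea: generate a short random lowercase-alphanumeric password and emit it in the console-log format shown above. (The real AWS\set-admin-password.bat additionally applies the password to the administrator account.)

```shell
# Sketch only: build an 8-character password from /dev/urandom and
# print it in the format that ends up in /tmp/console.log.
PASS=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 8)
echo "** Access Credentials **"
echo ""
echo "Username: administrator"
echo "Password: $PASS"
```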

Create the bundle

Finally, bundle your image using the memdisk and win-grub.img files supplied in the win-euca-blobs tarball, using something like:

mkdir kernel
ec2-bundle-image -i /path-to/memdisk -d ./kernel --kernel true
ec2-upload-bundle -b kernel -m ./kernel/memdisk.manifest.xml
EKI=`ec2-register kernel/memdisk.manifest.xml | awk '{print $2}'`
echo $EKI

mkdir ramdisk
ec2-bundle-image -i /path-to/win-grub.img -d ./ramdisk --ramdisk true
ec2-upload-bundle -b ramdisk -m ramdisk/win-grub.img.manifest.xml
ERI=`ec2-register ramdisk/win-grub.img.manifest.xml | awk '{print $2}'`
echo $ERI

mkdir image
ec2-bundle-image -i /path-to/<windows_disk>.img -d ./image --kernel $EKI --ramdisk $ERI
ec2-upload-bundle -b image -m ./image/<windows_disk>.manifest.xml
EMI=`ec2-register image/<windows_disk>.manifest.xml | awk '{print $2}'`
echo $EMI

Launch it!

The image should start in the same way as any other.  To retrieve the login credentials, run ec2-get-console-output (or the euca2ools equivalent – that’s what I’m using these days).
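Fishing the password back out of the console output is then a one-liner. In this sketch the console text is stubbed with a here-doc so it stands alone; in practice you’d pipe the output of ec2-get-console-output (or euca-get-console-output) instead:

```shell
# Stub standing in for `ec2-get-console-output <instance>`; the awk
# line below is what extracts the password from the real output.
console_output() {
cat <<'EOF'
** Access Credentials **

Username: administrator
Password: ef11kr4o
EOF
}
console_output | awk '/^Password:/ {print $2}'   # prints the password
```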

Elasticfox for Eucalyptus

October 5, 2009

As some of you are aware, the latest version (1.7) of Elasticfox doesn’t work without pain on Eucalyptus.  This is because of the addition of VPC (Virtual Private Cloud) functionality – and a resulting upgrade to the EC2 API version used – which breaks Eucalyptus compatibility.

Elasticfox 1.6 worked pretty well with Eucalyptus, but is no longer available for direct download.  It can easily be built from Subversion, however.  Here’s how:

Checkout revision 107 of Elasticfox

On Ubuntu, check out with the following command:
$ svn co -r 107 elasticfox

This will extract the Elasticfox source from the repository into the elasticfox directory.

Configure the build environment

Basically, you just need the java jar command to be available, which is part of the Sun Java Development Kit.  On Ubuntu:

$ sudo apt-get install sun-java6-jdk

$ export JAVA_HOME=/usr/lib/jvm/java-6-sun

Build Elasticfox

Switch to the elasticfox directory, then execute the script:

$ cd elasticfox

$ sh ./

This creates an .xpi package of Elasticfox under the dist directory.

Install in Firefox

In Firefox, click File –> Open File.  Navigate to the elasticfox/dist directory, and select the file elasticfox-1.6.000107.xpi.  Hit Open to install the Elasticfox plugin.

Configure Elasticfox for Eucalyptus

This is pretty simple, and will require you to have the Eucalyptus eucarc file somewhere handy.  From the Firefox Tools menu, select Elasticfox.  This will prompt for your EC2 credentials; enter these as follows:

Account Name: value from EC2_USER_ID= in eucarc

AWS Access Key: value from EC2_ACCESS_KEY= in eucarc

AWS Secret Access Key: value from EC2_SECRET_KEY= in eucarc

Now, add your Eucalyptus deployment as a Region.  To do this, click Regions in the top left, then add using the following:

Region Name: you can choose this

Endpoint URL: value from EC2_URL= in eucarc
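If you’d rather not hunt through eucarc by hand, the values can be pulled out in the shell. This sketch writes and sources a stub eucarc (the values are illustrative only) – with a real deployment, just source your actual eucarc instead:

```shell
# Stub eucarc for illustration; echo back the fields Elasticfox asks for.
EUCARC=$(mktemp)
cat > "$EUCARC" <<'EOF'
export EC2_USER_ID='admin'
export EC2_ACCESS_KEY='WKy...'
export EC2_SECRET_KEY='4Zb...'
export EC2_URL='http://192.168.1.1:8773/services/Eucalyptus'
EOF
. "$EUCARC"
echo "Account Name:          $EC2_USER_ID"
echo "AWS Access Key:        $EC2_ACCESS_KEY"
echo "AWS Secret Access Key: $EC2_SECRET_KEY"
echo "Endpoint URL:          $EC2_URL"
rm -f "$EUCARC"
```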

And you can now control your Eucalyptus instance using Elasticfox!

Running Windows on Eucalyptus

August 5, 2009

This post has now been superseded.  See here for the update.

This post details how to create a Microsoft Windows image to run on a Eucalyptus instance with nodes running KVM. In my example, I’ve created an XP image; a similar methodology can be used for Server 2003 and other NT versions.

The approach should theoretically also work on paravirtualised Xen nodes, though I haven’t tried it!

Windows XP running on Eucalyptus


Create the base image

You first need to create a Windows base image using KVM.  There are already plenty of how-tos around explaining this process, so I won’t go into any detail here.  The critical part is that the install should be onto a SCSI disk image, as this is what Eucalyptus expects.

Generate the bootloader kernel

For this step, you’ll need to download and compile the memdisk component of syslinux.

Download syslinux – I just grabbed the latest version.  Extract the archive, switch to the syslinux-<version>/memdisk directory, and compile memdisk using make.  You’ll need to install nasm first, as this is a dependency; to do this on Ubuntu, type:

$ sudo apt-get install nasm

Make will compile a number of files into the memdisk directory – you’ll just need the one called memdisk.

Create the bootloader ramdisk

This is the interesting part.  Basically, we’re going to create a Windows boot image, which will then launch the Windows virtual machine itself.

First, create a blank virtual floppy disk, using:

$ dd bs=512 count=2880 if=/dev/zero of=win-boot.img

Now, attach this image as a floppy drive to the Windows VM you created above, and start the instance.  Format the disk either by right-clicking on the floppy drive icon in My Computer, or through a terminal using format a: (mkfs.msdos doesn’t work here, as the resulting file system isn’t bootable).

Finally, copy the following 4 files from the root of the Windows C:\ drive to the floppy disk:



Bundle the image

Create the Eucalyptus bundles using the same method as for a Linux image.  You’ll need the memdisk and win-boot.img files you’ve just created, together with the hard disk image file from the Windows KVM virtual machine.

The command sequence (assuming you have the EC2 tools set up correctly) should be:

mkdir kernel
ec2-bundle-image -i /path-to/memdisk -d ./kernel --kernel true
ec2-upload-bundle -b kernel -m ./kernel/memdisk.manifest.xml
EKI=`ec2-register kernel/memdisk.manifest.xml | awk '{print $2}'`
echo $EKI
mkdir ramdisk
ec2-bundle-image -i /path-to/win-boot.img -d ./ramdisk --ramdisk true
ec2-upload-bundle -b ramdisk -m ramdisk/win-boot.img.manifest.xml
ERI=`ec2-register ramdisk/win-boot.img.manifest.xml | awk '{print $2}'`
echo $ERI
mkdir image
ec2-bundle-image -i path-to/<windows_disk>.img -d ./image --kernel $EKI --ramdisk $ERI
ec2-upload-bundle -b image -m ./image/<windows_disk>.manifest.xml
EMI=`ec2-register image/<windows_disk>.manifest.xml | awk '{print $2}'`
echo $EMI

And that’s it!

Management interfaces for the Kernel Virtual Machine (KVM)

June 8, 2009

With Linux distributions increasingly standardising on the Kernel Virtual Machine (KVM) as the hypervisor of choice, it seems initially surprising that there aren’t more high quality management interfaces available.  Most distribution documentation focuses on the use of virsh (a useful command-line tool, but not particularly user-friendly) and virt-manager (which shows promise, but seems pretty under-developed and feature-short when compared to the VMware VI client and web interface).

A little research, however, shows a number of open source offerings that I’ll aim to evaluate in the quest to settle on the ‘perfect’ KVM administrative interface.  The first 3 are web-based, but each with different aims:

Proxmox VE

Proxmox VE describes its “vision” as to:

“Setup a complete virtual server infrastructure within 1 hour.” Starting from bare metal, it is possible to create a full featured enterprise infrastructure including an email proxy, web proxy, groupware, wiki, web cms, crm, trouble ticket system, intranet … – including backup/restore and live migration.

Nowadays people are faced with more and more complex server software and installation methods. But Proxmox VE is different.

Proxmox VE is simple to use:

  • Pre-built Virtual Appliances
  • Install and manage with a few clicks
  • Selection of products for the use in the enterprise

    Proxmox VE is licensed under GPLv2 (Open source). Open source and commercial Virtual Appliances are supported.



Eucalyptus

Eucalyptus takes the concept of server virtualisation a stage further, aiming to bring the means to create private and hybrid server clouds to the datacenter.  Currently under heavy development, it forms the backbone of the Ubuntu Enterprise Cloud.  It describes itself as:

    EUCALYPTUS – Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems – is an open-source software infrastructure for implementing “cloud computing” on clusters. Eucalyptus Systems is the pioneer in open source cloud computing technology that delivers hybrid cloud deployments for enterprise data centers. Leveraging Linux and web service technologies that commonly exist in today’s IT infrastructure, Eucalyptus enables customers to quickly and easily create elastic clouds in minutes. This “no lock-in” approach provides users with ultimate flexibility when delivering their SLAs.

    Eucalyptus is more than just virtualization. Along with building virtual machines, the technology supports the network and storage infrastructure within the cloud environment. Eucalyptus works with multiple flavors of Linux including Ubuntu, OpenSuse, Debian, and CentOS. Eucalyptus currently supports Xen and KVM hypervisors.



oVirt

oVirt isn’t strictly just a management UI; instead the project aims to provide the components required to construct a virtualised infrastructure based on KVM nodes controlled by a management server.  Sponsored by Red Hat, its core is a Ruby-based web console.  From the project site:

    oVirt is an open, cross-platform virtualization management system. oVirt provides both a small image that runs on a host and provides virtualization services to VMs there, and also a web-based management console that lets you allocate and group hosts and storage, install and remove virtual machines, level resources across a large group of machines, and much more. oVirt is designed to scale from a small group of users with little need for access control and quota management, all the way up to hundreds or even thousands of hosts with robust control over grouping, permissions, and quotas.

    The oVirt host image is a small, stateless Fedora build that is meant to run from a flash drive, a CDROM, or entirely in RAM via PXE … The combination of libvirt and collectd means that a properly set-up remote management tool can securely handle all aspects of virtual machine management and monitoring on the oVirt host.

    The oVirt management console also uses libvirt, along with a kerberos/LDAP server, for secure transport, monitoring, and management. It has several components:

  • A host browser that listens for oVirt hosts to advertise themselves and their capabilities over the network
  • A task engine that reads a task queue from a Postgresql database and makes the appropriate libvirt calls over the transport
  • A Rails-based web UI that allows users to manage virtual machines, view usage and performance statistics and graphs, group and ungroup hosts and storage servers, delegate groups of machines to other users, manage quota and SLA for groups of users and machines, and many other management capabilities.

The final two I’m examining consist of a feature-rich desktop-client-based solution, and a command line application that looks potentially quite useful in the datacenter.  These are:


ConVirt

ConVirt provides an interesting alternative to a traditional VMware ESX / VirtualCenter setup, seeming to provide a large subset of VMware’s functionality, but based on an open source platform and free to download.  ConVirt gives an overview of their product as:

    ConVirt provides enterprise-class management of open source virtualization platforms, making open source virtualization an extremely viable and cost-effective choice for enterprises. ConVirt lets you manage the complete lifecycle of Xen and KVM virtualization platforms from a central, GUI dashboard. With sophisticated template-based provisioning, centralized monitoring, configuration management and administration, IT administrators can now automate the entire virtual machine lifecycle on open source platforms. ConVirt is an open source product backed by commercial, enterprise-class support, so you get the best of both worlds: a sophisticated, commercially-backed solution that is also highly cost effective.


OpenNebula Virtual Infrastructure Engine

OpenNebula doesn’t really count as a UI, as it’s currently only administered from the command line (though a Ruby-based web interface is planned as part of the GSoC).  It does, however, promise to provide a mature platform on which to build a hybrid or private cloud, incorporating many of the high availability features that the datacenter requires.  From the website:

    OpenNebula is an open source virtual infrastructure engine that enables the dynamic deployment and re-placement of virtualized services (groups of interconnected virtual machines) within and across sites. OpenNebula extends the benefits of virtualization platforms from a single physical resource to a pool of resources, decoupling the server not only from the physical infrastructure but also from the physical location.

    OpenNebula can be primarily used as a virtualization tool to manage your virtual infrastructure in the data-center or cluster. This application is usually referred as private cloud, and with OpenNebula you can also dynamically scale it to multiple external clouds, so building a hybrid cloud. When combined with a Cloud interface, OpenNebula can be used as engine for public clouds, providing a scalable and dynamic management of the back-end infrastructure.


So there we have it: a varied selection of virtualisation management products to provide control over KVM, each with its own particular focus.  Now all that’s required is the time to get on with the evaluation…

Fixing rt61pci in Ubuntu Gutsy

December 9, 2007

I’d experienced what were, judging by the forums, similar issues to many others using Gutsy with the supplied rt2x00 drivers. Wireless, once connected, worked just fine, but wouldn’t connect on login; you’d need to either click on the network manager icon, and then on the wireless connection to force a reconnect, or, with the network using ‘manual configuration’, untick/tick the configured connection.

There are various solutions proposed using the ‘legacy’ rt61 drivers, but I wanted something that worked well with network manager – hence fixing the Gutsy ‘rt61pci’ approach. Here’s how, from the terminal:

  • Do a search for rt2x00-cvs-20070914.tar.bz2, and download (you can find this here). I found issues with the latest rt2x00 source and the stock kernel, but this version seems to work fine.
  • Unpack the file, using the command: tar xjvf rt2x00-cvs-20070914.tar.bz2
  • Following the advice found on the rt2x00 forum, detailed here, use a text editor to make the following changes to the source:
    • in source/rt2x00/rt2x00mac.c : search for “rt2x00dev->, and remove that specific block of text where found, including the comma (there should be two instances of this).
    • in source/rt2x00/rt2x00_compat.h : add the line:

      #define IEEE80211_TXCTL_LONG_RETRY_LIMIT (1 << 20)

      below the line:

      #define RT2X00_COMPAT_H

  • Now, from the source/rt2x00 directory, do a make.  The source should compile cleanly.  Then, sudo make install
  • Next, change to the /lib/modules/2.6.22-14-generic directory.  Back up the contents of the ubuntu/wireless/rt2x00 directory to somewhere outside of the /lib/modules structure, then delete it.  You can use this backup to restore from if you find that the newly compiled modules don’t work as anticipated.
  • Finally, to neaten things up, do a sudo mv rt2x00 ubuntu/wireless/ , which will move your newly compiled modules to the default location.
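The rt2x00_compat.h change in the steps above lends itself to scripting. Here’s a hedged sed sketch, run against a stub copy of the header so it’s self-contained; with the real tree you’d point it at source/rt2x00/rt2x00_compat.h instead:

```shell
# Insert the compat define immediately after the include-guard line.
HDR=$(mktemp)
printf '#ifndef RT2X00_COMPAT_H\n#define RT2X00_COMPAT_H\n#endif\n' > "$HDR"
sed -i '/#define RT2X00_COMPAT_H/a #define IEEE80211_TXCTL_LONG_RETRY_LIMIT (1 << 20)' "$HDR"
cat "$HDR"   # the new define now sits below the guard
```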

Reboot, and the wireless should now work as intended!