
Virtualisation, HPC and Cloud Computing

Virtualisation has obvious benefits for much commercial IT, where existing servers often have utilisation rates of 10% or less. It's less clear whether virtualisation is so useful for High-Performance Computing (HPC), where systems are often kept running at utilisation rates above 90%. Nevertheless, there are potential benefits from adopting some aspects of virtualisation in these environments. The question is, do the benefits outweigh the cost in performance?

This was the topic of yesterday's workshop on System Level Virtualisation for HPC (which was part of EuroSys 2008). The workshop was rather small but I did learn quite a bit.

A couple of talks investigated the performance hit of running a virtualised OS instead of addressing the hardware directly. The slowdown varied with the type of application: if I/O was minimal, the slowdown was minimal too. An "average" slowdown seemed to be on the order of 8%.
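To give a feel for where that kind of figure comes from, here is a minimal sketch of such a comparison - my own illustration, not the benchmark from either talk, and the workload sizes and file name are arbitrary. Time a CPU-bound loop and an I/O-heavy loop, run the script once on bare metal and once inside a guest, and compare the ratios:

import os
import time

def cpu_bound(n=10_000_000):
    # Pure computation: guest code runs directly on the CPU, so the
    # virtualisation overhead here is usually negligible.
    total = 0
    for i in range(n):
        total += i * i
    return total

def io_bound(path="scratch.bin", blocks=500, block_size=64 * 1024):
    # Small synchronous writes: each fsync traps out of the guest,
    # which is where the slowdown tends to show up.
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(os.urandom(block_size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

def timed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"cpu-bound: {timed(cpu_bound):6.2f}s")
    print(f"io-bound:  {timed(io_bound):6.2f}s")

The ratio of the native and virtualised times for each workload gives a rough per-application overhead figure, and shows why the I/O-light codes barely notice the hypervisor.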

Samuel Thibault of XenSource looked at ways of using the flexibility of a hypervisor to implement just those parts of an operating system that are absolutely necessary - ignoring more general-purpose facilities such as virtual memory, pre-emptive threads and the like. Essentially, this was using Xen as a "micro-kernel done right".

Larry Rudolph, who works for VMware but spoke here wearing his MIT hat, gave a scenario where virtualisation might benefit the HPC user. Some problems require massive machines that researchers only get access to perhaps twice a year. The example Larry gave was modelling global ocean currents. If the researchers save the messages sent between regions of this model, they can later run a more detailed analysis of one particular region on a much smaller machine, using the recorded messages to play back the effect of the rest of the simulation. This gets complicated if the new micro-simulation drifts out of sync with the original; in that case the system may have to switch other regions from playback to full simulation. The point is that it is easier to handle this mixture of recorded and live simulation using virtual machines than having to control it all "by hand".
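Here is a toy sketch of that control logic - purely my own illustration of the idea, not Larry's system, which would operate at the virtual-machine and message-passing layer rather than inside one Python process. The region names, the update rule and the tolerance are all invented. The refined region runs in full mode, its neighbours play back recorded messages, and if the re-run drifts from the recording the neighbours are switched back to full simulation:

class Region:
    """One region of a domain-decomposed simulation (toy illustration)."""

    def __init__(self, name, mode):
        self.name = name
        self.mode = mode    # "full" = simulate, "replay" = use the recording
        self.state = 0.0

    def step(self, t, log, inbox):
        if self.mode == "replay":
            return log[self.name][t]        # play back the recorded message
        self.state += 0.1 * sum(inbox)      # stand-in for the real physics
        return self.state

def run(regions, log, steps, tolerance=1e-6):
    messages = {r.name: 0.0 for r in regions}
    for t in range(steps):
        new = {}
        for r in regions:
            inbox = [m for name, m in messages.items() if name != r.name]
            new[r.name] = r.step(t, log, inbox)
        for r in regions:
            # If a fully-simulated region has drifted from what it sent in
            # the original run, the recorded messages its neighbours rely
            # on are stale: switch everyone from playback to simulation.
            if r.mode == "full" and abs(new[r.name] - log[r.name][t]) > tolerance:
                for other in regions:
                    other.mode = "full"
        messages = new

if __name__ == "__main__":
    # Messages recorded during the original full-machine run (made-up values).
    log = {"atlantic": [0.0, 0.1, 0.2, 0.3],
           "pacific":  [0.0, 0.5, 0.6, 0.7]}
    regions = [Region("atlantic", "full"), Region("pacific", "replay")]
    run(regions, log, steps=4)
    print({r.name: r.mode for r in regions})

In this toy run the refined "atlantic" region diverges from its recording at the second step, so "pacific" is promoted from playback to full simulation - exactly the mode switch described above, handled automatically rather than "by hand".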

The discussion that ended the workshop brought these threads together. Virtualisation makes grid and cloud computing more feasible, which may make HPC resources available to more people. New CPUs have explicit support for virtualisation that will make it almost cost-free for the data centre, and there is a good chance that these advances will reduce the overhead for some HPC applications too. The key problem seems to be predicting which applications will run smoothly on a virtualised system and which will suffer noticeable slowdowns. As always, more research is needed!
