
Virtualisation, HPC and Cloud Computing

Virtualisation has obvious benefits for much commercial IT, where existing servers often have utilisation rates of 10% or less. It's less clear whether virtualisation is so useful for High-Performance Computing (HPC), where systems are often kept running at utilisation rates above 90%. Nevertheless, there are potential benefits from adopting some aspects of virtualisation in these environments. The question is, do the benefits outweigh the cost in performance?

This was the topic of yesterday's workshop on System Level Virtualisation for HPC (which was part of EuroSys 2008). The workshop was rather small but I did learn quite a bit.

A couple of talks investigated the performance cost of running on a virtualised OS instead of addressing the hardware directly. The cost varied with the type of application: if I/O was minimal, the slowdown was minimal too. An "average" slowdown seemed to be on the order of 8%.
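To see the I/O dependence for yourself, a toy probe like the sketch below (my own illustration, not code from any of the talks) can be run once on bare metal and once inside a guest VM: the CPU-bound timing should barely change between the two, while the fsync-heavy loop is where hypervisor involvement in each I/O tends to show up.

```python
import os
import time

def cpu_bound(n=2_000_000):
    # Pure computation: the kind of workload that virtualises almost for free.
    total = 0
    for i in range(n):
        total += i * i
    return total

def io_bound(path="bench.tmp", blocks=2000, block_size=4096):
    # Repeated small synchronous writes: each fsync traps to the
    # hypervisor, so this is where virtualisation overhead shows up.
    data = b"x" * block_size
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

for name, fn in [("cpu-bound", cpu_bound), ("io-bound", io_bound)]:
    start = time.perf_counter()
    fn()
    print(f"{name}: {time.perf_counter() - start:.3f}s")
```

Comparing the two ratios (guest time over host time) gives a crude per-workload estimate of the overhead the speakers were measuring much more carefully.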

Samuel Thibault of XenSource looked at ways of using the flexibility of a hypervisor to implement just those parts of an operating system that are absolutely necessary, ignoring more general-purpose facilities such as virtual memory, pre-emptive threads and the like. Essentially, this was using Xen as a "micro-kernel done right".

Larry Rudolph, who works for VMware but was here speaking with his MIT hat on, gave a scenario where virtualisation might benefit the HPC user. Some problems require massive machines that researchers only get access to maybe twice a year. The example Larry gave was modelling global ocean currents. If the researchers save the messages sent between regions of this model, they can later run a more detailed analysis of one particular region on a much smaller machine, using the recorded messages to play back the effect of the rest of the simulation. This gets complicated if the new micro-simulation drifts out of sync with the original, in which case the system may have to switch other regions from record mode back to full simulation. The point is that it is easier to handle this mixture of recorded and actual simulations with virtual machines than to control it all "by hand".
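To make the record-and-replay idea concrete, here is a minimal sketch of the mechanism as I understood it. It is my own illustration, not code from the talk: the names (MessageRecorder, MessageReplayer) and the toy one-value "region" are invented, and a real system would be capturing MPI traffic between virtual machines rather than dictionaries in a loop.

```python
import json

class MessageRecorder:
    """Record mode: while the full simulation runs, log the messages
    that cross into the region of interest at every time step."""
    def __init__(self):
        self.log = []  # self.log[t] maps neighbour id -> boundary values

    def record(self, step, incoming):
        assert step == len(self.log), "steps must be recorded in order"
        self.log.append(incoming)

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.log, f)

class MessageReplayer:
    """Replay mode: re-run just one region, feeding it the recorded
    messages instead of live neighbours."""
    def __init__(self, path):
        with open(path) as f:
            self.log = json.load(f)

    def incoming(self, step):
        return self.log[step]

def step_region(state, incoming):
    # Toy "physics": relax the region's value towards the mean of its
    # own state and the boundary values received from its neighbours.
    values = [state] + list(incoming.values())
    return sum(values) / len(values)

if __name__ == "__main__":
    # Full run: simulate everything, recording what the region receives.
    recorder = MessageRecorder()
    region, west, east = 1.0, 0.0, 2.0
    for t in range(5):
        incoming = {"west": west, "east": east}
        recorder.record(t, incoming)
        region = step_region(region, incoming)
        west, east = west + 0.1, east - 0.1  # neighbours evolve too
    recorder.save("halo.json")
    print("full run final value:", region)

    # Replay: re-run only the region; neighbours come from the log.
    replayer = MessageReplayer("halo.json")
    region = 1.0
    for t in range(5):
        region = step_region(region, replayer.incoming(t))
    print("replayed final value:", region)
```

Because the replayed region sees exactly the messages it saw in the full run, the two printed values match. The hard case Larry described is when a higher-resolution replay diverges from what was recorded, and the recorded neighbours have to be promoted back to live simulation.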

The discussion that ended the workshop brought all these points together. Virtualisation makes grid or cloud computing more feasible, which may make HPC resources available to more people. New CPUs have explicit support for virtualisation that will make it almost cost-free for the data centre, and there is a good chance that these advances will reduce the overhead for some HPC applications too. The key problem seems to be predicting which applications will run smoothly on a virtualised system and which will suffer noticeable slowdowns. As always, more research is needed!
