
Virtualisation, HPC and Cloud Computing

Virtualisation has obvious benefits for much commercial IT, where existing servers often have utilisation rates of 10% or less. It's less clear whether virtualisation is so useful for High-Performance Computing (HPC), where systems are often kept running at utilisation rates above 90%. Nevertheless, there are potential benefits from adopting some aspects of virtualisation in these environments. The question is, do the benefits outweigh the cost in performance?

This was the topic of yesterday's workshop on System Level Virtualisation for HPC (which was part of EuroSys 2008). The workshop was rather small but I did learn quite a bit.

A couple of talks investigated the performance cost of running on a virtualised OS instead of addressing the hardware directly. The slowdown varied with the type of application: if I/O was minimal, the slowdown was minimal too. An "average" slowdown seemed to be on the order of 8%.

Samuel Thibault of XenSource looked at ways of using the flexibility of a hypervisor to implement just those parts of an operating system that are absolutely necessary, ignoring more general-purpose facilities such as virtual memory, pre-emptive threads and the like. Essentially, this was using Xen as a "micro-kernel done right".

Larry Rudolph, who works for VMware but was speaking here with his MIT hat on, gave a scenario where virtualisation might benefit the HPC user. Some problems require massive machines that researchers only get access to maybe twice a year. The example Larry gave was modelling global ocean currents. If the researchers save the messages sent between regions of this model, they can later run more detailed analyses on one particular region on a much smaller machine, using the recorded messages to play back the effect of the rest of the simulation. This gets complicated if the new micro-simulation gets out of sync with the original, in which case the system may have to switch other regions from record mode back to full simulation. The point is that it is easier to handle this mixture of recorded and live simulations using virtual machines than to control it all "by hand".
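To make the record/replay idea concrete, here is a minimal Python sketch of how it might work; this is not the system described in the talk, and all the names (MessageLog, rerun_region, refine_step) are hypothetical. During the full run, every boundary message a region receives is appended to a log; a later, higher-resolution re-run of that single region is then driven from the log instead of from live neighbouring regions.

```python
# Hypothetical sketch of record/replay for one region of a message-passing
# simulation. During the full run each received boundary message is logged;
# a later detailed re-run of the region replays the log instead of talking
# to live neighbours.

import json
from pathlib import Path


class MessageLog:
    """Append-only log of boundary messages received by one region."""

    def __init__(self, path: Path):
        self.path = path

    def record(self, step: int, neighbour: str, payload: dict) -> None:
        # One JSON object per line keeps the replay order trivial.
        with self.path.open("a") as f:
            f.write(json.dumps({"step": step, "from": neighbour, "data": payload}) + "\n")

    def replay(self):
        """Yield the recorded messages in the order they were received."""
        with self.path.open() as f:
            for line in f:
                yield json.loads(line)


def rerun_region(log: MessageLog, refine_step):
    """Re-run one region at higher resolution, driven by recorded messages.

    `refine_step` stands in for the detailed per-step solver. If its state
    drifted too far from the recorded boundary data, the real system would
    have to switch the neighbouring regions back from replay to full
    simulation, which is where managing everything as virtual machines helps.
    """
    state = {}
    for msg in log.replay():
        state = refine_step(state, msg["data"])
    return state
```

The appeal of doing this with virtual machines is that a replayed neighbour and a fully simulated neighbour look the same to the region being studied, so switching between the two modes does not require changing the simulation code itself.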

The discussion that ended the workshop brought all these points together. Virtualisation makes grid or cloud computing more feasible, which may make HPC resources available to more people. New CPUs have explicit support for virtualisation that will make it almost cost-free for the data centre, and there is a good chance that these advances will also reduce the overhead for some HPC applications. The key problem seems to be predicting which applications will run smoothly on a virtualised system and which will suffer noticeable slowdowns. As always, more research is needed!
