In my work with the Grid Computing Now! Knowledge Transfer Network, I talk about "virtualisation" and "service-oriented architecture" just as much as "grid" itself. People sometimes ask what is the difference between these concepts. My first answer is perhaps rather glib - I say that I don't care as long as the technology gets the job done. Although this is not a straight answer, those of us on the GCN! team believe it is important to put the business answers before any notion of technological purity.
But if we turn to the question as stated, I think that as long as a solution includes the key concepts of virtualised resources and dynamic allocation of applications across those resources, then that is enough for me to call the system a grid. But, of course, we can go further.
A recent conversation reminded me of the important point that distributed systems typically have to manage failure. As systems scale to many machines and many sites, some of those machines are going to fail some of the time. The systems have to be resilient enough to adapt and recover. They also have to cope with additions to, and deletions from, the set of available resources.
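To make that concrete, here is a minimal sketch in Python of the kind of behaviour I mean. Everything in it - the `Grid` class, its `submit` and `run` methods - is invented for illustration rather than taken from any real middleware: when a node fails mid-task, the dispatcher drops it from the pool and re-queues the work, and nodes can join or leave at any time.

```python
import queue
import random

class Grid:
    """Toy resource pool: nodes can join, leave, or fail mid-task.
    All names here are illustrative, not a real grid API."""

    def __init__(self):
        self.nodes = set()          # currently available resources
        self.tasks = queue.Queue()  # work waiting to be placed

    def add_node(self, node_id):
        """A new resource joins the pool."""
        self.nodes.add(node_id)

    def remove_node(self, node_id):
        """A resource leaves, whether planned or through failure."""
        self.nodes.discard(node_id)

    def submit(self, task):
        """Queue a task; `task` is a callable taking a node id."""
        self.tasks.put(task)

    def run(self):
        """Dispatch until the queue is empty, surviving node failures."""
        while not self.tasks.empty():
            if not self.nodes:
                raise RuntimeError("no resources left in the pool")
            task = self.tasks.get()
            node = random.choice(sorted(self.nodes))
            try:
                task(node)              # attempt the work on this node
            except Exception:
                self.remove_node(node)  # adapt: stop using the failed node
                self.submit(task)       # recover: re-queue for another node
```

The point is not the few lines of Python but where the logic lives: membership changes and failures are handled in the dispatch loop, not in the tasks themselves.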
This need for resilience is most obvious in cycle-stealing grids, which use the spare capacity of desktop PCs, and in scientific grids, which link many research sites across the world. The interesting question is whether it also applies to the data centre. That seems to depend to some extent on how the system is designed. For example, Google is built specifically around this approach; they have always used lots of generic machines and simply replaced resources when they fail. I believe eBay's massive server farms take the same dynamic approach.
This question arose in a conversation I had with Liam Newcombe, an independent consultant. We were supposed to be talking about Green IT (of which more another time), but our discussion wandered to include all sorts of ideas. Liam is working on an open source model of data centre reliability and performance. He believes that reliability is best achieved by explicitly allowing for failure within the software - rather than, for example, attempting to make the hardware itself ultra-reliable.
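Here is a sketch of what "allowing for failure within the software" might look like - again in Python, with an invented node interface (`put`/`get` raising `IOError` on failure), so this illustrates the general idea rather than Liam's actual model. Each value is written to several cheap nodes, and a read succeeds as long as any one replica survives:

```python
import random

class ReplicatedStore:
    """Reliability from software, not hardware: replicate each write
    across several failure-prone nodes and fail over on reads.
    The node interface is invented for this sketch."""

    def __init__(self, nodes, replicas=3):
        self.nodes = list(nodes)
        self.replicas = min(replicas, len(self.nodes))

    def put(self, key, value):
        """Write to several nodes; individual failures are tolerated."""
        written = 0
        for node in random.sample(self.nodes, self.replicas):
            try:
                node.put(key, value)
                written += 1
            except IOError:
                pass        # a cheap node died; the others cover for it
        if written == 0:
            raise IOError("all replicas failed")

    def get(self, key):
        """Ask nodes in random order; any surviving replica will do."""
        for node in random.sample(self.nodes, len(self.nodes)):
            try:
                return node.get(key)
            except (IOError, KeyError):
                continue    # dead node or non-replica; try the next
        raise KeyError(key)
```

Ultra-reliable hardware tries to make the failure impossible; the software approach accepts it as routine and routes around it.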
A key question must be how far up the stack this awareness has to extend. Can we write applications without worrying about failure, or does every application have to have some adaptability built in? It's a fascinating topic, and I look forward to reading the book that Liam is co-authoring in due course.