
Synching the 2.0 web

Advocates of web 2.0 suggest that we can access nearly all of the services we need from web suppliers. We can edit our documents, store our photos or company data, and run our applications. It sounds great - but what happens when the web is unavailable? Over the last few years I have travelled quite a bit and I've often found myself in places with no wifi connectivity - or at least none at a price I'm willing to pay. So I value having a copy of my data on my laptop, so that I can carry on working.

I've put forward this argument at a couple of events recently. At an excellent session on Web 2.0 and science at the UK e-Science All Hands Meeting, the response was that 3G coverage will soon be sufficient to give us access almost everywhere. The next generation will take it for granted, the way they take GSM talk coverage for granted already. I have to admit that this scenario seems quite likely, although of course there are still places that don't even have talk coverage.

Nevertheless, there are still problems. Cloud services are far from 100% reliable, at least not yet. The word from companies using cloud computing for their business is that we should expect failure and deploy applications on multiple providers. I believe we should do the same with our data. In addition to guarding against technical failures, replication would protect us from vendors who go out of business or close down a service. It would also prevent vendors from exploiting "lock-in" to increase their prices.

So, we need systems that can replicate data from one data store to another. Fortunately, we know how to do this, whether via Grid or via P2P technologies. Unfortunately, we seem no nearer achieving standards for interoperability, so we will need to build systems that interface to the variety of proprietary systems out there.
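To make the idea concrete, here is a minimal sketch of what such a replication layer might look like. The `DataStore` class and its `put`/`get` methods are hypothetical stand-ins for adapters to the proprietary APIs of real providers; the point is only that, once each store is wrapped in a common interface, copying data to several providers becomes a simple loop.

```python
class DataStore:
    """Hypothetical common interface wrapping one provider's proprietary API.

    A real adapter would translate put/get into the vendor's own calls;
    here an in-memory dict stands in for the remote store.
    """

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects.get(key)

    def keys(self):
        return list(self._objects)


def replicate(primary, replicas):
    """Copy every object from the primary store into each replica,
    skipping objects that are already identical."""
    for key in primary.keys():
        data = primary.get(key)
        for store in replicas:
            if store.get(key) != data:
                store.put(key, data)
```

The hard part in practice is not the loop but writing an adapter for each vendor's interface - which is exactly the work that interoperability standards would save us.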

Ideally, the data should be self-describing, so that two copies can be synchronised by a different application from the one that created them. I'm put in mind of the apparently simple problem of syncing my calendar between my PDA and my PC. When I migrated my PC calendar to a new application, the next synchronisation created two copies of each event. The iCalendar format does give each event a unique identifier (the UID property) precisely so that multiple copies can be reconciled, but evidently the identifiers didn't survive the migration. Let's make stable, unique identifiers a ground rule for storing data in the cloud.
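The reconciliation I wanted can be sketched in a few lines. This is an illustrative simplification, not real iCalendar parsing: events are plain dicts carrying a `uid` and a `last_modified` timestamp, and when both copies have the same UID the more recently modified version wins - so duplicates never arise.

```python
def merge_events(local, remote):
    """Merge two calendars keyed by each event's UID.

    Events sharing a UID are treated as copies of the same event;
    the copy with the later last_modified timestamp wins.
    Without a stable UID, the best a sync tool can do is guess,
    which is how duplicate events creep in.
    """
    merged = {event["uid"]: event for event in local}
    for event in remote:
        existing = merged.get(event["uid"])
        if existing is None or event["last_modified"] > existing["last_modified"]:
            merged[event["uid"]] = event
    return sorted(merged.values(), key=lambda e: e["uid"])
```

My calendar migration went wrong because the new application behaved as if every imported event had a fresh UID - at which point no merge logic, however clever, can tell a copy from a new event.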

I'll leave the last word to a panellist at the Cloud Computing event in Newcastle. When I explained that I wanted my data on my laptop so that I could work on the plane, he suggested that perhaps I'd be better using the time to read a good book.

