I first came across the notion of "technical debt" in a web article about Agile development methods. The phrase instantly struck me as a pithy summary of all the problems that IT people experience with poor quality code that is rushed out to meet deadlines, code that will cause problems down the line. We know the problems are there, waiting to hurt us, while our business partners tend to be oblivious to them.
So I was attracted by a local BCS Event on the topic. I went along this evening to hear Ali Law lead an interesting and wide-ranging discussion. Ali gave me a lot to consider. I've always thought of technical debt in terms of maintainability, i.e. of code becoming harder to adapt as it becomes twisted out of shape. Tonight's discussion suggested that technical debt could include defects, redundant functionality, mismatch with a changing environment, and complexity (of requirements, of design, of code...), as well as maintainability and supportability. It doesn't just cover in-house code; it can apply to third-party systems that haven't been upgraded or that have become unsupported. Someone also suggested that there should be an equivalent notion of "process debt", referring to business processes that are too complex, out of date or otherwise costly.
There are also many impacts of technical debt. The obvious one to me was future cost, in terms of the time required to fix things left unfinished. One person in the audience had an example where the problems with a system directly affected sales, so the debt was more visible and immediate. There can be other problems too. You might encounter a lack of trust from your business partners and/or customers. Your developers might be demoralised at having to work with hard-to-maintain code.
My question was, how can we provide evidence of the problems that may be incurred by technical debt? Ali based his proposals for managing technical debt on the assumption that we can measure various aspects of it. He said that we need to decide what types of debt concern us most, which may vary from one system to another within our portfolio. Then we should find ways of measuring that debt in a way that connects to business value, whether that value is sales, costs, or user experience. Then we should make these costs and risks evident to the decision makers so that they can be included in the prioritisation and investment decisions.
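To make that idea concrete for myself, here is a rough sketch of how a simple debt register might tie each item to a business-facing cost so it can feed the prioritisation discussion. This is my own illustration rather than anything Ali presented, and the item names, debt types and cost figures are entirely invented.

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    name: str            # short description of the debt item
    debt_type: str       # e.g. "defect", "supportability", "complexity"
    monthly_cost: float  # estimated ongoing cost (support hours, lost sales, ...)
    fix_cost: float      # one-off estimated cost of removing the debt

    def payback_months(self) -> float:
        """How many months of ongoing cost it takes to equal the cost of a fix."""
        return self.fix_cost / self.monthly_cost if self.monthly_cost else float("inf")

# Purely illustrative entries -- not taken from the talk.
register = [
    DebtItem("Unsupported reporting library", "supportability", monthly_cost=800, fix_cost=4000),
    DebtItem("Duplicate customer lookup code", "complexity", monthly_cost=300, fix_cost=1500),
    DebtItem("Known defect in order totals", "defect", monthly_cost=1200, fix_cost=2000),
]

# Rank by how quickly each fix pays for itself, ready for a prioritisation meeting.
for item in sorted(register, key=lambda d: d.payback_months()):
    print(f"{item.name:32} {item.debt_type:16} pays back in {item.payback_months():.1f} months")
```

Even something this crude puts the debt into terms (cost per month, payback period) that decision makers already use.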
There was a lot of discussion about how to make use of these ideas. The main point is that this presentation brings the problems into known processes for handling risks, trends, costs and priorities. There were side conversations about techniques for reducing particular problems, such as using good usability analysis to avoid creating redundant functionality.
For me, the big question is how much of the problem can we really measure? Defects are relatively easy (although I liked Ali's suggestion that we should score defects by the operational cost or loss of user experience they involve, so that we can measure a cumulative cost). The discussion mentioned that we should measure actual usage of functionality, so that we can remove code that is never used. Ali said that his firm has had success with Halstead and McCabe metrics in the past. I've never been convinced by these, but to be fair I have never tried them in anger. Ali did say that there are now several tools available for measuring code complexity; perhaps I should ask one of my team to investigate these.
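Since I'm sceptical of these metrics, it helped me to spell out what McCabe's measure actually counts: roughly, one plus the number of decision points in the code. The sketch below is my own minimal illustration of that idea using Python's standard ast module; it is not one of the tools Ali mentioned, and the sample function is invented.

```python
import ast

# Node types that add a decision point (branch) to the control flow.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.With, ast.Assert, ast.IfExp, ast.comprehension)

def cyclomatic_complexity(source: str) -> int:
    """Rough McCabe complexity: 1 plus the number of decision points."""
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, BRANCH_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' adds one decision per extra operand.
            complexity += len(node.values) - 1
    return complexity

sample = """
def classify(order):
    if order.total > 100 and order.customer.is_trade:
        return "priority"
    for line in order.lines:
        if line.back_ordered:
            return "delayed"
    return "standard"
"""
print(cyclomatic_complexity(sample))  # higher numbers suggest harder-to-maintain code
```

The Halstead metrics instead count operators and operands; the tools Ali mentioned presumably compute both far more carefully than this.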
This event was a useful opportunity to reflect on the question and to hear some potentially useful ideas. I've only captured the gist of the discussion here; there were many points made in passing that were worth hearing. For example, I've barely mentioned trust, which was actually a major part of Ali's message. I don't know how many of these ideas I'll be able to put into practice but I hope we can start with one or two and see how we do.
Comments
I wonder if the notion of interest on technical debt is something that people think about... It seems to me that over time not addressing an issue can actually accumulate further debt.
An example might be: a technical staff member leaves out some fix due to time constraints in a project. Time passes and the project agrees to fix the issue as it is proving costly. By then the details of how to resolve the problem have to some extent been forgotten, so the cost to correct might be much higher than it would have been had the issue been addressed when it initially surfaced. The business has also presumably been paying for the omission in the intervening time.