Our Supplementary Evidence to the EPR Inquiry on Independent Reviews
Introduction
1. At the Committee’s second evidence session, Dr Richard Taylor asked Professor Brian Randell to provide a short note describing where independent technical reviews had previously helped major projects to succeed. This supplementary evidence has been prepared in response to that request.
Examples of independent reviews
2. In 1998, the project to develop the New En-Route air traffic control centre (NERC) at Swanwick was three years late, over budget, and facing continuous scrutiny by the press [1] and by Parliament. Mrs Gwyneth Dunwoody MP, as Chair of the Transport Committee, called for an independent review of NERC, and this was carried out by DERA (now QinetiQ [2]) and Arthur D Little. The review reported that the project was likely to succeed if a number of technical and management recommendations were implemented [3]. One conclusion was that the Chief Executive Officer had such a powerful commitment to the success of the Project that this “very likely inhibited more open discussion at such meetings on project problems and possible Operational date slippage. This in turn stifled debate and helped reduce the effectiveness of the review meetings”. The recommendations were followed and NERC came into service in 2002; it has proved very successful in operation.
3. The MoD conducts an annual Major Projects Review, which is published. It would make sense for all major programmes across Government to be included in a similar Annual Major Programmes Review. This would put the facts about those programmes into the open on a regular basis for scrutiny and debate.
4. In the USA, the Office of the Under Secretary of Defense introduced a programme of independent project reviews in 1999 (the Tri-Service Assessment Initiative). A status report [4] states that “As a direct result of the assessments conducted to date (19 since inception), Project Managers are implementing relatively low-cost post-assessment recommendations and realizing high returns.”
5. A report by Jack Ferguson, Director of Software Intensive Systems for the US Department of Defense [5], describes the Department's independent expert program reviews (IEPRs).
At the DoD, our large development efforts face problems with the lack of software management expertise and of real data on the causes of problems. To address these issues, we are implementing independent expert program reviews (IEPRs) at appropriate points in the system life cycle. The Defense Science Board Task Force on Defense Software made this industry best practice its top recommendation. IEPRs leverage the scarce technical talent resident in government and industry to help DoD program managers better understand risks, problems, and best practices. Independent expert teams provide a comprehensive assessment of the programs, identify risks, and make recommendations for management and risk mitigation. Participation in these assessments is voluntary; program managers request assessments and control assessment report distribution. The review team and program staff jointly establish assessment scope and initial issue areas. They also establish a follow-up review schedule to evaluate actions taken as a result of the assessment. To date, 42 such IEPRs have been performed. Besides significantly reducing the overall risks on the programs reviewed, the IEPR results are giving the DoD stronger experience based insights that help software-intensive–system programs as a whole. Based on generic, systemic issues found across the assessments, IEPRs give feedback to DoD and senior acquisition managers, identifying recommended changes in policy, education, and training. These findings let us base risk mitigation and process improvement decisions on real data rather than anecdotes. They also provide information on the unintended consequences of well-meaning policy directives.
6. The MITRE Corporation and the Software Engineering Institute (SEI) in the USA both frequently review major programmes for the US Department of Defense. The SEI’s publication on lessons learned from independent technical assessments [6] contains the following summary.
All of the assessments summarized in this paper were on large scale, DoD (or related government agency) programs. All of the programs were in actual or perceived difficulty. Some of the recommendations were for substantial restructuring or cancellation of the effort. With this in mind, we look to some of the root causes of the problems uncovered, and attempt to compare and contrast them to similar works in the non-defense world. In doing this, we find that there are more similarities than there are differences. The most significant drivers to failure on these systems continue to be management and culture related, just as they are in commercial systems. Technological failings, while they exist, also have a strong management flavor, as they tend to cluster around failings in the systems engineering process. There are no technology "silver bullets," and anyone promoting any technology as a panacea should be viewed with suspicion. A recent Defense Science Board report states: "Too often, programs lacked well thought-out, disciplined program management and/or software development processes. ... In general, the technical issues, although difficult at times, were not the determining factor. Disciplined execution was." There are numerous examples as to how this lack of disciplined execution manifests. Some deficiencies are related to human nature. Self-interest leads people to primarily consider their tenure on a job, cleaning up problems left for them by their predecessors and often not considering long-term consequences of short-term decisions. There is also a tendency to try to place blame on other organizations: customers and program offices cannot hold to a set of requirements; contractors don't live up to their obligations; vendor's products don't live up to their performance and capability claims. It is obviously someone else's fault. This is all a case of lack of discipline. We find that in programs in trouble, there are NO innocent parties. All stakeholders involved participated (at some level) in creating or abetting failure.
7. According to Dr Robert Charette (who was chief designer of the IEPR process referred to above), the US National Academies of Science and Engineering routinely carry out reviews of troubled programmes and make recommendations to help them to succeed. Dr Charette has taken part in many such reviews, including the post-Challenger shuttle review for NASA [7], and he is willing to give evidence to the Committee in person if requested to do so. Dr Charette is also the author of a recently published article reviewing American and other efforts to develop national healthcare IT systems [8].
8. I and other members of the UK Computing Research Committee have participated in many independent project reviews for public-sector and private-sector organisations. The systems reviewed include large (several million lines of software), distributed (many scores of processors) information systems processing large quantities of real-time data. Many have involved complex supply chains, with suppliers in the UK and overseas (mainland Europe or the USA), and complex, multi-party (including multiple government agency) procurement organisations. Several have had challenging programmes, with multiple deliveries and complex integration activities to carry out prior to delivery. On several occasions, these reviews have occurred late in programmes. Whilst there are more opportunities and alternatives for improvement early in a programme, our experience is that it is usually still possible to identify courses of action which significantly improve the likelihood of a successful project outcome.
UK Policy
9. The Information Tribunal has recently ordered the Office of Government Commerce (OGC) to publish its Gateway reviews of the ID-card programme. In response to the decision, the Information Commissioner, Richard Thomas, said: “Disclosure is likely to enhance public debate of issues such as the programme’s feasibility and how it is managed”. It seems likely that the same ruling would apply to NPfIT.
10. On 3 April 2000, the Committee of Public Accounts published its Thirteenth Report of Session 1999-2000, entitled "The 1992 and 1998 Information Management and Technology Strategies of the NHS Executive". The report concluded (paragraphs 39 and 40):
“Evaluation of the success of IT projects is essential to identify lessons learned and avoid the same problems in future. We have previously expressed our concerns about the failure of the NHS Executive to evaluate important aspects of the 1992 Strategy in its reports on the Hospital Support Systems Initiative and Read Codes. ... The Executive assured us that they are committed to evaluation of ongoing projects and of the 1998 Strategy. But they have yet to develop their plans in detail. We expect the Executive to produce a programme for these evaluations, and to let us see it as soon as possible.”
11. Hence the Committee of Public Accounts recognised as long ago as 2000 that evaluating projects while they are still in progress is potentially valuable. In response, the Department of Health commissioned two reviews of direct relevance to the Health Committee's inquiry. Between August and October 2001, Professor Denis Protti of the School of Health Information Science, University of Victoria, British Columbia, Canada, was commissioned to review "the state of progress of Information for Health". His report (the Protti Report) contains many pages of detailed recommendations, and it was undoubtedly critical. In response, in 2003, the Department of Health commissioned a report from the PA Consulting Group (Core National Evaluation of the Electronic Records Development and Implementation Sites). This report also made a number of important recommendations.
12. Between April and July 2003, the Department of Health commissioned a review, "The Public View on Electronic Health Records", conducted by the Consumers' Association and the researchers it commissioned: Research Works Limited (qualitative research) and BMRB International Limited (quantitative research). The report's findings and recommendations are of considerable interest.
13. The difficulty is that Connecting for Health appears to have largely ignored the recommendations made in these reviews. If this is not the case, Connecting for Health should be invited to explain to the Health Select Committee which recommendations it has implemented and how it has done so.
Conclusions
14. The Health Select Committee may wish to address two issues: (a) obtaining independent, timely information, which can only come from a thorough independent review; and (b) monitoring that the Department pays attention to the review report, through a continuous programme of Health Select Committee scrutiny of the Programme. In this context, it may be worth noting that the House of Commons Work and Pensions Sub-Committee (Report HC 311-II, published on 14 July 2004, paragraph 26) says:
"We recommend that, as formal evidence to Parliament, the Department should present an implementation assessment for each major IT project. We envisage that such an IT Implementation Assessment (ITIA) would be similar to a Regulatory Impact Assessment (RIA) that is currently required. An ITIA should set out in some detail the Government's justification for embarking on the IT programme, including purpose, timings, costs, IT requirements and major risks".
15. In many ways the review recommended by Professor Randell, on behalf of the 23 academics, would produce an independent IT Implementation Assessment similar to that proposed by the House of Commons Work and Pensions Sub-Committee, which, together with the Department's response, could genuinely move the National Programme for IT forward.
16. Please let me know if you would like me to provide the Committee with any of the reports or other documents referred to in this evidence.
Dr Martyn Thomas CBE
May 2007
Notes:
1. http://news.bbc.co.uk/1/hi/uk/politics/75220.stm
2. http://www.qinetiq.com/home/case_studies/aviation/swanwick_air_traffic_control_centre.html
3. http://www.publications.parliament.uk/pa/cm199899/cmselect/cmenvtra/586/586mem01.htm
4. http://www.stsc.hill.af.mil/crosstalk/2000/11/baldwin.html
5. IEEE Software, July/August 2001
6. http://www.sei.cmu.edu/pub/documents/01.reports/pdf/01tn004.pdf
7. Such reviews by the National Academies of Science are routinely published, and play an important part in restoring public confidence in troubled projects. An Assessment of Space Shuttle Flight Software Development Processes, R. Charette (Chairman), Commission on Engineering and Technical Systems, National Research Council (National Academies Press, 1993), 194 pp. [http://books.nap.edu/openbook.php?isbn=030904880X]
8. R. Charette, "Dying for Data: A comprehensive system of electronic medical records promises to save lives and cut health care costs-but how do you build one?", IEEE Spectrum, vol. 43, no. 10, pp. 22-27, 2006. [http://www.spectrum.ieee.org/oct06/4589]