In this discussion paper, Marie-Claire Richer, Director of the Transition Support Office, and Patricia Lefebvre, Director of Quality, Risk and Performance at the MUHC, describe the challenges of evaluating health IT projects. Carole Lapierre, Alain Biron, Monique Périé and Claude Lemieux also contributed to the paper. —Prepared as part of the 2010 program of the MUHC-ISAI

Health information technologies (IT) hold tremendous promise for improving care and outcomes. However, measuring these improvements is difficult and there is much debate about how health IT projects should be treated from an evaluation perspective.

Improved safety and quality of care are the compelling reasons behind investments in information technology (IT) systems in the healthcare sector. The seminal studies in this area are the Institute of Medicine reports of 1999 to 2001, which pointed to IT systems as an essential tool in preventing adverse events and improving use of evidence-based practice. However, evaluating IT systems to demonstrate improvements in quality and safety remains extraordinarily difficult, and there is currently much debate about exactly how these systems should be treated from an evaluation perspective. Should they be studied in the same way as health technologies and drugs with the aim of producing ‘robust scientific evidence’ for their adoption? (1) This often appears to be what funders of such systems require in order to continue their investment, but is it feasible? Or are alternative approaches required?

These questions are currently being explored at Montreal’s McGill University Health Centre (MUHC) as it undertakes a major redevelopment project in tandem with a vast expansion of health IT capabilities. This paper was developed through a discussion among key players supporting the evaluation of IS-IT projects at the MUHC: from Quality, Risk and Performance, which is responsible for supporting quality and performance improvement initiatives; from Information Services; and from the Transition Support Office (TSO). The TSO was created in 2008 as a project management office to support the harmonization and optimization of clinical and administrative practices, provide dedicated support for projects linked to the redevelopment, and support the evaluation of these projects, including the implementation of IS-IT systems.

A framework for evaluation

In the evaluation of IT projects, DeLone and McLean’s (1992) widely used framework (2) includes three groups of factors that characterize whether a system is successful: the inherent quality of the system and the clinical information it produces; the system’s uptake by users and its perceived usefulness in supporting clinical practice; and the system’s impact on patient outcomes.

The discussion for this paper focused on the challenges of evaluating IT projects on each of these counts. The interrelation between system design, user uptake and outcomes requires increased collaboration between different groups in the hospital to set project objectives that are meaningful for all parties, select evaluation methods and metrics accordingly, and ensure follow-up on findings.

Aligning technological and clinical evaluation objectives

The MUHC has completed the display phase of its clinical information system deployment and has begun the data entry phase. The metrics captured to date focus on user-friendliness and processes, but clinically relevant metrics are also being measured. In the Allergy module of the MUHC’s clinical information system, for example, the proportion of patients with documented allergy status and the rate of incident reports per patient day were measured along with metrics regarding user satisfaction.
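
As a simple illustration of how such metrics can be computed from extracted records, here is a minimal sketch; the record layout and field names are hypothetical, not the MUHC's actual schema:

```python
from datetime import date

# Hypothetical extracted records; all field names are illustrative only.
admissions = [
    {"patient_id": 1, "allergy_documented": True,  "patient_days": 4},
    {"patient_id": 2, "allergy_documented": False, "patient_days": 7},
    {"patient_id": 3, "allergy_documented": True,  "patient_days": 2},
]
incident_reports = [{"patient_id": 2, "date": date(2010, 3, 14)}]

# Proportion of patients with documented allergy status.
proportion_documented = (
    sum(a["allergy_documented"] for a in admissions) / len(admissions)
)

# Rate of incident reports per patient day.
incident_rate = len(incident_reports) / sum(a["patient_days"] for a in admissions)

print(f"Allergy status documented: {proportion_documented:.0%}")
print(f"Incident reports per patient day: {incident_rate:.3f}")
```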

There is growing interest in linking data from different information systems to provide outcome measures that will support clinical and organizational decision-making. Since IT systems provide timely access to point-of-care clinical data, they can also be expected to improve the quality and safety of care. For example, once it becomes possible to see immediately that a patient has a positive C. difficile test and intervene promptly, outcomes for nosocomial infections should improve. The time between the completion of laboratory tests and the initiation of a prescription can serve as a metric to track, measure and improve practices.
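
As a sketch of how the lab-to-prescription turnaround metric might be computed, assuming timestamped events can be extracted from the laboratory and pharmacy systems (the field names are invented for illustration):

```python
from datetime import datetime
from statistics import median

# Hypothetical timestamped events; field names are illustrative only.
events = [
    {"patient_id": 1,
     "lab_resulted": datetime(2010, 5, 1, 8, 15),
     "rx_initiated": datetime(2010, 5, 1, 11, 40)},
    {"patient_id": 2,
     "lab_resulted": datetime(2010, 5, 1, 9, 5),
     "rx_initiated": datetime(2010, 5, 1, 10, 0)},
]

# Hours from lab-result completion to prescription initiation.
turnaround_hours = [
    (e["rx_initiated"] - e["lab_resulted"]).total_seconds() / 3600
    for e in events
]

print(f"Median lab-to-prescription turnaround: {median(turnaround_hours):.1f} h")
```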

Sequencing evaluation projects

In cases where clinical processes are being optimized and IT systems are being introduced, it can be difficult to decide on the order in which to introduce changes and conduct evaluations. Should process optimization come first and the IT tool be implemented subsequently? In a project such as optimizing operating room processes, it may be better to work on both initiatives at the same time, as the IT system provides a tool with which to measure the impact of process changes.

Evaluation methods

Evaluations are conducted using a number of methods, ranging from data extraction to time-and-motion observational studies, audits and chart reviews. The preferred, and least costly, method is to extract information from databases in order to capture clinical events as they occur and link this information to administrative databases. In the absence of extractable information, more resource-intensive data collection methods, such as observation, are required.
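
To make the preferred method concrete, the sketch below joins clinical events to administrative records on a shared patient identifier, in the way a database extraction would; the tables and fields are invented for illustration:

```python
# Hypothetical extracts from a clinical and an administrative database.
clinical_events = [
    {"patient_id": 101, "event": "cdiff_positive", "unit": "4 East"},
    {"patient_id": 102, "event": "cdiff_positive", "unit": "6 West"},
]
admissions = {
    101: {"admit_reason": "hip replacement", "length_of_stay": 12},
    102: {"admit_reason": "pneumonia", "length_of_stay": 8},
}

# Link each clinical event to its administrative record on patient_id,
# mimicking a join between the two databases.
linked = [
    {**event, **admissions[event["patient_id"]]}
    for event in clinical_events
    if event["patient_id"] in admissions
]

for row in linked:
    print(row)
```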

The MUHC generates and stores an enormous amount of information and is currently developing a system that will ensure the interoperability of these databases. The growing collaboration between clinical groups, administrators and departments such as Information Services, Finance, and Quality and Performance will create the organizational capability to conduct evaluations that support decision-making. This speaks to the advantages of building in-house expertise rather than relying on external consultants for evaluation.

The MUHC is establishing common evaluation practices and tools for use by the TSO and Quality, Risk and Performance office. Evaluations on a particular indicator will always employ the same metrics and the same source of information on those metrics. The metrics chosen to measure performance on different indicators should be ones where norms and benchmarks are established in the literature so that the institution can define reasonable goals.
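
One simple way to enforce this consistency (sketched below with invented indicator names and placeholder targets, not the MUHC's actual definitions) is a shared registry that fixes each indicator's metric, data source and benchmark:

```python
# A shared registry so every evaluation uses the same metric definition,
# the same data source and a benchmark drawn from the literature.
# All entries are illustrative placeholders.
INDICATORS = {
    "allergy_documentation": {
        "metric": "proportion of patients with documented allergy status",
        "source": "clinical information system, Allergy module",
        "benchmark": 0.95,        # hypothetical target
        "higher_is_better": True,
    },
    "door_to_balloon": {
        "metric": "minutes from ER arrival to angioplasty",
        "source": "ER and catheterization lab timestamps",
        "benchmark": 90,          # commonly cited guideline target, in minutes
        "higher_is_better": False,
    },
}

def meets_benchmark(indicator: str, observed: float) -> bool:
    """Compare an observed value against the registered benchmark."""
    spec = INDICATORS[indicator]
    if spec["higher_is_better"]:
        return observed >= spec["benchmark"]
    return observed <= spec["benchmark"]

print(meets_benchmark("door_to_balloon", 82))  # True: under the 90-minute target
```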

Setting objectives

During the evaluation phase of a project to implement a new information system, it is important to have a clearly articulated objective, find the right metrics to assess progress toward that objective, and define the right moment to take the pre- and post-measurements. However, people often find it very difficult, especially with IT systems, to state their objectives for implementing a given system. The focus falls very naturally on the ‘how’ rather than the ‘why’. The involvement of IS-IT specialists and the long and arduous process of assuring uptake by clinicians can focus assessment too narrowly on the technical aspects of adoption, at the expense of the clinical perspective. It is essential to help people articulate the ‘why’ and to restate and revise it as needed throughout the project implementation and evaluation period.

Ongoing evaluation

IT systems are designed with input from people who are familiar with organizational needs to make sure that the information required to monitor performance on a given indicator is valid and can be extracted. The MUHC is now defining rules to ensure that Quality, Risk and Performance is involved from the start of IT projects.

Implementing new IT systems allows for the harmonization of definitions and data entry. Once this is done and people are trained, these systems will provide real-time analysis of care processes and automatically alert personnel when a step is missed, a time lag is too long, treatment does not conform to best practices, or an avoidable side effect has occurred. For myocardial infarction, for example, the time of arrival at the ER is recorded, as is the time of angioplasty and whether there was a readmission within 30 days. This information can easily be extracted, can help the organization identify opportunities for improvement in the way patients with myocardial infarction are treated, and can show whether these improvement initiatives are effective.
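
As a minimal sketch of how such extracted timestamps become indicators, the example below computes a door-to-balloon time and a 30-day readmission flag for one episode; the record layout is assumed for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical extract for one myocardial infarction episode.
episode = {
    "er_arrival":   datetime(2010, 6, 3, 14, 10),
    "angioplasty":  datetime(2010, 6, 3, 15, 25),
    "discharge":    datetime(2010, 6, 8, 10, 0),
    "readmissions": [datetime(2010, 6, 25, 9, 30)],
}

# Minutes from ER arrival to angioplasty (door-to-balloon time).
door_to_balloon = (
    (episode["angioplasty"] - episode["er_arrival"]).total_seconds() / 60
)

# Was there any readmission within 30 days of discharge?
readmitted_30d = any(
    episode["discharge"] < r <= episode["discharge"] + timedelta(days=30)
    for r in episode["readmissions"]
)

print(f"Door-to-balloon: {door_to_balloon:.0f} min")
print(f"Readmitted within 30 days: {readmitted_30d}")
```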

Actionable indicators

The results of IT system evaluations are of interest to a number of different parties: the Ministry of Health, hospital administration, project leaders and managers, and clinicians. Each level can act on different indicators to contribute to the same overall goal, and it is important to tailor the results of an evaluation to each audience.

Results need to be presented in different ways to different audiences. The Operations Committee may be interested in the number of users of an IT system: Which clinicians? How many patients now have accurate and complete data recorded? Clinician groups want to hear about outcome improvements obtained through documentation. Administration, which provides the funds necessary to complete the project, wants evidence of efficacy and cost-effectiveness gains. Results need to be presented to the people who can act on them to bring about improvements.

An IT system evaluation team may look at five different indicators for different reasons, but it is important to include a few that are able to convince clinicians that proper and generalized use of the system will change care and outcomes. For example, if clinicians see that use of an electronic prescription system enables their patients to be started on antibiotics four hours earlier, that will encourage use.

When does evaluation stop?

The literature is clear that changes will not last unless feedback is provided.(3) To sustain a change, measurement must be ongoing. The appropriate measurement interval will depend on the nature of the project. There will likely be educational and training components that continue beyond the original project to ensure that new personnel are equipped to use a system.

The complexity of its management structure makes the health sector an especially difficult place in which to bring about change.(4) Professionals working in the health sector can present obstacles to change unless they are invested in the process. In this context, a change process encompasses the actions, reactions and interactions of various elements in the organization that enable it to move from one state to another.(5-7) Evaluation, communication of results and clear accountability for acting on them nurture a culture of continuous IT-enabled improvement.

References:

1. Greenhalgh, T. & Russell, J. (2010). Why do evaluations of eHealth programs fail? An alternative set of guiding principles. PLoS Medicine, 7(11).

2. DeLone, W. H. & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60-95.

3. Ferlie, E. & Shortell, S. M. (2001). Improving the quality of health care in the United Kingdom and the United States: A framework for change. The Milbank Quarterly, 79, 281-315.

4. Glouberman, S. & Zimmerman, B. (2002). Complicated and complex systems: What would a successful reform of Medicare look like? Discussion paper no. 8, Commission on the Future of Health Care in Canada. Ottawa: Government of Canada.

5. McNulty, T. & Ferlie, E. (2002). Reengineering health care: The complexities of organizational transformation. Oxford, England: Oxford University Press.

6. Pettigrew, A. M. (1987). Context and action in the transformation of the firm. Journal of Management Studies, 24(6), 649-670.

7. Pettigrew, A. M., Ferlie, E. & McKee, L. (1992). Shaping strategic change in large organizations: The case of the National Health Service. Newbury Park: Sage Publications.