It has now been well established that early diagnosis and treatment of rheumatoid arthritis (RA) improve outcomes1. The barriers to early diagnosis and treatment are many, but the most frequently cited are a shortage of rheumatologists and an increasing burden of inflammatory arthritis as the population grows and ages2,3. These barriers have prompted many groups across the country, at both the local and national levels, to develop innovative models of care (MOC) to improve access, diagnosis, and treatment for RA. Further, funding agencies such as the Canadian Institutes of Health Research4 and The Arthritis Society5 have research priorities in developing MOC for arthritis; policy makers are looking more than ever to optimize access and improve quality of care, and patients are demanding it as well.
For an MOC to be a “model” for others to follow and implement, evaluation of its performance is key. The Arthritis Alliance of Canada (AAC) has developed 6 system-level performance measures to assess whether an MOC is effective in improving access to care and early treatment of inflammatory arthritis6. The term “model of care” is used frequently in the current healthcare landscape and can carry different meanings in different contexts; therefore, before we can evaluate these MOC we have to know what they are. An MOC has been defined as a “framework that outlines the optimal manner in which condition-specific care should be made available and delivered to consumers at a system level”7. An MOC should bridge the gap between the evidence supporting a specific protocol of care and how best to deliver that protocol to the consumer. MOC understandably require significant resources and changes to current processes8; therefore, evaluation of these MOC is critical to justify their existence and, most importantly, to know that they are meeting the outcomes they were developed to address. Bourgeault also highlights that evaluation of processes is important because it can help to eliminate or improve inefficient steps and can support scaling up beyond the original site8.
In this issue of The Journal, Barber, et al9 describe the results of measuring adherence to the AAC’s performance measures. Four of the 6 measures were analyzed: wait time to rheumatology consult, percentage of patients seen in yearly followup, percentage taking disease-modifying antirheumatic drugs (DMARD), and time to starting DMARD. The data were collected from triage and/or cohort databases supplemented with chart reviews. The performance measures were evaluated in 5 MOC, all aimed at improving early access and care for patients with RA. The group found that data were not systematically collected at any of the 5 sites, and chart reviews were required at all sites despite 4 of the MOC maintaining a clinical or research database. The only measure that could be collected at all sites was wait times; yearly followup could be collected at only 2 sites, and only 3 sites could calculate the percentage taking DMARD therapy and time to DMARD start. From the data that were available and collected, median wait times varied between sites from 21–75 days, yearly followup was 83–100%, the percentage of patients taking DMARD was 90–100%, and time to start of DMARD within 2 weeks of diagnosis was 87% or higher at all 3 sites.
This is the first study in Canada, to the authors’ knowledge, to examine the feasibility of reporting on system-level performance measures for MOC. The findings highlight the difficulty of collecting performance measures in MOC designed to improve access and treatment for RA. However, where performance measures were available, the results showed how these data could inform future directions. For example, the significant disparity in wait times across the MOC noted in the study would be helpful in discussions and planning of workforce allocation. The study also confirms that the MOC have a high percentage of both patients taking DMARD and patients with a time to starting DMARD within 2 weeks. This is similar to other studies10 and confirms that these MOC are meeting these outcomes. The study has some limitations, including small patient numbers and limited data collection; some sites had implemented their MOC only in the last few years. The study also represents a convenience sample that may not capture the breadth and extent of MOC currently in use across the country, and may not reflect the optimal MOC.
Given the current interest in MOC among various national organizations, there may be an assumption that MOC are developed at a system or national level; however, most MOC for arthritis in Canada have been developed at the local level11 by interested and engaged individuals who give their time and expertise to innovate with no other significant resources. Bourgeault notes that there can be a penalty for early innovators8 who are called on to discuss their innovation — this can leave them with little time and few resources not only to evaluate their innovation but also to sustain and improve their MOC over time. There can be other reasons why MOC may not be evaluated, including the risk of discovering that a widely presented program did not meet its outcomes, and a lack of capacity or expertise for scientific evaluation12 within the institution. Those who implement MOC may not excel at evaluating the innovation8. Most of the focus of MOC has been on implementation, but evaluation must be part of development from the outset. Evaluation requires resources, but it is also imperative to know whether an innovative MOC has had an effect on patient care. Incorporating the evaluative component of an MOC at the planning stage ensures that adequate resources are allocated for it and that the necessary data are collected systematically from the outset — 2 issues the Barber, et al study highlights. It has been suggested by others that 5% of a program’s funds be reserved for evaluation12.
MOC have been identified as a key component of improving the access, diagnosis, and treatment of inflammatory arthritis, and multiple MOC have been developed across the country to address the gaps in care. Most of these MOC have concentrated their efforts and resources on implementation rather than evaluation. Ongoing monitoring and evaluation of these MOC must take place; otherwise, how can we call them “model” practices for others to adopt?
Footnotes
See Arthritis healthcare performance measurement, page 1501
REFERENCES
- 1.
- 2.
- 3.
- 4.
- 5.
- 6.
- 7.
- 8.
- 9.
- 10.
- 11.
- 12.