Abstract
Objective. To develop system-level performance measures for evaluating the care of patients with inflammatory arthritis (IA), including rheumatoid arthritis (RA), psoriatic arthritis, ankylosing spondylitis, and juvenile idiopathic arthritis.
Methods. This study involved several methodological phases. Over multiple rounds, clinicians, researchers, and patients were asked to help define a set of candidate measurement themes. A systematic search was conducted of existing guidelines and measures. A set of 6 performance measures was defined and presented to 50 people, including patients with IA, rheumatologists, allied health professionals, and researchers, through a 3-round, online, modified Delphi process. Participants rated the validity, feasibility, relevance, and likelihood of use of the measures. Measures with median ratings ≥ 7 for validity and relevance were included in the final set.
Results. Six performance measures were developed evaluating the following aspects of care, with each measure being applied separately for each type of IA except where specified: waiting times for rheumatology consultation for patients with new onset IA, percentage of patients with IA seen by a rheumatologist, percentage of patients with IA seen in yearly followup by a rheumatologist, percentage of patients with RA treated with a disease-modifying antirheumatic drug (DMARD), time to DMARD therapy in RA, and number of rheumatologists per capita.
Conclusion. The first set of system-level performance measures for IA care in Canada has been developed with broad input. The measures focus on timely access to care and initiation of appropriate treatment for patients with IA, and are likely to be of interest to other arthritis care systems internationally.
- QUALITY INDICATORS
- RHEUMATOID ARTHRITIS
- PSORIATIC ARTHRITIS
- JUVENILE IDIOPATHIC ARTHRITIS
- ANKYLOSING SPONDYLITIS
It is estimated that over a million Canadians have inflammatory arthritis (IA), including rheumatoid arthritis (RA), ankylosing spondylitis (AS), juvenile idiopathic arthritis (JIA), and psoriatic arthritis (PsA)1. Early detection and treatment improve outcomes in RA2,3 and evidence is emerging for other types of IA4,5,6. Rheumatologists are the medical specialists primarily responsible for diagnosing and treating people with IA. Unfortunately, the number of rheumatologists in many regions may be inadequate to ensure timely access to care for patients7,8. The rising burden of IA, a projected shortage of rheumatologists7,8,9, and recognized gaps in current care10 prompted the Arthritis Alliance of Canada (AAC) to consider new models of care delivery to optimize access, treatment, and patient outcomes in IA1. A model of care is a strategy that describes an optimal, evidence-based approach to care delivery with an emphasis on what resources and processes are needed to deliver high-quality care at a community level11.
A critical component of model of care implementation is evaluation. The objective of our study was to develop system-level measures to evaluate a model of care. These measures are reported at a regional, provincial, or national level, in contrast with patient-provider level measures, which identify and evaluate individual physician performance. The measures focus on access to care and treatment provided by a model of care and are intended for research and quality improvement. Although they were developed in a Canadian context, the measures are likely highly relevant to other arthritis care settings.
MATERIALS AND METHODS
The AAC has over 36 member organizations from across Canada12 and provides a central focus for national arthritis-related initiatives, including this project.
The performance measures were developed over 4 phases (Figure 1).
Process for selection of system-level performance measures for IA. IA: inflammatory arthritis; AAC: Arthritis Alliance of Canada; DMARD: disease-modifying antirheumatic drug; RA: rheumatoid arthritis.
Phase 1: Establishing measurement priorities
A sequential process was used to establish measurement priorities (Figure 1; Supplementary Appendix, available online at jrheum.org). First, a literature review of published measures and clinical practice guidelines for IA care was conducted to identify possible measurement themes. The results of this review informed a draft set of measurement themes that was mapped onto the 6 dimensions of quality of care (effectiveness, accessibility, safety, efficiency, acceptability, and appropriateness13; see Supplementary Figure 1 and Supplementary Figure 2 for search strategy and results, available online at jrheum.org). The measurement themes were reviewed during a series of meetings with healthcare workers and patients both in person and by teleconference to first expand and then refine the themes, considering feasibility and priorities. Thirteen out of 19 (68%) AAC members, including rheumatologists and health services researchers with expertise in models of care and/or performance measurement, completed an anonymous online poll to select the most feasible, valid, and relevant themes, and these were presented at an AAC meeting for feedback prior to measure specification. No demographic data were collected to characterize the respondents.
A working group of 5 rheumatologists and health services researchers then prepared the specifications for the final set of measures for development.
Phase 2: Systematic search to support candidate performance measures
A systematic search was conducted to identify existing guidelines and performance measures to ensure the proposed measures were supported by current recommendations, and to harmonize the proposed measure with existing ones (Figure 1; Supplementary Figure 1, available online at jrheum.org).
PubMed and DynaMed electronic databases were searched from 2009 to May 26, 2014, using keywords, synonyms, and MeSH headings for the concepts of IA and Guideline or Indicators (Supplementary Figure 2). A grey literature search was also conducted, including rheumatologic society websites and national quality indicator and guideline clearinghouses. English-language documents from continental Europe, Australia, New Zealand, Canada, the United States, and the United Kingdom were included. Only guidelines and measures developed and endorsed by a medical society or national healthcare quality regulatory body were included.
The search and preliminary article selection were completed by a medical librarian (Doctor Evidence LLC, a literature review company) based on a predefined protocol. Final article selection was based on relevance to support the measures (conducted by CEHB). This was not a systematic review according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses guidelines14. The supporting recommendations were abstracted and included in a background document provided to panelists in Phase 3.
Phase 3: Online national modified Delphi panel to establish IA performance measures
To ensure wide input, a 3-round, online, modified Delphi panel was conducted using a platform called ExpertLens15,16. This platform has previously been used to elicit expert opinion on healthcare topics17,18.
For the online panel, 50 people were invited to participate, including rheumatologists, researchers, allied health professionals, government representatives, and people with arthritis. A purposive sampling strategy was used to ensure that all interested groups and all provinces were represented. Individuals were identified by the AAC based on prior involvement in work or activities relevant to measure development. No honoraria or incentives were offered for participation. The University of Calgary Conjoint Research Ethics Board approved the project and RAND’s Human Subjects Protection Committee determined that the study was exempt from review.
The panel process took place from September 22 to June 11, 2014, and included 2 rounds of online voting and a discussion round in between. Each round was open for 7–14 days. Periodic reminders were sent to maximize engagement.
In Round 1, panelists were asked to answer a set of 7 questions about each of the 6 candidate measures. They were provided with a background document, which included the rationale and methods for the project, as well as the measure specifications.
Participants were asked to rate the validity and feasibility of candidate measures while considering multiple facets of validity and feasibility19,20,21 using a modification of published questions (Table 1). We also asked panelists about the relevance of the measure and the likelihood of use of the measure in their health system.
Wording of questions for online Delphi panel to assess validity, feasibility, relevance, and likelihood of use in comparison to previously used panelist questions for quality indicator development.
In Round 2, panelists participated in an asynchronous, anonymized, online discussion. The discussion was moderated by a health services researcher and rheumatologist who had experience with the platform and with the development of the performance measures (CEHB). The moderator asked questions of the panel to clarify responses and to raise discussion of comments submitted anonymously during Round 1 until all concerns raised by participants had been addressed. During this round, panelists also reviewed automatically generated bar graphs showing the distribution of the group’s responses. After the discussion round, the core research team made a small number of minor changes to the measures to acknowledge the comments made by participants. In Round 3, participants revised their responses to Round 1 questions in light of the Round 2 discussion and feedback.
To be included in the final set, measures had to be rated as valid and relevant (median scores ≥ 7 on a 9-point scale), with no disagreement. Disagreement was determined according to the RAND/UCLA Appropriateness Method handbook22: disagreement exists when the Interpercentile Range (IPR; the difference between the 30th and 70th percentiles) is larger than the Interpercentile Range Adjusted for Symmetry (IPRAS), calculated as IPRAS = 2.35 + (1.5 × Asymmetry Index [AI]), where the AI is the absolute difference between 5 and the central point of the IPR.
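Because the inclusion and disagreement rules above are purely arithmetic, they can be sketched in a few lines of code. The function below is an illustrative implementation of the handbook formula, not code from the study; the linear-interpolation percentile convention is an assumption and may differ slightly from the handbook's software.

```python
from statistics import median

def disagreement(ratings):
    """Return True if the panel 'disagreed' on a measure.

    IPR   = 70th minus 30th percentile of the ratings (1-9 scale)
    AI    = |5 - central point of the IPR| (Asymmetry Index)
    IPRAS = 2.35 + 1.5 * AI
    Disagreement exists when IPR > IPRAS.
    """
    xs = sorted(ratings)

    def pct(p):
        # Linear-interpolation percentile (an assumed convention).
        k = (len(xs) - 1) * p
        lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
        return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

    p30, p70 = pct(0.30), pct(0.70)
    ipr = p70 - p30
    ai = abs(5 - (p30 + p70) / 2)
    ipras = 2.35 + 1.5 * ai
    return ipr > ipras

def included(ratings):
    # The study's inclusion rule: median >= 7 with no disagreement.
    return median(ratings) >= 7 and not disagreement(ratings)

# A tightly clustered high-rating panel vs. a polarized one:
print(disagreement([7, 7, 8, 8, 8, 9, 9]))     # -> False (no disagreement)
print(disagreement([1, 1, 2, 2, 8, 9, 9, 9]))  # -> True (polarized ratings)
print(included([7, 7, 8, 8, 8, 9, 9]))         # -> True
```

Note how the asymmetry term widens the disagreement threshold when ratings cluster away from the scale midpoint, so a panel that uniformly rates a measure 8-9 is not flagged even though some spread exists.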
Candidate measures with uncertain ratings of data availability or reliability and/or uncertain ratings of whether the health system has control over the measure (i.e., ratings 4–6 on a 9-point scale) were also included in the final set because feasibility of measurement will be tested prior to widespread measure implementation.
Phase 4: AAC comment
The final set of measures was submitted for wide AAC comment. The measures were first presented at a 2-h workshop during the AAC Second Annual Conference and Research Symposium in November 2014, and the workshop participants were asked to respond anonymously to the question “How likely would you be to use this measure for quality improvement in your arthritis care system?” using an electronic response system.
The second method for obtaining input was posting the measures on the AAC Website for 1 month and advertising in the AAC newsletter that the document was open for comment.
RESULTS
Phase 1: Establishing measurement priorities
Following an iterative process of engagement (Figure 1), 16 measurement themes were presented to AAC participants in an online poll to decide which would be most valid, relevant, and feasible to use as performance measures. The top 7 candidate measurement themes were presented at an AAC meeting for additional feedback. The themes included access and waiting times for rheumatologic care (considered broadly in this phase to encompass rheumatologist and/or allied health professional care), consultation for RA within 4 weeks, percentage of patients with RA in low disease activity (LDA) state or remission, percentage of patients with RA receiving a disease-modifying antirheumatic drug (DMARD; Supplementary Material, available online at jrheum.org), time to DMARD therapy (Supplementary Material, available online at jrheum.org), rheumatologists per capita (Supplementary Material, available online at jrheum.org), and percentage of patients with IA seen by a rheumatologist.
During the process, it was determined that the focus of the performance measures should be on access to rheumatologist care and appropriate treatment, and that the measurement of clinical outcomes was beyond the scope of the project. Therefore, the candidate measure on the percentage of patients with RA in LDA or remission was not included. Access to rheumatologist care was prioritized for measure development over rheumatology care (which encompassed access to other allied health professions) because rheumatologists are the primary specialists responsible for diagnosing IA and prescribing medication, and because of concerns about the feasibility of measuring access to rheumatology care. The access and waiting time measures were further refined, and the standard of consultation within 4 weeks for RA was adopted as a benchmark for the waiting time measure (Table 2). Finally, the percentage of patients with IA seen by a rheumatologist was split into 2 measures: 1 identifying the percentage of patients with newly diagnosed IA seen by a rheumatologist and a second identifying the percentage of patients seen in yearly followup (at a minimum, to ensure patients were not lost to followup; Supplementary Material, available online at jrheum.org). Six measures were selected for further development.
Final set of system level performance measures for IA (complete measure specifications shown in the Supplementary Data, available online at jrheum.org).
Phase 2: Systematic search to support candidate performance measures
The systematic search identified 1007 articles, and 115 were selected for full-text review (Supplementary Figure 1, available online at jrheum.org). Of these, 26 documents directly relevant to the candidate measurement themes were included: 9 RA guidelines23,24,25,26,27,28,29,30,31, 6 RA quality measure documents32,33,34,35,36,37, 4 PsA guidelines38,39,40,41, 3 AS guidelines42,43,44, 1 AS quality measure document45, 1 JIA guideline46, and 2 JIA quality measure documents47,48. Each guideline recommendation and quality measure was abstracted and included in a document available to panelists in Phase 3. One quality measure, developed by the US National Committee for Quality Assurance and endorsed by the American College of Rheumatology37 and the National Quality Forum34, is used in US reporting programs such as the Physician Quality Reporting System33; it was identified as highly relevant for use in Canada, and the measurement specifications were harmonized with it.
Phase 3: Online national modified Delphi panel to establish IA performance measures
Out of 50 participants invited to participate in the online modified Delphi panel, 42 (84%) answered at least 1 question in Round 1. However, not all participants answered each question, and the highest number of responses to any question was 42. In Round 2, 32 participants (64%) generated 88 comments. Following Round 2 discussions, minor modifications were made to some measures. In Round 3, 36 participants provided their final ratings (72% of the total invited participants and 86% of Round 1 participants engaged in Round 3).
A total of 42 participants answered demographic questions; a single additional participant answered some questions on the performance measures in Round 1 but did not answer demographic questions. Fifty-one percent of respondents were physicians (Table 3). There was participant representation from all Canadian provinces except Prince Edward Island, and none from the Yukon, the Northwest Territories, or Nunavut.
Characteristics of ExpertLens panel participants (n = 43). Values are n (%) unless otherwise stated.
The results for panel voting are shown in Table 4.
Results of online modified Delphi panel to develop performance measures for IA (Round 3). Values are median (interquartile range).
All 6 measures were included in the final set and are shown in Table 2. The full specifications of the measures are provided in the Supplementary Appendix (available online at jrheum.org). The measures focus on the access to rheumatology care and provision of treatment, and cover concepts reported separately for each IA subtype except as indicated. Overall, there was agreement on the validity, relevance, and likelihood of use of the measures (median scores 7–9).
During the discussion round of the online panel, the original wording of #3 was modified from “percentage of patients with IA seen yearly by a rheumatologist” to “seen yearly by a rheumatology team member” to reflect the care provided to patients in some models of care, including nurse-led clinics. The same change was not made to #2 (percentage of patients with IA seen by a rheumatologist) because that measure is intended to capture incident cases of IA that have not yet been seen by a rheumatologist for confirmation of diagnosis and treatment.
There was a degree of uncertainty about the feasibility of reporting some of the measures. For example, for #1 (waiting times for rheumatologist consultation for patients with IA) the median ratings for information availability were 5–6, depending on the subtype of IA. Similarly, there was uncertainty (median scores of 6) about whether reliable and unbiased information was available to report on the following measures: waiting times for rheumatologic consultation for patients with AS and PsA (#1), the percentage of patients with IA seen by a rheumatologist (#2), time to DMARD therapy in RA (#5), and rheumatologists per capita (#6).
There was also uncertainty over the system’s ability to control the performance of 3 of the measures (median scores of 6): the waiting time measure for RA, AS, and PsA (#1); time to DMARD therapy in RA (#5); and rheumatologists per capita (#6).
Two measures (#1 and #5) incorporated benchmarks set by the Wait Time Alliance (WTA; the committee that developed the Canadian rheumatology wait time benchmarks)49. In Round 3, we asked panelists’ opinions of the WTA benchmarks, and they agreed with their inclusion (median ratings 7–8 on a 9-point scale, where 9 indicates strong agreement).
Phase 4: AAC comment
During the AAC workshop, 34 participants indicated that they were likely or very likely to use the measures for quality improvement in their arthritis care systems (Supplementary Material, available online at jrheum.org).
During roundtable discussions focusing on the feasibility of using the measures and participants’ willingness to pilot-test the measures, it was noted that data sources varied by province/region, which affected the feasibility of implementation. However, some common themes emerged, including the potential use of administrative data, clinical data, registries, and clinical databases for measurement. Overall, there was a high level of enthusiasm for pilot-testing the implementation of the performance measures, but many cited lack of resources or data as potential barriers.
Two comments were provided following the AAC Website posting of the measures. One comment supported the development of the measures, and another suggested some minor clarifications to the specifications, which were incorporated.
DISCUSSION
To our knowledge, we have developed the first set of system-level performance measures for IA care in Canada, using a rigorous, multiphase methodology designed to obtain varied input. The 6 system-level performance measures address access to specialist care and treatment for people with IA. Reporting on 3 of the measures is further subdivided by type of IA (AS, PsA, RA, JIA), given disease heterogeneity. These performance measures are important for quality improvement and research purposes. The measures can be used in arthritis care settings to examine existing models of care or to evaluate the introduction of innovations such as central intake systems or alternative models of care delivery, such as telemedicine. While the measures may highlight underserviced regions, this information should not be used for accountability purposes to penalize low-resource areas, but should instead help direct resource use and innovations in care delivery to where they are most needed.
The final performance measures developed were deemed valid and relevant by our expert panel and aligned with existing models of care for IA in Canada50. While there was some uncertainty about the availability and the reliability of data sources to report on the measures, this varied somewhat according to individual measures and reflects that in many Canadian arthritis centers, high-quality data sources are not currently readily available to measure access to care or adherence to guideline-based treatment. However, a priori we decided not to exclude measures with uncertain feasibility ratings because it was recognized that those domains would need to be tested. We also did not discard measures for which participants indicated uncertainty in the domain of “control over the measure” (How well can the factors that determine performance on this measure be controlled at the health system level?). While in previous measure development studies this concept has been incorporated into assessments of validity, this also relates to feasibility of measure implementation and will be evaluated during testing. To our knowledge, ours is the first study separating validity and feasibility questions to test 1 concept per question. This yielded valuable information, reflecting the nuances of the participants’ perceptions in these domains.
Strengths of our methodology include the use of an online modified Delphi panel platform, which allowed us to gain input from a wide variety of geographically dispersed stakeholders. Typical panels for performance measure development often use a smaller in-person meeting for Round 2 and traditionally have included only clinicians and researchers22. Additionally, our panel response rates were high (84% in Round 1 and 72% in Round 3) and compared favorably with other online modified Delphi panels run on this platform21. Another strength of our study was wide input at all development stages to identify measurement priorities and to comment on the measurement specifications. The participants represented arthritis care settings and models of care from across Canada, including academic and community practices. Recruiting participants from rural regions was challenging; consequently, only 1 participant was from a rural area. This likely reflects the geographic distribution of arthritis health professionals, who cluster in urban centers. The inclusion of 7 healthcare professionals from community practice favorably affects the generalizability of our findings. We also had arthritis patient input through engagement at AAC meetings (Phases 1 and 4) as well as participation in the ExpertLens panel (Phase 3).
A limitation of our work is that we were not able to develop all measurement themes presented during the initial discussions. During the measure development phase of the project, access to care and guideline-based treatment were prioritized because understanding access is critical before evaluating outcomes. Additionally, because of the heterogeneity of diseases encompassed by the term IA, it was not feasible to develop performance measures for reporting on guideline-based treatment for IA subtypes beyond RA. Appropriate guideline-based treatment for AS, PsA, and JIA is closely tied to monitoring disease activity, and measurement of outcomes was beyond the project scope. Also, although a systematic search was used to define our evidence base for measure development, it was not a formal systematic review and it is possible some relevant recommendations or measures were missed.
An additional limitation is the possibility of some overlap of respondents between Phase 3 and Phase 4; however, the degree of overlap is unknown because the participants in Phase 3 were anonymous. We were also not able to characterize the respondents during the online poll in Phase 1 because no demographic data were collected, which may be a limitation in describing whether these individuals had the requisite knowledge for this measure development stage. Finally, it is possible that a responder bias could have influenced the results of our measure development or our AAC comment period because it is likely that the most invested and enthusiastic members participated in this work.
We anticipate that some of the measures will be identified in administrative health data, while others may be best reported using clinic databases or clinical registries. We caution, however, that measures derived from different data sources may vary in important ways because of differences in the population being studied or the type of data available, and therefore may not always be comparable. For example, for Performance Measure 4 (percentage of patients with RA treated with a DMARD), clinic data would identify DMARDs prescribed, whereas claims data would identify DMARDs dispensed.
Further work on harmonizing data collection across arthritis care settings for reporting on these measures is ongoing at a national level through related projects at the AAC. Additionally, an expansion of the measurement framework is planned, including measures encompassing best practices for management of IA, provision of patient information on self-management, and access to arthritis resources.
We have developed a set of system-level performance measures for IA care. The measures were developed in a Canadian setting, but both our process and findings are relevant to other arthritis care settings. The measures focus on access to specialist care and provision of guideline-based treatment, and are reported for each subtype of IA. The measures were rigorously developed and will be tested in arthritis care settings for feasibility of implementation. The measures are a critical starting point for evaluating access to arthritis care and for use in quality improvement.
ONLINE SUPPLEMENT
Supplementary data for this article are available online at jrheum.org.
Acknowledgment
Jaime Coish is the executive director of the Arthritis Alliance of Canada (AAC) and facilitated administrative support, meeting organization, and participant engagement in this project. Lina Gazizova is a project manager for the AAC and provided administrative and meeting support for this project and helped facilitate participant communication and engagement in this project. Jonathan Riley contributed to the writing of the Canadian Institutes of Health Research planning grant, which helped fund this project. Jenny Wang contributed to the organization and the literature review for the first meeting of interested parties in Phase 1 of this project described in Supplementary Figure 2 (available online at jrheum.org). Samra Mian contributed to the preparation of slide decks for meetings in Phase 1 of this project and conducted literature searches to supplement background information for the project.
APPENDIX 1.
List of study collaborators. Members of the Arthritis Alliance of Canada Performance Measurement Development Panel: Vandana Ahluwalia, MD, FRCPC, Corporate Chief of Rheumatology, William Osler Health System; Henry Averns, MB, ChB, FRCP (UK), FRCPC, Rheumatologist; Cheryl Barnabe, MD, FRCPC, MSc, Assistant Professor, Division of Rheumatology, Department of Medicine, Department of Community Health Sciences, University of Calgary, ARC Research Scientist; Claire Bombardier, MD, MSc, FRCPC, Professor of Medicine, Rheumatology Division Director, University of Toronto, Senior Scientist, Institute for Work and Health; Susan J. Bartlett, PhD, Associate Professor, Divisions of Clinical Epidemiology, Rheumatology and Respirology, McGill University/MUHC; Sasha Bernatsky, MD, FRCPC, PhD, Associate Professor, Divisions of Rheumatology and Clinical Epidemiology, McGill University; Jennifer Burt, PT, Rheumatology Services, St. Clare’s Mercy Hospital, Eastern Health; Debbie Feldman, PT, PhD, Professor of Medicine, School of Rehabilitation, Université de Montréal; Dafna D. Gladman, MD, FRCPC, Professor of Medicine, University of Toronto, Division of Rheumatology, Senior Scientist, Toronto Western Research Institute; Beverly Greene, RN, MN, Director, Chronic Disease Prevention Unit, New Brunswick Department of Health; Boulos Haraoui, MD, FRCPC, Associate Professor, Division of Rheumatology, Department of Medicine, University of Montreal; Nigil Haroon, MD, PhD, DM, Assistant Professor, Division of Rheumatology, Department of Medicine, University of Toronto; Catherine Hofstetter, Patient, Canadian Arthritis Patient Alliance; Adam M. Huber, MSc, MD, IWK Health Centre and Dalhousie University; Stephanie Keeling, MD, FRCPC, MSc, Associate Professor, Division of Rheumatology, Department of Medicine, University of Alberta; Bianca Lang, MD, FRCPC, Professor, Division of Rheumatology, Department of Pediatrics, Dalhousie University, Head, Division of Rheumatology, IWK Health Centre; Sharon A. 
Le Clercq, MD, FRCPC, Associate Clinical Professor, Division of Rheumatology, Department of Medicine, University of Calgary; Theresa Lupton, RN, CCRP, Nurse Clinician, Rheumatology, Alberta Health Services; Anne Lyddiatt, Patient; Rashmi Mandhane, BScPT, Alberta Health Services; Kimberly Morishita, MD, MHSc, FRCPC, Clinical Assistant Professor, Division of Rheumatology, Department of Pediatrics, University of British Columbia, Pediatrician, BC Children’s Hospital; Angelo Papachristos, BSc, BScPT, MBA, ACPAC, Advanced Practice Physiotherapist, St. Michael’s Hospital, Clinical Lecturer, University of Toronto; Patricia Patrick, BScN, RN, Nurse Clinician, Rheumatology; Dawn Richards, PhD, person who lives with rheumatoid arthritis, Vice President, Canadian Arthritis Patient Alliance; David Robinson, MD, MSc, FRCPC, Associate Professor of Medicine, University of Manitoba; Natalie J. Shiff, MD, FRCPC, MHSc, Assistant Professor, Division of Rheumatology, Department of Pediatrics, University of Saskatchewan; Trudy Taylor, MD, FRCPC, Assistant Professor, Department of Medicine, Division of Rheumatology, Division of Medical Education, Dalhousie University; Regina Taylor-Gjevre, MD, FRCPC, MSc, Professor, Division of Rheumatology, Department of Medicine, University of Saskatchewan; Glen T.D. Thomson, MD, FRCPC, Rheumatologist; Carter Thorne, MD, Director, The Arthritis Program, Southlake Regional Health Centre, Assistant Professor, Department of Medicine, Division of Rheumatology, University of Toronto; Karine Toupin-April, PhD, Associate Scientist, Children’s Hospital of Eastern Ontario Research Institute, Assistant Professor, Department of Pediatrics, Faculty of Medicine, University of Ottawa; Peter Tugwell, MD, FRCPC, Cochrane Collaboration Musculoskeletal Review Group, Department of Medicine, University of Ottawa; Marie D. 
Westby, PT, PhD, Physical Therapy Teaching Supervisor, Mary Pack Arthritis Program, Vancouver Coastal Health; Jessica Widdifield, PhD, McGill University; Linda Woodhouse, PT, PhD, Associate Professor and David Magee Endowed Chair in Physical Therapy, University of Alberta, Scientific Director, Alberta Health Services Bone and Joint Health Strategic Clinical Network; Michel Zummer, MD, FRCPC, Chief, Rheumatology, Hôpital Maisonneuve-Rosemont, Associate Professor, Université de Montréal.
Footnotes
Funded by a Canadian Institutes of Health Research Planning Grant, 2013-10-15 (funding reference number 218913). Additional funding and in-kind resources were provided by the Arthritis Alliance of Canada (AAC). Dr. Lacaille holds the Mary Pack Chair in Arthritis Research from the University of British Columbia and The Arthritis Society of Canada.
- Accepted for publication October 31, 2015.