Abstract
Objective. Administrative data are increasingly used for research and surveillance about rheumatic diseases. However, literature reviews have revealed a lack of consistency in the methods used to conduct observational rheumatic disease studies, which can yield findings that cannot be compared across studies. Our purpose was to develop best-practice consensus statements about the use of administrative data for rheumatic disease research and surveillance in Canada.
Methods. We convened a 2-day workshop of 52 decision makers, epidemiologists, clinicians, and researchers. Before the workshop, participants formed working groups to examine 3 best-practice categories: case definitions, epidemiology methods, and comorbidity and outcomes measurement. The groups conducted systematic or scoping reviews on key topics. At the workshop, evidence from the reviews was presented, and consensus-building techniques were used to develop the best-practice statements. The statements were presented, discussed, revised (as needed), and then subjected to voting.
Results. Thirteen best-practice consensus statements were developed and endorsed by consensus. For the first category, these consensus statements addressed validation techniques for rheumatic disease case definitions and case ascertainment bias. The consensus statements for epidemiology methods focused on confounding and drug exposure measurement. For comorbidity and outcomes measurement, consensus statements were developed for multiple conditions, including osteoporosis and fragility fractures, cancer, infections, cardiovascular disease, and renal disease. Strengths and limitations of administrative data were identified in relation to each topic.
Conclusion. Our best-practice consensus statements are consistent with other recent guidelines, including those for rheumatic disease biologics registries, but address additional issues specific to administrative data. Continuing work focuses on disseminating these consensus statements to multiple audiences.
Arthritis and rheumatic diseases are associated with significant burden1, and there is great need for continuing research and surveillance. Administrative health databases (physician billing, hospitalization, and prescription drug records) have been used in Canada and other countries to monitor disease prevalence and incidence and a variety of health outcomes2,3,4. The strengths of these data sources include accurate and complete records of healthcare use without recall bias, inclusion of entire populations, and followup over multiple years. Nevertheless, the findings from administrative health databases may be difficult to compare because of differences in study designs, definitions, and analytic techniques.
The adoption of standardized and valid methods for the use of administrative data would help to ensure comparability and accuracy. We report the results of activities sponsored by the Canadian Arthritis Network to develop consensus statements about best practices for the use of administrative data for research and surveillance of rheumatic diseases. Although the focus was predominantly motivated by the needs of Canadian researchers, this report has broad applicability to administrative data sources from other countries.
MATERIALS AND METHODS
In February 2011, a 2-day consensus meeting was convened of 52 decision makers, epidemiologists, clinicians, and researchers with expertise in using administrative health databases for chronic disease research and surveillance. All members of the Public Health Agency of Canada (PHAC) Canadian Chronic Disease Surveillance System Arthritis Working Group were invited to participate. This Working Group was composed of researchers and provincial and national government representatives from across Canada who were contributing to the development of a national surveillance system for arthritis and related conditions. A “snowball” sampling technique was used to identify other potential meeting participants; this is a nonprobability sampling technique, used when a clear sampling frame is hard to define5, in which initial respondents help to identify further potential participants.
In preparation for the consensus meeting, participants were assigned to 1 of 3 working groups (Appendix 1); each group was assigned 1 category for consensus statement development. The categories, although not reflecting a specific theoretical framework, were selected by the meeting leaders (LL, SB, DL), in consultation with arthritis researchers and epidemiologists, because they were identified as priority areas of concern in the use of administrative data for arthritis surveillance and research: (1) case definitions for rheumatic diseases; (2) methodological issues for pharmacoepidemiologic studies; and (3) identification of comorbid conditions and outcomes.
Each group had 1 meeting leader. The group members met by conference calls between June 2010 and January 2011. Their primary objective was to identify priorities and to propose evidence-based consensus statements for discussion at the meeting. The groups achieved this objective by engaging in focused discussions and conducting systematic or scoping reviews to inform the development of consensus statements. The reviews included Canadian studies as well as studies from elsewhere. Results of these reviews were presented at the February meeting as background evidence to support the proposed consensus statements.
During the meeting, proposed consensus statements were presented, discussed, revised (as needed), and then subjected to voting. To optimize consensus, we used techniques similar to those used by other recent consensus-building efforts in rheumatology, such as the 3e Initiative in Rheumatology6. During the voting process, participants were asked to respond to the following question for each consensus statement: Do you agree with including this recommendation, as worded, in the consensus statements? Participants were given the opportunity to vote yes or no, or to abstain from voting.
We used TurningPoint software7, which allows private voting and automatic tallying of responses; the percentage of acceptance was displayed after each round of voting. If > 80% acceptance was achieved, the statement was adopted. Otherwise, a moderator provided the participants with the opportunity to discuss and revise the consensus statement. The participants then voted a second time. If consensus of 70% or greater was achieved, the statement was adopted.
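For illustration only, a minimal sketch of the 2-round adoption rule described above; the function name and the handling of abstentions are assumptions, not part of the meeting protocol.

```python
def adopt_statement(round1_pct, round2_pct=None):
    """Adopt if >80% acceptance in round 1; otherwise, after discussion and
    revision, adopt if >=70% acceptance in round 2.
    (How abstentions enter the denominator is assumed, not specified.)"""
    if round1_pct > 80:
        return True
    return round2_pct is not None and round2_pct >= 70

# Example: 76% in round 1, then 88% after revision -> adopted.
print(adopt_statement(76, 88))  # True
```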
RESULTS
A total of 13 consensus statements were developed for the 3 categories. Table 1 lists the statements and endorsement rates.
Category I — case definitions for rheumatic diseases
The consideration of case definitions arose, in part, from ongoing work by PHAC on national surveillance strategies, using hospitalization records and physician billing claims.
1. Case definitions for rheumatic diseases should be justified based on study purpose, validity assessment, and feasibility. Case definitions may serve studies with different purposes, including assembling a patient cohort, for which a high positive predictive value with minimal false positives is desirable. Alternatively, the purpose may be to identify all probable cases, which requires high sensitivity. Because the optimal case definition depends on the study purpose, participants decided not to recommend specific case definitions for all studies; instead, they recommended consulting the results of validation studies and choosing a definition suited to the specific analysis to be conducted.
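For reference, the validity measures invoked throughout these statements are defined, relative to a reference standard such as chart review, as sensitivity = TP/(TP + FN), specificity = TN/(TN + FP), positive predictive value (PPV) = TP/(TP + FP), and negative predictive value = TN/(TN + FN), where TP, FP, TN, and FN denote true positives, false positives, true negatives, and false negatives. A case definition tuned for high PPV minimizes false positives among identified cases, whereas one tuned for high sensitivity minimizes missed cases.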
Even for disease surveillance, the best case definitions for the more common conditions remain unknown. In practice, agencies such as PHAC are performing pilot studies with osteoarthritis algorithms that favor sensitivity by allowing cases to be defined by a single physician billing code, because this condition tends to be underdetected in administrative data. In contrast, the PHAC pilot studies for rheumatoid arthritis (RA) surveillance require 2 or more billing codes, because a single billing code may lack specificity.
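To illustrate how such algorithms are typically operationalized against claims data, the following sketch applies 2 hypothetical definitions (a single billing code for osteoarthritis; 2 or more codes within a 2-year window for RA) to a toy physician claims table. The column names, the illustrative ICD-9 codes, and the 2-year window are assumptions for the example only, not recommended values.

```python
import pandas as pd

# Toy physician billing claims: one row per claim.
claims = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3],
    "service_date": pd.to_datetime(
        ["2009-03-01", "2010-01-15", "2009-06-10", "2009-07-02", "2010-05-20"]),
    "dx_code": ["715", "714", "714", "714", "715"],  # 715 = OA, 714 = RA (illustrative ICD-9)
})

# Osteoarthritis: a single billing code suffices (favours sensitivity).
oa_cases = set(claims.loc[claims["dx_code"] == "715", "patient_id"])

# RA: require >= 2 billing codes per patient within a 2-year window (favours specificity).
def meets_ra_definition(dates, window_days=730, n_codes=2):
    dates = dates.sort_values().reset_index(drop=True)
    for i in range(len(dates) - n_codes + 1):
        if (dates[i + n_codes - 1] - dates[i]).days <= window_days:
            return True
    return False

ra_claims = claims[claims["dx_code"] == "714"]
ra_cases = {pid for pid, grp in ra_claims.groupby("patient_id")
            if meets_ra_definition(grp["service_date"])}

print(oa_cases)  # {1, 3}
print(ra_cases)  # {2}
```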
2. Validation studies of rheumatic disease case definitions using administrative data should adhere to published guidelines on their conduct and reporting.
A systematic review of the literature revealed that validation studies do not always adopt consistent methods or report the results in a standardized way. Complete and accurate reporting about the methods adopted in validation studies would allow users of case definitions to assess the potential for bias and generalizability.
Discussion focused on the need for education and uptake of published criteria for evaluating the quality of validation studies. Three guidelines can be used: (1) Standards for Reporting of Diagnostic Accuracy (STARD)8; (2) Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS)9; and (3) modification of the STARD and QUADAS guidelines10.
3. Authors should acknowledge the limitations of their administrative data when ascertaining cases of rheumatic diseases, and the implications of these limitations for their findings. Inherent limitations of using administrative health databases to ascertain cases of arthritis and rheumatic diseases for research and surveillance were recognized. These include the potential for misclassification bias arising from errors or inconsistencies in diagnosis codes and from incompleteness of administrative databases. Moreover, the date of diagnosis recorded in administrative data may not correspond to the date of disease onset.
In Canada and other countries, physician billing claims do not consistently record services provided by physicians who receive non-fee-for-service remuneration, and not all provinces, territories, and health sectors require shadow billing, in which non-fee-for-service physicians submit parallel claims. Recent national surveys indicated that almost all adult Canadian rheumatologists submit billing claims (with most of the remainder providing shadow billing)2, as do pediatric rheumatologists, although a larger proportion of pediatric rheumatologists are salaried11. Administrative health databases also fail to record nonphysician health services, such as private physiotherapy, occupational therapy, and complementary medicine treatments; this is true not only for Canadian administrative databases but for those of other countries as well.
Many physician billing claims databases have only a single diagnosis field; as a result, individuals with multiple comorbid conditions may be less likely to have an arthritis or rheumatic disease diagnosis recorded in the database than individuals diagnosed with only one or a few chronic conditions.
Category II — methodological issues for pharmacoepidemiologic studies
Observational studies provide important information about disease burden12, care patterns13,14, and comparative drug effects15. Administrative data are particularly valuable for pharmacoepidemiologic studies (studies of drug effects). However, observational data can be prone to certain biases16. The best-practice statements in this category focused largely on the conduct of pharmacoepidemiologic studies.
4. Authors should address confounding by indication, use appropriate methods to avoid or reduce this bias, and estimate and discuss the effect of potential residual confounding. In observational studies of rheumatic diseases, confounding by indication is a challenge because many outcomes of interest, as well as the decision to prescribe the drug under study, are associated with disease severity. For example, infection risk may be heightened by concomitant immunosuppressants (e.g., glucocorticoids), and greater disease severity itself may also heighten infection risk. Several approaches to statistically adjust for confounding by indication have been proposed, although no single method is clearly superior. These options include using (1) clinical measures or proxy measures of disease severity; (2) propensity scores; or (3) instrumental variables.
In some administrative health databases, laboratory results (e.g., sedimentation rate) could theoretically be used as markers of disease activity, but they are currently unavailable in Canadian administrative databases. Several proxy markers of disease severity have been used, including occurrence or frequency of rheumatologist visits, adjunct drug use (e.g., corticosteroids or nonsteroidal antiinflammatory drugs), or joint surgery17,18. Extraarticular manifestations of rheumatoid arthritis (RA) have also been used, but the accuracy and completeness of recording of these manifestations in administrative data is unknown. A systematic review of disease severity indices identified only 2 claims-based indices of RA severity, each having some limitations19. The authors concluded that further research is needed to define, develop, and validate widely applicable measures of disease severity for studies using administrative data.
Propensity scores quantify the probability of a subject being assigned to a treatment, given known covariates. They can be used in adjustment or matching to reduce the confounding effects of covariates that are related both to drug exposure and to outcomes. Adjusting for, or matching on, the propensity score may be an efficient approach20, particularly when outcomes are rare or when many unbalanced covariates must be handled and estimates of their individual effects on the outcome are not of interest21.
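As a minimal sketch of the propensity score approach, the following simulation estimates a propensity score by logistic regression on a single measured severity proxy and applies inverse-probability-of-treatment weighting (one common use, alongside adjustment and matching). The variable names and simulated data are hypothetical; a real analysis would include many covariates and diagnostics of covariate balance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
severity = rng.normal(size=n)                                   # measured proxy for disease severity
treated = rng.binomial(1, 1 / (1 + np.exp(-severity)))          # more severe patients are treated more often
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.8 * severity))))  # outcome driven by severity, not the drug

X = severity.reshape(-1, 1)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]  # estimated propensity score

# Inverse-probability-of-treatment weights balance the measured covariate across groups.
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

crude = outcome[treated == 1].mean() - outcome[treated == 0].mean()
weighted = (np.average(outcome[treated == 1], weights=w[treated == 1])
            - np.average(outcome[treated == 0], weights=w[treated == 0]))
print(f"crude risk difference: {crude:.3f}, IPTW-adjusted: {weighted:.3f}")  # adjusted estimate near 0
```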
Alternatively, an instrumental variable approach incorporates into the analysis a measured variable that is strongly associated with the drug exposure but is related to the outcome only through that exposure and is independent of unmeasured confounders (e.g., disease severity). Under ideal conditions, analysis based on an instrumental variable accounts for the nonrandom allocation of drug therapies. A classic example comes from a study of the effects of smoking on physical function, in which the average cigarette price in the state where the subject resided (the measured variable) was used as an instrument for the imprecisely measured exposure (cigarettes actually smoked per day)22. Unfortunately, useful instrumental variables are often difficult to identify. Further, if the correlation between the instrumental variable and the exposure is weak, instrumental variable analyses can produce large standard errors and biased estimates of the association between exposure and outcome23.
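A conceptual two-stage least-squares sketch on simulated data (hypothetical variables; not a recommended analysis): stage 1 regresses the exposure on the instrument, and stage 2 regresses the outcome on the predicted exposure. Real analyses would use dedicated instrumental variable software and report instrument strength.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
confounder = rng.normal(size=n)      # unmeasured (e.g., disease severity)
instrument = rng.normal(size=n)      # e.g., a price- or preference-type instrument
exposure = 1.0 * instrument + 1.0 * confounder + rng.normal(size=n)
outcome = 0.5 * exposure + 1.5 * confounder + rng.normal(size=n)   # true exposure effect = 0.5

# Naive regression of outcome on exposure is confounded (slope biased away from 0.5).
naive = np.polyfit(exposure, outcome, 1)[0]

# Stage 1: predict exposure from the instrument; Stage 2: regress outcome on predicted exposure.
stage1 = np.polyfit(instrument, exposure, 1)
exposure_hat = np.polyval(stage1, instrument)
iv_estimate = np.polyfit(exposure_hat, outcome, 1)[0]

print(f"naive: {naive:.2f}, two-stage (IV): {iv_estimate:.2f}")   # IV estimate close to 0.5
```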
Finally, another set of techniques addresses selection bias: the family of econometric corrections, such as the sample selection models developed by James Heckman.
5. Authors should use appropriate methods to address other common sources of confounding and bias, such as channeling, immortal time, and depletion of susceptible subjects. Channeling occurs when drugs are preferentially prescribed to patients with different baseline characteristics that place them at differential risk for the outcome of interest. When a drug is believed to be associated with a given complication, patients at high risk for that outcome may be preferentially given an alternative drug. This can lead to observational data suggesting that the alternative drug places patients at a higher risk of the complication, or that the drug of interest places them at lower risk, when in fact differences are due to their baseline profile and not the drugs they received.
Immortal time bias may arise if periods of drug exposure are misclassified, such as when exposure is classified as “ever-never” exposed, yet the person-time contributing to the analysis includes periods of both exposure and nonexposure24. This can produce a spuriously protective rate ratio (or bias the rate ratio toward the null when the drug effect is harmful). A properly specified time-dependent Cox proportional hazards model (or a similar approach that classifies person-time from cohort entry until the first prescription as unexposed, and the subsequent person-time as exposed) can help avoid this bias.
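The following sketch shows the person-time classification described above: each subject's followup is split at the date of first prescription, with time before that date counted as unexposed and time afterward as exposed. Column names and dates are hypothetical, and the resulting interval-format data could then feed a time-dependent survival model.

```python
import pandas as pd

cohort = pd.DataFrame({
    "id": [1, 2],
    "entry": pd.to_datetime(["2008-01-01", "2008-01-01"]),
    "first_rx": pd.to_datetime(["2008-07-01", pd.NaT]),   # subject 2 is never exposed
    "end": pd.to_datetime(["2009-01-01", "2008-10-01"]),
    "event": [1, 0],
})

rows = []
for r in cohort.itertuples():
    if pd.notna(r.first_rx) and r.first_rx > r.entry:
        # Unexposed person-time from entry to first prescription (no event attributed here).
        rows.append(dict(id=r.id, start=r.entry, stop=r.first_rx, exposed=0, event=0))
        # Exposed person-time from first prescription to end of followup.
        rows.append(dict(id=r.id, start=r.first_rx, stop=r.end, exposed=1, event=r.event))
    else:
        rows.append(dict(id=r.id, start=r.entry, stop=r.end, exposed=0, event=r.event))

person_time = pd.DataFrame(rows)
print(person_time)  # "ever-never" coding would wrongly label subject 1's pre-prescription time as exposed
```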
Depletion of subjects most susceptible to an outcome can occur when events tend to arise early on, primarily in those most vulnerable (for whatever reason) to the outcome. Dixon, et al provided an example of this from their rheumatology biologics registry25. They note that when patients at high inherent risk have an event and then stop therapy, the cohort taking the drug then becomes depleted of those high-risk patients and the cohort becomes of increasingly lower risk. Left censorship can cause problems if outcome analyses omit the early periods at risk. In either case, one might not in the end detect a true drug effect, because early events were missed and/or people most susceptible to the outcome were excluded. It may be useful to consult the general approaches outlined in a European League Against Rheumatism (EULAR) points paper26 (addressing data analyses based on biologics registries), which states: “Along with overall relative risk measures, time-dependent measures of incidence and relative risks need to be provided. These should be presented in a coordinated way with changes to cohort numbers so that the reader can easily identify the numerator and denominator during each specified time period.” Restricting analyses to “new users” is one design approach to deal with some of these issues.
6. Authors should clearly define and justify the risk window related to the exposure, based on biologic plausibility, and should perform analyses to evaluate the sensitivity of the results to the choice of risk window. A risk window is the period following drug exposure during which an event is ascribed to that exposure. A lag period is sometimes used to exclude an early interval (e.g., a specified duration immediately after exposure initiation) from the risk window; this is relevant when a drug’s onset of action is slow, such that it could not plausibly cause the outcome during that early period. The EULAR position paper recommends, first, that authors define and justify the risk window and, whenever possible, categorize the exposure as (1) taking drug; (2) taking drug plus lag window; or (3) ever treated. Second, the use of multiple risk attribution models and lag windows is encouraged where appropriate, but should be accompanied by a description of the numbers and relative risks obtained under each model. Third, if the association under study has previously been published, authors should consider using a similar analysis model and definitions to facilitate comparison and reproducibility.
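As an illustration of risk attribution, the sketch below flags whether an event falls within a risk window that begins after a lag period following exposure initiation and extends a fixed time beyond the end of exposure. The 30-day lag and 90-day extension are arbitrary placeholders, not recommended values.

```python
from datetime import date, timedelta

def in_risk_window(event_date, exposure_start, exposure_end,
                   lag_days=30, extension_days=90):
    """Attribute an event to an exposure if it occurs after a lag period
    following exposure start and before a fixed extension beyond exposure end.
    lag_days and extension_days are illustrative placeholders only."""
    window_start = exposure_start + timedelta(days=lag_days)
    window_end = exposure_end + timedelta(days=extension_days)
    return window_start <= event_date <= window_end

# An event 10 days after starting the drug falls inside the lag period and is
# therefore not attributed to the exposure under this model; one at 2 months is.
print(in_risk_window(date(2011, 1, 11), date(2011, 1, 1), date(2011, 6, 1)))  # False
print(in_risk_window(date(2011, 3, 1), date(2011, 1, 1), date(2011, 6, 1)))   # True
```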
7. Authors must acknowledge limitations of their administrative data, such as potentially incomplete and/or inaccurate capture of health services. Implications for design, analysis, and results should be discussed. In some countries, including Canada, pharmacy databases maintain information on beneficiaries of provincial/territorial drug plans, which generally cover people who are elderly or have low incomes. A few provinces, such as British Columbia and Manitoba, collect information on all prescription drugs, including those paid for through private insurance or by the patient. Medications received in hospital are unavailable in some databases, including most of Canada’s provincial databases.
Pharmacy databases record prescription dispensations but not consumption. This may be problematic for research about rheumatic disease because patients may self-adjust the dosages of their prescribed medications. Consequently, duration or dose of treatment may be subject to measurement error, especially for drugs such as glucocorticoids that patients may use in highly variable ways over time.
Finally, as noted for statement 3, physician billing data may be incomplete. Physician specialty information may contain measurement error, which will affect the results of studies about specialist care. When discussing the implications of data completeness for the study results, authors might note how incomplete data may, for example, underestimate disease burden.
Category III — identification of comorbid conditions and outcomes
People with rheumatic diseases commonly have multiple comorbid conditions. In arthritis studies using administrative health databases, comorbid conditions are of interest as outcomes or as covariates for risk adjustment. The best-practice statements in this category make recommendations for defining comorbid conditions for use as either outcomes or covariates.
Working group 3 focused on the following conditions because of their high prevalence in rheumatic diseases and because they may complicate treatment: osteoporosis and fractures, cancer, infections, cardiovascular diseases, and renal disease. Because they are frequently selected as outcomes of interest, 2 cardiovascular events were the focus of systematic reviews: acute myocardial infarction (AMI) and cerebrovascular accident (CVA). The diagnosis of congestive heart failure (CHF) was also evaluated.
8. Osteoporosis diagnostic codes in administrative data should not be used alone for comorbidity adjustment or as an outcome of interest. Our systematic review of validation studies for osteoporosis diagnosis and osteoporosis-related fractures identified 11 studies, 2 dealing with osteoporosis and the remainder with osteoporotic fractures27. These studies demonstrated that the validity of the osteoporosis diagnosis was improved when at least 3 years of hospital and physician claims data were used (area under the receiver-operating characteristic curve above 0.70) and when pharmacy data were included for case ascertainment. Nonetheless, the positive predictive value (PPV) of existing algorithms to ascertain osteoporosis cases is low (i.e., below 60%).
9. Hospital discharge data, and physician and procedural data when available, can be used to identify hip fractures. Fractures that do not require hospitalization, in particular fractures of the radius/ulna and of the humerus, can be identified in physician billing data by combining diagnostic and procedural codes. Additional research is needed before recommending the use of administrative data to identify vertebral fractures.
There is good evidence to support the use of hospital data for identification of hip fractures. Diagnosis codes in physician billing claims data and procedure codes from hospital data can further improve validity of case ascertainment. Vertebral fractures are more difficult to identify, even when combining physician billing claims and procedure codes. There was some evidence to support the use of administrative data to define other fractures that do not require hospitalization (such as fractures of the radius/ulna, humerus, and potentially other sites) if physician claims and procedures data are available28,29,30.
10. When using administrative databases (exclusive of cancer registries) to define cancer outcomes, authors should choose an algorithm that has been demonstrated to have good sensitivity and excellent specificity for the cancer of interest in a comparable population. Additionally, implications of an imperfect case definition should be discussed.
The challenges of identifying cancer from administrative health databases have been discussed in depth by others31. Linkage to cancer registries can improve case recording32. The sensitivity of physician billing claims varies according to cancer type and patient characteristics32. For example, when only physician billing claims and hospital data are used to ascertain cancers, false-positive results are more likely among persons of older age (colorectal, lung, and prostate cancers), female sex (colorectal and lung), and nonwhite race (breast, colorectal, lung, and prostate). False positives can be reduced by maximizing the specificity of the cancer definition algorithm. Conversely, false negatives are more common in individuals with multiple comorbid conditions and in those with in situ (breast, colorectal, and prostate), unstaged (breast), and/or untreated cancers; they can be addressed by maximizing the algorithm’s sensitivity. These recommendations did not specifically address nonmelanoma skin cancers, which can be particularly challenging to identify, even within cancer registry data, because of underreporting.
11. When using administrative data to identify serious infections as outcomes or comorbidities, hospitalization data can be used to identify serious bacterial infections. If greater sensitivity is desired, it is recommended to use a more comprehensive definition to identify individual infections and/or a diagnostic code for infection found in any position of the claims data. Current data are not sufficient to recommend the use of administrative data to identify opportunistic infections. For infections that are reportable, such as tuberculosis and meningococcal diseases, multiple sources of data should be used, if available, to ensure greater completeness of case ascertainment.
The systematic review of validation studies for serious infections requiring hospitalization identified 8 studies33. The positive predictive value of administrative health databases varied according to the type of infection and its prevalence. Overall, hospitalization data provided acceptable levels of validity for bacterial infections, including pneumonias. Sensitivity was improved when case definitions used a broader range of diagnostic codes or combined use of multiple administrative health databases. The strategy of using diagnostic codes to screen for infections, followed by chart review to confirm infections, demonstrated the highest accuracy for identifying bacterial infections. However, this strategy may not always be practical or feasible.
In contrast, the validity of administrative health databases for opportunistic infections has received limited attention and results have been suboptimal. The group identified the need for further research. For reportable infections, such as tuberculosis and meningococcal disease, validation studies demonstrated that relying solely on 1 source of data, such as hospitalization data, led to incomplete recording of cases, and that sensitivity was improved by using multiple sources of data, including notifiable disease reports, laboratory test results, or use of specific antimicrobial medications.
12. Hospitalization data can be used to identify acute myocardial infarction (AMI) or cerebrovascular accident (CVA) as a covariate or outcome. Authors should take into consideration that hospitalization data to identify congestive heart failure (CHF) have significant limitations because of their relatively low sensitivity and specificity. When using vital statistics data, authors need to acknowledge that the accuracy of AMI as a cause of death is limited.
A total of 76 studies were included in the systematic review34. The sensitivity and specificity of hospitalization data for identifying AMI and CVA were high (above 80%) in most studies. Lower sensitivity and PPV were observed in studies using stricter reference criteria, such as the MONICA criteria for AMI35. In contrast, validation studies for CHF pointed toward less reliable ascertainment, with low sensitivity (usually below 70%), highly variable but often low PPV, and acceptable specificity (above 70%). The data therefore supported the use of hospitalization data for identifying AMI and CVA, but not CHF. The review also suggested limited accuracy of AMI as a cause of death within vital statistics data.
13. Administrative data can be used to identify kidney disease requiring dialysis. Current data do not support the use of diagnostic codes from hospitalization data to adjust for acute or chronic kidney disease as a comorbidity or outcome.
Our systematic review included 23 studies and demonstrated variable accuracy of diagnostic and procedure-based codes to identify renal disease36. Sensitivity was generally low for acute and chronic disease, except in samples with underlying coronary artery disease and for the identification of acute renal failure requiring dialysis. In contrast, specificity was consistently high. These results suggest that acute and chronic renal failure would be underestimated using hospitalization data unless dialysis is required. However, administrative health databases are likely to contain few false positives for renal failure requiring dialysis.
DISCUSSION
We developed 13 consensus statements about best practices for the use of administrative data in rheumatic disease research and surveillance. Our consensus statements address rheumatic disease case ascertainment, epidemiology methods, and the identification of comorbid conditions and outcomes. This information will be useful to a wide audience of users of administrative data, including researchers, epidemiologists, health system planners, policy analysts, and patient advocates. Our goal was to identify a core set of consensus statements that address major issues in the use of administrative health databases for research and surveillance. Alternative case definitions and methods of analysis may be adopted in research and surveillance studies; however, we hope that publication of these consensus statements will encourage researchers to apply the recommended approaches or definitions in at least 1 set of analyses, to improve consistency and allow comparisons across studies.
These consensus statements are consistent with recent guidelines published by the International Society for Pharmacoeconomics and Outcomes Research37. Those guidelines indicate that valid findings about causal therapeutic benefits can be produced from observational studies using techniques such as multivariable regression, propensity scores, instrumental variables, sensitivity analyses, and discussion of residual confounding. Some of our statements draw on existing work, such as the EULAR biologics registry taskforce paper26, which was written to address specific needs of rheumatic disease research concerning the establishment, analysis, and reporting of safety data from biologics registries. Although registries are not administrative databases, many analytical issues are common to both data sources, and our recommendations are consistent with those of the EULAR taskforce.
Our consensus statements were developed with the characteristics of Canadian administrative health databases in mind. However, most of the issues identified are relevant to administrative health databases in other countries; indeed, our reviews drew on data not only from Canada but also from the United States, Europe, and elsewhere. The importance of these issues is reflected by the International Health Data Linkage Network, which was inaugurated with the support of the Research and Development Directorate of the UK National Health Service in 2008. With membership spanning Canada, the UK, Australia, and New Zealand, the network aims to promote effective methods for the use of linked administrative health data and to foster collaboration and exchange.
Clearly, unresolved issues remain; for example, precise recommendations for rheumatic disease case definitions have yet to be specified, as have many methodological details, such as which lag windows are most appropriate for commonly used antirheumatic drugs. Further funding from the Canadian Arthritis Network is allowing us to implement an interactive Website, which could facilitate continuing discussion and updating of these “best practices” statements as more evidence becomes available. Currently, a repository of background documents related to the literature reviews and the February meeting is available at https://connect.mcgill.ca/r41824168. Some of the systematic literature review results have been presented at meetings of the Canadian Rheumatology Association and the American College of Rheumatology, and at the Canadian Arthritis Network annual scientific meetings27,33,36. The full results of the reviews are in preparation and will be reported in separate publications.
Although administrative health databases represent a rich source of data for research and disease surveillance, they do have inherent limitations. Our consensus statements were developed to raise awareness of these limitations and address them wherever possible. We anticipate that the consensus statements presented here will help those engaged in, or planning to undertake, rheumatic disease research and surveillance using administrative health databases.
Acknowledgment
We are grateful to all who attended the meeting in February 2011, as well as to Dr. John Hanly, who served as moderator. Those who attended the meeting and provided input include Christina Bancej, Public Health Agency of Canada; Cheryl Barnabe, University of Calgary; Susanne Benseler, The Hospital for Sick Children-Sick Kids/University of Toronto; Louis Bessette, Centre Hospitalier Universitaire de Québec-Université Laval; Claire Bombardier, University of Toronto; Jeffrey Curtis, University of Alabama at Birmingham; Ciaran Duffy, University of Ottawa; Steven Edworthy, University of Calgary; Brenda Elias, University of Manitoba; Deborah Levy, The Hospital for Sick Children-Sick Kids/University of Western Ontario; Louise McRae, Public Health Agency of Canada; Pat McCrea, British Columbia Ministry of Healthy Living and Sport; Peter Nestman, Dalhousie University; Glenn Robbins, PHAC; Natalie Shiff, University of Saskatchewan; and Mark Smith, Manitoba Centre for Health Policy.
APPENDIX 1: Group Participants
List of study collaborators:
M. Hudson, J. Markland, P. Docherty, M. Fritzler, N. Jones, E. Kaminska, N. Khalidi, S. Ligier, A. Masetto, J.P. Mathieu, D. Robinson, D. Smith, E. Sutton, M. Abu-Hakim, S. LeClercq.
Category 1: Case definitions.
Chair: Lisa Lix, University of Saskatchewan. Members: Siobhan O’Donnell, PHAC; Elizabeth Badley, University of Toronto; John Hanly, Dalhousie University; Gillian Hawker, University of Toronto; Jaime Henderson, Canadian Rheumatology Association; Sam Lim, Emory University; Christine Peschken, University of Manitoba; Elizabeth Stringer, Dalhousie University; Larry Svenson, Alberta Health and Wellness. Research Trainees: Jeremy Labrecque, McGill University; Jessica Widdifield, University of Toronto.
Category 2: Methods.
Chair: Sasha Bernatsky, McGill University. Members: Johan Askling, Karolinska Institutet-Sweden; Louise Bergeron, Canadian Arthritis Patient Alliance; Will Dixon, Manchester University; Jacek Kopec, University of British Columbia; Michael Paterson, Institute for Clinical Evaluative Sciences; Collette Raymond, University of Manitoba; Daniel Solomon, Harvard University; Samy Suissa, McGill University. Research Trainee: Evelyne Vinet, McGill University.
Category 3: Comorbid conditions and outcomes.
Chair: Diane Lacaille, University of British Columbia; Cochair: Antonio Avina-Zubieta, University of British Columbia. Members: Paul Fortin, University of Laval; Marie Hudson, McGill University; Sonia Jean, Institut de Sante Publique du Quebec; Elham Rahme, McGill University; Daniel Solomon, Harvard University; Gordon Whitehead, Consumer Advisory Board, Arthritis Research Centre of Canada. Research Trainees: Claire Barber, University of Toronto; Bindee Kuriya, University of Toronto; Jeremy Labrecque, McGill University; Aaron Leong, McGill University; Caroline Sirois, McGill University. Research support for all categories was provided by Jennifer Lee, McGill University. Dr. John Hanly served as moderator at the February 2011 meeting.
Footnotes
- Funded by the Canadian Institutes of Health Research and the Canadian Arthritis Network, with in-kind support from the Public Health Agency of Canada.
- Accepted for publication September 5, 2012.