Audit and feedback using the brief Decision Support Analysis Tool (DSAT-10) to evaluate nurse–standardized patient encounters

https://doi.org/10.1016/j.pec.2008.07.016

Abstract

Objective

To evaluate the brief Decision Support Analysis Tool (DSAT-10) for auditing the quality of nurse–standardized patient encounters, structuring feedback for nurses, and testing instrument reliability.

Methods

A systematic process was used to develop standardized patient scenarios, pilot-test scenarios, calibrate DSAT-10 coders, analyze taped telephone encounters using DSAT-10, and provide feedback. Inter-rater reliability was calculated using coder agreement, kappa, and intra-class correlation coefficients.
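The chance-corrected agreement statistic used here can be illustrated with a minimal sketch. This is not the study's actual analysis code; the ratings below are hypothetical, and the functions simply implement percent agreement and Cohen's kappa for two coders rating the same set of items.

```python
# Illustrative sketch only: inter-rater agreement statistics for two
# hypothetical DSAT-10 coders rating the same encounters.
from collections import Counter


def percent_agreement(coder_a, coder_b):
    """Proportion of items on which the two coders gave the same rating."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)


def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # Expected agreement if both coders rated independently at their
    # observed marginal rates.
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)


# Hypothetical ratings (1 = skill observed, 0 = not observed) on ten items.
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(percent_agreement(a, b))          # 0.8
print(round(cohens_kappa(a, b), 2))     # 0.52
```

Note that kappa (0.52 here) is lower than raw agreement (0.80) because it discounts matches expected by chance alone, which is why reliability studies such as this one report both.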

Results

Six scenarios portrayed patients’ decisional uncertainty arising from one of three sources: pressure from others (n = 2), unclear values (n = 2), or inadequate information (n = 2). Scenarios were easy to use over the telephone, produced realistic role performance, and were practical for audio-recording interactions. DSAT-10 analysis of 76 nurse–standardized patient encounters revealed nurses’ strengths (e.g., information provision) and limitations (e.g., lack of discussion of values and/or support needs). Scores discriminated between trained and untrained nurses. The kappa coefficient over all items was 0.55 (95% CI: 0.49, 0.61), with higher agreement for encounters involving trained nurses (0.62; 95% CI: 0.43, 0.80).

Conclusion

Auditing nurse–standardized patient encounters using DSAT-10 and providing feedback to nurses was feasible. Although DSAT-10 items had adequate inter-rater reliability and discriminated between trained/untrained nurses, some items were problematic.

Practice implications

Providing feedback on nurse encounters with standardized patients experiencing uncertainty has the potential to enhance nurses’ decision support skills.

Introduction

Patients experiencing uncertainty about health decisions receive variable quality of decision support [1], [2], [3], [4], [5], [6]. Decision support is a process of assessing patients’ decision-making needs, providing support tailored to those needs, and evaluating progress in decision-making [7], [8]. The goal is to reach a quality decision that is informed by the latest evidence and based on patients’ informed values [9], [10], [11]. Common modifiable factors interfering with patients’ decision-making include inadequate knowledge of the options and their outcomes, unclear values regarding those outcomes, and feeling pressured or unsupported in the decision-making process. Previous studies indicate that health professionals’ decision support focuses primarily on providing information without addressing patients’ other decision-making needs [1], [2], [3], [4], [5], [6].

Effective interventions to change health professionals’ practice include reminders, dissemination of educational materials, audit and feedback, and multiple interventions with educational outreach [12], [13]. According to the Cochrane Review, audit and feedback is defined as a summary of healthcare clinical performance over a specified period of time with or without recommendations for clinical action [14]. Evidence from 118 randomized trials indicates that audit and feedback used alone or in combination with other interventions results in a 10% absolute improvement in care [14]. Audits may be conducted on patients’ health records, computerized health information databases, or observations from clinical encounters.

Three known instruments assess the quality of patient–practitioner decision-making encounters: the OPTION scale measures the extent and quality of shared decision-making [1], the Decision Support Analysis Tool (DSAT) measures practitioners’ use of decision support and communication skills [2], and the Rochester Participatory Decision-Making scale (RPAD) measures patient–physician collaborative decision-making [6]. An important difference among these instruments is that the DSAT can also be used to evaluate the quality of decision support provided to patients by coaches whose role is to prepare them for decision-making with their health care provider. In a previous study, the DSAT was validated, found to discriminate between different decision support interventions provided to patients (e.g., information brochure versus patient decision aid), and correlated with measures of patient and physician satisfaction, as well as reduction in patients’ decisional conflict [2]. However, evaluation of the DSAT instrument revealed insufficient responses to some items (e.g., facilitate future learning), a lengthy coding process, and validation limited to physician–patient encounters in which menopausal women experienced varying degrees of uncertainty about taking hormone replacement therapy.

Compared to real patients, standardized patients can facilitate a more consistent experience across healthcare professionals [15]. These actors, trained to portray the emotional, symptomatic, and physical characteristics of patient case scenarios, have been used for over 40 years in the evaluation of medical students’ clinical examinations and communication skills [16], [17]. More recently, standardized patients have been used for evaluating telephone consultations [18] and shared decision-making [19]. Compared to self-report or chart audit, standardized patients have yielded higher inter-rater agreement and improved measurement of practitioner performance [3], [16], [20], [21]. Performances within and among standardized patients remain consistent across repeat presentations, including intervals as long as 3 months between performances [22], [23].

The objectives of this study were (a) to evaluate the feasibility of using standardized patients to enhance practitioners’ telephone-based skills in supporting patients’ decision-making and (b) to develop and establish inter-rater reliability for the brief Decision Support Analysis Tool (DSAT-10).


Methods

We used a systematic process to develop standardized patient scenarios, pilot-test scenarios, calibrate DSAT-10 coders, analyze audiotaped encounters using DSAT-10, and provide feedback. The current study is based on data from 58 audiotaped encounters between untrained nurses and standardized patients and 18 encounters between trained nurses and standardized patients. These data were taken from a randomized controlled trial to evaluate the effect of an implementation intervention on the quality

Results

We analyzed 76 audio-recordings of 38 nurse–standardized patient encounters taken at baseline (untrained) and within 1 month of the training intervention (18 trained, 20 untrained). One recording was lost due to equipment failure. Telephone-based encounters with standardized patients using case scenarios were easy to complete, the patients were realistic in their role performance, and the telephone encounters were feasible to analyze using audio-recordings.

The DSAT-10 highlighted practitioners’

Discussion

This was the first evaluation of the brief DSAT-10 and its use to analyze telephone encounters with non-physician health care professionals focused on preparing standardized patients for discussing decisions with their primary practitioner. Standardized patients were able to provide a consistent level of uncertainty about a decision across nurses. The revised tool had adequate inter-rater reliability and discriminated between nurses trained and untrained in providing patient decision support.

Conflicts of interest

The authors declare that they have no competing interests.

Acknowledgements

We thank Sara Khangura, Stephen Kearing, Laura Rapp, and Andrea Powers for analyzing the encounters using the DSAT-10. We would also like to recognize the important contributions of the six standardized patients: Edward Booth, Michele Fansett, Gerry Holt, Shirley Jacobs, Melody Mortensen, and Paulette Panesh.

Funding: This study was supported by the Canadian Institutes for Health Research (CIHR Grant #MT-15580). DS held a CIHR Doctoral Studies Award and scholarship from the University of Ottawa.

References (35)

  • A. Ratliff et al., What is a good decision? Effect Clin Pract (1999)
  • G. Elwyn et al., Developing a quality criteria framework for patient decision aids: online international Delphi consensus process, Br Med J (2006)
  • K.R. Sepucha et al., Policy support for patient-centered care: the need for measurable improvements in decision quality, Health Affair (2004)
  • J.M. Grimshaw et al., Effectiveness and efficiency of guideline dissemination and implementation strategies, Health Technol Assess (2004)
  • M. Wensing et al., Implementing guidelines and innovations in general practice: which interventions are effective? Br J Gen Pract (1998)
  • G. Jamtvedt et al., Audit and feedback: effects on professional practice and healthcare outcomes, Cochrane Database Syst Rev (2006)
  • J.A. Colliver et al., Assessing clinical performance with standardized patients, J Am Med Assoc (1997)