ORIGINAL RESEARCH ARTICLE

Using the SNAPPS model to develop student physical therapist decision-making skills during new patient encounters in the outpatient clinic: a pilot study

Patti J. Berg-Poppe*, Matt Dewald, Becca Jordre, Joy R. Karges-Brown and Adam Ladwig

Department of Physical Therapy, University of South Dakota, Vermillion, SD, USA

Abstract

Rationale: The SNAPPS (summarize, narrow, analyze, probe, plan, select) model is a six-step teaching tool that facilitates decision-making in clinic environments. The tool promotes active communication between students and clinical instructors (CIs) and positions the student as the lead in the learning scenario. The current study employed the SNAPPS model with student physical therapists. The purpose of the study was to gauge changes in perceptions of verbal ability, decision-making, and confidence levels following new patient evaluations where the SNAPPS model was utilized.

Methods: Participating student and CI partners received training to learn the SNAPPS model with fidelity. Log worksheets guided students through the SNAPPS steps. After new patient encounters, student and CI partners rated student verbal skills, decision-making, and confidence levels using mirrored statements. Representative early, middle, and late week ratings were compared for change.

Results: Six of forty-eight (12.5%) eligible students participated. Student and CI assessments were not significantly different, indicating reliable student self-assessment. Improvements were noted in students’ (1) skill in providing a verbal rationale, (2) ability to generate thoughtful and relevant learning prompts, (3) confidence in diagnosing pathology and impairment, and (4) confidence in selecting an appropriate intervention.

Clinical relevance: The SNAPPS model is a clinical education tool that shows promise for improving thought process verbalization and confidence levels for the student seeing new patients in an outpatient setting. This active learning experience can promote accountability for learning and enhance student verbal and analytical skills.

Keywords: clinical education; learning outcomes; communication skills; decision-making; patient management; teaching and learning

 

Citation: Journal of Clinical Education in Physical Therapy 2022, 4: 8093 - http://dx.doi.org/10.52214/jcept.v4.8093

Copyright: © 2022 Patti J. Berg-Poppe et al.
This is an Open Access article distributed under the terms of a Creative Commons Attribution-NonCommercial-NoDerivatives License (https://creativecommons.org/licenses/by-nc-nd/4.0/).

Received: 6 April 2021; Revised: 5 August 2021; Accepted: 2 October 2021; Published: 15 March 2022

Competing interests and funding: The authors (Berg-Poppe, Dewald, Jordre, Karges-Brown, and Ladwig) have no conflicts of interest to declare. There are no funding sources to report.

*Patti J. Berg-Poppe, Department of Physical Therapy, University of South Dakota, 414 East Clark, Vermillion, SD 57069, USA. Email: Patti.Berg@usd.edu

 

Clinical reasoning is an important and fundamental skill set that physical therapists (PTs) refine and sharpen over time through encounters with many patient presentations. During the didactic, theoretical phase of the student PT's curricular plan of study, structured and explicit problems are presented and solved in the classroom, often through guided discussion. In contrast, during the clinical phase, inexperienced students are asked to draw from this didactic knowledge base: linking potentially disparate concepts, searching for information that makes sense of observations, behaviors, and objective measures, and drawing clinical conclusions that undergird a patient's plan of care.

During the less structured clinical phase, a framework may benefit less experienced student PTs as they refine their clinical reasoning skills, develop a context for decisions, scaffold feedback to develop expertise, and construct schemas for recognizing patterns of behavior and their responsive interventions.1 Clinical reasoning is improved with strategies such as script activation,1,2 scaffolding through active learning,1 reflection,1,3 immediate feedback,3 and use of semantic qualifiers1 and can falter for the clinical student with poor foundational knowledge, data collection and processing skills, or metacognitive awareness.1

Improving problem-solving with explicit processes

Metacognition serves an important role in regulating thinking. Learners with strong metacognitive skills are less likely to make cognitive errors.4 Clinical students with good metacognitive skills are aware of their cognitive biases and do not prematurely anchor their impressions to a diagnosis without careful consideration. It has long been understood that verbalization is valuable in moving implicit information into explicit learning5 and enhances problem-solving during early learning.6 The metacognitive processes that underlie verbalization are most likely responsible for gains in problem-solving, decision-making, and clinical reasoning.7

A framework for verbalizing clinical reasoning

The SNAPPS (summarize, narrow, analyze, probe, plan, select) model, a learner-centered teaching tool, employs a series of cognitive processes to aid the clinical student in clinical decision-making and to enhance verbal and analytical skills. The model guides the clinical student through six steps: summarize the history and findings, narrow the differential, analyze the differential, probe the preceptor, plan management for the patient's medical issues, and select a case-related issue for self-directed learning. The student uses these six steps to organize thinking and verbalize the thought process to the preceptor (i.e. clinical instructor [CI]).

An effective CI is intentional with instructional methods. Successful CIs ask meaningful questions, offer intellectually safe spaces for learning, and empower student growth. With the SNAPPS framework, active student participation is central to the success of the model, which draws from the Socratic method that uses more purposeful, guided queries by the instructor to develop understanding.8 Clinical students take a communication lead, and, rather than simply reporting facts and information, they are coached to communicate thoughts, questions, and uncertainties to enhance their development through collaborative communication with their CI. To balance the student’s active ownership over learning during this educational exchange of thoughts and ideas, the CI assumes the role of a facilitator.

The SNAPPS model has been used with positive feedback from third-year medical students during an ambulatory medicine rotation9 and with high satisfaction among child and adolescent and general psychiatry residents and fellows.10 Wolpaw and colleagues9 affirmed the value of this model in an outpatient (OP) medical clinic for promoting a learning dialogue between clinical students and CIs that enhances higher-order thinking and improves active learning. The current study adopted this framework to support student PTs seeing new referrals in an OP setting, where differential diagnostic competencies are frequently called upon.

Experience plays an important role in recognizing clinical patterns that support schema building and guide the development of patient-specific treatment plans.11 The SNAPPS steps prompt the student learner to verbalize thoughts and observations and to engage the more experienced CI in a discussion that also reveals the CI's own reasoning strategies. Experienced clinicians often approach clinical problem solving using recognition-primed decision-making, based on countless prior encounters.12 This more inductive approach to arriving at a diagnosis and treatment plan may seem elusive to the novice clinical student, who is more likely to rely on a hypothetico-deductive approach, narrowing conclusions through data collection and interpretation.12 The SNAPPS model brings student and CI processes to light and provides a framework for developing the verbal exchange. Furthermore, the model requires initiative from the student that translates to accountability in learning during the clinical phase of programming.

Purpose

The study promoted the use of an explicit framework, SNAPPS, to develop verbal and analytical skills and promote improved clinical reasoning in student PTs situated in clinical education placement. The purpose of this study was to investigate changes in perceptions of verbal ability, decision-making, and confidence levels after new patient evaluations when utilizing this framework.

Methods

The SNAPPS model was used to devise and pilot student and CI assessment forms, as well as a log worksheet to guide students through the SNAPPS process. Members of two cohorts from a Midwestern US university completing an OP or 'rural general' clinical rotation with an OP component over the course of five semesters were invited to participate. Invited students were completing either a 6-week integrated clinical education (ICE) experience (situated in the middle of the program of study's didactic curriculum) or an 8-week terminal clinical education experience (which occurred after completion of didactic coursework). Prior to implementation, the study was approved by the investigating university's committee on research ethics. An invitation to participate detailed IRB consent requirements; for this exempt study, mutual student-CI implied consent was assumed with the return of survey items.

Investigators provided in-services for interested student PTs and CIs prior to implementation during the clinical experience. These in-services were provided in either presentation or print material format, depending on the availability of CIs. Presentation sessions familiarized CIs and student PTs with the SNAPPS model, introduced the study's aims, shared worksheets and assessment tools, and answered related questions. Investigators reached out to interested CIs during a state-level professional organization meeting and provided the information to students during a special session delivered prior to students' clinical education experiences. Print materials included a copy of the Microsoft PowerPoint presentation with related slide deck notes, SNAPPS worksheets and assessment documents, and investigator contact information to support CI questions. Eligible student PTs with an interest in participating were asked to provide these materials to their CI and review the SNAPPS expectations together.

The in-service presentation and print materials emphasized the importance of student learners using verbal skills to articulate the six-step case presentation and elucidate their thought process for the CI. The student PT was then instructed to reflect on the individual case and prepare related education, stemming from a follow-up question, to share with the CI. As facilitator, the CI was instructed to listen to the case presentation and respond to the student's uncertainties. The CI was instructed to act as a coach, guiding student information processing, encouraging the student PT to take a lead role in the learning process, and scaffolding clinical reasoning growth.

Paired student-CI participants were asked to use the SNAPPS model consistently for all new patient evaluations and to use the documentation forms for, at minimum, 1–2 new patient evaluations during at least early, middle, and late week clinical timelines. Student PTs were encouraged to step out of the treatment space to consult with the CI immediately following the examination, at which time the student PTs summarized findings, articulated a narrowed differential, and analyzed the differential for the CI. During the consultation, the student PTs probed the CI for any missed tests and measures or differentials that might not have been considered. When the most likely differential was correctly identified, the student PTs followed through by describing the proposed intervention plan to the CI and, with the CI’s approval, returned to the treatment space to deliver the patient plan of care. Participating CIs were also encouraged to verbalize their own thought processes for student PTs, in order to assist those who were struggling with the selection of tests and measures or differential diagnoses. After the treatment session, the student PT led a debriefing conversation with the CI (e.g. reflection-on-action session). The student PT closed the debrief with a relevant question for the CI and was prompted to ask for feedback. Lastly, the student PT was instructed to select a case-specific question, with guidance from the student log worksheet, to research and report back to his/her CI the following day.

Participating student-CI pairs were provided with written instructions, student log worksheets (Appendix A), and self (for student; Appendix B) and student (for CI; Appendix C) assessments. Student PTs were asked to use the log worksheet to facilitate the use of the SNAPPS framework. During an early week (ICE weeks 1–2; terminal weeks 2–3), middle week (ICE weeks 3–4; terminal weeks 3–5), and late week (ICE weeks 5–6; terminal weeks 6–8), pairs were asked to choose 1–2 of the week’s new patients to also complete self and student assessments after the SNAPPS process was complete. Case pairs used the same patient identifier code to link the paired assessments. The student PT completed the structured learning activity by developing a specific question and reporting the results of the student PT’s research to the CI the following morning. Case pairs were neither encouraged nor prohibited from sharing the outcomes of their assessment forms with each other. With these instructions, researchers anticipated between three and six assessment data points at three different times per paired student-CI for analysis.

Instruments

Student log worksheet

This worksheet (Appendix A) was generated to prompt student PTs through the step-by-step SNAPPS process. Student PTs were encouraged to use the worksheet as an aid for their student-led interactions with their CI. They were also encouraged to move through the SNAPPS process with less reliance on the log worksheet for steps 1 through 5 as clinical weeks advanced. The final worksheet activity addressed the last step in the SNAPPS process: 'select a case-related issue for self-directed learning'.

Student self-assessment and CI assessment of the student

Self (Appendix B) and student (Appendix C) assessments were developed to capture ratings for the student PT’s ability to summarize history and findings (related Items 4 and 5), narrow the differential (related Items 1, 2, and 8), analyze the differential (Item 3), probe the preceptor (Item 6), and articulate the rationale for the treatment plan (Items 7 and 9). The final assessment items (Items 8 and 9) surveyed the student PT’s confidence in selecting the appropriate medical diagnosis and physical therapy impairment as well as intervention specific to diagnosis and impairment.

Item statements were inspired by the SNAPPS model and guided by the patient–client model framework adopted by PT practice. The developed items considered, as well, the end goal of moving student PTs toward entry-level competencies with PT Clinical Performance Instrument (CPI) criteria13 (e.g. ‘Participates in self-assessment to improve clinical and professional performance’ [CPI 6]; ‘Performs a physical therapy patient examination using evidenced-based tests and measures’ [CPI 9]; ‘Evaluates data from the patient examination (history, systems review, and tests and measures) to make clinical judgments’ [CPI 10]; ‘Determines a diagnosis and prognosis that guides future patient management’ [CPI 11]; ‘Establishes a physical therapy plan of care that is safe, effective, patient-centered, and evidence-based’ [CPI 12]).

A single author drafted the nine item statements and their five-point descriptors. A team of eight additional faculty members from the sponsoring institution provided language and content suggestions, and the assessments were further refined based on these recommendations. The five-point Likert scale used skill or confidence descriptors specific to each item, reflecting an ordinal scale of 5 = Excellent, 4 = Good, 3 = Average, 2 = Fair, and 1 = Poor. A 'not applicable' option was made available, as well. Student PT self-assessment and CI assessment-of-student items mirrored one another by item number.

Data analysis

The data set was reduced through random selection of a single early, mid, and late clinical week data point for each paired student-CI case. The data were graphed in Microsoft Excel (2019) to compare early week versus late week ratings (Fig. 1).

Fig. 1. Comparison of individual student ratings at early clinical weeks and late clinical weeks for each self-assessment item. Case series student PTs A and B were early program students participating in an integrated clinical education experience (ICE), while case student PTs C through F were placed in a terminal clinical experience following the conclusion of all didactic work.

Descriptive statistics were used to depict item means and standard deviations. Wilcoxon signed-rank tests were used to compare student ratings against those of CIs, and early clinical week student PT ratings against late week ratings. SPSS v27 was used for statistical analysis (IBM Corp. Released 2020. IBM SPSS Statistics for Windows, Version 27.0. Armonk, NY: IBM Corp). Cohen's d was used to calculate effect sizes.
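For readers wishing to reproduce this style of analysis outside SPSS, the Python sketch below applies both statistics to one item's paired ratings. This is not the authors' code: the rating values are hypothetical, and because the article does not state which Cohen's d formulation was used, a pooled-standard-deviation variant is assumed here for illustration.

import numpy as np
from scipy.stats import wilcoxon

# Hypothetical early- and late-week self-assessment ratings for one item,
# one value per participating student-CI pair (the study analyzed n = 7 pairs).
early = np.array([3, 3, 4, 3, 4, 3, 4])
late = np.array([4, 5, 5, 4, 5, 4, 5])

# Wilcoxon signed-rank test: a nonparametric paired comparison suited to
# ordinal Likert ratings.
stat, p = wilcoxon(early, late)

# Cohen's d using a pooled standard deviation (an assumption; the article
# does not specify the formulation used).
pooled_sd = np.sqrt((early.std(ddof=1) ** 2 + late.std(ddof=1) ** 2) / 2)
d = (late.mean() - early.mean()) / pooled_sd

print(f"Wilcoxon statistic = {stat:.1f}, P = {p:.3f}, Cohen's d = {d:.2f}")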

Results

Forty-eight student PTs from two cohorts were situated in clinical experiences during the five-semester period of interest. A total of 120 clinical experiences were suitable for SNAPPS use during the five-semester period (Table 1). The model was implemented with fidelity by six student PTs completing seven affiliations (one student represented twice) in OP or rural general settings, yielding a with-fidelity participation rate of 5.8% of all possible clinical experiences and representing 12.5% of all students completing eligible experiences. While student PTs situated in 10 of the eligible 120 experiences agreed to participate, one student PT used the framework during weeks 6 and 7 only, another during weeks 5, 6, and 8 only, and a third during weeks 1 through 4 only; these cases were eliminated for lack of representative week data. A single representative early, mid, and late assessment was randomly selected for each participating student-CI pair for analysis. Seven paired student-CI cases were included in the study (Table 2). Two of these cases were ICE clinical experiences, while the remaining five cases were terminal experiences. Independent student PT responses to each item at early and late clinical weeks are depicted by case in Fig. 1.

Table 1. Clinical education (CE) and SNAPPS participation details
Cohort | Clinical experience number (five-CE sequence) | Number of OP or rural general affiliations appropriate for SNAPPS | Student-CI partners participating in SNAPPS | Student-CI partners participating in SNAPPS with study fidelity
Cohort A (N = 22) | 2 | 17 | 2 (11.8%) | 1 (5.9%)
Cohort A | 3 | 15 | 1 (7.1%) | 0 (0.0%)
Cohort A | 4 | 17 | 0 (0.0%) | 0 (0.0%)
Cohort A | 5 | 15 | 1 (7.1%) | 0 (0.0%)
Cohort B (N = 26) | 1 (ICE) | 21 | 2 (9.5%) | 2 (9.5%)
Cohort B | 2 | 23 | 3a (13.0%) | 3a (13.0%)
Cohort B | 3 | 12 | 1a (8.3%) | 1a (8.3%)
Total | | 120 | 10a (8.3%) | 7a (5.8%)
aIncludes a student participating in SNAPPS twice.
Table 2. Clinical education (CE) profiles, by case
Case identification | Clinical education experience number (five-CE sequence) | Integrateda (ICE) vs. terminal CE experience | Early week number | Middle week number | Late week number | First outpatient orthopedic or rural general CE experience?
A | 1 | Integrateda | 3 | 4 | 6 | Yes
B | 1 | Integrateda | 3 | 4 | 6 | Yes
C.1 | 2 | Terminal | 3 | 4 | 8 | No
C.2 | 3 | Terminal | 1 | 3 | 6 | No
D | 2 | Terminal | 2 | 5 | 7 | No
E | 2 | Terminal | 1 | 4 | 7 | No
F | 2 | Terminal | 1 | 5 | 8 | No
aPlacement occurring before didactic programming is complete.

Wilcoxon signed-rank tests comparing student PT and CI assessments at the early, middle, and late sample captures showed no significant difference, with two exceptions, indicating that student PT ratings were reliable when compared against their CI evaluators. Student PT and CI ratings differed significantly at late week for Item 4 (choose and plan for appropriate and confirming tests and measures) and at mid-week for Item 5 (generate a problem list in agreement with the CI's list), with CIs rating both items higher than students (P = 0.014 and P = 0.046, respectively).

Improvements over time

A comparison of student PT assessments for early versus late clinical weeks showed improved ratings across all items (Table 3). These improvements reached significance for Items 2 (verbally explain why a chosen diagnosis was valid), 6 (generate relevant and thoughtful questions to prompt discussion), 8 (confidence in one's ability to appropriately diagnose pathology and impairment), and 9 (confidence in one's ability to select appropriate interventions). While five of nine items did not reach significance, effect sizes were large (defined as Cohen's d ≥ 0.8) for all but two of the nine items; the two exceptions (Item 3, verbally explain why other diagnoses were ruled out, and Item 4, choose and plan for appropriate and confirming tests and measures) demonstrated medium effect sizes (defined as Cohen's d ≥ 0.5).

Table 3. Mean assessment values by time and evaluator
Item # | Early clinical week, student self-assessment M (SD) | Early clinical week, CI assessment M (SD) | Middle clinical week, student self-assessment M (SD) | Middle clinical week, CI assessment M (SD) | Late clinical week, student self-assessment M (SD) | Late clinical week, CI assessment M (SD) | Early vs. late clinical week, student ratingsa: Zb | P | Cohen's d
Item 1 | 3.57 (0.79) | 3.57 (0.54) | 4.43 (0.98) | 4.29 (0.49) | 4.29 (0.49) | 4.57 (0.54) | −1.633 | 0.102 | 1.01
Item 2 | 3.14 (1.22) | 3.43 (0.79) | 4.29 (0.76) | 4.29 (0.49) | 4.43 (0.79) | 4.57 (0.54) | −1.983 | 0.047* | 1.26
Item 3 | 3.43 (0.79) | 3.43 (0.98) | 3.83 (0.75) | 4.00 (0.82) | 4.00 (0.82) | 4.43 (0.54) | −1.190 | 0.234 | 0.71
Item 4 | 3.29 (0.95) | 3.29 (0.76) | 4.00 (0.58) | 4.33 (0.52) | 3.86 (0.38) | 4.71 (0.49) | −1.265 | 0.206 | 0.79
Item 5 | 3.57 (0.54) | 3.86 (0.69) | 3.71 (0.49) | 4.29 (0.49) | 4.29 (0.76) | 4.71 (0.49) | −1.890 | 0.059 | 1.09
Item 6 | 3.57 (0.54) | 4.14 (0.90) | 3.71 (0.49) | 4.43 (0.54) | 4.14 (0.38) | 4.43 (0.78) | −2.000 | 0.046* | 1.22
Item 7 | 3.71 (0.95) | 3.57 (0.98) | 4.14 (0.69) | 4.43 (0.54) | 4.43 (0.54) | 4.71 (0.49) | −1.633 | 0.102 | 0.93
Item 8 | 3.14 (0.90) | 3.43 (1.27) | 3.86 (0.90) | 4.29 (0.77) | 4.14 (0.38) | 4.43 (0.79) | −2.070 | 0.038* | 1.44
Item 9 | 3.29 (1.11) | 3.29 (1.11) | 4.29 (0.76) | 4.43 (0.79) | 4.29 (0.76) | 4.14 (0.69) | −2.070 | 0.038* | 1.05
aWilcoxon signed-rank test; bbased on negative ranks; *P < 0.05.

Discussion

Aspects of the SNAPPS model that promote verbalization and student PT initiative for learning share the additional benefits of enhancing communication and driving student PT accountability during the clinical education experience. Additionally, use of the educational framework reminds both students and CIs of their roles in the educational experience. When both educational partners approach a patient case understanding their roles and responsibilities in the SNAPPS process, the student PT may be less likely to fall back on passive observation or simply wait for directives from the CI. When the student PT understands basic expectations, higher levels of cognitive, psychomotor, and behavioral growth are made possible. Improved confidence accompanies gains in skill, knowledge, and reasoning.

We expected that use of the SNAPPS model would improve student PT and CI perceptions of student PT verbal ability, decision-making, and confidence levels after new patient evaluations. Consistent with studies involving medical students,14–16 results of the current study showed self-reported improvements in student PTs' clinical reasoning processes, their verbal communication, their ability to ask thoughtful questions, and their projected confidence when moving through the SNAPPS steps. The CI ratings were, in general, in agreement with those of students. These results highlight improvements in perceived confidence when the CI is poised as a facilitator and tasked with the responsibilities of listening, guiding, encouraging, and scaffolding clinical reasoning. The SNAPPS process is learner centered, yet the framework establishes communication checkpoints between students and CIs that shape higher-order clinical decisions. The framework upholds concepts of Bloom's revised taxonomy for teaching, learning, and assessing,17 with greater levels of instructor guidance provided for processes requiring analysis, evaluation, and creation. While student PTs may enter the clinical experience with strong factual or procedural knowledge, frameworks are helpful for developing metacognitive knowledge17 that translates to improved clinical reasoning.

Limitations

Case complexity was not taken into consideration when comparing early clinical week and late clinical week ratings. Because student PT self-assessment ratings of ability and confidence may be influenced by complexities and particulars of the patient case itself, aside from growth in ability and confidence, circumstances for comparison may not be completely equitable. The instrument’s reliance on a single investigator to generate the original assessment statements can be perceived as a study limitation, as well.

The small number of participants from a narrow geographical region may limit generalizability and may also call into question the true strength of student-CI rating agreement. Recruiting student-CI pairs to participate in this voluntary experience was challenging, as it required effort beyond typical expectations. Participating CIs were typically known to the program as CIs already invested in educating students, and students who chose to participate may already have been predisposed to taking ownership of their learning. These tendencies add an additional layer of selectivity and bias to the student-CI pairs choosing to participate and suggest that generalizability may be further limited by student and CI motivation, the CI's perceived role as an educator, and the student's predisposition toward engaged and active learning. These factors may also pose a barrier to broader implementation.

Differences in ICE and terminal clinical experience lengths, in weeks, should be considered. However, student PTs were observed to benefit from the use of the SNAPPS model even when participating in an early curriculum experience (ICE) spanning fewer weeks, which offers encouraging support for the use of the SNAPPS framework in the clinic. A single student's choice to participate in the SNAPPS model twice may have impacted that student PT's second data set means at early and subsequent weeks, since successful use of the model should translate to improved ratings with advancing education. However, a new student-CI relationship may impact these second set means as well, at least during the early week, offering the potential for an initial week 'reset' rating.

Clinical relevance

Physical therapists in all fifty US states enjoy direct access privileges, although stipulations on these privileges vary by state. Direct access is especially important to the PT providing services in the OP or rural general setting, where serving as the first point of contact is a likely scenario. As autonomous practitioners, PTs must be ready to differentiate diagnoses and develop a plan of care specific to the patient's individual set of problems. While the OP environment is a natural fit for this model, health professionals who frequently diagnose conditions in other healthcare settings, such as inpatient environments,14 have also found use for educating students with the SNAPPS model. Clinical educators in these settings should consider adapting the model to the evaluation processes specific to those settings.

It must be acknowledged that a barrier to broad implementation of this approach in the OP clinic is CI 'buy-in', since the method requires time, patience, and training to ensure method fidelity. However, for CIs invested in student education, the SNAPPS clinical reasoning framework is an educational tool that helps student PTs organize and articulate thought processes as they move through the initial evaluation and form a plan of care. The framework also encourages students to express their uncertainties and probe the CI to understand the expert's thought process. Educators can use this model to promote higher-order cognitive processes in both the classroom and clinic environments. Use of the SNAPPS model in the OP PT setting is promising for several reasons: the model encourages a more active learning experience, promotes accountability for learning, develops student verbal and analytical skills, and considers direct access as it relates to the first point of contact professional.

Ethics statement

All relevant ethical safeguards were met with this study, including approval through the University of South Dakota's Institutional Review Board (IRB). As such, the study protocol conforms to the ethical guidelines established in the 1975 Declaration of Helsinki.

References

  1. Cutrer WB, Sullivan WM, Fleming AE. Educational strategies for improving clinical reasoning. Curr Probl Pediatr Adolesc Health Care (2013) 43(9): 248–57. doi: 10.1016/j.cppeds.2013.07.005
  2. Levin M, Cennimo D, Chen S, et al. Teaching clinical reasoning to medical students: a case-based illness script worksheet approach. MedEdPORTAL (2016) 12: 10445. doi: 10.15766/mep_2374-8265.10445
  3. Choi S, Oh S, Lee DH, et al. Effects of reflection and immediate feedback to improve clinical reasoning of medical students in the assessment of dermatologic conditions: a randomised controlled trial. BMC Med Educ (2020) 20: 146. doi: 10.1186/s12909-020-02063-y
  4. Croskerry P, Nimmo G. Better clinical decision-making and reducing diagnostic error. J R Coll Physicians Edinb (2011) 41(2): 155–62. doi: 10.4997/JRCPE.2011.208
  5. Ferdinand NK, Kray J. Does language help regularity learning? The influence of verbalizations on implicit sequential regularity learning and the emergence of explicit knowledge in children, younger and older adults. Dev Psychol (2017) 53(3): 597–610. doi: 10.1037/dev0000262
  6. Gagné RM, Smith EC, Jr. A study of the effects of verbalization on problem-solving. J Exp Psychol (1962) 63(1): 12–18. doi: 10.1037/h0048703
  7. Berardi-Coletta B, Buyer LS, Dominowski RL, et al. Metacognition and problem-solving: a process-oriented approach. J Exp Psychol Learn Mem Cogn (1995) 21(1): 205–23. doi: 10.1037/0278-7393.21.1.205
  8. Oh RC. The Socratic Method in medicine – the labor of delivering medical truths. Fam Med (2005) 37(8): 537–9.
  9. Wolpaw TM, Wolpaw DR, Papp KK. SNAPPS: a learner-centered model for outpatient education. Acad Med (2003) 78: 893–8. doi: 10.1097/00001888-200309000-00010
  10. Connor DF, Pearson GS. Feasibility and implementation of SNAPPS in an outpatient child psychiatry clinic. Acad Psychiatry (2017) 41: 299–300. doi: 10.1007/s40596-016-0635-7
  11. Banning M. A review of clinical decision-making: models and current research. J Clin Nurs (2008) 17: 187–95.
  12. Shin HS. Reasoning processes in clinical reasoning: from the perspective of cognitive psychology. Korean J Med Educ (2019) 31(4): 299–308. doi: 10.3946/kjme.2019.140
  13. Roach K, Gandy J, Deusinger SS, et al. The development and testing of APTA clinical performance instruments. Phys Ther (2002) 82(4): 329–53. doi: 10.1093/ptj/82.4.329
  14. Jain V, Rao S, Jinadani M. Effectiveness of SNAPPS for improving clinical reasoning in postgraduates: randomized controlled trial. BMC Med Educ (2019) 19(1): 224. doi: 10.1186/s12909-019-1670-3
  15. Kapoor A, Kapoor A, Kalraiya A, et al. Use of SNAPPS model for pediatric outpatient education. Indian Pediatr (2017) 54(4): 288–90. doi: 10.1007/s13312-017-1090-6
  16. Sawanyawisuth K, Schwartz A, Wolpaw T, et al. Expressing clinical reasoning and uncertainties during a Thai internal medicine ambulatory care rotation: does the SNAPPS technique generalize? Med Teach (2015) 37(4): 379–84. doi: 10.3109/0142159X.2014.947942
  17. Anderson LW, Krathwohl DR, Airasian PW, et al. A taxonomy for learning, teaching, and assessing: a revision of Bloom's taxonomy of educational objectives. New York: Addison Wesley Longman, Inc; 2001.

Appendix A. SNAPPS Student Log Worksheet

  1. SUMMARIZE History & Findings
    • Obtain the history.
    • Perform the evaluation.
    • Consider: any red flags to prevent treatment?
    • Develop your problem list.
    • Present a concise summary to your CI.
  2. NARROW your Differential
    • Verbalize to your CI what you think the possible physical therapy or medical diagnosis may be (use the spine classification system for appropriate patients).
  3. ANALYZE your Differential
    • Tell your CI what other diagnoses are possible.
    • Explain the findings that allow you to eliminate other potential diagnoses.
  4. PROBE with Questions
    • Ask your CI one or two thoughtful questions about this diagnosis, its set of differentials, outcomes, choice of interventions, etc.
    •  Your question(s):
  5. PLAN Management
    • Initiate an intervention plan.
    •  Plan is evidence based
  6. SELECT a case-related issue for Self-Directed Learning
    • Write down a specific question with relevance to this evaluation.
    • Investigate.
    • Report back to your CI tomorrow about your findings.
  7. Does your curriculum include full-time, non-ICE experiences before the didactic curriculum is complete?
    • Yes
    • No

Appendix B. Student Self-Assessment

Evaluate your performance for each evaluation using the following questions and their response rubrics. Be sure to use the same evaluation number as your clinical instructor. If you are using the Student Log Worksheet, this evaluation number should match the number on your Log, as well.

Item 1. My ability to generate a list of possible diagnoses that correlated with my Clinical Instructor’s list was:

5 Excellent My list was comprehensive and matched my CI’s list exactly.
4 Good My list included the most likely diagnosis, but other possibilities generated by my CI were missing.
3 Average My list included the most likely diagnosis; however, a number of my CI’s choices were missing.
2 Fair I generated a diagnoses list, but none of my choices were among those on my CI’s list.
1 Poor I was unable to generate a list of possible diagnoses.
NA Not Applicable Not applicable to this evaluation.

Item 2. My ability to explain to my Clinical Instructor, verbally, why the chosen diagnosis(es) was/were valid was:

5 Excellent I clearly and succinctly expressed my reasoning in a well-organized, well-articulated, and logical way.
4 Good My language was clear and concise, and my reasoning was presented in a logical and well-organized way.
3 Average Although my language was not as clear, concise, and direct as possible, my ideas were presented in a logical and well-organized way.
2 Fair My ideas were delivered with many pauses; I had difficulty expressing my reasoning in an articulate and well-organized way.
1 Poor I ‘stumbled’ on my words, paused often, and found it necessary to change my ideas in midcourse of explaining them.
NA Not Applicable Not applicable to this evaluation.

Item 3. My ability to explain to my Clinical Instructor, verbally, why I ruled out other potential diagnosis(es) was:

5 Excellent I clearly and succinctly expressed my reasoning in a well-organized, well-articulated, and logical way.
4 Good My language was clear and concise, and my reasoning was presented in a logical and well-organized way.
3 Average Although my language was not as clear, concise, and direct as possible, my ideas were presented in a logical and well-organized way.
2 Fair My ideas were delivered with many pauses; I had difficulty expressing my reasoning in an articulate and well-organized way.
1 Poor I ‘stumbled’ on my words, paused often, and found it necessary to change my ideas in midcourse of explaining them.
NA Not Applicable Not applicable to this evaluation.

Item 4. My ability to choose and plan for the administration of appropriate tests and measures to confirm the diagnosis(es) was:

5 Excellent I chose the most appropriate tests and measures, position changes were minimized, and there were no irrelevant tests/measures performed.
4 Good I chose appropriate tests and measures and kept position changes to a minimum; however, I administered some irrelevant tests/measures.
3 Average I chose appropriate tests and measures but could have done a better job of planning position changes. I administered some irrelevant tests/measures, as well.
2 Fair I was not sure which tests and measures would give me the information I was looking for. I administered tests in an order that required many position changes.
1 Poor I was unable to generate any appropriate tests or measures without my CI’s assistance.
NA Not Applicable Not applicable to this evaluation.

Item 5. My ability to generate a problem list that corresponded with my Clinical Instructor’s list was:

5 Excellent My list was comprehensive and matched my CI’s list exactly.
4 Good My list was similar to my CI’s but some of the patient problems generated by my CI were missing on my list.
3 Average My problem list included more obvious problems, but many of the less obvious patient problems listed by my CI were missing on my list.
2 Fair I generated a problem list, but the problems I listed were not among those that my CI identified as primary problems for this patient.
1 Poor I was unable to generate a problem list without my Clinical Instructor’s help.
NA Not Applicable Not applicable to this evaluation.

Item 6. My ability to generate relevant and thoughtful questions to prompt discussion between my Clinical Instructor and me was:

5 Excellent My question(s) was/were well informed by my current knowledge, well-articulated, easily understood by my clinical instructor, and directly related to this evaluation. My question(s) prompted an important educational opportunity between my CI and me.
4 Good My question(s) was/were well considered, relevant to this evaluation, and based on my current knowledge. My question was easily understood by my clinical instructor. A good learning exchange resulted from my question(s).
3 Average My question(s) was/were directly related to this evaluation and based on my current knowledge and understanding, but my ability to articulate my question(s) was not as good as I would like. A good learning exchange resulted from my question(s).
2 Fair While my question may have been well articulated, it was simple and generated very little discussion between my CI and me.
1 Poor My question was not well articulated or did not prompt discussion/a learning opportunity between my clinical instructor and me. [OR] I was not able to generate a meaningful question.
NA Not Applicable Not applicable to this evaluation.

Item 7. My ability to explain, verbally, to my Clinical Instructor the reasoning behind my choice of intervention(s) was:

5 Excellent I was able to clearly and succinctly express my reasoning in a well-organized, well-articulated, and logical way.
4 Good My language was clear and concise, and my reasoning was presented in a logical and well-organized way.
3 Average Although my language was not as clear, concise, and direct as possible, my ideas were presented in a logical and well-organized way.
2 Fair My ideas were delivered with many pauses; I had difficulty expressing my reasoning in an articulate and well-organized way.
1 Poor I ‘stumbled’ on my words, paused often, and found it necessary to change my ideas in midcourse of explaining them.
NA Not Applicable Not applicable to this evaluation.

Item 8. My confidence in my ability to appropriately diagnose the pathology(ies) and impairment(s) relevant to this evaluation was:

5 Excellent I felt extremely confident.
4 Good I felt very confident.
3 Average I felt confident.
2 Fair I was not very confident.
1 Poor I was not confident at all.
NA Not Applicable Not applicable to this evaluation.

Item 9. My confidence in my ability to select the appropriate intervention(s) relevant to this evaluation was:

5 Excellent I felt extremely confident.
4 Good I felt very confident.
3 Average I felt confident.
2 Fair I was not very confident.
1 Poor I was not confident at all.
NA Not Applicable Not applicable to this evaluation.

Appendix C. Clinical Instructor Assessment of the Student

Evaluate the student's performance for each evaluation using the following questions and their response rubrics. Be sure to use the same evaluation number as the student.

Item 1. The student’s ability to generate a list of possible diagnoses that correlated with my list was:

5 Excellent The student’s list was comprehensive and matched my list exactly.
4 Good The student’s list included the most likely diagnosis, but other possibilities that I generated were missing.
3 Average The student’s list included the most likely diagnosis; however, a number of my choices were missing.
2 Fair The student generated a diagnoses list, but none of the student’s choices were among those on my list.
1 Poor The student was unable to generate a list of possible diagnoses.
NA Not Applicable Not applicable to this evaluation.

Item 2. The student’s ability to explain to me, verbally, why the chosen diagnosis(es) was/were valid was:

5 Excellent The student showed exceptional ability to clearly and succinctly express his/her reasoning in a well-organized, well-articulated, and logical way.
4 Good The student’s language was clear and concise and his/her reasoning was presented in a logical and well-organized way.
3 Average Although the student’s language was not as clear, concise, and direct as possible, his/her ideas were presented in a logical and well-organized way.
2 Fair The student’s ideas were delivered with many pauses; the student had difficulty expressing his/her reasoning in an articulate and well-organized way.
1 Poor The student ‘stumbled’ on words, paused often, and found it necessary to change ideas in midcourse of explaining them.
NA Not Applicable Not applicable to this evaluation.

Item 3. The student’s ability to explain to me, verbally, why he/she ruled out other potential diagnosis(es) was:

5 Excellent The student showed exceptional ability to clearly and succinctly express his/her reasoning in a well-organized, well-articulated, and logical way.
4 Good The student’s language was clear and concise and his/her reasoning was presented in a logical and well-organized way.
3 Average Although the student’s language was not as clear, concise, and direct as possible, his/her ideas were presented in a logical and well-organized way.
2 Fair The student’s ideas were delivered with many pauses; the student had difficulty expressing his/her reasoning in an articulate and well-organized way.
1 Poor The student ‘stumbled’ on words, paused often, and found it necessary to change ideas in midcourse of explaining them.
NA Not Applicable Not applicable to this evaluation.

Item 4. The student’s ability to choose and plan for the administration of appropriate tests and measures to confirm the diagnosis(es) was:

5 Excellent The student chose the most appropriate tests and measures, position changes were minimized, and there were no irrelevant tests/measures performed.
4 Good The student chose appropriate tests and measures and kept position changes to a minimum; however, the student administered some irrelevant tests/measures.
3 Average The student chose appropriate tests and measures but could have done a better job of planning position changes. The student administered some irrelevant tests/measures, as well.
2 Fair The student was not sure which tests and measures would give him/her the information he/she was looking for. The student administered tests in an order which required many position changes.
1 Poor The student was unable to generate any appropriate tests or measures without my assistance.
NA Not Applicable Not applicable to this evaluation.

Item 5. The student’s ability to generate a problem list that corresponded with my list was:

5 Excellent The student’s list was comprehensive and matched my list exactly.
4 Good The student’s list was similar to mine, but some of the patient problems were missing on the student’s list.
3 Average The student’s problem list included more obvious problems, but many of the less obvious patient problems were missing on his/her list.
2 Fair The student generated a problem list, but the problems he/she listed were all different compared to those I listed as primary problems for the patient.
1 Poor The student was unable to generate a problem list without my help.
NA Not Applicable Not applicable to this evaluation.

Item 6. The student’s ability to generate relevant and thoughtful questions to prompt discussion between the student and me was:

5 Excellent The student’s question(s) was/were well informed by his/her current knowledge, well-articulated, easily understood, and directly related to this evaluation. The student’s question(s) prompted an important educational opportunity between the student and me.
4 Good The student’s question(s) was/were well considered, relevant to this evaluation, and based on his/her current knowledge. The student’s question(s) was/were easily understood. A good learning exchange resulted from the student’s question(s).
3 Average The student’s question(s) was/were directly related to this evaluation and based on his/her current knowledge and understanding, but the student’s ability to articulate his/her question(s) was not as good as it could have been. A good learning exchange resulted from the student’s question(s).
2 Fair While the student’s question may have been well articulated, it was simple and generated very little discussion between the student and me.
1 Poor The student’s question was not well articulated or did not prompt discussion/a learning opportunity between the student and me. [OR] The student was not able to generate a meaningful question.
NA Not Applicable Not applicable to this evaluation.

Item 7. The student’s ability to explain, verbally, the reasoning behind his/her choice of intervention(s) was:

5 Excellent The student demonstrated exceptional ability to clearly and succinctly express his/her reasoning in a well-organized, well-articulated, and logical way.
4 Good The student’s language was clear and concise and his/her reasoning was presented in a logical and eed way.
3 Average The student’s language was not as clear, concise, and direct as possible, but his/her ideas were presented in a logical and well-organized way.
2 Fair The student’s ideas were delivered with many pauses; he/she had difficulty expressing his/her reasoning in an articulate and well-organized way.
1 Poor The student ‘stumbled’ on his/her words, paused often, and found it necessary to change ideas in midcourse of explaining them.
NA Not Applicable Not applicable to this evaluation.

Item 8. The student’s apparent confidence in his/her ability to appropriately diagnose the pathology(ies) and impairment(s) relevant to this evaluation was:

5 Excellent The student appeared extremely confident.
4 Good The student appeared very confident.
3 Average The student appeared confident.
2 Fair The student demonstrated little confidence.
1 Poor The student appeared to have no confidence in his/her ability to generate an appropriate diagnosis(es).
NA Not Applicable Not applicable to this evaluation.

Item 9.  The student’s apparent confidence in his/her ability to select the appropriate intervention(s) relevant to this evaluation was:

5 Excellent The student appeared extremely confident.
4 Good The student appeared very confident.
3 Average The student appeared confident.
2 Fair The student demonstrated little confidence.
1 Poor The student appeared to have no confidence in his/her ability to generate an appropriate diagnosis(es).
NA Not Applicable Not applicable to this evaluation.