Ruth Lyons Hansen1*, Danielle Struble-Fitzsimmons2 and Kathryn Ryans1
1School of Health and Natural Sciences, Mercy University, Dobbs Ferry, NY, USA; 2Columbia University-Programs in Physical Therapy, New York, NY, USA
Purpose: The Clinical Performance Instrument (CPI) has been adopted by US academic physical therapist (PT) programs as a key measure of clinical education performance. In May 2023, the American Physical Therapy Association (APTA) released an updated version, the CPI 3.0, which included significant changes. The purpose of this study was to explore the perceptions of students (SPTs) and clinical instructors (CIs) who were the initial users of the CPI 3.0.
Methods: Retrospective study utilizing an investigator-created electronic survey designed to measure the constructs of technology, scoring, and stakeholder burden relative to the previous version. The survey was distributed to five cohorts of students and their CIs who used the CPI 3.0 for a full-time clinical experience during the inaugural release. IRB approval was obtained.
Results: Students (n = 63) and CIs (n = 47) reported that the CPI 3.0 platform was easy to access (95.2% SPT; 76.6% CI) and navigate (93.5% SPT; 72.3% CI). However, some users experienced problems submitting the final assessment. More than 90% of students and CIs agreed that the CPI 3.0 was able to capture an accurate reflection of student performance. In addition, 91.1% of CIs reported that the tool would enable them to capture student performance difficulties that would put students at risk of not passing. Those who had used the previous version of the CPI agreed that the CPI 3.0 was less time consuming (64.3% SPT; 76.3% CI) and less burdensome (60.5% SPT; 68.4% CI).
Conclusion: Students and CIs perceived the CPI 3.0 favorably in terms of ability to capture performance, time to complete, and overall burden.
Keywords: Clinical Performance Instrument 3.0; Perceptions; PT Clinical Instructors; PT Students
Citation: Journal of Clinical Education in Physical Therapy 2025, 7: 13048 - http://dx.doi.org/10.52214/jcept.v7.13048
Copyright: © 2025 Ruth Lyons Hansen et al.
This is an Open Access article distributed under the terms of a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (https://creativecommons.org/licenses/by-nc-nd/4.0/).
Received: 17 September 2024; Revised: 16 March 2025; Accepted: 2 May 2025; Published: 5 August 2025
Poster presentation at CSM 2025.
Competing interests and funding: The authors have no conflicts of interest.
*Ruth Lyons Hansen, Doctor of Physical Therapy Program, Room 367, School of Health and Natural Sciences, Mercy University, 555 Broadway, Dobbs Ferry, NY 10522. Email: rlhansen@mercy.edu
The Commission on Accreditation in Physical Therapy Education (CAPTE) requires physical therapist education programs (PTEPs) to include at least 30 weeks of full-time clinical education.1 In the 2023/2024 academic year, full-time clinical education experiences accounted for 29% of the average length of the professional curriculum.2 The assessment of student performance in clinical learning environments is required to evaluate progressive skill development in alignment with academic program expectations, provide students with formative and summative feedback, and ensure that students reach ‘entry-level’ clinical performance. While CAPTE requires PTEPs to assess student performance during full-time clinical education experiences, it does not endorse or require any particular tool. PTEPs have the autonomy to select and use a clinical assessment tool that meets all related stakeholders’ needs, including the academic program, Director of Clinical Education (DCE), student, Site Coordinator of Clinical Education (SCCE), and clinical instructor (CI).1 Tools should be psychometrically sound and not create undue burden for users.3,4 In the literature, there are no defined best practices, but strengths and limitations of the four US-developed assessments (Blue MACS, PT MACS, Clinical Internship Evaluation Tool [CIET], and Clinical Performance Instrument [CPI]) have been described.3,5–15 Of these, the CPI is the most common assessment tool used by CAPTE-accredited programs.16 The PT CPI includes two main components: (1) performance criteria related to physical therapist practice and (2) a defined rating scale. The CPI is used by the CI to evaluate student performance and by the student to self-evaluate performance at both midterm and final.17,18
The CPI was originally developed by the American Physical Therapy Association (APTA) in 1997.19 It was revised in 200617 and moved to online administration (CPI 2.0/Web) in 2008. In May 2023, after a full psychometric review,20 the APTA released the CPI 3.018 to replace the CPI 2.0/Web.20 The CPI 3.0 was developed to address known limitations of the CPI 2.0/Web21,22 and to better align with contemporary physical therapist practice.23,24 The CPI 3.0 includes significant changes compared to the CPI 2.0/Web, including a different technology platform, new user training requirements, revised performance criteria, a decreased number of items, and updated scoring and rating criteria.18,25–27 A comparison of the tools is illustrated in Fig. 1. While the APTA published a technical brief describing the CPI 3.0 development20 and a recent study explored its validity,28 there is currently no published research exploring users’ perceptions of and satisfaction with the CPI 3.0.
Fig. 1. Comparison of CPI 2.0 and CPI 3.0 performance criteria. CPI = Clinical Performance Instrument.
Physical therapist education programs must participate in regular clinical education curricular review.1 Therefore, when adopting a new student assessment tool, PTEPs have a responsibility to collect and review data to make informed decisions about curricular alignment, student outcomes, and future plans. In the early stages of transition from one assessment tool to another, stakeholder feedback can provide valuable information. During the inaugural phase of the CPI 3.0 roll-out, academic (DCE) and clinical (SCCE) members of the New York/New Jersey (NYNJ) Clinical Education Consortium anecdotally reported various administrative and technical challenges using the new tool. To provide a more comprehensive understanding of the tool’s impact, we sought to survey non-administrative users, namely students and CIs, regarding their experience with the tool. The purpose of this study was to explore the perceptions of PT students and CIs who were the initial users of the CPI 3.0, focusing on the domains of technology, scoring, and comparisons to the previous tool, the CPI 2.0/Web. Feedback from these key stakeholders can be used by academic programs to evaluate the ability of the CPI 3.0 to meet their needs for student assessment in the clinical setting.1
This study was a retrospective, exploratory, descriptive study utilizing two similar but separate investigator-created electronic surveys: one developed for students and the other for CIs. The surveys were developed and distributed using Research Electronic Data Capture (REDCap) hosted at Mercy University.29,30
A non-probability sample of convenience was used.31 PT students and CIs affiliated with physical therapist education programs in the NYNJ Clinical Education Consortium that initially transitioned to the new CPI 3.0 in May of 2023 were asked to participate. The inclusion criteria were PT students and CIs who used the CPI 3.0 to either evaluate or self-evaluate student clinical performance for a full-time clinical experience between June and December 2023.
The survey was distributed via a REDCap link embedded in an email sent by the DCEs of three PTEPs to five cohorts of students and their CIs at the end of the scheduled rotations, after the CPI 3.0 had been used to evaluate midterm and/or final performance. A reminder email was sent 3 weeks after the initial email. The survey was anonymous, and informed consent was obtained at the beginning of the survey before participants could progress to the survey questions.
After consent was obtained, participants were required to indicate whether they had used the CPI 3.0 to evaluate or self-evaluate student performance, which determined whether they met the inclusion criteria.
The survey ended for participants who responded ‘no’ to this question. After the first two inclusion questions, participants were not required to answer a question in order to proceed to the next. Conditional branching was used to ask students and CIs who had used the CPI 2.0/Web in the past to answer two additional questions.
Due to the novel nature of this tool and the time sensitivity of capturing initial impressions, we opted to develop our own survey. The survey was developed by the investigators, who are all experienced DCEs with previous experience in survey development and research. The survey explored the constructs of the technology platform, the ability to capture student performance, and user burden. The survey consisted of Likert-scale items, short-answer questions, and demographic questions. A Delphi panel, consisting of PTEP faculty, a CI/SCCE, and a student, reviewed the survey for face and content validity, achieving greater than 90% agreement after one round of review, followed by pilot testing. A simple illustration of how such an agreement threshold can be computed is sketched below.
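As an illustration only (this does not reflect the authors’ actual procedure), one common way to operationalize a ‘greater than 90% agreement’ criterion in a Delphi review is the share of panel members endorsing each item; the panel size, ratings, and item names below are hypothetical.

```python
# Hypothetical sketch of a Delphi agreement check: for each survey item,
# compute the percentage of panel members rating it acceptable.
# Panel size, ratings, and item names are illustrative, not study data.

def percent_agreement(ratings: list[str], endorse: str = "acceptable") -> float:
    """Percentage of reviewers giving the endorsing rating."""
    return 100 * ratings.count(endorse) / len(ratings)

panel_ratings = {
    "item_01": ["acceptable"] * 5,               # all 5 panel members agree
    "item_02": ["acceptable"] * 4 + ["revise"],  # 4 of 5 agree
}

for item, ratings in panel_ratings.items():
    pct = percent_agreement(ratings)
    status = "meets >90%" if pct > 90 else "revise and re-review"
    print(f"{item}: {pct:.0f}% agreement ({status})")
```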
Data were downloaded from REDCap29,30 and analyzed using SPSS version 27.32 Data were screened for inclusion criteria, and incomplete records (n = 2) were deleted. Participants who answered at least 90% of the survey questions were included in the analysis, as this was determined to be adequate to answer the research question.33 Cronbach’s alpha was used to test the overall internal consistency of each survey and of individual items. Descriptive statistics, frequencies, percentages, and means were used to summarize the data. Participants were not required to answer all questions; therefore, valid percentages were used to present the data.
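The analysis itself was run in SPSS; purely as a minimal sketch of the equivalent computations, assuming Likert responses coded numerically in a pandas DataFrame with NaN for skipped questions, the reliability and valid-percent calculations could look like the following (all function and column names are hypothetical):

```python
# Minimal sketch (not the authors' SPSS analysis) of the reported statistics:
# overall Cronbach's alpha, alpha-if-item-deleted, and 'valid percent'
# (percentages computed over non-missing responses only).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame of item scores (rows = respondents)."""
    items = items.dropna()                      # complete cases only
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_deleted(items: pd.DataFrame) -> pd.Series:
    """Alpha recomputed with each item dropped; a value above the overall
    alpha would suggest the instrument improves without that item."""
    return pd.Series(
        {col: cronbach_alpha(items.drop(columns=col)) for col in items.columns}
    )

def valid_percent(responses: pd.Series) -> pd.Series:
    """Percentage of each response option among non-missing answers."""
    return responses.value_counts(normalize=True).mul(100).round(1)

# Toy example with three hypothetical Likert items
df = pd.DataFrame({
    "q1": [4, 5, 3, 4, 5],
    "q2": [4, 4, 3, 5, 5],
    "q3": [5, 5, 2, 4, 4],
})
print(round(cronbach_alpha(df), 2))
print(alpha_if_deleted(df).round(2))
print(valid_percent(df["q1"]))
```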
Sixty-three students from three PTEPs and 48 CIs from a variety of practice settings completed the survey (response rates were 24.6% for CIs and 30.7% for students; 29% overall). Two CI cases were deleted for not meeting inclusion criteria or for survey incompleteness. CIs had a mean of 8.9 (SD = 10.4) years of experience; 39 (84.8%) had used the previous version of the CPI and 7 (15.2%) had not. Of the students who participated, 43 (68.3%) had used the previous version of the CPI and 20 (31.7%) had not; 23 (36.5%) were in their first clinical experience (CE), 39 (61.9%) were in intermediate experiences, and one was in a remedial experience. No students were in their final experience. Participant demographics of CIs and students are presented in Tables 1 and 2, respectively.
Cronbach’s alpha was .94 for the overall CI survey and .92 for the student survey, indicating excellent internal consistency reliability. Item-level analysis did not indicate that alpha would improve if any item were dropped from either instrument.
Overall, CIs and students perceived the new CPI favorably in terms of the ease of use of the technology platform, scoring and capturing student performance, and the administrative burden compared to the previous version of the CPI. Frequency and percentages of all Likert scale item responses for CIs and students are presented in Tables 3 and 4, respectively.
Both groups reported that the CPI 3.0 platform was easy to access (95.2% of students and 76.6% of CIs) and navigate (93.5% of students and 72.3% of CIs). Most students (84.1%) and CIs (83.5%) agreed that the user guide instructions were helpful. Some students (9.7%) and CIs (28.3%) reported problems submitting the final assessment. Open-ended comments from both groups described technology ‘glitches’ during the submission process that resulted in comments being lost and having to be rewritten.
Most students (96.7%) and CIs (95.7%) reported being confident that the completion of the APTA CPI 3.0 training enabled them to use the instrument appropriately. Both students (92%) and CIs (91.5%) agreed that the CPI 3.0 was able to capture an accurate reflection of student performance in the five domains of practice. In addition, CIs (91.1%) reported that the tool would enable them to capture student performance difficulties that would put them at risk of not passing. Both groups reported that descriptions of supervision/caseload (87.3% of students and 83.0% of CIs) and sample behaviors (88.7% of students and 87.0% of CIs) were helpful in differentiating ratings between performance levels. Despite these findings, open-ended comments revealed that both students and CIs had some difficulty differentiating between performance levels, especially between beginner and advanced beginner and between intermediate and advanced intermediate.
Both students and CIs who used the previous version (PT CPI 2.0/Web) agreed that the CPI 3.0 was less time consuming (64.3% of students and 76.3% of CIs) and less burdensome (60.5% of students and 68.4% of CIs). The inability to rate between performance levels on the CPI 3.0, which would capture smaller changes in performance, was perceived negatively by students and by some CIs who had previously used the CPI 2.0/Web.
The recent transition to the updated CPI 3.0 has the potential to impact a large number of PTEPs. Historically, most US-based PTEPs have used earlier versions of the CPI to assess student performance in the clinical setting.16 Although some academic programs have transitioned from the CPI 2.0/Web to other clinical assessment tools,8 these same programs may seek comparisons with the revised CPI as part of the curricular review process.1
We found that students and CIs had positive feedback regarding the accessibility and navigation of the CPI 3.0 online platform. This aligns with prior stakeholder survey studies of other clinical performance assessment tools. In 2019, Haj et al. reported that DCEs and CIs found the CPI 2.0/Web delivery methods to be a strength of that tool.14 Similarly, the online version of the CIET has been perceived favorably by CI and student users.8,9 In a case report, CIs agreed that the CIET was easy to access (60.9%) and complete (87%).8 Furthermore, a multi-site study by Birkmeier et al. (2022) reported that both CIs and students perceived the CIET to be easier to use than the CPI 2.0/Web (p < 0.001).9 Finally, in our study, the negative comments from students and CIs regarding the CPI 3.0 technology platform, including glitches and submission-related issues, are similar to the CIET interface issues described by student users in the North and Sharp (2020) sample.8
The CPI 3.0 requires all users to complete a free 1-hour online training course before use.18 This is slightly longer than the training time required for the CIET (<1 h)8,9 but shorter than the training required for the CPI 2.0/Web (2 h).8,17 In our sample, students (96.7%) and CIs (95.7%) overwhelmingly reported high levels of confidence that the APTA CPI 3.0 training enabled them to use the instrument appropriately. In comparison, the findings for CIET training are mixed. While the Birkmeier et al. (2022) survey of CIs and students found that both groups had positive perceptions of CIET training effectiveness,9 North and Sharp (2020) reported that only 65.6% of CIs felt that the CIET met user needs; that study also described CI challenges in completing the CIET training quiz.8
To ensure that students are meeting program benchmarks and ultimately achieving ‘entry-level competence’ as required by CAPTE,1 PTEPs must be confident that a clinical performance tool is accurately capturing student performance. However, both previous versions of the CPI17,19 had reported limitations in this domain, including variations in scoring and narrative comments based on CI training and experience,15,34,35 visual analog scale (VAS) score validity issues,12 and, concerningly, incomplete performance item scoring by CIs.11 Conversely, North and Sharp found that of the CIs who had used the CIET for 1 year, 95.7% agreed that the CIET was representative of entry-level skills and behaviors,8 which is similar to our results regarding the CPI 3.0. Although not a validation study, our sample of students (92%) and CIs (91.5%) had favorable perceptions regarding the ability of the CPI 3.0 to effectively and accurately reflect a student’s performance level. This finding provides preliminary support for one of the key objectives of the CPI redesign: to address known issues with scoring and rating of the CPI 2.0/Web.20,21 Importantly, our study found that a high percentage (91.1%) of CIs reported that the tool would enable them to capture performance difficulties that would put a student at risk of not passing. This was a concern of the authors, since the stand-alone safety performance item was removed from the CPI 3.0.
A major aspect of the CPI 3.0 redesign was the reduction in performance items from 18 (CPI 2.0/Web)17 to 12 (CPI 3.0).18 According to the APTA, CPI 2.0/Web users expressed a need to reduce redundancy and completion times to make the updated tool more user-friendly.20 Published studies had shown that CPI 2.0/Web completion times were greater than 1 h and were longer than those of other tools.8,9 As noted previously, PTEPs should apply a comprehensive approach to clinical assessment tool selection, meaning that the tool must serve the needs of all related stakeholders, both academic and clinical. On the academic side, the assessment must accurately represent student performance in accordance with professional standards and program-specific competencies.1 On the clinical side, SCCEs and CIs have indicated a preference for an assessment tool that can be efficiently implemented in busy clinical environments.4,9,14,36–38 In practice, many CIs do not receive any time or productivity accommodations for the added student supervision workload, meaning they must complete CI responsibilities while also maintaining patient care, productivity, and, potentially, administrative duties.36,39–41 Multiple studies have shown that CIs experience stress when supervising students, with paperwork and grading contributing to CI burden; these negative perceptions are associated with CI dissatisfaction and create a barrier to clinical education operations.4,36–38 PTEPs, therefore, should consider CI-related needs in their curricular assessment of a clinical performance tool. This study found that among CIs and students who had used the prior version of the CPI, both groups felt that the CPI 3.0 was less time consuming and burdensome, demonstrating alignment with CPI 3.0 redesign goals20 and program considerations for CI impact.
In this study, the sample was limited to a small number of CIs and students who were linked to three specific universities and 158 clinical sites; therefore, the results cannot be generalized to all CIs and students utilizing the CPI 3.0 for assessment. Since the CPI 3.0 was in its inaugural stage during our data collection, another limitation is the lack of prior research against which to compare our results. In addition, the CPI 3.0 has been updated since its initial roll-out, which may have addressed some of the ‘technology glitches’ reported by CIs and students. Future CPI 3.0 research should include larger-scale reliability and validity studies and satisfaction surveys involving all stakeholders, including DCEs. Satisfaction surveys comparing the CPI 3.0 to other available tools may inform academic programs when choosing a tool to evaluate student performance in clinical education.
Despite some negative comments regarding technology glitches, perceptions of CIs and students were favorable for the CPI 3.0’s ability to effectively rate student performance. Overall, both groups of participants felt it was less time consuming and burdensome compared to the CPI 2.0/Web.
This study was approved by the Mercy University Institutional Review Board.
| 1. | Standards and required elements for accreditation of physical therapist education programs. Commission on Accreditation in Physical Therapy Education. Available from: https://www.capteonline.org/globalassets/capte-docs/capte-pt-standards-required-elements.pdf [cited 26 August 2024]. |
| 2. | Aggregate program data: 2023 physical therapist education programs fact sheet. Commission on Accreditation in Physical Therapy Education. Available from: https://www.capteonline.org (capte-2023-pt-fact-sheet.pdf) [cited 26 August 2024]. |
| 3. | O’Connor A, McGarr O, Cantillon P, et al. Clinical performance assessment tools in physiotherapy practice education: a systematic review. Physiotherapy (2018) 104(1): 46–53. doi: 10.1016/j.physio.2017.01.005 |
| 4. | Wilkinson T, Myers K, Bayliss J, et al. Facilitators and barriers to providing clinical education experiences through the lens of clinical stakeholders. J Phys Ther Educ (2023) 37(3): 193–201. doi: 10.1097/JTE.0000000000000280 |
| 5. | Hrachovy J, Clopton N, Baggett K, et al. Use of the blue MACS: acceptance by clinical instructors and self-reports of adherence. Phys Ther (2000) 80(7): 652–61. doi: 10.1093/ptj/80.7.652 |
| 6. | Stickley LA. A content validity of a clinical education performance tool: the physical therapist manual for the assessment of clinical skills. J Allied Health (2005) 34(1): 24–30. |
| 7. | Fitzgerald LM, Delitto A, Irrgang JJ. Validation of the clinical internship evaluation tool. Phys Ther (2007) 87(7): 844–60. doi: 10.2522/ptj.20060054 |
| 8. | North S, Sharp A. Embracing change in the pursuit of excellence: transitioning to the Clinical Internship Evaluation Tool for student clinical performance assessment. J Phys Ther Educ (2020) 34(4): 313–20. doi: 10.1097/JTE.0000000000000154 |
| 9. | Birkmeier M, Wheeler E, Garske HM, et al. Feasibility of use of the Clinical Internship Evaluation Tool in full-time clinical education experiences: a multi-institutional study. J Phys Ther Educ (2022) 36(3): 263–271. doi: 10.1097/JTE.0000000000000237 |
| 10. | Adams CL, Glavin K, Hutchins K, et al. An evaluation of the internal reliability, construct validity, and predictive validity of the Physical Therapist Clinical Performance Instrument (PT CPI). J Phys Ther Educ (2008) 22(2): 42–50. doi: 10.1097/00001416-200807000-00007 |
| 11. | Proctor PL, Dal Bello-Haas VP, McQuarrie AM, et al. Scoring of the Physical Therapist Clinical Performance Instrument (PT-CPI): analysis of 7 years of use. Physiother Can (2010) 62(2): 147–54. doi: 10.3138/physio.62.2.147 |
| 12. | Straube D, Campbell SK. Rater discrimination using the visual analog scale of the Physical Therapist Clinical Performance Instrument. J Phys Ther Educ (2003) 17(1): 33–38. doi: 10.1097/00001416-200301000-00006 |
| 13. | Roach KE, Frost JS, Francis NJ, et al. Validation of the Revised Physical Therapist Clinical Performance Instrument (PT CPI): Version 2006. Phys Ther (2012) 92(3): 416–28. doi: 10.2522/ptj.20110129 |
| 14. | Haj T, Wolden M, Wolden B. Perspectives on the PT CPI from directors of clinical education and clinical instructors. Poster presented at: Educational Leadership Conference, APTA, Bellevue, WA, 18–20 October 2019. |
| 15. | Rubertone PP, Nixon-Cave K, Wellmon R. Influence of clinical instructor experience on assessing doctor of physical therapist student clinical performance: a mixed-methods study. J Phys Ther Educ (2022) 36(1): 25–33. doi: 10.1097/JTE.0000000000000208 |
| 16. | Clinical education special interest group meeting minutes. Education leadership conference, Academy of Physical Therapy Education, 19 October 2019. Available from: https://www.aptaeducation.org/cesig-meeting-minutes [cited 26 August 2024]. |
| 17. | Physical therapist clinical performance instrument. American Physical Therapy Association. Available from: http://www.apta.org/PTCPI/ [cited 18 July 2023]. |
| 18. | Physical therapist clinical performance instrument 3.0. American Physical Therapy Association. Available from: http://www.apta.org/PTCPI/ [cited 18 July 2023]. |
| 19. | Task Force for the Development of Student Clinical Performance Instruments. The development and testing of APTA Clinical Performance Instruments. American Physical Therapy Association. Phys Ther (2002) 82(4): 329–53. |
| 20. | Crawford BF, Sinclair AL. Physical therapist and physical therapist assistant clinical performance instruments: validation study technical brief. Alexandria, VA: American Physical Therapy Association; 2023. |
| 21. | Sinclair AL. Research studies to support reliability and validity of the PT CPI and PTA CPI (2020 No. 072). Alexandria, VA: Human Resources Research Organization; 2020. |
| 22. | Wetherbee E, Dupre AM, Feinn RS, et al. Relationship between narrative comments and ratings for entry-level performance on the Clinical Performance Instrument: a call to rethink the Clinical Performance Instrument. J Phys Ther Educ (2018) 32(4): 333–43. doi: 10.1097/JTE.0000000000000060 |
| 23. | Harris JL, Rogers AP, Caramagno JP. Analysis of practice for the physical therapy profession: report memo 2021 (2021 No. 100). Alexandria, VA: Human Resources Research Organization; 2021. |
| 24. | Crawford BF, Borawski EA, Sinclair AL. Alignment of the physical therapist and physical therapist assistant clinical performance instrument content to current practice standards (2022 No. 052). Alexandria, VA: Human Resources Research Organization; 2022. |
| 25. | Crawford BF, May M, Sinclair AL, et al. Revising the physical therapist and physical therapist assistant clinical performance instrument performance criteria and rating scales (2022 No. 104). Alexandria, VA: Human Resources Research Organization; 2022. |
| 26. | Crawford BF, Sinclair AL. Intended uses of the physical therapist and physical therapist assistant clinical performance instruments (2022 No. 037). Alexandria, VA: Human Resources Research Organization; 2022. |
| 27. | Crawford BF, Sinclair AL. Developing a preliminary passing standard and scoring model for the revised physical therapist and physical therapist assistant clinical performance instruments (2022 No. 116). Alexandria, VA: Human Resources Research Organization; 2022. |
| 28. | Campbell DF, Alameri M, Macahilig-Rice F, et al. Validation of the revised American Physical Therapy Association physical therapist clinical performance instrument 3.0. Phys Ther (2025) 105(4). doi: 10.1093/ptj/pzaf015 |
| 29. | Harris PA, Taylor R, Thielke R, et al. Research electronic data capture (REDCap) – a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform (2009) 42(2): 377–81. doi: 10.1016/j.jbi.2008.08.010 |
| 30. | Harris PA, Taylor R, Minor BL, et al. The REDCap consortium: building an international community of software partners. J Biomed Inform (2019) 95: 103208. doi: 10.1016/j.jbi.2019.103208 |
| 31. | Portney LG, ed. Designing surveys and questionnaires. In: Foundations of clinical research: applications to evidence-based practice. 4th ed. F. A. Davis Company; 2020. Available from: https://fadavispt-mhmedical-com.rdas-proxy.mercy.edu/content.aspx?bookid=2885&sectionid=243181135 [cited 01 March 2025]. |
| 32. | IBM Corp. Released 2020. IBM SPSS Statistics for Windows, Version 27.0. Armonk, NY: IBM Corp. |
| 33. | Davies RS. Designing surveys for evaluations and research. Ed Tech Books. Available from: https://edtechbooks.org/designing_surveys?tab=front [cited 26 August 2024]. |
| 34. | Tsuda HC, Low S, Vlad G. A description of comments written by clinical instructors on the Clinical Performance Instrument. J Phys Ther Educ (2007) 21(1): 56–62. doi: 10.1097/00001416-200701000-00008 |
| 35. | Vendrely A, Carter R. The influence of training on the rating of physical therapist student performance in the clinical setting. J Allied Health (2004) 33(1): 62–69. |
| 36. | Davies R, Hanna E, Cott C. ‘They put you on your toes’: physical therapists’ perceived benefits from and barriers to supervising students in the clinical setting. Physiother Can (2011) 63(2): 224–33. doi: 10.3138/ptc.2010-07 |
| 37. | Rabena-Amen AK, Raja B, Davenport TE. Obstacles to physical therapy clinical instruction: a qualitative study of clinical instructors. Internet J Allied Health Sci Pract (2024) 22(4): Article 11. |
| 38. | Anderson C, Cosgrove M, Lees D, et al. What clinical instructors want: perspectives on a new assessment tool for students in the clinical environment. Physiother Can (2014) 66(3): 322–28. doi: 10.3138/ptc.2013-27 |
| 39. | Apke TL, Whalen M, Buford J. Effects of student physical therapists on clinical instructor productivity across settings in an academic medical center. Phys Ther (2020) 100(2): 209–16. doi: 10.1093/ptj/pzz148 |
| 40. | Jensen G, Mostrom E. Handbook of teaching and learning for physical therapists. 3rd ed. St. Louis, MO: Elsevier Health Sciences; 2013. |
| 41. | Recker-Hughes C, Padial C, Becker E, et al. Clinical site directors’ perspectives on clinical education. J Phys Ther Educ (2016) 30: 21–7. doi: 10.1097/00001416-201630030-00005 |