ORIGINAL RESEARCH ARTICLE

The impact of low-cost, optimal-fidelity simulation on physical therapy students’ clinical performance and self-efficacy: a pilot study

Laura Hagan1*, Shira Schecter Weiner1 and Zohn Rosen2,3

1Touro College School of Health Sciences, New York, USA; 2Department of Health Policy & Management, Mailman School of Public Health, Columbia University, New York, USA; 3Touro School of Health Sciences, New York, USA

Abstract

Purpose: Patient simulation has emerged as a useful tool to refine cognitive, psychomotor, and affective skills in realistic yet controlled settings. However, the cost associated with simulation labs can be a barrier. The purpose of this pilot study was to 1) assess the feasibility of a low-cost optimal-fidelity simulation lab integrated into an academic course and 2) assess the effectiveness of a low-cost optimal-fidelity patient simulation on self-efficacy and clinical performance of Doctor of Physical Therapy (DPT) students.

Methods: This prospective study utilized a repeated measures quasi-experimental research design. Subjects were recruited through convenience sampling from two branches of the same accredited program run separately on different campuses. Students from one campus served as the experimental group and students from the other campus served as the control group. The control group received usual training for a course on patient assessment. Simultaneously, the experimental group received usual training with the addition of simulation. The Jones and Sheppard self-efficacy questionnaire was administered at baseline (T0), after simulation (T1), and after the subjects’ first clinical experience (T2). The PT Clinical Performance Instrument (CPI) assessed clinical performance. Faculty time, space, equipment, and funds were recorded for descriptive analysis.

Results: A low-cost optimal-fidelity simulation lab was developed in a 360-square-foot room with approximately $500 of supplies. Mann–Whitney independent sample tests demonstrated no statistically significant differences between groups at any of the data collection points. Within-group changes in self-efficacy were statistically significant from T0 to T1 in the experimental group only. No statistically significant differences in CPI scores were noted between groups at the midterm or final assessment. A small-to-moderate effect size (d = 0.386) was noted.

Conclusion: The feasibility of the low-cost optimal-fidelity simulation was demonstrated by the limited cost and space requirements. In this pilot study, exposure to one simulated patient encounter appeared to accelerate the development of self-efficacy prior to a first clinical placement compared with usual training.

Keywords: self-efficacy, simulation, physical therapy, low-cost, clinical performance, fidelity

 

Citation: Journal of Clinical Education in Physical Therapy 2020, 2: 1894 - http://dx.doi.org/10.7916/jcept.v2.1894

Copyright: © 2020 Laura Hagan et al.
This is an Open Access article distributed under the terms of a Creative Commons Attribution-NonCommercial-NoDerivatives License (https://creativecommons.org/licenses/by-nc-nd/4.0/).

Published: 22 July 2020

Competing interests and funding: The authors have no conflicts of interest. The authors have not received any funding or benefits from industry or elsewhere to conduct this study.

*Laura Hagan, 320 W 31st Street, New York, NY 10001, USA. Email: Laura.Hagan@touro.edu

 

Training students to become practitioners of excellence is the goal in all healthcare training programs. The dynamic landscape of healthcare imposes increasing demands on all healthcare practitioners, which in turn increases the expectations of students entering clinical education (CE).1 Healthcare educators are concerned that students are not optimally prepared for the demands of the current clinical environment.1 The implications of this are far reaching, ranging from challenges with clinical decision-making2 to errors that may compromise patient safety.3 The simulated patient encounter has emerged as a useful teaching modality and can be tailored to include contextual challenges of the clinic that students will likely encounter during practice. Simulation has been found to refine cognitive, psychomotor, and technical skills in artificial but highly realistic and safe settings.4,5 In addition, simulation exposes students to unpredictable scenarios and challenges their clinical decision-making, helping to develop clinical readiness.

The implementation of simulated learning activities in health sciences, nursing, and medical curricula has demonstrated improvements in clinical skills6,7 and clinical decision-making.8 In recent years, the use of simulation has been increasingly integrated into physical therapy (PT) programs. In a survey of accredited entry-level PT education programs in the United States, 80 of 140 programs reported using simulation.5 Specifically, the use of high-fidelity acute care simulation in PT education has positively impacted confidence, self-efficacy,9 and learning satisfaction.10 High-fidelity simulation is typically associated with expensive technology simulation suites that mimic the patient care environment and high-tech mannequins that are programmed for physiological responses. However, high-fidelity simulation models can be cost prohibitive for small academic institutions not associated with nursing or medical programs with established simulation centers10 and can impose a barrier to establishing such programs.2 Therefore, if simulation enhances training, exploring alternatives to high-fidelity simulation may provide complementary teaching methods that allow more PT students to benefit from this educational paradigm.

Simulation fidelity refers to the extent to which a training scenario mimics the characteristics of an authentic clinical encounter.3,11 These characteristics may include the physical setting, the task required of the student, and the authenticity of the patient response. High-fidelity simulation has been assumed to be superior to low-fidelity simulation experiences.8,12 Compared to high-fidelity simulation, low-fidelity simulation utilizes minimal technology and may employ patient actors.13 However, the research findings are equivocal regarding the benefits of high-cost, high-fidelity simulation over low-cost, low-fidelity simulation.14,15 For example, no significant differences were reported in medical student performance between high-fidelity (computerized mannequins, cadavers) and low-fidelity (makeshift equipment, task trainers) simulation when exploring outcomes including basic surgical skills, auscultation, and crisis management.16 Some educators suggest an elaborate simulation setup may even be detrimental, distracting the learner from the intended task and redirecting attention towards high-tech equipment that is not vital to the learning objectives.16 The sophistication of the equipment used may be less critical for learning than the immersive encounter.12 More importantly, effective simulation can be designed to target specific learning objectives, and overemphasizing fidelity may undermine the educational purpose.12,17 Therefore, in considering simulation needs, educators may focus on replicating clinical demands rather than on expensive and perhaps unnecessary technology.

There have been tremendous advancements in the design of mannequins that replicate physiological responses to high-risk procedures administered primarily by physicians and nurses. In PT practice, however, many patient encounters involve low-risk interventions such as positional changes, gait training, and therapeutic exercise, none of which can be readily executed with existing mannequins. In fact, Maran and Glavin argue that the highest fidelity simulation model is one that uses standardized patients (SPs), actors trained to simulate cases in lieu of mannequins.3 The interface between students and a live ‘patient’ allows for the development of communication and interpersonal skills that cannot be recreated with mannequins. In addition, the use of SPs allows for both instructor and ‘patient’ feedback to the students regarding patient handling skills, communication style, and safety.18,19

Rather than attempting to meet the predefined criteria of high-fidelity or low-fidelity simulation, the authors introduce a new approach. Low-cost optimal-fidelity (LCOF) simulation strategically utilizes only the technology necessary to target educational objectives. This supports the current paradigm of effective simulation design focusing ‘on methods to enhance educational effectiveness using principles of transfer of learning, learner engagement, and suspension of disbelief’.12

Simulation can purposefully target the development of competencies among clinical students, including self-efficacy.20 In a study of cognitive and noncognitive predictors of clinical performance among physician assistant students, only self-efficacy emerged as an important noncognitive measure.21 Self-efficacy is defined as the belief in one’s own ability to influence events based on knowledge and skills that can determine motivation, effort, and performance.22 A strong sense of self-efficacy is correlated with high achievement and clinical competence in various groups of clinical students, including physician assistant, nursing, midwifery, and medical students.23 The opportunity for students to deliberately practice and improve their skills in a low-stakes environment is critical for developing proficient, entry-level clinicians.24,25 Although evidence that simulation can affect self-efficacy is just emerging, a pilot study to explore this may provide meaningful insights.

The purpose of this feasibility pilot study was twofold:

  1. to assess the feasibility of implementing an LCOF simulation lab integrated into an academic course with respect to costs, space, equipment, and time requirements;
  2. to assess the effectiveness of an LCOF patient simulation on self-efficacy and clinical performance of Doctor of Physical Therapy (DPT) students enrolled in a 3-year entry-level program.

Methods

Approval by the committee on human experimentation was secured, and written informed consent was obtained from participants. The study utilized a two-group, pre-post prospective design. Subjects were recruited through convenience sampling from two branches of the same accredited program run separately on different campuses. The program is a 3-year (six-semester) DPT program. Both campuses had identical curricula that ran concurrently but with distinct faculty. Students from one campus served as the experimental group, and students from the other campus served as the control group. Subjects who were enrolled in the second year of the DPT program (prior to their first clinical experience) met the inclusion criteria. Baseline demographic data, including sex and grade point average (GPA), were collected from all subjects.

Simulation lab development

The resources necessary to create the simulated clinical environment required faculty time, space, equipment, and funds. All resources used were recorded for analysis. To optimally utilize faculty time, a PT student served as a volunteer research assistant. Tasks designated to the research assistant included exploring the availability and pricing of simulation systems and equipment needs. A storage closet was repurposed, and underutilized equipment owned by the department was reclaimed (Fig. 1). A process for utilizing program-provided tablets and an online meeting platform was established, thereby allowing faculty to observe simulations in real time from a mock control room (an unused classroom) and record simulations for debriefings (Fig. 2). Upon completion of the simulation lab setup, an LCOF simulation was integrated into an existing course for a cohort of 30 students. The course teaching assistant served as the SP as part of their course responsibilities. The simulation lab could be easily modified to replicate either a ‘typical’ inpatient or outpatient setting. The control/observation room was a classroom adjacent to the simulation lab, from where the simulation could be observed. To capture and view the simulation in real time from outside the lab, two tablets video-recorded and projected the encounter using an online meeting platform (https://zoom.us) to a television screen in the observation room. This setup allowed for an optimal patient simulation delivered on a low budget (Table 1).

Fig 1
Fig. 1.  Low-cost simulation lab setup.

Fig 2
Fig. 2.  Observing simulation from observation room.

Table 1. Resources and cost for low-cost optimal-fidelity simulation lab development
Resource                                  Cost
Personnel
 -Research assistant                      PT student volunteer, est. 60 h
 -Standardized patient                    Included in teaching assistant duties
 -Sim lab coordinator                     Faculty donated time, 50 h x two faculty (100 h)
Simulation supplies
 -Hospital bed                            Available through department
 -Bed linens                              $25
 -Bedside table                           $50
 -Foley catheter                          $4
 -IV pole                                 $36
 -IV fluids and blood transfusions        $60
 -Alcohol wipes                           $5
 -Oxygen tank                             $215
 -Gloves                                  Available through department
 -Reflex hammer                           Available through department
 -Stethoscope                             Available through department
 -Goniometer                              Available through department
 -Hospital gown, slippers                 Available through department
 -Incentive spirometer                    $8
 -Sphygmomanometer                        Available through department
 -Hip abductor pillow                     $35
 -Mobility assistive devices (wheelchair, walkers, canes, crutches)   Available through department
 -Pulse oximeter                          $30
Technology
 -Tablets                                 Available through department
 -Tablet holders                          $47
 -Online meeting platform                 Available through department
 -Monitor/television                      Available through department
 -WiFi                                    Available through department
Space
 -Repurposed storage space                20 feet x 17 feet

Simulation case development and procedures

Two simple clinical scenarios were used in the LCOF simulation: 1) an adult neuromuscular outpatient case and 2) an adult musculoskeletal acute inpatient case. Three expert clinicians reviewed the cases for clinical accuracy, with final cases agreed on by consensus, to ensure face validity. At the start of the simulation, students scheduled to see the outpatient case received a prescription requesting PT, with no other medical records. A patient chart was available for the inpatient case, providing vital signs, diagnosis, lab values, and a brief medical history. Students were instructed to perform a clinical assessment of an SP played by the course’s teaching assistant, a licensed, practicing physical therapist. The SP was provided with the case 2 weeks in advance of the simulations, enabling him/her to explore the presentation and allowing the faculty to address any outstanding questions about the case. Prior to conducting the first simulation, the faculty and SP had an in-person meeting to discuss the details of the case.

Prior to each simulation, the researchers conducted a procedural briefing to prepare the students for the patient encounter without offering clinical guidance. Students were assured that the simulation was a formative learning experience with no associated assessment. Due to time and staff constraints, students in the experimental group were divided into groups of 4–5. One student was designated the ‘PT’ by the instructor, and the other students were selected as active ‘observers’. Each simulated encounter was viewed in real time by the faculty and observers from the observation room. The encounter was recorded for use during the debriefing, and observers were provided with guidelines to prompt reflection on noteworthy moments occurring during the simulation.

A debriefing session followed immediately upon concluding the simulation. All students (‘PT’ and observers) and faculty (including the teaching assistant/SP) participated actively. Video recordings of the patient simulation were used as needed to facilitate recall and optimize the discussion.

Effectiveness of LCOF simulation

Self-efficacy data were collected immediately prior to the simulation (T0), following exposure to the LCOF simulation in the experimental group (T1), and after completion of the first clinical experience (T2). CPI measures were collected during the clinical experience at midterm and final.

In the experimental group, simulation was embedded into a course, and therefore, all students participated as part of the course requirement. However, students had the option to decline completing the self-efficacy surveys.

Prior to the first clinical affiliation, all students were enrolled in a course entitled Physical Therapy Examination. This course focused on preparing students to screen patients and determine appropriateness for PT services. The structure of the course allowed for embedding of simulations. The controls, who were enrolled in the identical course on another campus, received usual training, which consisted of lectures, paper cases, and class discussion. Simultaneously, the experimental group received usual training plus an LCOF patient simulation module. The primary outcome measures used were the Jones and Sheppard self-efficacy questionnaire and the PT Clinical Performance Instrument (CPI).25,26 No outcome measure to date has been universally accepted as both valid and reliable to measure physical therapist student self-efficacy. However, the Jones and Sheppard self-efficacy questionnaire was used in this study based on preliminary evidence supporting its validity and correlation to clinical performance (Appendix 1).25 The instrument contains 13 questions with a five-point Likert scale for each question; scores range from 13 to 65. To assess clinical performance, the PT CPI was used.26 The CPI is an industry-wide accepted measure for evaluating PT student performance during clinical placements. Responses from the self-efficacy questionnaire were recorded on three separate occasions: at baseline (T0, prior to simulation), after simulation (T1, post-patient simulation), and after the subjects’ first clinical experience (T2, post-clinical experience). After T1, students began their 6-week full-time clinical experiences. Midterm and final CPI scores were culled from administrative departmental records (between T1 and T2). Item analysis and summative CPI scores were considered for both groups.
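To make the instrument’s scoring concrete, the following minimal Python sketch totals 13 five-point Likert items as described above; the function name and example responses are hypothetical and are not drawn from the study data.

```python
# Minimal scoring sketch for a 13-item, five-point Likert questionnaire
# such as the Jones and Sheppard scale (total score range: 13-65).
# The function name and example responses are hypothetical.

def total_self_efficacy(responses):
    """Sum 13 Likert items, each scored 1-5, into a total score."""
    if len(responses) != 13:
        raise ValueError("The instrument has exactly 13 items.")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("Each item is scored on a 5-point scale (1-5).")
    return sum(responses)

# A respondent answering 3 to every item scores 39; the possible range
# runs from 13 (all items scored 1) to 65 (all items scored 5).
print(total_self_efficacy([3] * 13))  # -> 39
```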

Analysis

Descriptive data were calculated for faculty time, space, equipment, and funds. Comparisons of self-efficacy between the experimental and control groups at all three time points were conducted using Mann–Whitney independent sample tests. Experimental group subjects were considered to have had LCOF exposure regardless of the role they played during the simulation (PT or observer). Repeated measures analyses were performed using a longitudinal subsample of matched subjects who participated at each compared time point. Two within-subject individual-level change scores were derived for each subject using the repeated self-efficacy measures (T1 vs. T0; T2 vs. T1) (Table 2). Self-efficacy scores at earlier time points were subtracted from later values to yield the individual-level change scores. Tests of the differences in within-group mean changes were then calculated over each time interval (T1 vs. T0; T2 vs. T1) independently for both the experimental and control groups using Wilcoxon signed-rank tests. Kruskal–Wallis testing was used to assess between-group differences in CPI scores at midterm and final. All analyses were conducted using SPSS version 25.0 (IBM Corp., Armonk, NY), and the level of significance was set at 0.05. Cohen's d effect sizes were calculated for change at the final time point to aid future sample size calculations.
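For readers who wish to reproduce this style of analysis, the sketch below applies the same nonparametric tests using SciPy rather than SPSS; the scores are illustrative placeholders, not the study data.

```python
# Sketch of the study's nonparametric analyses on hypothetical data
# (the actual study data are not published; values below are illustrative).
from scipy.stats import mannwhitneyu, wilcoxon

# Hypothetical self-efficacy totals at baseline (T0) and post-simulation (T1)
# for matched subjects in each group.
exp_t0 = [34, 38, 29, 41, 36, 33, 40, 35, 37, 31]
exp_t1 = [40, 43, 35, 44, 42, 38, 45, 39, 41, 37]
ctl_t0 = [36, 39, 30, 42, 35, 34, 41, 33, 38, 32]
ctl_t1 = [37, 39, 31, 43, 36, 35, 42, 34, 38, 33]

# Between-group comparison at a single time point (independent samples).
u, p_between = mannwhitneyu(exp_t1, ctl_t1, alternative="two-sided")
print(f"Mann-Whitney U at T1: U = {u}, P = {p_between:.3f}")

# Within-group change over T0 -> T1 (paired, matched subjects only).
for label, t0, t1 in [("experimental", exp_t0, exp_t1),
                      ("control", ctl_t0, ctl_t1)]:
    w, p_within = wilcoxon(t1, t0)
    # Two comparisons per group, so judge each against alpha = 0.05 / 2 = 0.025.
    print(f"Wilcoxon {label} T1 vs. T0: W = {w}, P = {p_within:.4f}")
```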

Table 2. Self-efficacy scores

                         Pre-simulation (T0)   Post-simulation (T1)   Post-clinical (T2)
Experimental group
 Sample size             n = 23                n = 25                 n = 18
 Mean (SD)               36.04 (7.90)          41.20 (7.57)           45.44 (6.30)
 Sample size+            T1 vs. T0: n = 20     T2 vs. T1: n = 12
 Within-subject change   T1 vs. T0: Z = 2.32, P = 0.02*    T2 vs. T1: Z = 0.788, P = 0.43
Control group
 Sample size             n = 22                n = 30                 n = 31
 Mean (SD)               37.82 (10.43)         41.60 (7.10)           44.94 (8.14)
 Sample size+            T1 vs. T0: n = 19     T2 vs. T1: n = 23
 Within-subject change   T1 vs. T0: Z = 0.55, P = 0.59     T2 vs. T1: Z = 2.30, P = 0.02*
*Denotes significant finding.
+Analyses performed using the longitudinal subsample of matched subjects who participated at each compared time point.

Results

A 360-square-foot LCOF simulation lab was developed. In addition to available equipment, approximately $500 was spent on supplemental supplies (Table 1). To accomplish this, 60 h of volunteer research assistant time and 100 h of faculty time (two faculty, 50 h each) were utilized.

Table 2 shows mean self-efficacy scores for the experimental and control groups across all time points, along with the sample sizes and varied response rates at each time point.

At baseline, no differences in self-efficacy were noted between the experimental and control groups (P = 0.628). There were also no differences after the intervention at T1 (P = 0.237) or after the clinical placement at T2 (P = 0.727). Comparisons of changes within each group (experimental and control) yielded a different pattern of results. These analyses used the subsample of subjects who had data at each of the time points included in the analysis. Within the experimental group, subjects demonstrated a statistically significant increase in self-efficacy after the simulation training (T1) versus before (T0) (Z = 2.32, P = 0.02). Over this same time period (T1 vs. T0), control subjects did not demonstrate a statistically significant change in self-efficacy (Z = 0.55, P = 0.59).

Looking at the interval between T1 and T2, when all students, regardless of experimental assignment, received their clinical placement and training, control subjects demonstrated statistically significant improvement in self-efficacy (Z = 2.30, P = 0.02), while experimental subjects did not (Z = 0.788, P = 0.43). Because multiple two-tailed tests were conducted, an alpha correction was applied; with two comparisons within each group (T1 vs. T0; T2 vs. T1), the alpha level was adjusted to 0.025. No statistically significant changes in CPI scores were noted between groups at midterm or final. The change in self-efficacy demonstrated a small-to-moderate effect size (d = 0.386).

Discussion

This pilot study demonstrated that integrating an LCOF simulation lab into an existing course was feasible and had an impact on student self-efficacy. The researchers established an operational simulation lab and seamlessly integrated simulation into the existing curriculum. Utilizing readily available resources facilitated the expeditious development of a low-cost simulation lab. While the resources typically found in a high-fidelity lab were not available, the lab could be used for both inpatient and outpatient cases. Using tablets, which are available to all faculty and students, and an online meeting platform allowed for video recording of the simulations for use during debriefings. Faculty of the course altered the course syllabus to accommodate the simulation experience. Although the resources readily available will differ across PT academic programs, utilizing those that can be accessed should allow a similar LCOF lab to be replicated in most environments. This challenging process was accomplished through faculty creativity, drive, and determination, along with departmental backing and minimal financial support. High-fidelity simulation laboratory setup costs have been reported to approach $100,000;27 here, for a fraction of that cost, a utilitarian simulation laboratory was established that met student and departmental needs.

The second objective of this pilot study was to examine whether LCOF simulation resulted in increased self-efficacy. Self-efficacy is an important criterion for clinical performance,25 and there is evidence that simulation positively impacts self-efficacy.28 Self-efficacy is associated with clinical competence and clinical performance, and therefore, promoting self-efficacy among students seems advisable. The findings from this study showed that students exposed to one LCOF simulation demonstrated an improvement in self-efficacy comparable to that seen in the control group after 6 weeks in the clinic. Thus, prior to a first clinical experience, exposure to LCOF simulation accelerated the development of self-efficacy in the experimental group compared to control group students exposed to usual training without simulation. Sending students to their clinical placements with enhanced self-efficacy as a result of simulation exposure therefore seems advantageous, in that the students may demonstrate enhanced clinical competence and clinical performance. Although enhanced clinical competence was not evident in the CPI data, this may reflect a shortcoming in the sensitivity of the CPI in measuring significant change in student performance.

Although the results of this pilot study did not show a significant difference between the experimental and control subjects, there was a small-to-moderate effect size for self-efficacy. Cohen's d effect sizes were calculated for change at the final time point to determine an appropriate sample size that would demonstrate significance if this study were to be repeated. To achieve a power level of 0.8 at a one-tailed significance level of 0.05, the effect size calculation indicates that a minimum of 168 subjects (84 per group) would be needed. This suggests that additional studies with larger sample sizes are indicated to explore the relationship between low-fidelity simulation and self-efficacy.
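This sample size estimate can be checked with a standard power calculation. The sketch below uses statsmodels, a tool chosen here for illustration (the article does not specify which software produced the estimate), to solve for the per-group n given d = 0.386.

```python
# Reproducing the prospective sample size estimate for a two-sample
# comparison at d = 0.386, power = 0.8, one-tailed alpha = 0.05.
# statsmodels is an assumed tool; the article does not name one.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.386,     # Cohen's d reported in the study
    alpha=0.05,            # significance level
    power=0.8,             # desired statistical power
    alternative="larger",  # one-tailed test
)
# Prints roughly 83-84; rounding up gives 84 per group, 168 total,
# matching the figure reported in the text.
print(f"Required sample size per group: {n_per_group:.1f}")
```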

Robust studies are also needed to better understand the importance of simulation in PT training. Healthcare programs, including those without access to costly simulation labs, should identify ways to creatively embed simulation into the curriculum and explore the impact on student development. By doing so, opportunities for data generation will naturally evolve that can inform the optimal utility of simulation in the PT curriculum.

There are several limitations to this study. The instruments measuring outcomes may not have been sufficiently sensitive to detect subtle changes. Despite demonstration of the utility of the Jones and Sheppard scale for assessing self-efficacy in PT students, there is evidence to suggest that it may not be sufficiently sensitive to detect self-efficacy across all skills and practice areas.23 It is possible that the simulations used in this study challenged specific skills and cases for which the instrument is less sensitive. Therefore, in future studies, the skills challenged, the case, and the instrument must align.

The CPI, the standard instrument utilized in clinics throughout the United States, was used to assess student clinical performance, yet no significant findings were noted. Others have reported similar limitations of the CPI.29,30 The CPI was designed to assess student performance and change in performance over time; it was not designed to compare performance differences between student groups. Therefore, the effect of simulation on clinical performance was not shown, and alternative measures are needed.

Standards of best practice for simulation training are available for nursing and medicine; however, none currently exist for PT.31,32 Translation of best practice into training for physical therapy is lacking. Calhoun et al.33 noted that ‘there is limited infrastructure available to assist programs in translation of these best practices into more standardized educational approaches’. Therefore, PT faculty have little foundation to guide the development of profession-specific simulation training.

The sample used for this study was one of convenience, which included only one academic cohort. Thus, the findings may not be generalizable. Not all students in the experimental group had identical simulation exposure, because some acted as the PT during the simulation, while others observed. This approach is supported in the literature, yet allowing all students to experience the role of the PT may have impacted the outcome.34 As a pilot study, the sample size was small and insufficient to detect significant changes.

Conclusion

High-fidelity simulation training demands resources that can be limited in most academic programs, including funding, laboratory space, equipment, and faculty time. The simulation laboratory in this setting was developed on a shoestring budget, utilizing available resources and repurposing space. Many of the clinical proficiencies that PT students must master can be challenged using LCOF simulation. The main purpose of this study was to test the feasibility of implementing LCOF simulation into an existing curriculum. The findings suggest that developing a simulation laboratory is possible. Once established, LCOF simulation may be easily implemented.

The addition of a single LCOF simulation to PT training significantly enhanced the development of self-efficacy prior to a first clinical experience, producing gains comparable to those of usual training plus 6 weeks of clinical experience in this pilot study. While the final analysis showed that ultimate self-efficacy levels were comparable for the control and experimental groups, the increased self-efficacy resulting from LCOF simulation may have better prepared DPT students for a first clinical placement.

Ethical approval

This study was approved by the Touro College School of Health Sciences IRB Committee on August 11, 2016 (reference no.: #HSIRB1668).

References

  1. Gonzalo JD, Haidet P, Papp KK, et al. Educating for the 21st-century health care system: an interdependent framework of basic, clinical, and systems sciences. Acad Med (2017) 92(1): 35–9. doi: 10.1097/ACM.0000000000000951
  2. Kellett J, Papageorgiou A, Cavenagh P, et al. The preparedness of newly qualified doctors – views of foundation doctors and supervisors. Med Teach (2015) 37(10): 949–54. doi: 10.3109/0142159X.2014.970619
  3. Maran NJ, Glavin RJ. Low- to high-fidelity simulation – A continuum of medical education? Med Educ (2003) 37(Suppl 1): 22–8. doi: 10.1046/j.1365-2923.37.s1.9.x
  4. Motola I, Devine LA, Chung HS, et al. Simulation in healthcare education: a best evidence practical guide. AMEE guide no. 82. Med Teach (2013) 35(10): 1511. doi: 10.3109/0142159X.2013.818632
  5. Stockert B, Ohtake PJ. A national survey on the use of immersive simulation for interprofessional education in physical therapist education programs. Simul Healthc (2017) 12(5): 298–303. doi: 10.1097/SIH.0000000000000231
  6. Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA (2011) 306(9): 978–88. doi: 10.1001/jama.2011.1234
  7. Norman J. Systematic review of the literature on simulation in nursing education. ABNF J (2012) 23(2): 24–8.
  8. Sabus C, Macauley K. Simulation in physical therapy education and practice: opportunities and evidence-based instruction to achieve meaningful learning outcomes. J Phys Ther Educ (2016) 30(1): 3–13. doi: 10.1097/00001416-201630010-00002
  9. Silberman NJ, Panzarella KJ, Melzer BA. Using human simulation to prepare physical therapy students for acute care clinical practice. J Allied Health (2013) 42(1): 25–32.
  10. Shoemaker MJ, Riemersma L, Perkins R. Use of high fidelity human simulation to teach physical therapist decision-making skills for the intensive care setting. Cardiopulm Phys Ther J (2009) 20(1): 13–18. doi: 10.1097/01823246-200920010-00003
  11. Laschinger S, Medves J, Pulling C, et al. Effectiveness of simulation on health profession students’ knowledge, skills, confidence and satisfaction. JBI Libr Syst Rev (2008) 6(7): 265–309. doi: 10.1111/j.1744-1609.2008.00108.x
  12. Hamstra SJ, Brydges R, Hatala R, et al. Reconsidering fidelity in simulation-based training. Acad Med (2014) 89(3): 387–92. doi: 10.1097/ACM.0000000000000130
  13. Massoth C, Röder H, Ohlenburg H, et al. High-fidelity is not superior to low-fidelity simulation but leads to overconfidence in medical students. BMC Med Educ (2019) 19(1): 29. doi: 10.1186/s12909-019-1464-7
  14. Meurling L, Hedman L, Lidefelt K, et al. Comparison of high- and low equipment fidelity during paediatric simulation team training: a case control study. BMC Med Educ (2014) 14: 221. doi: 10.1186/1472-6920-14-221
  15. Nimbalkar A, Patel D, Kungwani A, et al. Randomized control trial of high fidelity vs. low fidelity simulation for training undergraduate students in neonatal resuscitation. BMC Res Notes (2015) 8: 636. doi: 10.1186/s13104-015-1623-9
  16. Norman G, Dore K, Grierson L. The minimal relationship between simulation fidelity and transfer of learning. Med Educ (2012) 46(7): 636–47. doi: 10.1111/j.1365-2923.2012.04243.x
  17. Schoenherr JR, Hamstra SJ. Beyond fidelity: deconstructing the seductive simplicity of fidelity in simulator-based education in the health care professions. Simul Healthc (2017) 12(2): 117–23. doi: 10.1097/SIH.0000000000000226
  18. Weller JM, Nestel D, Marshall SD, et al. Simulation in clinical teaching and learning. Med J Aust (2012) 196(9): 594. doi: 10.5694/mja10.11474
  19. Park JH, Son JY, Kim S, et al. Effect of feedback from standardized patients on medical students’ performance and perceptions of the neurological examination. Med Teach (2011) 33(12): 1005–10. doi: 10.3109/0142159X.2011.588735
  20. Paskins Z, Peile E. Final year medical students’ views on simulation-based teaching: a comparison with the best evidence medical education systematic review. Med Teach (2010) 32(7): 569–77. doi: 10.3109/01421590903544710
  21. Opacic DA. The relationship between self-efficacy and student physician assistant clinical performance. J Allied Health (2003) 32(3): 158–66.
  22. Bandura A. Human agency in social cognitive theory. Am Psychol (1989) 44(9): 1175–84. doi: 10.1037/0003-066x.44.9.1175
  23. van Lankveld W, Jones A, Brunnekreef JJ, et al. Assessing physical therapist students’ self-efficacy: measurement properties of the physiotherapist self-efficacy (PSE) questionnaire. BMC Med Educ (2017) 17(1): 250. doi: 10.1186/s12909-017-1094-x
  24. Thomas EM, Rybski MF, Apke TL, et al. An acute interprofessional simulation experience for occupational and physical therapy students: key findings from a survey study. J Interprof Care (2017) 31(3): 317–24. doi: 10.1080/13561820.2017.1280006
  25. Jones A, Sheppard L. Self-efficacy and clinical performance: a physiotherapy example. Adv Physiother (2011) 13(2): 79–83. doi: 10.3109/14038196.2011.565072
  26. Roach KE, Frost JS, Francis NJ, et al. Validation of the revised physical therapist clinical performance instrument (PT CPI): version 2006. Phys Ther (2012) 92(3): 416–28. doi: 10.2522/ptj.20110129
  27. Hanberg A, Brown SC, Hoadley T, et al. Finding funding: the nurses’ guide to simulation success. Clin Simul Nurs (2007) 3(1): 5–9. doi: 10.1016/j.ecns.2009.05.032
  28. Hough J, Levan D, Steele M, et al. Simulation-based education improves student self-efficacy in physiotherapy assessment and management of paediatric patients. BMC Med Educ (2019) 19(1): 463. doi: 10.1186/s12909-019-1894-2
  29. Silberman N, Litwin B, Panzarella K, et al. Student clinical performance in acute care enhanced through simulation training. J Acute Care Phys Ther (2015) 1: 25–37. doi: 10.1097/JAT.0000000000000021
  30. O’Connor A, McGarr O, Cantillon P, et al. Clinical performance assessment tools in physiotherapy practice education: a systematic review. Physiotherapy (2018) 104(1): 46–53. doi: 10.1016/j.physio.2017.01.005
  31. Sittner BJ, Aebersold ML, Paige JB, et al. INACSL standards of best practice for simulation: past, present, and future. Nurs Educ Perspect (2015) 36(5): 294–8. doi: 10.5480/15-1670
  32. Cooke JM, Rooney DM, Fernandez GL, et al. Simulation center best practices: a review of ACS-accredited educational institutes’ best practices, 2011 to present. Surgery (2018) 163(4): 916–20. doi: 10.1016/j.surg.2017.11.004
  33. Calhoun AW, Nadkarni V, Venegas-Borsellino C, White ML, Kurrek M. Concepts for the simulation community: development of the international simulation data registry. Simul Healthc (2018) 13(6): 427–34. doi: 10.1097/SIH.0000000000000311
  34. Bonnel W, Hober C. Optimizing the reflective observer role in high-fidelity patient simulation. J Nurs Educ (2016) 55(6): 353–56. doi: 10.3928/01484834-20160516-10

Appendix 1

Self-efficacy Questionnaire: Jones and Sheppard (2011)

(1)    My training has adequately prepared me for my clinical placement.

(2)    My training has adequately prepared me for verbally communicating effectively and appropriately.

(3)    My training has adequately prepared me for communicating in writing effectively and appropriately.

(4)    My training has adequately prepared me for performing subjective assessments during my clinical placement.

(5)    My training has adequately prepared me for performing objective assessments during my clinical placement.

(6)    My training has adequately prepared me for interpreting assessment findings.

(7)    My training has adequately prepared me for identifying and prioritizing patients’ problems during my clinical placement.

(8)    My training has adequately prepared me for selecting appropriate short- and long-term goals during my clinical placement.

(9)    My training has adequately prepared me for appropriately performing treatments during my clinical placement.

(10)  My training has adequately prepared me for performing discharge planning during my clinical placement.

(11)  My training has adequately prepared me for evaluating my treatments during my clinical placement.

(12)  My training has adequately prepared me for progressing interventions appropriately during my clinical placement.

(13)  My training has adequately prepared me for dealing with the range of patient conditions which may be seen while on my clinical placement.