ANALYSIS OF CLINICAL EXPERIENCES IN ATHLETIC TRAINING EDUCATION PROGRAMS AND PERFORMANCE ON THE BOC CERTIFICATION EXAMINATION

Except where reference is made to the work of others, the work described in this dissertation is my own or was done in collaboration with my advisory committee. This dissertation does not include proprietary or classified information.

__________________________________
James Shelby Searcy, Jr.

Certificate of Approval:

_________________________                    _________________________
David Shannon                                Peter Hastie, Chair
Professor                                    Professor
Educational Foundations,                     Health and Human Performance
Leadership, and Technology

_________________________                    _________________________
Henry N. Williford                           Joe F. Pittman
Professor                                    Interim Dean
Foundations, Secondary, and                  Graduate School
Physical Education

ANALYSIS OF CLINICAL EXPERIENCES IN ATHLETIC TRAINING EDUCATION PROGRAMS AND PERFORMANCE ON THE BOC CERTIFICATION EXAMINATION

James Shelby Searcy, Jr.

A Dissertation Submitted to the Graduate Faculty of Auburn University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Auburn, Alabama
December 15, 2006

ANALYSIS OF CLINICAL EXPERIENCES IN ATHLETIC TRAINING EDUCATION PROGRAMS AND PERFORMANCE ON THE BOC CERTIFICATION EXAMINATION

James Shelby Searcy, Jr.

Permission is granted to Auburn University to make copies of this dissertation at its direction, upon request of individuals or institutions and at their expense. The author reserves all publication rights.

__________________________
Signature of Author

__________________________
Date of Graduation

VITA

James Shelby Searcy, Jr., son of J. Shelby and Rachel (McDonald) Searcy, was born December 6, 1959, in Andalusia, Alabama. He graduated from Greenville High School in 1978. In August 1978, he entered Louisiana State University in Baton Rouge, Louisiana, and received a Bachelor of Science degree in Health and Physical Education in December 1982. He entered the Graduate School at Auburn University in May 1983 and received a Master of Education degree in Health and Human Performance in August 1984. After a career as an athletic trainer in which he served as a collegiate head athletic trainer, clinical athletic trainer, and coordinator of sports medicine, he reentered the Graduate School in January of 2000 to pursue a Doctor of Philosophy degree at Auburn University. He currently serves as an assistant professor in the Human Performance program at Huntingdon College, Montgomery, Alabama. He is married to the former Allison Godwin, daughter of Jesse and Martha Godwin of Greenville, Alabama.

DISSERTATION ABSTRACT

ANALYSIS OF CLINICAL EXPERIENCES IN ATHLETIC TRAINING EDUCATION PROGRAMS AND PERFORMANCE ON THE BOC CERTIFICATION EXAMINATION

James Shelby Searcy, Jr.

Doctor of Philosophy, December 15, 2006
(M.Ed., Auburn University, 1984)
(B.S., Louisiana State University, 1982)

167 Typed Pages

Directed by Peter Hastie

The purpose of this study was to identify the general design of the clinical education experience and determine which factors of this experience might influence performance on the BOC Certification Examination. Part 1 of the study utilized a 47-item Internet questionnaire to gather information from ATEP program directors concerning the design of their clinical education experiences and the instruments used to evaluate the level of proficiency achieved by athletic training students.
Part 2 of the study involved the use of a 33-item questionnaire to gather information from certified athletic trainers who passed the BOC examination in 2005. General information concerning clinical education experiences and performance on the certification examination was collected. Both questionnaires included closed-ended items with appropriate space for expressing comments.

Responses from both program directors and ATCs revealed a fairly consistent design for the clinical education experience. Students are typically admitted to the ATEP during the second or third academic term of enrollment. The clinical education experience generally lasts 5 to 6 terms, includes 5 to 6 clinical experiences, and involves placement in 2 to 3 clinical settings. Student-to-ACI ratios are generally low, 3 to 4 students per ACI. Students are evaluated frequently and consistently throughout the educational experience. Logistic regression analysis of factors associated with the clinical education experience identified GPA and an early start to the clinical experiences as potential predictors for passing the written simulation and practical parts of the certification examination on the first attempt. MANOVA procedures determined a statistically significant relationship between the number of clinical settings and performance on parts of the examination. Post hoc analysis indicated that the number of clinical settings does not have a significant effect on passing both the written simulation and practical parts of the examination, but does influence passing at least one of these two parts. These results support the rotation of students through 3 or more different clinical settings during the clinical education experience.

ACKNOWLEDGMENTS

I would like to express my appreciation to each committee member for the guidance and support each has provided during this project: Drs. Peter Hastie, David Shannon, and Hank Williford. I would like to thank Mr. Lauren Russell for his assistance with the web publishing of the survey questionnaires and data collection. Appreciation is also extended to the staff of the Board of Certification of Athletic Trainers for their efforts to assist with the verification of participants' examination scores. Recognition is likewise extended to those program directors and certified athletic trainers who completed the questionnaires despite busy schedules. I am greatly indebted to Allison, Tyler, Mac, and Jennifer Leigh for the patience and understanding they have shown during the years that I have worked to complete this degree and research project. Words cannot express my heartfelt gratitude for the sacrifices each of you has made so I might complete this project. I also thank my parents for the encouragement they have continuously provided throughout my education. I also thank my fellow faculty members and the administration at Huntingdon College for the encouragement and support they have shown over the years as I have worked to complete this degree.

Style manual or journal used: APA Manual
Computer software used: Microsoft Word

TABLE OF CONTENTS

LIST OF TABLES ....................................................................................... xi
I.    Introduction ........................................................................................1
          Purpose of the Study ......................................................................6
          Research Questions ........................................................................7
          Significance of the Study ................................................................7
          Operational Definitions ...................................................................9
          Limitations of the Study ................................................................10
          Assumptions of the Study ..............................................................10
II.   Review of Literature ..............................................................................12
          Assessment in Higher Education ....................................................12
          Assessing Competency in Higher Education ..................................15
          Development of an Assessment Method ........................................19
          Assessment of Medical and Allied Health Students ........................21
          Assessment of Students' Clinical Skills in Medical Education
          Programs ........................................................................................28
III.  Analysis of Clinical Education Experiences in Athletic Training
      Education Programs ..............................................................................49
          Introduction ....................................................................................49
          Methods .........................................................................................55
          Results ...........................................................................................58
          Discussion ......................................................................................71
          Conclusion .....................................................................................75
          Bibliography ...................................................................................77
IV.   Clinical Education Experiences of Athletic Training Students
      and Performance on the BOC Certification Examination .......................79
          Introduction ....................................................................................79
          Methods .........................................................................................84
          Results ...........................................................................................86
          Discussion ....................................................................................105
          Conclusion ...................................................................................110
          Bibliography .................................................................................112
V.    Summary ............................................................................................114
Bibliography ...............................................................................................117
Appendix ....................................................................................................128
      Survey of Evaluation Methods used for the Assessment of
      Clinical Proficiencies ...........................................................................129
      Cover Letter ........................................................................................140
      Consent Form ......................................................................................141
      Follow-up Notification .........................................................................144
      An Assessment of Methods used to Evaluate Athletic Training
      Students During the Clinical Experience and Performance on the
      BOC Examination ................................................................................145
      Cover Letter ........................................................................................151
      Consent Form ......................................................................................153
      Follow-up Post-card ............................................................................155

LIST OF TABLES

Tables:
3.1   Beginning Year/Term of Enrollment Starting Clinical Experiences ..............58
3.2   Required Number of Academic Terms and Clinical Experiences .................59
3.3   Number of ACIs used in ATEP and Average Number of Years Experience ...60
3.4   Student to ACI Ratio ..................................................................................61
3.5   Number of Clinical Settings and ACIs Associated with each
      Clinical Setting ............................................................................................62
3.6   Employment Characteristics of ACIs ..........................................................63
3.7   Setting for Initial Evaluation and Mastery of Clinical Skills ........................64
3.8   Types of Evaluation Instruments Frequently used for Assessing ATSs'
      Clinical Skills ..............................................................................................65
3.9   Frequency of Evaluation of Clinical Skills ..................................................65
3.10  Use of Assessment Instruments for Evaluating Competencies within
      Educational Domains ...................................................................................67
3.11  Characteristics of Models used for Assessment of ATSs'
      Clinical Skills ..............................................................................................68
3.12  Job Responsibilities of Program Directors ..................................................70
3.13  Characteristics of Program Directors ..........................................................70
4.1   Required Clinical Hours for Admission to ATEP ........................................87
4.2   Required Number of Academic Terms and Clinical Experiences .................87
4.3   Types of Courses and Credit for Clinical Experiences .................................88
4.4   Beginning of Clinical Experiences and Evaluation of
      Clinical Skills ..............................................................................................89
4.5   Clinical Settings for Placement and Evaluation ...........................................90
4.6   Number of Clinical Settings for Placement and Evaluation
      Performed ....................................................................................................91
4.7   Setting for First Evaluation and Evaluation of Mastery of
      Clinical Skills ..............................................................................................92
4.8   Evaluation Instruments used for Assessing Clinical Skills ...........................93
4.9   Summary of Frequency of Evaluations of Clinical Skills .............................93
4.10  Summary of Total Number of Evaluations of Clinical Skills .......................94
4.11  Overall GPA at Graduation .........................................................................95
4.12  Summary of BOC Examination Results ......................................................95
4.13  Characteristics of AT Survey Participants ...................................................96
4.14  Summary of BOC Examination Scores .......................................................97
4.15  Cross-tabulation Analysis Summary: Passing WS and PR
      on First Attempt ...........................................................................................99
4.16  Summary of Logistic Regression Results ..................................................102
4.17  Summary of MANOVA Analysis: Passing WS and PR Parts of
      BOC Examination on First Attempt ............................................................104

CHAPTER 1

INTRODUCTION

The National Athletic Trainers' Association (NATA) was founded on the stated purpose to "build and strengthen the profession of athletic training through the exchange of ideas, knowledge, and methods of athletic training" (O'Shea, 1980). Within a few years after its founding, the organization's Board of Directors recognized a need for developing educational programs as an avenue to promote professionalization of athletic training. The first model athletic training curriculum was approved by the NATA in 1959. This model emphasized attainment of secondary-level teaching credentials and inclusion of courses corresponding to the prerequisites required for acceptance into professional physical therapy programs (Delforge and Behnke, 1999).

The first undergraduate athletic training education programs received certification by the NATA in 1969. Curricula at these programs were based on a model originally proposed by the NATA in 1959. Student interests led to the development of specific courses addressing the prevention and care of athletic related injuries. These education programs began to develop their own identity, and by the 1970s the NATA had developed a list of behavioral objectives for inclusion in an athletic training curriculum.
Competency checklists were developed to serve as a guide for monitoring the development of essential clinical skills by students in these programs. The combination of behavioral objectives, competency checklists, and supervised field experiences re-emphasized the NATA's goal to develop athletic training into a specialized learning experience with its own unique body of knowledge (Delforge & Behnke, 1999).

In the 1970s, the NATA Professional Education Committee published a list of behavioral objectives identifying desired outcomes for athletic training courses based on 11 specific courses. This represented a significant effort by the NATA to identify a specialized body of knowledge required of those practicing in the profession. Also, clinical clock-hour requirements and a skill-competency checklist were created to guide and monitor students' development of clinical skills (Weidner and Henning, 2002). However, the behavioral objectives did not represent a true competency-based educational program because objectives were dictated and restricted by the content required in existing courses (Weidner and Henning, 2002). Efforts to establish learning objectives and identify courses relevant to the curriculum did bring attention to the unique body of knowledge required of the certified athletic trainer. The behavioral objectives developed during this time became the conceptual framework for the first edition of the Competencies in Athletic Training, developed by the Professional Education Committee in 1983 (Delforge and Behnke, 1999).

In 1983, the NATA Professional Education Committee published Guidelines for Development and Implementation of NATA Approved Undergraduate Athletic Training Education Programs. This document would serve as a guide for the development of undergraduate athletic training education programs. The document included specific content areas to include in the curriculum rather than specific courses. This allowed greater flexibility for individual institutions in the development of their curriculum. Included within the Guidelines was the Competencies in Athletic Training. Specific competencies were assigned to "performance domains" based on the 1983 role delineation study of athletic trainers by the NATA Board of Certification (NATABOC). The combination of subject area requirements and specific competencies marked the beginning of a true competency-based educational program for athletic trainers (Delforge and Behnke, 1999).

Historically, the NATABOC offered two routes to achieving certification. One required completion of a formal educational program in athletic training, while the other took more of a "hands-on" approach with minimal required academic courses. From the 1970s until 2003, students could meet the eligibility requirements to take the certification examination by completing 600 to 800 clinical hours in an approved/accredited program, or 1800 clinical hours as an apprentice and later 1500 hours as an internship student. When compared to the development of clinical education experiences in medical schools, professional preparation in athletic training historically emphasized more clinical experience and less didactic instruction (Weidner and Henning, 2002). Since publication of the 1983 Guidelines, the leadership of the NATA has continued to standardize the educational requirements and improve the consistency of learning experiences for those desiring to work in the profession.
In 1997, the organization adopted a policy requiring those pursuing careers as certified athletic trainers to graduate from an accredited entry-level athletic training education program. This policy became effective in January 2004, and all alternative internship programs were eliminated.

Compared to other allied health professions (i.e., physical therapy, nursing, and medicine), athletic training is a relatively young profession. The leadership of the organization has developed a framework for educating students, professional certification of athletic trainers, continuing education of certified professionals, and research in the areas of prevention and care of athletic injuries. Methods of student assessment have not been the focus of significant investigation during recent educational reform. Although accreditation standards and guidelines require demonstration of continued assessment of students and documentation of competency in clinical proficiencies, there are few published articles investigating methods used to assess student performance, specifically the demonstration of competency associated with clinical proficiencies.

According to the newly formed Commission on Accreditation of Athletic Training Education (CAATE) Standards for the Accreditation of Entry-Level Athletic Training Education Programs (2005, p. 13), athletic training education programs must "routinely secure qualitative and quantitative data to determine the outcomes and effectiveness of the program". A comprehensive assessment plan to evaluate all components of the program must be developed, which may include completed clinical proficiency evaluations. Each student's clinical experiences must be arranged in a manner that allows program faculty/staff to evaluate student progression and learning on a regular and frequent basis. "Students' clinical experiences must be conducted in such a way to allow the ATEP faculty/staff to regularly and frequently evaluate student progress and learning, as well as the effectiveness of the experience" (Standards for Accreditation, 2005, p. 13).

Traditional assessment techniques such as written examinations, checklists, oral/practical examinations, and videotaping have been used to assess students and their progression. Most assessment tools used are developed by the program director due to the individual nature of programs. These tools are often developed without regard for validity and reliability, but are seen as a way to document the completion of clinical competencies, thus fulfilling accreditation requirements.

In comparison to athletic training education, methods used to assess students in nursing, physical therapy, and physician education programs have received significantly more attention. In addition to traditional methods of assessment (written examinations, oral presentations, and checklists), a significant amount of attention has been directed at assessing the clinical performance of these allied health students. Individuals involved in educating these students have investigated the use of faculty, students, peers, patients, and nursing personnel in the assessment of clinical performance. Methods of assessment such as observation, written communication, oral communication, simulation, and self-evaluation have been studied. Instruments used for observing performance have included checklists, rating scales, anecdotal records, critical incidents, and videotaping. Instruments used for the evaluation of written communication have included care plans, paper and pencil tests, and process recording. For oral communication skills, clinical conferences/post-conferences and multidisciplinary conferences have been incorporated into the assessment process. Objective structured clinical examinations (OSCEs), role-playing, and interactive media have been used to evaluate clinical skills. Journals and personal interviews have been used as instruments for self-evaluation (Gomez, Lobodzinski, and Hartwell, 1998).

In a similar manner to students in other allied health educational programs, athletic training students are required to do more than just demonstrate a level of knowledge concerning materials within the discipline. They are also required to demonstrate a certain level of competency for specific clinical proficiencies. Accreditation standards stipulate that students must be provided sufficient clinical experiences in an athletic training setting and must receive academic credit for that clinical experience (Standards for the Accreditation, 2005). The athletic training clinical experience is described as that portion of the student's preparation that includes the formal acquisition, practice, and evaluation of clinical proficiencies through involvement in classroom, laboratory, and clinical education experiences under the direct supervision of an ACI or a clinical instructor (NATA Education Council, 2006). As with students in other allied health education programs, instruments are necessary for the assessment of athletic training students in these clinical experiences. However, there have been few published works concerning instruments used to evaluate students during clinical experiences. This lack of sufficient documentation of investigative research regarding methods used to assess the clinical performance of athletic training students leads one to seek additional information concerning the instruments and methods being used to assess the required level of competency for clinical proficiencies.

Purpose of the Study

Athletic training education includes the use of didactic classroom instruction and clinical education experiences in the professional preparation of athletic training students. Studies have examined various student factors (i.e., GPA and clinical hours) in an effort to determine possible relationships associated with performance on the Board of Certification (BOC) certification examination. However, there is no published information concerning methods used to evaluate these students during the clinical experience and the possible influence of those methods on performance on the certification examination. Thus the purpose of this study is threefold. The first purpose is to gather information from program directors related to the design of athletic training clinical experiences and the methods used to evaluate students during the clinical experience. A second purpose is to determine if athletic training students view the design of the clinical experience and methods of assessment in a similar manner to the program directors. A final purpose is to examine what factors associated with the clinical experience might influence performance on the certification examination.

Research Questions

1. What type of evaluation instruments do program directors perceive as beneficial for assessing clinical skills within specific domains?
2. What is athletic training students' understanding of the design of the clinical education experience?
3. What influence do clinical education factors, such as number of evaluations, frequency of evaluations, number of clinical settings, types of evaluation instruments, and number of evaluation instruments, have on performance on the Written Simulation and Practical portions of the certification examination?

Significance of the Study

Athletic training clinical education is an important component in the professional preparation of athletic training students. Clinical education can be viewed as a triad involving the interaction of setting, instruction, and student (Weidner & Henning, 2002). It is during clinical education that the theoretical and practical components of the curriculum are integrated into the real-life situations of dealing with the physically active. Clinical education is essential to the transformation from novice student to competent practitioner. In health care professions, clinical instruction and clinical experiences are vital to the training and education of the professional, and competency-based education has emerged as the focal point for ensuring that these professionals are prepared to serve the public (Weidner and Henning, 2002).

There is an interest in determining what factors influence, or contribute to, successful performance on the certification examination. Program coordinators, and others involved in the education of these students, desire to have a better understanding of the factors influencing students' academic and clinical performance in athletic training education programs. Students' grade point averages have been found to have a positive correlation with certification examination performance (Harrelson, Gallaspy, Knight, and Leaver-Dunn, 1997), while the number of clinical hours has shown no significant correlation with performance on the examination (Sammarone, Comfort, Perrin, and Gieck, 2000). Portfolios have been shown to be a good tool for evaluating the overall experience of students during the clinical experience and assessing the structure of the clinical experience (Hannam, 1995). The role of clinical education and clinical instructors in preparing students for careers in athletic training has been the focus of several investigations. Studies have examined characteristics of helpful clinical instructors (Laurent and Weidner, 2001; Meyer, 2002), behaviors of clinical supervisors (Curtis, Helion, and Domsohn, 1998; Foster and Leslie, 1992), interpersonal communication between clinical instructors and students (Swan, 2002), learning styles of athletic training students (Coker, 2000; Draper, 1989; Harrelson, Leaver-Dunn, and Wright, 1997), clinical-placement hours of athletic training students (Miller and Berry, 2002), clinical experiences as a predictor of performance on the certification examination (Sammarone et al., 2000), and pedagogic strategies perceived to enhance student learning (Mensch and Ennis, 2002).

Assessment is an important part of any education program. It is through regular assessment that the strengths and weaknesses of a program can be determined. The same is true concerning the assessment of students. Through assessment, program coordinators can determine if students are retaining the information and skills learned, progressing in their educational experiences, and developing the personal characteristics essential to enter a professional career. It is essential that those involved in athletic training clinical education have an understanding of the evaluation instruments that can be incorporated into the assessment of students. Obtaining information regarding the use of various types of evaluation instruments is the first step to improving the assessment of students. This study seeks to determine what evaluation instruments are frequently used in the assessment of athletic training students during their clinical education experience.

Operational Definitions

1. Approved clinical instructor (ACI): "An appropriately credentialed professional identified and trained by the program Clinical Instructor Educator to provide instruction and evaluation of the Athletic Training Educational Competencies and/or Clinical Proficiencies."
2. ATEP: Athletic Training Education Program.
3. Athletic training facility/clinic: "The facility designated as the primary site for the preparation, treatment, and rehabilitation of athletes and those involved in physical activity."
4. Athletic training student (ATS): "A student enrolled in the athletic training major or graduate major equivalent."
5. Clinical education: "The application of knowledge and skills, learned in classroom and laboratory settings, to actual practice on patients under the supervision of an ACI/CI."
6. Clinical experiences: "Those clinical education experiences for the athletic training student that involve patient care and the application of athletic training skills under the supervision of a qualified instructor."
7. Clinical instructor (CI): "A certified athletic trainer who is responsible for supervising athletic training students during the clinical education experience."
8. Learning over time (Mastery of Skills): "The process by which professional knowledge and skills are learned and evaluated. This process involves the initial formal instruction and evaluation of that knowledge and skill, followed by a time of sufficient length to allow for practice and internalization of the information/skill, and then a subsequent re-evaluation of that information/skill in a clinical (actual or simulated) setting." (Standards for the Accreditation of Entry-Level Athletic Training Education Programs, 2005, pp. 20-21, 23)

Limitations of the Study

1. Data related to examination scores and passing of portions of the BOC certification examination are self-reported and may differ if collected during another period due to failure to recall information.
2. Results of the study should be generalized only to undergraduate athletic training education programs, and indirectly to other allied health education programs with similar concerns, needs, and preferences.

Assumptions of the Study

1. All subjects in this study are competent and knowledgeable sources of the requested data.
2. Respondents will answer the questionnaire with honesty/integrity.
3. Respondents will not discuss their answers with anyone who also may be a participant in the study, to reduce possible influence on other subjects' responses.

CHAPTER 2

REVIEW OF LITERATURE

Academic assessment of students in institutions of higher education has come under increasing scrutiny in recent years. The use of subjective measures to assess the knowledge and performance of college students has been criticized, and the trend is toward increased emphasis on the development of instruments designed to objectively measure student performance. The concern relative to the assessment of student performance has become very evident among instructors in medical and allied health education programs.
The following review of literature will address issues of student assessment in allied health education programs, methods used to assess student performance, and instruments used to assess students in entry-level athletic training education programs.

Assessment in Higher Education

In recent years there has been a renewed emphasis on assessing college students' learning. Brown, Bull, and Pendlebury (1997, p. 21) define learning as "changes in knowledge, understanding, skills, and attitudes brought about by experience and reflection upon the experience." Learning can either take the form of a structured process involving classroom lectures and laboratory demonstrations, or an unstructured process through casual discussions with peers. Students learn by reading, listening, observing, talking to and with others, and by doing things. For a student to achieve the full benefits of a learning experience there is a need for feedback from others and the opportunity to reflect on the experience. Assessment, whether by students, peers, or instructors, is a valuable tool for obtaining information related to the experience and a method for supplying a student with meaningful feedback.

To have a better understanding of the assessment process, one must understand how an individual learns. Two "dominant orientations" of learning can be identified in the learner. Learners may have a knowledge-seeking orientation in which they search for facts and information. Knowledge-seekers may be somewhat mechanical in the learning process, showing little if any interest in speculating about ideas. This process can lead to a more superficial type of learning, with little development of an understanding of the meaning of theories, concepts, or principles. These learners tend to prefer building upon experiences in a gradual manner and tend to be more dependent learners (Brown et al., 1997).

A second learning style is the understanding-seeking orientation. The understanding-seeking learner is more interested in the personal meaning of the experience and less interested in facts. This learner will generally attempt to relate a current learning experience to earlier experiences in an effort to explore potential connections, linkages, and discrepancies. Understanding-seekers are generally more intrinsically motivated than knowledge-seekers and more responsive to the situation at hand. Also, understanding-seekers tend to exhibit characteristics of being more creative and independent learners. Most students will exhibit a predominant style of learning. However, learning style can be influenced by the learning environment and the assessment process (Brown et al., 1997).

Students who are predominantly knowledge-seeking learners will tend to perform better in a learning environment that is highly structured, with the use of instructor lectures and written examinations. Students who are understanding-seekers tend to perform better in an environment in which they are allowed a choice in the material that is studied, a flexible approach to teaching and learning, and a variety of assessment methods. Whatever learning style a student exhibits, an instructor must be attuned to the type of learning environment presented to students. A learning environment in which active learning is encouraged, in a carefully planned and properly implemented manner, will generally be received more favorably by both students and instructors.
Research indicates that freedom in learning, combined with good teaching, encourages students to develop their own conceptions. This can lead to a deeper learning approach and ultimately enhance conceptual learning. Skills that are developed through experiential learning (e.g., simulations, projects, and work placements) will also enhance the conceptual understanding of learners and strengthen the ability to make appropriate applications of those skills that have been learned (Entwistle's study as cited in Brown, Bull, and Pendlebury, 1997). This becomes an important factor relative to the educational experiences and training of those who will assume roles as medical and allied health professionals (i.e., physical therapists, nurses, physician assistants, athletic trainers).

The type of assessment method incorporated into the educational process will influence the learning style of students. Objective examinations (i.e., matching and multiple-choice questions) tend to promote a more knowledge-seeking style of learning. Projects, essay exams, and other forms of open-ended assessment methods tend to promote independent learning and a more understanding-seeking style of learning (Brown et al., 1997).

It is important to remember that one style of learning is not necessarily superior to another. Some instructors would argue that knowledge-seekers are inferior to understanding-seekers. Although having the ability to reason and connect current experiences with past experiences is an important part of learning, the learning of specific facts and information must not be overlooked. Physicians cannot make an accurate diagnosis of a heart condition if they do not have adequate knowledge of the structure and function of each part of the organ. The development of both knowledge-seeking and understanding-seeking orientations by students should be encouraged in educational settings (Brown et al., 1997).

Assessing Competency in Higher Education

Since the 1980s, educators have received increasing pressure to address accountability issues in higher education. Higher education administrators have been under increasing pressure to demonstrate to stakeholders that academic programs are effective at producing desired outcomes relative to the knowledge, skills, and attitudes expected of students. Students are interested in the capability of selected academic programs to prepare them as competitive candidates for employment in a chosen profession. Parents have concerns relative to the rising cost of an education and the quality of education to be offered by the selected institution. Administrators have concerns regarding the ability to demonstrate that their institutions can provide a quality educational experience to the satisfaction of political leaders, so as to receive much needed state and federal financial support. Also, the globalization of the world economy has brought significant changes to the business world and to the knowledge and skills expected of entry-level employees (Banta, 2001). These factors have contributed to the increasing calls for accountability in higher education and have brought to the forefront the issue of competence-based educational programs and the assessment of students enrolled in these programs.

In 1988, Secretary of Education William Bennett issued an executive order requiring accrediting agencies to include in accreditation criteria documentation of the educational achievement of students. The order also required institutions to verify that degrees were conferred on students who demonstrated the achievement of educational goals specific to the degree. The executive order led accrediting agencies to revise the criteria by which institutions and educational programs would be accredited. It also placed considerable importance on program outcomes, especially the demonstration of student learning. In 1998, amendments to the Higher Education Act applied the force of law to the 1988 executive order. Accrediting agencies seeking recognition by the Department of Education are required to evaluate an institution's, or educational program's, ability to achieve and maintain the established learning outcomes. Also, these agencies are required to assess the quality of education provided to students and the efforts of the institution's, or program's, leadership to improve the quality of education (Higher Education Amendments, 1998). These requirements illustrate the increasing importance of accountability in higher education. The emphasis on accountability has focused attention on the continuing need for developing effective methods to assess student learning, student performance, and program effectiveness.

Requirements for documenting achievement of specific educational outcomes and student learning for the fulfillment of accreditation standards have led to an increase in the development of competency-based educational programs, especially in academic programs designed to prepare medical and allied health professionals. Competence has been defined as "a knowledge, skill, ability, personal quality, experience, or other characteristic that is applicable to learning and success in school or in work" (Wheeler and Haertel's study as cited in Banta, 2001). The National Postsecondary Education Cooperative concluded that in the hierarchy of learning, competence is viewed as "sitting atop the apex of a hierarchy of experiences that have currency for the learner" (Jones and Voorhees, 2002). The ability to demonstrate, or perform, a specific task is the result of applying competence and serves as the basis for performance assessment. For those involved in student performance assessment to ensure the attainment of competence, there must be a description of the competency, a method for measuring the competency, and a standard by which the student is judged to be competent (Palomba, 2001). These requirements must be satisfied in an effort to establish effective methods of instruction and performance assessment in a competency-based education program.

In professional allied health programs (i.e., nursing, physical therapy, and medicine), accreditation of the academic program has become a requirement due to calls for accountability by those receiving services from these professionals. Professional programs such as these require students to complete some type of field experience, whether it is a clinical experience, internship, or capstone project. The purpose of such an experience is to provide students with sufficient opportunities to apply the knowledge and skills learned in the curriculum in a real-world setting. Field experiences provide instructors an opportunity to assess the performance of students by methods beyond traditional paper and pencil examinations. The experiences provide opportunities to assess students' knowledge and skills in a variety of unique situations relative to the selected profession.

In the profession of athletic training, educational reform has led to establishing competency-based educational programs and adopting required clinical experiences similar to those required in academic programs for physical therapists and nurses. The establishment of required clinical experiences has created a need for developing and validating various methods to assess the competence of students relative to the knowledge and skills of an entry-level certified athletic trainer. Also, use of these assessment methods will assist in evaluating the effectiveness of an academic program in achieving desired outcomes and will guide curriculum development.

An area of interest in student assessment for those entering professions such as nursing, physical therapy, and medicine relates to the specific skills that must be mastered by those intending to practice in their respective profession. Terms such as "skill" and "skill development" have often been perceived rather negatively by those in higher education. Some in higher education have viewed "skill" as implying less emphasis on discipline content and more of an industrial or technical orientation. However, the development of specific skills is essential to the education of many professionals. Brown et al. (1997) defined skill as a "goal-directed sequence of actions" that is learned and becomes routine action. The actions are learned so well that feedback is provided as the action is performed, which enables a person to adjust to the task at hand. Skills can be characterized as a pattern of sequenced actions in response to a specific cue rather than an isolated response to a situation. If the sequence is disrupted, one is required to stop and consider what action(s) must occur next. Skills are associated with learning new methods and theories, and include perceptual, cognitive, and psychomotor components of varying proportions. The proportion of each component will vary among skills. For example, demonstration of competence relative to the evaluation of a possible ankle injury does not mean the student is competent to evaluate the foot for possible injury. Although a student would use similar skills when evaluating both types of injuries, there are specific skills that must be mastered to successfully complete each. Students must have an ability to recognize specific characteristics of each possible injury in order to complete an evaluation in a systematic manner using the appropriate skills. These factors support the importance of appropriate instruction and assessment of the specific skills for which students must demonstrate proficiency in a competence-based education program.

Development of an Assessment Method

The development of a method for assessing student progression is a time-consuming and difficult task. An evaluator must identify the following: the primary purpose for the assessment of the students; the specific cognitive, psychomotor, or affective traits to be evaluated; when the students will be evaluated; and the frequency of evaluation. Also, evaluators must have an understanding of the general learning characteristics of the students who are being evaluated. Prior to beginning the process of developing an assessment method, there are several questions that must be addressed. First, an evaluator must consider if the primary purpose for the assessment is to grade students, or merely to identify students who have passed or failed an assignment/course.
If the purpose is to grade students, then the method of assessment should be designed considering various levels of students' understanding. If the purpose of assessment is to pass/fail students, then the assessment method should be designed so that the knowledge or skills tested are at an appropriate level for students. After the purpose of the assessment is determined, an examiner must decide if the assessment method will be formative or summative. If the purpose is to gain information regarding the knowledge and skills learned by students, relative to specific content, then a formative method of assessment is desired. If the purpose is to determine overall performance, relative to specific course objectives, then a summative method of assessment should be implemented. Examiners must consider the available resources, the length of time needed to develop the assessment method, and the time required for review and grading of the assessment. Also, an examiner must consider what type of feedback the assessment will provide for students (Bradshaw, 2001).

Regarding grading of the assessment method, examiners must consider the marking scheme. Is the purpose of the marking scheme solely for grading, or for providing feedback concerning student progression? If the marking scheme is for grading purposes, then there should be concern regarding the reliability of the grading process. An examiner must not sacrifice reliability of grading in an attempt to get assignments graded quickly. If the marking scheme is for providing feedback to students, then one must be willing to spend the necessary time to grade in a manner that will provide effective feedback to them. This requires the examiner not only to note student errors, but also to include specific key points that draw attention to the errors, assist students in correcting their mistakes, and ultimately promote learning.

Assessment of Medical and Allied Health Students

A major challenge for instructors in medical and allied health education programs relates to the identification of the specific knowledge and skills students must learn, and the development of methods to accurately measure mastery of the required knowledge and skills. The majority of these programs use classroom lectures and laboratory sessions for learning and practicing specific skills, and clinical experiences in which students are required to apply their knowledge and skills in real-life situations. Each learning experience requires developing methods to assess students' progression. Although classroom and laboratory experiences will generally include written examinations and checklists, respectively, the assessment of students in clinical experiences presents more of a challenge for instructors and clinical supervisors. Instructors and clinical supervisors are increasingly concerned with using assessment methods that emphasize objective measurement rather than subjective appraisal. This concern has increased due to the required implementation of competency-based curricula for allied health education programs to achieve and maintain accreditation.

Clinical experiences for medical and allied health students play a significant role in the professional preparation of the students. During these experiences a student is required to apply both knowledge and skills learned in classroom and laboratory experiences to real-life situations. Clinical experiences are not intended for a student to act independently, but to act under the supervision of an allied health professional.
The purpose of clinical performance assessment is to determine the student's ability to implement an established standard of patient care. The assessment process becomes a critical part of the criteria for determining a student's progression to the next level of the program. This same process can also be used to determine if a student needs additional instruction and/or practice relative to required competencies. The process for evaluating a student's clinical experience should be conducted in a systematic manner that includes preparation of materials, collection of information (data), interpretation of that information, and feedback to the student. Each phase requires an examiner to fulfill certain criteria when developing the evaluation process. Systematic evaluation of students will help ensure the validity and reliability of the assessment process.

Effective assessment should begin with establishing a clear set of standards upon which students' performance will be judged. Performance standards provide a basis for clinical evaluation and provide students the opportunity to achieve program objectives identified within the cognitive, psychomotor, and affective domains. During the preparation phase, an appropriate clinical setting must be selected that will provide students with opportunities to achieve the desired outcomes. Specific criteria to be demonstrated by each student prior to entering the clinical experience must be developed. Examiners must establish an effective method for communicating assessment results between faculty and students. Also, an examiner must select and/or develop appropriate instruments to use when assessing students' performance.

Following the preparation phase, information (data) must be collected regarding student performance. Instruments used for collecting information must be checked for reliability and validity. An additional concern for the examiner will be the level of student supervision during the assessment process. Patient safety and care must not be compromised during the assessment. Increased supervision may create anxiety among some students. Criteria to consider for determining the supervision level include:
The final assessment report should end on a positive note in an effort to help motivate the individual performance. Development of an assessment method for clinical experiences requires instructors and supervisors to consider who will perform evaluations, when evaluations will be performed, and how information received from each evaluation will be used in judging student progression. When considering who will evaluate students in the clinical setting the examiner can select from faculty, peers, patients, allied health professionals, the student, or any combination of persons. The advantages and disadvantages relative to 24 each should be considered prior to developing an assessment method. Faculty will have the most knowledge regarding assessment purposes and standards by which students will be evaluated. However, faculty can only sample behaviors that provide limited information relative to the actual performance of students. Evaluations by fellow students can benefit those participating in similar clinical experiences through the communication of the evaluation process. One disadvantage to peer assessment is the potentially biased judgments expressed by some students due to a desire to provide favorable feedback and not cause problems for a fellow student. Allied health professionals can also serve as examiners, providing valuable information concerning student performance. Information provided by these professionals can contribute to the validity and reliability of the assessment process. However, if an allied health professional?s expectations are different from those of an instructor then a discrepancy in the judgment of performance will likely occur. Also, these professionals may not have adequate time in their already busy schedules to adequately assess student performance. Patients can be involved in the assessment process by providing information from the perspective of a consumer. Patients can provide information regarding the affective behaviors of a student. Patient opinion will likely be based on previous experiences, which can draw into question the reliability of the information. Also, students can evaluate their own performance. This can serve as a learning process in itself, empowering a student to make choices relative to personal strengths and weaknesses. Self-assessment is useful as long as the process is objective and not a biased opinion, disclosing only favorable evidence of personal performance. (Gomez, Lobodzinski, and Hartwell, 1998). 25 Examiners must consider when each student will be evaluated and the frequency of the evaluations. Clinical experiences will often vary in length from several weeks to a year in length. If an instructor desires to provide students with formative information regarding performance then students must be assessed throughout the clinical experience. Assessment frequency will depend on the length of the clinical experience and resources available for assessment. Students should be assessed early during the clinical experience to ensure acceptable standards of patient care are not compromised, allow adequate time for correcting any deficiencies, and allow time for reassessment of performance. Assessment at the end of the clinical experience is summative in nature, providing students with information relative to overall performance. A variety of assessment methods have been employed for evaluating students during clinical experiences. 
Methods often used include: (1) observations; (2) written communications; (3) oral communications; (4) simulations; and (5) self-evaluations. Due to the complexity of clinical experiences, the use of more than one method for the assessment of students is recommended. The following is a description of frequently used assessment methods.
Observation is the most frequently employed method when evaluating a student's clinical performance. This method provides faculty a means for judging the performance of students and providing feedback. Observations can be made on an informal or formal basis. Students will generally progress from a state of constant observation to one in which a supervisor is available to intervene if necessary, but is not standing over the shoulder of the student. There is always the threat of biased opinion and subjectivity that can decrease the reliability of the evaluation (Gomez et al., 1998). Subjectivity can be reduced by establishing specific criteria for evaluating students' performance and by training observers to recognize both acceptable and unacceptable behavior. The documentation of observed behaviors is accomplished by using anecdotal notes, critical incidents, rating scales, checklists, and videotaping. Each method has advantages and disadvantages relative to the time required to record the observation, the information obtained relating to the student's performance, and the feedback to be provided to the student.
Written communications provide a method to evaluate a student's ability to translate ideas onto paper. For an allied health student, opportunities for written communication can be provided through documentation of a patient history, findings of a physical assessment, and development of a care plan. Several common methods used in allied health programs include charting and notes regarding patient care, patient care plans, paper and pencil tests, and recordings of student/patient interactions. Written communications provide a method that evaluates student knowledge and the ability to document information. Evaluation of written communication must be included in the assessment process because of its importance in the profession. However, the amount of time required to grade and provide effective feedback on written communications limits how frequently this method can be utilized (Gomez et al., 1998).
Verbal communication provides another means for assessing students' knowledge and critical thinking skills. This evaluation method can also be used to assess group interactions, and more than one student at a time may be assessed. Group discussions and conferences, both interdisciplinary and multidisciplinary, allow students to share experiences and provide opportunities for students to learn from the experiences of others. This can be advantageous to the learning process of many students. However, this method can be a disadvantage to students who feel threatened by sharing their knowledge (or lack thereof) and being evaluated publicly, and possibly critically, by others (Gomez et al., 1998).
Another method commonly used when assessing clinical skills is that of simulations. Simulations can help control students' behaviors as indicated by the learning objectives. This method provides a way to judge specific clinical practices without having to wait for opportunities to occur in clinical settings. Simulations permit assessing a variety of student skills and traits and can provide immediate feedback to the student.
Simulation methods frequently used in medical and allied health programs include role-playing, interactive multimedia programs, standardized patients, clinical problem-solving laboratories, and objective structured clinical examinations. Advantages to using simulations include the variety of situations and scenarios that can be presented to students, the ability to emphasize interaction between the client and clinician, and the ability to provide students with realistic experiences without the risk of harming a patient in the decision-making and care-giving process (Gomez et al., 1998). The disadvantages to using simulations include the significant amount of time and effort required to develop the simulations, the training of models to be used in the simulations, the accuracy of models in presenting realistic simulations, and the cost of implementation.
Using self-evaluation can also be beneficial to the student. Self-evaluation provides a method for students to examine their progress on an ongoing basis. This method provides opportunities for students and instructors to discuss strategies for enhancing future learning experiences and addressing specific areas that need improvement. Self-evaluation enables students to analyze problem areas and, given appropriate time and effective feedback, correct them. The disadvantage to using self-evaluation is the time and effort required. Instructors must meet with students on a regular basis, which can be difficult when supervising several students. Instructors must possess the skills necessary to interact with students in an effective manner, which requires developing and attending workshops and seminars. Also, students must be willing to provide a critical and honest judgment of self. Instruments frequently used for self-evaluation include journals of the clinical experiences, portfolios, and personal interviews with instructors (Gomez et al., 1998).
Assessment of Students' Clinical Skills in Medical Education Programs
One must presume the primary purpose of clinical experiences in many medical and allied health education programs is to provide students opportunities to refine skills and to demonstrate the ability to effectively perform tasks such as history taking, physical examination, concepts and procedures, and the care and treatment of patients. Student assessment during clinical experiences has remained a difficult area among faculty. For many years oral and/or written examinations were used exclusively to assess student progress. However, faculty and students have continually believed these methods do not provide an accurate measurement of student performance in an environment that is primarily patient-oriented. Oral and written examinations cannot document how students interact with patients or the effectiveness of students' behaviors toward patients during clinical experiences. Simulations provide an alternative method to effectively evaluate student performance prior to, during, and following clinical experiences.
Use of standardized patients for assessment purposes. Simulations are not a new concept for the teaching and evaluation of students in professional programs. In 1964, Howard Barrows and Stephen Abrahamson published the first article that examined simulations for the assessment of students during the clinical experience. For the investigation, a "programmed patient" was used to test junior medical students during their neurological clerkship. The "programmed patient"
was defined as a "simulation in which a normal person is trained to assume and present on examination the history and, in this case, the neurological findings in a similar manner to an actual patient" (Barrows and Abrahamson, 1964, p. 803). A professional model was trained to exhibit characteristics typical of an actual patient. Students were allowed 30 minutes to obtain information concerning the present illness, review neurological symptoms, past history, and family history, and perform a neurological examination. Student behaviors and performance were evaluated by both the examiner (student) and the examinee (programmed patient). Faculty then reviewed the reports with each student. The authors summarized that the "programmed patient" provided an effective evaluation of the student's clinical performance.
Observing a student in the role of a physician has always been an essential component in evaluating the skills of the medical student. However, concerns that introducing a third-person observer might interrupt the normal interaction between patient and medical student prompted the introduction of the "programmed patient." Using the "programmed patient" eliminated possible contamination of the interaction between student and client. Also, using the "programmed patient" guaranteed that the client was constant for all students being tested. The authors concluded that use of a consistent "patient" helped faculty determine strengths and weaknesses of the teaching program through careful analysis of errors made by the students. In addition, the recorded performance of each student could be analyzed for purposes of individualized instruction and counseling (Barrows and Abrahamson, 1964).
Following that publication, medical schools began involving "programmed patients" more frequently to assess the performance of medical students. Referred to as "simulated patients," these trained models began participating in a variety of education settings (ranging from undergraduate training to specialty certification) and assessing the quality of care provided by health care professionals. In 1982, Norman, Tugwell, and Feightner reported on a study validating the use of simulated patients. Comparisons were made with respect to the performance of medical residents on a simulated patient or a real patient presenting a similar clinical problem. Information from four actual patients with chronic conditions and relatively stable physical findings was used to train a simulated patient of similar age and gender. Competencies assessed included history taking, physical examination, significant data, diagnosis, and laboratory utilization. Differences in resident performance between the simulated and real patients were characterized relative to data-gathering activities, critical data elicited from the patient, diagnosis, and use of investigations. Results indicated there were no significant differences in the performance of the residents between a simulated and a real patient in the areas of history taking and physical examination. A significant difference was found in the amount of data elicited from the simulated and real patients, due primarily to a memory problem of one real patient. No differences were observed between the simulated and real patients concerning the amount of data-gathering activity, or the formulation of a diagnosis and planned intervention. Also, residents were able to correctly identify clients as a real patient or a simulated patient in 67% of the cases.
These findings supported the use of simulated patients as a valid approximation of students' cognitive performance relative to actual patients (Norman et al., 1982).
Simulated patients have been shown to provide a method for the standardized presentation of patient illnesses. Barrows, Williams, and Moy (1987) used simulated patients to assess performance of standardized tasks required of fourth-year medical students. The investigation involved administering a multiple-station examination that included 16 problems. Students were to take a patient history, perform a physical examination, and educate or counsel a patient relative to diagnosis and management. After each simulated patient encounter, students proceeded to a central location to answer questions relevant to the patient problem. Data for the analysis and scoring of each student's performance were obtained by using selected observations reported by the simulated patient after each student encounter and from each student's responses to the test questions. Information relative to the performance of each student was provided in the areas of development of working hypotheses, data collection, test selection, diagnosis, data analysis, management, communication, interpersonal skills, test interpretation, and adequacy of the student's knowledge related to each problem. Analysis of results indicated students' performance on the comprehensive clinical assessment correlated well with clerkship (clinical) performance (r = .65, p < .001). This indicated the assessment measured the same behaviors faculty considered important when rating students' clinical competence. A reported reliability index of 0.75 demonstrated the examination provided a reasonably consistent measure of clinical competence that was reproducible for cases from the patient population selected. Also, the students' performance provided the faculty with information regarding areas of instruction that needed improvement. Clinical assessment using simulated patients provided an objective and consistent method to assess selected clinical competencies of medical students over a broad range of common and important patient problems (Barrows et al., 1987).
In a similar report, Stillman and Swanson (1987) described the use of standardized patients to ensure the clinical competence of medical school graduates. Standardized patients were described as non-physicians who were highly trained to simulate patient encounters and to function in multiple roles, such as patient, teacher, and evaluator, by using their own bodies as teaching tools. The investigation demonstrated that standardized patients, when used in a systematic manner, could provide a reliable evaluation of many components of clinical competency. Standardized patients could be used to improve training, provide diagnostic evaluations, and supplement student experiences. Symptomatic standardized patients, with stable positive findings on physical examination, could be trained to teach and evaluate various neurological, musculoskeletal, cardiovascular, pulmonary, and orthopedic examinations. Highly trained standardized patients could provide feedback regarding the techniques used during examination and ascertain whether the student correctly identified any abnormalities during the physical examination. Standardized patients could also be used as part of a comprehensive evaluation plan with graduating medical students to assess each student's competencies relative to critical clinical skills that must be mastered prior to graduation.
The authors concluded that standardized patients provided an effective method for teaching interviewing and physical examination skills, and provided students with controlled exposures to common ambulatory problems and difficult patient-communication situations (Stillman and Swanson, 1987).
Continuing the investigation of clinical skills assessment, Stillman, Regan, Swanson, Case, McCahan, Feinblatt, Smith, Willms, and Nelson (1990) again used standardized patients to study students' potential deficiencies in clinical skills and to ascertain the adequacy of current methods used for assessing clinical skills. Standardized patients were used to assess students' ability to obtain a focused history, perform a physical examination on patients with a common chief complaint, and provide patient education and counseling. Performance with the standardized patients was compared with performance during the clerkship and with scores on the National Board of Medical Examiners (NBME) Examination, Parts I and II. The study also examined the feasibility of collaborative assessment efforts among four medical schools. Results indicated there was little relationship between scores on the standardized patient assessment and grades for the clerkship. Grades for the clerkship and the standardized patient scores did not identify the same students as having questionable clinical skills. A review of clerkship evaluations for students performing poorly on the standardized patient examination revealed the majority of these students received grades of at least "pass" in the clerkship. Results from a questionnaire completed by the students indicated 35% of the students reported that no faculty member had observed them performing a complete history and physical examination on a real patient. Also, 22% reported they had been observed only once by a faculty member when performing these tasks.
Discrepancies between performance on the standardized patient examination and grades for the clerkship led the researchers to suspect that clerkship grades may not provide an adequate index of clinical skills. The authors also reported a low correlation between scores on the standardized patient examination and NBME scores. The low correlations were attributed primarily to the fact that the NBME reflects student knowledge rather than clinical skills. It was concluded that although standardized patients should not serve as substitutes for frequent observations of medical students interacting with real patients, the use of simulated patients provides important information about students that may not be available through other sources. The use of standardized patient examinations may assist in identifying those students who are performing at marginal levels (Stillman et al., 1990).
Due to similar concerns regarding the clinical competencies of medical students at various levels of training, the University of Texas Medical Branch (UTMB) developed and implemented an instructional program that included the use of standardized patients for the teaching and assessment of medical students. Standardized patients were incorporated into the first-year medical interviewing course, second-year diagnosis course, third-year clerkship, fourth-year student curriculum, and residency-training program. Standardized patient encounters were designed to supplement the experiences of the students but not to substitute for direct patient encounters in clinical settings. Standardized patients were trained for multiple roles according to protocols adapted to course goals and examinees'
level of training. A staff member observed each standardized patient's performance during the initial clinical portrayal, and at periodic intervals thereafter, to ensure consistency of presentation.
Throughout each student's training, standardized patients were used to assess the student's clinical skills. Before entering the clinical experience, each student completed a 90-minute standardized patient encounter. Following the encounter, each student received immediate verbal feedback from the standardized patient. The purpose of the feedback was to identify errors and omissions and to provide positive reinforcement regarding correct student performance. This format was received favorably by students and demonstrated excellent agreement between standardized patient and faculty ratings of student competencies (Ainsworth, Rogers, Markus, Dorsey, Blackwell, and Petrusa, 1991). Standardized patient encounters were also used extensively to evaluate fourth-year students before graduation. This provided faculty with opportunities to directly observe the comprehensive clinical skills of the senior students. Students were typically appreciative of the direct observations and the feedback from faculty about their clinical performances (Ainsworth et al., 1991).
Overall, involving standardized patients to train and assess students' performance yielded positive outcomes in the investigation. Although it was stated that much research was still needed relative to standardized patients, several conclusions were drawn. First, standardized patients offered the opportunity to establish an absolute standard of performance based on direct observation of clinical skills. Second, assessment of clinical skills with standardized patients at regular intervals allowed for longitudinal analysis of student competency. This enabled instructors to track students' performance as they progressed through the curriculum. Third, using standardized patients during testing allowed instructors to identify the critical clinical skills and establish expected performance criteria for clinical competency. In conclusion, attention to these factors would greatly assist instructors in their efforts to train physicians (Ainsworth et al., 1991).
Use of simulations for the assessment of clinical skills. Hanna (1991) described simulations as an "ideal method for teaching clinical nursing." Simulations allow instructors to present realistic situations to students while controlling important variables. Time constraints due to diagnostic testing and frequent interruptions and distractions are eliminated. Instructor concerns over stopping a student in order to prevent harm to a patient, or correcting a student in front of a patient and possibly embarrassing the student, are also eliminated. Simulations in which students assume the roles of client and nurse enable a single instructor to work with a larger student group, which helps make simulations more cost effective. The relaxed environment of a simulation allows students to experiment with various skills, something that cannot be done with real patients. Students can be provided repeated opportunities to practice a particular skill, thereby allowing development of the mastery essential for performance of the task in real-life situations. Also, simulations provide instructors opportunities to simultaneously teach in two or more domains, such as the psychomotor and cognitive, or the cognitive and affective (Hanna, 1991).
During nurse training, the quality of learning opportunities in the clinical setting has been a concern among instructors for many years. Simulations used in the teaching of higher-level cognitive skills can assist in creating an environment for students to learn and can facilitate the decision-making process without placing the patient at risk. However, student learning experiences in the clinical setting are dependent upon such factors as patient illnesses, staffing levels, number of students, skill levels, and the amount of clinical teaching and supervision received by students. These factors have varying effects on clinical experiences, which influence the appropriateness of the overall learning experience. The clinical decision-making process, which involves dealing with human problems, is often complex and unique to the individual or situation. This poses a problem for educational programs that depend on clinical experiences for student learning (Roberts, While, and Fitzpatrick, 1992).
Simulations can potentially serve as an effective tool for assessing the decision-making skills of nursing students. The typical simulation presents students with a problem task drawn from a real-life situation. Students are generally required to collect information from the subject, decide the appropriate action(s) to take based on this information, and demonstrate the action(s) based on those decisions. An important factor for the instructor to consider is how information is presented to students. When constructing simulations to examine the decision-making process, the nature and context of the task should be taken into consideration. Lamond, Crow, Chase, Doggen, and Swinkles (1996) reported that nursing students tend to utilize verbal information twice as much as other sources of information (observation, knowledge, and written information). Simulations should therefore provide opportunities for students to collect information from a variety of sources. Also, prior experiences associated with the decisions being examined must be considered, since past experiences will influence the decision-making process (Lamond et al., 1996).
Simulations have also been used in nursing education programs in conjunction with clinical laboratories to aid instruction. Development of problem-solving skills is a significant challenge for medical and allied health instructors due to problems such as patient safety, patient availability, individual patient differences, and student supervision. Aronson, Rosa, Anfinson, and Light (1997) described clinical problem-solving learning laboratories as a method that assists students in developing problem-solving skills. Students were presented with scenarios based on course content at several laboratory stations. Each student was required to review the scenario data. Each station included signs and symptoms relative to patient conditions; however, each station also included irrelevant or confusing data that the students had to review. After students collected data individually, all students came together in small groups to make a decision regarding patient status and to select appropriate nursing interventions to meet the patient's needs. Faculty members provided immediate feedback to students as they progressed through each scenario. Student evaluations of the learning laboratory were overwhelmingly positive, as students perceived the experiences as non-threatening and enjoyable. Students judged the experiences as a way to bring course content to life without the stress of working with real patients.
Repeated opportunities for critical thinking and decision making allowed students to develop the confidence needed to advance to real-life situations in the clinical setting (Aronson et al., 1997).
Similar results were reported for clinical simulation laboratory experiences in which students were involved in interactive role-playing. In addition to allowing students to develop the skills and confidence needed as medical professionals, these experiences allowed students to validate their knowledge and decision-making skills as a "nurse." In the laboratory setting students reviewed and critiqued their own and other students' actions and behaviors in an environment conducive to learning. Away from clinical settings, students did not have the added pressure that their actions might harm patients. Students were allowed to practice approaches to the situation that are considered acceptable but different. Combining these learning experiences with opportunities to share ideas and receive immediate feedback makes clinical simulation laboratories a useful instructional method for nursing students. Also, using the same simulations provides a method for instructors to verify student achievement of critical behaviors such as critical thinking, communication, and the provision of care (Johnson, Zerwic, and Theis, 1999).
Discussion to this point has centered on the use of standardized patients to assist in the instruction and assessment of medical and allied health students' skills such as history taking, physical examination, and decision-making. Gallagher, Pantilat, Lo, and Papadakis (1999) described the results of an investigation designed to determine how frequently medical students perform key advance directive discussion skills and which skills students find difficult or easy. Advance directives relate to a patient's decision to have, or not have, life-sustaining treatment in the event of cardiac arrest. It was reasoned that physicians lacked skills for discussing advance directives and that medical students received very little teaching on the subject. A standardized patient curriculum was developed to teach third-year medical students to discuss advance directives with their patients. Students were given a 20-minute lecture on advance directives followed by two standardized patient encounters. Following each encounter, standardized patients completed an assessment form rating whether students performed key skills related to advance directives. Results demonstrated that students completed 70% of the advance directive discussion skills. A total of 62% of the students asked the patient about preferences for life-sustaining treatment, and 63% gave the patients a numerical estimate of the survival rate with the use of CPR. Fifty-two percent of the students discussed all potential outcomes of CPR. Students reported that the most difficult task involved describing to patients the likely outcomes of CPR, and the easiest task was discussing the patient's choice of surrogate. The conclusion was that standardized patients may be useful in teaching communication skills to medical students (Gallagher et al., 1999).
The use of standardized patients has progressed beyond purely instructional purposes. A significant concern among faculty and staff in medical education programs has been establishing objective methods to evaluate the clinical skills of students.
Various evaluation methods have been used to assess the clinical performance of students, including observations, written communications, oral communications, simulations, and self-evaluations. Instruments used to evaluate students have included rating scales, checklists, anecdotal records, critical incidents, videotaping, charting notes, paper and pencil tests, process recordings, clinical conferences/post-conferences, objective structured clinical examinations, role-playing, interactive multimedia, personal interviews, and self-evaluations. Each instrument has strengths and weaknesses, with no one instrument providing a comprehensive evaluation of the clinical skills of students. In recent years, the objective structured clinical examination has gained increasing use in nursing and medical education programs as an effective tool for assessing students' clinical skills.
Harden, Stevenson, Downie, and Wilson (1975) first reported a testing format that allowed assessment of students on a large number of clinical cases and skills. Standardized clinical situations were organized in which medical students interviewed or physically examined a patient and then answered questions relative to the case. Students were given a set time at each station, and the performance of each student was objectively scored. This testing format created a significant amount of interest among medical school faculty as a method for assessing students, particularly the clinical performance of residents, through the use of a large number of clinical cases.
Reported discrepancies in the evaluation of students' clinical skills led to a study examining the use of a standardized method. This evaluation technique included real and simulated clients at multiple stations to measure the clinical performance of medical students. The objectively scored, practical examination was designed to assess the clinical performance of junior medical students at the end of an internal medicine clerkship. The format included 17 cases that involved an activity station, where students interviewed patients or performed some type of physical examination, followed by a response station, where students answered open-ended questions designed to assess knowledge. Following each student encounter, patients completed a specific checklist relative to the performance of each student. Maximal performance for each case was 100%, and the required skills for each task were weighted according to importance. The overall score for the test was the simple average of all case scores (Petrusa, Blackwell, Rogers, Saydjari, Parcel, and Guckian, 1987).
Three groups of 68 students completed the objective clinical examination over a period of three months, for a total of 204 students. Total scores for the three groups averaged 58.4, 56.3, and 61.6, respectively. The range in total scores was similar for the three groups; however, the cases that received the highest and lowest percent scores varied for each group tested. Standard deviations for most problems were found to be fairly large for all three groups. Also, an examination of case-to-case variability at the individual student level showed no consistent pattern of low or high performance on one case that might predict performance on other cases.
These results supported the concept that assessing student performance on a limited number of problems, as is common with oral examinations or with a limited number of standardized patients, may lead to a skewed interpretation of the overall performance of the students (Petrusa et al., 1987). In terms of scoring reliability, agreement between the patient and faculty observers ranged from 0.33 to 1.00, with most cases better than 0.80. For the written answers, reliability of scoring ranged from 0.16 for one case to 1.00 for many cases, with an average of 0.80. These results indicate that simulated patients who are trained to use a specific checklist can reliably record clinical performance and that non-physicians can reliably score written answers when appropriate scoring guidelines are made available. Reliability for the entire examination ranged from 0.46 to 0.57, which was expected given the complexity of the student performance being assessed. Validity of the examination was supported by correlation coefficients of the exam with ratings by faculty and with performance on the National Board of Medical Examiners (NBME) Medical Subtest. A significant but moderate correlation (0.46) was found between scores on the examination and the clinical ratings by faculty. A similar moderate correlation (0.43) was also found between performance on the objective clinical examination and performance on the NBME. It was concluded that objective clinical examinations incorporating standardized patients trained to use specific checklists can provide reliable and valid data about the ability of students to collect data through medical interviews and physical examinations, synthesize these data into a working diagnosis, and propose a plan for the management of the presented medical problems. The results supported the measurement of clinical performance on specific cases by more reliable, objective methods. Standardized objective clinical examinations were shown to be a feasible assessment tool for evaluating clinical skills in medical school programs (Petrusa et al., 1987).
Use of the objective structured clinical examination for assessment purposes. Ross et al. (1988) reported using the objective structured clinical examination (OSCE) to measure clinical skill performance in nursing. The investigation was initiated due to the challenge educators faced in measuring the performance of nursing students' clinical skills. The OSCE was developed for evaluating performance of clinical skills associated with the nursing neurological examination. The OSCE included five stations at which clinical skills associated with the acquisition, interpretation, and use of data related to the neurological examination were measured. A 20-item multiple-choice test was used to measure each student's knowledge base concerning the neurological examination (Ross, Carroll, Knight, Chamberlain, Fothergill-Bourbonnais, & Linton, 1988). Comparison of the mean scores on the OSCE and clinical scores for students (which were based on observations made by instructors) yielded a correlation of 0.30, indicating that students who had higher clinical grades did not always do as well on the OSCE. A comparison of mean scores on the OSCE and the multiple-choice examination demonstrated a significant difference. Additional analysis of the scores indicated there was no relationship between the students' scores on the OSCE and their scores on the knowledge test (Ross et al., 1988).
After completing the OSCE, students were asked to provide feedback about the OSCE.
Areas of interest included the relevance of the OSCE to the neurological clinical placement and the neurological laboratory, the value of the OSCE as a motivating force for learning and acquiring clinical skills, and the value of the OSCE as a tool for evaluating clinical skills. Students perceived the OSCE as relevant to the content of the neurological clinical placement and the neurological laboratory. However, students perceived the OSCE to be less useful as a motivating factor for learning theory than for learning clinical skills. Also, students appeared to be undecided concerning use of the OSCE as an evaluation tool. The authors attributed this uncertainty to the novelty of the experience. The authors concluded that the five-station OSCE of neurological nursing skills appeared promising as a method for evaluating clinical competence in neurological clinical skills and that it may effectively facilitate learning to perform clinical nursing skills (Ross et al., 1988).
In a similar study involving nursing students, an objective structured clinical assessment (OSCA) was implemented in a graduate nurse practitioner program. The study examined the effect of the OSCA on the cognitive learning and clinical competencies of students. Cognitive measurements were determined by each student's performance on four subsections of two midterm examinations that corresponded to the four OSCA simulations (i.e., hypertension, UTI, chest pain, and diabetes). Clinical competency was determined by averaging scores obtained with a clinical evaluation tool that measured student communication, subjective history taking, objective physical examination, assessment, management planning, oral presentation, record keeping, and professional role, using a rating scale of 0 (poor performance) to 4 (strong performance). Both the experimental and control groups received lectures concerning hypertension, UTIs, cardiopulmonary disease, and diabetes during their regularly scheduled classes (Bramble, 1994). The experimental group participated in the OSCA simulations the week following the lecture, as part of their required three-hour seminar. The OSCA, which included simulated patients, required each student to read a clinical scenario prior to entering the examination room and directed them to take a focused history, perform a physical examination, or provide patient education. Students were allowed 15 minutes to interact with the simulated patient and five minutes for feedback from the examiner and patient. Students then proceeded to a 15-minute static station where they were required to answer written questions concerning the OSCA simulation (Bramble, 1994).
Descriptive data on students' performance showed that students in the experimental group performed slightly better than the control group on the midterm examinations; however, data analysis failed to show a significant difference between the two groups. Also, analysis of each student's clinical performance failed to show a significant difference between the two groups, indicating the OSCA did not lead to better performance of clinical skills. However, most students either strongly agreed or agreed that the OSCA simulations were a valuable learning experience, helped improve clinical competence, reinforced lecture objectives, and provided valuable feedback.
Although the study yielded no quantitative support for using the OSCA, subjective evaluation of the experience was favorable, which supports previous studies of medical and nursing students (Bramble, 1994).
A repeated outcome observed with objective structured clinical examinations (OSCEs) was the positive student response to the feedback received from the assessment. Although the OSCE can provide both summative and formative assessment, its potential value as a teaching method has been somewhat neglected. However, Hodder, Rivington, Calcutt, and Hart (1989) investigated the effectiveness of immediate feedback during the administration of the OSCE to a group of medical school students at the University of Ottawa. The study examined the effectiveness of immediate feedback, as incorporated into the OSCE, versus simple repetition of the task as a means for improving competency in the skills tested. Also, the effect on student performance of extending the time at each OSCE station from four to six minutes was examined. The experimental group received two minutes of immediate feedback from an examiner following the completion of a standard four-minute OSCE examination. Following the two minutes of feedback, students repeated the identical four-minute OSCE station and were scored by the same examiner. Students in the control group performed the identical four-minute OSCE examination, but were instructed to continue the examination for an additional two minutes before repeating the station, and they received no feedback.
Results indicated no significant difference in scores between the two groups for the initial four minutes of examination at any of the testing stations. Repeating the task resulted in a small improvement in mean scores (2.0 +/- 2.5%). Extending the testing period from four to six minutes for the control group yielded a slightly greater increase in mean scores of 6.7% (p < 0.01). However, scores for those students receiving the two minutes of immediate feedback were significantly improved, ranging from 10.0 to 63.8% (p < 0.001). The mean increase over initial scores following two minutes of immediate feedback was 26.3 +/- 2.1% (p < 0.0001). Also, two minutes of immediate feedback resulted in an improvement in mean scores for the experimental group of 13.7 +/- 3.1% (p < 0.001) over scores resulting from the combination of simple task repetition plus extending the station duration by two minutes. Immediate feedback was well received by students, who rated the feedback as a helpful learning experience. Also, examiners reported the feedback provided insight into the adequacy of prior teaching in the courses for which students were being examined, and the feedback dialogue revealed possible deficiencies in course design and instruction (Hodder et al., 1989).
In conclusion, simulations have been documented as a useful educational tool for educators involved in training students for careers in health professions such as nursing, nurse practitioner, and physician. Although there appears to be insufficient evidence to support simulations as being superior to traditional methods of instruction, simulations have been found to be useful in assessing the clinical skills of medical and allied health students. Performance on simulations allows instructors to objectively assess students' knowledge and skills in a controlled environment so as not to place patients at risk of injury or undue harm.
Using objective structured clinical examinations (OSCEs) enables instructors to determine more than whether the student can perform essential psychomotor skills. The OSCE allows instructors to assess the ability of students to interact and communicate with patients in a variety of situations, in addition to assessing student knowledge. The OSCE can assist instructors with determining whether students are progressing according to established goals and criteria, and whether students have achieved the expected competencies essential for someone entering a health care profession. The use of OSCEs for instructional purposes has been received favorably by students due to the positive feedback from models and instructors regarding performance on specific tasks. Also, this technique allows students to practice skills essential for appropriate patient care in real-life situations, yet without the risk of harming the patient. The documented effectiveness of the OSCE as an assessment tool in medical education programs, and the positive responses of students and instructors relative to the use of this model, support investigating the OSCE model as a method for assessing students in athletic training education programs. The OSCE model should serve as an effective assessment tool for specific skills and traits within the cognitive, psychomotor, and affective domains that have been identified by the National Athletic Trainers' Association as essential for those preparing to enter the allied health profession of athletic training.
CHAPTER 3
ANALYSIS OF CLINICAL EDUCATION EXPERIENCES IN ATHLETIC TRAINING EDUCATION PROGRAMS
Academic assessment of students in institutions of higher education has come under increasing scrutiny in recent years. The use of subjective measures to assess the knowledge and performance of college students has been criticized, placing increased emphasis on developing instruments designed to objectively measure student performance. Concerns relative to the assessment of students' performance have become very evident among instructors in medical and allied health educational programs. Very limited information concerning the assessment of athletic training students during clinical education experiences is available. Due to similarities in professional preparation with other allied health fields (i.e., nursing, physical therapy, medicine), athletic training education programs have relied upon similar methods for the assessment of students during clinical education experiences.
Banta (2001, p. 13) defines assessment as "the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development." It is through assessment that faculty can determine whether students have acquired the knowledge, skills, and values that faculty collectively have determined to be important. In competency-based education programs, assessment involves more than documenting the competency of skills. Assessment is a method by which qualitative and quantitative information collected about students' competency is used to improve the learning of current and future students (Banta, 2001).
Methods of assessment refer to the approaches used to assess learning (Brown, Bull, & Pendlebury, 1997). Checklists are frequently used for documenting completed skills and competencies. Checklists are best used for assessing sequential tasks and assignments requiring specific criteria.
When combined with sequential tasks or specific criteria, this instrument can be a reliable assessment tool. However, the development of checklists can be time-consuming, especially for those that are task-specific, and their use in assessing complex tasks can take considerable time. Task-specific checklists are very detailed instruments in which every procedure, or maneuver, the examinee is to perform is identified and described. Also, achieving acceptable levels of reliability requires considerable training (Brown et al., 1997).
Checklists are common assessment instruments used by many programs. Although they are fairly simple to use, their use leads to the accumulation of large notebooks to accommodate all of the proficiencies for each student. The bulky notebooks are not easily transported as students are assigned to various clinical sites, and the documents are not always available to ACIs in the athletic training room and on the field (Cuppett, 2003). This can create problems with the documentation of completed proficiencies. Cuppett (2003) described the use of Personal Digital Assistants (PDAs) as a new tool for evaluating students. PDAs, more commonly known as "Palm Pilots," have been developed through advances in computer technology. These devices allow instructors to load proficiencies onto the PDA, thereby eliminating the need to transport a bulky notebook. Students' clinical proficiencies can be evaluated in virtually any setting and recorded by the ACI. Data can then be transferred to a desktop computer for permanent storage and summary reports. PDAs have the potential to simplify the documentation of clinical proficiencies.
Ryan and Carlton's study (as cited in Tracy, Marino, Richo, and Daly, 2000) described the portfolio as a valuable educational tool and a useful product for nurse educators in the evaluation of program goals. Portfolios are recognized as a means to reveal technical and conceptual mastery (Huston and Fox's study as cited in Tracy et al., 2000). Described as a clinical notebook, the portfolio was defined by Williams (2001, p. 136) as "an innovative teaching strategy that has the potential to fully attend to the teaching-learning process as it occurs in clinical situations." The portfolio can include different items from a variety of courses. It encourages students to take responsibility for their learning and promotes interaction between students and instructors. Portfolios serve as a valuable instrument for nursing educators to enhance, evaluate, and document student learning in clinical education (Williams, 2001). To examine how clinical nursing practice promotes learning, Tracy et al. (2000) investigated the clinical achievement portfolio (CAP). This instrument is described as an action-oriented strategy designed to examine students' performance within clinical nursing courses. The CAP incorporates a collection of qualitative and quantitative materials from multiple sources to verify student-centered learning. The authors found the CAP provided students with a foundation to build on knowledge they had already acquired and encouraged students to progress to more complex levels of knowledge development (Tracy et al., 2000).
Videotaping, as an assessment method, has been used in nursing education for many years. Matthews and Viens (1988) investigated the use of videotaped scenarios that included basic skills and physical assessment skills.
They found this method saved time for students and faculty, provided students the opportunity to analyze and critique their performance, and decreased the test anxiety commonly experienced by students. Similar results were reported by Graf (1993) in a study using videotaping to evaluate clinical skills in return demonstrations. A more recent study by Miller, Nichols, and Beeken (2000) compared the performance of students who were videotaped with that of students who performed in front of instructors. Results indicated a significant difference in performance for only one of thirteen skills between the two student groups. Students administering intravenous medication by intermittent infusion sets had a higher mean score in the faculty-present group than in the videotaped group. Analysis of overall satisfactory or unsatisfactory ratings indicated both students and instructors preferred the personal contact, opportunity for immediate feedback, and opportunities for instruction associated with the faculty-present method. Both groups reported they liked the flexibility of the videotaping method (Miller et al., 2000).
Scenarios have been examined for their value as an instrument for assessing physical examination skills. Wales and Skillen (1997) compared the performance of nursing students on two randomly assigned scenarios. Results indicated clinically focused scenarios proved to be effective in evaluating the acquisition of basic physical examination skills. Scenarios evaluate students on their ability to determine which technique to use and on their ability to correctly perform the psychomotor skills. This enables instructors to assess students' clinical competencies concerning identified health care skills. Standardized marking criteria ensure high inter-rater reliability and promote equity in testing. They also facilitate the identification of course strengths and weaknesses, which is important to curriculum assessment. The creation of scenarios and the development of marking criteria can be time consuming, a primary disadvantage to their use as an assessment instrument (Wales & Skillen, 1997).
Mensch and Ennis (2002) reported that scenarios were included as instructional instruments by instructors to facilitate comprehension and application of athletic training knowledge and skills. Scenarios provide students opportunities to deal with situations that may not be observed in the clinical setting, and to provide answers and/or demonstrate specific skills to deal with those situations. Instructors can evaluate the performance of the students and provide immediate feedback. Students and instructors consider scenarios to be essential to making education more meaningful and an effective tool for enhancing student motivation (Mensch & Ennis, 2002). In addition to serving as a learning tool, scenarios can be employed during mock examinations for the purpose of summative assessment.
Objective Structured Clinical Examinations (OSCEs) were implemented in the 1970s as an instrument for assessing medical students. This instrument was developed in response to the need for objective measurements of clinical skills. An OSCE consists of a set of assessment stations based on the objectives of the course. The clinical competency to be measured is broken down into its various components, and each component becomes the objective for one of the examination stations. Examinees rotate through the stations, spending a specified amount of time at each station.
Real and/or simulated patients who have been trained to provide consistent responses are used to provide a more realistic situation. Trained observers, typically clinical instructors and medical staff, measure performance. Investigations concerning the effectiveness of the OSCE as an assessment instrument have been favorable. The objective clinical examination, with standardized patients trained to provide feedback about students' performance, provided reliable and valid data on medical students' ability to collect data through interviews and physical examinations, synthesize the data into a diagnosis, and progress to a care plan (Petrusa et al., 1987). A comparison of medical students' performance on an OSCE with performance on a multiple-choice test indicated the OSCE appeared to be an acceptable instrument for evaluating medical students' performance of clinical skills, and a promising method for assessing clinical proficiencies (Ross et al., 1988). Hodder, Rivington, Calcutt, and Hart (1989) studied the effectiveness of an OSCE as a mechanism for providing immediate feedback to medical students. Results indicated that short periods (2 minutes) of feedback immediately after completion of an OSCE station significantly improved competency in the performance of criterion-based tasks, at least for the short term. Incorporating immediate feedback into an OSCE was a practical means for improving the learning experience for medical students and has the potential to improve certain aspects of clinical competence.
Bramble (1994) reported results contrary to those previously reported on the effectiveness of OSCEs as an assessment instrument. Her results indicated the OSCE did not have a significant effect on the cognitive or clinical performance of students in a graduate nurse practitioner program. Participants did report the OSCE provided a valuable learning experience, improved their clinical competency, reinforced lecture objectives, and provided valuable feedback. Students appreciated the "real-life" situations the OSCEs provided and the challenge of the experiences. The OSCEs were perceived as useful learning experiences that could improve clinical practice.
The OSCE has been used at the University of Florida College of Medicine to assess second-year medical students' performance prior to entering the third-year clerkship. The OSCE was started due to concerns of third-year clerkship directors about student performance. A study of all second-year medical students from 1989 to 1997 indicated student scores improved over the 9-year period. The OSCE provided assurance to clerkship directors of students' competency in the basic skills required for entering the third-year clerkship. Students reported that the time volunteered by the clinical faculty reflected the faculty's interest in student learning and the importance of the exam. Overall, the OSCE proved to be an authentic evaluation model (Duerson, Romrell, & Stevens, 2000).
Methods
Participants. A computer printout of all ATEPs was obtained from the website of the Commission on Accreditation of Allied Health Education Programs (CAAHEP). The name of the program director, the institution, and the email address for each reaccredited and newly accredited ATEP as of September 2005 were collected. Questionnaires were emailed to all 320 accredited ATEPs identified on the list. Of the 320 surveys mailed out, 94 were returned and coded. Three follow-up procedures were utilized to encourage an optimal response rate.
First, a cover letter with a link to the survey was posted on the Athletic Training Educators listserv in an effort to contact PDs who might not have received the initial emailing. Second, a follow-up email was sent to all PDs selected for the study. The purpose of this mailing was two-fold: to thank those who had completed and returned the survey, and to encourage those who had not yet done so to promptly complete and return the survey. A third follow-up email, sent approximately four weeks after the initial mailing, was a final attempt to encourage those who had not responded to do so and to again thank those who had returned completed surveys. Twenty-seven responses were received following the final email reminder.
Over the course of the emailing, 15 addresses were identified as incorrect, and efforts to locate correct addresses were unsuccessful. Three PDs refused to participate in the study due to time constraints, and returns were received from two PDs indicating they were on academic leave. This brought the total possible returns to 300. The response rate for the study was 31.3% (N = 94). Ninety PDs represented undergraduate programs (95.7%) and 4 represented graduate programs (4.3%). Intercollegiate athletic affiliation included 35 PDs representing NCAA Division I athletic programs (37.2%), 23 representing NCAA Division II athletic programs (24.4%), 24 representing NCAA Division III athletic programs (25.5%), and 12 representing NAIA athletic programs (12.8%). Participants' experience as certified athletic trainers ranged from 3 to more than 21 years. The mean was 8.76 years, with a median of 7 and a mode of 11. The sample included 59 males (62.8%) and 33 females (35.1%), with two participants (2.1%) not indicating male or female.
Instrument. A questionnaire, developed by the lead investigator, was used to collect the data in this study. The questionnaire included 47 questions divided into four sections. Section 1 consisted of 17 items intended to gather general information about the organization of the ATEP. Section 2 included 7 items for gathering information concerning the role of ACIs in the program. Section 3 contained 15 items to collect information about the methods used to assess the clinical proficiencies of the students; question 5 in this section included items that required multiple responses. Section 4 consisted of 8 items to obtain information about the program director. The instrument included multiple-choice responses based on information gained through the review of literature and the personal experiences of the investigator as an ATEP program director. Questions for which participants might need to provide a response other than the options listed included space for qualitative feedback.
Following development of the instrument, the questionnaire was reviewed by three certified athletic trainers serving as PDs of ATEPs at their respective institutions. The questionnaire was reviewed for content validity and for clarity of questions and responses. After review of the instrument, appropriate changes were made as suggested. The instrument and a consent form were transferred to a web page for easy accessibility. Each PD was emailed a cover letter inviting participation in the study. The cover letter included a link to the consent form. Once the participants read the consent form, they could link to the actual survey; linking to the survey indicated consent to participate in the study. A submit button was pressed following completion of the survey. Participants could discontinue participation at any time by exiting the survey web page. The full instrument and the informed consent materials are found in Appendix B.
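As a purely illustrative check of the adjusted response-rate arithmetic reported under Participants, the calculation can be expressed in a short sketch (Python is assumed here only for illustration; this code was not part of the study):

    # Illustrative check of the adjusted response rate reported above.
    surveys_emailed = 320   # accredited ATEPs contacted by email
    bad_addresses = 15      # addresses identified as incorrect
    refusals = 3            # PDs who declined due to time constraints
    on_leave = 2            # PDs on academic leave

    eligible = surveys_emailed - bad_addresses - refusals - on_leave  # 300 possible returns
    returned = 94                                                     # surveys returned and coded

    print(f"Eligible sample: {eligible}")                # 300
    print(f"Response rate: {returned / eligible:.1%}")   # 31.3%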
Participants could discontinue participation at any time by exiting the survey web page. The full instrument and the informed consent materials are found in Appendix B.

Results

Descriptive Analysis. Results of the study indicate that 47.9% (45) of the programs begin clinical education experiences in the first term of the first year upon initial enrollment at the institution. The first term of the second year was the second most frequent response (27.7%). Only four programs (4.3%) begin clinical experiences in the first term of the third year. Seventy-nine (84.0%) reported pre-clinical experiences were required prior to admission to the ATEP. For the number of academic terms required to complete the ATEP, 56.4% (53) required six terms (M = 5.67, SD = 1.09) and 34% (32) required four or five terms. Forty-two (44.7%) PDs reported their program required six clinical experiences (M = 5.52, SD = 1.13), while 22 (23.4%) and 17 (18.1%) required four and five clinical experiences, respectively (see Tables 3.1 and 3.2).

Table 3.1
Beginning Year / Term of Enrollment for Starting Clinical Experiences
Year / Term                      Frequency    %
First year / first term          45           47.9
First year / second term         8            8.5
First year / third term          0            0.0
Second year / first term         26           27.7
Second year / second term        5            5.3
Second year / third term         0            0.0
Third year / first term          4            4.3

Table 3.2
Required Number of Academic Terms and Clinical Experiences
                    Academic requirement      Clinical requirement
Terms / Experiences Frequency    %            Frequency    %
4                   16           17.0         22           23.4
5                   16           17.0         17           18.1
6                   53           56.4         42           44.7
7                   2            2.1          7            7.4
8                   3            3.2          2            2.1
9                   3            3.2          2            2.1
No response         1            1.1          2            2.1

Forty-six (48.9%) PDs reported their programs used 10 or more ACIs, and 62.8% reported using 7 or more ACIs (M = 7.65, SD = 2.75). Thirteen (13.8%) reported using three ACIs for the assessment of clinical skills. Two, three, and four students per ACI (23.4%, 21.3%, and 22.3%, respectively) were the most frequent responses for the number of students assigned to each ACI involved in the program (M = 3.68, SD = 1.85). Only 6.4% (6) of PDs reported having a student to ACI ratio of 8:1. Forty-seven PDs (50%) reported that years of experience for their ACIs ranged from 5 to 8 years (M = 7.37, SD = 2.76). Ten or more years of experience was the most frequently reported response (26, 27.7%) by PDs (see Tables 3.3 and 3.4).

Table 3.3
Number of ACIs Used in ATEP and Average Number of Years of Experience
             ACIs                       Years of experience
Number       Frequency    %             Frequency    %
2            -            -             2            2.1
3            13           13.8          6            6.4
4            5            5.3           7            7.4
5            9            9.6           10           10.6
6            7            7.4           14           14.9
7            4            4.3           12           12.8
8            4            4.3           11           11.7
9            5            5.3           4            4.3
10 or more   46           48.9          26           27.7
Note: Dash indicates data were not collected for these variables.

Table 3.4
Student to ACI Ratio
Ratio         Frequency    %
1 to 1        6            6.4
2 to 1        22           23.4
3 to 1        20           21.3
4 to 1        21           22.3
5 to 1        9            9.6
6 to 1        4            4.3
7 to 1        4            4.3
8 to 1        6            6.4
No response   2            2.1

PDs reported intercollegiate athletics as the most frequently used clinical setting (94, 100%) and intramurals as the least frequently used clinical setting (9, 9.6%). Other clinical settings included high school athletics (79.8%), physical therapy clinics (70.2%), physicians' offices (58.5%), and sports medicine clinics (43.6%). Intercollegiate athletics had the highest frequency for employing 3 or more ATCs (84, 89.4%) within the clinical setting.
Forty-two PDs reported interscholastic athletic programs using two or three ATCs as ACIs for the supervision of ATSs during clinical experiences (see Table 3.5).

Table 3.5
Clinical Settings Used and Number of ACIs Associated with Each Clinical Setting
                              Clinical education experiences     ACIs in clinical setting: frequency (%)
Clinical setting              Frequency    %                     1              2              3 or more
Intercollegiate athletics     94           100.0                 3 (3.2)        5 (5.3)        84 (89.4)
Intramural sports             9            9.6                   4 (4.3)        1 (1.1)        1 (1.1)
Physical therapy clinic       66           70.2                  20 (21.3)      15 (16.0)      18 (19.1)
Interscholastic athletics     75           79.8                  25 (26.6)      15 (16.0)      28 (29.8)
Sports medicine clinic        41           43.6                  13 (13.8)      1 (1.1)        17 (18.1)
Physician's office            55           58.5                  18 (19.1)      7 (7.4)        10 (10.6)

In 32 programs (34.0%), ACIs were classified as full-time employees of the institution. Responsibilities as an ACI were stated in the employment contract of the ATC in 56 programs. Only 16 (17.0%) reported the ACI being identified as an adjunct instructor of the college. Fifty-seven (60.6%) reported using ATCs who were not employed by the institution to supervise ATSs. Thirty-eight PDs (40.4%) reported ACIs received free CEUs although they were not employed by the institution. Twenty-one PDs (22.3%) reported ATCs did not receive CEU credits through the institution for which they served as an ACI. Only 29 programs (30.9%) required their ACIs to have a master's degree (see Table 3.6).

Table 3.6
Employment Characteristics of ACIs
                                                                            Frequency           %
Characteristic                                                              Yes      No         Yes      No
Employed full-time                                                          32       62         34.0     66.0
Serving as ACI part of employment contract                                  56       38         59.6     40.4
Serves as adjunct instructor                                                16       78         17.0     83.0
Required to have master's degree                                            29       65         30.9     69.1
Not employed by institution, but used as an ACI                             57       37         60.6     39.4
Not employed by institution, but receives compensation from employer       5        89         5.3      94.7
Not employed by institution, receives compensation from employer,
  but not as an adjunct                                                     12       82         12.8     87.2
Not employed by institution, but receives free CEUs                         38       56         40.4     59.6
Not employed by institution, no CEUs, no compensation                       21       73         22.3     77.7

Results indicated the laboratory session was the most frequently reported site for the first evaluation of ATSs' clinical skills (76, 80.9%). The clinical experience and the mock examination were the most frequently reported sites for evaluation of mastery of clinical skills (78.7% and 59.6%, respectively). Oral/practical simulations (OP), general checklists (GCL), and task-specific checklists (TSCL) were the most frequently reported evaluation tools used in the assessment of ATSs (86.2%, 75.5%, and 72.3%, respectively). Videotaping (VT) was reportedly used in only 17 (18.1%) of the programs. Thirty-five PDs (37.2%) reported using 3 evaluation instruments for the assessment of students (M = 3.36, SD = 1.05). Concerning formal evaluation of clinical skills, 35 PDs (37.2%) reported requiring two evaluations per academic term, and 19 (20.2%) reported students were required to be evaluated 4 or more times per term (M = 2.3, SD = 1.08). Twenty-four PDs reported conducting formal evaluations of students' clinical skills only 1 time per term (see Tables 3.7, 3.8 and 3.9).
Table 3.7
Settings for Initial Evaluation and Mastery of Clinical Skills
                        First evaluation              Mastery
Setting                 Frequency    %                Frequency    %
Laboratory session      76           80.9             -            -
Clinical experience     15           16.0             74           78.7
Field experience        2            2.1              42           44.7
Mock exam               -            -                56           59.6
Year-end exam           -            -                28           29.8
Note: Dash indicates data were not collected for these variables.

Table 3.8
Types of Evaluation Instruments Frequently Used for Assessing ATSs' Clinical Skills
              GCL      TSCL     VT       OP       OSCE     PF       PDA      Other
Frequency     71       68       17       81       65       -        -        14
%             75.5     72.3     18.1     86.2     69.1     -        -        14.9
Note: Dash indicates data for these variables were not collected in the survey item.

Table 3.9
Frequency of Evaluations of Clinical Skills
Number                            Frequency    %
1 per academic term               24           25.5
2 per academic term               35           37.2
3 per academic term               14           14.9
4 or more per academic term       19           20.2
No response                       2            2.1

Results indicated that a variety of assessment instruments were used for evaluating students' progression and competency concerning identified clinical skills within specified educational domains. PDs reported the GCL as the most frequently used instrument for evaluating clinical skills in risk management (69.1%), pharmacology (53.2%), general medical conditions (72.3%), nutrition (53.2%), psycho-social intervention and referral (54.3%), health care administration (59.6%), and professional development (53.2%). They reported using the TSCL most frequently for evaluation purposes in assessment and evaluation (84.0%), therapeutic modalities (81.9%), therapeutic exercise (79.8%), and acute care (77.7%). VT was not a frequently used instrument, but does appear useful for evaluation purposes in assessment and evaluation (17.0%). OP was most frequently used for evaluating clinical skills in assessment and evaluation (91.5%), therapeutic modalities (87.2%), and therapeutic exercise (87.2%). Portfolios (PF) are not used routinely for evaluation purposes in all domains. Very few PDs reported using personal digital assistants (PDA) for evaluating clinical skills. Although not identified as a primary instrument, OSCEs are used by PDs on a routine basis for evaluating clinical skills in risk management, assessment and evaluation, acute care, therapeutic modalities, and therapeutic exercise (see Table 3.10).

PDs reported human subjects were the primary model used with assessment instruments. Ninety-three PDs reported the use of human subjects in comparison to the use of medical manikins (43, 45.7%). Athletic training students, CIs, and ACIs were most frequently recruited as models (93.6%, 77.7%, and 74.5%, respectively). Concerning the training of models, 31 PDs (33.0%) reported use of a written script, but no model training, in preparation for evaluating students. Forty-three PDs (45.7%) reported no written script or training in preparation of models (see Table 3.11).
Table 3.10
Use of Assessment Instruments for Evaluating Competencies within Educational Domains
                Evaluation instruments (%)
Domain          GCL      TSCL     VT       OP       PF       PDA      OSCE
RM              69.1     59.6     5.3      44.7     29.8     3.2      50.0
AE              77.7     84.0     17.0     91.5     31.9     4.3      64.9
AC              73.4     77.7     4.3      76.6     25.5     4.3      60.6
PH              53.2     44.7     0.0      28.7     27.7     2.1      36.2
TM              72.3     81.9     6.4      87.2     27.7     2.1      60.6
TE              73.4     79.8     8.5      87.2     27.7     2.1      61.7
GMC             72.3     57.4     3.2      50.0     25.5     3.2      46.8
N               53.2     41.5     0.0      21.3     27.7     1.1      33.0
PIR             54.3     45.7     4.3      33.0     29.8     1.1      35.1
HCA             59.6     46.8     0.0      35.1     44.7     2.1      42.6
PD              53.2     36.2     1.1      24.5     43.6     0.0      37.2
Evaluation instruments: General checklist (GCL); Task-specific checklist (TSCL); Videotaping (VT); Oral/practical (OP); Portfolio (PF); Personal digital assistant (PDA); Objective structured clinical examination (OSCE). Competency domains: Risk management (RM); Assessment and evaluation (AE); Acute care (AC); Pharmacology (PH); Therapeutic modalities (TM); Therapeutic exercise (TE); General medical conditions (GMC); Nutrition (N); Psycho-social intervention / referral (PIR); Health care administration (HCA); Professional development (PD).

Table 3.11
Characteristics of Models Used for Assessment of ATSs' Clinical Skills
Characteristic                              Yes            No             NR
Use of medical manikins                     43 (45.7)      48 (51.1)      3 (3.2)
Use of human subjects                       93 (98.9)      1 (1.1)        0 (0.0)
Types of subjects
  ATSs                                      88 (93.6)      6 (6.4)        0 (0.0)
  Volunteers                                48 (51.1)      46 (48.9)      0 (0.0)
  CIs                                       73 (77.7)      21 (22.3)      0 (0.0)
  ACIs                                      70 (74.5)      24 (25.5)      0 (0.0)
  Simulated patients                        46 (48.9)      48 (51.1)      0 (0.0)
Models trained to provide feedback          15 (16.0)      78 (83.0)      1 (1.1)
ACIs allowed to provide feedback            85 (90.4)      8 (8.5)        1 (1.1)
Models are paid for service                 2 (2.1)        90 (95.7)      2 (2.1)
Heard of OSCE                               30 (31.9)      64 (68.1)      0 (0.0)
Types of training                           Frequency      %
  Written script, no training               31             33.0
  Written script, 1 day of training         7              7.4
  No script, 2 days of training             0              0.0
  No script, 3 days of training             0              0.0
  No script, no training                    43             45.7
  Other                                     11             11.7
  No response                               2              2.1
Note: NR represents no response to this item by the participants.

Forty PDs (42.6%) identified their job title as program director and assistant professor. Only 3 PDs identified their job title as program director and head athletic trainer. Twenty-one or more years and 12 to 14 years of experience (27.7% and 20.2%, respectively) were the most frequent responses by PDs (M = 5.78, SD = 1.85). Forty-five PDs reported their highest academic achievement to be a master's degree, and 46 PDs (48.9%) reported having a terminal degree. Athletic training was identified by 21 PDs as their area of academic preparation, and exercise science was the second most frequent response (see Table 3.13). Means and standard deviations for the job responsibilities of program directors are presented in Table 3.12. When asked to identify their job responsibilities, PDs identified instruction as their primary responsibility (M = 39.66%, SD = 19.23). PDs reported administrative responsibilities accounted for 20.62% (SD = 12.59) of their job. Event coverage represented 7.23% of the PDs' responsibilities, while supervision of athletic training students represented only 4.14%. Instruction of laboratory classes (6.77%) and research (4.88%) accounted for similar proportions (see Table 3.12).
Table 3.12
Job Responsibilities of Program Directors
Responsibility                     M (%)      SD
Administration                     20.62      12.59
Instruction                        39.66      19.23
Academic advising                  6.68       7.71
Lab classes                        6.77       9.19
Assessment of clinical skills      5.60       7.54
Supervision of ATSs                4.14       6.85
Event coverage                     7.23       13.04
Research                           4.88       9.46
Other                              0.16       1.15

Table 3.13
Characteristics of Program Directors
Characteristic                     Frequency    %
Job title
  PD / Assistant Professor         40           42.6
  PD / Associate Professor         21           22.3
  PD / Professor                   7            7.4
  PD / Head AT                     3            3.2
  PD / Assistant AT                4            4.3
  PD / Other                       17           18.1
  No response                      2            2.1
Years of experience as ATC
  3 - 5                            4            4.3
  6 - 8                            5            5.3
  9 - 11                           17           18.1
  12 - 14                          19           20.2
  15 - 17                          8            8.5
  18 - 20                          12           12.8
  21 or more                       26           27.7
  No response                      3            3.2

Discussion

The analysis of these results has helped provide a better picture of the organization of athletic training clinical education experiences and of how athletic training students are evaluated during clinical experiences. It appears that many athletic training education programs follow a similar model concerning the length of the program, the settings in which clinical experiences are placed, and the types of evaluation instruments incorporated in the assessment process.

The majority of ATEPs allow students to begin their clinical experiences in the first year, first term of enrollment at the institution. This allows students a longer period of time in which to obtain clinical experiences and learn to master clinical skills. Most programs require their students to complete a minimum of 6 academic terms and 6 clinical experiences. Very few programs indicated students were required to complete only the minimum 2-year requirement outlined in the accreditation standards. All programs required students to complete a clinical rotation with intercollegiate athletics, and additional clinical rotations were frequently required in interscholastic athletics and physical therapy clinics.

Many of the programs incorporate 10 or more athletic trainers to serve as ACIs and supervise students during the clinical experiences. Athletic trainers who serve as ACIs appear to be well experienced, with many having more than 6 years of experience in the profession. It appears program directors go to considerable effort to maintain a student to ACI ratio below the maximum 8:1 ratio identified in the accreditation standards. Student to ACI ratios of 2:1 to 4:1 were the most frequently reported by the program directors. Very few programs reported the maximum 8:1 ratio stated in the accreditation guidelines.

The responsibility of serving as an ACI and supervising athletic training students is frequently stated in the athletic trainer's employment contract, even though 60.6% of programs reported using ACIs who were not employed by the institution offering the degree program. The requirement that the ACI hold a master's degree does not appear to be a significant factor in the recruitment of athletic trainers to serve in this role.

PDs reported students are formally evaluated on average 2 times per academic term. The initial evaluation of clinical skills most frequently takes place in the laboratory setting that accompanies a didactic course, and mastery of clinical skills is typically demonstrated in the clinical setting.
Mock examinations are also a frequent setting for the demonstration of competency in clinical proficiencies.

General checklists (GCL) and task-specific checklists (TSCL) appear to be the instruments of choice for the evaluation of clinical proficiencies. Although these instruments are time-consuming to develop, they are typically the simplest method for objectively evaluating the performance of students. These two types of instruments were most frequently used for evaluating clinical proficiencies associated with the domains of risk management, pharmacology, general medical conditions, nutrition, psycho-social intervention and referral, health care administration, and professional development. Oral/practical simulations (OP) were observed to be the instrument of choice for evaluating proficiencies in the domains of assessment and evaluation, therapeutic modalities, and therapeutic exercise. Objective structured clinical examinations (OSCE) were also used for evaluation of clinical skills in these areas. It is very likely that some form of checklist was also incorporated into these evaluations. Portfolios were reportedly used to assist with the evaluation of clinical proficiencies in the domains of health care administration and professional development, two domains in which mastery is rather difficult to document due to the structure of the competencies. Videotaping (VT) and personal digital assistants (PDA) were the least used evaluation instruments. Little use of PDAs was expected, since these instruments are rather expensive and new to the profession. The limited use of videotaping was rather surprising, since it has been an accepted instrument for evaluating students' performance in other allied health training programs.

Human subjects were reportedly used more frequently (98.9%) than medical manikins (45.7%) as models for evaluation purposes. Although medical manikins make excellent models on which students can practice and demonstrate clinical skills, they are very expensive, which limits their use unless they can be borrowed from a medical training program. Athletic training students were most often recruited to serve as models for other students. This can serve as a good experience for new students and provide an opportunity to see what they will be expected to accomplish as they progress through the program. Models received very little training in preparation for serving as a model. The most frequent mode of training was review of a script, with no practice of the behaviors to be performed or avoided.

The majority of program directors participating in the study were male (62.8%). Opportunities for females to serve in this role will most likely increase due to the growing number of females who are choosing to enter the profession and pursue careers as athletic training educators. Academic instruction was identified as the job responsibility occupying the largest percentage (39.66%) of the program director's duties. This was followed by administrative responsibilities at 20.62%. An interesting result was the responsibility of program directors for event coverage. Although event coverage accounted for only a small percentage of the program directors' duties (7.23%), the fact that they were being required to provide coverage for athletic events was surprising. The heavy instructional load combined with administrative responsibilities could have a potential negative effect on tenure and promotion for those PDs at institutions that emphasize research and publication.
Conclusion

The professional preparation and education of athletic trainers has experienced significant changes since the accreditation of the first athletic training education programs in the early 1970s. In an effort to ensure that all entry-level certified athletic trainers are competent professionals, internship programs have been eliminated and candidates for BOC certification are required to graduate from an accredited program. The responsibilities and essential skills required to practice in the profession are regularly reviewed through role delineation studies. Information from these studies serves as the basis for the development of competency-based educational programs to prepare individuals desiring to enter the athletic training profession. A well established, stringent accreditation process ensures that programs offering degrees in athletic training are well organized, provide didactic and clinical education experiences, and document the demonstration of proficiency concerning identified clinical competencies.

The results of this investigation indicate that the programs participating in the study are working to ensure that their students will be prepared to work as entry-level athletic trainers upon successfully passing the BOC certification examination. These programs have developed curriculums that provide appropriate clinical education experiences for their students. Students are provided sufficient time once admitted to the programs to develop their clinical skills. They are provided adequate opportunities for clinical experiences in a variety of settings. It appears students are being evaluated on a regular basis so that progression can be documented and appropriate intervention can be implemented for those students failing to demonstrate progression. Programs have recruited sufficient numbers of experienced certified athletic trainers to provide appropriate supervision of students and to monitor clinical progression. GCL and TSCL evaluation instruments are frequently incorporated into the assessment of students during clinical education experiences. These instruments are used to evaluate students' demonstration of proficiency concerning clinical competencies. Programs appear to be incorporating appropriate evaluation instruments for the assessment of students during their clinical experiences.

Programs participating in this study appear to be meeting the standards and guidelines for accreditation. The basis has been established for additional research to determine what, if any, factors of the clinical education experience might influence passing the written simulation and practical portions of the BOC certification examination.

Bibliography

Banta, T. W. (2001). Assessing competency in higher education. In C. A. Palomba & T. W. Banta (Eds.), Assessing student competence in accredited disciplines (pp. 1-10). Stylus Publishing.

Bramble, K. (1994). Nurse practitioner education: Enhancing performance through the use of the Objective Structured Clinical Assessment. Journal of Nursing Education, 33, 59-6.

Brown, G., Bull, J., & Pendlebury, M. (1997). Student learning. In Assessing student learning in higher education: Methods and strategies. New York, NY: Routledge.

Cuppett, M. M. (2003). Documenting clinical skills using personal digital assistants. Athletic Therapy Today, 8, 15-20.
Duerson, M. C., Romrell, L. J., & Stevens, C. B. (2000). Impacting faculty teaching and student performance: Nine years' experience with the objective structured clinical examination. Teaching and Learning in Medicine, 12, 176-182.

Graf, M. A. (1993). Videotaping return demonstrations. Nurse Educator, 18, 29.

Hodder, R. V., Rivington, R. N., Calcutt, L. E., & Hart, I. R. (1989). The effectiveness of immediate feedback during the Objective Structured Clinical Examination. Medical Education, 23, 184-188.

Mensch, J. M., & Ennis, C. D. (2002). Pedagogic strategies perceived to enhance student learning in athletic training education. Journal of Athletic Training, 37(Supplement), S199-S207.

Miller, H. K., Nichols, E., & Beeken, J. E. (2000). Comparing videotaped and faculty-present return demonstrations of clinical skills. Journal of Nursing Education, 39, 237-239.

Petrusa, E. R., Blackwell, T. A., Rogers, L. P., Saydjari, C., Parcel, S., & Guckian, J. C. (1987). An objective measure of clinical performance. American Journal of Medicine, 83, 34-42.

Ross, M., Carroll, G., Knight, J., Chamberlain, M., Fothergill-Bourbonnais, R., & Linton, J. (1988). Using the OSCE to measure clinical skills performance in nursing. Journal of Advanced Nursing, 13, 45-56.

Tracy, S. M., Marino, G. J., Richo, K. M., & Daly, E. M. (2000). The clinical achievement portfolio. Nurse Educator, 25, 241-246.

Wales, M. A., & Skillen, D. L. (1997). Using scenarios as a teaching method in teaching health assessment. Journal of Nursing Education, 36, 256-262.

Williams, J. (2001). The clinical notebook: Using student portfolios to enhance clinical teaching and learning. Journal of Nursing Education, 40, 135-137.

CHAPTER 4

CLINICAL EDUCATION EXPERIENCES OF ATHLETIC TRAINING STUDENTS AND PERFORMANCE ON THE BOC CERTIFICATION EXAMINATION

The clinical education experience is an essential component in the education of the athletic training student. Through the combination of formal classroom instruction, clinical instruction, and clinical experiences, the student is prepared to make application to the Board of Certification to take the certification examination and enter the health care profession. According to the Standards for the Accreditation of Entry-Level Athletic Training Education Programs (2005), published by the Commission on Accreditation of Athletic Training Education (CAATE), provisions must be made within the curriculum for clinical experiences under the direct supervision of a qualified ACI or CI in an appropriate clinical setting. Standard J2 states that students must be provided clinical experiences that provide opportunities to integrate cognitive, psychomotor skills/clinical proficiency, and affective competence/core values. The clinical experiences must also provide students the opportunities to develop, synthesize, and demonstrate cognitive competency (i.e., learning over time) and professional behavior (CAATE, 2005).

Within Standard J2 is a new term, learning over time. Learning over time is primarily associated with psychomotor skills and clinical proficiencies (Amato, Konin, & Brader, 2003). In practice, learning over time involves documenting the acquisition of clinical skills and the progression of students through the ATEP. Learning over time is to be applied to learning opportunities ranging from the classroom to lab sessions to clinical rotations. This shift in emphasis toward learning over time, and away from required clinical hours, has been supported by research demonstrating that the number of hours completed in the clinical setting has little influence on students' performance on the BOC certification examination (Turocy, Comfort, Perrin, & Gieck, 2000).
Learning over time serves the purpose of providing a consistent pattern of learning for students in CAATE-accredited programs.

Clinical education has been described as the portion of the curriculum in which theoretical and practical education are applied to real-life situations involving athletes or patients (Weidner & Henning, 2002; Berry, Miller, & Berry, 2004). During the clinical education experience, students progress from the learning of general technical skills to clinical competency. The clinical experience also allows opportunities for students to learn to appreciate the affective aspects of the profession, and to develop essential interpersonal and social skills and attitudes (Weidner & August, 1997).

The clinical education experience constitutes a significant part of the curriculums of allied health care professions. Certified athletic trainers have reported that they perceived approximately 53% of their entry-level professional development to have occurred in clinical education. In comparison, clinical education in physical therapy programs reportedly accounts for only 23% to 30% of the total curriculum (Weidner & Henning, 2002). These experiences give students access to hands-on learning and provide opportunities to apply what is learned in the classroom and laboratory setting to real-life situations. What students are doing during these experiences has been a concern among PDs and instructors for some time. Active learning time (ALT), the time students spend engaged in activities that contribute to their academic success, is essential for the development of clinical competencies. Clinical instructors and supervisors must ensure that a student's time is effectively utilized and that the clinical experience provides quality instruction and supervision, social support, and an appropriate level of clinical activities (Berry et al., 2004). The importance of clinical education confirms the need for reliable and valid instruments for the assessment of students during these experiences.

Much of the literature concerning athletic training clinical education has focused on various aspects of clinical instruction, student learning, and predictors of performance on the BOC certification examination. Fuller (1997), in a study examining learning objectives and critical thinking, found that educators encourage more critical thinking in their learning objectives and written assignments than in their written exams. He stated that the incorporation of valid educational instruments, such as Bloom's taxonomy, may help instructors with the development of learning objectives, assignments, and examinations. Coker (2000) examined the learning styles of undergraduate athletic training students to determine their consistency between traditional classroom and clinical education settings. Her results indicated reflective observation was the preferred mode of learning in classroom settings, while active experimentation was more prevalent in clinical settings. She concluded that learning styles do appear to change depending on the learning environment. Also, by knowing a learner's developmental level, an instructor can incorporate the appropriate instructional style (Gardner & Harrelson, 2002). Instructors need to take this information into consideration when planning the general organization of courses, learning activities, lecture materials, and assignments.
The characteristics of helpful clinical instructors, as perceived by clinical instructors and students, have been studied in athletic training education. Students and clinical instructors were in agreement in rating individual items. Modeling professional behaviors was considered the most helpful subgroup of characteristics. The authors concluded that clinical instructors should model specific professional behaviors as a way to facilitate student learning (Laurent & Weidner, 2001). Being an effective clinical instructor requires good communication skills, effective interpersonal skills, effective supervisory skills, an understanding of teaching and learning styles, an ability to effectively evaluate students' performance and provide appropriate feedback, and the ability to demonstrate competency concerning clinical skills (Weidner & Henning, 2002). Mensch and Ennis (2002) found that, among students and instructors, three pedagogic strategies appear to facilitate athletic training education in accredited entry-level programs. The use of scenarios and case studies, authentic experiences, and a positive educational environment were found to be helpful pedagogic practices for educational experiences in athletic training.

Regarding predictors of success on the BOC certification examination, few independent variables have been studied. Draper (1989) studied the effect of learning style on examination performance. He also studied the influence of GPA and clinical experience, expressed in clinical hours, as predictors of examination success. Draper reported no relationship between GPA and scores on the written simulation (WS) and oral-practical (OP) portions of the examination. He did report a relationship between GPA and performance on the written (W) portion of the exam. Contrary to these results, it has been reported that a relationship does exist between GPA and initial success on all portions of the BOC examination (Harrelson, Gallaspy, Knight, & Leaver-Dunn, 1997). This study found that no single variable could be used independently to predict success on the examination. However, a composite set of variables including overall academic GPA, athletic training GPA, academic minor GPA, ACT composite score, and number of semesters of university enrollment did capture a large percentage of subjects passing all portions of the examination on the first attempt.

Turocy, Comfort, Perrin, and Gieck (2000) examined NATABOC clinical experience requirements and individual student characteristics to predict passing of the NATABOC Certification Examination. Survey information and examination scores were gathered concerning age, gender, route to certification, previous athletic training and allied health experiences, clinical education experiences, and performance on the certification examination. Results indicated the majority of sports experiences were completed while working in an intercollegiate setting, with very little experience in high school sports or non-traditional settings. Total clinical experience (i.e., hours of experience) was not found to be a predictive factor of scores on the certification examination. Clinical hours acquired in high-risk sports such as football were also not found to be predictive of examination scores. The authors suggested less emphasis be placed on contact hours and more emphasis placed on the knowledge, skills, and abilities delineated in the clinical competencies.
The authors recommended development of a greater variety of clinical education settings and experiences, and the design of clinical education experiences that are appropriate for the level and abilities of the individual athletic training student.

Few researchers have examined the influence of clinical education as a predictor of performance on the BOC certification examination. Due to the requirement that all candidates seeking athletic training certification must graduate from a CAAHEP-accredited ATEP, the role clinical education plays in the educational process has changed. The number of clinical hours that students must complete is no longer a requirement for eligibility to take the examination. Demonstration of competency for identified clinical proficiencies is the emphasis of ATEPs, and learning over time is the foundation of programs.

The purpose of this study was two-fold. One purpose was to gather current information concerning the clinical education experiences of athletic training students in CAAHEP-accredited ATEPs. A second intent was to determine if a relationship exists between any variables associated with clinical experiences and performance on the certification examination.

Methods

Participants

A random list of 752 certified athletic trainers (ATCs) was obtained from the office of the Board of Certification for Athletic Trainers. This list was produced from a total of 2010 ATCs who passed all portions of the examination on the first attempt, or completed passing all portions of the examination, in 2005. The list included a stratified sample of ATCs representing all ten NATA districts.

The survey questionnaire was mailed to all 752 ATCs identified on the list. Of the 752 surveys mailed out, 212 were returned and coded. Three procedures were utilized to encourage an optimal return. First, a cover letter, consent forms, survey, and self-addressed stamped envelope were mailed to potential participants. Participants could complete and return the enclosed survey, or go to a website to complete and submit the survey. Two weeks following the initial mailing, a postcard was sent to each participant. The purpose of the postcard was two-fold: to thank those who had completed and returned the survey, and to encourage prompt return by others. Four weeks following the first mailing, a second postcard was sent to all participants. The purpose of this postcard was also two-fold: to again thank those who had completed and returned the survey, and to remind others of the closing date for return of surveys. Of the 752 surveys, 22 were returned to the investigator as undeliverable. This reduced the number of possible surveys to 730. The response rate for the study was 29% (212). All returned surveys were usable. Demographic information for participants is presented in Table 4.13.

Instrument

A survey questionnaire, developed by the lead investigator (see Appendix), was employed to gather information from certified athletic trainers. The survey included 35 questions divided into two sections. Section 1 contained 24 items designed to gather information about the type of program from which the ATC graduated, clinical education experiences, and demographics. Section 2 included 8 items intended to collect information about the ATC's performance on the BOC certification examination. The instrument included multiple choice responses based on information gained through the review of literature and the personal experiences of the investigator as an ATEP program director.
Questions for which a response other than the options listed might be needed included space for qualitative feedback from the participant.

Following development of the instrument, it was reviewed by three certified athletic trainers serving as PDs of an ATEP at their respective institutions. The instrument was reviewed for face and content validity, and for clarity of questions and responses. After review of the instrument, appropriate changes were made as suggested.

A pilot study was initiated to examine the effectiveness of the survey questionnaire. PDs were contacted by email regarding their willingness to assist with distribution of the questionnaire to recent (2005) graduates. Twenty-three PDs indicated they would assist with distribution of the questionnaire. Each PD was emailed a cover letter to be forwarded to former students. The cover letter included a link to the consent form and survey instrument. PDs were sent several reminders to be forwarded to possible participants. Thirty-four questionnaires were collected through Internet resources. The responses were reviewed, and only minimal revisions were required. The BOC office was contacted concerning the information required to verify self-reported examination results. Modifications to the questionnaire were made so this information could be collected. The full instrument and the informed consent materials are found in the Appendix.

Results

Descriptive Analysis

Results of the study indicated 79.6% (168) of participants were required to complete a pre-clinical experience prior to admission to the ATEP. Twenty-five to 50 hours of pre-clinical experience was the most frequent response (26.4%), followed by more than 100 hours, reported by 39 subjects (18.4%). Forty-two (19.8%) participants reported they were not required to complete a pre-clinical experience. Six terms was the most frequently reported response (67, 31.6%) for the number of academic terms required to complete the ATEP, followed by eight terms (M = 6.13, SD = 1.49). Six clinical experiences (65, 30.7%) was also the most frequent response of participants (M = 5.88, SD = 1.56). See Tables 4.1 and 4.2 for a summary of the results.

Table 4.1
Required Clinical Hours for Admission to ATEP
Hours                                     Frequency    %
< 25 hours                                20           9.4
25 to 50 hours                            56           26.4
51 to 75 hours                            31           14.6
76 to 100 hours                           23           10.8
> 100 hours                               39           18.4
Pre-clinical experience not required      42           19.8
No response                               1            0.5

Table 4.2
Required Number of Academic Terms and Clinical Experiences
             Academic terms              Clinical experiences
Number       Frequency    %              Frequency    %
4            34           16.0           51           24.1
5            40           18.9           37           17.5
6            67           31.6           65           30.7
7            14           6.6            21           9.9
8            43           20.3           15           7.1
9            12           5.7            21           9.9
No response  1            0.5            1            0.5

The majority (67.8%) of participants indicated receiving academic credit for clinical experience in only one type of course. Clinical courses were most frequently reported (125, 59%) to require clinical experiences for course credit. The first year, first term of enrollment was the most frequently reported term in which students were allowed to be involved in clinical experiences (130, 61.3%). Twenty-two (10.4%) subjects reported they had to wait until the second year, first term of enrollment before starting their clinical experiences. Nine (4.2%) reported they could not begin their clinical experiences until the first semester of the third year of enrollment at their institution. Tables 4.3 and 4.4 provide a summary of the results.
Table 4.3
Types of Courses and Credit for Clinical Experiences
Type of course                Yes             No              NC             NR
Clinical course               125 (59.0)      45 (21.2)       41 (19.3)      1 (0.5)
Laboratory course             34 (16.0)       136 (64.2)      41 (19.3)      1 (0.5)
Didactic course with lab      24 (11.3)       146 (68.9)      41 (19.3)      1 (0.5)
Didactic course               22 (10.4)       148 (69.8)      41 (19.3)      1 (0.5)
Other                         11 (5.2)        159 (75.0)      41 (19.3)      1 (0.5)
Note: Values are frequencies with percentages in parentheses. NC indicates no credit awarded for any clinical experiences. NR represents no response by the participant.

Table 4.4
Beginning of Clinical Experiences and Evaluation of Clinical Skills
                               Begin clinical experience      Begin evaluation of clinical skills
Term                           Frequency    %                 Frequency    %
First year / first term        130          61.3              126          59.4
First year / second term       28           13.2              28           13.2
First year / third term        2            0.9               2            0.9
Second year / first term       22           10.4              26           12.3
Second year / second term      13           6.1               12           5.7
Second year / third term       1            0.5               1            0.5
Third year / first term        9            4.2               10           4.7
No response                    7            3.3               7            3.3

All but one participant reported receiving clinical experience in the intercollegiate athletic setting (99.5%). The second and third most frequently reported clinical settings were high school athletics (80.7%) and physical therapy clinics (57.5%). Intramurals (12.7%) were the least used setting for clinical experiences. Concerning the total number of clinical settings, 29.2% (62) reported being assigned to four clinical settings (M = 3.29, SD = 1.18). This was closely followed by three clinical settings (27.4%) and two clinical settings (23.1%). See Tables 4.5 and 4.6.

Table 4.5
Clinical Settings for Placement and Evaluation
                               Clinical placement             Clinical evaluations
Setting                        Frequency    %                 Frequency    %
Intercollegiate athletics      211          99.5              208          98.1
Interscholastic athletics      171          80.7              153          72.2
Physical therapy clinic        122          57.5              92           43.4
Physician's office             107          50.5              51           24.1
Sports medicine outreach       59           27.8              37           17.5
Intramurals                    27           12.7              12           5.7

Table 4.6
Number of Clinical Settings for Placement and Evaluations Performed
                               Clinical placement             Clinical evaluations
Number of settings             Frequency    %                 Frequency    %
1                              11           5.2               38           17.9
2                              49           23.1              70           33.0
3                              58           27.4              53           25.0
4                              62           29.2              35           16.5
5                              26           12.3              14           6.6
6                              6            2.8               1            0.5
No response                    -            -                 1            0.5
Note: Dash indicates no reported data for this response.

The first term of the first year of enrollment was also reported to be the term in which students' clinical skills were first formally evaluated (126, 59.4%). Participants reported these first evaluations occurred most frequently in the clinical experience (111, 52.4%) or the laboratory session (89, 42.0%). Intercollegiate athletics (98.1%) was the clinical setting in which these formal evaluations were most frequently conducted, followed by high school athletics and physical therapy clinics (72.2% and 43.4%, respectively). Mastery of clinical proficiencies was again most frequently evaluated in the clinical experience (40.1%). Mock examinations were reported by 54 participants (25.5%) to be the setting in which mastery was demonstrated. Information concerning the evaluation of clinical proficiencies is presented in Tables 4.4 and 4.7.
Table 4.7
Setting for First Evaluation and Evaluation of Mastery of Clinical Skills
                                        First evaluation              Evaluation of mastery
Setting                                 Frequency    %                Frequency    %
Clinical experience                     111          52.4             85           40.1
Laboratory session                      89           42.0             29           13.7
Mock examination                        -            -                54           25.5
Year-end examination                    -            -                17           8.0
Other                                   6            2.8              4            1.9
Response omitted (multiple responses)   5            2.4              -            -
No response                             1            0.5              23           10.8
Note: Dash indicates no reported data for this response.

Oral/practical simulations (OP) were reported to be the instrument most frequently used for evaluating clinical skills (197, 92.9%). Task-specific checklists (TSCL) and general checklists (GCL) were reportedly incorporated for evaluation purposes by 89.6% and 86.8% of participants, respectively. Multiple objective structured clinical examination stations (OSCE) were reportedly used for evaluation purposes by 95 survey participants (44.8%). Twenty-six participants (12.3%) reported the use of videotaping (VT) for evaluation purposes. When asked to indicate the number of times per term they were evaluated, 84 (39.6%) reported they were evaluated 2 times per term, and 83 (39.2%) indicated they were evaluated more than 3 times per term (M = 2.83, SD = 1.05). The majority (70.3%) reported they were evaluated eight or more times over the course of enrollment in the ATEP (M = 7.79, SD = 1.71). Tables 4.8, 4.9, and 4.10 include summaries of these results.

Table 4.8
Evaluation Instruments Used for Assessing Clinical Skills
Instrument     Yes              No               NR
OP             197 (92.9)       14 (6.6)         1 (0.5)
TSCL           190 (89.6)       21 (9.9)         1 (0.5)
GCL            184 (86.8)       27 (12.7)        1 (0.5)
OSCE           95 (44.8)        119 (54.7)       1 (0.5)
VT             26 (12.3)        185 (87.3)       1 (0.5)
Other          2 (0.9)          209 (98.6)       1 (0.5)
Note: Values are frequencies with percentages in parentheses. NR indicates the participant provided no response to the survey item.

Table 4.9
Summary of Frequency of Evaluations of Clinical Skills
Evaluations / term      Frequency    %
1                       17           8.0
2                       84           39.6
3                       26           12.3
> 3                     83           39.2
No response             2            0.9

Table 4.10
Summary of Total Number of Evaluations of Clinical Skills
Total evaluations       Frequency    %
4                       20           9.4
5                       5            2.4
6                       32           15.1
7                       3            1.4
8                       31           14.6
> 8                     118          55.7
No response             3            1.4

Participants were also asked to indicate their overall GPA at the time of graduation from the ATEP. Fifty percent (106) of the participants indicated their GPA to be 3.5 or higher when they graduated. Seventy-two participants (34.0%) indicated their GPA was in the range of 3.2 - 3.4. Regarding performance on the certification examination, 65.6% (139) of participants indicated they passed the written simulation (WS) on the first attempt, and 68.4% (145) indicated they passed the practical (PR) portion of the exam on the first attempt. Of those subjects who did not pass these portions of the exam on the first attempt, 18.4% (39) reported they passed the WS and PR portions of the exam on the first retake. Fifteen subjects (7.1%) reported it took more than two retakes to pass the WS, and 4.2% (9) reported they had to retake the PR more than two times before passing. Mean scores on the WS and PR portions of the examination were 578.89 (SD = 74.36) and 39.65 (SD = 3.345), respectively. See Tables 4.11 and 4.12 for results.

Table 4.11
Overall GPA at Graduation
GPA                                    Frequency    %
2.5 - 2.7                              0            0.0
2.6 - 2.8                              3            1.4
2.9 - 3.1                              29           13.7
3.2 - 3.4                              72           34.0
3.5 or higher                          106          50.0
Multiple response (response omitted)   1            0.5
No response                            1            0.5
Table 4.12
Summary of BOC Examination Results
Characteristic                               Frequency    %
Passing WS on first attempt                  139          65.6
Retake of WS
  1 time                                     39           18.4
  2 times                                    8            3.8
  > 2 times                                  15           7.1
  No response                                11           5.2
Passing PR on first attempt                  145          68.4
Retake of PR
  1 time                                     39           18.4
  2 times                                    10           4.7
  > 2 times                                  9            4.2
  No response                                9            4.2
Passing WR on first attempt                  139          65.6
Retake of WR
  1 time                                     44           20.8
  2 times                                    8            3.8
  > 2 times                                  13           6.1
  No response                                8            3.8
Passing BOTH WS and PR on first attempt      106          50.0

Table 4.13
Characteristics of AT Survey Participants
Characteristic                          Frequency    %
Gender
  Male                                  71           33.5
  Female                                140          66.0
  No response                           1            0.5
NATA District at time of graduation
  District 1                            16           7.5
  District 2                            18           8.5
  District 3                            35           16.5
  District 4                            70           33.0
  District 5                            35           16.5
  District 6                            2            0.9
  District 7                            8            3.8
  District 8                            7            3.3
  District 9                            17           8.0
  District 10                           3            1.4
  No response                           1            0.5

Table 4.14
Summary of BOC Examination Scores
Examination             Mean        SD
Written Simulation      578.89      74.364
Practical               39.65       3.345
Written Examination     115.07      13.798

Statistical Analysis

Means, standard deviations, MANOVA, and logistic regression analyses were performed using the SPSS 14.0 statistical package. Results of the data analysis indicate that participants passing the WS and PR portions of the BOC examination on the first attempt averaged 6.02 (SD = 1.51) academic terms of enrollment in their respective athletic training education programs, in comparison to 6.13 (SD = 1.49) for all participants. Those passing both parts of the certification examination participated in an average of 5.95 clinical experiences, in comparison to 5.88 for all participants and 6.04 for those not passing both the WS and PR parts of the examination on the first attempt. The number of clinical settings for those passing these two portions of the examination was 3.25, slightly less than the 3.29 clinical settings for all participants and greater than the 2.81 clinical settings reported for those not passing both parts. The number of evaluation settings for subjects not passing the WS and PR on the first attempt was 2.27 (SD = 1.28), less than the values reported for all subjects and for subjects passing both parts of the examination. The number of evaluation instruments used to assess subjects' clinical skills was approximately equal for the three groups. See Table 4.17.

Cross-tabulation of variables was used to examine the potential significance of categorical variables. Chi-square procedures were used to determine the statistical significance of the categorical variables of the study. An alpha value of p < 0.05 was used to determine statistical significance. Categorical variables included: athletic affiliation of the ATEP (ATH), number of pre-clinical hours of observation required for making application to the program (PCH), number of formal evaluations of clinical skills per academic term (EPT), total number of formal evaluations over the course of enrollment in the program (TEV), first term in which participants began clinical experiences (FCEX), first term in which clinical skills of subjects were formally evaluated (FCEV), setting for first evaluation of clinical skills (SFCEV), setting in which mastery of clinical skills was evaluated (SMEV), and overall grade point average at the time of graduation (GPA).
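The cross-tabulation and chi-square procedure described above can be sketched briefly. The analysis in this study was run in SPSS 14.0; the following Python fragment is only a minimal, hypothetical illustration of the same test of independence, using a handful of made-up respondent records and an invented GPA banding rather than the actual study data.

import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical respondent records (not the study data): a GPA band and a flag
# for passing both the WS and PR parts of the examination on the first attempt.
records = pd.DataFrame({
    "gpa_band":    ["3.5+", "3.2-3.4", "2.9-3.1", "3.5+", "3.2-3.4", "2.9-3.1", "3.5+", "2.9-3.1"],
    "passed_both": [1, 1, 0, 1, 0, 0, 1, 0],
})

# Cross-tabulate the categorical predictor against the outcome and apply the
# chi-square test of independence (alpha = 0.05, two-sided). The toy-sized
# table here will trigger low expected counts; real data would not.
table = pd.crosstab(records["gpa_band"], records["passed_both"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")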
GPA was the only variable found to be statistically significant (p < .01). See Table 4.15 for a summary of results.

Table 4.15
Cross-tabulation Analysis Summary: Passing WS and PR on First Attempt
             Total            Total not        Statistical              Pearson       Asymp. Sig.
Variable     passing          passing          test            df       chi-square    (2-sided)
ATH          106 (52.5%)      96 (47.6%)       Chi-square      6        11.274        0.08
PCH          106 (52.2%)      97 (47.8%)       Chi-square      10       10.141        0.428
EPT          105 (51.7%)      98 (48.3%)       Chi-square      6        2.129         0.907
FCEX         102 (51.8%)      95 (48.2%)       Chi-square      12       17.889        0.119
FCEV         103 (52.3%)      94 (47.7%)       Chi-square      12       12            0.155
SFCEV        105 (52.8%)      94 (47.3%)       Chi-square      4        7.447         0.114
SMEV         98 (53.6%)       85 (46.5%)       Chi-square      8        8.015         0.432
GPA          106 (52.2%)      97 (47.8%)       Chi-square      6        17.572        0.007**
*p < 0.05  **p < 0.01

Logistic regression. The null model resulted in an accurate prediction of 53.3% of the cases. A backward elimination logistic regression was performed to further investigate the extent to which the five predictors identified above could be used to predict passing the written simulation (WS) and practical (PR) portions of the BOC certification examination. These results indicate the simpler model with two independent variables (chi-square = 10.27, p = 0.006) is no worse than the full model (chi-square = 12.766, p = 0.026) for the prediction of successfully passing the WS and PR on the first attempt for participants in this study. See Table 4.16 for a summary of the results.

In the final restricted model, two predictors were retained. These predictors were overall grade point average at graduation (GPA) and the earliest academic term in which students began their clinical experiences (FCEX). For this model, 56.1% of the responses were classified correctly. Results indicate that as GPA increases, so does the likelihood of passing the WS and PR portions of the certification examination. More specifically, the odds of passing these parts of the examination on the first attempt are multiplied by 1.403 for each unit increase in GPA. Results also indicate that the odds of passing the WS and PR on the first attempt are multiplied by 0.781 for each term that the start of clinical experiences is delayed; students who begin their clinical experiences later in their enrollment would complete a minimum of 4 terms of clinical experiences during the time remaining in a typical 4-year plan of study. For example, an athletic training student with an overall GPA of 2.6 who began clinical experiences during the fifth semester of enrollment (thus with 4 remaining academic terms of clinical experiences) would have a 0.1316 probability of passing both the WS and PR on the first attempt. A student who begins clinical experiences in the first year, first term of enrollment (thus providing 8 academic terms of opportunities to receive clinical experiences) and who has a GPA of 3.4 at graduation has a greater (0.6474) probability of passing both the WS and PR portions of the certification examination on the first attempt.
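To make the interpretation of the retained predictors concrete, the fitted logistic model takes the form p = 1 / (1 + e^-(b0 + b1*GPA + b2*FCEX)). The sketch below is purely illustrative: the slope coefficients are back-calculated from the reported odds ratios (ln 1.403 and ln 0.781), but the fitted intercept and the exact coding of the FCEX variable were not reported, so the intercept and the example inputs are assumptions rather than the study's fitted equation.

import math

# Illustrative coefficients only: slopes are derived from the reported odds
# ratios (1.403 for GPA, 0.781 for FCEX); the intercept is an assumed placeholder.
B0 = -1.0                  # assumed intercept (not reported in the study)
B_GPA = math.log(1.403)    # each one-unit increase in GPA multiplies the odds by 1.403
B_FCEX = math.log(0.781)   # each one-term delay in starting clinical experiences multiplies the odds by 0.781

def probability_of_passing(gpa: float, first_clinical_term: float) -> float:
    """Predicted probability of passing both WS and PR on the first attempt."""
    logit = B0 + B_GPA * gpa + B_FCEX * first_clinical_term
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical comparison: an early starter with a higher GPA versus a later
# starter with a lower GPA.
print(probability_of_passing(gpa=3.4, first_clinical_term=1))
print(probability_of_passing(gpa=2.6, first_clinical_term=5))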
Table 4.16
Summary of Logistic Regression Results
                                            % Classified     Model                       Chi-square
Model       Predictors                      correct          chi-square     Prob        difference     Prob
Null model                                  53.5%
1           GPA SFCEV FCEX FCEV TEV         58.8%            12.766         0.026
2           GPA FCEX FCEV TEV               58.3%            12.718         0.013       -0.049         0.826
3           GPA FCEX TEV                    58.3%            12.497         0.006       -0.221         0.638
4           GPA FCEX                        56.1%            10.270         0.006       -2.226         0.136

MANOVA procedures were used to determine the significance of differences in means for the interval variables in the study. Variables in this analysis included: number of academic terms (ATM), number of clinical experiences (CEX), number of clinical settings (CST), number of evaluation settings (EST), and number of evaluation instruments (EIT). The overall Wilks' Lambda of 0.903 (p = 0.03) indicates a multivariate difference existed among these variables. Univariate ANOVAs were performed to determine differences in these variables. The assumption of homogeneity of variance was supported for each follow-up ANOVA (p > 0.05). One of the five variables, number of clinical settings, was found to be statistically significant. Specifically, participants in the group that did not pass the written simulation (WS) and practical (PR) portions of the exam on the first attempt (M = 2.76, SD = 1.234) reported having clinical experiences in fewer clinical settings than those passing the WS and PR on the first attempt (M = 3.23, SD = 1.146). See Table 4.17 for a summary of results.

Table 4.17
Summary of MANOVA Analysis: Passing WS and PR Parts of BOC Exam on First Attempt
                                    Not passing          Passing              Passing
                                    either part          one part             both parts
Dependent variable                  (n = 25)             (n = 70)             (n = 106)            F          Prob
Number of academic terms            M = 6.72             M = 6.03             M = 6.02             2.454      0.089
                                    SD = 1.339           SD = 1.444           SD = 1.519
Number of clinical experiences      M = 6.12             M = 5.76             M = 5.95             0.599      0.550
                                    SD = 1.571           SD = 1.716           SD = 1.488
Number of clinical settings         M = 2.76             M = 3.56             M = 3.23             4.421      0.012
                                    SD = 1.234           SD = 1.87            SD = 1.146
Number of evaluation settings       M = 2.20             M = 2.74             M = 2.64             1.938      0.147
                                    SD = 1.258           SD = 1.212           SD = 1.161
Number of evaluation instruments    M = 3.16             M = 3.26             M = 3.30             0.266      0.766
                                    SD = 0.878           SD = 1.143           SD = 0.863
Note: The overall multivariate test resulted in statistical significance, Wilks' Lambda = 0.903 (p = 0.030).

Bonferroni post hoc analysis confirmed a statistically significant relationship within the CST variable (p = 0.012). This relationship was found to be present between subjects in group 1 (not passing the WS and PR on the first attempt) (M = 2.76) and group 2 (passing either the WS or PR on the first attempt) (M = 3.56). No significant relationship was found to exist between group 1 and group 3 (passing both the WS and PR on the first attempt) (M = 3.23). The number of clinical settings in which athletic training students are placed appears to influence first-time performance on the WS and PR portions of the BOC examination. However, CST does not have a significant influence on passing both the WS and PR on the first attempt. Students receiving clinical education experiences in 3 or more different clinical settings are more likely to pass at least one of these two parts of the examination on the first attempt in comparison to those students rotated through only 1 or 2 different clinical settings.
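For readers who wish to reproduce this style of analysis outside SPSS, the following sketch shows the general workflow used above (an overall MANOVA with Wilks' lambda followed by a univariate follow-up ANOVA). The data, group sizes, and the effect built into the clinical-settings variable are synthetic and invented for illustration only; they are not the study data.

import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.multivariate.manova import MANOVA

# Synthetic data: one row per respondent, with the first-attempt outcome coded
# 0 = passed neither part, 1 = passed one part, 2 = passed both parts.
rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 3, size=n)
data = pd.DataFrame({
    "group": group,
    "terms": rng.normal(6.1, 1.5, size=n),                     # academic terms
    "settings": rng.normal(3.0 + 0.25 * group, 1.2, size=n),   # clinical settings (group effect built in)
    "eval_settings": rng.normal(2.6, 1.2, size=n),             # evaluation settings
})

# Overall multivariate test; the printed table includes Wilks' lambda for the
# group effect, analogous to the omnibus result reported above.
manova = MANOVA.from_formula("terms + settings + eval_settings ~ C(group)", data=data)
print(manova.mv_test())

# Univariate follow-up ANOVA for a single dependent variable, split by group.
by_group = [g["settings"].to_numpy() for _, g in data.groupby("group")]
print(f_oneway(*by_group))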
Discussion

Analysis of the results has provided useful information concerning the clinical education experiences of athletic training students. There are multiple factors that can possibly influence students' clinical education experiences and, potentially, performance on the BOC Certification Examination. Factors influencing students' performance on the written simulation and practical portions of the examination include various aspects associated with the design of the ATEP curriculum in addition to academic achievement.

The majority of programs (79.7%) required students to complete a pre-clinical experience prior to admission to the ATEP. Twenty-five to fifty hours appears to be a reasonable number of hours required for the pre-clinical experience, although 19.8% of participants did report having to complete more than 100 hours to fulfill this requirement for admission. It appears the pre-clinical experience is included as part of the students' overall clinical experience. The pre-clinical experience serves as an excellent opportunity to prepare students for the program. During the pre-clinical experience students can be taught basic first aid and CPR skills, proper procedures for safe practice concerning blood-borne pathogens, and basic taping and wrapping skills. The pre-clinical experience also provides students the opportunity to see firsthand the responsibilities of a certified athletic trainer and the requirements that must be met in order to successfully complete the program.

Most participants (80.2%) reported receiving academic credit for their clinical education experiences, and these clinical experiences were associated with a separate clinical course 59% of the time. There appears to be a trend for programs to allow students to begin these clinical experiences upon initial entrance into the institution. Seventy-five percent of participants indicated they began their clinical experiences in either the first or second term of the first year of enrollment at their institution. The first or second term of the first year of enrollment was also reported to be the first term in which clinical skills were evaluated by 73% of participants. Fifty-seven percent of subjects reported the ATEP they completed required 5 to 7 academic terms (M = 6.57) to complete the academic and clinical components of the program.

Athletic training students are required to demonstrate proficiency concerning competencies identified through the BOC Role Delineation Study and outlined in the NATA Educational Competency Manual. Students must be provided adequate time to gain knowledge and understanding of the principles of athletic training, to practice and master the skills required to practice in the field of athletic training, and to demonstrate proficiency concerning the knowledge, skills, and personal behaviors essential to becoming an entry-level certified athletic trainer. Allowing students a longer period of time (i.e., more academic terms) to complete required academic courses and clinical education experiences allows learning over time to occur and permits proper assessment of the progression of students.

Athletic training students received opportunities to complete clinical experiences in a variety of settings. In addition to the intercollegiate athletic setting, which is the primary clinical setting, they received clinical instruction through involvement with high school athletics, patients in physical therapy clinics, and patients in physicians' offices. Sixty-nine percent of participants reported receiving clinical instruction in 3 to 5 different clinical settings. Injuries associated with physical activity are not isolated to college athletes. Anyone who is physically active is susceptible to musculoskeletal injuries.
Allowing students to work in different clinical settings provides opportunities to experience a greater variety of injuries and to develop a better understanding of the individual nature of injuries. Athletic training students need to understand that the rehabilitation of an ACL reconstruction for a 40-year-old carpenter (while following similar protocols) is not going to progress exactly as the rehabilitation for a 21-year-old, well-conditioned, and highly motivated defensive back. Also, providing opportunities to interact with physical therapists and physicians allows students to gain a better understanding of these professions and an appreciation for their training and knowledge in the care of physical injuries and other medical conditions.

Fifty-two percent of subjects reported their clinical skills were first evaluated in the clinical setting. The intercollegiate setting was the most frequently reported setting in which clinical skills were first evaluated by an ACI. Interscholastic and physical therapy settings were also frequently used for the initial evaluation of clinical skills. An average of 2.6 clinical settings were employed for the evaluation of students' skills. The limited number of clinical settings for initial evaluations is related to accreditation standards that limit these evaluations to ACIs. The ACI has received specific training from the program's clinical instructor educator concerning the assessment of athletic training students. Limiting the settings for initial evaluations hopefully ensures consistency of assessment and appropriate feedback relative to deficiencies and strengths.

Demonstration of mastery of clinical skills and competencies was most frequently reported to occur in the clinical setting (40.1%). Twenty-six percent of participants reported mastery of clinical skills was evaluated through use of a mock examination, and 8% reported demonstrating mastery during a year-end examination. This is to be expected because students spend a considerable amount of time participating in clinical settings. However, mock examinations and year-end examinations can serve as excellent methods to evaluate mastery and retention of the knowledge and skills learned over the course of enrollment in the ATEP.

The evaluation instrument employed most frequently to assess clinical skills and competencies was the oral/practical simulation. Oral/practical simulations are an excellent tool for evaluating clinical skills because they can test a variety of skills. Simulations also require students to demonstrate skills on a simulated patient or medical manikin in the presence of an ACI, or group of ACIs. There is no second-guessing as to whether or not students can perform the essential skills of an athletic trainer with this method of assessment. Task-specific check-lists and general check-lists are frequently used to document the successful completion of specific tasks associated with the performance of clinical skills and the overall completion of these clinical competencies. Objective structured clinical examinations were reportedly used for assessment purposes by 45% of survey participants. Although time-consuming to develop, OSCEs are very useful for the standardized assessment of groups of students in one evaluation setting.

Only 8% of participants reported being formally evaluated one time per academic term. Two times and more than 3 times per academic term were the most frequently reported responses.
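To make concrete the kind of record-keeping an evaluation schedule like this implies, the short Python sketch below logs individual formal evaluations and summarizes how often a student was evaluated per term and over an enrollment. The entries, instrument names, and field layout are hypothetical illustrations, not data or instruments from this study.

from collections import Counter

# Hypothetical evaluation log: (academic term, instrument, competency domain).
evaluations = [
    (1, "task-specific checklist", "Acute Care"),
    (1, "oral/practical simulation", "Assessment/Evaluation"),
    (2, "oral/practical simulation", "Therapeutic Modalities"),
    (2, "general checklist", "Risk Management"),
    (3, "OSCE", "Therapeutic Exercise"),
    (3, "oral/practical simulation", "Assessment/Evaluation"),
    (4, "mock examination", "General Medical Conditions"),
]

per_term = Counter(term for term, _, _ in evaluations)
print(dict(per_term))                    # formal evaluations per academic term
print(len(evaluations))                  # total formal evaluations over enrollment
print(len(evaluations) / len(per_term))  # average evaluations per term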
Results indicated students were evaluated an average of 7.79 times over the course of enrollment in the ATEP. It is important that students be evaluated frequently concerning their performance of clinical skills and achievement of clinical competencies. Frequent evaluations assist in the identification of deficiencies and allow appropriate time for additional instruction to achieve mastery. Ninety-six percent of survey participants reported being evaluated one-on-one by an ACI, and 87% reported receiving a combination of positive and negative feedback from the ACI following the evaluation. Students cannot be allowed to practice clinical skills for which they have not demonstrated an acceptable level of proficiency. Deficiencies must be identified, corrected, and re-evaluated. Students cannot be allowed to graduate and enter the profession unless each can demonstrate the desired level of proficiency concerning clinical skills and clinical competencies.

Mean scores (self-reported) for survey participants on the written simulation, practical, and written portions of the BOC Certification Examination were 578.89 (SD = 74.36), 39.65 (SD = 3.35), and 115.07 (SD = 13.80), respectively. These results were all greater than the mean values for all candidates taking the certification examination in 2005. Average scores on the 2005 examination were 501.08 (SD = 97.64) on the written simulation, 33.56 (SD = 7.28) on the practical, and 99.05 (SD = 12.31) on the written. Sixty-six percent of the participants indicated they passed the written simulation on the first attempt, in comparison to a passing rate of 46.48% reported in the BOC 2005 Annual Report. Sixty-eight percent of the survey participants reported they passed the practical portion of the examination on the first attempt, in comparison to 47.09% reported by the BOC. The BOC report indicated 22.88% of those taking the written portion of the examination passed on the first attempt, while 65.6% of the participants in the study reported passing this part on the first attempt.

The survey questionnaire used in this study was mailed to a randomly selected list of certified athletic trainers who passed the certification examination in 2005. No distinction was made between those who passed all three parts of the examination on the first attempt and those required to retake one or more parts of the examination. A possible explanation for the higher scores of the survey participants may be that those participating in the study did so because they performed well on the examination. These subjects may have been more willing to report their scores because they did not have to retake any portion of the examination. Although subjects were assured confidentiality would be strictly maintained and their individual performance would not be published, some may have been unwilling to participate in the study because they did not pass all portions of the examination on the first attempt. Therefore, the higher reported values of participants in this study on the certification examination should be viewed cautiously.

Conclusion

The educational preparation of students desiring to pursue careers as certified athletic trainers has undergone significant changes in the past 15 years in an effort to ensure the quality of care provided by entry-level certified athletic trainers.
Clinical competencies have been developed and published that specifically address the cognitive, psychomotor, and affective aspects of athletic training in which the entry-level certified athletic trainer must demonstrate proficiency. An essential part in educating those pursuing careers in athletic training is the clinical education experience. The purpose of this study was to examine the effect that clinical education experiences might have on overall performance on the written simulation and practical parts of the BOC Certification Examination.

Participants in this study reported a variety of instruments being used for the evaluation of clinical skills and competencies. The majority of these assessments, both formative and summative, occurred during clinical experiences. One-on-one evaluations by the ACI appear to be the primary format for these assessments, which corresponds to the CAATE Standards and Guidelines. The results of this study indicate athletic training students are allowed to participate in multiple clinical education experiences in a variety of settings. Many students are allowed to begin their clinical experiences during the first or second term of the first year of enrollment. Formal evaluation of these students' performance of clinical skills begins at this same time. Early admission into the ATEP appears to be beneficial to students if they are allowed to complete multiple clinical experiences. Also, early admission provides adequate time for multiple evaluations of students' performance and progression in the ATEP. GPA continues to be the strongest predictor of success on the certification examination, as demonstrated in previous studies.

Bibliography

Amato, H. K., Konin, J. G., and Brader, H. (2002). A model for learning over time: The big picture. Journal of Athletic Training, 37 (Supplement), S236–S240.

Berry, D. C., Miller, M. G., and Berry, L. M. (2004). Effects of clinical field-experience setting on athletic training students' perceived percentage of time spent on active learning. Journal of Athletic Training, 39, 176–184.

Coker, C. A. (2000). Consistency of learning style of undergraduate athletic training students in the traditional classroom versus the clinical setting. Journal of Athletic Training, 35, 441–444.

Draper, D. O. (1989). Students' learning styles compared with their performance on the NATA Certification Examination. Athletic Training, JNATA, 24, 234–235.

Fuller, D. (1997). Critical thinking in undergraduate athletic training education. Journal of Athletic Training, 32, 242–247.

Gardner, G. and Harrelson, G. L. (2002). Situational teaching: Meeting the needs of evolving learners. Athletic Therapy Today, 7, 18–22.

Harrelson, G. L., Gallaspy, J. B., Knight, H. V., and Leaver-Dunn, D. (1997). Predictors of success on the NATABOC Certification Examination. Journal of Athletic Training, 32, 323–327.

Laurent, T. and Weidner, T. G. (2001). Clinical instructors' and student athletic trainers' perceptions of helpful clinical instructor characteristics. Journal of Athletic Training, 36, 58–61.

Mensch, J. M. and Ennis, C. D. (2002). Pedagogic strategies perceived to enhance student learning in athletic training education. Journal of Athletic Training, 37 (Supplement), S199–S207.

Standards for the Accreditation of Entry-Level Athletic Training Education Programs. (2005). Commission on Accreditation of Athletic Training Education.

Turocy, P. S., Comfort, R. E., Perrin, D. H., and Gieck, J. H. (2000).
Clinical experiences Are not predictive of outcomes on the NATABOC Examination. Journal of Athletic Training, 35, 70 ? 75. Weidner, T. G. and August, J. A. (1997). The athletic therapist as clinical instructor. Athletic Therapy Today, 2, 49 ? 52. Weidner, T. G. & Henning, J. M. (2002). Historical perspective of athletic training clinical education. Journal of Athletic Training, 37 (Supplement), S222 ? S22. 114 CHAPTER 5 SUMMARY In comparison to the allied health professions of nursing and physical therapy, athletic training is a relatively young profession having celebrated the 50 th anniversary of the NATA in 2000. During this period the professional preparation for athletic training students seeking professional certification has evolved from a system that allowed two paths of educational experiences and internships to a single path that requires graduation from an accredited athletic training education program. These programs must demonstrate adherence to strict standards and guidelines established by an external accrediting agency in order to achieve and maintain accreditation. Students in ATEPs no longer stand on sidelines just to get hours, hoping an opportunity will be provided to practice a skill learned in the classroom. Today?s programs incorporate clinical education experiences to correspond to didactic courses and laboratory sessions so students are provided opportunities to develop their clinical skills and demonstrate the mastery of clinical skills and clinical competencies in the presence of trained clinical instructors. Students graduating from these programs have demonstrated the knowledge, psycho- motor skills, and attitudes essential to practicing as an entry-level certified athletic trainer. The purpose of this study was two fold. The first purpose of this study was to gather information concerning the design of clinical education experiences in accredited 115 ATEPs. This was achieved through the use of an Internet survey questionnaire sent all accredited ATEPs. The second purpose of the study was to gather information for recent graduates of accredited ATEPs to determine what relationship might exist between clinical education experiences and performance on the written simulation and practical portions of the BOC Certification Examination. Information gathered from PDs provides clearer picture concerning the design of the clinical education experience for athletic training students. Pre-clinical experiences are often required prior to admission to the ATEP as a way to prepare students for the program. Requiring 5 to 6 academic terms of enrollment in the program and 4 to 6 clinical experiences appears to be common among programs. Most programs have students begin their clinical experiences no later than the first term of second year of enrollment at the institution. Doing so helps ensure students have the necessary time to learn, practice, and master the many clinical competencies required to complete the program. The saying, ?see one, do one, and do one more? is never more present than in the clinical education of athletic training students. Students appear to be gaining their clinical experiences in a variety of traditional and non-traditional settings. This allows students to observe and assist in the care of a greater variety of medical conditions and injuries. These experiences also help students gain a better understanding and appreciation for the diversity of the persons who are physically active. 
Results indicate programs are making diligent efforts to keep an appropriate student/ACI ratio. Seventy-three percent of the programs participating in the study indicated a ratio of 4 to 1, or less, for their students and ACIs. It is important that appropriate student/ACI ratios be maintained so students can be properly supervised, sufficient time is available for instruction, and the care of athletes/patients is not compromised.

Students tend to be evaluated by their ACI frequently throughout the academic term and over the course of enrollment in the program. These evaluations are essential to measuring and documenting the progression of students. Also, a variety of traditional evaluation instruments (i.e., oral/practical examinations, objective structured clinical examinations, and task-specific checklists) are used to document the progression of the students. However, the lack of use of technology for assessment purposes does raise some concerns about the recording of students' performance and clinical progression.

Results from the survey of athletic trainers who successfully passed the BOC Certification Examination in 2005 reveal information similar to that reported by PDs concerning the organization of the clinical education experience. Many of the participants reported beginning their clinical experiences early following initial enrollment at the institution. Participants in the study reported the ATEP required 5 to 6 semesters to complete and generally included 6 clinical experiences. Clinical experiences were spread among a variety of traditional and non-traditional athletic training settings, although the evaluation of clinical competencies took place in a limited number of settings. Participants reported their clinical proficiencies were evaluated 2 to 3 times per academic term, for a total of 8 or more formal evaluations over the completion of the ATEP. Oral/practical examinations and task-specific checklists were commonly used for evaluating clinical skills. Also, many of the survey participants reported a combination of positive and negative feedback was provided by the ACI following evaluation.

Statistical analysis indicated there are several variables that may potentially influence passing the written simulation and practical portions of the certification examination on the first attempt. The number of clinical settings students are rotated through during their clinical experiences appears to have a possible influence on performance on the certification examination. It is reasonable to conclude that placement in a greater number of clinical settings (more than 2) would be a positive factor for performance on the WS and PR examinations. Clinical education experiences in a greater number of clinical settings provide additional hands-on experiences in which students can apply the knowledge and skills learned in the classroom and laboratory sessions. Increased clinical experiences and increased opportunities for application would hopefully yield improved performance on the WS and PR parts of the certification examination.

Two additional factors that appear to influence passing of the WS and PR portions of the examination are GPA and the starting term for clinical experiences (FCEX). GPA has consistently been shown to be a good predictor of success on examinations and even of performance in specific educational programs. It is well accepted that students with higher GPAs tend to demonstrate better performance. This was also found to be true for this study.
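The GPA finding reported in Table 4.15 rests on a Pearson chi-square test of a cross-tabulation of GPA categories against first-attempt passing. As an illustration of that computation, the Python sketch below runs the same kind of test on an invented contingency table; the counts are hypothetical and are not the study's data.

from scipy.stats import chi2_contingency

# Hypothetical counts only: rows are GPA categories from the questionnaire
# (2.0-2.2 up through 3.5 or higher); columns are passed / did not pass both
# the WS and PR on the first attempt.
counts = [
    [ 3, 10],
    [ 8, 18],
    [14, 22],
    [22, 20],
    [28, 17],
    [31, 10],
]

chi2, p, dof, expected = chi2_contingency(counts)
print(chi2, dof, p)  # a small p-value indicates GPA and first-attempt passing are associated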
Athletic training students with higher GPAs tended to perform better on the WS and PR exams. Also, a relationship appears to exist between the beginning of clinical experiences and performance on the WS and PR exams. Students who being their clinical experiences early following initial enrollment to the institution tend to do better of these 118 part of the certification examination. An early start with clinical experiences allows for greater time to acquire the knowledge and clinical competencies essential practicing as an entry-level certified athletic trainer. Mastering the clinical competencies required for certification is no easy task. Having a greater length of time (i.e. 6 academic terms) to learn, practice, and master the identified clinical competencies appears to be beneficial to students. Much has been learned from this study. From a practical standpoint, it appears several factors should be considered in the designing of an ATEP. Students should be allowed to begin their clinical experiences early (first or second semester of first year of enrollment) so they will have more time (academic terms) for the completion of clinical experiences. This will provide a greater time frame for the learning, practicing, evaluation, and mastering of clinical competencies. Students should be rotated through more than 2 clinical settings in an effort to provide increased opportunities for interaction with a more diverse population of physically active participants. Students should be encouraged to excel academically, doing their very best in all courses. Also, academic plans of study should be designed so students can effectively manage the academic challenges of the athletic training curriculum and the participation requirements of the clinical education experiences. Hopefully this information will be useful to programs directors in efforts to design clinical education experiences to meet the education goals of athletic training students. This can be considered the beginning to the study of an area of athletic training education that has received very little formal investigation. Also, as curriculums change in efforts to promote learning and the preparation of athletic trainers, the role of the clinical education 119 experience will also change. Additional research is needed in the application of technology for the evaluation of students and the better use of simulated patients for evaluation purposes. Athletic training education appears to be headed on the right course for the preparation of these allied health professionals. It is essential that athletic training educators continue to question and investigate the various factors making up the clinical education experience to ensure this is an effective learning component of the athletic training education experience. 120 CUMMULATIVE BIBLIOGRAPHY Ainsworth, M. A., Rogers, L. P., Markus, J. F., Dorsey, N. K., Blackwell, T. A., & Petrusa, E. R. (1991). Standardized patient encounters: A method for teaching and evaluation. Journal of American Medical Association, 266, 1390 - 1396. Amato, H. K., Konin, J. G., and Brader, H. (2002). A model for learning over time: The big picture. Journal of Athletic Training, 37 (Supplement), S236 ? S240. Amendments to the Higher Education Act of 1965. (1998). P.L. 105-244. Retrieved August 4, 2006, from http://www.ed.gov/policy/highered/leg/hea98/index.html Aronson, B. S., Rosa, J. M., Anfinson, J., & Light, J. (1997). Teaching tools: A simulated clinical problem-solving experience. Nurse Educator, 22, 17 - 19. 
Barrows, H. S. & Abrahamson, S. (1964). The programmed patient: A technique for appraising student performance in clinical neurology. Journal of Medical Education, 39, 802 - 805. Barrows, H. S., Williams R. G., & Moy, R. H. (1987). A comprehensive performance- based assessment of fourth-year students? clinical skills. Journal of Medical Education, 62, 805 ? 80. Banta, T. W. (2001). Assessing competency in higher education. Assessing Student Competence in Accredited Disciplines. Edited by C. A. Palomba & T. W. Banta, Stylus Publishing, 1 - 10. 121 Berry, D. C., Miller, M. G., and Berry, L. M. (2004). Effects of clinical field-experience setting on athletic training students? perceived percentage of time spent on active learning. Journal of Athletic Training, 39, 176 ? 184. Borbasi, S. A. & Koop, A. (1994). The objective structured clinical examination: Its appreciation in nursing education. The Australian Journal of Advanced Nursing, 11, 33 ? 40. Bradshaw, M. J. (2001). The clinical pathway: A tool to evaluate clinical learning. Fuszard?s Innovative Teaching Strategies in Nursing (3 rd edition). Edited by A.R.Lowerstein and M. J. Bradshaw. Gaithersburg, MD: Aspen Publishers 340 - 367. Bramble, K. (1994). Nurse practitioner education: Enhancing performance through the use of the Objective Structured Clinical Assessment. Journal of Nursing Education, 33, 59 ? 6. Brown, G., Bull, J., & Pendlebury, M. (1997). Student learning. In Methods and strategies. Assessing student learning in higher education. Routledge: New York, NY. Coker, C. A. (2000). Consistency of learning style of undergraduate athletic training students in the traditional classroom versus the clinical setting. Journal of Athletic Training, 35, 441 ? 444. Cuppett, M. M. (2003). Documenting clinical skills using personal digital assistants. Athletic Therapy Today, 8, 15 ? 20. 122 Curtis, N., Helion, J. G., and Domsohn, M. (1998). Student athletic trainer perception of clinical supervisor behaviors: A critical incident study. Journal of Athletic Training, 33, 249 ? 253. Draper, D. O. (1989). Students? learning styles compared with their performance on the NATA Certification Examination. Athletic Training, JNATA, 24, 234 ? 235. Delforge, G. D. and Behnke, R. S. (1999). The history and evolution of athletic training education in the United States. Journal of Athletic Training, 34, 53 - 61. Duerson, M. C., Romrell, L. J., and Stevens, C. B. (2000). Impacting faculty teaching and student performance: Nine years? experience with the objective structured clinical examination. Teaching and Learning in Medicine, 12, 176 ? 182. Fuller, D. (1997). Critical thinking in undergraduate athletic training education. Journal of Athletic Training, 32, 242 ? 247. Gallagher, T. H., Pantilat, S. Z., Lo, B., & Papadakis, M. A. (1999). Teaching medical students to discuss advance directives: A standardized patient curriculum. Teaching & Learning in Medicine, 11, 142 - 147. Gardner, G. and Harrelson, G. L. (2002). Situational teaching: Meeting the needs of evolving learners. Athletic Therapy Today, 7, 18 ? 22. Gomez, D. A., Lobodzinski, S., & Hartwell, C. D. (1998). Evaluating clinical performance. Teaching in Nursing: A Guide for Faculty. edited by D. M . Billings and J. A. Holstead, Philadelphia, PA :W. B. Saunders, 407 - 421. Graf, M. A. (1993). Videotaping return demonstrations. Nurse Educator, 18, 29. 123 Haessig, C. & LaPotin, A. (2000). Outcomes assessment for dietetics educators. 
Chicago, IL: Commission of Accreditation for Dietetics Education, The American Dietetics Association. Hanna, D. R. (1991). Using simulations to teach clinical nursing. Nurse Educator, 16, 28 - 31. Hannam, S. E. (1995). Portfolios: An alternative method of student and program assessment. Journal of Athletic Training, 30, 338 ? 341. Harden, R., Stevenson, M, Downie, W. W., Wilson, G. M. (1975). Assessment of clinical competence using objective structured examination. British Medical Journal, 1, 447 - 451. Harrelson, G. L., Gallaspy, J. B., Knight, H. V., and Leaver-Dunn, D. (1997). Predictors of success on the NATABOC Certification Examination. Journal of Athletic Training, 32, 323 ? 327. Heath, J. (1983). Gaming / simulation in nurse education. Nurse Education Today, 3, 92 - 95. Hodder, R. V., Rivington, R. N., Calcutt, L. E., & Hart, I. R. (1989). The effectiveness of immediate feedback during the Objective Structured Clinical Examination. Medical Education, 23, 184 - 188. Johnson, J. H., Zerwic, J. J., & Theis, S. L. (1999). Clinical simulation laboratory: An adjunct to clinical teaching. Nurse Educator, 24, 37 - 41. 124 Jones, E. & Voorhees, R. (2002). Defining and assessing competencies: Exploring data ramifications of competency-based initiatives. National Post-secondary Education Cooperative. Retrieved August 4, 2006, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2002159. Lamond, D., Crow, R., Chase, J., Doggen, K., & Swinkels, M. (1996). Information sources used in decision making considerations for simulation development. International Journal of Nursing Studies, 33, 47 - 57. Laurent, T. and Weidner, T. G. (2001). Clinical instructors? and student athletic trainers? perceptions of helpful clinical instructor characteristics. Journal of Athletic Training, 36, 58 ? 61. Matthew, R. and Viens, D. C. (1988). Evaluating basic nursing skills through group video testing. Journal of Nursing Education, 27, 44 - 46 Makar, Joos I. (1984). A teacher?s guide for using games and simulations. Nurse Educator, 9, 3, 25 - 29. Mensch, J. M. and Ennis, C. D. (2002). Pedagogic strategies perceived to enhance student learning in athletic training education. Journal of Athletic Training, 37 (Supplement), S199 ? S207. Meyer, L. S. (2002). Leadership characteristics as significant predictors of clinical- teaching effectiveness. Athletic Therapy Today, 7, 34 ? 39. Miller, M. G. and Berry, D. C. (2002). An assessment of athletic training students? Clinical-placement hours. Journal of Athletic Training, 37 (Supplement), S299 ? S235. 125 Miller, H. K., Nichols, E., and Beeken, J. E. (2000). Comparing videotaped and faculty- present return demonstrations of clinical skills. Journal of Nursing Education, 39, 237 ? 239 Clinical Education Terms (2006). NATA Education Council. Retrieved August 4, 2006 http://www.nataec.org/html/clinical_education_definitions.html Norman, G. R., Tugwell, P., & Feightner, J. W. (1982). A comparison of resident performance on real and simulated patients. Journal of Medical Education, 57, 708 - 715. O?Shea, M. E. (1980). A history of the National Athletic Trainers? Association. Greenville, NC: National Athletic Trainers? Association. Orchand, C. (1994). The nurse educator and the nursing student: A review of the issue of clinical evaluation procedures. Journal of Nursing Education, 33, 245 - 251. Petrusa, E. R., Blackwell, T. A., Rogers, L. P., Saydjari, C., Parcel, S., & Guckian, J. C. (1987). An objective measure of clinical performance. 
American Journal of Medicine, 83, 34 - 42. Palomba, C. A. (2001). Implementing effective assessment. Assessing Student Competence in Accredited Disciplines. Edited by C. A. Palomba & T. W. Banta, Stylus Publishing, 13 - 28 Roberts, J. D., While, A. E., & Fitzpatrick, J. M. (1992). Simulation: Current status in nurse education. Nurse Education Today, 12, 409 - 415. Ross, M., Carroll, G., Knight, J., Chamberlain, M., Fothergill-Bourbonnais, R., & Linton, J. (1988). Using the OSCE to measure clinical skills performance in nursing. Journal of Advanced Nursing, 13, 45 - 56. 126 Standards for the Accreditation of Entry-Level Athletic Training Education Programs. (2005). Commission of Accreditation of Athletic Training Education. Stillman, P. L. & Swanson, D. B. (1987). Ensuring the clinical competence of medical school graduates through standardized patients. Archives of Internal Medicine, 147, 1049 - 1052. Stillman, P. L., Regan, M. B., Swanson, D. B., Case, S. McCahan, J., Feinblatt, J., Smith, S. R., Willms, J., & Nelson, D. V. (1990). An assessment of the clinical skills of fourth-year students at four New England medical schools. Academic Medicine, 65, 320 - 326. Swann, E. (2002). Communicating effectively as a clinical instructor. Athletic Therapy Today, 7, 28 ? 33. Turocy, P. S., Comfort, R. E., Perrin, D. H., and Gieck, J. H. (2000). Clinical experiences Are not predictive of outcomes on the NATABOC Examination. Journal of Athletic Training, 35, 70 ? 75. Tracy, S. M., Marino, G. J., Richo, K. M., & Daly, E. M. (2000). The clinical achievement portfolio. Nurse Educator, 25, 241 ? 246. Wales, M. A. and Skillen, D. L. (1997). Using scenarios as a teaching method in teaching health assessment. Journal of Nursing Education, 36, 256 ? 262. Weidner, T. G. and August, J. A. (1997). The athletic therapist as clinical instructor. Athletic Therapy Today, 2, 49 ? 52. Weidner, T. G. & Henning, J. M. (2002). Historical perspective of athletic training clinical education. Journal of Athletic Training, 37 (Supplement), S222 ? S228. 127 Wheeler, P. & Haertel, G. D. (1993). Resource handbook on performance assessment and measurement: A tool for students, practitioners, and policymakers. Berkeley, CA: The Owl Press. Williams, J. (2001). The clinical notebook: Using student portfolios to enhance clinical teaching learning. Journal of Nursing Education, 40, 135 ? 137. 128 APPENDIX 129 Survey of Evaluation Methods used for the Assessment of Clinical Proficiencies The purpose of the following survey is to collect information relative to the methods used to evaluate students during the clinical experience and the frequency that various evaluation instruments are employed relative to the assessment of the athletic training students. Complete the following items relative to the Athletic Training Education Program: 1. Indicate the type of Athletic Training Education Program at your institution: ___ CAAHEP Accredited Entry-Level Program [undergraduate program] ___ CAAHEP Accredited Entry-Level Program [graduate program] 2. Indicate the current intercollegiate affiliation of the athletic program: ___ NCAA Division I __ NCAA Division II __ NCAA Division III __ NAIA 3. Are students required to complete a pre-clinical experience prior to enrollment in the formal clinical experience? __ Yes __ No If you answered No to question 3,explain the criteria for admission into the program. Explain: 4. Indicate the type of academic term at your institution. __ Semesters __ Quarters 5. 
Indicate the number of terms required to complete the programs once the student has been formally admitted into the ATEP. __ 4 __ 5 __ 6 __ 7 __ 8 __ 9 __ more than 9 6. Indicate the number of academic terms in which the student is receiving credit for clinical experience. __ 4 __ 5 __ 6 __ 7 __ 8 __ 9 __ more than 9 130 7. Indicate the first term that the student is allowed to begin his/her clinical experiences following admission into the program: __ First Year of Enrollment __ First Term __ Second Term __ Third Term __ Second Year of Enrollment __ First Term __ Second Term __ Third Term __ Third Year of Enrollment __ First Term __ Second Term __ Third Term 8. Are clinical experiences awarded academic credit? __ Yes __ No If you answered No to question 8,explain how the clinical experience is figured into the academic performance of the student. Explain: 9. If you answered Yes to question 8, indicate the type of course in which clinical experience is a requirement for the course. __ Didactic course __ Didactic course with a lab __ Clinical course __ Laboratory course __ Other-Explain Explain: 10. Indicate the average number of students per year admitted into the program over the past three (3) years. __ 2 __ 3 __ 4 __ 5 __ 6 __ 7 __ 8 __ 9 __ 10 __ more than 10 11. Indicate the average number of students per year graduating from the program over the past three (3) years: __ 2 __ 3 __ 4 __ 5 __ 6 __ 7 __ 8 __ 9 __ 10 __ more than 10 12. Indicate the number of students graduating from the institution's athletic training program during the past three (3) years who successfully passed all portions of the BOC examination on the first attempt: __ 0 __ 1 __ 2 __ 3 __ 4 __ 5 __ 6 __ 7 __ 8 __ 9 __ 10 __ more than 10 __ Do not know 131 13. Indicate the number of students graduating from the institution's athletic training program during the past three (3) years who were required to retake the Written Simulation portion of the BOC examination: __ 0 __ 1 __ 2 __ 3 __ 4 __ 5 __ 6 __ 7 __ 8 __ 9 __ 10 __ more than 10 __ Do not know 14. Indicate the number of students graduating from the institution's athletic training program during the past three (3) years who were required to retake the Practical portion of the BOC examination: __ 0 __ 1 __ 2 __ 3 __ 4 __ 5 __ 6 __ 7 __ 8 __ 9 __ 10 __ more than 10 __ Do not know 15. Indicate the number of students graduating from the institution's athletic training program during the past three (3) years who have been required to retake the Written Simulation portion of the BOC examination two (2) or more times: __ 0 __ 1 __ 2 __ 3 __ 4 __ 5 __ 6 __ 7 __ 8 __ 9 __ 10 __ more than 10 __ Do not know 16. Indicate the number of students graduating from the institution's athletic training program during the past three (3) years who have been required to retake the Practical portion of the BOC examination two (2) or more times: __ 0 __ 1 __ 2 __ 3 __ 4 __ 5 __ 6 __ 7 __ 8 __ 9 __ 10 __ more than 10 __ Do not know 17. Indicate the number of students graduating from the institution's athletic training program during the past three (3) years who did not make application to take the BOC examination: __ 0 __ 1 __ 2 __ 3 __ 4 __ 5 __ 6 __ 7 __ 8 __ 9 __ 10 __ more than 10 __ Do not know Complete the following items relative to the use of Approved Clinical Instructors (ACI) in the program. 1. Indicate the total number of ACIs used in the program. __ 3 __ 4 __ 5 __ 6 __ 7 __ 8 __ 9 __ 10 or more 2. 
Indicate the average number of students assigned to an ACI for assessment purposes. __ 1 __ 2 __ 3 __ 4 __ 5 __ 6 __ 7 __ 8 3. Indicate the average number of years of experience as a certified athletic trainer of the ACIs used in your program. __ 1 __ 2 __ 3 or more 132 4. Are graduate assistant certified athletic trainers allowed to serve as an ACI? __ Yes __ No 5. If you answered Yes to question 4, indicate the minimum number of years of certified experience required for graduate assistant athletic trainers to be eligible to serve as an ACI. __ 1 __ 2 __ 3 or more 6. Identify each type of clinical setting in which students receive clinical experience. Also, indicate the number of ACIs directly associated with each clinical setting. __ Intercollegiate Sports __ 1 __ 2 __ 3 or more __ Intramural Sports __ 1 __ 2 __ 3 or more __ Physical Therapy Clinic __ 1 __ 2 __ 3 or more __ High School Sports __ 1 __ 2 __ 3 or more __ Clinical Sports Medicine Outreach program __ 1 __ 2 __ 3 or more __ Physician's Office __ 1 __ 2 __ 3 or more 7. Mark all statements that apply to the ACIs involved in the evaluation of the athletic training students: __ All ACIs are full-time employees of the institution. __ Clinical instructor responsibilities are included as part of the ACIs employment contract with the institution. __ All ACIs employed by the institution are identified as adjunct instructors according to the policies of the institution. __ All ACIs employed by the institution are required to have a master's degree. __ Athletic Trainers not employed by the institution are used as ACIs __ ACIs not employed by the institution are compensated by their employer for serving in this role. __ ACIs not employed by the institution receive compensation from the institution, however, not as an adjunct instructor. __ ACIs not employed by the institution receive CEU credits (at no expense) through participation in instructor workshops. __ ACIs not employed by the institution receive no compensation or CEU credits for their assistance to the program 133 Complete the following items relative to the assessment of the clinical proficiencies of the students. Clinical proficiencies are defined as a specific skills and tasks that are identified in the NATA Athletic Training Educational Competencies manual as essential skills that entry- level athletic trainers should possess. 1. Identify the setting in which the student's clinical proficiencies are first assessed by the ACI using a standardized evaluation instrument (mark only one response): __ Laboratory session __ Clinical experience __ Field experience 2. Identify the number of times per academic term that the student's clinical proficiencies are assessed by the ACI using a standardized evaluation instrument: __ 1 __ 2 __ 3 __ 4 or more 3. Indicate the setting(s) in which the student's mastery of clinical proficiencies is to be demonstrated: __ Clinical experience __ Field experience __ Mock examination __ Year-end examination 4. Identify all types of standardized evaluation instruments used in the assessment of the student's clinical proficiencies : __ General Check List of Clinical Skills __ Task Specific Check List __ Video Taping __ Oral/Practical Simulations __ Objective Structured Clinical Evaluation __ Other-Explain Explain: 134 5. For the assessment instrument listed below, indicate each educational domain in which the instrument is used for the assessment of clinical proficiencies. 
General Check List of Clinical Proficiencies __ Risk Management __ Assessment/Evaluation __ Acute Care __ Pharmacology __ Therapeutic Modalities __ Therapeutic Exercise __ General Medical Conditions __ Nutrition __ Psychological Intervention/Referral __ Health Care Administration __ Professional Development Task Specific Check List __ Risk Management __ Assessment/Evaluation __ Acute Care __ Pharmacology __ Therapeutic Modalities __ Therapeutic Exercise __ General Medical Conditions __ Nutrition __ Psychological Intervention/Referral __ Health Care Administration __ Professional Development Video Taping __ Risk Management __ Assessment/Evaluation __ Acute Care __ Pharmacology __ Therapeutic Modalities __ Therapeutic Exercise __ General Medical Conditions __ Nutrition __ Psychological Intervention/Referral __ Health Care Administration __ Professional Development 135 Oral/Practical Simulations __ Risk Management __ Assessment/Evaluation __ Acute Care __ Pharmacology __ Therapeutic Modalities __ Therapeutic Exercise __ General Medical Conditions __ Nutrition __ Psychological Intervention/Referral __ Health Care Administration __ Professional Development Portfolio __ Risk Management __ Assessment/Evaluation __ Acute Care __ Pharmacology __ Therapeutic Modalities __ Therapeutic Exercise __ General Medical Conditions __ Nutrition __ Psychological Intervention/Referral __ Health Care Administration __ Professional Development Personal Digital Assistants __ Risk Management __ Assessment/Evaluation __ Acute Care __ Pharmacology __ Therapeutic Modalities __ Therapeutic Exercise __ General Medical Conditions __ Nutrition __ Psychological Intervention/Referral __ Health Care Administration __ Professional Development 136 Objective Structured Clinical Evaluations __ Risk Management __ Assessment/Evaluation __ Acute Care __ Pharmacology __ Therapeutic Modalities __ Therapeutic Exercise __ General Medical Conditions __ Nutrition __ Psychological Intervention/Referral __ Health Care Administration __ Professional Development 6. Are human subjects used as models for purposes of evaluating the student's level of proficiency relative to clinical tasks: __ Yes __ No If you answered No to question 6, skip to question 11. 7. Indicate all of the various types of models used in the assessment of clinical proficiencies: __ Another AT student __ Volunteer Student [none AT student] __ A Clinical Instructor __ Another ACI __ Simulated Patient/Client __ Other-Explain Explain 8. Are medical manikins used for purposes of evaluating the student's level of proficiency relative to clinical tasks? __ Yes __ No 137 9. Indicate the method used to train models that are used in the assessment of clinical skills or clinical proficiencies. __ Written script, but no training __ Written script and one day of training __ No script, but two days of training __ No script, but three days of training __ No script and no training __ Other-Explain Explain 10. Are models paid to participate in the assessment process? __ Yes __ No 11. Are models trained to provide feedback to the student following completion of the clinical task during formal evaluation? __ Yes __ No 12. Is the ACI allowed to provide feedback to the student following completion of the clinical task during formal evaluation? __ Yes __ No 13. 
If you answered Yes to the question 12, indicate the level of feedback the ACI is allowed to provide to the student: __ Positive [majority of feedback was on the things I did right] __ Negative [majority of feedback was on the things I did wrong] __ Combination [receive feedback on the things I did right as well 14. Have you ever heard of the Objective Structured Clinical Evaluation? __ Yes __ No 15. If you answered Yes to question 14 but you do not use this method for the assessment of clinical skills or clinical proficiencies, please explain why you do not use this assessment method. Explain: 138 As Program Director, complete the following items relative to your position and educational background. 1. Current Job Title: __ Program Director/Assistant Professor __ Program Director/Associate Professor __ Program Director/Professor __ Program Director/Head Athletic Trainer __ Program Director Assistant Athletic Trainer __ Program Director/Other-Explain Explain: 2. Current Job Responsibilities [Assign a percentage to each of the following items. The total must not exceed 100%]: __ Administration __ Academic Instruction __ Academic Advising __ Instruction of Clinical Skills/Proficiencies [laboratory classes] __ Assessment of Clinical Skills/Proficiencies __ Supervision of Student Clinical/Field Experiences [practices, games, etc.] __ Coverage of athletic events [practices, games, etc.] __ Research __ Other - Explain Explain: 3. Indicate the number of years of experience as a NATA-BOC Certified Athletic Trainer: __ less than 3 __ 3 - 5 __ 6 - 8 __ 9 - 11 __ 12 - 14 __ 15 - 17 __ 18 - 20 __ 21 or more 139 4. Indicate the educational setting that you completed in order to be eligible for NATA Board of Certification: __ CAAHEP Accredited Entry-Level Athletic Training Education Program __ NATA Approved Graduate Entry-Level Athletic Training Education Program __ Internship 5. Indicate the highest academic degree you have earned: __ Master's Degree __ Doctorate Degree 6. Indicate the area of academic preparation for the highest degree you have earned: __ Athletic Training __ Educational Instruction __ Educational Leadership __ Exercise Science __ Teacher Certification-Physical Education __ Health Science __ Higher Education Administration __ Other-Explain Explain: 7. If you completed graduate studies in athletic training, Indicate the type of graduate program you completed and graduate assistant experience: __ NATA Accredited Graduate Athletic Training Education Program with Athletic Training Assistantship __ NATA Accredited Graduate Athletic Training Education Program without Athletic Training Assistantship __ Athletic Training Assistantship [not an NATA Accredited Athletic Training Education Program] __ Other-Explain Explain: 8. Indicate your gender: __ Male __ Female 140 Cover Letter to Program Directors Dear Program Director, As a graduate student at Auburn University I am in the process of completing my studies in Human Performance. As the former director of an athletic training education program I have become increasing interested in the assessment of athletic training students during their clinical experiences. A search of the literature has shown that there is little published information relative to methods used to assess these students during this very important component of their educational experience. 
Due to accreditation standards that require the evaluation of students during all phases of the educational experience, it is apparent that there is a need for information to be gathered regarding methods used to assess the proficiency levels achieved by students relative to clinical skills/competencies. It is hopeful that information gained from this study will benefit program directors in the assessment of students during the clinical experience. The purpose of this letter is to invite you to participate in this survey of methods used to assess the proficiency level achieved by students during clinical experiences. The questionnaire is easy to complete and will take only 10 to 15 minutes to complete. Also, you will not be asked to supply your name, email address, BOC number, or the name of your institution on the survey. The following link (https://mars.aum.edu/secure/searcy/PDConsentForm.html) will direct you to the Informed Consent page that will describe the study in full detail. After reading the consent form, simply click the ?Continue? button at the bottom of the consent page that will link you to the questionnaire. If you have any questions, comments, or technical difficulties, please contact me at 334.833.4267, or contact the chair of my dissertation committee, Dr. Peter Hastie at 334.844.1469 I want to thank you an advance for your willingness to participate in this study. It is hopeful that the information gained from this study will benefit all involved with the education of students desiring to enter the profession of athletic training. Respectfully, Mr. Shelby Searcy, M.Ed., ATC Graduate Student Auburn University 334.833.4267 ssearcy@huntingdon.edu 141 An Assessment of Methods used to Evaluate Athletic Training Students During the Clinical Experience and Performance on the BOC Examination Informed Consent Please read this document carefully before you decide to participate in this study. Purpose of the research project: The purpose of this study is to gain information relative to methods used to assess the level of proficiency achieved by the athletic training students relative to clinical skills/tasks evaluated during the clinical experience. You were selected to participate in this study because you are the Program Director of an accredited program. What will you be asked to do in this project: You are asked to complete a 47 item questionnaire. The survey contains questions designed to gather information relative to the methods used to evaluate the level of clinic proficiency achieved by the athletic training students, frequency of evaluations, educational and professional characteristics of the ACI, recent performance of students on the BOC Certification Examination, and educational and career background of the program director. Time required: Approximately 15 minutes Risks: There are no risks to the participant in the study. The participant should feel assured that all information will remain confidential. Benefits: The information collected in this study will hopefully help identify current trends and methods relative to the assessment of the level of proficiency achieved by the athletic training students during clinical experiences and information relative to those involved in the assessment of the students. 142 Compensation: There is no compensation for participation in this study. However, your voluntary participation is greatly appreciated. Confidentiality: All information you submit will be recorded and analyzed anonymously. 
Neither your name nor name of the institution will be recorded. Due to this arrangement you may receive additional requests for participation in the study following the initial mail-out in an effort to increase response rate. If you have already responded to the study, please ignore additional requests to participate in the survey. Voluntary participation: Your participation in this study is completely voluntary. You are under no obligation by your employer, professional membership or certification organizations, or accrediting organization to participate in this study. Right to withdraw from the study: You have the right to withdraw from the study and withdraw any data you provided that is identifiable with you at any time without consequences. Whom to contact if you have questions about the study: Mr. Shelby Searcy, M.Ed., ATC Auburn University Doctoral Candidate Dept. of Health & Human Performance 334.833.4267 ssearcy@huntingdon.edu Dr. Peter Hastie, Ph.D Auburn University Assistant Professor Dept. of Health & Human Performance 334.844.1469 For more information regarding your rights as a research participant you may contact Auburn University Office of Human Subjects Research or the Institutional Review Board by phone (334) 844-5966, or e-mail at hsubject@auburn.edu or IRBChair@auburn.edu. 143 Agreement: I have read the information relative to this research study. Continuing forward verifies my willingness to voluntarily participate in the study Survey Link 144 Follow-up Notification to Program Directors Dear Program Director: The following is a friendly reminder of the call for participation in a research questionnaire that was recently emailed to you. I am again requesting your assistance as an athletic training educator in gathering information relative to the assessment of athletic training students during the clinical experience. I kindly ask that you take a few minutes to complete the questionnaire that is part of my dissertation project. The questionnaire is easy to complete and will take only 10 to 15 minutes to complete. Also, you will not be asked to supply your name, email address, BOC number, or the name of your institution on the survey. The following link (https://mars.aum.edu/secure/searcy/PDConsentForm.html) will direct you to the Informed Consent page that will describe the study in full detail. After reading the consent form, simply click the ?Continue? button at the bottom of the consent page that will link you to the questionnaire. If you have any questions, comments, or technical difficulties, please contact me at 334.833.4267, or contact the chair of my dissertation committee, Dr. Peter Hastie at 334.844.1469 I would greatly appreciate your response to this request. If you have already completed and submitted the questionnaire, I want to thank you for doing so and please accept my apology for this reminder. Respectfully, Mr. Shelby Searcy, M.Ed., ATC Graduate Student Auburn University 334.833.4267 ssearcy@huntingdon.edu 145 An Assessment of Methods used to Evaluate Athletic Training Students During the Clinical Experience and Performance on the BOC Examination The purpose of the following survey is to collect information relative to the methods used to evaluate students during the clinical experience. The survey is intended to gather information related to various evaluation instruments used to evaluate the student's level of proficiency regarding the performance of clinical skills/tasks. 
Complete the following items relative to the Athletic Training Education Program you completed: 1. Indicate the type of Athletic Training Education Program (ATEP) at your institution ___ CAAHEP Accredited Entry-Level Program [undergraduate program] ___ CAAHEP Accredited Entry-Level Program [graduate program] 2. Indicate the current intercollegiate affiliation of the athletic program: ___ NCAA Division I ___ NCAA Division II ___ NCAA Division III ___ NAIA 3. Was admission into the program competitive? ___ Yes ___ No 4. Did you complete a pre-clinical experience prior to admission into the ATEP? ___ Yes ___ No 5. If you answered ?Yes? to question 4, indicate the number of contact hours you were required to complete during the pre-clinical experience ___ less than25 ___ 25 to 50 ___ 51 to 75 ___ 76 to 100 ___ greater than 100 6. Indicate the type of academic term and number of terms required to complete the program: ___ Semesters ___ Quarters __ 4 __ 5 __ 6 __ 7 __ 8 __ 9 146 7. Indicate the number of clinical experiences you were required to complete during your enrollment in the ATEP. A clinical experience requires that you be assigned to a clinical instructor and your clinical skills were evaluated by an approved clinical instructor (ACI (Note: Even though your clinical experience may have required you to be involved with more than one athletic team or assigned to more than one clinical setting, the experience counts as a single clinical experience.) __ 4 __ 5 __ 6 __ 7 __ 8 __ 9 8. Did you receive academic credit for each clinical experience? ___ Yes ___ No 9. If you answered ?Yes? to question 8, indicate the type of courses in which clinical experience is a requirement for the course ___ Didactic course ___ Didactic course with a lab ___ Clinical course ___ Laboratory course ___ Other-Explain: ________________________________________________________________ 10. Indicate the first term in which you were allowed to begin your clinical experiences following admission into the program: ___ First Year of Enrollment __First Term __Second Term __Third Term ___ Second Year of Enrollment __First Term __Second Term __Third Term ___ Third Year of Enrollment __First Term __Second Term __Third Term 11. Indicate all of the settings in which you received clinical experience: ___ Intercollegiate athletes ___ Intramural sports ___ High school athletes ___ Physical Therapy clinic ___ Sports medicine outreach program ___ Physician's office 147 12. Indicate the first term in which your level of proficiency, relative to clinical skills, was first formally evaluated by an approved clinical instructor (ACI). ___ First Year of Enrollment __First Term __Second Term __Third Term ___ Second Year of Enrollment __First Term __Second Term __Third Term ___ Third Year of Enrollment __First Term __Second Term __Third Term 13. Indicate the type of setting in which your level of proficiency, relative to the majority of clinical skills, was first evaluated by an ACI (mark only one response) ___ Laboratory session ___ Clinical experience ___ Other - Explain: ________________________________________________________________ 14. Indicate all of the settings in which your level of proficiency, relative to clinical skills, was evaluated by an approved clinical instructor (ACI): ___ Intercollegiate athletics ___ Intramural sports ___ High school athletes ___ Physical therapy clinic ___ Sports medicine outreach program ___ Physician's office 15. 
15. Indicate the setting in which your mastery of clinical proficiencies, relative to the majority of clinical skills, was evaluated by an ACI (mark only one response):
___ Laboratory session ___ Clinical experience ___ Mock examination ___ Year-end examination
___ Other - Explain: ________________________________________________________________

16. Identify all of the types of standardized evaluation instruments used to evaluate your proficiency relative to clinical skills:
___ General check-list of clinical skills
___ Task-specific check-list of a specific clinical skill
___ Videotaping of your performance of a clinical skill
___ Oral/Practical simulation involving you, a model, and the ACI
___ Multiple objective structured stations in which you respond to an identified task in the presence of the evaluating ACI
___ Other - Explain: ________________________________________________________________

17. How many times per academic term were formal evaluations with an ACI scheduled to evaluate your proficiency in the performance of specific clinical skills?
__ 1 __ 2 __ 3 __ more than 3

18. Over the course of your enrollment in clinical courses, how many times were formal evaluations with an ACI scheduled to evaluate your proficiency regarding clinical skills?
__ 4 __ 5 __ 6 __ 7 __ 8 __ more than 8

19. Were you evaluated by the ACI on a one-on-one basis when assessed relative to your mastery of clinical proficiencies?
___ Yes ___ No
If you checked "No", please explain how you were evaluated.
Explain: _________________________________________________________________

20. Did the evaluating ACI provide feedback to you following the evaluation of your level of proficiency relative to identified clinical skills?
___ Yes ___ No

21. If you answered "Yes" to question 20, indicate the type of feedback provided by the ACI:
___ Positive [majority of feedback was on the things I did right]
___ Negative [majority of feedback was on the things I did wrong]
___ Combination [received feedback on the things I did right as well as the things I did wrong]

22. What was your overall grade point average (GPA) at the time of graduation?
___ 2.0 - 2.2 ___ 2.3 - 2.5 ___ 2.6 - 2.8 ___ 2.9 - 3.1 ___ 3.2 - 3.4 ___ 3.5 or higher

23. Indicate your gender:
___ Male ___ Female

24. Indicate the NATA District in which the institution you graduated from is located:
___ District 1 [CT, MA, ME, NH, VT]
___ District 2 [DE, NJ, NY, PA]
___ District 3 [MD, NC, SC, VA, WV, DC]
___ District 4 [IL, IN, MI, MN, OH, WI]
___ District 5 [IA, KS, MO, ND, NE, OK, SD]
___ District 6 [AR, TX]
___ District 7 [AZ, CO, NM, UT, WY]
___ District 8 [CA, HI, NV]
___ District 9 [AL, FL, GA, KY, LA, MS, TN]
___ District 10 [AK, ID, MT, OR, WA]

Complete the following items relative to your performance on the BOC Examination:

Your exam results will be held strictly confidential. Your results will only be used to determine if a relationship exists between the methods and frequency of evaluations used to assess clinical proficiencies during the clinical experience and performance on the certification examination.

Please provide the score reported by the BOC office when you were informed that you had passed the Written Simulation portion of the certification examination: __________
Did you pass the Written Simulation portion of the examination on your first attempt?
Yes ___ No ___
If you did not pass the Written Simulation portion on the first attempt, please indicate the number of times you retook this portion before achieving a passing score:
___ 1 time ___ 2 times ___ more than 2 times

Please provide the score reported by the BOC office when you were informed that you had passed the Practical portion of the certification examination: __________
Did you pass the Practical portion of the examination on your first attempt? Yes ___ No ___
If you did not pass the Practical portion on the first attempt, please indicate the number of times you retook this portion before achieving a passing score:
___ 1 time ___ 2 times ___ more than 2 times

Please provide the score reported by the BOC office when you were informed that you had passed the Written portion of the certification examination: __________
Did you pass the Written portion on your first attempt? Yes ___ No ___
If you did not pass the Written portion on the first attempt, please indicate the number of times you retook this portion before achieving a passing score:
___ 1 time ___ 2 times ___ more than 2 times

The results you have reported above will be sent to the Board of Certification office for confirmation. The following information is necessary to ensure accuracy in confirming your self-reported results and scores. I wish to assure you that all information will remain strictly confidential. Only the BOC data specialist, the computer specialist assisting with this project, and I will see this information. All information will remain in a locked filing cabinet until the project is completed. When the project is completed, this information will be destroyed. I kindly ask that you supply as much of the requested information as possible.

Name (print): First _______________________ MI ___ Last ________________________
BOC Examination Number: _________________
NBC Number: _________________
Date of Birth (Month / Day / Year): ____ / ____ / ____
Date you completed passing all portions of the certification exam: _________________
Last four (4) digits of Social Security Number: _________________

Thank you for taking the time to complete this survey questionnaire.

Cover Letter to Certified Athletic Trainers

April 10, 2006

Dear Certified Athletic Trainer:

Your name has been randomly selected by the BOC office from a list of all 2005 certification exam examinees to participate in this research project. The purpose of this letter is to invite you to participate in a research investigation designed to gather information relative to the methods used in the assessment of athletic training students during their clinical experiences and performance on the BOC Certification Examination. There is very little information relative to this subject, and it is hoped that information from this study will benefit athletic training students, program directors, and clinical instructors. The project will culminate in a doctoral dissertation.

To be eligible to participate in this study you must meet the following criteria:
• Graduation from a CAAHEP-accredited entry-level ATEP during the period between December 2004 and December 2005;
• Passing of all portions of the BOC Examination.

The questionnaire will take approximately 10 minutes or less to complete. There are two ways in which you can respond to this request:
• Complete the enclosed survey questionnaire and return it in the self-addressed, stamped envelope.
• Go on-line and type the web address listed below into your browser.
The address will take you to the participant consent form, which has a link to the questionnaire at the bottom of the page.
https://mars.aum.edu/secure/searcy/default.html

I kindly ask that you complete the survey and mail it to me as soon as possible. Surveys returned by mail should be postmarked no later than May 15, 2006. If you are pressed for time, the web survey can be completed and submitted in a few minutes. The web survey is hosted on a secure, fully protected site. The on-line survey will be removed at 5:00 pm on May 15, 2006.

I wish to assure you that all information will be kept strictly confidential. You will be asked to identify your name, DOB, BOC examination number, NBC identification number, the date you passed the exam, and the last four digits of your social security number. This information will be used by the administration at the BOC office to verify your reported examination results. You will be asked to report your exam results and examination scores. This information will not be published in the thesis or in any follow-up presentations or reports. Information that you provide will only be used to determine if there is a relationship between the methods and frequency of evaluation used to assess the proficiency level of students during clinical experiences and performance on the certification examination.

I want to thank you in advance for your participation in this survey. I wish you only the best as you pursue a career in the profession of athletic training. If you have any questions relative to this study, or technical difficulties with completing the web-based survey, please feel free to contact me at 334.833.4267 or ssearcy@huntingdon.edu.

Sincerely,
Shelby Searcy, M.Ed., ATC
Doctoral Candidate
Auburn University
Dept. of Health and Human Performance

An Assessment of Methods used to Evaluate Athletic Training Students During the Clinical Experience and Performance on the BOC Examination

Informed Consent

Please read this document carefully before you decide to participate in this study.

Purpose of the research study: The purpose of this study is to gain information relative to the methods used to assess the clinical proficiencies of students in athletic training education programs (ATEP). You have been selected to participate because you graduated from an accredited ATEP and you have passed the BOC Examination.

What will you be asked to do? You are asked to complete a 33-item questionnaire. The survey contains questions relative to the methods used to evaluate the clinical proficiencies of students during the clinical experience.

Time required: No more than 10 minutes.

Risks: There are no risks to the participant in the study. The participant should feel assured that all information will remain confidential.

Benefits: The information collected in this study will hopefully help identify current trends and methods used to assess the clinical proficiencies of athletic training students and performance on the BOC Certification Examination. The completed study will assist in identifying the following: the methods used to evaluate the clinical proficiencies of students; the frequency of assessment; who is involved in the assessment of the student; and whether a relationship exists between the assessment methods employed during the clinical experience and performance on the certification examination.

Compensation: There is no compensation for participation in this study. However, your voluntary participation is greatly appreciated.
Confidentiality: Your identity will be kept confidential. You will be asked to provide your name, DOB, the date you passed the exam, BOC examination number, NBC identification number, and the last four digits of your social security number. This information will only be used to verify your reported examination results with the BOC office. This information will be kept separate from all other reported information and will be kept in a locked filing cabinet at all times. This information will not be published in the thesis or in any follow-up presentations or reports. Information that you provide will only be used to determine if there is a relationship between the methods and frequency of evaluation used to assess the proficiency level of students during clinical experiences and performance on the certification examination.

Voluntary participation: Your participation in this study is completely voluntary. You are under no obligation by your former institution, the certification organizations, or accrediting organizations to participate in this study. You may receive additional requests for participation in the study following the initial mail-out, in an effort to increase the response rate. If you have already responded to the study, please ignore any additional requests.

Right to withdraw from the study: You have the right to withdraw from the study, and to withdraw any data you provided that is identifiable with you, at any time without consequences.

Whom to contact if you have questions:
Mr. Shelby Searcy, M.Ed., ATC
Auburn University
Doctoral Candidate
Dept. of Health & Human Performance
334.833.4267
ssearcy@huntingdon.edu

Dr. Peter Hastie
Auburn University
Assistant Professor
Dept. of Health & Human Performance
334.844.1469

For more information regarding your rights as a research participant, you may contact the Auburn University Office of Human Subjects Research or the Institutional Review Board by phone at (334) 844-5966 or by e-mail at hsubject@auburn.edu or IRBChair@auburn.edu.

Agreement to voluntarily participate in the study: I have read the information relative to this research study. Continuing forward verifies my willingness to voluntarily participate in the study. Submission of the completed survey questionnaire confirms voluntary participation in the study.

Follow-up Post Card to Certified Athletic Trainers

Date: April 25, 2006

Dear Certified Athletic Trainer:

I want to thank you for completing the survey questionnaire you were recently mailed regarding the assessment of the athletic training student during the clinical experience and performance on the BOC Examination. I understand that your schedule is often very busy, and I appreciate the time and effort you have taken to complete the questionnaire. Your participation shows a shared concern for improving the assessment and education of athletic training students.

If you have not yet completed the questionnaire, I kindly ask that you do so and return it by May 15. You may complete and return the questionnaire that was mailed to you, or you can complete the questionnaire on-line at https://mars.aum.edu/secure/searcy/default.html. If you have misplaced the survey and need another hard copy, I will gladly send you one immediately. Also, if you do not know your BOC Exam Number or NBC Number, kindly supply the other requested information and submit the survey. Having as many returned surveys as possible is essential to the completion and success of this research project.
Please contact me (334.833.4267) by May 8 if you need another hard copy of the survey, or if you have any questions regarding the survey. Thanks again for your assistance with this research project.

Professionally,
Shelby Searcy
Doctoral Candidate
Auburn University