The Effects of Professor Humor on College Students' Attention and Retention
by
James D. Mantooth
A dissertation submitted to the Graduate Faculty of
Auburn University
in partial fulfillment of the
requirements for the Degree of
Doctor of Philosophy
Auburn, Alabama
August 9, 2010
Keywords: Humor, College Students, Engagement, Retention, Teaching
Copyright 2010 by James D. Mantooth
Approved by
Jill Salisbury-Glennon, Chair, Associate Professor of Educational Foundations,
Leadership & Technology
David M. Shannon, Professor of Educational Foundations, Leadership & Technology
Paris Strom, Associate Professor of Educational Foundations, Leadership & Technology
Abstract
The purpose of the present study was to investigate the effect of instructor humor on
college students' levels of engagement and retention of material. A convenience sample of
junior- and senior-level students enrolled in four separate courses within the College of
Education was exposed to two different lectures: one humorous, one non-humorous. The
lectures covered material that was already embedded within the course curriculum and occurred
at the points in the semester where that material would have been taught regardless of this study.
Data were gathered using interest surveys, domain knowledge tests, and post-lecture feedback
surveys. One week elapsed between the pretests and posttests.
Results from paired t-tests indicated that (1) participants did perceive the presenter as
humorous during the humorous presentations, (2) participants were more engaged in the
humorous presentations than in the non-humorous ones, and (3) the specific topic did not play a
statistically significant role in the results.
Results from within-subjects ANOVA indicated that the humorous lectures did not have a
statistically significant effect on the posttest domain knowledge test scores. The rate of gain
from pretest to posttest scores was almost identical for the humorous and non-humorous
presentations.
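The paired-samples comparisons reported above can be illustrated with a small computational sketch. The code below is not the study's analysis, and the engagement ratings in it are invented for illustration; it simply shows how a paired t statistic, the test behind the engagement comparison, is computed from each participant's scores under the two lecture conditions.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic and degrees of freedom for two
    equal-length lists of scores from the same participants."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)   # standard error of the mean difference
    return mean(diffs) / se, n - 1     # (t, df)

# Invented engagement ratings (1-5 scale) for ten hypothetical students,
# each rated once under a non-humorous and once under a humorous lecture.
non_humorous = [2.8, 3.1, 2.5, 3.0, 2.9, 3.3, 2.7, 3.0, 2.6, 3.2]
humorous     = [3.6, 3.9, 3.1, 3.8, 3.5, 4.0, 3.4, 3.7, 3.3, 3.9]

t, df = paired_t(non_humorous, humorous)
```

With df = 9, a t value beyond roughly 2.26 would be significant at the .05 level; this is the kind of within-participant comparison on which the engagement hypothesis relies.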
Acknowledgments
I would like to express very sincere gratitude to my committee: Dr. Jill Salisbury-Glennon,
Dr. David Shannon, and Dr. Paris Strom. Dr. Salisbury-Glennon, your unwavering
support and confidence in me often exceeded my own. I cannot thank you enough for your
gentle reminders and encouraging proximal and distal goal setting. This would have been a
much more arduous experience without your guidance. Dr. Shannon, if there were a scout badge
for patience, you would certainly win it! Thank you for guiding me through the bulk of Chapter
4, answering my questions, and not calling me an idiot when I probably deserved it. I
honestly don't know that Chapter 4 would have happened without your assistance. Dr. Strom,
thank you for stepping in when you did. Even when you were not on my committee you were
faithful in asking how things were going. You seemed genuinely interested in me and my
research when you had nothing invested in it. I am thankful that you were able to be there and
take a personal interest by serving on my committee when I needed you.
Thank you to the cohort of Beth Yarbrough, Dan Connelly, and Martha Kelley. Our
biweekly lunch meetings provided much-needed perspective, comfort, and motivation. I am
honored to have experienced this with you.
To my children: Porter, Betsy, and Jack. You are now too young to understand or
appreciate any of this. And for that, you're welcome! You have had to experience times when I
was absent, either mentally or physically, as I devoted time to this little project. Thank you for
knowing that I love you. Thank you for allowing me to smother you with hugging, kissing, and
wrestling when I was there. Those times are way more important than this silly little paper I
have written. Perhaps one day you can actually read this and gain some understanding of why
this topic interests me so much, but for now, let's stick with Dr. Seuss.
Finally, and most importantly, thank you to my wife, Katie. Years ago, when I started this
process, you said you supported it. Years later, when I actually (finally) finished it, you said you
supported it. Thank you for doing more than just saying you supported it. You encouraged me
and held me accountable. Countless times you carried more than your "fair share" of family
responsibilities so that I could devote time and energy to this. Thank you for being a tender
driving force. Thank you for not letting me quit teaching Sunday School. Thank you for your
high expectations. "I love you" is not now, nor has it ever been, strong enough to encapsulate
everything you are and mean to me, but I will say it anyway: I love you, Katie. You are the
quintessential Proverbs 31 woman. And, yes, you have to call me Dr.
Table of Contents
Abstract ......................................................................................................................................... ii
Acknowledgments........................................................................................................................ iii
List of Tables ............................................................................................................................... ix
List of Figures ............................................................................................................................... x
List of Abbreviations ................................................................................................................... xi
Chapter 1 – Introduction ............................................................................................................... 1
Statement of Problem ........................................................................................................ 3
Purpose of Investigation ................................................................................................... 4
Benefits of Humor............................................................................................................. 5
Theoretical Framework ..................................................................................................... 6
Definition of Terms........................................................................................................... 8
Summary ........................................................................................................................... 9
Chapter 2 – Literature Review .................................................................................................... 10
Theories of Humor .......................................................................................................... 13
Physical Effects ............................................................................................................... 17
Psychological Effects ...................................................................................................... 19
Classroom Setting ........................................................................................................... 23
Instructor Immediacy ...................................................................................................... 28
Conclusion ...................................................................................................................... 32
Chapter 3 – Methodology ........................................................................................................... 34
Participants ...................................................................................................................... 34
Procedure ........................................................................................................................ 35
Instrumentation ............................................................................................................... 37
Data Analysis .................................................................................................................. 41
Pilot Study ....................................................................................................................... 42
Summary ......................................................................................................................... 45
Chapter 4 – Presentation of Findings .......................................................................................... 46
Sample Demographics .................................................................................................... 47
Research Design and Instrumentation ............................................................................ 48
Reliability ........................................................................................................................ 48
Preliminary Analysis ....................................................................................................... 50
Hypothesis #1.................................................................................................................. 51
Hypothesis #2.................................................................................................................. 53
Hypothesis #3.................................................................................................................. 55
Open-ended Responses ................................................................................................... 57
Summary ......................................................................................................................... 61
Chapter 5 – Discussion ............................................................................................................... 62
Strengths ......................................................................................................................... 62
Limitations ...................................................................................................................... 63
Conclusions ..................................................................................................................... 64
Hypothesis #1.................................................................................................................. 64
Hypothesis #2.................................................................................................................. 66
Hypothesis #3.................................................................................................................. 67
Recommendations for Practice ....................................................................................... 69
Recommendations for Future Research .......................................................................... 71
Summary ......................................................................................................................... 73
References ................................................................................................................................... 74
Appendix A – IRB Consent ........................................................................................................ 83
Appendix B – Invitation Script ................................................................................................... 85
Appendix C – SRA Demographic Survey .................................................................................. 86
Appendix D – SRA Interest Survey ............................................................................................ 87
Appendix E – SRA Domain Knowledge Test (Pre/Post) ........................................................... 88
Appendix F – SRA NH PowerPoint Slides and Script ............................................................... 93
Appendix G – Explanation of SRA Post-Lecture Activity ....................................................... 104
Appendix H – SRA NH Post-Lecture Activity ......................................................................... 105
Appendix I – SRA H PowerPoint Slides and Script ................................................................. 107
Appendix J – SRA H Post-Lecture Activity ............................................................................. 121
Appendix K – SRA Post-Lecture Feedback Survey ................................................................. 123
Appendix L – PBA Demographic Survey ................................................................................ 124
Appendix M – PBA Interest Survey ......................................................................................... 125
Appendix N – PBA Domain Knowledge Test (Pre/Post) ......................................................... 126
Appendix O – PBA NH PowerPoint Slides and Script............................................................. 131
Appendix P – PBA H PowerPoint Slides and Script ................................................................ 140
Appendix Q – Explanation of PBA Post-Lecture Activity ....................................................... 154
Appendix R – PBA Post-Lecture Activity: Job Description .................................................... 155
Appendix S – PBA Post-Lecture Activity: Interview Questions .............................................. 156
Appendix T – PBA Post-Lecture Activity: Vague Rubric ........................................................ 157
Appendix U – PBA Post-Lecture Activity: Thorough Rubric .................................................. 158
Appendix V – Permission e-mail regarding the SPAS ............................................................. 161
Appendix W – Adapted SPAS .................................................................................................. 162
List of Tables
Table 1 – Eight Physiological Benefits of Laughter ................................................................... 18
Table 2 – Reverse Coded Survey Items ...................................................................................... 39
Table 3 – Sample Demographics ................................................................................................ 47
Table 4 – Reliability and Descriptives ........................................................................................ 49
Table 5 – Paired Samples Statistics for Topic Interest and Pretest Scores ................................. 51
Table 6 – Independent Samples Statistics for Topic ................................................................... 51
Table 7 – Paired Samples t-Test of Hypothesis #1 ..................................................................... 52
Table 8 – Independent Samples Group Statistics for Topic and Lectures .................................. 53
Table 9 – Paired Samples t-Test of Hypothesis #2 ..................................................................... 54
Table 10 – Independent Samples Group Statistics for Engagement Levels and Lectures.......... 55
Table 11 – Means and Standard Deviations for Pretests/Posttests ............................................. 56
Table 12 – Within-Subjects ANOVA – Hypothesis #3 .............................................................. 57
Table 13 – H SRA PLF Open-ended Comments ........................................................................ 58
Table 14 – NH PBA PLF Open-ended Comments ..................................................................... 59
Table 15 – NH SRA PLF Open-ended Comments ..................................................................... 59
Table 16 – H PBA PLF Open-ended Comments ........................................................................ 60
List of Figures
Figure 1 – Means and Standard Deviations for Pretests/Posttests .............................................. 56
List of Abbreviations
H Humorous
NH Non-Humorous
PBA Performance-Based Assessment
PLF Post-Lecture Feedback
SRA Selected-Response Assessment
CHAPTER 1
Introduction
In its Fall 2005 national survey, the Higher Education Research Institute revealed that
two of the leading reasons students attend college are to get a job and to prepare to be better off
financially (HERI, 2005). That is far removed from the original purpose of higher education in
America. Harvard, founded in 1636, was the first American institution of higher education. Its
aim was to produce a learned clergy and lettered men of society. Producing graduates with the
ability to think, write, and reason well was a driving force at many of the higher education
institutions founded within the first 150 years of American higher education (Rudolph,
1990).
As the country grew and changed, so did the purposes of colleges and universities.
Landmarks like the Civil War, an increasing dependence on agriculture, the First and Second
Morrill Acts, the rise of technology and industry, World Wars I and II, the GI Bill, and the Civil
Rights Act (to name only a few) made practical education and accessibility priorities for higher
education institutions. As the 2005 HERI survey revealed, practical education is still a focus
of today's college student.
The classical curriculum, which was once the staple of higher education studies and
included ancient languages, mathematics, philosophy, and chemistry, is hard to recognize
today. Its subjects are still present in various forms of prescribed core curricula; however, they
have been replaced and/or enhanced with more academic-track-specific course offerings, such as
accounting, program evaluation, horticulture, dietetics, and hydraulics. College students might
find these courses more practical than learning Greek or Latin when it comes to obtaining a job
and preparing financially.
The role of the faculty has also changed with the times. At their beginning, colleges
adopted the working philosophy of "in loco parentis," meaning "in the place of a parent."
Professors were counted on to be the moral compass, to touch young men's souls with moral
truth, and to do so with paternalistic nurturing. In 1824, Thomas Jefferson recruited English and
German faculty members based on their intellectual ability and promise, rather than their
morality. He was attacked for this decision: "Mr. Jefferson might as well have said that his
taverns and dormitories should not be built with American brick"; the appointments were called
"one of the greatest insults the American people have ever received" (Rudolph, 1990,
pp. 158-159). However, Thomas Jefferson seems to have been ahead of his time. The role of
faculty was indeed shifting from that of the students' moral conscience to that of a scholar whose
intellectual ability and capability far outweighed any moral deficiencies.
It might appear that today's institutions of higher learning have morphed, or are morphing,
into career-preparation factories. The modern university has done little to combat that
perception. Measures of an institution's quality often include rankings such as the percentage of
graduates with jobs within six months of graduation, the number of faculty members with
terminal degrees, financial resources, selectivity of the admissions process, and grant money
produced by faculty research. When attention is given to undergraduate education, it is often
limited to the faculty-to-student ratio. With the exception of specialized studies like this one,
what actually occurs in the college classroom is largely overlooked, except for end-of-term
faculty evaluations.
The college students themselves also contribute to the transformation of colleges and
universities into career-preparation factories. The idea of achieving an education is less important than
obtaining a degree. Since so many college students are focused on the end product (the degree)
and not the process (the education), it is more important than ever for faculty to engage their
students in the learning process.
Statement of the Problem
Kuh (2001) states that "the time and energy students devote to educationally purposeful
activities is the single best predictor of their learning and personal development" (p. 1). The
National Survey of Student Engagement further states that the quality of student learning and a
student's overall educational experience are directly impacted by the degree to which students
are engaged in their studies (http://nsse.iub.edu/index.cfm).
An essential factor in learning is student engagement, and humor as an instructional
technique can be used to engage students. Humor is effective at gaining students' attention and
holding their interest, and that is one of the primary reasons to use humor as a teaching tool in
the college classroom (Deiter, 2000). According to Tomlinson (1999), there are two elements
necessary for a great class: engagement and understanding. She believes that students have an
inherent understanding of what engagement is. She likens classroom engagement to a magic
magnet that attracts and holds students' attention, which elevates students' understanding of the
material.
The attention-gaining power of humor can also be tied to information processing theory,
an approach to the study of cognitive development. Within information processing theory,
attention is the first step: information must first be attended to, then processed through
short-term memory, and finally stored in long-term memory (Forbes et al., 2006). With humor
as an instructional tool, classroom material can be presented in a manner that engages
students' attention and thereby begins the learning process.
Purpose of the Investigation
Replying to professors who complained about apathetic students, the legendary Notre
Dame football coach Knute Rockne said, "Make your classes as interesting as
football" (Rudolph, 1990, p. 287). That was in the 1920s, and the advice still holds today. A
2008 national survey conducted by the Higher Education Research Institute revealed that 44.6%
of college freshmen were not satisfied with the quality of instruction they received. In the same
survey, 43.5% also said they were frequently bored in class (Nemko, 2008).
When college students are engaged in the classroom, three things are likely to occur:
student learning, retention, and a quality undergraduate experience. While the student is
expected to contribute toward an overall engagement level (class attendance, participation,
studying), much of the responsibility for an engaging classroom falls on the instructor. The role
of the college instructor in student engagement cannot be trivialized. Faculty-student contact and
pedagogical techniques are two of the institutional practices that have an impact on student
outcomes, like student engagement and student learning (Umbach & Wawrzynski, 2004).
Effective instruction involves two main elements: engagement and understanding.
Engaged students are more likely to understand concepts that are being presented in the
classroom. Colleges and universities that are educationally effective are those that engage their
students in activities that are educationally purposeful (Kuh, 2001; Tomlinson, 1999). The
responsibility for educationally purposeful activities within the classroom lies directly with the
instructor.
One potential means of engaging college students in the classroom is through the use of
humor. As a pedagogical technique, humor has produced mixed results as to its effect on student
learning. Even Dr. Ronald Berk, one of the most commonly cited contemporary humor
researchers, admitted that some of the claims of humor's effectiveness in the classroom are
unsubstantiated (Berk, 1996). The ambiguity surrounding humor as an instructional tool makes
this research all the more necessary. It is essential to advance the literature on the role that
humor plays in the college classroom.
Benefits of Humor
The literature on humor in education reveals that humor affects students physically: it
relaxes muscles, stimulates circulation, improves respiration, exercises the lungs and chest
muscles, controls hormones that relieve stress on the body, strengthens the immune system,
increases the production of endorphins, and lowers pulse rate and blood pressure (Berk, 1996;
Berk, 2002; Caron, 2002; Mahoney, 2000).
The literature on humor in education also reveals that humor affects students
psychologically: it decreases anxiety, stress, and tension; improves self-esteem and morale;
and increases motivation, curiosity, comprehension, and perceived quality of life (Anderson &
Arnoult, 1989; Bennett, 2003; Berk, 1996; Cornett, 1986; Garner, 2006; Philaretou, 2006;
Stambor, 2006).
The humor literature also suggests that humor increases instructor immediacy by reducing
the perceived distance between an instructor and the students. Humor creates a classroom that is
open to student participation, facilitates learning, and builds cohesion among the students
(Burbach & Babbitt, 1993; Chiasson, 2002; Garner, 2006; Wanzer & Frymier, 1999). There are
many ways to use humor in the classroom, including the syllabus, group activities, cartoons,
and online discussions. According to Murray (1992), however, the lecture is the most common
mode of instruction in higher education. Thus, the present investigation will focus on the role of
humor in a traditional lecture setting.
The Auburn University Board of Trustees Policy Manual has this to say about quality
instruction: "The Board of Trustees views the instruction of students as the foremost activity of
Auburn University." The Auburn University Faculty Handbook goes on to state: "The
University believes that each faculty member in conducting classes should exhibit high standards
of professional behavior through his/her scholarship, personal integrity, and enthusiasm for the
profession of teaching" (emphasis added). Innovative teaching approaches that foster creativity
are encouraged
(http://www.auburn.edu/academic/provost/handbook/instruction.html#university).
One approach to teaching that does just that is the use of humor. Humor has been shown to
stimulate creativity, create positive learning environments, help students retain and comprehend
information, encourage class attendance, engage students in the learning process, and facilitate a
connection between the instructor and the student (Berk, 1996; Berk, 1998; Berk, 2002; Deiter,
2000; Garner, 2003; Garner, 2005; Hill, 1988). These ideas will be covered in greater detail in
the following chapter.
Theoretical Framework
The responsibility for engaging students within the classroom lies mainly with the
instructor. However, it is unfair to presume that the responsibility for motivating students to
learn lies solely with the instructor. Along with the distractions that students often bring to the
classroom (family issues, relationships, the responsibilities of other classes and/or jobs), they
also bring varying degrees of interest in the subject matter being taught. Schunk et al. (2008)
suggest four elements an instructor can use to develop intrinsic motivation: challenge, curiosity,
control, and fantasy. The second element, curiosity, ties directly to humor as an instructional
tool.
Schunk et al. state that "curiosity is prompted by activities that present students with
information or ideas that are discrepant from their present knowledge or beliefs that appear
surprising or incongruous. Such incongruities motivate students to seek information and resolve
the discrepancies" (p. 265). Of the three main theories of humor, the incongruity theory is the
most prevalent in the literature.
Deckers and Kizer (1975) define incongruity as "the divergence between an expected and
actual state of affairs," which "has long been recognized as a condition for humor" (p. 215). As
suggested by several theorists described in the following chapter, and as confirmed by Schunk
above, seeking to resolve an incongruous situation often leads to cognitive development.
Therefore, humor as an instructional technique can deliberately create a situation that facilitates
cognitive growth, creates interest, and stimulates engagement.
The second theory of humor addressed here is the superiority theory. It is similar to the
incongruity theory in that it is based in the cognitive realm, but it extends beyond the cognitive
into the affective. The superiority theory of humor, as the name suggests, is contingent upon
individuals or groups of people being perceived as better than other individuals or groups
(Cornett, 1986; La Fave et al., 1976; Shade, 1996). This type of humor dates back to Plato and
Aristotle: "We laugh maliciously when we possess superior knowledge over the people we
ridicule" (Hill, 1988, p. 40). It is important to note that this researcher cautions against overuse
of this type of humor as an instructional technique, as humor rooted in the superiority theory
could alienate or offend some students.
The final theory of humor addressed here is the evolutionary theory, which is rooted in
biology. The ability to produce humor is linked to intelligence and creativity, thereby increasing
a potential mate's desirability in sexual selection. Evolution, shaped by sexual selection in favor
of those who can produce humor, increased the probability of offspring with these desirable
qualities (Bressler et al., 2006). Men and women view humor differently, and they value
humor's production and appreciation differently. In line with these differences, male and female
students are likely to view an instructor's sense of humor somewhat differently based on the
gender of that instructor. This is an area in which there remains a paucity of research.
Definition of Terms
The following definitions will be used in this investigation:
Engagement – The time, energy, and resources students devote to activities designed to
enhance learning (Krauss, 2005). Examples of engagement activities include academic effort,
higher-order thinking skills, academic integration, active and collaborative learning, and
interaction with faculty members (Zhao & Kuh, 2004).
Humor – Any message, verbal or nonverbal, that is communicated by the instructor and evokes
feelings of positive amusement in the student (Hurren, 2006).
Instructor – For the purposes of this study, the term instructor refers to anyone who assumes the
role of teacher in the college classroom, including a Graduate Teaching Assistant, Assistant
Professor, Associate Professor, or Full Professor. Humor can be used effectively by anyone in
the teacher role.
Instructor Immediacy – The perceived or actual distance between an instructor and a student.
Intrinsic Motivation – Motivation to engage in an activity for its own sake (Schunk et al., 2008,
p. 377).
Summary
In summary, the use of humor as an instructional technique has many positive physical,
psychological, and overall classroom benefits for both the student and the instructor. Humor
connects the instructor and the student and engages the student in the learning process. Engaging
students in the classroom through effective instruction is more important now than it has ever
been in the history of higher education. Students and instructors alike have many demands on
their time and attention that do not involve the classroom. Meanwhile, enrollment at higher
education institutions is increasing: in 2006, 20.5 million students were enrolled in two-year and
four-year colleges and universities in this country, up 17%, or 3 million students, since 2000
(U.S. Census Bureau, 2009).
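The enrollment figures quoted above are internally consistent. The implied 2000 baseline below is a derived number, not a separately cited one; simple arithmetic recovers the reported growth rate:

```python
enrollment_2006 = 20.5e6   # students enrolled in 2006 (U.S. Census Bureau, 2009)
increase = 3.0e6           # reported increase since 2000
enrollment_2000 = enrollment_2006 - increase   # implied 2000 enrollment: 17.5 million
pct_growth = increase / enrollment_2000 * 100  # percentage growth since 2000
print(round(pct_growth, 1))  # 17.1, matching the reported "up 17%"
```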
As students continue to enroll in classes, instructors must continue to seek ways to
engage them in the learning process. McKeachie (1994) states that the instructor's enthusiasm is
a vital factor in influencing student learning and motivation. Humor as a teaching tool is a great
way to demonstrate enthusiasm for both the subject matter and the students in the classroom.
The purpose of humor in the classroom is "to connect with our students and engage them in
learning to facilitate their academic success" (Berk, 2002, p. xvii). Thus, humor, as an important
means of fostering student engagement, and its benefits, including retention of academic
material, will be the focus of the present study.
CHAPTER 2
Literature Review
The literature on the topic of humor results from research across a wide variety of
disciplines. These disciplines include education, business, psychology, public speaking, and
medicine. The present review examines the literature relating to humor in higher education, specifically the use of humor as a teaching tool in the college classroom. This review highlights theories of humor; the use of humor by instructors and students; various types of humor and their effects on the classroom environment; when and where to use humor; and even literature that disparages any use of humor in an educational setting (Berk, 2002; Berk, 2005; Check, 1986; Cornett, 1986; Garner, 2005; Garner, 2006; Hill, 1988; Powers, 2005; Stambor, 2006).
Humor has many uses in the college classroom. Humor can be used on the course
syllabus to introduce the course and the instructor to the class (Kher, Molstad, & Donahue,
1999). For example, an instructor might list humorous prerequisites such as "must have at least 150 Facebook friends, must be able to locate your seat in the classroom, and must memorize the instructor's favorite recipes and TV shows." All of these examples serve to connect the students to the classroom, and therefore, to the instructor. If the students in the college classroom feel good about themselves and connected to their environment, retention rates and ratings of teacher effectiveness both increase. Torok et al. (2004) state that "perceptions in the amount of humor used in the classroom positively related to perceptions of how much students feel they learn and how positively they feel about course content and the professors" (p. 15).
There does not, however, appear to be a consensus about the role of humor in the college
classroom. According to one author, humor in the classroom must satisfy "the criterion of pedagogical purpose" (Skinner, 2001, p. 53). In other words, humor should be used not merely to entertain, but to convey the content of the educational message. Making learning memorable, not making students laugh, should be the goal of instructional humor. Berk (2002) states that there are two basic reasons for using humor as an instructional tool: (1) to build the professor-student connection and (2) to engage the students in the learning process (p. 4). "The potential of humor as a teaching tool to change attitudes, decrease anxiety, and increase achievement is unlimited and, at this point, largely unrealized" (Berk, 1996, p. 88).
That professor-student connection is more formally referred to as instructor immediacy.
Instructor immediacy is loosely defined as the distance, perceived or real, between an instructor and a student. Crump (1996) states that "immediacy behaviors reduce the physical and psychological distance between interactants and enhances closeness to one another" (p. 4). According to Kher et al. (1999), it is the instructor's job to establish a connection between the student and the instructor. It is the instructor who has the most control over the quality of the learning experience and the learning environment, and humor is recognized as a technique for creating that positive learning environment. When Torok et al. (2004) asked students about potential outcomes of humor being used in the classroom, the students stated that humor made teachers more likeable, helped them understand the material, boosted their morale, and helped them pay closer attention.
The students' perception of the instructor is fundamental to the process of creating a positive learning environment. If the students perceive the instructor as disengaged, unapproachable, distant, and uncaring, a positive learning environment is unlikely to
occur. On the other hand, an instructor who displays a sense of humor in the classroom creates
an environment that is open, respectful, enjoyable and, most importantly, engaging.
An engaging learning environment is critically important for colleges and universities. The National Survey of Student Engagement (NSSE), launched nationally in the spring of 2000, aims to improve the quality of higher education by gathering information about student participation in programs and activities, provided by their institutions, that lead to students' learning and personal development (http://nsse.iub.edu/html/quick_facts.cfm).
This information is critical because the literature tells us that "the time and energy students devote to educationally purposeful activities is the single best predictor of their learning and personal development. The implication for estimating collegiate quality is clear. Those institutions that more fully engage their students in the variety of activities that contribute to valued outcomes of college can claim to be of higher quality compared with colleges and universities where students are less engaged" (Kuh, 2001, p. 1). Colleges and universities that
are educationally effective are those that engage their students in activities that are educationally
purposeful. According to the NSSE, the quality of student learning and a student's overall educational experience are directly impacted by the degree to which students are engaged in their studies. "As such, characteristics of student engagement can serve as a proxy for quality" (http://nsse.iub.edu/html/origins.cfm). The most educationally purposeful activities occur in the
classroom. Therefore, it is imperative that students be engaged in the classroom, and through this study the researcher seeks to demonstrate that humor is an effective teaching tool for doing just that.
It is necessary to define the term "student engagement." Student engagement is defined as the time, energy, and resources students devote to activities designed to enhance learning
(Krauss, 2005). Examples of engagement activities include academic effort, higher-order thinking skills, academic integration, active and collaborative learning, and interaction with faculty members (Zhao & Kuh, 2004). Tomlinson (1999) tells us there are two elements
necessary for a great class: engagement and understanding. She believes that students have an inherent understanding of what engagement is. "Engagement happens when a lesson captures students' imaginations, snares their curiosity, ignites their opinions, or taps into their souls. Engagement is the magnet that attracts learners' meandering attention and holds it so that enduring learning can occur" (p. 38). When students are engaged, they are more likely to
understand the concept or idea, thereby encouraging actual learning.
This literature review is organized under the following headings: theories of humor, effects of humor on students' physical disposition in the college classroom, effects of humor on students' psychological disposition in the college classroom, effects of humor on the college classroom setting, and effects of humor on instructor immediacy.
Theories of Humor
There are three prevailing theories of humor: incongruity, superiority, and evolutionary.
Interestingly, none of these theories has a single theorist recognized as its originator. Parts of incongruity theory have been attributed to Freud, Piaget, Schopenhauer, and Kant; parts of superiority theory to Aristotle, Plato, Hobbes, Bain, and Bergson; and parts of evolutionary theory to Darwin, Alexander, and Weisfeld.
According to Deckers and Kizer (1975), incongruity can be defined as "the divergence between an expected and actual state of affairs and has long been recognized as a condition for humor" (p. 215). This divergence, or incongruity with what was expected, results in humor.
Something is perceived as incongruous when it is interpreted as being in an unusual or
unexpected combination with something else (Hill, 1988). Berk (2002) lists two elements of incongruity theory: expected content and an unexpected twist or punch line. Jonas (2004) states that incongruity theories "explain humor as unexpected or surprising experiences, words or activities that happen. Strange, absurd, inappropriate consequences or endings are examples of incongruity theories" (pp. 57-58). Shade (1996) posits that the basis of incongruity theory is that we are expecting one thing and are suddenly presented with another; the humor results from these verbal or visual incongruities. For example, many found it humorous when former President George W. Bush was heard using a four-letter word at the 2006 G8 Summit because that is not the type of language we expect to hear publicly from the President of the United States. When an incongruity is presented, one must resolve it in order for humor to be achieved. Finally, Rothbart (1976) added this to the aforementioned definitions of incongruity theory: "...although perception of an incongruous or unexpected event may lead to laughter, perception of an unexpected event may also lead to fear, curiosity, problem-solving, or concept learning" (p. 38). From that perspective, humor certainly has a place in education.
The authors mentioned above generally agree on the definition of the incongruity theory of humor. Additionally, they agree on the driving force behind it: cognitive development. With specific reference to the classroom, Berk (2002) points out that the cognitive processes used in understanding a joke are similar to those involved in problem solving. Specifically, this mental processing occurs in the right hemisphere of the brain, where creativity and problem solving lie.
When discussing cognition, Piaget's cognitive development model must be acknowledged. The stages that Piaget identified are sensorimotor, preoperational, concrete operational, and formal operational (Gredler, 2001). The concrete operational stage is when
humans begin to understand incongruous humor, because an understanding of reality is usually achieved by this stage. This matters because one must understand reality before one can appreciate any distortion of it, and therefore incongruous humor.
While incongruity theory accounts for the cognitive piece of the puzzle, it does not entirely account for academic achievement; it describes potential more than results. Several studies, such as Crump (1996) and Garner (2006), have shown the use of humor to increase academic achievement. However, other studies show humor as a contributing factor to academic achievement rather than the sole predictor. This is an area that needs more research.
The incongruity theory of humor has its basis in problem solving, in the cognitive arena. But learning is broader than just the cognitive realm. The superiority theory of humor takes the cognitive aspect of humor a step further, into the affective realm. The superiority theory is still very much cognitive in that it requires the same recognition of incongruent stimuli; however, the humor comes from more than just the resolution of those stimuli. Humor is an effective instructional technique because it joins the cognitive and affective realms of learning.
For example, La Fave et al. (1976) researched the superiority theory of humor by examining different classes of people exposed to various types of jokes. In some jokes their class or group was the "butt" of the joke, and in others the "victor." They stated that "an attitude holds both an emotive and a cognitive component" (p. 67): cognitively, what is their conception of their class? Emotionally, how do they feel about that class?
According to Cornett (1986), the superiority theory of humor is based on the idea that "humans derive pleasure from seeing themselves as better off than others" (p. 26). We can
safely laugh at those who make mistakes we never would, and we can learn to laugh at our own past mistakes because we feel superior to who or what we were back then. Things that represent forms or classes lower than ours are often found humorous: clowns, caricatures, puppets, and impersonators. On the flip side, when something "indignant" happens to someone or something that is afforded great respect or dignity, it is often seen as funny, as long as you are not that person! There is one important caveat, however: defects in others are humorous only as long as they are not harmful to the victims.
Centuries ago, Plato and Aristotle cited superior feelings as a source of laughter. "We laugh maliciously when we possess superior knowledge over the people we ridicule." "We laugh at people who have an inferior moral character or at people who are more ugly or distorted than ourselves" (Hill, 1988, p. 40). According to this theory, we sometimes laugh at people or situations out of fear, ignorance, or lack of power and control.
Shade (1996) suggests that this type of humor is often a less obvious form of prejudice. Many of the jokes told to make one feel or appear superior involve one of the following groups as the "butt" of the joke: religions, nationalities, races, occupations, and so on. In an effort to inflate our own ego, or to deflate the status of someone else, we sometimes pick on another person or group of people. Given the heterogeneous make-up of many college classrooms, using this type of humor as an instructional technique could alienate students.
The evolutionary theory of humor uses biology as its basis. This theory posits that the ability to produce and appreciate humor evolved via sexual selection. The ability to produce humor is indicative of intelligence and creativity, so the process of sexual selection would favor those who can produce it. The resulting increase in mating success, preferential to funny people, would provide offspring with desirable genetic qualities (Bressler et al., 2006).
The Bressler et al. (2006) study sought to measure the importance that subjects placed on a partner's sense of humor. The researchers discovered that men and women view sense of humor differently: women viewed sense of humor in a partner as his ability to produce humor, whereas men viewed sense of humor in a partner as her receptiveness to his humor. Basically, he doesn't care if she's funny, just as long as she thinks he is! "Thus, sexual selection may have more strongly favored women who reacted positively to humor producers and men who attended preferentially to women who appreciated their humor" (p. 122).
Polimeni and Reiss (2006) reviewed humor's evolutionary origins. Not only do they believe that humor and laughter are evolutionarily adaptive, but also that "humor may arguably be humankind's most complex cognitive attribute" (p. 348). Humans often laugh without fully understanding the underlying reasons; laughter feels good and is therefore a behavior that is reinforced. The authors connected laughter with social grooming in primates, as both release endorphins into the body. As humans evolved, humor came to replace social grooming practices as the primary bonding activity within a group. Following this train of thought, it is possible that humor and laughter were the beginnings of language by "maintaining a pleasurable association to conversation" (p. 352).
The effects of humor on students' physical disposition in the classroom
A brief summary of the physical benefits of humor and laughter includes the following: humor can relax muscles; stimulate circulation; improve respiration and exercise the lungs and chest muscles; decrease serum cortisol, dopac, and epinephrine levels in the blood (all three control effects of stress on the body); increase the immune system's ability to protect the body; increase the production of endorphins; and lower pulse rate and blood pressure (Berk,
1996; Caron, 2002; Mahoney, 2000). When we laugh, we use parts of our anatomy that we do not use at any other time.
In Table 1 below, Dr. Ronald A. Berk, one of the prominent contemporary researchers on the subject of humor as an instructional technique, suggests and expands on eight physiological benefits of laughter (2002, p. 57).
Table 1
Benefits of Laughter and Examples of Each
1. Improves Mental Functioning: Increases interpersonal responses, alertness, and memory.
2. Exercises and Relaxes Muscles: Exercises facial, chest, abdominal, and skeletal muscles; improves muscle tone; decreases muscle tension; and relieves discomfort from neuralgias and rheumatism.
3. Improves Respiration: Exercises the lungs and improves breathing and blood oxygen levels; relieves chronic respiratory conditions; reduces chances of bronchial infection and pneumonia.
4. Stimulates Circulation: Exercises the heart like aerobic exercise, followed by decreases in heart rate and blood pressure.
5. Decreases Stress Hormones: Reduces stress.
6. Increases Immune System's Defenses: Fights viral and bacterial infections.
7. Increases Pain Threshold and Tolerance: Decreases pain and produces a euphoric state without liquor, drugs, or aerobic exercise.
8. Kills Common Viruses and Bacteria: Relieves hemorrhoids, psoriasis, gangrene, gingivitis, and malaria.
It has been said that "laughter is the best medicine." The research summarized above seems to support that saying by clearly asserting the physical benefits of humor and laughter. However, humor as an instructional technique offers more than just physical benefits.
The effects of humor on students' psychological disposition in the classroom
Psychological effects of humor and laughter include decreased anxiety and stress, improved self-esteem, increased motivation, and higher perceived quality of life (Berk, 1996; Cornett, 1986). Laughter has been shown to help people cope with stressful events and to improve morale (Anderson & Arnoult, 1999; Philaretou, 2006; Stambor, 2006). Laughter has shown therapeutic qualities such as relieving tension, increasing curiosity and comprehension, and reducing stress (Bennett, 2003; Garner, 2006). Because laughter helps us stay mentally healthy, it has even been called the "safety valve for sanity" (Weiss, 1993).
Stress and Self-Esteem
Stressful situations can produce psychological symptoms such as anxiety, feelings of distress, and decreased self-esteem (Hurren, 2006). Abel (1996) posited that students with high self-esteem will be less affected by stressful situations because they may feel less vulnerable, whereas students with low self-esteem are more likely to demonstrate greater distress in such situations. If a student views the college classroom as a stressful situation, then that student's self-esteem can be critical to his or her success in that classroom.
A 2008 study by Mitchell, Smith, and Simpson compared the levels of self-esteem of college freshmen and college seniors. The authors cited academic competence as one external source of one's self-esteem, and the classroom environment can positively or negatively influence a student's pre-existing self-esteem. For example, Mitchell et al. stated that support
structures (the college classroom can be viewed as one) can serve as buffers against high levels of stress, so an instructor who effectively uses humor as an instructional tool could positively influence students with low self-esteem. It is important to note that this study also found that high self-esteem and decreased stress levels are associated with improved academic performance.
Humor also serves as an adaptive coping mechanism by allowing students to temporarily detach themselves from the current situation, which is especially helpful if they view the college classroom as a source of stress or even a threat. Humor can allow students to reframe the situation (exam, quiz, homework, etc.), reduce negative feelings, and control negative emotional reactions. Humor also promotes objectivity, which can buffer those negative responses (Berk, 2002).
Attention/Engagement
Humor as an instructional technique is used to engage students, and an essential factor in learning is student engagement. For an instructor to engage a student, that student must attend to (pay attention to) the instructor and the activities occurring in the classroom, such as the lecture. Attending to classroom activities is another key factor in determining students' success in the classroom (McKeachie, 1994). Humor is highly effective at gaining and holding one's attention; indeed, one of the main reasons to use humor as a teaching tool in the college classroom is to gain students' attention and keep their interest in the material being presented (Deiter, 2000). Attention is the first step in information processing theory. Before information is interpreted and stored in long-term memory, it must first be attended to (sensory register), then processed through the short-term or working memory (Forbes et al.,
2006). By using humor as an instructional tool, an instructor can present classroom material in a manner that engages students' attention and thereby begins the learning process.
Advertising has long used humor because of the belief that humor enhances an advertisement's persuasive power. Cline and Kellaris (2007) conducted a study to test the power of humor in advertising. They discovered that humor strength (how funny it is) and humor-message relatedness (how appropriate the humor is to the product or message) combine to influence participants' recall of advertising claims. It is interesting to point out that "strong humor is not its own virtue" (p. 65); for humor in advertising to be effective, it must be connected to brand claims. Translated to the college classroom, this means that humor used as a teaching tool must be connected to the subject matter at hand.
McKeachie (1994) believes that because students' minds wander so easily, it is paramount that the professor be able to capture and maintain attention. While he does not specifically refer to humor, he has this to say about lectures: "Keeping lectures to student interests, giving examples that are vivid and intriguing, building suspense toward a resolution of a conflict - these are all techniques of gaining and holding attention" (p. 58). McKeachie also cites "enthusiasm and willingness to make the course worthwhile" (p. 25) as the primary characteristic of teachers appreciated by students. An instructor's use of humor in the college classroom is an indication of enthusiasm, and enthusiasm and humor are two qualities of master teachers. Teacher humor should be intended to make learning enjoyable and, therefore, memorable (Skinner, 2001).
Kher et al. (1999) refer to humor as "classroom 'magic' when all educational elements converge and teacher and student are both positive and excited about learning" (p. 1). This
"magic" creates an open classroom with mutual respect between instructor and student, keeps the focus on the student, and creates an overall positive learning environment.
Nicewonder (2001) begins each of his Owens Community College algebra classes with a content-related joke, such as: "What do you get if you add sixty female pigs and forty male deer? A hundred sows and bucks! What did the acorn say when he finally grew up? Geometry!" (p. 1). Why does Nicewonder do this? So the students will not be bored, but more importantly because "it creates an atmosphere in which learning is more likely to occur, encourages student involvement, and holds the students' attention" (p. 2).
Motivation
Forbes et al. (2006) define motivation as a "state that energizes, directs, and sustains behavior" (p. 424). Motivation can be exemplified by a personal investment of time and energy (engagement) toward an activity. Student engagement is one of the most critical components of student motivation during the learning process, and students' successful efforts in the classroom can be directly linked to their level of motivation (Beeland, 2002).
Prominent motivational theorists Wigfield and Eccles (2002) state that there are three motivational questions many students ask when faced with a new task: (1) Can I do this activity? (2) Do I want to do this activity? (3) What do I need to do to succeed? This author believes that students ask themselves very similar questions each time they begin a new class, or perhaps each time they enter a classroom. The teacher is one of the major sources of student stimulation in the classroom (McKeachie, 1994). The teacher's enthusiasm and values, along with verbal and nonverbal communication, "have much to do with your students' interest in the subject matter" (p. 355). While the instructor cannot control a student's motivation, he or she can certainly influence it through humor.
In Garner's (2006) study, 117 undergraduates in a distance education environment were divided into two groups. Each group received a series of three 40-minute lectures on research methods and statistics (chosen because statistics is often rated as a "dreadful" course). One group's presentation was delivered in a humorous format (stories, examples, metaphors), while the other group's presentation was not infused with any purposeful humor. The results demonstrated that the humor group gave higher ratings for the overall opinion of the lesson and how well the information communicated the lesson, and retained and recalled significantly more information than the non-humor group. Additionally, the humor group gave a higher rating of the instructor.
Garner (2003, 2005) also stated that humor as an instructional technique provides new perspectives and novel insights. Humor also sustains student interest, facilitates instructor immediacy, increases class attendance, and increases self-motivation. That translates to potential success in the college classroom, even in classes whose subject matter holds little interest for the student. "Students can be supported to develop interest and to work with subject content for which they initially have a less-developed interest" (Renninger & Hidi, 2002, p. 173). The instructor can facilitate that interest and motivation by using humor as an instructional technique.
The effects of humor on the college classroom setting
Chiasson (2002) reviewed humor used in language classrooms, specifically second language classrooms. Chiasson used cartoons with multiple panels because they provided material appropriate for communicative questioning and discussion, which is certainly fitting for second language classrooms. Chiasson states, "... the choice of cartoon that you choose to demonstrate a particular point will naturally depend on the theme, grammatical or
cultural component you are teaching or examining... Ask yourself the question, 'What knowledge do I want the students to demonstrate by interpreting this cartoon?'" (p. 5). The humorous cartoons allowed Chiasson to focus on intonation, stress certain syllables, and work on vocabulary words. Chiasson concludes by stating that the cartoons allow language to be seen as authentic in everyday situations: the classroom is open and students are able to express themselves freely.
It is important to note, however, that when using humor in the college classroom,
Chiasson (2002) suggests the following guidelines. An instructor who follows these guidelines
is more likely to be successful at engaging students in the classroom than one who does not.
1. Don't try too hard. Humor must flow naturally from the instructor.
2. Do what fits your personality. Students may view forced humor as awkward, and consequently it will be ineffective.
3. Don't use private humor or humor that leaves people out. Humor should be used to create unity within the classroom, not to divide it by excluding some.
4. Make humor an integral part of your class rather than something special. This way the humor is more likely to flow naturally.
5. Humorous material should be related to what is going on in the classroom.
6. The extent to which humor is used will vary. Instructors must be willing to differentiate their approach; student discussion and interest can guide how much humor is used and when.
According to Garner (2006), when a classroom is led by a professor who uses humor, the potential for learning is high. Such a professor engages the students through a positive social and emotional environment. Defenses are lowered, and students can focus and pay attention to
the information being presented. Wanzer and Frymier (1999) found that an instructor who uses humor creates a classroom that is more enjoyable, with students who are less anxious and more willing to participate. In short, it creates an environment more conducive to learning.
A study by Burbach and Babbitt (1993) looked at how wheelchair-bound college students use humor as a coping mechanism. This study revealed that humor (among other things) is used by these students as a way of building group solidarity, blurring group differences, and removing barriers between groups. How does that affect the college classroom? "In the process of sharing a laugh they were also reducing the social distance between the two groups" (p. 9). A shared laugh can shorten the distance between students, and between the instructor and students. These types of behavior are referred to as instructor immediacy behaviors, which have been shown to increase student attention in the classroom. Instructor immediacy will be discussed further in the next section.
Hashem (1994) stated that humor can help avoid negative situations and consequences by improving the classroom atmosphere and developing relationships among students. Additionally, a classroom with humor aids in focusing student attention and invites students to be more open with their teachers and to approach them first when confronted with a problem.
Hellman (2006) and Shatz and LoSchiavo (2006) reviewed humor as a teaching tool in on-line courses. The benefits of on-line courses include affordability, efficiency, flexibility, and "multi-sensory" experiences, all of which are reasons why the use of this educational medium is on the rise. To help foster student attention in on-line courses, Hellman suggests implementing humor through written language: e-mail, bulletin board messages, threaded discussions, cartoons, pictures, sound files, and so on. Hellman does caution, however, that the instructor needs to
be clear when humor is being used. Because personal interaction is so limited, misinterpretation
of humor is more likely than in a typical classroom.
Due to the physical separation that exists between teacher and student, Hellman says that
humor is very important in on-line courses. The lack of personal interaction can limit the use of
quips, puns, or humorous stories. An instructor has to work especially hard to ensure that the
students feel a sense of community. The most effective way to do that in an on-line course is
quick communication. Immediate feedback and quick answers to students' questions help
provide the sense of instructor presence, which also helps focus student attention.
Berk (2002) confesses that his classroom antics are not just for his own personal enjoyment, although he does enjoy using popular music and re-enacting scenes from blockbuster movies complete with theme music and stage lighting. His theatrical approach is largely based on Gardner's theory of multiple intelligences. Berk (2002) cites a study by Diaz-Lefebvre that compares multiple intelligences theory to a 10-speed bicycle: "Our students have several gears we have never asked them to use" (p. 66). Berk is a proponent of active learning, cooperative learning, and, more specifically, teaching strategies designed to elicit higher-order thinking skills from students (for him, that means music and dramatic interpretations). His goal is to engage more than just the students' minds; he wants their entire person to be present and actively involved in the classroom experience. Berk agrees with Gardner, who says, "...nearly every topic can be approached in a variety of ways, ranging from telling the story, to a formal argument, to an artistic exploration, to some kind of 'hands-on' experiment or simulation. Such pluralistic approaches should be encouraged" (Gardner, 1998, p. 66).
A professor?s use of humor in the classroom can serve to facilitate a connection among
the class members and between them and the professor. Humor can help the shy student
contribute and feel part of the class (Chiasson, 2002). Humor enhances the classroom
environment and aids the learning process (Garner, 2003). Professors who use humor in the
classroom are building that ever-important bond between themselves and the students.
Negative Effects of Humor
What about when college classroom humor results in negative evaluations? Is there ever
a time when humor should not be used in the college classroom? Humor can be subjective,
personal, and unpredictable. Among other things, differences in culture, gender, ethnicity,
religious beliefs, and age should be considered (Garner, 2003; Garner, 2005; Garner, 2006).
Therefore, humor in the classroom should be used cautiously, or sometimes not at all. Powers
(2005) suggests four things that should be considered when using humor in the classroom: (1)
the subject, (2) the tone, (3) the intent, and (4) the situation.
Certain subjects are off-limits and should not be used as humorous materials in the
college classroom. Sexual assault, eating disorders, death, substance abuse, and abusive
relationships are examples of topics that should be avoided as sources of humor. It is very
possible that there are college students in that classroom struggling with any one or more of
those issues. Making light of those situations could alienate them and negate the original
purpose of the humor.
How you say something is just as important as what you say, and never is that more true than when infusing humor into a situation. For example, sarcasm is appreciated by some and loathed by others. The root of the word sarcasm gives some insight into its potentially harmful effects: sarkasmos, a Greek word meaning "to tear flesh" (Torok et al., 2004). The tone of the instructor when delivering humor can be the difference between success and failure.
The intent or purpose of humor in the classroom should be learning. If the intent of humor
becomes to embarrass the student(s) or elevate the status of one group over another (including
the instructor), then a different strategy is necessary.
An instructor should always be cognizant of the situation. A summer class may react
differently than a full-semester class. The gender make-up of the class could affect the dynamic
of class discussion. Outside events can also affect situational humor, e.g., the 9/11 attacks, a deadly school bus crash, or a shooting on a rural college campus.
The Effects of Humor on Instructor Immediacy
Instructor immediacy is loosely defined as the distance, perceived or real, between an instructor and a student. Crump (1996) states that "immediacy behaviors reduce the physical and psychological distance between interactants and enhances closeness to one another" (p. 4). Crump conducted a study of 70 community college students enrolled in communication courses. They were given questionnaires that sought to measure eight nonverbal immediacy behaviors (eye contact, dynamic delivery, physical appearance, friendliness or smiling, vocal variation, time spent outside of class, appropriate touch, and physical distance). The questionnaires also measured four verbal immediacy behaviors (use of humor, learning student names, using words like "our" and "we," and using personal examples). This study revealed humor as the most effective teacher immediacy behavior. Crump further states, "Humor and laughter are indeed like an invitation, it aims at decreasing social distance" (p. 13).
According to Campbell (1992), humor is one tool college instructors should use to create
a positive and productive learning environment. Humor has been positively linked to teacher
effectiveness and immediacy. In her qualitative ethnographic study, Campbell goes on to state that in a teaching style preference hierarchy survey given to students, "friendly and attentive" were ranked as the most satisfactory aspects of teaching styles. The professor that Campbell studied used "humor with insight and discretion to not only facilitate enjoyment, but to make social commentary, account for behavior, hold attention, increase his own likability, create solidarity, exert control, give vivid examples and motivate his students" (p. 24).
If that is how humor in the classroom affects the students, what does it do for the
instructors? Humor breaks down the communication barriers between instructors and students
and facilitates effective communication of course material (Berk, 1996). Myers and Bryant (2004) stated that college instructors are judged on credibility by how they demonstrate competence, character, and caring. One of the hallmarks of instructor
character is immediacy. "Immediacy relates to approach and avoidance behaviors and can be thought of as the perceived distance between people" (Roca, 2004, p. 186). According to Zhang
(2005), instructor immediacy has been positively associated with teaching effectiveness and
learning outcomes.
Kher et al. (1999) state that it is the instructor's job to establish a connection between the student and the instructor, that is, to establish instructor immediacy. It is the instructor who has the majority of control over the quality of the learning experience and the learning environment. Humor is recognized as a technique for creating that positive learning environment. The students' perception of the instructor is vital in the process of creating a positive learning environment. If the students perceive the instructor as unapproachable, distant, uncaring, and
disconnected (no immediacy established), it is unlikely that a positive learning environment will
occur. On the other hand, an instructor who displays a sense of humor in the classroom creates
an environment that is open, respectful, and enjoyable.
If the students in the college classroom feel good about themselves and connected to their
environment (including the instructor), retention rates and ratings for teacher effectiveness both
increase. Torok, McMorris, and Lin (2004) agree, stating that "perceptions in the amount of humor used in the classroom positively related to perceptions of how much students feel they learn and how positively they feel about course content and the professors" (p. 15).
Crump (1996) and Hashem (1994) both conducted research in communication courses.
Crump?s focus was instructor immediacy, and humor (a by-product of the study) was found to
play a pivotal role in instructor immediacy. Hashem?s focus was play and humor as a teaching
technique. Hashem discovered that humor and play allowed students to practice communication
(speaking and listening), collaboration, and cooperation, all key components of an interpersonal
communication course. Hashem also found that students in a classroom where play and humor
were used excelled and approached their tasks positively and eagerly.
Instructor Immediacy and Teaching Effectiveness
Berk (2005) distinguishes two types of decision-making styles for teaching effectiveness:
formative and summative. "Formative decisions use the evidence to improve and shape the quality of teaching, and summative decisions use the evidence to 'sum up' overall performance or status to decide about annual merit pay, promotion, and tenure" (p. 48). What is this evidence that Berk references? There are twelve sources of evidence, but Berk gives the most credence to student ratings. Even with the debate and discrepancy surrounding the quality of student ratings (student ratings are the most researched topic in higher education), Berk still cites them as "the most influential measure of performance used in promotion and tenure decisions at institutions that emphasize teaching effectiveness" (p. 50).
Check (1986) studied the positive traits of effective teachers and the negative traits of ineffective ones. By surveying 747 college students, 104 senior high school students, and 93 eighth graders, Check discovered that a teacher's use of humor in the classroom is a desirable trait. Of the seven traits that were rated the highest, "using humor, jokes, and witty remarks effectively" was ranked fifth, behind understanding of students and their problems; knowledgeable in subject matter; ability to relate to students, friendly, interested in them; and ability to communicate on the level of students. There were eight negative traits that surfaced, with "no sense of humor and unenthusiastic" ranking seventh, behind inability to communicate and deliver the subject; boring and monotonous; lack of knowledge, uninformed in subject; disorganized; insensitive to students and their needs; and aloofness and arrogance.
Pascarella and Terenzini (2005) state that when it comes to improving subject matter
learning, active student involvement (e.g., humor as an instructional technique) is more effective than traditional instructional formats (e.g., lecture and recitation). Furthermore, effective teachers
explain concepts more clearly, including examples and analogies pertinent to subject matter;
understand and enthusiastically present the subject matter; and have good rapport with the
students.
Teacher expressiveness is classified as a form of instructor immediacy. One example of teacher expressiveness is the use of humor as an instructional technique. Pascarella and Terenzini (2005) cite a study in which two randomly assigned groups of college students were shown videotaped lectures. One tape showed an expressive form of instruction (eye contact, voice inflection, physical movement, and content-relevant humor), and the other tape showed an unexpressive form of instruction. A posttest designed to assess retention and conceptual understanding was administered and revealed that the students who received the expressive instruction scored 34% higher than their counterparts. Other works cited by these researchers revealed that expressive instruction increases motivation to learn and memory encoding.
A review of Pascarella and Terenzini's (2005) work yields a few major points validating humor as an instructional technique. The more a student is engaged in the academic work and experience, the greater the level of knowledge acquisition. There is also a clear link between student learning and instructor behavior. More specifically, the two most prominent instructor behaviors that predicted student learning were instructor skill and course structure and organization. Interestingly, it was also stated that these skills are learnable.
Summary
The present review examined the literature relating to humor as a teaching tool in higher education. Areas of attention included theories of humor; the use of humor by instructors and students; types of humor and their effects on the classroom environment; when and where to use humor; and literature that discouraged the use of humor in a classroom (Berk, 2002; Berk, 2005; Check, 1986; Cornett, 1986; Garner, 2005; Garner, 2006; Hill, 1988; Powers, 2005; Stambor, 2006).
The literature revealed there are other studies that have examined humor as an instructional tool and its effect on student engagement and material retention, for example, Garner (2006) and Pascarella & Terenzini (2005). However, these studies have failed to ask
whether or not the students perceived the humorous material as actually humorous. Furthermore,
this researcher did not find any studies where the same group of students received humorous and
non-humorous lectures in a ?live? classroom setting. Based on those gaps in the literature, the
present study will examine three hypotheses regarding the use of humor as an instructional tool
in the college/university classroom.
(1) When college students are exposed to the experimental (humor) presentation, they
will view the instructor who uses humor as a teaching tool as significantly more humorous than
students in the control (non-humor) condition will. (2) When college students are exposed to the
experimental (humor) presentation, they will view the instructor who uses humor as a teaching
tool as significantly more engaging than students in the control (non-humor) condition will.
(3) When college students are exposed to the experimental (humor) presentation, they will experience a significant increase in knowledge from the pretest to the posttest.
CHAPTER 3
Methodology
The previous chapter provided a comprehensive review of the literature across a wide
variety of disciplines on humor as an instructional tool in the college classroom. This chapter
will discuss the methodology behind this quasi-experimental study that addresses the following
hypotheses: (1) When college students are exposed to the experimental (humor) presentation,
they will view the instructor who uses humor as a teaching tool as significantly more humorous
than students in the control (non-humor) condition will. (2) When college students are exposed
to the experimental (humor) presentation, they will view the instructor who uses humor as a
teaching tool as significantly more engaging than students in the control (non-humor) condition
will. (3) When college students are exposed to the experimental (humor) presentation, they will experience a significant increase in knowledge from the pretest to the posttest.
Participants
Since drawing a national sample from the millions of college students in the United States was impractical, this study focused on the accessible population, which consisted of college students enrolled at a large, public four-year university in the Southeast. More specifically, participants were sampled from two separate sections of FOUN 3100 (Child Development, Learning, Motivation, & Assessment) and two separate sections of FOUN 3120 (Adolescent Development, Learning, Motivation, and Assessment II) during the Spring 2010 semester. Both
courses are undergraduate courses within the school's College of Education. Ethnicity, gender, and age were determined by enrollment in the courses and were independent of this study. No participant was excluded on the basis of ethnicity, gender, and/or age. Participation was voluntary, and participants did not receive extra credit for their participation in this study.
Procedure
Surveys were chosen as the primary means of data collection in the present study for three reasons. The first reason was convenience. This research was conducted in pre-existing classrooms; thus, the researcher had access to an already-established sample of students. The second reason was the expectation of a high rate of return for the surveys because of the captive audience. The participants were given all instruments in the classroom and completed them
while in the classroom. The third reason was the researcher's ability to answer any questions or concerns that might arise about the instrumentation. The researcher was in the room while all of the research was being conducted, which allowed him to answer any questions.
The convenience sample consisted of four separate classrooms of participants enrolled in
FOUN 3100 and FOUN 3120. For the purposes of this study, the classes were randomly
designated A, B, C, and D. The researcher developed four separate lessons on two different topics and presented all eight lectures himself. Those two topics were
selected-response assessment (SRA) and performance-based assessment (PBA), two common
methods used in educational assessment. Each topic had a humorous (H) and non-humorous
(NH) lesson. Each of the four classes was randomly selected to receive either a humorous (H) or
non-humorous (NH) SRA lecture for their first lecture. The second lectures, on the topic of
PBA, were not randomly assigned. The second lectures were systematically assigned based on
the first lectures. For example, if class A was randomly selected for a H SRA lecture, that class
was systematically assigned to receive the NH PBA lecture.
For topic 1, SRA, the H presentation was given to classes A and C, and the NH presentation to classes B and D. For topic 2, PBA, the assignments were reversed: the H presentation was given to classes B and D, and the NH presentation to classes A and C. This approach exposed the entire sample to both a humorous and a non-humorous lecture.
PowerPoint slides and H and NH scripts can be found in Appendices F, I, O, and P.
The researcher visited each of the four classrooms four separate times. Each visit was
separated by one week. The first visit to each class allowed the experimenter to introduce
himself and the study. Participants who wished to volunteer in the present study were then asked to sign consent forms, which had been approved by the Institutional Review Board. An example of the consent form is located in Appendix A. With the exception of one student, each of the students in all four classrooms signed a consent form.
The topic of the second visit for each of the four classrooms was selected-response
assessment (SRA). The researcher distributed SRA interest surveys (Appendix D) and then SRA
domain knowledge pretests (Appendix E). Once the pretests were completed, the researcher
delivered the SRA lecture. Each of the four classes was randomly selected to receive either a
humorous (H) or non-humorous (NH) SRA lecture. Two classes received a humorous (H)
presentation; two classes received a non-humorous (NH) SRA lecture. The only difference
between the lectures was either the intentional insertion or omission of humorous material. A
copy of the SRA PowerPoint slides from the H and NH lectures is located in Appendices F and I.
The domain knowledge test questions on the SRA pretests and posttests were identical.
Following the lecture, the post-lecture feedback (Appendix K) survey was administered.
The topic for the third visit to each class was performance-based assessment (PBA).
However, before any PBA material was addressed, the participants took the domain knowledge
posttest for SRA. Once those were completed, the researcher distributed PBA interest surveys
(Appendix M) and then PBA domain knowledge pretests (Appendix N). Once the pretests were
completed, the researcher delivered the PBA lecture. Two classes received a humorous (H)
presentation; two classes received a non-humorous (NH) PBA lecture. The only difference
between the lectures was either the intentional insertion or omission of humorous material. A
copy of the PBA PowerPoint slides from the H and NH lectures is located in Appendices O and
P. The domain knowledge test questions on the PBA pretests and posttests were identical.
Following the lecture, the post-lecture feedback (Appendix K) survey was given.
During the fourth and final visit to each class the participants took the domain knowledge
posttest for PBA. Once the posttests were completed, the researcher distributed the Adapted
Student Perception Assessment Scale (SPAS), which is the humor perception instrument. A
copy of the SPAS is located in Appendix T.
Instrumentation
A review of the literature did not reveal an existing instrument that would
comprehensively measure humor as it relates to student perceptions of instructors who use
humor as a teaching tool, student engagement/attention, and retention of material presented.
With the exception of the Student Perception Assessment Scale (SPAS), each of the instruments
used in this study was completely developed by the researcher. Reliability was calculated for
each instrument, and that information is reported in Table 5 in chapter 4. Each of the scaled instruments (i.e., all instruments except the domain knowledge tests) used a five-point Likert-type scale with the choices strongly disagree (SD), disagree (D), unsure (U), agree (A), and strongly agree (SA).
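Reliability for scaled instruments of this kind is commonly reported as Cronbach's alpha. As a purely illustrative sketch (this is not the analysis code used in the study, whose results appear in Table 5), alpha can be computed from each respondent's list of item scores as follows:

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents' item-score lists.

    scores: one inner list per respondent, one entry per scale item.
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    """
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # transpose to per-item columns
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 5-point responses: four respondents, two items
print(cronbach_alpha([[4, 5], [3, 3], [5, 4], [2, 3]]))
```

When every respondent's items move in perfect lockstep, alpha reaches 1.0; uncorrelated items push it toward 0.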
The following instruments were used in this study:
• Interest Survey - This is a 10-item scaled survey that was given to each participant prior to both lectures. Since it was administered prior to the lecture, the interest survey is independent of any insertion or omission of humor. Examples are located in Appendices D and M.
• Domain Knowledge Pretests/Posttests - The pretest is a 20-item multiple-choice test that was given to each participant prior to both lectures. It covered the material on which the participants were about to receive the presentation. The posttest is the same 20-question multiple-choice test, and was administered at the beginning of the class period the week following the appropriate lecture. The domain knowledge tests for each topic were carefully structured to ensure the material on the tests was covered in the lectures. Examples are located in Appendices E and N.
• Post-Lecture Feedback Survey - This is a 17-item survey that was given to each participant immediately following both lectures. The first 15 items were scaled, and the final two questions were open-ended. The example is located in Appendix K.
• Student Perception Assessment Scale (SPAS) - This is a 45-item scaled survey comprised of questions from the original SPAS, which was developed by Nora James (2003), and a combination of questions from three other questionnaires (Deiter, 2000; Shiyab, 2009; Walker, 2006). This survey was the last instrument administered in the fourth and final visit to each classroom. While this instrument did assess participants' perceptions of the use of humor on their engagement levels, the retention of the material, and their perception of the instructor, it was not tailored to a specific lecture. Because it was a general perception instrument, it did not lend itself to answering the three hypotheses. For that reason, the only SPAS data reported in chapter 4 is its reliability.
To ensure the participants were carefully answering the questions, several questions on
the scaled instruments were reverse coded. No items on the domain knowledge tests were
reverse coded. Table 2 below exhibits which items were reverse coded on the specific scaled
instruments.
Table 2 - Reverse-Coded Items
Instrument Item number
SRA & PBA Interest Surveys 3, 10
SRA & PBA Post-Lecture Feedback Surveys 2, 4, 7, 11, 12, 13, 15
Student Perception Assessment Scale (SPAS) 1, 2, 3, 5, 6, 8, 9, 10, 13, 15, 21, 24, 26, 27, 30,
38, 41
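On a five-point scale coded 1 (SD) through 5 (SA), reverse coding simply maps each listed item's score to six minus that score. A minimal sketch of the recoding step (the `item3`/`item10` key names are hypothetical, chosen only to mirror the Interest Survey rows in Table 2):

```python
def reverse_code(responses, reverse_items, scale_max=5):
    """Return a copy of {item_name: score} with the listed items reverse coded.

    On a 1..scale_max scale, a reversed score is (scale_max + 1) - score,
    so 1 <-> 5, 2 <-> 4, and 3 stays 3 on a five-point scale.
    """
    recoded = dict(responses)
    for item in reverse_items:
        recoded[item] = (scale_max + 1) - recoded[item]
    return recoded

# Interest Survey example: items 3 and 10 are reverse coded (Table 2)
raw = {"item1": 4, "item3": 1, "item10": 2}
print(reverse_code(raw, ["item3", "item10"]))  # item3: 1 -> 5, item10: 2 -> 4
```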
Humor
If the participants did not think the instructor was humorous, then the rest of the study would be irrelevant. Therefore, it was necessary to determine whether or not the participants perceived the
instructor as humorous. There are appropriate and inappropriate types of humor that can be used
in the classroom. A review of the literature reveals and supports many types of appropriate
humor that can and should be used in classroom instruction. Those include puns, self-
deprecation, story-telling, cartoons, jokes, riddles, humorous comments, quotes, analogies,
metaphors, role play, word play, exaggeration, problem sets, examples, and spontaneous types of
humor (Berk, 1996; Berk, 2002; Edwards & Gibboney, 1992; Garner, 2005; Hashem, 1994; Kher et al., 1999; Shatz & LoSchiavo, 2006). The H and NH presentations for both selected-response and performance-based assessment reflected some of the appropriate types of humor mentioned here.
Tendentious humor is humor that is hostile or aggressive and often disparages others or
self (Spindle, 1989). Examples include sexual humor, racial humor, overly sarcastic humor, and humor that is profane, vulgar, or ridiculing. This type of humor can result in students
withdrawing and becoming angry, anxious, resentful, and/or tense (Berk, 2002). Since one of
the goals of the present study was to focus on the effects of humor on student engagement,
tendentious humor was intentionally omitted from these lectures.
The first hypothesis for this study specifically addresses humor: When college students
are exposed to the experimental (humor) presentation, they will view the instructor who uses
humor as a teaching tool as significantly more humorous than students in the control (non-
humor) condition will. The instrument used to assess this was the post-lecture feedback survey
(PLF). Six of the fifteen scaled items directly asked the participants about their perceptions of the humor (or lack thereof) they had just witnessed in the lecture. Results are given in Table 8 in chapter 4.
Engagement
The present study also assessed whether there is a connection between the humor and the
level of engagement experienced by the participants. The second hypothesis for this study is:
When college students are exposed to the experimental (humor) presentation, they will view the
instructor who uses humor as a teaching tool as significantly more engaging than students in the
control (non-humor) condition will. The instrument used to assess this was the post-lecture
feedback survey (PLF). Nine of the fifteen scaled items directly asked the participants about
their levels of engagement for the lecture they just witnessed. Results are given in Table 10 in
chapter 4.
Domain Knowledge
Domain knowledge was assessed through domain knowledge pretests/posttests, and
addressed the third hypothesis: When college students are exposed to the experimental (humor) presentation, they will experience a significant increase in domain knowledge from the pretest to the posttest. Results for this hypothesis are discussed in chapter 4.
Pertaining to the reliability of the pretest/posttest, the amount of time that passes between
the pretest and the posttest is crucial. The shorter the time span between the two observations
(pretest/posttest), the higher the correlation. Conversely, the longer the time span, the lower the
correlation. Because the two observations are related over time, the posttest should follow the pretest within a reasonable amount of time so that any errors cannot be attributed to the time span (Trochim, 2006). In this study, the posttest followed the pretest by one week, which is an adequate time lapse to assess the effect that the humorous presentations had on participants' retention and recall.
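Test-retest reliability of this kind is typically expressed as the Pearson correlation between the two administrations. A self-contained sketch (the scores below are invented for illustration, not data from this study):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired score lists (e.g., test and retest)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Perfectly consistent retest scores (each exactly one point higher)
print(pearson_r([10, 12, 15], [11, 13, 16]))
```

As the paragraph above notes, a longer gap between administrations generally lowers this correlation, since more unrelated change accumulates between the two observations.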
Data Analysis
Based on the three hypotheses, the independent variable is the presence of humor, and the dependent variables are (1) participants' perceptions of the instructor's humor, (2) participants' levels of engagement in the class on the subject matter being taught, and (3) domain knowledge.
Humor is the independent variable because that is the variable that is being manipulated across
the four classrooms.
This study compared four classes over two repeated conditions, humor and topic; therefore, the analysis implemented was a within-subjects ANOVA. There were two separate topics (SRA and PBA), and a humorous and non-humorous (H and NH) presentation for each. The ANOVA for humor and engagement took the form Humor (2) x Topic (2).
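For a within-subjects factor with only two levels, such as humor (H vs. NH), the repeated-measures F statistic can be computed directly from each participant's pair of scores, and it equals the square of the paired t statistic, which makes a handy sanity check on statistical-package output. A hedged sketch (the scores are invented; this is not the analysis code used in the study):

```python
def two_level_within_F(cond1, cond2):
    """F for a single two-level within-subjects factor (df = 1, n - 1).

    cond1, cond2: each participant's score under the two conditions,
    in the same participant order.
    """
    n = len(cond1)
    grand = (sum(cond1) + sum(cond2)) / (2 * n)
    m1, m2 = sum(cond1) / n, sum(cond2) / n
    ss_cond = n * ((m1 - grand) ** 2 + (m2 - grand) ** 2)  # condition SS
    ss_err = 0.0  # residual SS after removing subject and condition effects
    for a, b in zip(cond1, cond2):
        subj = (a + b) / 2
        ss_err += (a - m1 - subj + grand) ** 2 + (b - m2 - subj + grand) ** 2
    return ss_cond / (ss_err / (n - 1))

# Hypothetical engagement scores for four participants: NH vs. H lectures
print(two_level_within_F([10, 12, 11, 13], [15, 14, 16, 15]))
```

With two within-subjects factors, as in the Humor (2) x Topic (2) design here, each main effect and the interaction gets its own error term, but the two-level logic per effect is the same.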
Analyses were also conducted on each topic and on participant interest to ensure that the topic itself (SRA/PBA) did not factor into any of the results, and that there was not a significantly different level of interest in either topic prior to the lectures. A careful analysis of the data is detailed in the following chapter.
Pilot Study
From October 22 to November 3, 2009, a pilot study was conducted for only the humorous versions of the presentations for both selected-response assessment (SRA) and performance-based assessment (PBA). The justification was that the non-humorous and humorous presentations both contained the same content knowledge. The experimental presentations (humorous) also contained intentionally inserted humorous cartoons, slides, comments, and examples, while the control presentations (non-humorous) did not contain intentionally humorous material. Therefore, it was deemed unnecessary to pilot the control presentations.
The classroom used was a section of FOUN 3100, which met twice weekly on Tuesdays and Thursdays at 5:00 pm during the Fall 2009 semester. The class consisted of 24 students. Of the
twenty-four participants, one was a white male, two were African-American females, and
twenty-one participants were white females. The researcher was scheduled to enter the
classroom four separate times.
First Visit
On Thursday, October 22, 2009, the researcher introduced and explained the study to the
participants. Then an IRB-approved script was read and consent letters distributed. The letters
were signed by the participants and then collected by the researcher.
Second Visit
On Tuesday, October 27, 2009, the researcher distributed the SRA interest survey and
domain knowledge pretest prior to beginning the lecture. The participants were asked to
complete the information to the best of their ability. To ensure confidentiality, each participant
was assigned a unique participant identification number, which was to be used on each of the
documents they completed. The lecture covered the selected-response assessment material from
their textbook. A PowerPoint slideshow accompanied the lecture. The lecture and PowerPoint slideshow contained intentionally inserted humorous cartoons, slides, comments, and examples that reinforced the content pertaining to SRA. Immediately following
the lecture, a lecture feedback survey was distributed and then collected.
Third Visit
On Tuesday, November 3, 2009, the researcher distributed the SRA domain knowledge
posttest. Once those were completed, the researcher distributed the PBA interest survey and
domain knowledge pretest prior to beginning the lecture. The lecture covered the performance-
based assessment material from their textbook. A PowerPoint slideshow accompanied the lecture. The lecture and PowerPoint slideshow contained intentionally inserted humorous cartoons, slides, comments, and examples that reinforced the content pertaining to PBA. Immediately following the lecture, a lecture feedback survey was distributed and then
collected.
Fourth Visit
On Thursday, November 5, 2009, the researcher was scheduled to go back for a final time
in order to distribute and collect the domain knowledge posttest for PBA, and administer the
SPAS instrument. Because of class-related factors unrelated to this study, the PBA posttest could not be conducted. However, the data from the SRA instruments were deemed sufficient as a pilot study.
Formats
The researcher used Microsoft PowerPoint presentations for both SRA and PBA
presentations, and handouts of the slides were distributed prior to the lecture. All of the
instruments used in the pilot study ? interest surveys, domain knowledge pretests and posttests,
and lecture feedback ? were pen- and paper-based. Copies of all instruments and presentations
are in the Appendices.
Results
Since the pilot study contained only one set of domain knowledge pretest and posttest
scores, a paired t-test was used to compare the means of the scores. The results of the paired t-test for the selected-response assessment domain knowledge pretest and posttest scores were significant at p < .001 (two-tailed). The mean pretest score was 11.36 out of a possible 20; the mean posttest score was 15.18 out of a possible 20. The standard error of the mean was .376. Statistically, there was a significant increase in scores from the pretest to the posttest. Thus the experimental manipulation was deemed sufficient to create a change in the dependent variable.
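The paired t statistic reported above divides the mean pretest-to-posttest gain by the standard error of that gain. A minimal stdlib sketch (the scores are invented for illustration, not the pilot data):

```python
import math
from statistics import mean, stdev  # stdev uses the n - 1 denominator

def paired_t(pretest, posttest):
    """Paired t statistic for the posttest - pretest differences."""
    diffs = [post - pre for pre, post in zip(pretest, posttest)]
    se = stdev(diffs) / math.sqrt(len(diffs))  # SE of the mean difference
    return mean(diffs) / se

# Hypothetical domain-knowledge scores (out of 20) for four participants
print(paired_t([10, 12, 11, 13], [15, 14, 16, 15]))
```

Because each participant serves as their own control, the paired test removes between-participant variability from the error term, which is why it suits a pretest/posttest design like this one.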
There were other things learned through the pilot study. A few questions on the domain knowledge tests needed to be reworded, because many participants did not interpret those specific questions as the researcher intended. It was also realized that there is a great deal of paperwork involved in the data collection, which means a great deal of participants' time invested in completing the instruments. It was nevertheless decided to proceed with the paper-based instruments.
Summary
This chapter described the research design and methodology along with the selection of
the sample, instrumentation, procedure, and data analysis. The results of this study will be
addressed in the following chapter.
The present study used a knowledge pretest/posttest group design to assess the effects of
humor as an instructional tool on students' perceptions of the instructor's humor, student
engagement, and student retention. A convenience sample of 76 education majors was used.
CHAPTER 4
Presentation of Findings
The purpose of this study was to empirically investigate whether the use of humor as an
instructional tool in the college classroom can increase the level of student engagement, and
thereby increase the level of material retention. This study addressed the following hypotheses:
(1) When college students are exposed to the experimental (humor) presentation, they will view
the instructor who uses humor as a teaching tool as significantly more humorous than students in
the control (non-humor) condition will. (2) When college students are exposed to the
experimental (humor) presentation, they will feel significantly more engaged in the presentation
than students in the control (non-humor) condition will. (3) When college students are exposed
to the experimental (humor) presentation, they will experience a significant increase in
knowledge from the pretest to the posttest.
In the present study, participants were assigned, based on a convenience sampling method,
to humorous and non-humorous presentations on two topics: selected-response assessment
(SRA) and performance-based assessment (PBA). The participants completed researcher-
developed interest surveys, domain knowledge pretests and posttests, and lecture feedback
surveys. During the researcher's final visit to the classroom, each participant completed the
Student Perception Assessment Scale (SPAS), a humor assessment instrument. The results
were analyzed and are presented in this chapter.
Sample Demographics
The participants in this study were a convenience sample consisting of 74 undergraduate
students and 2 graduate students. There were 21 male and 55 female participants; both
graduate students were female. The ages of the sample were as follows: 18-24 (N = 71);
25-34 (N = 4); and 35-44 (N = 1). All of the undergraduate participants were classified as
either juniors (N = 20) or seniors (N = 54). The ethnicity of the sample was as follows:
African-American (N = 3); Caucasian (N = 72); no ethnicity identified (N = 1). All 76
participants held majors within the College of Education, with the exception of one participant
who had recently changed from education to English. Table 3 summarizes the demographics
(N = 76).
Table 3 – Demographic Table

CATEGORY               N    PERCENTAGE
Gender
  Male                21    27.6
  Female              55    72.4
Age
  18-24               71    93.4
  25-34                4     5.3
  35-44                1     1.3
Classification
  Junior              20    26.3
  Senior              54    71.1
  Graduate             2     2.6
Ethnicity
  African-American     3     4.0
  Caucasian           72    94.7
  Not identified       1     1.3
Research Design and Instrumentation
Because the researcher had convenient access to a captive audience, survey methodology was
chosen as the primary means of data collection. All surveys were completed in the classroom
during the researcher's visits; the participants were not asked to complete any portion of any
instrument off-site. The surveys included an interest survey on each of the two topics
(selected-response assessment and performance-based assessment), a post-lecture feedback
survey, and the Adapted SPAS (Student Perception Assessment Scale), a humor perception
instrument.
All three surveys consisted of questions rated on a five-point Likert-type scale with the
choices strongly disagree (SD), disagree (D), unsure (U), agree (A), and strongly agree (SA).
The interest survey contained ten questions; the post-lecture feedback survey contained
seventeen questions, two of which were open-ended; and the Adapted SPAS contained
forty-five questions. Each of the surveys is located in the Appendices.
In addition to the surveys, domain knowledge pretests and posttests were used on each
topic to assess the amount of domain knowledge learned and retained over a one-week period.
Each domain knowledge pretest and posttest contained twenty multiple-choice questions,
which were taken directly from the lectures. The domain knowledge tests are located in
Appendices E and N.
Reliability
Before the specifics of the reliability tests are discussed, it is important to present the
descriptive statistics so that the reader can understand the data set. Table 4 below shows the
Cronbach's alpha, mean, standard deviation, and total number of respondents (N) for each of
the instruments used in this study, including the humorous (H) and non-humorous (NH) lecture
versions.
Table 4 – Reliability and Descriptives Table

Instrument        Cronbach's Alpha   M        SD       N
SRA Interest      .687               3.3409   .40458   66
PBA Interest      .718               3.4348   .39770   74
SRA Pretest       .708               .5924    .17259   66
PBA Pretest       .591               .4351    .16275   74
NH SRA Posttest   .770               .6983    .18914   30
NH PBA Posttest   .632               .5588    .19243   38
H SRA Posttest    .729               .6875    .21259   36
H PBA Posttest    .720               .5077    .19320   29
NH SRA PLF        .841               3.3892   .52213   31
NH PBA PLF        .872               3.2850   .71834   40
H SRA PLF         .927               4.0450   .71112   37
H PBA PLF         .816               3.8875   .48168   35
Total SPAS        .939               4.0141   .38622   68
"A measurement procedure is considered reliable to the extent that it produces stable,
consistent measurements" (Gravetter & Wallnau, 2004, p. 526). To demonstrate reliability in
this study, the researcher used Cronbach's alpha, which is a common measure of internal
consistency (Shannon & Davenport, 2001). The Cronbach's alpha coefficient ranges from 0 to
1, with 1 being perfectly consistent/reliable.
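The reliability coefficients above were presumably produced in a standard statistics package. Purely as an illustration of what the coefficient measures, Cronbach's alpha can be computed from a respondents-by-items score matrix as below; the data are made up for the sketch and are not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of each respondent's summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data (not the study's): 5 respondents x 4 Likert items
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 3))  # -> 0.936
```

Values near 1, such as the SPAS's .939, indicate that the items move together and measure a consistent construct.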
With the exception of the Student Perception Assessment Scale (SPAS), all of the
instruments used were developed entirely by the researcher. Therefore, the reliability of each
instrument was tested as it pertained to topic interest, humorous (H) and non-humorous (NH)
topic pretest scores, humorous (H) and non-humorous (NH) topic posttest scores, and humorous
(H) and non-humorous (NH) topic post-lecture feedback scores. Although the SPAS borrowed
heavily from James (2003), the researcher did include some additional questions.
Consequently, the SPAS instrument was also tested for reliability. The alpha level established
for this research was .05.
Each instrument rated high in reliability and internal consistency, with the SPAS being
the most reliable at .939 and the PBA pretest the least reliable at .591. The domain knowledge
pretests and posttests were graded for correct and incorrect responses, and the participants were
given credit only for correct responses. The instruments for interest, post-lecture feedback, and
the SPAS were all scored on a five-point Likert-type scale with the choices strongly disagree
(SD), disagree (D), unsure (U), agree (A), and strongly agree (SA). The item means account
for the items that were worded negatively and therefore reverse coded; the reverse-coded
questions are listed in Chapter 3.
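For readers unfamiliar with reverse coding: on a five-point Likert scale, a negatively worded item is flipped so that agreement with the underlying construct always scores high. A minimal sketch (the function name is illustrative, not from the study's materials):

```python
def reverse_code(response: int, scale_max: int = 5, scale_min: int = 1) -> int:
    """Flip a negatively worded Likert response so high scores always mean
    agreement with the construct: on a 1-5 scale, 1 <-> 5, 2 <-> 4, 3 stays 3."""
    return scale_max + scale_min - response

print([reverse_code(r) for r in [1, 2, 3, 4, 5]])  # -> [5, 4, 3, 2, 1]
```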
The mean scores in Table 4 reveal an interesting pattern. The participants rated their
interest in PBA (3.4348) higher than their interest in SRA (3.3409); however, on each of the
other instruments, they scored or rated SRA higher than PBA. This pattern will be discussed
in greater detail below.
Preliminary Analysis
Before addressing the study's primary hypotheses, it is important to rule out the
possibility that the topics (selected-response assessment and performance-based assessment)
were themselves a factor in the results. For example, was the use of humor as an instructional
tool more effective for one topic than for the other? Paired samples t-tests were run using the
total scores from the SRA and PBA interest scales and pretests. The participants initially knew
more about SRA than they did about PBA. Table 5 summarizes those results.
Table 5 – Paired Samples Statistics

Pair   Instrument     M       SD     N    t       p
1      SRA Interest   33.27   4.01   62   -1.98   .053
       PBA Interest   34.15   4.15   62
2      SRA Pretest    11.80   3.48   64   7.272   < .001
       PBA Pretest     8.84   3.39   64
Maintaining the alpha level at .05, the difference in interest between the two topics was
not significant. While the participants expressed slightly higher interest in PBA than in SRA,
this difference failed to reach statistical significance (p = .053). However, there was a
significant difference between the topics on pretest scores: the average SRA pretest score was
11.80, while the average PBA pretest score was 8.84. This demonstrates that the participants
knew more about SRA than they did about PBA. While the participants demonstrated more
knowledge of SRA, it is important to note that the pretest scores are independent of any
introduction or omission of humor from a lecture.
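As an illustration of the paired samples t-test used throughout this chapter, the sketch below runs `scipy.stats.ttest_rel` on simulated paired pretest scores drawn near the means and standard deviations reported in Table 5. The data are fabricated for demonstration only and are not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired scores (not the study's data): each of 64 participants'
# SRA and PBA pretest totals out of 20, with SRA drawn higher on average.
sra = rng.normal(11.8, 3.5, size=64).round().clip(0, 20)
pba = rng.normal(8.8, 3.4, size=64).round().clip(0, 20)

# Paired (dependent) samples t-test: tests whether the mean difference is zero
t_stat, p_value = stats.ttest_rel(sra, pba)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With a true difference of roughly three points, the test should reject the null hypothesis, mirroring the significant pretest difference between topics reported above.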
The independent samples t-tests below in Table 6 indicate that the SRA and PBA lectures
did not differ in perceived humor. This finding helps to rule out topic as a factor in this research.
Table 6 – Independent Samples Statistics

Condition                             F       Sig.   df   Sig. (2-tailed)
Humor (equal variances assumed)       2.560   .114   68   .064
Non-Humor (equal variances assumed)   .095    .760   61   .231
Hypothesis #1
When college students are exposed to the experimental (humor) presentation, they will view the
instructor who uses humor as a teaching tool as significantly more humorous than students in
the control (non-humor) condition will. Based on the results of the present study, this hypothesis
was supported.
It was necessary to determine whether or not the participants who received the humorous
lectures actually perceived them as humorous. More specifically, it was important to determine
whether participants did in fact perceive the instructor as more humorous during the humorous
presentation. This was important because research has shown that instructors who are seen as
humorous increase instructor immediacy, which fosters a positive and productive learning
environment, holds students' attention, and motivates students (Campbell, 1992). Instructor
immediacy has also been positively associated with teaching effectiveness and learning
outcomes (Zhang, 2005).
A paired samples t-test was used to compare participants' perceptions of humor after
listening to a humorous and a non-humorous presentation. The humor and non-humor items
were taken from the post-lecture feedback (PLF) surveys administered following each of the
four lectures. There were seventeen (17) total questions on the post-lecture feedback surveys,
the last two of which were open-ended; therefore, only the fifteen scaled items are reported in
Table 7. The specific questions that targeted humor were questions 1, 2, 7, 12, 13, and 14.
All but question 1 were reverse coded.
Table 7 – Paired Samples t-test – Hypothesis #1

Condition   M        SD       N    t        p
Humor       4.2079   .51874   63   12.441   < .001
Non-humor   3.3571   .56785   63
The paired samples t-test combined the responses to the humor-targeted PLF questions
across both topics. The participants rated the humorous lectures as significantly more humorous
(M = 4.2079) than the non-humorous lectures (M = 3.3571), p < .001. This supports
hypothesis #1: participants found the humorous presentations more humorous than the
non-humorous presentations.
As a follow-up to the paired samples t-test, an independent t-test was run that compared
the humorous lectures by topic and the non-humorous lectures by topic. Table 8 contains the
descriptive summary table and the results of the t-test for the humorous and non-humorous
lectures for each topic.
Table 8 – Independent Samples Group Statistics

Condition   Topic   M        SD       N    t       p
Humor       SRA     4.3048   .50242   35   1.883   .064
            PBA     4.0838   .47899   35
Non-Humor   SRA     3.4394   .52011   33   1.210   .231
            PBA     3.2667   .61214   30
The descriptive summary reveals that the participants rated the SRA lectures as slightly
more humorous than the PBA lectures on the PLF in both the humorous and non-humorous
conditions. These differences are not statistically significant (p = .064; p = .231). The standard
error of the mean is small for all four groups, which demonstrates a strong representation of the
population mean.
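The between-topic comparison can be sketched with `scipy.stats.ttest_ind`, using simulated humor-perception means near those in Table 8 for the two independent groups. The data are fabricated; equal variances are assumed, as in the reported analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical per-participant humor-perception means (not the study's data):
# two independent groups who saw the humorous lecture on different topics.
h_sra = rng.normal(4.30, 0.50, size=35)
h_pba = rng.normal(4.08, 0.48, size=35)

# Independent samples t-test, equal variances assumed
t_stat, p_value = stats.ttest_ind(h_sra, h_pba, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

Unlike the paired test, this compares two separate groups of participants rather than two measurements of the same participants.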
Hypothesis #2
When college students are exposed to the experimental (humor) presentation, they will feel
significantly more engaged in the presentation than students in the control (non-humor)
condition will. Based on the results of the present study, this hypothesis was supported.
It was necessary to determine whether or not the participants who received the humorous
lectures reported feeling more engaged in the lectures than the participants who received the non-
humorous lectures. This is important because academic engagement is one of the best predictors
of academic success and the overall educational experience (Kuh, 2001; McKeachie, 1994).
A paired samples t-test was run that compared the levels of engagement in the humorous
and non-humorous lectures, with both topics combined. The engagement items were taken
from the post-lecture feedback (PLF) surveys given following each of the four lectures. There
were seventeen total questions on the post-lecture feedback surveys, the last two of which were
open-ended; therefore, only the fifteen scaled items are reported in the table below. The
specific questions that targeted engagement were questions 3, 4, 5, 6, 8, 9, 10, 11, and 15.
Questions 4, 11, and 15 were reverse coded. Table 9 summarizes the results.
Table 9 – Paired Samples t-test – Hypothesis #2

Condition   M        SD       N    t       p
H Engage    3.9383   .57658   63   6.893   < .001
NH Engage   3.2981   .64906   63
The paired samples t-test combined the responses to the engagement-targeted PLF
questions by lecture type. The participants reported significantly higher engagement (p < .001)
in the humorous lectures (M = 3.9383) than in the non-humorous lectures (M = 3.2981). This
supports hypothesis #2: participants were more engaged when viewing a humorous presentation
than when viewing a non-humorous presentation.
As a follow-up to the paired samples t-test, an independent samples t-test was run that
compared the levels of participant engagement by topic within the humorous and non-humorous
lectures. Table 10 contains the descriptive summary of the levels of engagement for each topic
within each type of lecture, H or NH.
Table 10 – Independent Samples Group Statistics

Condition   Topic   M        SD       N    t        p
H Engage    SRA     4.0667   .56476   35   2.380    .020
            PBA     3.7587   .51660   35
NH Engage   SRA     3.1616   .69782   33   -1.780   .080
            PBA     3.4481   .56466   30
The descriptive summary reveals that the participants were more engaged in the H SRA
lecture than in the H PBA lecture at a statistically significant level (p = .020). Participants were
slightly more engaged in the NH PBA lecture than in the NH SRA lecture, but this difference
was not statistically significant (p = .080). The standard error of the mean is small for all four
groups, which demonstrates a strong representation of the population mean.
Hypothesis #3
When college students are exposed to the experimental (humor) presentation, they will
experience a significant increase in knowledge from the pretest to the posttest. Based on the
results of the present study, this hypothesis was not supported.
Humor as an instructional tool has often been shown to increase material understanding
and retention (Crump, 1996; Garner, 2006; Pascarella & Terenzini, 2005; Torok et al., 2004).
To determine whether participants who received the humorous lectures had greater gains in
their posttest domain knowledge scores than participants who received the non-humorous
lectures, a 2 (H, NH) x 2 (pre, post) within-subjects ANOVA was conducted. The means and
standard deviations for each condition are summarized in Table 11 and illustrated in Figure 1.
Table 11

Condition      Pretest Mean (SD)   Posttest Mean (SD)
Humorous       10.4906 (3.36625)   12.2264 (3.80608)
Non-Humorous    9.4528 (3.87083)   12.1509 (3.76929)
Figure 1. Mean domain knowledge scores from pretest to posttest for the humorous and
non-humorous lectures (y-axis scaled 0 to 14).
While there was a one-point difference between the H and NH pretest mean scores
(10.4906 vs. 9.4528), the gap closed at the posttests (12.2264 vs. 12.1509). Using only the
mean scores, there is no apparent difference in posttest scores between the H and NH
presentations. The within-subjects ANOVA probes a little deeper. Table 12 summarizes the
tests of the 2 x 2 within-subjects ANOVA.
Table 12 – Within-Subjects ANOVA – Hypothesis #3

Source         df   Mean Square   F        Significance
Humor          1    16.420        1.002    .321
Error          52   16.381
Time           1    260.495       72.923   < .001
Error          52   3.572
Humor * Time   1    12.269        3.988    .051
Error          52   3.077
Overall, there was not a significant difference in knowledge between the humorous and
non-humorous lectures. There was, however, a significant increase in knowledge over time
(p < .001). The interaction between humor and time failed to reach statistical significance
(p = .051), indicating that the increase in domain knowledge scores reflected gains over time
rather than the type of lecture to which the participants were exposed.
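A 2 x 2 within-subjects (repeated-measures) ANOVA of this kind can be sketched with statsmodels' `AnovaRM`. The long-format data below are simulated purely to show the structure of the analysis (one row per participant per humor-by-time cell) and do not reproduce the study's values:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
n = 54  # hypothetical number of participants with complete data

# Simulated long-format data: each participant contributes one score per
# (humor, time) cell, with a shared per-person baseline plus a gain over time.
rows = []
for pid in range(n):
    base = rng.normal(10, 3)
    rows += [
        {"pid": pid, "humor": "H",  "time": "pre",  "score": base + rng.normal(0.5, 1)},
        {"pid": pid, "humor": "H",  "time": "post", "score": base + rng.normal(2.2, 1)},
        {"pid": pid, "humor": "NH", "time": "pre",  "score": base + rng.normal(-0.5, 1)},
        {"pid": pid, "humor": "NH", "time": "post", "score": base + rng.normal(2.2, 1)},
    ]
df = pd.DataFrame(rows)

# 2 (humor) x 2 (time) within-subjects ANOVA
result = AnovaRM(df, depvar="score", subject="pid", within=["humor", "time"]).fit()
print(result.anova_table)  # rows: humor, time, humor:time
```

The `humor:time` interaction row is the test that mattered for hypothesis #3: a significant interaction would mean the pre-to-post gain differed by lecture type, which the study did not find.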
Open-Ended Responses
The post-lecture feedback surveys (PLF), which were the same for both topics, contained
two open-ended questions that followed the fifteen (15) scaled items. Some of the more
notable responses are listed below in Tables 13-16. The participants are the same in Table 13
and Table 14 (H SRA and NH PBA) and in Table 15 and Table 16 (NH SRA and H PBA).
Table 13 – Humorous SRA PLF
(Interest = "What about the lecture generated interest for you?"; Suggestion = "What
suggestions do you have to improve the lecture?")

Interest: "I loved the pictures. It gave me something to look at without the whole class being
so serious."
Suggestion: "This lecture was very interesting and educational. I wouldn't change a thing."

Interest: "I really liked the cartoons and pictures. It helped me pay attention. The cartoons
were better than looking at a slide full of notes."
Suggestion: "More humor! : )"

Interest: "He seemed knowledgeable on the topic. He kept my interest b/c he did not go off
on tangents."
Suggestion: "No suggestions. It was fun and interesting as educational psychology lectures
can be."

Interest: "The humor was directed towards Auburn/college students. This is useful
information as a future teacher."
Suggestion: "I think that using humor in the classroom is a great idea but I think that it should
be something that we can relate to. Like things that we think are funny."

Interest: "It is something that it is a pretty boring topic, but he managed to find some humor
in it."
Suggestion: "Make humor more obvious if that what we are asked for our opinion about. I
had no idea you were using humor!! Maybe one thing caught my attention about you being
humorous."

Interest: "I find the examples relevant to the topic and the relation to real life situations and
current interests made the lecture more engaging. The humor was a nice break to the lecture.
There was just the right amount of humor also. Enough to keep me engaged but not too much
that it seemed overdone."
Suggestion: "When there is a picture slide, add some keywords, too. : ) Humor is good but
it's hard to let it stand alone."

Interest: "The humor and the slides did not give all the information you did. You related the
information to something that I know about or interests me."

Interest: "The use of pictures/cartoons generated interest b/c I wanted to understand what
happened and how it related to what we were discussing. Used good examples/anecdotes."
Table 14 – Non-humorous PBA PLF
(Interest = "What about the lecture generated interest for you?"; Suggestion = "What
suggestions do you have to improve the lecture?")

Interest: "Not much except I knew that learning about this is beneficial to me as a future
educator. Without humor it was boring and hard to pay attention."
Suggestion: "Bring back humor!!"

Interest: "Nothing!!!"
Suggestion: "Humor!"

Interest: "It was pretty boring"
Suggestion: "More humor!"

Suggestion: "Use humor! : ) Definitely makes things more interesting and easier for students
to stay involved and learn."

Interest: "I didn't catch much humor"

Interest: "I liked the ending exercise but the PowerPoint was not as interesting."
Suggestion: "I enjoyed the use of humor from the first lecture, so it was missed in this one."

Interest: "The activity at the end caught my attention b/c I will be going through the interview
process soon."
Suggestion: "More like the first lecture with humor and relating to stuff."

Interest: "The activity that we did post-lecture. It really helped illustrate to me the importance
of a good rubric."
Suggestion: "I liked the more humorous lecture last week better. But I really enjoyed the
activity this week."

Interest: "The instructor was engaging. He made the lecture seem more like a conversation."
Suggestion: "Maybe incorporate music/video (perhaps You Tube) into the lecture for more
entertainment/engagement."
Table 15 – Non-humorous SRA PLF
(Interest = "What about the lecture generated interest for you?"; Suggestion = "What
suggestions do you have to improve the lecture?")

Interest: "It was hard to stay interested."
Suggestion: "I don't know if I missed it but I didn't see any humor in the lecture."

Interest: "I liked the final quiz. It is easy to understand when you see this applied."
Suggestion: "Don't talk so fast. Allow the class to engage in discussion thru back and forth
question. Act like you're excited about the material."

Interest: "The topic itself is fairly interesting to me as a teacher. The fact that someone
different was teaching made it more interesting and caught my attention."
Suggestion: "Needs to be more engaging."

Suggestion: "Engage students more to grab and hold attention"
Table 16 – Humorous PBA PLF
(Interest = "What about the lecture generated interest for you?"; Suggestion = "What
suggestions do you have to improve the lecture?")

Interest: "I definitely enjoyed this lecture more than I did the other. I loved the picture
comparisons and descriptions that went along with the lecture."
Suggestion: "I feel that if using cartoons I would put them on separate slides. At times I
caught myself reading whatever instead of listening; however they do have their plus."

Interest: "It was quick and to the point. He did not get off subject and ramble. His examples
were easy to get, quick, and funny."
Suggestion: "I liked the pop culture examples better than the cartoon examples."

Interest: "I loved the comparison to American Idol (b/c it's something we all know)."
Suggestion: "Too many distracting, hilarious comics"

Interest: "You used the cartoons in good places. I think it will be more memorable."
Suggestion: "Use more of a variety of cartoons - this will appeal to a broader audience."

Interest: "I liked that there were a lot of cartoons and examples to make the material more
relatable and interesting."
Suggestion: "If all of your lectures were similar it would be much easier to stay attended to
the lessons."

Interest: "Cartoons, humor, PowerPoint, age-appropriate examples - examples were not old."

Interest: "All the examples that could be easily related to. Was not presented in a way no one
understand. Made the material approachable."

Interest: "I liked the American Idol and cartoon references. It helped bring it down to our
level."
The responses to the PLF surveys were consistent with the statistical results for the first
two hypotheses. The participants did find the instructor and the intentionally humorous
material humorous. The following response validates the need to ascertain whether or not
students think material is funny: "I think that using humor in the classroom is a great idea but
I think that it should be something that we can relate to. Like things that we think are funny."
The participants also found the intentionally humorous lectures engaging, as mentioned
specifically in several of the responses listed in the tables above.
While the participants did report enjoyment and engagement in the humorous lectures,
there was no statistically significant effect of humor on the gains in domain knowledge posttest
scores. Even with participant comments such as "memorable," "interesting," "engaging," and
"bring back the humor," humor did not increase posttest scores.
Summary
The results for the third hypothesis were not as expected. It was not surprising that the
participants reported the humorous lectures as more humorous and more engaging than the
non-humorous lectures (hypotheses 1 and 2); that finding corroborates previous literature in
this area (e.g., Berk, 2002; Kuh, 2001; Wanzer & Frymier, 1999). It was, however, surprising
that the gains in posttest scores following the humorous lectures were statistically
indistinguishable from the gains following the non-humorous lectures. The existing literature
remains inconclusive, and this study did not further elucidate the issue. A comprehensive
discussion of these findings follows in Chapter 5.
CHAPTER 5
Discussion
This study investigated the effect of instructor humor on college students' levels of
engagement and their ability to retain information from humorous and non-humorous lectures.
A convenience sample of 76 participants was exposed to two different lectures on two different
topics: selected-response assessment and performance-based assessment. The participants were
administered interest surveys, post-lecture feedback surveys, and domain knowledge
pretests/posttests. Data analysis revealed that two of the hypotheses were supported, while one
was not.
This final chapter discusses the strengths and limitations of the study, presents
conclusions for each hypothesis, and offers recommendations for current practice and future
research.
Strengths
This study has several strengths. The first is its authenticity: the lecture topics,
selected-response assessment (SRA) and performance-based assessment (PBA), occurred as
natural parts of the curriculum in the classes used and were fundamental to the standard
curriculum in these education courses. The researcher was also able to deliver the lectures
when they appeared on the syllabus, which allowed each class to stay on its prescribed
schedule. Because the topics and timing fit into the pre-established classrooms and curriculum,
the study occurred in a natural setting, which strengthened its validity.
Second, the humorous (H) and non-humorous (NH) lectures were identical in content,
with the exception of the humor elements intentionally inserted or omitted at the appropriate
times. Similarly, the same presenter was used for all lectures. All four lectures were rigidly
scripted so that the researcher would not deviate and thereby introduce extraneous elements
into the study. Copies of the scripts and PowerPoint presentations are in the Appendices.
Third, the type of humor used was based on the literature review. Literature supports the
use of content-related cartoons, humorous quips, personal stories, and incongruous concepts to
emphasize points made in the classroom (Berk, 1998, 2002, 2005).
Fourth, the researcher is an experienced presenter with considerable experience teaching,
speaking in front of, and interacting with college students. This allowed for a natural delivery
of both the H and NH lectures. Finally, the topics presented, selected-response assessment
(SRA) and performance-based assessment (PBA), are both concepts the researcher learned as
part of his doctoral studies; therefore, he had some degree of prior knowledge and expertise in
these areas.
Limitations
This study also had a few limitations. First, because the lectures were so rigidly
scripted, they did not allow for much participant interaction, which limited the amount of
instructor immediacy that could develop. Instructor immediacy is a well-documented benefit
of humor in the classroom (Berk, 1996; Campbell, 1992; Crump, 1996; Myers & Bryant, 2004;
Roca, 2004; Zhang, 2005). The rigid scripting also resulted in a more lecture-style delivery,
which fostered a teacher-centered rather than a learner-centered approach to learning and
instruction.
Second, the sample was relatively homogeneous, composed primarily of Caucasian,
traditional-aged, female college students who were all education majors. While this narrow
sample strengthened the internal consistency of the results, it limits the ability to generalize to
the broad population of college students.
Third, this research occurred over a four-week period for each class, during which four
classes were compared on only two topics. The research was therefore a snapshot in time of
those two specific topics at those specific points. Ideally, the study would have lasted a full
semester and compared the completed classes on three dimensions: perceptions of humor,
levels of engagement, and the effects of humor on student learning, measured in that case by
final course grades.
Finally, the topics were presented in the same order in all four classrooms: SRA first,
PBA second. The participants could have been more prepared for the second lecture because
they were already familiar with the instructor's style and theme. Counterbalancing the topic
order across half of the classes would eliminate topic order as a confounding variable.
Conclusions
The purpose of this study was to empirically verify whether humor as an instructional
tool in the college classroom could increase the level of student engagement, and thereby
increase the level of material retention. This study addressed the following hypotheses, and the
conclusions from each hypothesis will now be presented.
Hypothesis #1: When college students are exposed to the experimental (humor) presentation,
they will view the instructor who uses humor as a teaching tool as significantly more humorous
than students in the control (non-humor) condition will.
This hypothesis was supported by the data. The post-lecture feedback (PLF) survey
contained items that specifically addressed the participants' perceptions of the humorous nature
of the lecture. The participants received the PLF following each humorous (H) and
non-humorous (NH) lecture. The PLF was based on a five-point Likert-type scale with the
choices strongly disagree (SD), disagree (D), unsure (U), agree (A), and strongly agree (SA).
An independent samples t-test was conducted that compared the humorous lectures by
topic and the non-humorous lectures by topic. The results demonstrated a statistically
significant difference in mean scores, with the H SRA and H PBA mean scores both being
higher than their NH counterparts. This suggests that the participants thought the H lectures
were more humorous than the NH lectures.
The independent samples t-test also revealed that the assumption of homogeneity was
maintained, meaning that neither topic is inherently more humorous than the other. There was
no statistically significant relationship with regard to the perception of humor by topic.
A paired samples t-test that combined the two topics produced a statistically significant
finding when the per-item mean scores of the H PLF were compared to those of the NH PLF;
the mean score for the H PLF was almost a full point higher than that of the NH PLF. There
was also a large difference between the H and NH lectures when the PLF scores were summed
for each topic: for both topics, the H lectures scored approximately 10 points higher than the
NH lectures.
These data suggest that the participants did, in fact, perceive the lecture, and therefore
the lecturer, as more humorous during the H lectures than during the NH lectures. This may
seem like an innocuous finding, but if the participants had not perceived the intended humorous
material as actually humorous, then the rest of this study would be irrelevant. Since humor was
the independent variable, the researcher had to ensure that humor was actually present in the
study. That is why this first hypothesis is so pivotal.
The researcher found no existing humor literature in which students were asked whether
they themselves perceived the material as humorous. In those studies, it must simply be
assumed that if the instructor thought the material was humorous, then it was; any results
would rest on the assumption that the students found the material humorous because the
instructor did. For the purposes of this study, the researcher felt it necessary not to assume the
participants' perceptions.
Hypothesis #2: When college students are exposed to the experimental (humor) presentation,
they will feel significantly more engaged in the presentation than students in the control (non-
humor) condition will.
This hypothesis was supported by the data. The PLF survey contained items that
specifically addressed the participants' level of engagement in the H lectures versus the NH
lectures.
An independent samples t-test compared the levels of participant engagement in the
humorous and non-humorous lectures. The results demonstrated a significant difference in
mean scores for SRA and only a slight difference for PBA. The H lectures for both
topics produced a higher mean score than the NH lectures; however, the difference in the
PBA mean scores was very small. This suggests the participants were inherently
more engaged in the topic of SRA than in the topic of PBA, and further analysis
confirmed that suggestion.
The independent samples t-test revealed that the assumption of homogeneity was not
maintained, meaning that the participants reported the topic of SRA to be inherently more
engaging than PBA. Data analysis showed a significant relationship between topic and
engagement. The participants did, in fact, report higher levels of engagement with the SRA
lectures than with the PBA lectures.
Why were the participants more engaged in SRA than PBA? The researcher suggests
that the participants were more familiar with selected-response assessment. SRA includes
multiple-choice, matching, true/false, and short-answer items. The participants have likely
experienced SRA more often than PBA, so they are more knowledgeable about and familiar with
that form of assessment. It is also important to note that the SRA lectures were delivered
first in all four classes. The participants may have been initially engaged because the
presenter was new to the class; by the time the second lecture, PBA, was delivered, the
presenter was no longer novel, and the initial level of engagement may have dwindled.
A paired samples t-test of the engagement questions for both topics combined produced a
statistically significant result when the H PLF per-item mean scores were compared to the
NH PLF per-item mean scores. The participants reported that the H lectures were more engaging
than the NH lectures.
These data suggest that the participants were, in fact, more engaged during the H lectures
than during the NH lectures. Although SRA was inherently and significantly more
engaging than PBA, when the mean scores on the PLF were combined, the H lectures still had a
significant impact over the NH lectures on the participants' levels of engagement. Humor
engaged the participants in the lectures at a statistically significant level.
Hypothesis #3: When college students are exposed to the experimental (humor) presentation,
they will experience a significant increase in knowledge from the pretest to the posttest.
This hypothesis was not supported by the data. The participants were given a domain
knowledge pretest and posttest for each of the two lecture topics. The tests were given one
week apart and were used to assess the amount of material retained from one week to the next.
A 2x2 within-subjects ANOVA was conducted.
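A 2x2 within-subjects ANOVA of the kind described above can be sketched with statsmodels' AnovaRM; the factors are humor condition (H vs. NH) and time (pretest vs. posttest). The scores below are fabricated so that both conditions show similar pre-to-post gains, mirroring the pattern reported here; they are not the study's data.

```python
# Hypothetical 2x2 (humor x time) repeated-measures ANOVA sketch.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
rows = []
for subj in range(30):
    base = rng.normal(10, 2)  # fabricated baseline knowledge per participant
    for humor in ("H", "NH"):
        for time, gain in (("pre", 0.0), ("post", 3.0)):  # same gain in both conditions
            rows.append({"subject": subj, "humor": humor, "time": time,
                         "score": base + gain + rng.normal(0, 1)})
df = pd.DataFrame(rows)

result = AnovaRM(df, depvar="score", subject="subject",
                 within=["humor", "time"]).fit()
# Expect a strong main effect of time (natural gains) and a weak
# humor-by-time interaction, matching the pattern described in the text.
print(result)
```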
The data show that the H lectures did not produce significantly higher posttest scores than
the NH lectures. The pretest mean scores, gathered before any lecture (H or NH) was
delivered, show that the H pretests were a full point higher than the NH pretests. Since humor
had not yet been intentionally inserted or omitted at that point, and since those scores combine
the two topics, those differences are not relevant to the results of this study. The posttests
are where the differences would have occurred, and they did not. Although the pretest scores
were a full point apart, the posttest scores were almost identical; the differences in the
posttest scores were non-significant.
When separated by topic, the differences (or lack thereof) become even more apparent.
For SRA, the gains in posttest scores were almost identical, just as they were when the topics
were combined; the H lecture had no effect on the gains in posttest scores for SRA. The
researcher believes that the lack of a significant difference between the H and NH posttest
scores for SRA may be due to the significantly high levels of engagement with that topic.
From the beginning, the participants were engaged in SRA, and the type of lecture they
received, H or NH, apparently had the same effect on their posttest scores.
The topic of PBA tells a different story. For PBA, the gains in posttest scores favored the
NH lectures: the participants who received the NH lecture scored higher on their posttests
than those who received the H lecture. That is certainly not what this researcher was
expecting to see. The researcher believes that the humor used in the H PBA lectures could
have been too distracting, a caution offered against the use of humor in the classroom
(Berk, 2002). Great care was taken to ensure that the types of humor used were appropriate
and content-related, and the participants reported that the material was indeed humorous
(see the results for hypothesis 1). The humorous PBA lecture notes and PowerPoint slides
are in Appendix P.
Viewing the within-subjects contrasts further validates that finding. The H lectures had
a non-significant effect on domain knowledge posttest scores when compared to the NH
lectures. There was a significant main effect of time, independent of the humor factor,
which suggests that any increase in domain knowledge posttest scores could be a result of
natural gains rather than the type of lecture to which the participants were exposed. The
non-significant humor-by-time interaction corroborates that humor did not play a significant
role in the gains in domain knowledge posttest scores. There is also a possibility that the
differences between the H and NH lectures were not strong enough to produce significant
differences in posttest scores.
Recommendations for Practice
Two of the three hypotheses were supported by the data: the participants thought the
humorous lectures were more humorous and more engaging than the non-humorous lectures.
This is important because of humor's ability to grab and maintain students' attention, which is
one of the main reasons to use humor as an instructional tool (Berk, 2002; Deiter, 2000). This
study confirms that humor is effective at engaging students in the learning process. Kuh (2001)
states that the quality of student learning and a student's overall educational experience are
directly impacted by the degree to which students are engaged in their studies. According to
Kuh, student engagement can be used as a proxy for overall university quality. That is how this
study should be used: to improve the quality of the overall educational experience.
Humor in the college classroom can "break the ice," relieve tension, and increase
instructor immediacy (Burbach & Babbitt, 1993; Hashem, 1994; Wanzer & Frymier, 1999).
Those three elements aid in building and strengthening the classroom community, and they can
start with humor. Humor is also known to increase teaching effectiveness (Pascarella &
Terenzini, 2005), which leads to the following recommendations.
There are two primary entities on any college or university campus: students and faculty.
Faculty cannot control students' extraneous distractions or initial interest in any subject
matter. However, faculty can control how they teach and the efforts they make to
intentionally engage students. Therefore, colleges and universities should seek ways to train
instructors in the best techniques for engaging students in the classroom. This researcher
accordingly offers the following two recommendations for practice.
1. Implement regular professional development opportunities for college instructors
that specifically train them in effective ways to engage students. Schunk et al. (2008)
cite two studies showing that teachers who are trained in effective instructional
practices are more effective at raising student achievement than untrained teachers.
Humor should be one component of that training: humor has been shown to engage
students, and attending to classroom activities is a key indicator of student success
(McKeachie, 1994).
2. Teach instructors how to infuse humor into various aspects of their courses. Humor
is an effective instructional partner. Humor in the classroom has been used to emphasize
content-relevant points, relieve tension and stress, improve morale, motivate, and develop
and strengthen relationships among students and instructors, all of which have been shown
to positively affect students' academic performance (Anderson & Arnoult, 1989; Berk, 1996;
Berk, 2002; Garner, 2006; Philaretou, 2006; Stambor, 2006; Weiss, 1993). Humor can be
applied to the syllabus, lectures, classroom discussion, e-mails, electronic threaded
discussions, assignments, and more. Effectively used, humor encourages active learning
and cooperative learning and fosters higher-order thinking skills (Berk, 2002; Gardner,
1998). Instructor skill and course structure and organization are the two most prominent
instructor behaviors that predict student learning, and they are learnable (Pascarella &
Terenzini, 2005). It is important to note, however, that course instructors should be
trained in the uses of effective versus ineffective humor (Campbell, 1992; Chaisson, 2002;
Garner, 2006).
Recommendations for Future Research
The third hypothesis was not supported by the data. The humorous lectures had no
quantifiable effect on the participants' posttest scores, which reflected the amount of material
retained across a one-week period. Although disappointing, this result is not without
precedent. Dr. Ronald Berk (1996), who frequently uses humor in his lectures and who is a
major proponent of humor as a pedagogical technique, states that some of humor's claims of
positively affecting student learning are unsubstantiated. Conversely, other studies do
support claims of humor increasing student learning and academic achievement (Crump,
1996; Garner, 2006; Pascarella & Terenzini, 2005). There is evidently much work to be done.
For that reason this researcher suggests the following recommendations for future research:
1. Future research should be conducted over an extended duration, such as the entire
semester of a class. Infusing humor into all of the classroom content, as opposed to only
two topics, should be investigated. This type of longitudinal study might reveal more
about the strength of humor to positively affect student learning and academic achievement.
2. More research needs to be conducted on how humor affects specific subpopulations
of college students. For example, two significant subpopulations are currently receiving
considerable attention across American higher education: demographics with high dropout
rates and returning veterans. How can humor be used to effectively engage these students
so that their attrition rates can be lowered? Additionally, how does humor affect different
classifications of college students? Mitchell et al. (2008) demonstrate that there are
differences in self-esteem between college freshmen and college seniors. Would different
types or levels of humor influence freshmen differently than seniors?
3. The evolutionary theory of humor was mentioned but not seriously addressed in
this research. This theory holds that humor has evolved through sexual selection: the
ability to produce and appreciate humor is valued in selecting a mate (Bressler et al.,
2006). The theory suggests there would be some very specific gender differences in the
implementation, appreciation, and effectiveness of humor. There is very little research on
the interaction of instructor and student gender as it relates to humor as a pedagogical
instrument. Results from this type of study could be used by instructors who are
considering their audience and deciding when and how much humor to use.
4. Finally, the impact of topic interest on domain knowledge test scores should be
examined. Interest surveys were administered as part of this research, but no substantive
analysis was performed with them. Multivariate analysis with interest as a covariate could
further isolate humor (independent of interest) and its effect in the college classroom.
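The covariate analysis proposed above could be sketched as an ANCOVA-style regression: posttest score modeled on humor condition with interest as a covariate. The model and data below are fabricated for illustration only; they do not represent the study's measures.

```python
# Hypothetical ANCOVA sketch: estimate the humor condition's effect on
# posttest scores while controlling for topic interest.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 80
interest = rng.normal(3.5, 0.7, n)  # fabricated interest-survey scores
humor = rng.integers(0, 2, n)       # 1 = humorous lecture, 0 = non-humorous
posttest = 8 + 1.5 * interest + 0.2 * humor + rng.normal(0, 1, n)

df = pd.DataFrame({"posttest": posttest, "humor": humor, "interest": interest})
model = smf.ols("posttest ~ C(humor) + interest", data=df).fit()
# The C(humor) coefficient estimates humor's effect net of interest.
print(model.summary())
```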
Summary
This study is intended to contribute to the body of research on the pedagogical effects
of humor. It has shown that when college students find the instructor humorous at a
significant level, they are significantly engaged in the material and the class. The study has
thus strengthened the research behind the engaging effects of humor and highlighted the
necessary role of student engagement in the learning process. Unfortunately, it failed to
show a positive link between humorous lectures and gains in material retention as
demonstrated on the domain knowledge posttests.
There is further research to be done on the role of humor as a positively influential
pedagogical tool. Its powers of engagement are well documented, including in this study.
However, further investigations are needed before humor's effects on student learning can be
quantitatively verified.
References
Abel, M.H. (1996). Self-esteem: Moderator or mediator between perceived stress and
expectancy of success. Psychological Reports, 79, 635-641.
Anderson, C.A., and Arnoult, L.H. (1989). An examination of perceived control, humor,
irrational beliefs, and positive stress as moderators of the relation between negative stress and
health. Basic and Applied Social Psychology, 10(2), 101-117.
Beeland, W.D. (2002). Student engagement, visual learning and technology: Can
interactive whiteboards help? Annual Conference of the Association of Information Technology
for Teaching Education, Trinity College, Dublin. Retrieved from the World Wide Web:
http://chiron.valdosta.edu/are/Artmanscrpt/vol1no1/beeland_am.pdf.
Bennett, H.J. (2003). Humor in medicine. Southern Medical Journal, 96(12), 1257-1261.
Berk, R.A. (1996). Student ratings of 10 strategies for using humor in college teaching.
Journal on Excellence in College Teaching, 7(3), 71-92.
Berk, R.A. (1998). Professors are from Mars, students are from Snickers. Madison, WI:
Mendota Press.
Berk, R.A. (2002). Humor as an instructional defibrillator: Evidence-based techniques in
teaching and assessment. Sterling, VA: Stylus Publishing, LLC.
Berk, R.A. (2005). Survey of 12 strategies to measure teaching effectiveness.
International Journal of Teaching and Learning in Higher Education, 17(1), 48-62.
Bressler, E.R., Martin, R.A., and Balshine, S. (2006). Production and appreciation of
humor as sexually selected traits. Evolution and Human Behavior, 27, 121-130.
Burbach, H.J. and Babbitt, C.E. (1993). An exploration of the social functions of humor
among college students in wheelchairs. Journal of Rehabilitation, Jan.-March 1993, 6-9.
Campbell, C.E. (1992). Doin' time in college: An ethnographic study of power and
motivation in a large lecture class. Paper presented at Annual Meeting of the Speech
Communication Association (78th, Chicago, IL, October 29 ? November 1, 1992).
Caron, J.E. (2002). From ethology to aesthetics: Evolution as a theoretical paradigm for
research on laughter, humor, and other comic phenomena. Humor 15(3), 245-281.
Check, J.F. (1986). Positive traits of the effective teacher - negative traits of the
ineffective one. Education, 106(3), 326-334.
Chaisson, P.E. (2002). Humor in the second language classroom: It's not a laughing
matter. Retrieved from the World Wide Web: http://www.caslt.org/research/humour.htm.
Cline, T.W., & Kellaris, J.J. (2007). The influence of humor strength and humor-message
relatedness on ad memorability: A dual process model. The Journal of Advertising, 36(1), 55-67.
Cornett, C.E. (1986). Learning through laughter: Humor in the classroom. Bloomington,
IN: Phi Delta Kappa Educational Foundation.
Crump, C.A. (1996). Teacher immediacy: What students consider to be effective teacher
behaviors. Retrieved from the World Wide Web:
http://eric.ed.gov/ERICDocs/data/ericdocs2/content_storage_01/0000000b/80/24/2e/dc.pdf.
Deckers, L. & Kizer P. (1975). Humor and the incongruity hypothesis. The Journal of
Psychology, 90, 215-218.
Deiter, R. (2000). The use of humor as a teaching tool in the college classroom. NACTA
Journal, 20-28.
Edwards, C. M., & Gibboney, E. R. (1992, February). The power of humor in the college
classroom. Paper presented at the annual meeting of the Western States Communication
Association, Boise, ID.
Forbes, S., Ross, M., Salisbury-Glennon, J., & Strom, P. (Compilation Eds.). (2006).
Assessment, development, learning and motivation for children and adolescents. New York:
Pearson Custom Publishing.
Gardner, H. (1998). Reflections on multiple intelligences: Myth and messages. In
Woolfolk, A. Readings in Educational Psychology, (pp. 61-67). Boston, MA: Allyn and Bacon.
Garner, R.L. (2003). Which came first, the chicken or the egg? A foul metaphor for
teaching. Radical Pedagogy, 5(2). Retrieved from the World Wide Web:
http://radicalpedagogy.icaap.org/content/issue5_2/04_garner.html.
Garner, R.L. (2005). Humor, analogy, and metaphor: H.A.M. it up in teaching. Radical
Pedagogy 6(2). Retrieved from the World Wide Web:
http://radicalpedagogy.icaap.org/content/issue6_2/garner.html.
Garner, R.L. (2006). Humor in pedagogy: How ha-ha can lead to aha! College Teaching,
54(1), 177-180.
Gravetter, F.J. & Wallnau, L.B. (2004). Statistics for the behavioral sciences (6th ed.).
Belmont, CA: Wadsworth/Thomson Learning.
Gredler, M.E. (2001). Learning and instruction: Theory into practice. (4th ed.). Upper
Saddle River, NJ: Merrill Prentice-Hall.
Hashem, M.E. (1994). Play and humor in the college classroom: Using play as a teaching
technique in interpersonal. Annual Meeting of the Central States Communication Association,
Oklahoma City, OK. Retrieved from the World Wide Web:
http://eric.ed.gov/ERICDocs/data/ericdocs2/content_storage_01/0000000b/80/27/25/67.pdf.
Hellman, S.V. (2006, October). Online humor: Oxymoron or strategic teaching tool.
Paper presented at the Midwest Research-to-Practice Conference in Adult, Continuing, and
Community Education, University of Missouri-St. Louis, MO.
Higher Education Research Institute. (2005). The American freshman: National norms for
fall 2005. Los Angeles: Higher Education Research Institute, UCLA.
Hill, D.J. (1988). Humor in the classroom: A handbook for teachers and other
entertainers. Springfield, IL: Charles C. Thomas Publisher.
Hurren, B.L. (2006). The effects of principals' humor on teachers' job satisfaction.
Educational Studies, 32(4), 373-385.
James, N. (2003). Vocational nursing students' perception of the use of humor in the
classroom. Master's thesis.
Jonas, P.M. (2004). Secrets of connecting leadership & learning with humor. Lanham,
MD: Scarecrow Education.
Kher, N., Molstad, S., & Donahue, R. (1999). Using humor in the college classroom to
enhance teaching effectiveness in "dread courses." College Student Journal, 33(3), 400-407.
Krause, K. (2005a). Understanding and promoting student engagement in university
learning communities. Paper presented at "Deconstructing the 21st Century Undergraduate
Student," James Cook University Symposium 2005, Sharing Scholarship in Learning and
Teaching: Engaging Students, JCU. Retrieved from the World Wide Web:
http://www.cshe.unimelb.edu.au/pdfs/Stud_eng.pdf
Kuh, G.D. (2001). The national survey of student engagement: Conceptual framework
and overview of psychometric properties. Bloomington, IN: Indiana University Center for
Postsecondary Research and Planning.
LaFave, L., Haddad, J., & Maesen, W.A. (1976). Superiority, enhanced self-esteem, and
perceived incongruity humor theory. In Chapman, A.J. & Foot, H.C. (Eds.), Humor and laughter:
Theory, research and applications. (pp. 63-91).
Mahoney, D.L. (2000). Is laughter the best medicine or any medicine at all? Eye on Psi
Chi, 4(3), 18-21.
McKeachie, W.J. (1994). Teaching tips: Strategies, research, and theory for college and
university teachers (9th edition). Lexington, MA: DC Health and Company.
Mitchell, K., Smith, S., & Simpson, J. (2008). Self-esteem and class standing in liberal
arts undergraduate college students. Retrieved from the World Wide Web:
http://www.kon.org/urc/v7/mitchel.html.
Murray, J.P. and Murray, J.I. (1992). How do I lecture thee? College Teaching, 40(3),
109-114.
Myers, S.A., & Bryant, L.E. (2004). College students' perceptions of how instructors
convey credibility. Qualitative Research Reports in Communication, 5, 22-27.
Nemko, M. (2008, May 2). America's most overrated product: The bachelor's degree. The
Chronicle of Higher Education, 54(34), B17.
Nicewonder, C. (2001). Humor in the mathematics classroom? But seriously. Retrieved
from the World Wide Web: www.umkn.edu/cad/nade/nadedocs/94conpap/cncpap94.
Pascarella, E.T., & Terenzini, P.T. (2005). How college affects students: A third decade
of research (2nd ed.). San Francisco, CA: Jossey-Bass.
Philaretou, A.G. (2006). Learning and laughing about gender and sexuality through
humor: The Woody Allen case. The Journal of Men's Studies, 14(2), 133-144.
Polimeni, J. and Reiss, J.P. (2006). The first joke: Exploring the evolutionary origins of
humor. Evolutionary Psychology, 4, 347-366.
Powers, T. (2005). Engaging students with humor. Association for Psychological
Science Observer. December 2005, 18(12). Retrieved from the World Wide Web:
http://www.psychologicalscience.org/observer/getarticle.cfm?id=1904
Renninger, K.A. & Hidi, S. (2002). Student interest and achievement: Developmental
issues raised by a case study. In Wigfield, A. & Eccles, J.S. (Eds.), Development of achievement
motivation (pp. 173-195). San Diego, CA: Academic Press.
Rocca, K.A. (2004). College student attendance: Impact of instructor immediacy and
verbal aggression. Communication Education, 53(2), 185-195.
Rothbart, M.K. (1976). Incongruity, problem-solving and laughter. In Chapman, A.J. &
Foot, H.C. (Eds.), Humor and laughter: Theory, research and applications. (pp. 37-54).
Rudolph, F. (1990). The American college & university: A history. Athens, GA:
University of Georgia Press.
Schunk, D. H., Pintrich, P. R., & Meece, J. L. (2008). Motivation in education: Theory,
research, and applications (3rd ed.). Upper Saddle River, NJ: Pearson Education.
Shade, R. (1996). License to laugh: Humor in the classroom. Englewood, CO: Teacher
Idea Press.
Shannon, D.M. & Davenport M.A. (2001). Using SPSS to solve statistical problems: A
self-instruction guide. Upper Saddle River, NJ: Merrill Prentice Hall.
Shatz, M.A. & LoSchiavo, F.M. (2006). Bringing life to online instruction with humor.
Radical Pedagogy, 8(2). Retrieved from the World Wide Web:
http://radicalpedagogy.icaap.org/content/issue8_2/shatz.html.
Shiyab, S. (2009, February 2). Pedagogical effect of humor on teaching. Digital Stream
Proceedings, North America.
Skinner, N.F. (2001). A course, a course, my kingdom for a course. Reflections of an
unrepentant teacher. Canadian Psychology, 42(1), 49-60.
Spindle, Debra Osborne (1989). College instructor use of humor in the classroom:
Interaction of instructor gender, type and condition of humor with instructor competence and
sociability. Ph.D. dissertation, The University of Oklahoma, United States -- Oklahoma.
Retrieved September 29, 2009, from Dissertations & Theses: A&I.(Publication No. AAT
8919991).
Stambor, Z. (2006). How laughing leads to learning. Monitor on Psychology, 37(6), 62-66.
Tomlinson, C.A. (1999). The differentiated classroom: Responding to the needs of all
learners. Alexandria, VA: ASCD.
Torok, S.E., McMorris, R.F., & Lin, W.C. (2004). Is humor an appreciated teaching tool?
Perceptions of professors? teaching styles and use of humor. College Teaching, 52(1), 14-20.
Trochim, William M. The Research Methods Knowledge Base, 2nd Edition. Internet
WWW page, at URL: (version current as of
October 20, 2006).
Umbach, P.D. & Wawrzynski, M.R. (2004). Faculty do matter: The role of college
faculty in student learning and engagement. Online Submission, Paper presented at the Annual
Forum of the Association for Institutional Research (AIR) (44th Boston, MA, May 28-June 2,
2004).
Upcraft, M.L. & Gardner, J.H. and Associates (Eds). (1989). The freshman year
experience. San Francisco, CA: Jossey-Bass.
U.S. Census Bureau. (2009, September). Supporting the 2010 census. Census on
campus: Toolkit for reaching college and university students. Retrieved June 13, 2010, from
http://2010.census.gov/partners/pdf/toolkit_Campus_Overview.pdf.
Walker, B.E. (2006). Using humor in library instruction. Reference Services Review,
34(1), 117-128.
Wanzer, M.B., & Frymier, A.B. (1999). The relationship between student perceptions of
instructor humor and students? reports of learning. Communication Education, 48, 48-62.
Weiss, M.J. (1993). Using humor in the classroom. The Journal of Imagination in
Language Learning and Teaching. Retrieved from the World Wide Web:
http://www.njcu.edu/CILL/vol11/weiss.html.
Wigfield, A. & Eccles, J.S. (Eds.), (2002). Development of achievement motivation. San
Diego, CA: Academic Press.
Zhang, Q. (2005). Immediacy, humor, power distance, and classroom communication
apprehension in Chinese college classrooms. Communication Quarterly, 53(1), 109-124.
Zhao, C. & Kuh, G. (2004). Adding value: Learning communities and student
engagement. Research in Higher Education, 45(2).
Appendix A
(NOTE: DO NOT SIGN THIS DOCUMENT UNLESS AN IRB APPROVAL STAMP
WITH CURRENT DATES HAS BEEN APPLIED TO THIS DOCUMENT.)
INFORMED CONSENT
for a Research Study entitled
Humor as an Instructional Tool in the College Classroom
You are invited to participate in a research study with the objective of demonstrating the
strength of humor as a pedagogical technique as it relates to student engagement and material
retention. The study is being conducted by James D. Mantooth, doctoral student, under the
direction of Dr. Jill Salisbury-Glennon, Advisor and Associate Professor, in the Auburn
University Department of Educational Foundations, Leadership and Technology. You were
selected as a possible participant because you are enrolled in a current section of FOUN 3100
and are age 19 or older. Students under the age of 19 will not be asked to participate in this
study.
What will be involved if you participate? If you decide to participate in this research study,
you will be asked to (1) complete an interest survey, (2) take a knowledge pretest, (3) listen to a
lecture on material pertinent to the class, (4) take a knowledge posttest, and (5) complete an
assessment that measures your perceptions of humor as an instructional tool in the college
classroom. Your total time commitment will be approximately 2 hours spent over the course of
four different class periods.
Are there any risks or discomforts? The risks associated with participating in this study are a
slight risk to breach of confidentiality. To minimize these risks, each student will be assigned a
student number whereby the researcher can connect the specific student with his/her survey. The
assigned student numbers will be erased once data collection is complete. The completed
surveys will be kept confidentially in a locked filing cabinet in the researcher?s office.
Participant's Initials ______ Page 1 of 2
Are there any benefits to yourself or others? There are no expected benefits for you as a
result of participating in this study.
Will you receive compensation for participating? No
If you change your mind about participating, you can withdraw at any time during the study.
Your participation is completely voluntary. If you choose to withdraw, your data can be
withdrawn as long as it is identifiable. Your decision about whether or not to participate or to
stop participating will not jeopardize your future relations with Auburn University or the
Department of Educational Foundations, Leadership and Technology.
Your privacy will be protected. Any information obtained in connection with this study will
remain confidential. Information obtained through your participation may be used to help
complete a dissertation study.
If you have questions about this study, please ask them now or contact James D. Mantooth at
MANTOOTH@auburn.edu, 334-329-0123, or 334-844-4788; or contact Dr. Jill Salisbury-
Glennon at SALISJI@auburn.edu or 334-844-3064. A copy of this document will be given to
you to keep.
If you have questions about your rights as a research participant, you may contact the
Auburn University Office of Human Subjects Research or the Institutional Review Board by
phone at (334) 844-5966 or by e-mail at hsubjec@auburn.edu or IRBChair@auburn.edu.
HAVING READ THE INFORMATION PROVIDED, YOU MUST DECIDE WHETHER
OR NOT YOU WISH TO PARTICIPATE IN THIS RESEARCH STUDY. YOUR
SIGNATURE INDICATES YOUR WILLINGNESS TO PARTICIPATE.
____________________________ _________________________________
Participant's signature Date Investigator obtaining consent Date
____________________________ _________________________________
Printed Name Printed Name
Appendix B
Invitation Script
Hello, everyone. My name is Jamie Mantooth and I am a doctoral student in the Educational
Psychology program. I am here today because I want to ask for your help in completing some of
my research. Next week I am going to come back into this classroom and I am going to present
a lesson on selected-response assessment.
Before I start the lesson, I am going to give you an interest survey to gauge your interest in the
topic. Then I will give a pretest to see how much you already know about the topic. Then I will
present the lesson. Immediately following the lesson, I will ask you to take a post-lecture
feedback survey. The next class period I will come back, and give you a post test on selected-
response assessment to see how much of the information you retained.
Immediately after that we will begin a new topic. I will give you an interest survey on
performance-based assessment. Then I will give a pretest to see how much you already know
about that topic. Then I will present the lesson. Immediately following the lesson, I will ask you
to take a post-lecture feedback survey. The next class period I will come back, and give you a
post test on performance-based assessment to see how much of the information you retained.
At that time I will also give you an assessment instrument that has to do with your perceptions of
humor as an instructional tool in the classroom.
For those of you keeping score, that?s four separate visits:
DATE ? Today (introduction and information)
DATE ? SRA (interest survey, pretest, lesson, post-lecture feedback)
DATE ? SRA (post test); PBA (interest survey, pretest, lesson, post-lecture feedback)
DATE ? PBA (post test); Humor Perception Instrument
A couple of important points for you to know:
• You are not required to participate in any of the assessments, and you may opt out at any time.
• None of the assessment instruments will be used in calculating your grade for this course. You will not be rewarded for taking part, nor penalized for choosing not to.
• I will be collecting some demographic information from you (classification, gender, etc.), but it will all be kept confidential, and no personal identifying information will be reported in my results.
• If anyone is under the age of 19, you will need to receive parental permission prior to participating in this study.
Are there any questions or concerns? If not I will see you next week. Thank you.
Appendix C
Selected-Response Assessment Demographic Information
Participation in this project is completely voluntary. Please answer the following questions to
the best of your ability.
"Selected-response assessment allows the students to choose from a list of pre-selected alternatives of answers. Examples include multiple choice, matching, and true/false" (Forbes et al., 2006).
1. Assigned student number: ______________________________________
2. Gender: (1) Female
(2) Male
3. Age: (1) 18-24 (3) 35-44
(2) 25-34 (4) 45 and older
4. Classification: (1) Freshman (4) Senior
(2) Sophomore (5) Graduate Student
(3) Junior
5. Ethnicity: (1) African-American (4) Hispanic/Latina/Latino
(2) Asian (5) Pacific Islander
(3) Caucasian (6) Other: __________________________
6. Major (no abbreviations): _______________________________________
7. Current overall GPA: __________________________
Appendix D
Selected-Response Assessment Interest Survey
Please indicate your response by circling your best answer. Your response should represent how
you think and feel at this point in time.
"SD" if you STRONGLY DISAGREE
"D" if you DISAGREE
"U" if you are UNDECIDED
"A" if you AGREE
"SA" if you STRONGLY AGREE
There is no right or wrong answer. Please respond to what you think or how you feel at this point in
time.
1. I have great interest in selected-response assessment. SD D U A SA
2. I have a moderate amount of interest in selected- SD D U A SA
response assessment.
3. I have little to no interest in selected-response SD D U A SA
assessment.
4. I chose to register for this class b/c I thought it would be SD D U A SA
interesting.
5. I chose to register for this class b/c it's required for my SD D U A SA
plan of study.
6. I chose to register for this class b/c I needed an elective SD D U A SA
in this area of study.
7. I choose to attend this class b/c I find the information SD D U A SA
useful for my future career.
8. I choose to attend this class b/c the material and SD D U A SA
classroom discussion are interesting.
9. I choose to attend this class b/c I don't want my grade SD D U A SA
to suffer b/c of excessive absences.
10. I usually choose not to attend this class. SD D U A SA
Appendix E
Selected Response Assessment
Domain Knowledge Test
Student Code: _______________
Locate the BEST possible answer to each of the following questions. Write your answer in the
blank provided.
1. _______ Of the five types of achievement targets, which one is most accurately assessed using
selected response assessment?
A. Knowledge
B. Reasoning
C. Performance Skills
D. Products
E. Dispositions
2. _______ Which of the following is NOT considered to be a selected response assessment test
format?
A. Multiple Choice
B. True/False
C. Matching
D. Short answer fill in
E. Essay
3. _______ Selected response assessment can be a beneficial test format because it:
A. Keeps the students engaged
B. Taps into the whole child
C. Is easy to grade
D. Produces a product
E. Covers limited amounts of material
4. _______ This is defined as using knowledge and understanding in novel ways to problem solve.
A. Knowledge
B. Reasoning
C. Performance Skills
D. Products
E. Dispositions
5. _______ The textbook suggests analysis, synthesis, comparison, classification, and inference as
five different selected response exercises to assess this achievement target.
A. Knowledge
B. Reasoning
C. Performance Skills
D. Products
E. Dispositions
6. _______ When writing selected response assessment items, which of the following is NOT one of
the recommended steps?
A. Write clearly in a sharply focused manner
B. Aim for lowest possible reading level
C. Do not give away the answer
D. Have someone proofread your questions
E. Conduct pilot study of test items
7. _______ "All of the above" is not generally a good multiple choice test question option because:
A. It is usually the correct answer
B. It can be misleading
C. It is usually the incorrect answer
D. It measures performance over product
E. All of the above
8. _______ Using selected response assessment can be problematic if:
A. Students are not proficient in English
B. You are attempting to measure reasoning
C. The grammar is simple
D. Students' reading level is too high
E. There are too many true/false items
9. _______ Selected response assessment is an exercise in clear communication; therefore, you
should:
A. Not give away the answer
B. Ask more essay questions than selected response questions
C. Not use conjunctions
D. Eliminate true/false test items
E. Write clearly in a sharply focused manner
10. _______ Which of the following is NOT a guideline for writing multiple-choice items?
A. Be sure there is only one correct or best answer
B. Ask a complete question, if you can
C. Keep the same number of response options across the test
D. Word response items briefly and grammatically correct
E. Don't repeat the same response items with each response option
11. _______ When writing matching test items, why is it recommended to have more response items
than stems?
A. To keep the matching list homogeneous
B. To keep the matching list brief and parallel
C. So the process of elimination cannot be utilized
D. So that the response items can use names, dates, and events
E. To keep the same number of response options across the test
12. _______ Which of the following IS a guideline for writing fill-in items?
A. Try to get as many answers as possible from one question
B. Have more response items than stems
C. The length of the answer line should match the length of the expected answer
D. Try to use only one blank per item
E. Present an incomplete thought asking the students to complete it via the fill in.
For the following selected response assessment items, identify the main problem with each
item. Write your answer in the blank provided.
13. _______ Tiger VII is the name of an _____:
A. Eagle C. Facebook Group
B. Tiger D. Tiger Transit system
A. There is more than one correct answer
B. The list is too homogeneous
C. The answer is given away by the grammar in the stem
D. The question is too confusing
E. Does not present a complete thought
14. _______ True or False: Auburn University has always admitted men and women.
A. The item is both true and false
B. Not enough space to write the answer
C. Does not aim for lowest possible reading level
D. Attempts to measure reasoning rather than knowledge
E. Does not present a complete thought
F. Uses absolutes
15. _______ Which item can be quickly scored?
A. Multiple Choice C. True/False
B. Matching D. Fill in the blank
A. There is more than one correct answer
B. The list is too homogeneous
C. The answer is given away
D. The question is too confusing
E. Does not present a complete thought
16. _______ Which of the following represents the warmest temperature?
A. 100 degrees Celsius C. 300 degrees Kelvin
B. 200 degrees Fahrenheit D. An oven set at medium
A. There is more than one correct answer
B. There are not enough available answers
C. The answer is given away
D. The question is too confusing
E. Does not present a complete thought
17. _______ A ____________ is a way of ____________ because of ____________.
A. This should be a matching item
B. The optional answers are limited
C. The answer is given away
D. The answer lines are inconsistent
E. Question is too vague
18. _______ In what year did the Battle of Hastings take place?
A. 1096 C. William the Conqueror
B. Europe D. It never took place
A. There is more than one correct answer
B. The list is not homogeneous
C. The answer is not obvious
D. The question is too confusing
E. Does not present a complete thought
19. _______ This is defined as a form of assessment that allows students to choose from a pre-
selected list of possible responses.
A. Personal communication C. Selected response assessment
B. Essay D. Performance Based assessment
A. There is more than one correct answer
B. The question is vague and unclear
C. Essay is not a form of assessment
D. Clues to correct answer are given in the question
E. Does not present a complete thought
20. _______ True or False: In a microwave oven, the high-voltage transformer along with a special
diode and capacitor arrangement serve to increase the typical household voltage to as high as
3000 volts.
A. The item is both true and false
B. Not enough space to write the answer
C. Does not aim for lowest possible reading level
D. Attempts to measure reasoning rather than knowledge
E. Does not present a complete thought
Appendix F
SRA Non-Humorous PowerPoint Slides and Lecture Script
Hi everyone, and thank you for having me back. Today we are going to talk about Selected-Response Assessment. But first, let's briefly review what has already been discussed in here about assessment in general.
Selected Response Assessment
• Brief review – From Chapter 14
• 5 Types of Achievement Targets (expectations reflected in teachers' classroom activities and assessments)
  • Knowledge and Understanding
  • Reasoning
  • Performance Skills
  • Products
  • Dispositions
As with any type of classroom assessment method, your goal (as the teacher) is to assess your students' achievement. If you remember from chapter 14, there are five types of achievement targets. As teachers, these are the five areas that you want to assess:
1. The first achievement target is Knowledge and Understanding – mastery of the subject matter, which includes not just knowing the material but understanding the concept.
2. The second achievement target is Reasoning – using the knowledge and understanding in novel ways to solve problems, not just regurgitating the material.
3. The third achievement target is Performance Skills – proficiency in doing something where the focus is the process, e.g., playing an instrument, giving a speech, etc. The focus is on the process.
4. The fourth achievement target is Products – creating tangible products (e.g., papers, a Homecoming float, a science fair project) that demonstrate evidence of your abilities.
5. The fifth achievement target is Dispositions – you can assess the students' development of certain types of attitudes, interests, and intentions: their dispositions.
Selected Response Assessment
• Chapter 15: Selecting Proper Assessment Methods
• See Table 3.1 on pg. 380 for greater detail
You now know these are the five target areas that you need to assess. There are four methods of assessment that, to varying degrees, can address those five types of achievement targets. Those four assessment methods are selected-response (today), essay, performance-based assessment (next week), and personal communication.
We are going to talk about how to approach selected response assessment and then how to
properly construct selected response assessment items. It is not as easy as it may appear.
Keep those five achievement targets in mind as we progress through this lesson. Knowledge &
Understanding, Reasoning, Performance Skills, Products, and Dispositions.
Selected Response Assessment
• What is it?
  • Multiple choice
  • True/false
  • Matching
  • Short answer fill-in
When we say selected-response assessment, we are really talking about the items listed here: MC, T/F, Matching, Short Answer. Selected-response assessment is exactly what it sounds like – it allows the students to choose from a list of pre-selected possible answers. Short answer may look out of place here, but b/c the answers are short (usually just a couple of words) they fit into this category of selected-response. In a little while we will talk about when it is appropriate to use SRA.
Selected Response Assessment
• Positives
  • Easy to use and grade
  • Large numbers
  • Efficient and "objective" scoring
  • Covers lots of material
So, why use SRA? There are some real advantages. It's easy to use and grade, especially if you have large numbers of students. B/c of that, SRA is often efficient b/c you can cover a broad sampling of the material, and the scoring is usually objective – it's either T or it isn't.
Selected Response Assessment
• Negatives
  • Cultural bias
  • Reading proficiency
  • Limits creativity
  • Format is constrictive
  • Chance to guess correctly
Why would you not want to use it?
There is a potential for cultural bias – if I were to use an example on a test in here that was based on the sport of ice hockey, it probably would not connect with you as much as an example based on SEC football.
Are my students proficient readers? Can they understand the questions or exercises I am asking
of them? What if English is not the first language for some of my students?
SRA limits creativity. A, B, C, D, E – not much room for creativity there. B/c of that, SRA can be seen as constrictive.
And there is always the chance a student could guess correctly, which doesn't mean they know or understand the material – it just means they're lucky.
Selected Response Assessment
• 5 Types of Achievement Targets
  • Knowledge and Understanding
  • Reasoning
  • Performance Skills
  • Products
  • Dispositions
• SRA can be used to assess both students' mastery of content knowledge and their abilities to use that knowledge to reason and solve problems.
As with any type of assessment method, your goal as the teacher is to assess your students' achievement. So, framing the conversation with those five achievement targets (knowledge and understanding, reasoning, performance skills, products, and dispositions), let's look deeper at SRA.
SRA can be used to assess both the students' mastery of content knowledge and their ability to use that knowledge to reason and solve problems. Those are the first two achievement targets, and here's what I'm talking about.
Selected Response Assessment
  4      A) 9
 ×3      B) 12
         C) 15
         D) 18
86% of them answered it correctly.
Which choice goes with:    A) 3 × 4 =
X X X X                    B) 3 + 4 =
X X X X                    C) 3 × 12 =
X X X X
55% of those same students were able to answer correctly.
This example comes straight from your textbook on pg. 397. It's an example of knowing something vs. understanding it. It's the same problem presented two different ways, the second requiring a conceptual understanding of multiplication. When asked only if they knew the answer (the first one), 86% of the students got it right. But when asked a question to gauge their level of understanding of the problem (the second one), only 55% of the students got it right. Of the five types of achievement targets, knowledge and understanding is the one most accurately assessed using selected response assessment.
Once students have knowledge and understanding, it is time to see if they can apply it to new problems/situations by reasoning.
Selected Response Assessment
• Reasoning – using knowledge and understanding in novel ways to problem solve
• To assess reasoning, we must present them with new test items they have not seen before and see if they can reason their way through the new problems
READ SLIDE (bottom part) – this can be done with SRA. READ definition of reasoning. We inherently understand this. For example, if you are trying to solve an issue with a boyfriend/girlfriend, you use your knowledge of that person, your understanding of that person, to get a better grasp of the problem. When he reacts like this, I know it means that this probably happened, so I should do this. That is using knowledge and understanding to reason through a problem.
To assess reasoning with our students we must present them with new test items they have not
seen before and see if they can reason their way through the new problems.
Selected Response Assessment
• Analysis
• Synthesis
• Comparison
• Classification
• Inference
The table on pg. 399 goes over 5 types of reasoning – 5 different ways you can ask a question using SRA that can tap into a student's capability to reason.
Analysis – critically evaluating a situation: which professor should you take next semester?
Synthesis – combining info from 2 or more sources: I know this student is normally quite talkative, but since his dad has been in Afghanistan, he has been very quiet. You are taking knowledge from two different sources and reasoning a solution.
Comparison – what is one important difference between SRA and PBA?
Classification – sorting and rating/ranking info.
Inference – drawing conclusions by applying clues to observations or hypotheses. From what your friend has told you about her mother, if your friend gets a DUI, what would the reaction of her mother be?
Selected Response Assessment
• Assessing Performance Skills
  • More challenging to assess actual skill
  • Can assess procedural knowledge and understanding of prerequisites
We've looked at knowledge & understanding and reasoning. Assessing performance skills via SRA is a bit more challenging. We cannot really use SRA to assess performance skills such as public speaking, drama, physical education, speaking a foreign language, etc. But, for example, SRA can allow us to see if our students know/understand vocabulary words in a foreign language, which should allow them to use those vocabulary words when speaking. If they don't know the Spanish word for bus, then they would be unable to ask where it is. Do they understand the steps involved in putting a speech together? We can assess the procedural knowledge and understanding of the task at hand. It's a bit of a stretch to assess performance skills with SRA, but it can be done.
Selected Response Assessment
• Assessing Products
  • Selected response items cannot help us to determine if students can create quality products
  • SRA can test students' prerequisite knowledge of the attributes of a quality product.
Assessing products with SRA is a similar challenge to assessing performance skills. Answering a T/F question does not demonstrate whether or not a student can score a goal, draw a tree, or build a model. Does the student know how to properly hold a basketball? How a tree looks from different angles? Or the specifics of a model in order to properly build it?
Like assessing performance skills, selected response items cannot help us to determine if students can create quality products. But SRA can test students' prerequisite knowledge of the attributes of a quality product.
Selected Response Assessment
• Assessing Dispositions
  • We can develop questionnaire items to tap student attitudes, values, motivational dispositions, and other affective states.
READ SLIDE. What is your favorite extracurricular activity? Does the cafeteria meet your expectations for food quality? What do you really think about the Toomer's Ten late night shuttle?
Selected Response Assessment
• Summary – while we can't reach all of the achievement targets with selected response exercises, we can tap many parts of them
• We can test student mastery of content knowledge, a variety of kinds of reasoning and problem solving, and some of the underpinnings of successful performance
• Summarized in Table 4.1, pg. 401
As related to the five achievement targets (knowledge & understanding, reasoning, performance skills, products, dispositions), SRA is not a comprehensive tool, but it can be used to tap into many aspects of those targets.
We have talked about what to assess; let's now look at how to assess. Your text identifies three basic steps.
Selected Response Assessment
The Steps in Assessment Development
1) Develop an assessment plan or blueprint that
identifies an appropriate sample of achievement.
How many and which of the achievement targets am
I targeting?
2) Identify the specific elements of knowledge,
understanding and reasoning to be assessed
3) Transform those elements into test items
Step 1: Make a plan. Assessment plans provide a meaningful target that your students can shoot for, and one that you can assess.
Step 2 is where you develop your propositions (basically, your goal statements). Once you have identified your achievement targets (step 1), devise clear sentences from the material from which you will construct your test items. For example, if you're teaching social studies, you could devise these two propositions: Three common forms of government are monarchies, dictatorships, and democracies. In democracies, the power to govern is secured by the vote of the people. From those two statements, you can start constructing your SRA test items.
Step 3, writing the test items, is where we are going to spend the rest of our time today. How you write/construct SRA test items is critical to the success of your assessment.
Selected Response Assessment
General Item Writing Guidelines
1. Write clearly in a sharply focused manner
2. Ask a question
3. Aim for the lowest possible reading level
4. Double check scoring for accuracy
5. Have a colleague read over your questions
6. Eliminate clues to the correct answer either within the
question or across questions within a test
Table 4.4 on page 418 provides a test item quality checklist.
In general terms, here are some guidelines for writing SRA test items:
1. Write clearly in a sharply focused manner – if your students don't understand the question, the question does no good.
2. Ask a question – use a complete thought.
3. Aim for the lowest possible reading level – if you mean iron-on, don't say appliqué.
4. Double check scoring for accuracy.
5. Have a colleague read over your questions – they may notice small errors that you have overlooked.
6. Eliminate clues to the correct answer either within the question or across questions within a
test
Selected Response Assessment
Guidelines for Multiple-Choice Items
1. Ask a complete question to get the item started
2. Don't repeat the same response items with each response option
3. Word response options as briefly as possible and make
them grammatically parallel
4. Vary the number of response options as appropriate
5. Be sure there is only one correct or best answer
When writing multiple choice test items:
1. Ask a complete question – b/c that puts the focus on the stem, not the response items.
2. Don't repeat the same response items with each response option.
For example: between 1950 and 1960: (a) interest rates increased, (b) interest rates decreased, (c) interest rates fluctuated greatly, (d) interest rates did not change. READ: What happened to interest rates between 1950 and 1960?
3. Word response options as briefly as possible.
4. Vary the number of response options as appropriate – every test question does not have to have the same number of responses. Be cautious with "all of the above" and "none of the above". "All of the above" is generally bad business b/c it can be misleading, and students in a hurry may miss it. "None of the above," for just the opposite reasons, can be good business b/c it forces students to think it through thoroughly, but be careful with them. Using them just for filler lessens their effectiveness.
5. Be sure there is only one correct or best answer.
Selected Response Assessment
Guidelines for True/False Items
1. Make the item entirely true or entirely false
2. Avoid absolutes: only, never, always, all, none,
etc.
1. No "idea salads" – some part of the question is true and some part of it false. What exactly are you testing? State it and move on.
2. Absolutes give away the answer, and knowledge is not really necessary.
Selected Response Assessment
Guidelines for Matching Items
1. Provide clear and concise directions for making the
match.
2. Keep the list of things to be matched short
3. Keep the list of things to be matched homogeneous
4. Keep the list of response options brief and parallel
5. Include more response items than stems and permit response items to be used more than once
1. Provide clear directions for making the match. Do they write the letter in a blank, draw a line between items – what do they do?
2. The textbook recommends no more than 10 matching items b/c it minimizes information processing and idea juggling.
3. Don't mix events with dates or names. It's confusing and can give away answers.
4. Keep the list of response options brief and parallel. Again – trying to avoid unnecessary confusion.
5. Include more response items than stems and permit response items to be used more than once. Why do this? The process of elimination is eliminated – you want mastery of material, not fortunate conclusions.
Selected Response Assessment
Guidelines for Fill-in Items
1. Ask respondents a question and provide space
for an answer
2. Try to use only one blank per item
3. Don't let the length of the line to be filled in be
a clue as to the length or nature of the correct
response.
1. Ask respondents a question and provide space for an answer – forces you to express a complete thought.
2. Try to use only one blank per item.
3. Don't let the length of the line to be filled in be a clue as to the length or nature of the correct response. Remember – you are trying to assess the levels of achievement, not deductive reasoning.
Selected Response Assessment
• Your test and test items must be able to withstand scrutiny of administrators and parents
• Following sound development practices will help ensure the quality and strength of your assessments.
Your test and test items must be able to withstand scrutiny of administrators and parents.
Following sound development practices will help ensure the quality and strength of your
assessments.
Instead of discussing these points, we are going to demonstrate them through a quiz. GIVE OUT
QUIZ AND DISCUSS
Appendix G
Explanation of SRA post-lecture activity
Appendices H and J contain the activities that followed the H and NH SRA lectures.
The purpose of this activity was to emphasize the importance of sculpting SRA test items so that
the students are being evaluated on what they know/have learned rather than on their ability to
guess correctly. The students were given the quiz and asked to locate all of the errors.
Conversation followed about the specific problems with each item, and how to be aware of them
as teachers who will be constructing SRA test items in the future.
Appendix H
Activity conducted at end of NH lecture
Instructions were to "Find all the errors."
Selected-Response Quiz
November 30, 2005
Instructions: Carefully read each True/False item. Mark the true statements with a "T"
and the false statements with an "F."
1._______ Auburn University has always admitted women.
2._______ John Travolta turned down the leading roles in "An Officer and a Gentleman,"
"Tootsie," and "Saturday Night Fever."
3._______ All cows give milk.
4._______ If a student uses a GAP, that student's transcript will still include a special notation
regarding the deleted grade, but it will not be calculated into the GPA.
Instructions: Carefully read each matching item. Each answer is used only once. Match
the best answer to the corresponding item.
5._______ In what year did the Titanic sink?
7._______ What three words are written on
the AU seal?
8._______ How many cell phones are being
used in the US?
9.______ This is the name of the ghost that
supposedly haunts the AU
Chapel.
A. Instruction, Research
& Extension
B. 250,000 ? 300,000
C. 1912
D. Sydney
Instructions: Carefully read each multiple choice item. Choose the best answer and write
the answer in the blank beside the question.
10. _______ Who wrote the Auburn Creed?
A. George Washington
B. George Petrie
C. George Miller
D. George Johnson
11. _______ This trophy, which is awarded per annum, recognizes the top collegiate football
player in the country, and has been awarded to Bo Jackson and Pat Sullivan, of Auburn
University.
A. Valedictorian
B. Stanley Cup
C. Frank Broyles Award
D. Heisman Trophy
12. _______ On Thursdays
A. Coach Chizik does his weekly radio show
B. Tiger Transit reverses its routes
C. AU Bookstore offers 30% discount to the first 30 customers
D. The Plainsman is distributed
13. _______ Tiger VI is the name of an
A. Eagle
B. Tiger
C. Facebook group
D. Concourse display table
14. _______ Which of the following are examples of programs offered by Academic Support?
A. IMPACT
B. Tiger Walk
C. Study Partners & Tutoring
D. Tea with the President
15. In the Student Activity Center you can find ________, _________________,
and _______________________________________.
Appendix I
SRA Humorous PowerPoint Slides and Lecture Script
Hi everyone, and thank you for having me back. Today we are going to talk about Selected-Response Assessment. But first, let's briefly review what has already been discussed in here about assessment in general.
Selected Response Assessment
• Brief review – From Chapter 14
• 5 Types of Achievement Targets (expectations reflected in teachers' classroom activities and assessments)
  • Knowledge and Understanding
  • Reasoning
  • Performance Skills
  • Products
  • Dispositions
As with any type of classroom assessment method, your goal (as the teacher) is to assess your students' achievement. If you remember from chapter 14, there are five types of achievement targets. As teachers, these are the five areas that you want to assess:
Knowledge and Understanding
1. The first achievement target is Knowledge and Understanding – mastery of the subject matter, which includes not just knowing the material but understanding the concept. She may have the knowledge of the basic principles of how to park a car, but she clearly does not understand the concept.
Reasoning
2. The second achievement target is Reasoning – using the knowledge and understanding in novel ways to solve problems, not just regurgitating the material. This gentleman here reasons that the best way to solve this problem is with a sledgehammer. That is certainly novel and unique – even if it is ill-advised.
Performance Skills
3. The third achievement target is Performance Skills – proficiency in doing something where the focus is the process, e.g., playing an instrument, giving a speech, secretly texting during class, etc. The focus is on the process.
Products
4. The fourth achievement target is Products – creating tangible products that demonstrate evidence of your abilities, e.g., papers, a Homecoming float, or a science fair project. BTW, this was an actual middle school science fair project that I found online. It provides step-by-step instructions for demonstrating how an electric chair works. I kid you not.
Dispositions
5. The fifth achievement target is Dispositions – you can assess the students' development of certain types of attitudes, interests, and intentions: their dispositions. If this kid were in your class, you could make a pretty good assessment about his disposition.
Selected Response Assessment
• Chapter 15: Selecting Proper Assessment Methods
• See Table 3.1 on pg. 380 for greater detail
You now know these are the five target areas that you need to assess. There are four methods of assessment that, to varying degrees, can address those five types of achievement targets. Those four assessment methods are selected-response (today), essay, performance-based assessment (next week), and personal communication.
We are going to talk about how to approach selected response assessment and then how to
properly construct selected response assessment items. It is not as easy as it may appear.
Keep those five achievement targets in mind as we progress through this lesson. Knowledge &
Understanding, Reasoning, Performance Skills, Products, and Dispositions.
Selected Response Assessment
• What is it?
  • Multiple choice
  • True/false
  • Matching
  • Short answer fill-in
When we say selected-response assessment, we are really talking about the items listed here: MC, T/F, Matching, Short Answer. Selected-response assessment is exactly what it sounds like – it allows the students to choose from a list of pre-selected possible answers. Short answer may look out of place here, but b/c the answers are short (usually just a couple of words) they fit into this category of selected-response. Normally, for heavy questions such as "What is the meaning of life?", you would not use SRA. In a little while we will talk about when it is appropriate to use SRA.
Selected Response Assessment
• Positives
• Easy to use and grade
• Large numbers
• Efficient and "objective" scoring
• Cover lots of material
So, why use SRA? There are some real advantages. It's easy to use and grade, especially if you
have large numbers of students. B/c of that, SRA is often efficient: you can cover a broad
sampling of the material, and the scoring is usually objective. In the case of T/F, it's either T or
it isn't.
Selected Response Assessment
• Negatives
• Cultural bias
• Reading proficiency
• Limits creativity
• Format is constrictive
• Chance to guess correctly
Why would you not want to use it?
There is a potential for cultural bias. If I were to use an example on a test in here that was based
on the sport of ice hockey, it probably would not connect with you as much as an example
based on SEC football.
Are my students proficient readers? Can they understand the questions or exercises I am asking
of them? What if English is not the first language for some of my students?
SRA limits creativity. A, B, C, D, E: not much room for creativity there. B/c of that, SRA can be
seen as constrictive.
And there is always the chance a student could guess correctly, which doesn't mean they know
or understand the material; it just means they're lucky. Unlike our friend here, Michael Bensan.
His professor sent him an e-mail the following day:
Dear Michael,
Every year I attempt to boost my students' final grades by giving them this relatively simple
exam consisting of 100 True/False questions from only 3 chapters of material. For the past 20
years that I have taught Intro Communications 101 at this institution, I have never once seen
someone score below a 65 on this exam. Consequently, your score of a zero is the first in
history and ultimately brought the entire class average down a whole 8 points.
There were two possible answer choices: A (True) and B (False). You chose C for all 100
questions in an obvious attempt to get lucky with at least a quarter of the answers. It's as if you
didn't look at a single question. Unfortunately, this brings your final grade in this class to
failing. See you next year!
May God have mercy on your soul.
Sincerely,
Professor William Turner
P.S. If all else fails, go with B from now on.
B is the new C
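The guessing risk discussed above can be made concrete. On a test with c equally likely options per item, blind guessing averages about 1/c of the items correct, which is why a score of zero on 100 true/false items is essentially impossible by chance. A minimal Python sketch of that expectation (the function name and simulation sizes are illustrative, not from the lecture materials):

```python
import random

def average_guessing_score(n_items=100, n_students=10_000, n_choices=2, seed=42):
    """Simulate students blindly guessing on a test with n_choices
    equally likely options per item; return the mean number correct."""
    rng = random.Random(seed)
    scores = [
        # each item is correct with probability 1/n_choices
        sum(rng.randrange(n_choices) == 0 for _ in range(n_items))
        for _ in range(n_students)
    ]
    return sum(scores) / n_students

# True/false: blind guessing averages about 50 of 100 items correct.
tf_mean = average_guessing_score(n_choices=2)
# Four-option multiple choice: guessing averages about 25 of 100.
mc_mean = average_guessing_score(n_choices=4)
```

This is one reason SRA scores need interpretation: some portion of any selected-response score can reflect luck rather than achievement.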
Selected Response Assessment
Selected Response Assessment
• 5 Types of Achievement Targets
• Knowledge and Understanding
• Reasoning
• Performance Skills
• Products
• Dispositions
• SRA can be used to assess both students' mastery of content knowledge and their abilities to use that knowledge to reason and solve problems.
As with any type of assessment method, your goal as the teacher is to assess your students'
achievement. So, framing the conversation with those five achievement targets (knowledge and
understanding, reasoning, performance skills, products, and dispositions), let's look deeper at
SRA.
SRA can be used to assess both the students' mastery of content knowledge and their ability to
use that knowledge to reason and solve problems. Those are the first two achievement targets,
and here's what I'm talking about.
Selected Response Assessment

   4        A) 9
 × 3        B) 12
            C) 15
            D) 18

86% of them answered it correctly.

Which choice goes with:     A) 3 × 4 =
X X X X                     B) 3 + 4 =
X X X X                     C) 3 × 12 =
X X X X

55% of those same students were able to answer correctly.
This example comes straight from your textbook on pg. 397. It's an example of knowing
something vs. understanding it. It's the same problem presented two different ways, the second
requiring a conceptual understanding of multiplication. When asked only if they know the
answer (the first version), 86% of the students got it right. But when asked a question to gauge
their level of understanding of the problem (the second version), only 55% of the students got it
right. Of the five types of achievement targets, knowledge and understanding is the one most
accurately assessed using selected response assessment.
Once students have knowledge and understanding, it is time to see if they can apply it to new
problems/situations by reasoning.
Selected Response Assessment
• Reasoning: using knowledge and understanding in novel ways to problem solve
• To assess reasoning, we must present them with new test items they have not seen before and see if they can reason their way through the new problems
READ SLIDE (bottom part); this can be done with SRA. READ definition of reasoning. We
inherently understand this. For example, normally my girlfriend is very patient and kind, but
she was really snippy on Sunday afternoon. Oh, that's right. She lost her favorite sunglasses at
the rodeo on Saturday. Maybe I should help her pick out a new pair. I would be using
knowledge and understanding to reason through the problem, and hopefully find a good
solution.
Selected Response Assessment
• Analysis
• Synthesis
• Comparison
• Classification
• Inference
The table on pg. 399 goes over 5 types of reasoning: 5 different ways you can ask a question
using SRA that can tap into a student's capability to reason.
Analysis: critically evaluating a situation. Is Tony Barbee the right hire for AU's men's
basketball program? Time will tell.
Synthesis: combining info from 2 or more sources. I know that my wife likes quality time
together. I know that today is her birthday. I know I should do something really nice to make up
for the fact that I am spending her birthday here with you instead of with her!
Comparison: what is one important difference between Chuck Norris and Jack Bauer?
Classification: sorting and rating/ranking info. That's what they do on any of the reality dating
shows.
Inference: from what you have seen so far on Lost, you can infer that this must have happened.
And if you can answer that, you are in a very small minority!
Selected Response Assessment
• Assessing Performance Skills
• More challenging to assess actual skill
• Can assess procedural knowledge and understanding of prerequisites
We've looked at knowledge & understanding and reasoning. Assessing performance skills via
SRA is a bit more challenging. We cannot really use SRA to assess performance skills such as
public speaking, drama, physical education, speaking a foreign language, etc. But, for example,
SRA can allow us to see if our students know/understand vocabulary words in a foreign
language, which should allow them to use those vocabulary words when speaking. If they
don't know the Spanish word for bar, then they would be unable to ask where it is. (El bar.) Do
they understand the steps involved in putting a speech together? We can assess the procedural
knowledge and understanding of the task at hand. It's a bit of a stretch to assess performance
skills with SRA, but it can be done.
Selected Response Assessment
• Assessing Products
• Selected response items cannot help us to determine if students can create quality products
• SRA can test students' prerequisite knowledge of the attributes of a quality product.
Assessing products with SRA is a similar challenge to assessing performance skills. Answering
a T/F question does not demonstrate whether or not a student can score a goal, draw a tree, or
build a model. Does the student know how to properly hold a basketball? How a tree looks from
different angles? Or the specifics of a model in order to properly build it?
Like assessing performance skills, selected response items cannot help us to determine if
students can create quality products. But SRA can test students' prerequisite knowledge of the
attributes of a quality product.
Selected Response Assessment
• Assessing Dispositions
• We can develop questionnaire items to tap student attitudes, values, motivational dispositions, and other affective states.
READ SLIDE. What is your favorite extracurricular activity? Does the cafeteria meet your
expectations for food quality? What do you really think about the combination of Nike shorts
and Ugg boots? I'm just going to let that one float out there…
Selected Response Assessment
• Summary: while we can't reach all of the achievement targets with selected response exercises, we can tap many parts of them
• We can test student mastery of content knowledge, a variety of kinds of reasoning and problem solving, and some of the underpinnings of successful performance
• Summarized in Table 4.1, pg. 401
As related to the five achievement targets (knowledge & understanding, reasoning, performance
skills, products, dispositions), SRA is not a comprehensive tool, but it can be used to tap into
many aspects of those targets.
We have talked about what to assess; let's now look at how to assess. Your text identifies three
basic steps.
Selected Response Assessment
The Steps in Assessment Development
1) Develop an assessment plan or blueprint that
identifies an appropriate sample of achievement.
How many and which of the achievement targets am
I targeting?
Step 1: Make a plan. An assessment plan provides a meaningful target that your students can
shoot for, and one that you can assess. Properly planning the assessment instrument is very
important. Here's an example of failing to plan properly…
The Steps in Assessment Development
2) Identify the specific elements of knowledge,
understanding and reasoning to be assessed
3) Transform those elements into test items
Selected Response Assessment
Step 2 is where you develop your propositions (basically, your goal statements). Once you have
identified your plan for achievement targets (step 1), devise clear sentences from the material
from which you will construct your test items. For example, if you're teaching social studies,
you could devise these two propositions: "Three common forms of government are monarchies,
dictatorships, and democracies." "In democracies, the power to govern is secured by the vote of
the people." From those two statements, you can start constructing your SRA test items.
Step 3, writing the test items, is where we are going to spend the rest of our time today. How
you write/construct SRA test items is critical to the success of your assessment.
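The blueprint from step 1 is easy to picture as a small table: rows are the propositions from step 2, columns are the achievement targets being sampled, and each cell is an item count. A hypothetical Python sketch (the topics, counts, and function names are invented for illustration, not taken from the text):

```python
# A test blueprint: each proposition maps achievement targets to item counts.
blueprint = {
    "three common forms of government": {"knowledge": 3, "reasoning": 1},
    "how democracies secure the power to govern": {"knowledge": 2, "reasoning": 2},
}

def total_items(plan):
    """Total number of test items the blueprint calls for."""
    return sum(n for targets in plan.values() for n in targets.values())

def items_per_target(plan):
    """How many items address each achievement target across the whole test."""
    totals = {}
    for targets in plan.values():
        for target, n in targets.items():
            totals[target] = totals.get(target, 0) + n
    return totals
```

With this plan, total_items(blueprint) is 8 and items_per_target(blueprint) shows 5 knowledge items and 3 reasoning items, which answers the slide's question of how many and which achievement targets are being targeted.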
Selected Response Assessment
General Item Writing Guidelines
1. Write clearly in a sharply focused manner
2. Ask a question
3. Aim for the lowest possible reading level
4. Double check scoring for accuracy
5. Have a colleague read over your questions
6. Eliminate clues to the correct answer either within the
question or across questions within a test
Table 4.4 on page 418 provides a test item quality checklist.
In general terms, here are some guidelines for writing SRA test items:
1. Write clearly in a sharply focused manner: if your students don't understand the question, the
question does no good.
2. Ask a question: use a complete thought.
3. Aim for the lowest possible reading level: if you mean iron-on, don't say appliqué.
4. Double check scoring for accuracy.
5. Have a colleague read over your questions: they may notice small errors that you have
overlooked.
6. Eliminate clues to the correct answer either within the question or across questions within a
test.
Selected Response Assessment
Selected Response Assessment
Guidelines for Multiple-Choice Items
1. Ask a complete question to get the item started
2. Don't repeat the same words within each response option
3. Word response options as briefly as possible and make them grammatically parallel
4. Vary the number of response options as appropriate
5. Be sure there is only one correct or best answer
When writing multiple choice test items:
Ask a complete question, b/c that puts the focus on the stem, not the response options.
Don't repeat the same words within each response option.
For example: Between 1950 and 1960, (a) interest rates increased, (b) interest rates decreased,
(c) interest rates fluctuated greatly, (d) interest rates did not change. READ: "What happened to
interest rates between 1950 and 1960?"
Word response options as briefly as possible.
Vary the number of response options as appropriate; every test question does not have to have
the same number of responses. Be cautious with "all of the above" and "none of the above."
"All of the above" is generally bad business b/c it can be misleading, and students in a hurry may
miss it. "None of the above," for just the opposite reasons, can be good business b/c it forces
students to think it through thoroughly, but be careful with it. Using these options just for filler
lessens their effectiveness.
Be sure there is only one correct or best answer.
Selected Response Assessment
This is a confusing MC question, not one I would suggest that you use.
Selected Response Assessment
Guidelines for True/False Items
1. Make the item entirely true or entirely false
2. Avoid absolutes: only, never, always, all, none,
etc.
No "idea salads," where some part of the question is true and some part of it is false. What
exactly are you testing? State it and move on.
Absolutes give away the answer, and knowledge is not really necessary.
Selected Response Assessment
Guidelines for Matching Items
1. Provide clear and concise directions for making the
match.
2. Keep the list of things to be matched short
3. Keep the list of things to be matched homogeneous
4. Keep the list of response options brief and parallel
5. Include more response options than stems and permit
options to be used more than once
Provide clear and concise directions for making the match. Do they write the letter in a
blank, draw a line between items? What exactly do they do?
The textbook recommends no more than 10 matching items b/c that minimizes information
processing and idea juggling.
Don't mix events with dates or names. It's confusing and can give away answers.
Keep the list of response options brief and parallel. Again, we are trying to avoid unnecessary
confusion.
Include more response options than stems and permit options to be used more than once. Why
do this? The process of elimination is eliminated: you want mastery of material, not
fortunate conclusions.
Selected Response Assessment
Guidelines for Fill-in Items
1. Ask respondents a question and provide space
for an answer
2. Try to use only one blank per item
3. Don't let the length of the line to be filled in be
a clue as to the length or nature of the correct
response.
Ask respondents a question and provide space for an answer; this forces you to express a
complete thought.
Try to use only one blank per item.
Don't let the length of the line to be filled in be a clue as to the length or nature of the
correct response. Remember, you are trying to assess levels of achievement, not
deductive reasoning.
Selected Response Assessment
• Your test and test items must be able to withstand scrutiny of administrators and parents
• Following sound development practices will help ensure the quality and strength of your assessments.
Your test and test items must be able to withstand scrutiny of administrators and parents. Don't
underestimate the scrutiny of parents. Now, I am a relatively nice guy and I look cute and
cuddly at PTA meetings. But if I think you are not being fair to my precious little one, I will
come at you like a spider monkey. I will look for ways to criticize your classroom, your
test, your grading system, anything I can to show that you are being unfair to my little girl.
And remember: I'm the nice one.
Following sound development practices will help ensure the quality and strength of your
assessments and allow them to withstand scrutiny.
Instead of discussing these points, we are going to demonstrate them through a quiz. GIVE OUT
QUIZ AND DISCUSS.
Appendix J
Activity conducted at end of H lecture
Instructions were to "Find all the errors"
Selected-Response Quiz
November 30, 2005
Instructions: Carefully read each True/False item. Mark the true statements with a "T"
and the false statements with an "F."
1._______ Auburn University has always admitted women.
2._______ John Travolta turned down the leading roles in "An Officer and a Gentleman,"
"Tootsie," and "Saturday Night Fever."
3._______ All of us have eaten a spider in our sleep.
4._______ If a student uses a GAP, that student's transcript will still include a special notation
regarding the deleted grade, but it will not be calculated into the GPA.
Instructions: Carefully read each matching item. Each answer is used only once. Match
the best answer to the corresponding item.
5._______ In what year was the toothbrush
invented?
7._______ What three words are written on
the AU seal?
8._______ This many Americans are injured
by toilets every year.
9.______ This is the name of the ghost that
supposedly haunts the AU
Chapel.
A. Instruction, Research
& Extension
B. 40,000
C. 1498
D. Sydney
Instructions: Carefully read each multiple choice item. Choose the best answer and write
the answer in the blank beside the question.
10. _______ Who wrote the Auburn Creed?
A. George Washington
B. George Petrie
C. George Costanza
D. George Jefferson
11. _______ This trophy, which is awarded per annum, recognizes the top collegiate football
player in the country, and has been awarded to Bo Jackson and Pat Sullivan, of Auburn
University.
A. Sexy in Shoulder Pads Plaque
B. Cutest Cleats Certificate
C. Not-Your-Average Jock Ribbon
D. Heisman Trophy
12. _______ On Thursdays
A. Coach Chizik does his weekly radio show
B. Tiger Transit reverses its routes
C. AU Bookstore offers 30% discount to the first 30 customers
D. The Plainsman comes out
13. _______ Tiger VI is the name of an
A. Eagle
B. Tiger
C. Facebook group
D. Concourse display table
14. _______ Which of the following are examples of programs offered by Academic Support?
A. IMPACT
B. Tiger Walk
C. Study Partners & Tutoring
D. Tea with the President
16. In the Student Activity Center you can find ________, _________________,
and _______________________________________.
Appendix K
Post-Lecture Feedback
Student Code: _______________
Based on the lecture that you just heard, please indicate your response by circling your best answer. Your
response should represent what you think and feel at this point in time.
STRONGLY DISAGREE    DISAGREE    UNDECIDED    AGREE    STRONGLY AGREE
"SD"    "D"    "U"    "A"    "SA"
There is no right or wrong answer. Please respond to what you think or how you feel at this point in time.
1. I thought the lecture was humorous. SD D U A SA
2. I did not think the lecture was humorous. SD D U A SA
3. I feel that the use of humor helped me stay SD D U A SA
more actively involved in the lecture.
4. I was unable to focus when the instructor used SD D U A SA
cartoons and funny examples.
5. I felt encouraged to share my thoughts and ideas. SD D U A SA
6. I felt comfortable and relaxed in the classroom. SD D U A SA
7. It is not appropriate to use humor for this SD D U A SA
material.
8. The lecture grabbed my attention. SD D U A SA
9. The lecture maintained my attention for its SD D U A SA
duration.
10. I am more interested in this topic now than SD D U A SA
before the lecture began.
11. The lecture failed to engage me. SD D U A SA
12. The use of humor made the instructor seem SD D U A SA
unprofessional.
13. The classroom is not the place for humor. SD D U A SA
14. Instructor's use of humor made him seem more SD D U A SA
approachable.
15. Instructor's use of humor discouraged me from SD D U A SA
participating in the discussion.
16. What about the lecture generated interest for you?
17. What suggestions do you have to improve the lecture?
Appendix L
Performance-Based Assessment Demographic Information
Participation in this project is completely voluntary. Please answer the following questions to
the best of your ability.
"Performance assessments involve students in activities that require them to actually
demonstrate performance of certain skills or to create products that meet certain standards of
quality" (Forbes et al., 2006, p. 427).
1. Assigned student number: ______________________________________
2. Gender: (1) Female
(2) Male
3. Age: (1) 18-24 (3) 35-44
(2) 25-34 (4) 45 and older
4. Classification: (1) Freshman (4) Senior
(2) Sophomore (5) Graduate Student
(3) Junior
5. Ethnicity: (1) African-American (4) Hispanic/Latina/Latino
(2) Asian (5) Pacific Islander
(3) Caucasian (6) Other: __________________________
6. Major (complete name - no abbreviations):
_______________________________________
7. Current overall GPA: __________________________
Appendix M
Performance-Based Assessment Interest Survey
Please indicate your response by circling your best answer. Your response should represent how
you think and feel at this point in time.
"SD" if you STRONGLY DISAGREE
"D" if you DISAGREE
"U" if you are UNDECIDED
"A" if you AGREE
"SA" if you STRONGLY AGREE
There is no right or wrong answer. Please respond to what you think or how you feel at this point in
time.
1. I have great interest in performance-based assessment. SD D U A SA
2. I have a moderate amount of interest in performance- SD D U A SA
based assessment.
3. I have little to no interest in performance-based SD D U A SA
assessment.
4. I chose to register for this class b/c I thought it would SD D U A SA
be interesting.
5. I chose to register for this class b/c it's required for my SD D U A SA
plan of study.
6. I chose to register for this class b/c I needed an elective SD D U A SA
in this area of study.
7. I choose to attend this class b/c I find the information SD D U A SA
useful for my future career.
8. I choose to attend this class b/c the material and SD D U A SA
classroom discussion are interesting.
9. I choose to attend this class b/c I don't want my grade SD D U A SA
to suffer b/c of excessive absences.
10. I usually choose not to attend this class. SD D U A SA
Appendix N
Performance Based Assessment
Domain Knowledge Test
Student Code: _______________
Locate the BEST possible answer to each of the following questions. Write your answer in the
blank provided.
1. _______ Performance assessment can be used to effectively assess four of the five
achievement targets. Which achievement target should NOT be assessed using
performance assessment?
A. Knowledge
B. Reasoning
C. Performance Skills
D. Products
E. Dispositions
2. _______ Which of the following is an example of a situation in which performance
assessment would be an appropriate assessment?
A. Classroom management
B. Matching test
C. Giving a speech
D. Short answer fill in
E. Writing an essay
3. _______ Performance assessment can be a beneficial test format because it:
A. Isolates behavior realm over cognitive realm
B. Taps into the whole child
C. Isolates affective realm over cognitive realm
D. Is objective
E. Covers limited amounts of material
4. _______ This is defined as "when performance criteria are being applied consistently
when two raters evaluate the same piece of work using the same criteria and, without
conversing about it, draw the same conclusion about level of proficiency."
A. Intra-rater agreement
B. Rater-friendly validity
C. Homogeneous conclusion
D. Product-rater reliability
E. Inter-rater agreement
5. _______ All of the following are negative aspects of performance assessment, EXCEPT
_________.
A. Process is involved and complicated
B. Process is subject to public scrutiny
C. Time is needed to assess
D. Process is subjective
E. Process is objective
6. _______ This is the main reason it is necessary to establish sound performance criteria
and apply them consistently.
A. The assurance of performance over selected-response assessment
B. To account for various reading levels
C. To strengthen the validity in the absence of a pilot study
D. The rater can become a potential source of bias
E. To overcome a small sample size
7. _______ Which of the following is NOT a positive attribute of a good rubric?
A. Fair
B. Specifies the important content
C. Clear and understandable
D. Measures each achievement target
E. Practical and easy to use
8. _______ When developing performance assessments, there are two critical questions that
must be asked. The first one is "what type of performance are we assessing?" What is
the second critical question?
A. How long should this take to assess?
B. What does good performance look like?
C. Is my sample significant?
D. Why are we assessing this?
E. Are we asking the right questions?
9. _______ One thing to consider when implementing performance assessment is:
A. Whether students have equal access to resources
B. Making sure knowledge and understanding are assessed
C. If the rubric is color-coded
D. If you have obtained the necessary permission
E. That there are no extra costs associated with it
10. _______ Which of the following is NOT part of the established set of rules of evidence
for performance assessment?
A. Reflect a clear target
B. Rely on a proper method
C. Assess performance over product
D. Sample the target appropriately
E. Control for key sources of bias
11. _______ This is the main difference between performance assessment and selected-
response assessment.
A. Selected-response is not subject to public scrutiny
B. The number of achievement targets each one assesses
C. There is no main difference; it is the teacher's preference
D. Students are required to demonstrate performance in performance assessment
E. A rubric is not necessary for selected response assessment
12. _______ What is one method of addressing the issue of potential bias in the rater
(teacher)?
A. There is no solid way of addressing this issue
B. Choose selected response over performance assessment
C. Implement a fluid product target
D. Construct various rubrics for different types of students
E. Implement another rater
For the following performance based assessment items, identify the achievement target that is
being measured. Write your answer in the blank provided.
13. _______ Chemistry students are given unidentified materials to identify and you (the
teacher) observe how they set up the apparatus and conduct the study.
A. Knowledge
B. Reasoning
C. Performance Skills
D. Products
E. Dispositions
14. _______ A student auditions for a part in the school play.
A. Knowledge
B. Reasoning
C. Performance Skills
D. Products
E. Dispositions
15. _______ A student demonstrates her proficiency in speaking a foreign language.
A. Knowledge
B. Reasoning
C. Performance Skills
D. Products
E. Dispositions
16. _______ You are asked to judge the 8th grade science fair.
A. Knowledge
B. Reasoning
C. Performance Skills
D. Products
E. Dispositions
17. _______ Students are asked why they chose to jump rope instead of
playing kickball.
A. Knowledge
B. Reasoning
C. Performance Skills
D. Products
E. Dispositions
18. _______ To gauge whether or not a student understands the math word problem, you ask
her to explain how she arrived at her answer.
A. Knowledge
B. Reasoning
C. Performance Skills
D. Products
E. Dispositions
19. _______ The International Federation of Competitive Eating hosts a burrito eating
contest.
A. Knowledge
B. Reasoning
C. Performance Skills
D. Products
E. Dispositions
20. _______ Students are asked their preference of ice cream, brownies, or taffy.
A. Knowledge
B. Reasoning
C. Performance Skills
D. Products
E. Dispositions
Appendix O
PBA Non-Humorous PowerPoint Slides and Lecture Script
Hello again. Today we are going to talk about Performance-Based Assessment. In order to do
that, we must first understand what that term means.
• Activities that require students to actually demonstrate performance of certain skills or to create products that meet certain standards of quality
• We directly observe and judge performance while it happens
Performance Assessment
If students have to demonstrate a skill, then it's PBA. Examples would include CPR training,
speeches, speed/agility tests, talent competitions, etc.
• Considerations when using:
• All students have equal access to resources
• Only when there's time to conduct it
• When there is an active, hands-on way to engage the students
Performance Assessment
When using PBA, there are a few things to consider. All students must have equal access to the
resources; the necessary materials may be at home or at school. But if they don't all have equal
access, it's an unfair standard. It's not whether one student is smarter than another; it's
whether you, the teacher, have given an assignment where all of your students have the same
access to the resources to successfully complete the performance assignment.
Another thing to consider is that you, the teacher, must have the time to devote to this method of
assessment, b/c it's a labor-intensive method. The more performance tasks you want to measure,
the longer it will take, and the less feasible this method becomes. So make sure you have the
time for PBA before starting it.
And finally, consider the nature of the PBA you want to use. This is a powerful assessment and
application method; be sure that you can meet your goals by having an active, hands-on way to
engage the students.
Performance Assessment
• Positives
• Genuine evaluation
• Students engaged
• Can tap into the whole child (CAP Model)
• Public Relations: can show stuff
There are real benefits to PBA. To evaluate achievement in its truest form, you go to where it's
being done and observe. It keeps the students engaged: not only the ones performing, but likely
the ones watching, too. I believe that is why many of these reality talent-type contests are so
popular.
PBA also taps into the whole child by engaging the cognitive, affective, and physical sides of the
student. Cognitive (have to think about their performance), affective (how do they react to being
judged), and physical (the actual performance).
And it's good PR, especially for the parents of young children. Parents love to see and hear that
their kids are performing well in school.
Performance Assessment
? Negatives
? Process is involved and sometimes
complicated
? Time is needed to assess
? Public Scrutiny
? Subjective
Why would you not want to use PBA? It's more complicated than a multiple choice or T/F test,
and more complicated for you to grade/assess. It takes more time to grade, and usually more
time for the students to perform. You may open yourself to public scrutiny. For example, a live
performance with an audience exposes your rating/grading system. Maybe you're comfortable
with that, maybe you're not.
Another potential drawback to PBA is that it is subjective, meaning that one reviewer might see
a performance one way and another reviewer might see it another way.
Performance Assessment
• Predetermined levels of proficiency
• The rater can become a potential source of bias
• Establish sound performance criteria and apply them consistently
• The gauge of consistency that we apply is that of inter-rater agreement
So when using PBA, it is very important to decide what behavior/performance you are
seeking by developing predetermined levels of proficiency.
Predetermined levels of proficiency are important. You don't need to change your method in the
middle of the assessment, and without predetermined levels of proficiency your students won't
know what is expected. B/c the nature of this assessment tool is so subjective, the student's
performance should be judged against predetermined levels of proficiency. That will also allow
your rubric to be solid right from the start.
It is possible that our own biases can emerge. Even educational professionals like ourselves
have instances where our biases might become a factor, possibly not even on a conscious
level (the gender of the student, that student's prior performance, a relationship with the
student's parents [you go to church together, you were sorority sisters with the mom]).
That's why thorough preparation and attention to detail are so important. And whatever criteria
you establish, make sure they are solid (this is done prior to the performance) and then
apply them fairly and consistently.
In addition to being consistent with your assessments, another option for relieving any evaluator
bias is to have an additional rater (besides yourself). That helps to strengthen the consistency of
your assessment method. That's called inter-rater agreement.
Performance Assessment
Inter-rater agreement occurs when performance criteria are being applied consistently when two
raters evaluate the same piece of work using the same criteria and, without conversing about it,
draw the same conclusion about level of proficiency
Inter-rater agreement occurs when performance criteria are applied consistently and two raters
evaluate the same piece of work using the same criteria and, without talking about it, draw the
same conclusion about level of proficiency. That is important b/c student proficiency should be
a function of the student's level of achievement and not a result of who is judging.
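The idea behind inter-rater agreement can be made concrete with a short calculation. The sketch below is purely illustrative (the rater scores are hypothetical, not data from this study): it computes simple percent agreement and Cohen's kappa, a widely used chance-corrected agreement statistic, for two raters scoring the same ten performances on a 1-4 rubric.

```python
# Illustrative only: the rubric levels and rater scores below are hypothetical,
# not data from this study.
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Fraction of performances on which the two raters gave the same level."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Agreement corrected for the level of agreement expected by chance."""
    n = len(rater_a)
    observed = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: the probability that both raters independently pick
    # the same level, given how often each rater used each level overall.
    expected = sum((counts_a[level] / n) * (counts_b[level] / n)
                   for level in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Two raters independently score ten performances on a 1-4 rubric.
rater_a = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
rater_b = [4, 3, 2, 2, 4, 1, 3, 3, 4, 3]
print(percent_agreement(rater_a, rater_b))       # 0.8
print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.71
```

Percent agreement alone can look inflated when raters use only a few levels, since some matches happen by chance; kappa corrects for that, which is why it is often reported alongside raw agreement.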
Performance Assessment
• When assessing student mastery of content knowledge, selected response is usually the best choice
• Performance assessment can provide a means of assessing student reasoning, which includes problem-solving proficiency
Think back to the achievement targets that we talked about last week.
Knowledge and understanding - mastery of the subject matter
Reasoning - using the knowledge and understanding to solve problems, not just regurgitating the
material
Performance Skills - proficiency in doing something where the focus is the process
Products - creating tangible products
Dispositions - development of certain types of attitudes, interests, and intentions
Knowledge and understanding are best assessed through selected-response assessment, which we
discussed last week, and not PBA.
If you want to assess reasoning with PBA, here's an example (this is from pg. 432 of your text):
give chemistry students unknown substances and ask them to identify those substances. As the
teacher you are looking for proper sequencing of activities to do that task. Are they using the
equipment properly? Following the proper steps? Are they appropriately reasoning through the
problem? That addresses the first two achievement targets of knowledge & understanding and
reasoning.
Performance Assessment
• A dependable means of evaluating skills as students are doing the things that demonstrate achievement
PBA is also adept at evaluating performance as an achievement target (no surprise there).
In performance assessment, remember that the focus is on the process. That's why in elementary
school, when you were learning math, your teacher wanted you to show your work to
demonstrate that you understood the process. Do you have the skills to perform what is being
asked of you?
Performance Assessment
• Sound performance criteria should reflect the key attributes of these products
• If we apply those criteria well, performance assessments can serve as both an efficient and effective tool
When the product is the achievement target and you're using PBA, sound performance criteria
should reflect the key attributes of the performances so you can accurately gauge the products
you want. For example, you (the teacher) have assigned an art project (product). You would
explain the steps and procedures (criteria) of building the art project all along the way. You
would assess their ability to follow those procedures (performance) as they complete the project.
Therefore, you should be able to assess the quality of the product by assessing their performance.
Make sense?
If we apply those criteria well, PBA can serve as both an efficient and effective tool to assess
products.
Performance Assessment
To the extent that we can draw inferences about positive attitudes, strong interests, motivational
dispositions, and/or academic self-concept based either on students' actions or on what we see in
the products they create, then performance assessments can assist us here as well
Using PBA to assess dispositions, the fifth achievement target, can also be done. It will require
making inferences based on students' actions/behaviors or performance. Such inferences would
be about their attitudes, interests, and motivational dispositions. A student's performance in class
may be a clue as to what is happening outside of class.
Performance Assessment
• Each performance assessment must do the following:
  • Reflect a clear target
  • Serve a clearly articulated purpose
  • Rely on proper method
  • Sample the target appropriately
  • Control for key sources of bias
Sound performance assessment requires a strict set of established rules of evidence. Each
assessment must do the following:
Reflect a clear target - define the performance you are assessing. Neither you nor your students
should be confused about the goals of the assignment.
Serve a clearly articulated purpose - know why you are assessing and what you intend to do
with the results. What is the purpose for the assignment?
Rely on proper method - performance criteria must map a clear and complete continuum, on
which each point corresponds to a different level of achievement. For example, not all of your
students will be wonderful public speakers. Some may be really great, while others may be
terrible. You have to have a grading continuum that addresses both extremes and the middle
ground, where many of your students will be.
Sample the target appropriately - get enough evidence so that you are confident in your
conclusions.
Control for key sources of bias - know yourself so you can neutralize any biases.
Performance Assessment
Steps to devising performance criteria:
1. Discover
2. Condense
3. Define
4. Apply
5. Refine
Your textbook lays out five steps to developing performance criteria, beginning on page 437.
1. Discover - gather and analyze examples of successful performances so students can
understand what it takes to be successful. This way your students can begin discovering the keys
to their own success.
2. Condense - pool the resulting ideas into a clear and concise set of key attributes, which
includes agreed-upon language. This is important b/c the students have to understand your
expectations.
3. Define - define the full range of performances along a continuum. This way they can
understand where they are now in relation to where you want them down the road. A road map
of sorts: they are here, and want to be/need to be here. Your performance criteria will help
them understand how to get there.
4. Apply - practice applying the criteria until you are consistent.
5. Refine - be open to the possibility of change. Your students may actually come up with
criteria of excellence that you have not seen before. Revising and refining performance criteria
is an on-going process.
Performance Assessment
Developing performance assessment
• Two critical decisions:
1. What type of performance are we assessing?
2. What does good performance look like?
In developing PBA instruments, you must have a plan. And this plan starts with two
foundational questions:
1. What type of performance are we assessing?
2. What does good performance look like?
Not only do we need to define outstanding or good performances, but we must also define each
level of performance leading up to the highest levels of proficiency. That's the continuum we
already talked about.
Defining these varying levels of proficiency is best done with a rubric. A rubric, which is your
model for assessment, provides a means of communicating with your students about the path to
success. As the evaluator, these are the skills I'm looking for. This is what you should be able
to perform. That is why we need a good, strong rubric.
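To see how predetermined levels of proficiency keep judging consistent, a rubric can be thought of as a fixed structure that a rater must score within. The sketch below is a hypothetical illustration (the criteria and level descriptors are invented for this example, not taken from the text): each criterion defines a closed set of levels, and a performance can only be scored against those predefined levels.

```python
# Illustrative only: the criteria and level descriptors below are invented for
# this example; they are not the rubric used in the study.

rubric = {
    "organization": {
        1: "No discernible structure",
        2: "Some structure, but hard to follow",
        3: "Clear structure with minor lapses",
        4: "Clear, logical structure throughout",
    },
    "delivery": {
        1: "Inaudible or monotone",
        2: "Uneven volume and pacing",
        3: "Mostly clear and well paced",
        4: "Confident, clear, and well paced",
    },
}

def score_performance(rubric, ratings):
    """Sum the level (1-4) assigned per criterion, rejecting any level that
    falls outside the predetermined continuum so raters cannot improvise."""
    total = 0
    for criterion, level in ratings.items():
        if level not in rubric[criterion]:
            raise ValueError(f"{level} is not a defined level for {criterion}")
        total += level
    return total

print(score_performance(rubric, {"organization": 3, "delivery": 4}))  # 7
```

Because the levels are fixed before any performance is judged, every rater using this structure is at least choosing from the same continuum, which is the precondition for inter-rater agreement.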
Performance Assessment
The attributes of good rubrics:
• Specify the important content
• Clear and understandable
• Practical and easy to use
• Fair
Here are some attributes of a good rubric.
Specify the important content - rubrics rate high in content when everything of importance is
included.
Clear and understandable - high in clarity when it's easy for everyone involved to understand
what is meant.
Practical and easy to use - high in practicality when everything is easy to understand and apply.
Fair - high in fairness when the ratings of student performance actually depict what students can
do and how well; the results are valid.
These are signs of a good rubric for a solid PBA, but what would keep your PBA from being
sound?
Performance Assessment
Barriers to sound performance assessment:
• Inadequate vision of the target
• Mismatch of the target and the method
• Unclear performance criteria
• Incorrect performance criteria
• Unfocused and/or biased tasks
• Too little time to assess
• Untrained raters
Inadequate vision of the target - not exactly sure what you are measuring, not exactly sure what
you're looking for.
Mismatch of the target and the method - you know what you want to measure, but don't have a
proper method. Like hitting a baseball with a golf club.
Unclear criteria - you may need help clarifying the focus for what you are measuring.
Incorrect criteria - what you thought you would measure is not what is there.
Unfocused tasks, biased tasks - be sure the tasks are clear and understandable.
Too little time to assess - do you have the time to use PBA? Do you need more evaluators?
Untrained raters - be sure they are capable for the job.
As you can see there are several things that could keep your rubric from being sound. But let's
stop talking about it and put it into action. I need two volunteers...
Appendix P
PBA Humorous PowerPoint Slides and Lecture Script
Hello again. Today we are going to talk about Performance-Based Assessment. In order to do
that, we must first understand what that term means.
• Activities that require students to actually demonstrate performance of certain skills or to create products that meet certain standards of quality
• We directly observe and judge performance while it happens
Performance Assessment
If students have to demonstrate a skill then it's PBA. Examples would include CPR training,
speeches, speed/agility tests, talent competitions, etc. And one of the more well-known
examples of performance assessments...
American Idol - there's a performance; it is assessed by 4 judges and the millions of people
around the country who vote. And b/c there is not a consistent agreed-upon standard, we don't
always agree on what is a successful performance, or on who should move on and who should
be sent home. The focus is not always even on the performance.
Sometimes the focus is on what the contestants are wearing, rather than the performance.
Sometimes the focus is on celebrity resemblances, rather than the performance.
Sometimes the focus is on the song itself rather than the performance. But there is a noticeable
absence of a consistent agreed-upon standard. Later we will talk more about the importance of
having a consistent standard.
• Considerations when using:
  • All students have equal access to resources
Performance Assessment
When using PBA, there are a few things to consider. All students must have equal access to the
resources - the necessary materials may be at home or school. But if they don't all have equal
access, it's an unfair standard. It's not whether one student is smarter than another (such as our
gifted student in the picture here) - it's whether you, the teacher, have given an assignment
where all of your students have the same access to the resources to successfully complete the
performance assignment.
• Considerations when using:
  • Only when there's time to conduct it
Performance Assessment
Another thing to consider is that you, the teacher, must have the time to devote to this method of
assessment b/c it's a labor-intensive method. The more performance tasks you want to measure,
the longer it will take, and this method probably becomes less feasible. So make sure you have
the time for PBA before starting it.
• Considerations when using:
  • When there is an active, hands-on way to engage the students
Performance Assessment
And finally, consider the nature of PBA you want to use. This is a powerful assessment and
application method - be sure that you can meet your goals by having an active, hands-on way to
engage the students. This is not the picture you want to see at any time as a teacher, especially
when giving an assessment.
Performance Assessment
• Positives
  • Genuine evaluation
  • Students engaged
  • Can tap into the whole child (CAP Model)
  • Public Relations - can show stuff
There are real benefits to PBA. To evaluate achievement in its truest form, you go to where it's
being done and observe. It keeps the students engaged - not only the ones performing, but likely
the ones watching, too. I believe that is why many of these reality talent-type contests are so
popular: American Idol, So You Think You Can Dance, America's Got Talent, Biggest Loser
(not a talent show, but certainly a focus on the performance to reach that end goal of losing X
number of pounds).
PBA also taps into the whole child by engaging the cognitive, affective, and physical sides of the
student. Cognitive (they have to think about their performance), affective (how do they react to
being judged?), and physical (the actual performance).
And it's good PR, especially for the parents of young children. We love to see and hear that our
kids are performing well in school. Especially if we see that they are performing better than the
other kids.
Performance Assessment
• Negatives
  • Process is involved and sometimes complicated
  • Time is needed to assess
  • Public scrutiny
  • Subjective
Why would you not want to use PBA? It's more complicated than a multiple choice or T/F test,
and more complicated for you to grade/assess. It takes more time to grade, and usually more
time for the students to perform. You may open yourself to public scrutiny. For example, a live
performance with an audience exposes your rating/grading system. Maybe you're comfortable
with that, maybe you're not.
Another potential drawback to PBA is that it is subjective - meaning that just b/c one reviewer
saw it one way, another reviewer might see it another way. For example, the following quotes
were taken from actual performance review forms in a business setting.
Performance Assessment
• "This young lady has delusions of adequacy."
• "This employee is depriving a village somewhere of an idiot."
• "The wheel is turning, but the hamster is dead."
• "I would not allow this employee to breed."
• "Got a full six-pack, but lacks the little plastic thingy to hold them all together."
READ SLIDE
Another employer may have found any of these people perfectly acceptable. Just as another
teacher/evaluator may find your students perfectly acceptable or unacceptable, whichever the
case may be. So when using PBA, it is very important to decide what the behavior/performance
is that you are seeking.
Predetermined levels of proficiency are important. You don't need to change your method in the
middle of the assessment. And without predetermined levels of proficiency your students won't
know what is expected.
Performance Assessment
• Predetermined levels of proficiency
Like this poor guy (read cartoon) - maybe he had no idea what she was grading him on! If he
had known he was being graded on cleaning the toilet, affection, and obedience, he probably
would have focused on those areas.
B/c the nature of this assessment tool is so subjective, the student's performance should be
judged by predetermined levels of proficiency. That way your rubric should be solid right from
the start.
Performance Assessment
• The rater can become a potential source of bias
It is possible that our own biases can emerge. Even educational professionals like ourselves
have instances where our biases might become a factor, and possibly not even on a conscious
level (gender of the student, that student's prior performance, relationship with students' parents
[go to church together, sorority sisters with the mom - whatever]).
Just like in our cartoon here, where the seal is getting better marks b/c the evaluator can connect
with the performance.
That's why thorough preparation and attention to detail are so important. And whatever criteria
you establish, make sure they are solid (this is done prior to the performance) and then apply
them fairly and consistently.
For example, each of these students has a performance test - but you can clearly see they are not
fair, nor is the grading likely to be consistent.
Performance Assessment
• Establish sound performance criteria and apply them consistently
• The gauge of consistency that we apply is that of inter-rater agreement
In addition to being consistent with your assessments, another option for relieving any evaluator
bias is to have an additional rater (besides yourself). That helps to strengthen the consistency of
your assessment method. That's called inter-rater agreement.
Performance Assessment
Inter-rater agreement occurs when performance criteria are being applied consistently when two
raters evaluate the same piece of work using the same criteria and, without conversing about it,
draw the same conclusion about level of proficiency
Inter-rater agreement occurs when performance criteria are applied consistently and two raters
evaluate the same piece of work using the same criteria and, without talking about it, draw the
same conclusion about level of proficiency. That is important b/c student proficiency should be
a function of the student's level of achievement and not a result of who is judging.
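Agreement can also be checked across a whole panel of raters, not just a pair. The sketch below is a hypothetical illustration (the judge labels and scores are invented): it computes average pairwise agreement, the mean fraction of performances on which each pair of raters assigned the same level, for a consistent panel and an inconsistent one.

```python
# Illustrative only: judge labels and scores below are invented examples.
from itertools import combinations

def pairwise_agreement(ratings_by_rater):
    """Average, over all rater pairs, of the fraction of identical scores."""
    pairs = list(combinations(ratings_by_rater, 2))
    total = 0.0
    for a, b in pairs:
        matches = sum(x == y
                      for x, y in zip(ratings_by_rater[a], ratings_by_rater[b]))
        total += matches / len(ratings_by_rater[a])
    return total / len(pairs)

# Four judges score five performances on a 1-4 scale.
consistent_panel = {
    "judge1": [4, 2, 3, 1, 4],
    "judge2": [4, 2, 3, 1, 4],
    "judge3": [4, 2, 3, 2, 4],
    "judge4": [4, 2, 3, 1, 4],
}
inconsistent_panel = {
    "judge1": [4, 2, 3, 1, 4],
    "judge2": [1, 4, 2, 3, 2],
    "judge3": [3, 1, 4, 2, 1],
    "judge4": [2, 3, 1, 4, 3],
}
print(round(pairwise_agreement(consistent_panel), 2))    # 0.9
print(round(pairwise_agreement(inconsistent_panel), 2))  # 0.0
```

A panel applying the same predetermined criteria lands near 1.0; a panel judging by private, unstated standards can land near 0, which is exactly the confusion described next.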
Going back to our American Idol examples, we can see how a lack of inter-rater agreement
confuses the contestants. Randy thinks the song choice was great but it was a little pitchy. Ellen
thinks they are a true artist. Kara thinks the song was "too old" for them, but their hair was
fantastic. And then Simon thinks the whole thing is purely karaoke. The contestant is confused
b/c the judges are inconsistent. Your students will be, too, unless your PBA tools are stronger.
Performance Assessment
• When assessing student mastery of content knowledge, selected response is usually the best choice
• Performance assessment can provide a means of assessing student reasoning, which includes problem-solving proficiency
Think back to the achievement targets that we talked about last week.
Knowledge and understanding - mastery of the subject matter
Reasoning - using the knowledge and understanding to solve problems, not just regurgitating the
material
Performance Skills - proficiency in doing something where the focus is the process
Products - creating tangible products
Dispositions - development of certain types of attitudes, interests, and intentions
Knowledge and understanding are best assessed through selected-response assessment, which we
discussed last week, and not PBA.
If you want to assess reasoning with PBA, here's an example (this is from pg. 432 of your text):
give chemistry students unknown substances and ask them to identify those substances. As the
teacher you are looking for proper sequencing of activities to do that task. Are they using the
equipment properly? Following the proper steps? Are they appropriately reasoning through the
problem? That addresses the first two achievement targets of knowledge & understanding and
reasoning.
Performance Assessment
• A dependable means of evaluating skills as students are doing the things that demonstrate achievement
PBA is also adept at evaluating performance as an achievement target (no surprise there; it's in
the name).
In performance assessment, remember that the focus is on the process. That's why in elementary
school, when you were learning math, your teacher wanted you to show your work to
demonstrate that you understood the process. Like our example here, when preparing to shoot
someone out of a cannon, the process of setting up the cannon properly is just as important
(probably more so) than lighting the fuse. Do you have the skills to perform what is being asked
of you?
Performance Assessment
• Sound performance criteria should reflect the key attributes of these products
• If we apply those criteria well, performance assessments can serve as both an efficient and effective tool
When the product is the achievement target and you're using PBA, sound performance criteria
should reflect the key attributes of the performances so you can accurately gauge the products
you want. For example, you (the teacher) have assigned an art project (product). You would
explain the steps and procedures (criteria) of building the art project all along the way. You
would assess their ability to follow those procedures (performance) as they complete the project.
Therefore, you should be able to assess the quality of the product by assessing their performance.
If we apply those criteria well, PBA can serve as both an efficient and effective tool to assess
products.
Performance Assessment
To the extent that we can draw inferences about positive attitudes, strong interests, motivational
dispositions, and/or academic self-concept based either on students' actions or on what we see in
the products they create, then performance assessments can assist us here as well
Using PBA to assess dispositions, the fifth achievement target, can also be done. It will require
making inferences based on students' actions/behaviors or performance. Such inferences would
be about their attitudes, interests, and motivational dispositions. A student's performance in class
may be a clue as to what is happening outside of class.
For example, I think we may be able to infer from the following set of cartoons what Calvin's
disposition is.
Performance Assessment
[four cartoon slides illustrating Calvin's disposition]
Performance Assessment
• Each performance assessment must do the following:
  • Reflect a clear target
  • Serve a clearly articulated purpose
  • Rely on proper method
  • Sample the target appropriately
  • Control for key sources of bias
Sound performance assessment requires a strict set of established rules of evidence. Each
assessment must do the following:
Reflect a clear target - define the performance you are assessing. Neither you nor your students
should be confused about the goals of the assignment.
Serve a clearly articulated purpose - know why you are assessing and what you intend to do
with the results. Is it for your own amusement, for kicks and giggles? Or is there a real purpose
for the assignment?
Rely on proper method - performance criteria must map a clear and complete continuum, each
point on which corresponds to a different level of achievement. For example, not all of your
students will be wonderful public speakers. Some may be really great, while others make your
ears bleed. You have to have a grading continuum that addresses both extremes and the middle
ground, where many of your students will be.
Sample the target appropriately - get enough evidence so that you are confident in your
conclusions.
Control for key sources of bias - know yourself so you can neutralize any biases. Or as the great
philosopher Ice Cube said, "check yourself before you wreck yourself." I know, yet another
connection between Ice Cube and education. Feel free to use that one, no charge.
Performance Assessment
Steps to devising performance criteria:
1. Discover
2. Condense
3. Define
4. Apply
5. Refine
Your textbook lays out five steps to developing performance criteria, beginning on page 437.
1. Discover - gather and analyze examples of successful performances so students can
understand what it takes to be successful. This way your students can begin discovering the keys
to their own success.
2. Condense - pool the resulting ideas into a clear and concise set of key attributes, which
includes agreed-upon language. This is important b/c the students have to understand your
expectations. Think back to the example of bad performance reviews. It is very possible that
those employees did not have an agreed-upon language with their supervisors before their jobs
started.
3. Define - define the full range of performances along a continuum. This way they can
understand where they are now in relation to where you want them down the road. A road map
of sorts: they are here, and want to be/need to be here. Your performance criteria will help
them understand how to get there.
4. Apply - practice applying the criteria until you are consistent.
5. Refine - be open to the possibility of change. Your students may actually come up with
criteria of excellence that you have not seen before. Revising and refining performance criteria
is an on-going process.
Performance Assessment
Developing performance assessment
• Two critical decisions:
1. What type of performance are we assessing?
2. What does good performance look like?
In developing PBA instruments, you must have a plan. And this plan starts with two
foundational questions:
1. What type of performance are we assessing?
2. What does good performance look like?
Performance Assessment
What type of performance are you assessing? If you were judging this dance contest, you would
HAVE to know what type of performance you were assessing b/c these are very different dances.
What does good performance look like? This could be an outstanding square dance. But if we
don't know what a good square dance looks like, it just looks sort of awkward. Not only do we
need to define outstanding or good performances, but we must also define each level of
performance leading up to the highest levels of proficiency. That's the continuum we already
talked about.
Defining these varying levels of proficiency is best done with a rubric. A rubric, which is your
model for assessment, provides a means of communicating with your students about the path to
success. As the evaluator, these are the skills I'm looking for. This is what you should be able
to perform. That is why we need a good, strong rubric.
Performance Assessment
The attributes of good rubrics:
• Specify the important content
• Clear and understandable
• Practical and easy to use
• Fair
Here are some attributes of a good rubric.
Specify the important content - rubrics rate high in content when everything of importance is
included.
Clear and understandable - high in clarity when it's easy for everyone involved to understand
what is meant.
Practical and easy to use - high in practicality when everything is easy to understand and apply.
Fair - high in fairness when the ratings of student performance actually depict what students can
do and how well; the results are valid.
These are signs of a good rubric for a solid PBA, but what would keep your PBA from being
sound?
Performance Assessment
Barriers to sound performance assessment:
• Inadequate vision of the target
• Mismatch of the target and the method
• Unclear performance criteria
• Incorrect performance criteria
• Unfocused and/or biased tasks
• Too little time to assess
• Untrained raters
Inadequate vision of the target - not exactly sure what you are measuring, not exactly sure what
you're looking for.
Mismatch of the target and the method - you know what you want to measure, but don't have a
proper method. Like hitting a baseball with a golf club: oh, you may hit the ball, but you won't
perform very well with it. It's a mismatch of the target and the method.
Unclear criteria - you may need help clarifying the focus for what you are measuring. You may
want to date a man in uniform, just not a prison uniform.
Incorrect criteria - what you thought you would measure is not what is there.
Unfocused tasks, biased tasks - be sure the tasks are clear and understandable.
Too little time to assess - do you have the time to use PBA? Do you need more evaluators?
Untrained raters - be sure they are capable for the job. Think back to the American Idol
example. They are not going to call me to be a guest judge b/c I am untrained. Although I
suppose I could be trained to say dog, pitchy, and karaoke. That may get the job done.
As you can see there are several things that could keep your rubric from being sound. But let's
stop talking about it and put it into action. I need two volunteers...
Appendix Q
Explanation of PBA post-lecture activity
The following activity (Appendices Q-T) was given at the end of both the H and NH
lectures. The students were given the job description for the kindergarten teacher, the list of
interview questions, and both rubrics. One rubric was vague and hard to apply to the interview.
The other rubric was very thorough and easy to apply. Two student volunteers were selected.
One was the interviewer and the other was the interviewee. All of the class was to evaluate the
interview - one half used the good rubric, while the other half used the vague rubric.
Conversation followed about the importance of a thorough assessment plan when using PBA.
Appendix R
Kindergarten Teacher Job Description
1. The candidate should use interactive activities to develop language and vocabulary,
introduce scientific and mathematical concepts, and improve social skills.
2. Teach basic skills such as color, shape, number and letter recognition, personal hygiene,
and social skills.
3. Should effectively communicate with parents about their children's development.
4. Should be able to monitor and report on children's development and identify those with
possible learning difficulties, consulting other professionals where appropriate.
5. Meet with other professionals to discuss individual students' needs and progress.
6. 2-3 years of experience working as a kindergarten teacher is desired
7. Meets regular and predictable attendance requirements.
8. Plans for and guides the learning process to help students achieve program objectives.
9. Maintains a classroom atmosphere conducive to learning.
10. Implements useful diagnostic and progress assessment measures.
11. Selects and uses effective instructional methods and learning materials.
12. Establishes a cooperative relationship with all assigned students.
13. Maintains open lines of communication with parents/guardians.
14. Engages in professional growth activities through an ongoing program of job-related
knowledge and skill development.
15. Works collaboratively to achieve the overall purposes of the school program.
Appendix S
Kindergarten Teacher Interview Questions
What is your philosophy of teaching?
How do you handle difficult students? Situations?
What is your behavior plan?
How do you reward achieving students?
What would your ideal schedule look like?
How do you motivate students?
What do you get out of teaching? Why do you want to?
Describe a successful lesson plan which you have implemented.
Describe ways in which you address various learning styles.
What is the most important element or attribute which you bring to our school?
What have you been doing since you graduated from high school?
How do you keep yourself aware of changes and innovations in education?
What would you do if a parent confronted you about a situation with their child? How would
you handle it?
Tell us about your teaching experience and educational background.
Why did you decide to teach?
Describe your teaching style. How do you accommodate different levels and learning styles?
What experience do you have working with special needs?
Why do you want to work in our district?
Describe your classroom management and how you keep kids actively engaged in learning.
We regularly collaborate with other teachers in our building. How do you work with others?
Name 2 strengths and 1 weakness that you have.
Appendix T
Vague Rubric
Interview Review Topics (each rated "You Rock," "Needs Some Work," or "Needs Lots of
Work," with space for Notes):
Introduction
Dress/Presentation
Communication Skills
Listening/Attentive
Energy Level
Preparation/Sense of Direction
Initiative/Creativity/Flexibility
Resume/Activities/Experience/Grades
Strengths
Weaknesses
Suggestions
Appendix U
Thorough Rubric
Each criterion below is scored from 1 to 4, with a Score recorded per criterion and a Total at the end. Checkboxes from the original form are shown as bulleted descriptors.

Appearance
1: Overall appearance is untidy; choice in clothing is inappropriate for any job interview (torn, unclean, wrinkled); poor grooming.
2: Appearance is somewhat untidy; choice in clothing is inappropriate (shirt untucked, tee-shirt, too much jewelry, etc.); grooming attempt is evident.
3: Overall neat appearance; choice in clothing is acceptable for the type of interview; well groomed (ex. shirt tucked in, jewelry blends with clothing, minimal wrinkles).
4: Overall appearance is very neat; choice in clothing is appropriate for any job interview; very well groomed (hair, make-up, clothes pressed, etc.); overall appearance is businesslike.

Greeting
1: Unacceptable behavior and language; unfriendly and not courteous.
2: Used typical behavior and language, did modify behavior to fit the interview; attempts to be courteous to all in interview setting.
3: Acceptable behavior, well mannered, professionalism lacking; courteous to all involved in interview.
4: Professional behavior and language (handshake, "hello," "thank you," eye contact, etc.); friendly and courteous to all involved in interview.

Communication
1: Presentation shows lack of interest; speaking is unclear, very difficult to understand the message of what is being said (ex. mumbling); facts about job not included; volume is inappropriate for interview (ex. spoke too loudly, too softly).
2: Showed some interest; speaking is unclear, lapses in sentence structure and grammar; knowledge of job is minimal; volume is uneven (varied).
3: Showed interest throughout the interview; speaking clearly; minimal mistakes in sentence structure and grammar; knowledge and facts are included/shared; volume is appropriate.
4: Very attentive; speaking clearly; appropriate use of sentence structure and grammar; commitment & enthusiasm for job is conveyed; volume conveys business tone.

Body Language
1: Fidgeted (ex. constant movement of hands and feet); lack of eye contact; slouching all the time.
2: Fidgeted (ex. movement of hands and feet frequently); eye contact is made intermittently; occasionally slouching.
3: Minimal fidgeting (ex. occasionally shifting); occasional loss of eye contact; brief slouching, but quickly correcting self.
4: No fidgeting; eye contact made; sitting straight in chair.

Responding to Questions
1: Inappropriate answers to questions; did not attempt to answer questions.
2: Gives inaccurate answers; attempts to answer questions.
3: Answers are acceptable and accurate; answers questions.
4: Thorough answers to questions.

Asking Questions
1: No questions asked.
2: Student asked questions that were not related to the job.
3: Asked questions relating to the desired position.
4: Asked questions relating to the desired position (evidence is shown that the applicant had researched the business or career field); asked questions related to the business or career field.

Total: _______
Appendix V
From: mantoja@auburn.edu
Sent: Tuesday, March 31, 2009 8:02 AM
To: Brenda S. Nichols
Subject: Student Perception Assessment Scale (SPAS)
Dr. Nichols:
Greetings from Auburn University! My name is James Mantooth and I am a doctoral student in
Educational Psychology. My dissertation topic is humor in the classroom, and I ran across the
thesis written by one of your students, Nora James. The thesis is titled, "Vocational Nursing
Students' Perception of the Use of Humor in the Classroom." I liked the assessment tool she
used, SPAS, which she developed, and it is something I would like to use as the basis for my
own tool. Do you know how I could get in touch with her so that I can ask permission to use it?
Of course, proper citation will be given.
Thank you very much,
James Mantooth
>>> "Brenda S. Nichols" 3/31/2009 10:09 AM >>>
Hello Nora
I received the following email about your thesis! James Mantooth is a graduate student at
Auburn University and wants more information and permission to use your instrument. Please
respond to him directly with any questions and permissions (he will need a real letter not just an
email) if you agree.
Haven't heard from you lately, hope all is well. Please contact me if you get the chance.
Brenda
>>> "James, Nora" 3/31/2009 3:05 PM >>>
Dear James, I would be honored for you to use my tool. I planned to rework the tool and use it in
my dissertation...but life has got in the way & I haven't gotten that far. Just let me know what
you need and good luck on your research.
Nora James, R.N., MSN
VN Director
Lee College
Appendix W
Adapted SPAS
Student Code: _______________
Please indicate your response by circling your BEST answer. Your response should represent
how you think and feel at this point in time.
STRONGLY DISAGREE (SD)   DISAGREE (D)   UNDECIDED (U)   AGREE (A)   STRONGLY AGREE (SA)
There is no right or wrong answer. Please respond to what you think or how you feel at this point in time.
1. I often feel bored during classroom lecture. SD D U A SA
2. I often feel intimidated about asking questions SD D U A SA
in the classroom.
3. I feel that instructors who use humor in the SD D U A SA
classroom are unprofessional.
4. When I laugh, I feel relaxed. SD D U A SA
5. I do not feel that the use of funny examples SD D U A SA
in the classroom helps me to remember.
6. When the instructor uses funny examples in SD D U A SA
the classroom, I get confused and do not
understand.
7. When I am taking an exam, I sometimes SD D U A SA
remember the funny example that the
instructor used but it does not help me to
answer the question.
8. I feel like I am being treated like a child when SD D U A SA
the instructor uses humor in the classroom.
9. I feel more distracted when the instructor uses SD D U A SA
humor in the classroom.
10. I feel more intimidated about asking questions SD D U A SA
when the instructor uses humor in the classroom.
11. When humor is used in the classroom, I feel my SD D U A SA
stress level decrease.
12. I usually understand concepts better when I SD D U A SA
am relaxed.
13. I feel more stressed when the instructor SD D U A SA
uses funny examples.
14. When the instructor uses cartoons and SD D U A SA
funny stories or examples, it helps me to focus
on the concepts better.
15. When humor is used in the classroom, I SD D U A SA
feel anxious and uncomfortable.
16. I feel that the use of humor in the classroom SD D U A SA
helps me to stay more actively involved.
17. I feel that I retain more information when SD D U A SA
humor is used in the classroom.
18. I find that when I am taking a quiz, I SD D U A SA
sometimes remember the funny example
that the instructor used and it helps me
to answer the question.
19. I find that sometimes I understand the concept SD D U A SA
better if the instructor uses a funny example
to illustrate it.
20. I find that I am more attentive in class when SD D U A SA
the instructor uses humor.
21. I am unable to focus when the instructor uses SD D U A SA
cartoons and funny examples in the classroom.
22. I feel more comfortable when the instructor uses SD D U A SA
humor in the classroom.
23. When we laugh together in class, I feel more SD D U A SA
comfortable asking questions.
24. Using funny examples does not help me to SD D U A SA
remember concepts better.
25. I like the instructor to have a sense of humor in SD D U A SA
the classroom.
26. I understand the topic better if the instructor gives SD D U A SA
the information in a serious manner.
27. I prefer taking the course with an instructor who SD D U A SA
does not laugh or smile.
28. An instructor who is amusing makes me more SD D U A SA
interested in the material.
29. Taking a course with an instructor who expresses SD D U A SA
a sense of humor in the classroom encourages me
to express my ideas.
30. Taking a course with an instructor who expresses SD D U A SA
a sense of humor in the classroom discourages me
from participating.
31. An instructor who uses humor in the classroom is SD D U A SA
more highly respected by college students.
32. An instructor who uses humor in the classroom is SD D U A SA
more likely to grab my attention.
33. An instructor who uses humor in the classroom is SD D U A SA
more likely to increase my interest in the topic.
34. An instructor who uses humor in the classroom is SD D U A SA
more likely to reduce my stress/anxiety about the topic.
35. An instructor who uses humor in the classroom is SD D U A SA
more likely to make the class enjoyable.
36. I am more likely to skip a class where I find the SD D U A SA
lectures typically boring.
37. I am more likely to remember class material if it SD D U A SA
is presented with humor.
38. An instructor's use of humor in the classroom is SD D U A SA
typically a waste of classroom time.
39. An instructor's job is to teach, not entertain. SD D U A SA
40. I would rather have an instructor try to be humorous SD D U A SA
and fail rather than not try to be humorous at all.
41. I am sometimes offended by the use of humor by an SD D U A SA
instructor.
42. I am more likely to attend class where the instructor SD D U A SA
uses humor.
43. An instructor does not have to use humor in order SD D U A SA
to be effective.
44. I am more likely to pay attention in a classroom where SD D U A SA
an instructor uses humor in the lecture.
45. I think instructors who try to use humor in the SD D U A SA
classroom are actually more humorous than
those who do not.