CANDIDATE REACTIONS TO THREE ASSESSMENT CENTER EXERCISES: A FIELD STUDY

Except where reference is made to the work of others, the work described in this dissertation is my own or was done in collaboration with my advisory committee. This dissertation does not include proprietary or classified information.

________________________________________
John Bret Becton

Certificate of Approval:

_________________________                  _________________________
William F. Giles                           Hubert S. Feild, Chair
Professor                                  Torchmark Professor
Department of Management                   Department of Management

_________________________                  _________________________
Allison Jones-Farmer                       Stephen L. McFarland
Assistant Professor                        Acting Dean
Department of Management                   Graduate School

CANDIDATE REACTIONS TO THREE ASSESSMENT CENTER EXERCISES: A FIELD STUDY

J. Bret Becton

A Dissertation Submitted to The Graduate Faculty of Auburn University In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Auburn, Alabama
December 16, 2005

CANDIDATE REACTIONS TO THREE ASSESSMENT CENTER EXERCISES: A FIELD STUDY

J. Bret Becton

Permission is granted to Auburn University to make copies of this dissertation at its discretion, upon request of individuals or institutions and at their expense. The author reserves all publication rights.

______________________________
Signature of Author

______________________________
Date

Copy sent to:
_________________________________________
Name                                       Date

VITA

John Bret Becton, son of O.M. and Carolyn (Garrigus) Becton, was born April 4, 1969, in Waynesboro, Mississippi. He graduated from Southern Choctaw High School in 1987. He attended the University of Southern Mississippi in Hattiesburg, Mississippi, and graduated with a Bachelor of Science degree in Psychology in May, 1991. After graduation, he entered the University of Tulsa in Tulsa, Oklahoma, and graduated with a Master of Arts in Industrial and Organizational Psychology in December, 1993. After working in the management consulting field for seven years, he entered the Graduate School, Auburn University, in September, 2001. He married Melanie Becton, daughter of Patricia Watson and Tommy Ivy, on June 1, 1991, and has three sons, Brooks, Lee, and Blake Becton.

DISSERTATION ABSTRACT

CANDIDATE REACTIONS TO THREE ASSESSMENT CENTER EXERCISES: A FIELD STUDY

J. Bret Becton

Doctor of Philosophy, December 16, 2005
(Master of Arts, University of Tulsa, 1993)
(Bachelor of Science, University of Southern Mississippi, 1991)

166 Typed Pages

Directed by Hubert S. Feild

Following a multidimensional procedural justice framework, the current study examined the reactions of candidates completing an assessment center for promotion within a police department. The main purpose of this research was to examine the reactions of actual job candidates to the situational interview, writing sample, and role-play exercises comprising an assessment center used to make actual promotion decisions. It was hypothesized that candidates would have different reactions to different types of assessment center exercises based on the distinct characteristics of each exercise. Additionally, this study examined the antecedents of applicant reactions to selection devices by examining the relationship of candidates' test-taking motivation, attitude towards testing, race, organizational tenure, level of target position, and evaluative history with exercise performance and selection procedural justice perceptions.
It was hypothesized that these variables interact to affect exercise performance and/or selection procedural justice perceptions.

A total of 173 candidates agreed to participate in this study after completing the situational interview, writing sample, and role-play exercises. Candidate reactions to each assessment center exercise were collected via surveys immediately after completion of the devices. Perceptions of selection procedural justice, attitude toward testing, test-taking motivation, exercise experience, and evaluative history were measured, and the reactions of candidates of different races, experience levels, and organizational levels were compared.

Analyses revealed that candidates did not differ significantly in perceptions of job-relatedness, opportunity to perform, and consistency of administration according to the type of exercise. However, candidates viewed the situational interview more positively in terms of information known compared to the writing sample. The results also indicated that evaluative history was negatively related to perceptions of opportunity to perform and attitude towards testing, and that level of target position was negatively associated with opportunity to perform and test-taking motivation but positively associated with information known.

Results also revealed that African-American and White candidates viewed the situational interview, role-play exercises, and writing sample similarly. However, African-American candidates in this sample reported more favorable perceptions of job-relatedness, opportunity to perform, and test-taking motivation in comparison with White candidates. Implications and directions for future research on reactions to testing are discussed.

ACKNOWLEDGEMENTS

The author would like to thank Drs. Hubert S. Feild, William F. Giles, and Allison Jones-Farmer for their guidance, patience, and encouragement. Thanks are also due to Dr. John Veres, Dr. Katherine Jackson, and Cindy Forehand for allowing the author to collect data in conjunction with their consulting project. Finally, the author must thank his wife, Melanie, and three sons, Brooks, Lee, and Blake, for their unwavering support and understanding.

Style manual used: American Psychological Association (APA) Style Manual.

Computer software used: Microsoft Office Word 2003 was used to compile this dissertation. SPSS 13.0 for Windows was used to analyze data for this dissertation.

TABLE OF CONTENTS

VITA
ABSTRACT
ACKNOWLEDGMENTS
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
I. Introduction
    Literature Review
        Assessment Centers: What Are They?
        Characteristics of Assessment Centers
        Correlates of Applicant Reactions to Assessment Centers
        Selection Procedural Justice: An Overview
        Procedural Justice Perceptions and Employee Selection
    Research Hypotheses
        Selection Procedural Justice Perceptions
            Job-relatedness
            Opportunity to Perform
            Consistency of Administration
            Information Known
        Antecedents of Selection Procedural Justice Perceptions
            Test-taking Motivation
            Test-taking Attitudes
            Organizational Tenure
            Evaluative History
            Level of Target Position
            Race
        Summary of Research Hypotheses
II. Method
    Measures
        Selection Procedural Justice Perceptions
        Attitude Toward Testing
        Test-taking Motivation
        Race
        Gender
        Organizational Tenure
        Evaluative History
        Level of Target Position
        Exercise Performance
    Procedure
        Assessment Center Development and Exercises
            Development
            Development of Scoring Guidelines
            Role-play Exercises
            Writing Sample Exercise
            Situational Interview Exercise
        Administration of Exercises
            Role-play Exercise
            Writing Sample Exercise
            Situational Interview Exercise
        Assessor Training
        Data Analysis
III. Results
    Relationship between Type of Assessment Center Exercise and Candidate Reactions
    Relationship among Test-taking Motivation, Exercise Performance, and Job-relatedness
    Relationship among Attitude Towards Testing, Selection Procedural Justice Perceptions, and Exercise Performance
    Interaction of Assessment Center Exercise Type and Organizational Tenure
    Relationship between Evaluative History, Level of Target Position, and Reactions to Assessment Center Exercises
    Interaction between Assessment Center Exercise Type and Race
IV. Discussion
    Relationship between Type of Assessment Center Exercise and Candidate Reactions
    Relationship among Test-taking Motivation, Exercise Performance, and Job-relatedness
    Relationship among Attitude Towards Testing, Selection Procedural Justice Perceptions, and Exercise Performance
    Interaction of Assessment Center Exercise Type and Organizational Tenure
    Relationship between Evaluative History, Level of Target Position, and Reactions to Assessment Center Exercises
    Interaction between Assessment Center Exercise Type and Race
    Implications for Research and Practice
    Limitations of Present Study and Directions for Future Research
REFERENCES
APPENDICES

LIST OF TABLES

Table 1. Summary of Study Hypotheses
Table 2. Means, Standard Deviations, Coefficient Alphas, and Intercorrelations among Study Variables
Table 3. Differences in Promotion Candidates' Reactions to Assessment Center Exercises
Table 4. Hierarchical Moderated Regression Results for Job-relatedness and Test-taking Motivation Predicting Candidate Performance for Three Assessment Center Exercises
Table 5. Hierarchical Moderated Regression Results for Attitude Towards Testing and Candidates' Reactions to Assessment Center Exercises Predicting Candidate Performance for Three Assessment Center Exercises
Table 6. Hierarchical Moderated Regression Results for Candidate Organizational Tenure and Assessment Center Exercise Type Predicting Candidates' Perceptions of Opportunity to Perform, Consistency of Administration, Information Known, and Job-relatedness
Table 7. Multiple Regression Results for Candidate Evaluative History and Level of Target Position Predicting Candidates'
    Perceptions of Job-relatedness, Opportunity to Perform, Attitude Toward Testing, Information Known, and Test-taking Motivation
Table 8. Analysis of Interaction Between Type of Assessment Center Exercise and Promotion Candidate Race for Job-relatedness, Opportunity to Perform, and Test-taking Motivation
Table 9. Means and Standard Deviations of White and African-American Candidates' Reactions to Assessment Center Exercises

LIST OF FIGURES

Figure 1. Schedule of Data Collection for Police Sergeant and Lieutenant

CHAPTER 1

INTRODUCTION

The selection and promotion of employees is perhaps the most critical function in human resources management. Selecting and properly using appropriate selection devices can have far-reaching and serious consequences for organizations. Accordingly, employee selection devices have been the subject of much research. Traditionally, the validity, fairness, utility, and legal defensibility of selection devices have received the most scrutiny in the literature. Although the validity (Schmidt, 1988; Schmitt, Gooding, Noe, & Kirsch, 1984), fairness (Reilly & Chao, 1982; Reilly & Warech, 1990), and utility (Hunter & Hunter, 1984; Schmidt & Hunter, 1998) of employee selection devices have been researched extensively, considerably less attention has been given to candidates' reactions to selection devices. This inattention is perplexing because how candidates react to selection procedures is clearly an important issue. Candidate attitudes and reactions can have an impact on important organizational outcomes such as satisfaction with the selection process, the job, and the organization (Hendrix, Robbins, Miller, & Summers, 1998), job acceptance intentions (Smither & Reilly, 1993), and/or turnover intentions (Sujak, Parker, & Grush, 1998). Furthermore, one would expect research on reactions to testing to be more plentiful, considering that more than one third of Americans have unfavorable attitudes toward employment testing (Schmit & Ryan, 1997). Several authors (Bauer, Truxillo, Sanchez, Craig, Ferrara, & Campion, 2001; Chan & Schmitt, 1997; Ryan & Ployhart, 2000; Rynes, 1993; Schmidt, 1993; Smither, Reilly, Millsap, Pearlman, & Stoffey, 1993; Schmitt & Gilliland, 1992) have emphasized the need for more research on applicant attitudes regarding employee selection methods and practices. The current study seeks to respond to this call for more research on applicant attitudes toward testing while addressing the weaknesses and limitations of the existing literature.

Importance of Applicant Reactions to Testing

An examination of applicant attitudes and reactions is important for several reasons. First, candidates' reactions to selection procedures can have an impact on the candidates' perceived attractiveness of the organization (Breaugh, 1992; Rynes, 1992). Consequently, the effects that applicant reactions have on organizational attractiveness can indirectly influence pursuit or acceptance of job offers (Smither & Reilly, 1993). Additionally, the perceptions of rejected candidates can be of significance as they can spread negative information to other potential job candidates (Herriot, 1989; Ployhart & Ryan, 1997). In other words, if candidates perceive an organization's selection procedures negatively (i.e., as unfair or biased), the organization may have difficulty attracting and retaining qualified candidates.
Research has also shown that reactions to testing can spill over into job-related attitudes and behaviors. For example, perceptions of fairness have been shown to be related to job performance (Konovsky & Cropanzano, 1991), withdrawal behaviors (i.e., intention to quit; Sujak et al., 1998), absenteeism (Schmitt, 1996), and retaliatory behaviors (Greenberg, 1990; Skarlicki & Folger, 1997). Research has shown that ensuring positive justice perceptions results in employees with higher levels of intrinsic job satisfaction and commitment, who, in turn, will have a strong desire to perform well within a group, attend work, and remain in their organization (Hendrix, Robbins, Miller, & Summers, 1998).

Second, applicant reactions may be related to both the likelihood of litigation and how successfully a selection procedure can be defended (Bible, 1990; Cascio, 1991). Candidates who view selection procedures as lacking validity or being offensive are more likely to file complaints or lawsuits on the basis of the procedures being unfair, unethical, or immoral (Anastasi, 1988, p. 144; Bible, 1990; Cascio, 1987, p. 132; Huffcutt, 1990; Seymour, 1988; Thornton, 1993). Smither and Reilly (1993) added that selection procedures with poor face validity have been the object of ridicule in litigation (Vulcan Society v. Civil Service Commission, 1973) and that this increases the possibility of such perceptions influencing court decisions against selection procedures in spite of substantial empirical validity evidence.

Third, negative reactions to selection procedures may result in reduced motivation to do well (Arvey, Strickland, Drauden, & Martin, 1990) or withdrawal from the selection process (Rynes, Bretz, & Gerhart, 1991), both of which may lower the operational validity and utility of the selection procedures (Arvey et al., 1990; Boudreau & Rynes, 1985; Murphy, 1986). Applicant reactions can have an impact on organizational outcomes such as satisfaction with the selection process, the job, and the organization, job acceptance intentions, and/or turnover intentions (Bauer, Maertz, Dolen, & Campion, 1998).

Finally, reactions to selection procedures can have an effect on the psychological well-being of candidates (Gilliland, 1993). For example, the perceived fairness of selection procedures may influence the efficacy and self-esteem of rejected candidates (Robertson & Smith, 1989). Clearly, candidates' reactions to selection procedures are of considerable importance to organizations and society as a whole.

Purpose of Present Study

Research on applicant reactions to selection procedures is critical because applicant reactions can have important, far-reaching effects. The main purpose of this research is to examine the reactions of actual job candidates competing for a promotional position by completing an actual assessment center comprised of a situational interview exercise, a writing sample exercise, and a role-play exercise. A second purpose is to examine the antecedents of applicant reactions to selection devices by studying the association of candidates' motivation to perform, attitude towards testing, perceived control over selection procedure performance, race, organizational tenure, level of position, and evaluative history with selection procedural justice perceptions and test performance. While a number of studies have focused on applicant reactions to selection devices, the current study is important because it addresses a number of weaknesses in the extant literature.
Several researchers have examined perceptions of face validity, procedural justice, attitudes about the organization, job pursuit intentions, test-taking motivation, and self-efficacy as they relate to various selection procedures, including assessment centers, cognitive ability tests, video-based tests, biodata, and drug testing (cf. Bauer et al., 2001; Breaugh, 1992; Chan, Schmitt, DeShon, Clause, & Delbridge, 1997; Rynes, 1992). Although these studies have been valuable, there are several problems with the existing research.

First, many of these studies were conducted in hypothetical or simulated hiring situations rather than actual hiring situations (e.g., Chan, Schmitt, DeShon, Clause, & Delbridge, 1997; Gilliland, 1994; Kluger & Rothstein, 1993; Ployhart & Ryan, 1998; Rynes & Connerley, 1993). In fact, actual tests were not administered in some cases; participants were simply asked to react to descriptions of different tests (Rynes & Connerley, 1993). This is problematic because the reactions of candidates in the context of a real hiring situation are likely to differ greatly from those of students or participants in a simulated hiring situation. Asking incumbents or students for their perceptions does not appear to be an adequate substitute for asking actual applicants (Ryan & Ployhart, 2000). The current study extends the literature by examining the reactions of real job candidates to actual selection tests in a competitive promotional context.

Second, reactions-to-testing research has not been highly theory driven (Bauer, Maertz, Dolen, & Campion, 1998). It has been suggested that research on reactions to testing might be extended by placing applicant reactions into the appropriate conceptual context (Borman, Hanson, & Hedge, 1997). Several theories are germane to candidate reactions to testing; organizational justice theory, in particular, provides a suitable context in which to study such reactions. The current study extends the literature by drawing upon theories of organizational justice (Gilliland, 1993) and test-taking motivation (Arvey, Strickland, Drauden, & Martin, 1990).

Third, the reactions-to-testing literature has not adequately focused on the various types of selection tests, especially structured interviews (Campion et al., 1997). Only one study could be located that examined applicant reactions to situational interviews (Rynes & Connerley, 1993), and no studies could be located that examined reactions to role-play exercises and writing samples. The current study extends the literature by examining and comparing applicant reactions to situational interviews, role-play exercises, and writing samples.

Fourth, research concerning reactions to testing has been conducted in absolute and global terms (i.e., candidates either view a test as fair or unfair). However, applicant reactions to testing are not all-or-nothing judgments. Candidates may feel that a test is fair in some ways and unfair in others (Bauer et al., 2001), but past research has not explored this proposition adequately. The current study extends the literature by examining reactions to testing using the Selection Procedural Justice Scale, which assesses multiple facets of perceived fairness (Bauer et al., 2001).

Fifth, a major concern with existing research on reactions to testing relates to how the constructs assessed are defined and operationalized (Ryan & Ployhart, 2000).
The reliability and validity of the measures of applicant perceptions used in most studies have not been sufficiently examined (Ryan & Ployhart, 2000). Recent work by Bauer et al. (2001) did much to extend research in this area by developing and validating a multidimensional scale for measuring reactions to testing (i.e., the Selection Procedural Justice Scale). The current study extends the literature by using this measure, which has established reliability and validity.

Sixth, research has not adequately examined reactions of candidates in a promotion context. Most studies of applicant reactions to tests have been conducted in an entry-level context (i.e., applicants attempting to gain entry into the organization). While test reactions of entry-level applicants are certainly important, the reactions of promotion candidates are perhaps even more critical, yet their reactions to various assessment devices have been largely neglected in the literature. Identification with an organization may be an important factor in how candidates interpret the fairness of a selection process (Brockner, Tyler, & Cooper-Schneider, 1992; Huo, Smith, Tyler, & Lind, 1996; Tyler & Degoey, 1995). Since promotion procedures deal with employees who are already part of the organization, it is likely that different mechanisms act to form reactions (Ryan & Ployhart, 2000), and several research studies have shown differences between the perceptions of incumbents and applicants, with applicants generally having more positive reactions (e.g., Arvey, Strickland, Drauden, & Martin, 1990; Brutus & Ryan, 1998). Understanding candidate reactions in promotional settings is important for two reasons: (a) the potential negative consequences in the promotional context (e.g., lower morale, decreased job performance) can be more acutely felt by the organization than those in the entry-level context (e.g., public relations, re-application), and (b) reactions of incumbents are felt throughout the organization (Truxillo & Bauer, 1999). While findings based on undergraduate student reactions may be a good proxy for entry-level applicant reactions, they would appear to be less generalizable to promotion situations. Reactions of entry-level and incumbent candidates may also differ because these individuals likely possess different information about the organization and its selection procedures (e.g., entry-level applicants have relatively little information about the job, whereas incumbent applicants have intimate job knowledge; Ryan & Ployhart, 2000).

Literature Review

Assessment Centers: What Are They?

Assessment centers are a common selection method used by organizations in both the public and private sectors. An assessment center consists of procedures for measuring knowledges, skills, and abilities in groups of individuals using a series of devices, many of which are verbal performance tests (Gatewood & Feild, 2001). Since their early roots in World War I Germany (Howard, 1974), assessment centers have been used by organizations in practically every setting, including military forces, intelligence agencies, business and industry, and governmental and educational institutions.
Assessment centers have been used for the selection of entry-level sales representatives, entry-level police officers, first-level supervisors, middle managers, executives, instructors/teachers, consultants, human relations specialists, executive secretaries, and research analysts, among many other applications. However, employee selection and promotion are only two of the many ways assessment centers can be used, given that they have evolved into tools for organizational change and multiple human resource functions (Howard, 1997). Thornton (1992) provided examples of how assessment centers have been used in organizational settings, including recruitment, selection, placement, training and development, performance appraisal, organizational development, human resource planning, promotion and transfer, and layoffs. Although assessment centers have been used for all these purposes, the vast majority of organizations using assessment centers do so for employee selection/promotion or employee training/development purposes. A survey conducted by Spychalski and Quinones (1997) indicated that, of the 215 organizations surveyed, the most popular decision-making processes that relied on data from assessment centers were promotion (60.8%), selection (54.5%), and development planning (51.2%).

Characteristics of Assessment Centers

Before discussing the research concerning reactions to assessment centers, it is important to examine what constitutes an assessment center. According to the Guidelines and Ethical Considerations for Assessment Center Operations (International Task Force on Assessment Center Guidelines, 2000), a selection process must contain 10 essential elements to be considered an assessment center. The first element is that a job analysis of relevant behaviors must be conducted to determine the dimensions, attributes, characteristics, qualities, skills, abilities, knowledges, or tasks that are necessary for effective job performance and to identify what should be evaluated by the assessment center and what kinds of simulation exercises should be used. The second element is that behavioral observations by assessors must be classified into meaningful and relevant categories, such as dimensions, attributes, characteristics, aptitudes, qualities, knowledges, skills, abilities (KSAs), or tasks. In other words, anything for which a candidate in an assessment center receives a score or evaluation must be related to a tested dimension or KSA. The third element concerns the types of exercises used. The techniques used in the assessment center must be designed to provide information for evaluating the dimensions previously determined by the job analysis; assessment center components must provide opportunities for candidates to demonstrate competency in the requisite dimensions or KSAs. The fourth element involves the use of multiple assessment techniques. In order to be considered an assessment center, multiple assessment techniques must be used. These can include tests, interviews, questionnaires, sociometric devices, and simulations. The assessment techniques are developed or selected to tap a variety of behaviors and information relevant to the predetermined dimensions. The fifth element is that the assessment techniques must include sufficient job-related simulations to allow multiple opportunities to observe candidates' behavior related to each dimension being assessed.
A simulation is an exercise or technique designed to elicit behaviors related to dimensions of performance on the job by requiring participants to respond behaviorally to situational stimuli. Examples of simulations include group exercises, role-play exercises, in-basket exercises, interview simulations, and presentation exercises. To meet the sixth requirement, multiple assessors must be used for each candidate. Seventh, assessors must receive thorough training and demonstrate performance that meets the guidelines outlined in the Guidelines and Ethical Considerations for Assessment Center Operations (2000) prior to participating in the assessment center. Thorough training is necessary for assessors to observe and record behavior, classify behavior into measured dimensions, make judgments concerning candidate performance, report their observations to other assessors, and integrate information from other assessors' observations. The eighth element concerns the recording of observations. Assessors must use some type of systematic procedure for recording behavioral observations during assessment center exercises. Techniques can include handwritten notes, behavioral checklists, or behavioral observation scales. Ninth, assessors must prepare some report or record of the observations made in each exercise in preparation for the integration discussion. The tenth element involves the integration of the data gathered in all of the exercises. Integration of behaviors must be based on a pooling of information from assessors and from techniques at a meeting among the assessors or through a statistical integration process validated in accord with professionally accepted standards. The integration of information may be accomplished by consensus or some other method of arriving at a joint decision.

Correlates of Applicant Reactions to Assessment Centers

Probably because assessment centers are most often used for selection and promotion, most research on assessment centers has focused on their use as selection devices (Thornton & Byham, 1982). In particular, research has centered on the predictive accuracy and usefulness of assessment centers, most often for the purpose of selecting and promoting employees. This research has revealed several key findings about assessment centers. First, they have predictive validity (Hough & Oswald, 2000; Rosenthal, Thornton, & Bentson, 1987; Salgado, 1999), yielding significant concurrent and predictive validities without having adverse impact against protected groups (Baron & Janman, 1996; Bobrow & Leonards, 1997). Second, although assessment centers are highly predictive of job performance, they lack construct or discriminant validity (Archambeau, 1979; Bycio, Alvares, & Hahn, 1987; Donahue, Truxillo, Cornwell, & Gerrity, 1997; Kleinmann & Koller, 1997; Lance, Newbolt, Gatewood, Foster, French, & Smith, 2000; Sackett & Dreher, 1982). Third, assessment centers have utility for use in the selection of supervisors, managers, and executives (Cascio, 1991). Fourth, assessment centers are a fair method for human resource selection and development (Shore, Tashchian, & Adams, 1997).

Research concerning applicant reactions to assessment centers has produced some interesting findings as well. It has been suggested that assessment centers can have a powerful effect on candidates. Due to assessment centers' face validity, their duration, and the range of assessment techniques used, it is difficult for candidates to rationalize away poor performance (Fletcher, 1986).
As a result, participating in an assessment center can have a tremendous impact on candidates' professional and, perhaps, personal development. A number of studies have shown that candidates generally have a favorable attitude towards assessment centers and view them as fair and valid selection methods (Dodd, 1977; Dulewicz, Fletcher, & Wood, 1983; Fletcher, 1991; Thornton, 1992). More recently, Macan, Avedon, Paese, and Smith (1994) studied applicant reactions to an assessment center and a cognitive ability test and found that applicants viewed the assessment center as more face valid than the cognitive ability test. However, as noted earlier, assessment centers can be comprised of numerous exercises, and candidates' reactions to assessment centers are likely influenced by the types and number of exercises included. Research has shown that reactions to certain types of tests are related to the other types of tests used in the process (Rosse, Miller, & Stecher, 1994), the order of test administration (Ryan et al., 1996), and how a test is used (i.e., compensatory vs. hurdle). Therefore, one must consider such factors when attempting to understand reactions to assessment centers and when comparing studies of reactions to assessment centers. This study examined reactions to three assessment center exercises.

Selection Procedural Justice: An Overview

Organizational justice theory is a framework that is often used to study applicant perceptions. The two most frequently cited aspects of organizational justice theory are procedural justice and distributive justice. From a selection perspective, procedural justice involves a series of perceived rules (i.e., consistency, job-relatedness, information known about the test, propriety of questions) that influence perceptions of process fairness, or the perceived "correctness" of the selection process (Ployhart & Ryan, 1998). Distributive justice refers to perceived rules (e.g., equity) that affect perceptions of outcome fairness (i.e., the perceived "correctness" of the selection outcome). Procedural justice is the focus of this research because organizations may have the ability to positively influence procedural justice perceptions (Bauer et al., 2001), whereas distributive justice perceptions are more or less determined by the actual outcome or result. Therefore, a review of the procedural justice literature as it relates to employee selection is provided.

Procedural Justice Perceptions and Employee Selection

Numerous researchers (Folger & Greenberg, 1985; Folger & Konovsky, 1989; Greenberg, 1987, 1990; Kanfer, 1990) have suggested that procedural justice perceptions are considerable determinants of attitudes about organizations (Smither et al., 1996). Research has shown that perceptions of procedural justice are related to organizational commitment (McFarlin & Sweeney, 1992), employee citizenship (Moorman, 1991), union satisfaction (Fryxell & Gordon, 1989), and job satisfaction and performance (Konovsky & Cropanzano, 1991). Therefore, employees who perceive the selection process as fair should be more likely to develop and retain favorable attitudes about the organization, regardless of the outcomes of the procedure (Smither et al., 1996). Smither et al. (1990) also suggested that the selection procedures an organization employs may communicate clues to candidates about how fairly the organization may deal with other employee concerns and human resources practices.
More recent research on applicant reactions to selection systems has been conducted within Gilliland's (1993) theoretical model of procedural justice in selection (Bauer et al., 2001). In summary, Gilliland's model suggests that situational and personal circumstances at least partially determine the extent to which procedural and distributive justice rules are perceived as satisfied or violated. In other words, factors such as test type, human resources policies and procedures, and the behavior of administrators affect candidates' perceptions of selection systems. Gilliland presented 10 procedural justice rules, which are theorized to improve procedural justice perceptions and, consequently, to positively influence organizational outcomes. These 10 procedural justice rules for selection procedures are categorized into three broad dimensions. Formal characteristics include job-relatedness, opportunity to perform, reconsideration opportunity, and consistency of administration. The explanation category includes feedback, information known about the selection process, and openness or honesty of test administrators. Interpersonal treatment includes interpersonal effectiveness of administrators, two-way communication between applicants and administrators, and propriety of questions (i.e., invasiveness or bias). Eight of these procedural justice factors are salient to the current study: (a) job-relatedness: using a test that candidates believe is related to the job of interest; (b) opportunity to perform: having an opportunity for candidates to perform and demonstrate the abilities needed for the job; (c) consistency of administration: ensuring that selection procedures are administered consistently and without bias; (d) information known: receiving information about the test and the testing process; (e) openness: providing honest, sincere, truthful, and open communications regarding the selection procedures, how scores will be used, and so on; (f) treatment: treating candidates with warmth and respect; (g) two-way communication: providing candidates the opportunity to ask questions or offer input during the process; and (h) propriety of questions: using test questions that avoid personal bias, invasion of privacy, and illegality. Due to the nature and timing of this research, feedback and reconsideration opportunity were not considered pertinent to the study.

Although Gilliland's (1993) model has been well received by researchers, there are relatively few empirical studies of the model's propositions. However, several recent studies have supported Gilliland's model. These studies have found that applicant reactions are associated with outcomes such as intentions to pursue employment, recommendations that others apply for employment with an organization, turnover intentions, and perceived organizational attractiveness (e.g., Bauer, Maertz, Dolen, & Campion, 1998; Cropanzano & Konovsky, 1995; Macan, Avedon, Paese, & Smith, 1994; Smither, Reilly, Millsap, Pearlman, & Stoffey, 1993; Truxillo & Bauer, 1999). However, measurement of selection procedural justice perceptions has not been consistent across these studies. To address this limitation, Bauer et al. (2001) developed a comprehensive set of items to assess Gilliland's (1993) 10 procedural justice rules. The resulting instrument, the Selection Procedural Justice Scale (SPJS), has been demonstrated to have reliability, content validity, and convergent and divergent validity (Bauer et al., 2001).
Additionally, the SPJS allows researchers to examine different aspects of perceived fairness rather than relying on global assessments of fairness, as much of the existing research has done (Bauer et al., 2001). Candidates may feel that a test is fair in some ways yet unfair in others, so it is useful to look at the different facets of fairness rather than a global fairness index. For example, candidates may feel that a written multiple-choice test is fair in terms of how the test is administered (i.e., standardized) and how candidates are treated (i.e., standardized instructions, same test questions, etc.) but may feel it is unfair because it lacks face validity or perceived job-relatedness. In order to address this weakness in the existing literature, the SPJS was used to measure fairness perceptions in this study. Bauer et al. (2001) suggested several avenues for future research using this new scale, two of which were to focus on test-taking motivation and on potential individual difference moderators (e.g., race, personality) as predicted by Gilliland's (1993) model. The present study addresses several related areas by examining the associations among test-taking motivation, attitude toward tests, race, organizational tenure, test performance, and selection procedural justice perceptions.

Research Hypotheses

As previously noted, Gilliland's model, upon which the SPJS is based, has 10 procedural justice rules or characteristics thought to contribute to selection procedural justice perceptions. While all 10 of these dimensions are important, only four procedural justice dimensions were used in the study due to the conditions and time-frame under which the current study was conducted. These are (a) job-relatedness, (b) opportunity to perform, (c) information known, and (d) consistency of administration. Each of these four dimensions is discussed, and associated hypotheses are offered, in the following section.

Selection Procedural Justice Perceptions

Job-relatedness. One popular area of concentration in the test reactions literature is perceptions of job-relatedness. Job-relatedness refers to the extent to which a test either appears to measure content relevant to the job situation or appears to be valid (Bauer, Truxillo, Sanchez, Craig, Ferrara, & Campion, 2001). Job-relatedness is important to study because it may have the greatest impact on fairness perceptions as compared to other formal characteristics of selection procedures (Gilliland, 1993). Face validity is another popular topic in the reactions-to-testing literature. Face validity is the degree to which candidates perceive the content of a selection device to be related to the content of the job (Smither et al., 1993). Clearly, there is overlap in these definitions, and a clear distinction between the concepts as they relate to test reactions is difficult to draw. While some researchers have made a distinction between perceived job-relatedness and face validity (cf. Elkins & Phillips, 2000), other researchers have recently argued that they are similar if not exactly the same concept (Chan & Schmitt, 2004). This study views face validity and perceived job-relatedness as highly similar, if not interchangeable, concepts from the candidates' perspective and draws on research using both concepts. Reactions to ability tests are substantially improved by framing items around job-related topics and increasing the items' contextual relevance (Rynes & Connerley, 1993).
One approach to increasing the contextual relevance of tests is the use of simulations. Simulations are defined as "a representation of a real-life situation which attempts to duplicate selected components of the situation along with their interrelationships in such a way that it can be manipulated by the user" (Coppard, 1976). Research has shown that selection procedures consisting of simulations of actual job behaviors such as work samples, in-baskets, and role-plays are viewed as having more face validity and are perceived more favorably than paper-and-pencil methods (Dodd, 1977; Macan et al., 1994; Schmidt et al., 1977; Smither et al., 1993; Smither & Pearlman, 1991). Additionally, candidates seem to prefer selection devices involving simulations (Macan et al., 1994; Smither et al., 1993; Steiner & Gilliland, 1993). Chan and Schmitt (1997) found that the high degree of face validity of, and positive candidate reactions to, simulations are frequently attributed to their realistic test situation and similarity to the target job. Since simulations more closely approximate the context in which required job behaviors are used, selection devices that are simulations should be viewed as more job-related than devices that are not.

When comparing the three exercises used in the current study (i.e., situational interview, role-play, and writing sample), the role-play exercise and writing sample would appear to be categorized as work samples (i.e., having a high degree of physical and psychological fidelity). Role-play exercises place candidates in realistic job situations by requiring candidates to interact with actors assuming the role of a person likely to be encountered on the job (e.g., citizen, subordinate, customer, coworker). Likewise, writing samples, as used in this study, are exercises in which candidates are required to produce a written product similar to a report or document that is part of the target job, using materials (i.e., memos, files, maps, photographs, diagrams) used on the job. While situational interview questions typically ask candidates how they would handle certain job-related situations and usually are derived from a job analysis or critical incidents (Gilliland & Steiner, 1999), their degrees of physical and psychological fidelity are comparatively lower than those of writing samples and role-play exercises, respectively. In situational interviews, the situations are typically presented by an interviewer reading the scenarios to the candidate, and the candidate responds by explaining how he or she would handle each situation. This is somewhat different from the manner in which candidates respond in role-play exercises and writing samples (i.e., explaining how they would handle issues or problems in situational interviews versus actually handling issues or problems in role-play and writing sample exercises). Therefore, the job-relatedness of role-play and writing sample exercises may be more salient than that of situational interviews.

Hypothesis 1: Candidates will perceive the role-play exercise and writing sample as being more job-related than the situational interview.

Opportunity to perform. Bauer et al. (2001) defined opportunity to perform as having sufficient opportunity to demonstrate one's qualifications (i.e., knowledges, skills, and abilities) within the testing situation. This "opportunity to perform" is somewhat similar to what the organizational justice literature calls voice. Organizational justice research has shown that voice (i.e., input into organizational decision-making processes such as employee selection) enhances employees' perceptions of the fairness of these processes (Greenberg, 1990; Lind, Kanfer, & Earley, 1990; Lind & Tyler, 1988; Tyler, 1989). Furthermore, people tend to accept decisions and their consequences if they have some degree of input into making them (Folger, Rosenfield, Grove, & Corkran, 1979). This research indicates that employees desire voice and view procedures into which they
Organizational justice research has shown that voice (i.e., input into organizational decision-making processes such as employee selection) enhances employees? perceptions of fairness of these processes (Greenberg, 1990; Lind, Kanfer, & Early, 1990; Lind & Tyler, 1988; Tyler, 1989). Furthermore, people tend to accept decisions and their consequences if they have some degree of input into making them (Folger, Rosenfield, Grove, & Cokran, 1979). This research indicates that employees desire voice and view procedures into which they 20 have input as fairer than those that do not allow input, regardless of the outcomes (Giacobbe-Miller, 1995; Kanfer, Sawyer, Earley, & Lind, 1987; Lind, Lissak, & Conlon, 1983; Tyler, Rasinski, & Spodick, 1985). In the context of employee selection, voice can be construed as having adequate opportunity to demonstrate one?s competencies in the testing situation (Arvey & Sackett, 1993) or the possibility of exerting control in a selection process (Schuler, 1993). In comparison with many types of tests, situational interviews and role-play exercises provide greater opportunity to express oneself and demonstrate one?s competency. Gilliland (1993) hypothesized that ?interviews provide the most direct opportunity for candidates to perform or have a voice in the process because interviews provide the opportunity to express oneself directly to the interviewer rather than indirectly through test questions? (p. 704). Similarly, role-play exercises provide candidates with an opportunity to express oneself and allow them to address issues and situations that seem important to the candidate through interactions with an actor. On the other hand, writing samples provide candidates with a very strict testing environment and narrowly define how and to what candidates will respond (i.e., response forms, background materials, etc.). Accordingly, the following hypothesis is made: Hypothesis 2: Candidates will perceive the situational interview and the role- play exercise as providing them with greater opportunity to perform or demonstrate their knowledge, skills, and abilities than the writing sample. Consistency of administration. Consistency of administration occurs when ?decision procedures are consistent and without bias across people and over time? (Bauer et al., 2001, p. 391). Gilliland (1993) stated that it is reasonable to expect that 21 consistency perceptions may be influenced by the type of test and may be more salient for some types of test than for others. With situational interviews, candidates are typically aware of the fact that they are asked a standard set of questions, and it is hypothesized that this is perceived positively, as they feel that all candidates are treated the same (Gilliland & Steiner, 1999). Similarly, with role-play exercises, candidates are informed that all candidates will be faced with the same situation or a parallel situation in which they must address critical issues and/or problems. However, with both situational interviews and role-play exercises, there are opportunities for variations in how instructions, scenarios, and cues can be communicated due to the aural nature of the exercises. Alternatively, writing samples might be viewed as more consistently administered because instructions and background information are presented in writing and because they are often mass administered, allowing candidates to actually see that all candidates were administered the test in a standardized manner and treated similarly. 
In contrast, situational interviews and role-play exercises are typically administered in isolation from other candidates, and it may be more difficult for candidates to recognize and/or appreciate the consistency with which they are administered. Therefore, consistency may be more salient with writing samples.

Hypothesis 3: Candidates will perceive the writing sample as being more consistently administered than the situational interview and role-play exercise.

Information known. Bauer et al. (2001) defined information known about tests as "information, communication, and explanation about the selection process prior to testing" (p. 391). This information is important because it provides candidates with an explanation or justification for the decisions made. Regarding this justification, perceptions of fairness are expected to be influenced by information concerning test validity, scoring procedures, and the manner in which scores will be used to arrive at hiring decisions (Gilliland, 1993). Further, other information that may affect fairness perceptions is a priori information about the selection process, such as study materials, candidate tutorials, or orientation sessions. Often, employers (especially public safety agencies) provide candidates with study materials for selection and promotion procedures. This information typically includes the types of exercises included in the process, sample exercises/questions, the dimensions/KSAs measured, and preparation tips. While this type of information is typically provided for all exercises in a selection procedure, information about role-play exercises is usually not as instructive due to the behavioral nature of these devices. The scripts used by actors in role-play exercises often contain responses that are used only if a candidate asks a specific question or makes a certain comment. Therefore, role-play exercises are much more complex than situational interviews and writing samples, and a priori information about the nature of role-play exercises cannot adequately prepare candidates for all of the possible scenarios that could be addressed. Consequently, it is hypothesized that candidates will perceive more information to be known about situational interviews and writing samples in comparison to role-play exercises.

Hypothesis 4: Candidates will perceive the information known about the situational interview and writing sample as greater than the information known about the role-play exercise.

Antecedents of Selection Procedural Justice Perceptions

Several variables have been linked to candidate reactions to testing, including candidates' general attitude toward tests, perceived control over performance, motivation to perform, applicant race, organizational tenure, evaluative history, and level of position. The following sections review the extant research on each of these variables and present hypotheses concerning their association with selection procedural justice perceptions.

Test-taking motivation. One factor posited to be related to reactions to selection procedures is test-taking motivation. It is widely accepted that test performance is a function of motivation and ability (Chan et al., 1997). Additionally, prior research has indicated that negative reactions to selection procedures, particularly perceptions of face validity or job-relatedness, are related to reduced motivation to do well (Arvey, Strickland, Drauden, & Martin, 1990; Chan & Schmitt, 1997; Goldstein et al., 1998).
Furthermore, research has suggested that the face validity of a selection device can affect applicant motivation to do well on the test (Robertson & Kandola, 1982); specifically, greater face validity should positively affect applicant motivation to perform. Therefore, based on previous research and because it was hypothesized earlier that the role-play exercise and writing sample would be viewed as more job-related than the situational interview, it is proposed that candidate motivation to perform will be highest for the role-play exercise and writing sample due to the high face validity of these exercises.

Hypothesis 5: Test-taking motivation for the role-play exercise and writing sample will be greater than that for the situational interview.

Additionally, researchers have suggested that perceived job-relatedness has a positive effect on motivation to perform well on a test, which in turn affects test performance (Chan, Schmitt, DeShon, Clause, & Delbridge, 1997; Gordon & Kleiman, 1976; Robertson & Kandola, 1982). Following this logic, high-fidelity simulation tests such as writing samples and role-play exercises should provide applicants with greater motivation to perform well than low-fidelity tests such as situational interviews. Furthermore, this increased motivation to perform should result in increased performance on such tests. In other words, perceived job-relatedness should moderate the relationship between candidates' motivation to perform and their test performance. Therefore, the following hypothesis is advanced:

Hypothesis 6: Perceived job-relatedness will moderate the relationship between candidates' motivation to perform and their test performance such that candidates will have greater motivation to perform well on tests perceived to be highly job-related, which will result in higher performance on such tests.

Test-taking attitudes. The examination of the test-taking attitudes of job applicants is extremely important. Research on applicant reactions to selection procedures has suggested that more than one third of Americans have unfavorable attitudes toward employment testing (Schmit & Ryan, 1997). Considering the negative organizational consequences of unfavorable reactions to employment testing, studying test-taking attitudes is critical. Perhaps the most important reason for studying test-taking attitudes is that they have been shown to be related to actual test performance (McCarty & Goffin, 2003). There is considerable evidence that individuals differ in terms of their state and trait anxiety toward tests and that these differences influence their test performance (Hashemian, 1978). Furthermore, Nevo and Sfez (1985) argued strongly that test takers experience profound emotions, feelings, and attitudes as a result of taking tests, which could influence their performance on future tests. More recently, research has revealed that high levels of test-taking anxiety and/or low levels of test-taking motivation are thought to have a negative impact on test performance (Arvey et al., 1990; Schmit & Ryan, 1992). For example, Arvey et al. (1991) found a small relationship between test-taking attitudes and test performance. Additional studies have shown that test-taking attitudes are associated with test performance (cf. Chan et al., 1997; Steele & Aronson, 1995). Test-taking attitudes are also expected to be related to procedural justice perceptions.
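Hypothesis 6 above, like the remaining moderation hypotheses in this study, is an interaction prediction, and such predictions are conventionally tested with hierarchical moderated regression: the main effects are entered in a first step, the product term is added in a second step, and the increment in R-squared attributable to the product term is evaluated. The following sketch merely illustrates that analytic logic; it is not the study's actual analysis (which was conducted in SPSS 13.0), and the data frame and variable names (performance, motivation, job_related) are hypothetical.

    # Minimal sketch of a hierarchical moderated regression (Hypothesis 6 logic).
    # Assumes a data frame with hypothetical columns: performance, motivation, job_related.
    import pandas as pd
    import statsmodels.formula.api as smf

    def test_moderation(df: pd.DataFrame):
        # Mean-center the predictors so the interaction coefficient is interpretable
        df = df.assign(
            motivation_c=df["motivation"] - df["motivation"].mean(),
            job_related_c=df["job_related"] - df["job_related"].mean(),
        )

        # Step 1: main effects only
        step1 = smf.ols("performance ~ motivation_c + job_related_c", data=df).fit()

        # Step 2: add the motivation x job-relatedness product term
        # (the '*' expands to both main effects plus their interaction)
        step2 = smf.ols("performance ~ motivation_c * job_related_c", data=df).fit()

        # Moderation is supported if the product term adds significant
        # incremental variance over the main-effects model
        f_value, p_value, df_diff = step2.compare_f_test(step1)
        return {
            "interaction_b": step2.params["motivation_c:job_related_c"],
            "delta_r_squared": step2.rsquared - step1.rsquared,
            "p_value": p_value,
        }

A statistically significant increment in R-squared for the product term, with steeper motivation-performance slopes at higher levels of perceived job-relatedness, would be consistent with the hypothesized moderation; the later interaction hypotheses follow the same template with their respective predictors.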
Research has indicated that individuals view testing differently, and more negatively, when actually competing for a job than when simply reporting their general attitudes about testing, and test-taking attitudes have been shown to be related to procedural justice perceptions (Lounsbury, Bobrow, & Jensen, 1989). More recent research has also shown that procedural justice perceptions and test-taking attitudes are positively related (Bauer, Maertz, Dolen, & Campion, 1998). While research indicates that test-taking attitudes are related to both test performance and procedural justice perceptions, the effect test type has on these relationships has been largely ignored. The type of tests involved in a selection procedure can have profound effects on candidates. Candidates' knowledge of the existence of a test in the selection procedure, as well as the test type (e.g., personality test, drug test, structured interview), can affect candidates' behavior in such a way that it causes them to withdraw from the process (Schmit & Ryan, 1997). Therefore, it is important to examine how candidates' test-taking attitudes about different types of tests might affect procedural justice perceptions and test performance. Different types of selection procedures (i.e., simulations vs. paper-and-pencil measures) elicit different types of reactions (Dodd, 1977; Macan et al., 1994; Schmidt et al., 1977; Smither et al., 1993). Therefore, it is plausible that candidates hold attitudes about specific types of tests, that these attitudes are related to procedural justice perceptions, and that they moderate the relationship between procedural justice perceptions and test performance. It is proposed that the positive relationship between procedural justice perceptions and test performance will be considerably stronger for certain types of tests because of candidates' test-taking attitudes toward those test types. For example, if candidates believe that traditional cognitive ability testing is a fair testing method because it is more familiar to them than performance tests and simulations, this attitude might cause them to have more favorable procedural justice perceptions and, as a result, higher test performance on cognitive tests. Accordingly, the following is hypothesized:

Hypothesis 7: Attitude toward assessment center exercise type will moderate the relationship between selection procedural justice perceptions and test performance such that more positive test-taking attitudes combined with more positive procedural justice perceptions will be associated with higher test performance.

Organizational tenure. Another variable thought to play an important role in candidates' reactions to testing is level of job knowledge or job experience. Theoretically, more experienced candidates should have had more opportunity to develop the knowledge base required to perform well on selection procedures that mirror the job. However, organizational tenure may play a greater role in the performance and reactions of candidates for certain types of tests than for others. It has been suggested that job knowledge or experience within a specific job may cause candidates to react more favorably to situational interviews and role-play exercises because they are better prepared to respond to these types of questions (Moscoso, 2000).
Especially with situational interview questions and role-play exercises, experience within an organization, field, or industry would provide one with the opportunity to handle similar situations and observe coworkers or supervisors handling similar situations. Accordingly, it is proposed that candidates' reactions to testing will be moderated by level of experience (i.e., tenure in the job) and that candidates with more experience will prefer situational interviews and role-play exercises to writing samples. The following hypothesis is advanced:

Hypothesis 8: Organizational tenure will moderate the relationship between assessment center exercise type and candidate reactions in such a way that candidates with more organizational tenure will perceive situational interviews and role-play exercises as higher in opportunity to perform, information known, job relatedness, and consistency of administration than writing samples in comparison with less-experienced candidates.

Evaluative history. Another factor expected to influence candidates' reactions to selection procedures is their evaluative or test-taking history. Research has shown that how well candidates have done in similar selection procedures in the past affects how they perceive current selection procedures. Therefore, candidates' test-taking history should be positively associated with their reactions to selection procedures. Several researchers have reported support for this assertion in different contexts. Kravitz, Stinson, and Chavez (1994) found that previous experience with tests such as interviews, cognitive ability tests, personality tests, and work samples was positively associated with reactions to tests such as fairness, invasion of privacy, relevance, and overall appropriateness of the test. Similarly, experience with a physical ability test has been shown to be positively related to the perceived fairness and job relevance of a similar test (Ryan, Greguras, & Ployhart, 1996). Truxillo, Bauer, and Sanchez (2001) also demonstrated that test experience was positively correlated with overall test fairness perceptions. Finally, research has shown that test-taking experience is positively related to candidate beliefs in tests (Barbera, Ryan, Burris, Desmarais, & Dyer, 1995). While most research has supported the association between test-taking experience and reactions to testing, Wiechmann and Ryan (2003) found that test-taking experience did not relate to post-test perceptions of a computerized in-basket examination. Although the results of research concerning test-taking experience and reactions are somewhat mixed, it is important to examine this relationship further. Ryan and Ployhart (2000) stated that the role of one's evaluative history should, at the very least, be assessed. As a result, the following is hypothesized:

Hypothesis 9: Evaluative history will be positively related to candidate perceptions of the test. Specifically, past success on similar tests will be positively related to perceived job-relatedness, information known, opportunity to perform, test-taking motivation, and attitude towards testing.

Level of target position. In addition to experience, a somewhat related factor may be associated with perceptions of selection procedures.
Because the level or status of the position has been shown to be associated with both the selection process and selection decisions (e.g., Hopper, 1977; Hopper & Williams, 1973; Kalin & Rayko, 1978), candidates for different-level positions can be assumed to have different sets of perceptions of selection fairness (Singer, 1989). Since current employees possess more and different information concerning the organization, selection tests, and score use, it seems plausible that their perceptions would differ from those of entry-level candidates (Ryan & Ployhart, 2000). Likewise, employees at higher levels within an organization would have access to more and different information than those at lower levels of the organization. This difference in perspective and viewpoint might cause employees to have different reactions to selection procedures depending on their level within the organization. This is an important distinction in light of the fact that much of the reactions-to-testing research has been conducted in a selection context (i.e., with entry-level or external candidates) rather than in a promotion context (i.e., with internal candidates). Therefore, the following hypothesis is advanced:

Hypothesis 10: Level of position will be positively related to selection procedural justice perceptions in such a way that candidates for positions of a higher level will react more favorably (i.e., have higher selection procedural justice perceptions) to the selection procedures. Specifically, it is expected that candidates for Lieutenant will perceive all selection devices as higher in job-relatedness, information known, opportunity to perform, test-taking motivation, and attitude towards testing than candidates for Sergeant.

Race. Several researchers have reported significant African-American–White differences of approximately one standard deviation in performance on paper-and-pencil measures of cognitive ability (e.g., Hunter & Hunter, 1984; Loehlin, Lindzey, & Spuhler, 1975; Schmidt et al., 1977). As several researchers have noted (e.g., Arvey, Strickland, Drauden, & Martin, 1990; Chan & Schmitt, 1997; Chan, Schmitt, DeShon, Clause, & Delbridge, 1997; Helms, 1992), studies on subgroup differences have tended to focus on the ability aspects rather than the motivational aspects of test performance. Other researchers have noted the need to examine potential individual difference moderators of procedural justice perceptions (Bauer et al., 2001). While African-American–White differences in test performance have received ample research attention, an area that has been largely overlooked is racial differences in reactions to selection procedures. The relationship between race and reactions to testing is clearly an important issue with practical implications. Race differences in test reactions could have implications for minority recruitment programs, adverse impact, and EEO litigation. Additionally, several researchers have suggested that African-Americans' performance on tests may be negatively affected by attitudes toward testing, and that performance on traditional tests might be improved if these attitudes can be modified (McKay & Doverspike, 2001). Arvey, Strickland, Drauden, and Martin (1990) examined African-American–White differences in motivation toward employment tests, and whether these motivational differences might account for any of the racial gap in test performance.
The results of their study suggested that a lack of test-taking motivation among African-American candidates undermined their subsequent performance on employment tests. In another study, Steele and Aronson (1995) examined whether African-American test performance is compromised by perceptions of stereotype threat. Stereotype threat is defined by the authors as a form of anxiety that results when a person is concerned that his/her performance may substantiate a negative stereotype that exists about his/her group. According to stereotype threat theory, if African-Americans fear performing poorly on cognitive measures, thus confirming the negative stereotype concerning their group's mental ability, they will experience a decrease in the concentration and attention required in testing situations (McKay & Doverspike, 2001). Steele and Aronson hypothesized that, if the influence of stereotype threat could be reduced, the test performance of African-Americans could be increased. Only partial support for this hypothesis was found: how the test was described to candidates (i.e., framed as diagnostic or nondiagnostic of IQ) did affect the test performance of African-American candidates, but the interaction was not statistically significant. McKay, Doverspike, Bowen-Hilton, and Martin (in press) examined the effect of how a test is framed on the subsequent test performance of African-American candidates and found that framing a test as indicative or nonindicative of one's intellectual ability can alter African-Americans' performance on cognitive measures. Thus, the extant literature suggests that the study of racial differences in reactions to testing is important to organizations and society as a whole.

Although adverse impact on cognitive ability tests has received ample attention in the literature, efforts to determine the causes of lower test performance by African-Americans have been largely unsuccessful. One reaction to the subgroup differences found with paper-and-pencil tests has been to discontinue or limit the use of these types of measures in hopes of eliminating or ameliorating adverse impact. Other calls for action include alternative testing formats such as biodata, structured interviews, and personality measures as possible solutions to the problem of differences in test performance (Goldstein, Yusko, Braverman, Smith, & Chung, 1998; Schmitt, Rogers, Chan, Sheppard, & Jennings, 1997). One alternative test format that has become increasingly popular is the structured interview. Recent studies have suggested that structured interviews are valid selection tools (e.g., Huffcutt, Roth, & McDaniel, 1996; McDaniel, Whetzel, Schmidt, & Maurer, 1994). Structured interviews have also been found to be positively correlated with job performance (Williamson, Campion, Malos, Roehling, & Campion, 1997) and predictive of candidates' fit with an organization (Balaban, 1997). Additionally, it has been suggested that structured interviews improve the legal defensibility of the selection process (Campion & Arvey, 1989; Pursell, Campion, & Gaylord, 1980; Williamson et al., 1997). Certainly, one of the most important reasons for the popularity of orally administered selection devices such as structured interviews is that they tend to reduce mean performance differences by race (Huffcutt & Roth, 1998).
One potential explanation for the lack of adverse impact with orally administered selection devices is that they do not require the high reading ability normally required by written tests. Helms (1992) argued that tests administered orally and aurally may influence the test responses of African-American candidates, which may explain the absence of African-American–White differences in structured interviews. Citing Helms (1992), Goldstein, Braverman, and Chung (1992) argued that the African-centered values and beliefs of African-Americans stress communalism, movement, and orality, which would consequently affect their test-taking performance. Since African-Americans do not perform as well as Whites on paper-and-pencil measures, and these types of tests seem to be inconsistent with the cultural values, beliefs, and experiences of African-Americans, it is reasonable to expect that African-Americans' attitudes toward written tests would be less favorable than those of Whites. Frierson (1986) argued that African-Americans have less opportunity to learn and practice the skills necessary to do well on standardized tests, thereby reducing their belief in the tests. Other studies have found African-American test-taking motivation to be significantly lower than that of Whites (Arvey et al., 1990; Chan, Schmitt, DeShon, Clause, & Delbridge, 1997). Some researchers have suggested that minority applicants view performance assessments more favorably than multiple-choice testing (Ryan & Greguras, 1998), although little research has been conducted on this assumption. Empirical evidence concerning racial differences in perceptions of job relatedness is meager (Chan et al., 1997). Only two studies have examined racial differences in the perceived job relatedness of tests. Chan et al. (1997) found lower levels of face validity perceptions and test-taking motivation for African-Americans after completing a cognitive ability test battery. In addition, Chan and Schmitt (1997) reported a significant race × method interaction effect such that the differences in the face validity perceptions of African-Americans and Whites were greater for a paper-and-pencil method than for a video-based method of testing. Because of these findings, it is expected that African-Americans and Whites may react differently to tests with heavy written content compared to other test formats. Accordingly, it is expected that African-American candidates will view situational interviews and role-plays more favorably than writing samples due to the low written content and aural nature of the situational interview and role-play exercises. Therefore, the following hypotheses are proposed:

Hypothesis 11a: Race will moderate the relationship between assessment center exercise type and job-relatedness perceptions in such a way that African-American candidates will perceive situational interviews and role-play exercises as being more job-related than White candidates will.

Hypothesis 11b: Race will moderate the relationship between assessment center exercise type and opportunity to perform perceptions in such a way that African-American candidates will perceive situational interviews and role-play exercises as providing greater opportunity to perform than White candidates will.

Hypothesis 11c: Race will moderate the relationship between assessment center exercise type and test-taking motivation in such a way that African-American candidates will report lower test-taking motivation for writing samples than White candidates will.
Summary of Research Hypotheses

The hypotheses for this dissertation are presented in Table 1. Hypotheses 1, 2, 3, 4, and 5 concern how candidates will perceive the different exercises according to the SPJS scales or test-taking motivation. Hypotheses 6, 7, and 8 concern proposed moderators of the relationships among candidate reactions, selection procedural justice perceptions, and exercise performance. Hypotheses 9 and 10 concern the direction of the relationship between proposed antecedents of selection procedural justice perceptions and candidate reactions. Hypotheses 11a, 11b, and 11c involve the interaction of assessment center exercise type with candidate race.

Table 1. Summary of Study Hypotheses

Hypothesis 1: Candidates will perceive the role-play exercise and writing sample as being more job-related than the situational interview.

Hypothesis 2: Candidates will perceive the situational interview and the role-play exercise as providing them with greater opportunity to perform or demonstrate their knowledge, skills, and abilities than the writing sample.

Hypothesis 3: Candidates will perceive the writing sample as being more consistently administered than the situational interview and role-play exercise.

Hypothesis 4: Candidates will perceive the information known about the situational interview and writing sample as greater than information known about the role-play exercise.

Hypothesis 5: Test-taking motivation for the role-play exercise and the writing sample will be greater than that for the situational interview.

Hypothesis 6: Perceived job-relatedness will moderate the relationship between candidates' motivation to perform and their test performance such that candidates will have greater motivation to perform well on tests perceived to be highly job-related, which will result in higher performance on such tests.

Hypothesis 7: Attitude toward assessment center exercise type will moderate the relationship between selection procedural justice perceptions and test performance such that more positive test-taking attitudes combined with more positive procedural justice perceptions will be associated with higher test performance.

Hypothesis 8: Organizational tenure will moderate the relationship between assessment center exercise type and candidate reactions in such a way that candidates with more organizational tenure will perceive situational interviews and role-play exercises as higher in opportunity to perform, information known, job relatedness, and consistency of administration than writing samples in comparison with less-experienced candidates.

Hypothesis 9: Evaluative history will be positively related to candidate perceptions of the test. Specifically, past success on similar tests will be positively related to perceived job-relatedness, information known, opportunity to perform, test-taking motivation, and attitude towards testing.

Hypothesis 10: Level of position will be positively related to overall selection procedural justice perceptions in such a way that candidates for positions of a higher level will react more favorably (i.e., have higher selection procedural justice perceptions) to the selection procedures. Specifically, it is expected that candidates for Lieutenant will perceive all selection devices as higher in job-relatedness, information known, opportunity to perform, test-taking motivation, and attitude towards testing than candidates for Sergeant.
Hypothesis 11a: Race will moderate the relationship between assessment center exercise type and job-relatedness perceptions in such a way that African-American candidates will perceive situational interviews and role-play exercises as being more job-related than White candidates will.

Hypothesis 11b: Race will moderate the relationship between assessment center exercise type and opportunity to perform perceptions in such a way that African-American candidates will perceive situational interviews and role-play exercises as providing greater opportunity to perform than White candidates will.

Hypothesis 11c: Race will moderate the relationship between assessment center exercise type and test-taking motivation in such a way that African-American candidates will report lower test-taking motivation for writing samples than White candidates will.

CHAPTER 2

METHOD

Participants

Analyses were conducted on data collected from candidates competing for the positions of sergeant and lieutenant in a metropolitan police department in the southeastern United States. Two hundred three officers participated in the selection procedures for sergeant, and 82 officers participated in the selection procedures for lieutenant. The selection procedures were embedded in an assessment center. Candidates for sergeant were required to complete selection procedures consisting of (a) a written examination, (b) a situational interview, (c) a role-play exercise, and (d) a writing sample exercise. The written exam and the situational interview were administered to all candidates (N = 203), and performance on these two components determined whether or not candidates progressed to the final stages of the selection procedure. A total of 91 candidates for the rank of sergeant advanced to the role-play and writing sample exercises. Candidates for lieutenant were required to complete (a) a situational interview, (b) a role-play exercise, and (c) a writing sample. All candidates completed all exercises in the lieutenant's assessment center (N = 82). In order to make comparisons across ranks (i.e., level of target position), this study focused on the three assessment center exercises that were common across the two ranks: the situational interview, the role-play exercise, and the writing sample. The written test was not used in this study since only the sergeant candidates completed it. In total, 177 candidates (95 for sergeant and 82 for lieutenant) completed all three exercises, and 173 (98%) agreed to participate in this study. Of these, 104 (60.1%) were White, 62 (35.8%) were African-American, 1 (.6%) was Asian, 3 (1.7%) were Hispanic, and 3 (1.7%) listed other as their race. The sample consisted of 160 males (92.5%) and 11 females (6.4%); two participants (1.1%) failed to indicate their gender. The majority (55.2%) of the participants had between 11 and 19 years of experience in their organization.

Measures

Selection procedural justice perceptions. A modified version of Bauer et al.'s (2001) Selection Procedural Justice Scale (SPJS) was used to assess candidates' fairness reactions to the assessment center components. The SPJS is consistent with eight of Gilliland's (1993) 10 procedural justice rules and measures perceptions of job-relatedness, opportunity to perform, consistency of administration, information known, openness, treatment, two-way communication, and propriety of questions. Coefficient alphas reported by Bauer et al.
(2001) for all of the subscales (ranging from α = .73 for job-relatedness, the lowest, to α = .92 for treatment, the highest) were above the .70 level considered acceptable for newly developed scales (Nunnally, 1978). Bauer et al. found the instrument to have both convergent validity, with Pearson product-moment correlations among the 11 subscales and an overall procedural justice measure ranging from .25 to .77, and divergent validity, with none of the 11 subscales being correlated with age, gender, or test score (all coefficients < .19, nonsignificant). This study utilized 21 items comprising the job-relatedness, opportunity to perform, consistency of administration, and information known subscales, and the response scale ranged from 1 (strongly disagree) to 5 (strongly agree). Job-relatedness was measured with three items (e.g., "The actual content of the test was clearly related to the job of ___"). Opportunity to perform was measured with three items (e.g., "I could really show my skills and abilities through this test"). Consistency of administration was measured with three items (e.g., "There were no differences in the way the test was administered to different applicants"). Information known was measured with three items (e.g., "I understood in advance what the testing process would be like"). Subscale items were averaged to produce a dimension score (e.g., job-relatedness).

Attitude toward testing. A measure of attitude toward testing, based on the Belief in Testing subscale developed by Arvey, Strickland, Drauden, and Martin (1990), was used. The scale consisted of three items (e.g., "I think that testing people is a fair way to determine their abilities"), and the response scale ranged from 1 (strongly disagree) to 5 (strongly agree). Arvey et al. reported a coefficient alpha of .71 for the Belief in Testing scale.

Test-taking motivation. A measure of applicant test-taking motivation, based on a scale from the Test Attitude Survey (TAS) created by Arvey et al. (1990), was used. The scale consists of three items (e.g., "Doing well on this test was important to me"), and the response scale ranged from 1 (strongly disagree) to 5 (strongly agree). Arvey et al. reported a coefficient alpha of .85 for the Test-taking Motivation scale.

Race. Candidates' race was coded as 1 = White and 2 = African-American. Racial data were gathered from archival records.

Gender. Candidates' gender was coded as 1 = male and 2 = female. Gender data were gathered from archival records.

Organizational tenure. Candidates' organizational tenure was measured by a self-report item on the survey. Candidates indicated the length of time they had been employed by the department using the following scale: 2-4 years; 5-7 years; 8-10 years; 11-13 years; 14-16 years; 17-19 years; 20-22 years; 23-25 years; and 26 or more years.

Evaluative history. Candidates' evaluative history was measured by two self-report items. Candidates responded to questions concerning their past performance on similar types of selection procedures, and the response scale ranged from 1 (strongly disagree) to 5 (strongly agree). The items included "In the past, I have performed well on this type of test" and "I always seem to do poorly on this type of test" (reverse scored).

Level of target position. The rank for which candidates were competing (i.e., level of target position) was coded as 1 = sergeant and 2 = lieutenant. This information was gathered from archival records.

Exercise performance. Candidates'
performance on each exercise was measured by the ratings or evaluations each candidate received from the trained assessors during the actual assessment center. Candidate performance evaluations were made on a five-point Likert-type scale ranging from 1 (Unacceptable) to 5 (Superior). Evaluations were scaled to a 100-point scale in order to aid in combining exercise scores, crediting veterans' points, etc.

Candidate perceptions of job-relatedness, opportunity to perform, consistency of administration, information known, test-taking motivation, attitude toward testing, evaluative history, and exercise performance were collected for each of the three assessment center exercises. A complete list of all survey items for each of the scales is contained in Appendix A.

Procedure

Immediately after completing the assessment exercises, candidates were escorted to a holding room (i.e., away from the testing room). Participant involvement was strictly voluntary, and candidates were told that they had the right to refuse to participate in the study. Participants were provided with general instructions for completing the questionnaire containing the SPJS, Attitude Toward Testing, and Test-taking Motivation scales. Since process and outcome fairness are related (Brockner & Wiesenfeld, 1996), measuring procedural justice after the results are known (i.e., after scores are released and hiring decisions are made) may create a potential confound. Therefore, immediately after completing each selection component in this study, candidates completed a survey to collect their reactions to the component just completed. Candidates' participation was voluntary, and the survey was described as part of a university research project not affiliated with the police department or the human resources department.

Assessment Center Development and Exercises

The following section describes the development, nature, and content of the exercises that comprised the assessment center used in this study. While the focus of the study is on the individual selection devices, it is important to understand how the assessment center exercises were developed, their content, and how they were administered.

Development. The assessment center used in this study was developed for a large metropolitan police force by the Center for Business and Economic Development at Auburn University Montgomery. The assessment center exercises were developed as part of a larger project designed to create promotional procedures for the positions of sergeant, lieutenant, and captain. Only the exercises developed for the ranks of sergeant and lieutenant were used in the present study. A comprehensive job analysis was conducted in which incumbent subject matter experts (SMEs) identified the work behaviors and knowledge, skills, and abilities (KSAs) perceived to be necessary for success in the positions of sergeant and lieutenant. SMEs then grouped KSAs meeting testing criteria into dimensions on the basis of similarity. The performance dimensions are included in the candidate information guide contained in Appendix B. With the assistance of SMEs, tests or exercises were developed to measure these performance dimensions. SMEs participated in critical incident sessions, in which they were asked to describe situations on the job that required them to demonstrate the KSAs that comprise the performance dimensions.
Both effective and ineffective behavioral examples were obtained in order to provide representation of the entire range of performance for each dimension. Based on the resulting critical incidents, hypothetical scenarios were developed that described situations that a newly promoted sergeant or lieutenant could be expected to encounter on the job. For the writing sample exercise, candidates were presented with a scenario requiring them to produce a written report or letter in response to information provided to them. For the situational interview questions, each scenario presented candidates with a hypothetical problem situation that required them to assume a supervisory role and to describe the actions they would take to arrive at a resolution. For the role-play exercises, each scenario presented candidates with a hypothetical problem situation that required them to interact with an actor playing the role of either a citizen or a subordinate in order to solve the problem. Scenarios were constructed to measure multiple performance dimensions.

In order to assess all candidates, it was necessary to conduct the situational interview process over three successive days. To maintain the security of the questions, three parallel forms of each question were developed, one for each day. Questions were parallel in general content and complexity. Parallelism among questions was determined by two different means. First, a panel of SMEs involved in question development rated the questions as to their similarity to the original question. Second, a review was conducted by a management team composed of five officers of the rank above the target rank. Participants were required to assess the similarity of theme, difficulty level, and dimensions being assessed by the parallel questions. This review confirmed the parallel forms of each question. This process resulted in three parallel sets of four questions, for a total of 12 questions.

Development of scoring guidelines. Scoring guidelines were created to assess the appropriateness of the actions candidates reported they would take in response to interview questions and role-play exercises. Those SMEs who participated in question development were asked to derive a list of appropriate and inappropriate responses to each hypothetical problem scenario. Once generated, each potential response was placed into one of three categories: (a) clearly unacceptable, (b) clearly acceptable, and (c) clearly superior.

Role-play exercises. The role-play exercises were developed to simulate the typical interactions between sergeants or lieutenants and other individuals, particularly subordinate personnel and citizens. The exercises consisted of two job-related, one-on-one role-play situations involving problems encountered by a sergeant or lieutenant. In one role-play exercise, candidates assumed the role of sergeant or lieutenant, and a role-player assumed the role of a citizen. In a second role-play exercise, the candidate assumed the role of sergeant or lieutenant, and a role-player assumed the role of a subordinate. Candidates were provided with background information explaining the general nature of the situation. Candidates were then asked to handle the situation as they thought a sergeant or lieutenant should. The role-play exercises were scored in real time by a panel of two assessors using scoring guidelines. Assessors independently rated each candidate on each dimension measured by the exercises.
Rating differences greater than one point were discussed by assessors in an effort to come within one point of agreement. Ratings of both assessors across all dimensions were aggregated to produce an overall score for the role-play portion of the assessment center. A sample role-play exercise is provided below.

You are to play the role of a Sergeant in the Metro Police Department. It is 1615 hours. You are assigned to the West Precinct uniform division, evening watch. You receive a call from your Captain asking you to personally deal with a problem that has come to his attention. The Captain received a call from a friend of the family, describing a problem she had with one of your officers, Officer Mike Stewart. Carol Williams, the friend of the Captain, has become very upset over a situation that occurred last night. As you understand it from the Captain, the complaint involves a problem that occurred when Officer Stewart stopped Ms. Williams. Captain Keeler tells you that Ms. Williams asked to come to his office to discuss the situation. Captain Keeler tells you that Ms. Williams has just stepped into his office. He said that he will assure Ms. Williams that you will be happy to discuss the situation with her. He asked that you give them a few minutes and then he'll bring Ms. Williams to your office. You check and see that today is Officer Stewart's off day. Also, you pull the ticket and find that a citation was issued yesterday evening at 2125 to Ms. Williams for driving under the influence.

Your Task: Proceed with this meeting in your office. Handle the citizen complaint the way that a Sergeant should handle it.

Remember:
1. You are a Sergeant in the Uniform Division of Metro Police Department. You are assigned to the West Precinct, evening watch.
2. Captain Keeler asked you to meet with Carol Williams, a family friend, about a complaint regarding the way she was treated by one of your officers yesterday evening.
3. Carol Williams' problem is regarding Officer Mike Stewart, who is off today. Officer Mike Stewart stopped her yesterday evening. Captain Keeler will bring Ms. Williams to your office in a few minutes.

Do you have any questions?

Writing sample exercise. The writing sample exercise was designed to measure written communication skills. This exercise required candidates to read and review information, determine the appropriate action, and formulate a response in writing. The instructions requested the candidates to produce a writing sample that a sergeant or a lieutenant might be required to write. Each candidate had the same amount of time in which to write an appropriate response. Candidates were asked to produce a writing sample that completely addressed the issues and requests presented in the instructions. The writing sample was scored by a panel of two assessors using scoring guidelines developed in collaboration with SMEs. Assessors independently rated each candidate on each dimension measured by the writing sample. Rating differences greater than one point were discussed by assessors in an effort to come within one point of agreement. Ratings of both assessors across all dimensions were aggregated to produce an overall score for the writing sample portion of the assessment center. An example of a writing sample similar to the one used in the study is provided below.

You are a Sergeant with the Metro Police Department. Today is Thursday, August 23rd. Yesterday one of your officers, Mike Reynolds, was involved in an altercation with a citizen, Ms.
Annie Potts, at a traffic stop on Highway 290. Officer Reynolds was polite yet firm in his dealings with Ms. Potts; however, she has made a complaint that he was rude and unreasonable. You are being provided with a copy of Officer Reynolds' statement. Lieutenant Jamison has requested that you prepare a letter responding to Ms. Potts. You should review Officer Reynolds' statement and respond appropriately to Ms. Potts.

You have thirty minutes in which to write this letter. The letter should be no longer than two pages. If it is longer than two pages, ONLY THE FIRST TWO PAGES WILL BE SCORED. You have been provided with pencils, paper, and Final Response Forms. The letter you wish to be scored MUST appear on the Final Response Forms. Only the Final Response Forms will be scored. On the top corner of each page of the Final Response Form there is a space for your assigned two-digit number. Please place your number from your candidate envelope into these spaces. DO NOT use your name in the letter. Please use the name SERGEANT PAT CANDIDATE. Please be specific and give details. Address the issues outlined in the directions. Your letter will be assessed for your written communication skills.

Situational interview exercise. Interview questions were developed to assess five performance dimensions identified by SMEs. SMEs participated in a critical incident session, in which they were asked to describe situations on the job that required them to demonstrate the KSAs constituting the five performance dimensions. Both effective and ineffective behavioral examples were obtained in order to provide representation of the entire range of performance for each dimension. Based on the critical incidents that were obtained, hypothetical scenarios were developed that described situations that a sergeant or lieutenant could be expected to encounter on the job. Each scenario presented candidates with a hypothetical problem situation in which they were required to assume a supervisory role and describe the actions they would take to arrive at a resolution. Scenarios were constructed to measure more than one performance dimension. The situational interviews consisted of five situational questions measuring technical and departmental knowledge, human relations, problem analysis, management ability, and oral communication. A sample situational interview question is provided below.

You are a recently promoted Sergeant. It is 2300 hours on Monday. You are responding to a burglary call at the Bellwood Shopping Center. One of the units in your area is already on the scene. When you arrive, the two officers on the scene relay the information they have gathered. One juvenile suspect is in custody. He was arrested inside one of the stores. He has a large cut on his right shoulder, and it is bleeding heavily. Windows in three stores have been broken out. All three stores are men's clothing retailers. The officers tell you that each of the three stores is missing clothing. Merchandise is lying on the floor in each of the three stores. Clothes racks are disarranged as if someone went through them in a hurry. One cash register in one of the stores has been forced open and is empty. How should you handle this situation? Please be specific and give details.

Note: When you are ready to respond to this scenario, please tell the interview coordinator that you are ready to begin. When you have completed your response, please tell the interview coordinator that you are finished.
Huffcutt and Arthur (1994) described four progressively higher levels of structure used to characterize interviews. Level 1 is characterized by an absence of formal constraints and is representative of the typical, unstructured interview. Level 2 is characterized by limited constraints, such as standardization of the topical areas covered in the interview. Level 3 is characterized by pre-specified questions, but applicants are not asked the exact same questions because different interview forms may be used or interviewers may be allowed to choose among alternative questions and probe applicants to clarify responses. Level 4 is characterized by complete standardization: applicants are asked the exact same questions, and no deviation or follow-up questioning is permitted. The interview questions used in this study were Level 3-4 (moderate to full constraints on both questions and scoring) according to Huffcutt and Arthur's structure typology. Performance on the situational interviews was scored using a seven-point, Likert-type rating scale with 1 = clearly unacceptable, 4 = clearly acceptable, and 7 = clearly superior. The situational interviews were scored in real time by a panel of two assessors using scoring guidelines. Assessors independently rated each candidate on each dimension measured by the questions. Rating differences greater than one point were discussed by assessors in an effort to come within one point of agreement. Ratings of both assessors across all dimensions were aggregated to produce an overall score for the situational interview portion of the assessment center.

Administration of Exercises

Role-play exercises. The role-play exercise offered candidates an opportunity to actually demonstrate what they would do in a particular situation. Candidates assumed the role of a sergeant or lieutenant, depending on the rank for which they were competing, while another individual (i.e., an actor) assumed an interactive role (i.e., subordinate or citizen). Before the exercise began, candidates received general instructions from a panel member. Candidates were given background information describing a problem typical of those that may be encountered on the job by a police sergeant or lieutenant. Each candidate had 10 minutes to review the information before beginning the role-play. Candidates reviewed the background material to determine how they would handle the problem. After candidates had time to read the background information, determine the appropriate action to take, and form a plan of action, they were instructed to go to the door and invite the actor in. If they did not go to the door to get the actor after their preparation time expired, the actor knocked on the door. The candidate had to let the actor in and begin the role-play exercise at that time. Once the role-play exercise had begun, candidates were instructed to treat the actor as if he or she were actually the person described in the candidate background information for the role-play exercise. The actor gave standard responses to the candidates' actions to further ensure fairness to all candidates. Candidates completed a total of two role-play exercises. The role-play interactions were not timed; however, most role-play exercises lasted between 5 and 15 minutes.

Writing sample exercise. The writing sample exercise was designed to measure written communication skills.
This exercise required candidates to read and review job-related information, determine the appropriate action, and formulate a response in writing. The instructions required the candidate to produce a writing sample that a sergeant or lieutenant might be required to write. The actual writing sample task involved something a sergeant or lieutenant could be expected to write, such as a recommendation for disciplinary action, a letter, a progress report, a plan of action, or a follow-up report. Each candidate had the same amount of time in which to write an appropriate response. All information and supplies (i.e., pencils and paper) candidates needed to complete the exercise were available at the test site. Dictionaries were also available for use.

Situational interview exercise. Candidates were provided with a brief overview of the situational interview process and then taken to a room with an interviewer. Candidates followed along as the interviewer read the questions aloud. Candidates were given eight minutes to prepare a response to a question and an additional eight minutes to provide a verbal response to the question. At the conclusion of the response to the first question, the second question was read aloud, and the process continued until all four questions were asked and answers provided.

The administration sequence of all assessment center exercises is presented in Figure 1. Note that the sergeant promotion process involved a written knowledge test. It was not included in the investigation because no such test was used in the lieutenant promotion process, making it impossible to compare reactions between the two ranks.

[Figure 1. Schedule of Data Collection for Police Sergeant and Lieutenant. The figure depicts the data collection sequence for each rank. Sergeant: written test and structured interviews administered to all candidates (n = 203); collection of job-relatedness, information known, opportunity to perform, consistency, and test-taking motivation data for the written test and situational interview; hurdle scoring criteria used to select candidates to advance; role-play and writing sample exercises administered to remaining candidates; collection of the same reaction data for the role-play and writing sample. Lieutenant: structured interviews, role-play exercises, and writing sample administered to all candidates; collection of reaction data for the situational interview, role-play, and writing sample.]

Assessor Training

Assessors initially participated in a half-day (approximately four hours) training program. This program was designed to orient and familiarize assessors with the exercises, response standards, scoring guidelines, and the evaluation process. Assessors were provided with an orientation and training manual that was reviewed during the training session. The orientation portion of the program began with a general description of the selection procedure and the development of the exercises. This was followed by a thorough discussion of simulation exercises: their purpose, their advantages over more traditional selection procedures, their form, and their development.
After the general overview, assessors were presented with a brief overview of the host organization and a description of the job, including example duties and KSAs. Once assessors gained a thorough understanding of the position, a thorough overview of the entire selection process was presented. Following this overview, specifics of each selection component were reviewed. Assessors received extensive training and practice in observing, recording, and evaluating interview and role-play exercise responses. Assessors were trained to limit their observation to overt behavioral responses and to accurately describe those behaviors while avoiding subjective impressions. Assessors were trained to record behavior as it occurred. They were also given the opportunity to practice behavioral recording through a series of exercises. Next, the performance dimensions and examples of responses within those dimensions were reviewed. Assessors were then trained to evaluate the recorded behavior based on the relevant performance dimensions in accordance with the response standards. The dimension rating form and scale anchors, as well as the interview questions and response standards, were reviewed. Assessors were also made aware of potential rating errors (i.e., halo, comparison, logical, central tendency, rater bias, and telegraphing) and ways to avoid such errors. In the final phase of training, assessors proceeded through a series of practice ratings. In total, assessor training lasted approximately eight hours.

Data Analyses

Hypotheses 1, 2, 3, 4, and 5 concern how candidates will perceive the different exercises according to the SPJS or test-taking motivation measures. The basic premise of these hypotheses is that candidates will perceive certain exercises differently (i.e., negatively or positively) due to the characteristics of the exercises. For example, Hypothesis 1 proposes that candidates will perceive the role-play exercise and writing sample as being more job-related than the situational interview. These hypotheses were tested using a repeated measures multivariate analysis of variance (MANOVA) that assessed the statistical significance of the differences in the SPJS subscales by exercise type. Repeated measures one-way ANOVAs were used to identify the dimensions on which reactions to the exercise types differed. Scheffé multiple comparison tests were used to determine exactly where the differences existed.

Hypotheses 6, 7, and 8 concern the direction of the relationship between proposed antecedents of selection procedural justice perceptions and/or exercise performance. For example, Hypothesis 8 proposed that assessment center exercise type and experience would interact to affect selection procedural justice perceptions. These hypotheses were tested using hierarchical moderated multiple regression analysis so that the order in which the predictor variables were introduced into the equation could be specified. Hierarchical regression was preferable because it allows spurious relationships to be removed and incremental validity to be determined (Cohen & Cohen, 1983). A statistically significant (p < .05) increase in R² (ΔR²) at the step involving a hypothesized relationship indicates that the variable entered in that step explains significant incremental variance in the criterion, above and beyond that accounted for by the variables entered in previous steps (Cohen & Cohen, 1983).
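To make the form of these analyses concrete, the following is a minimal sketch of a two-step hierarchical moderated regression with an incremental-variance (ΔR²) test, written in Python with pandas and statsmodels. It is an illustration only, not the analysis code used in the study; the file name and variable names (reactions.csv, job_related, motivation, performance) are hypothetical stand-ins for the study's measures.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.anova import anova_lm

# Hypothetical data set: one row per candidate, with a job-relatedness
# perception score, a test-taking motivation score, and an exercise
# performance score.
df = pd.read_csv("reactions.csv")

# Mean-center the predictors before forming the cross-product term so that
# the interaction coefficient is interpretable (Cohen & Cohen, 1983).
for col in ("job_related", "motivation"):
    df[col + "_c"] = df[col] - df[col].mean()
df["interaction"] = df["job_related_c"] * df["motivation_c"]

# Step 1: main effects only.
step1 = sm.OLS(df["performance"],
               sm.add_constant(df[["job_related_c", "motivation_c"]])).fit()

# Step 2: main effects plus the cross-product (moderator) term.
step2 = sm.OLS(df["performance"],
               sm.add_constant(df[["job_related_c", "motivation_c",
                                   "interaction"]])).fit()

# Delta R-squared and its F test; a significant increment indicates that the
# interaction explains variance beyond the main effects (i.e., moderation).
delta_r2 = step2.rsquared - step1.rsquared
print(f"Step 1 R2 = {step1.rsquared:.3f}; Step 2 R2 = {step2.rsquared:.3f}; "
      f"delta R2 = {delta_r2:.3f}")
print(anova_lm(step1, step2))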
Hypotheses 9 and 10 concerned the relationship between selection procedural justice perceptions and evaluative history and level of target position, respectively. For example, Hypothesis 9 proposed that evaluative history will be positively related to candidate perceptions of the test. To test these hypotheses, a multiple regression was conducted. Hypotheses 11a, 11b, and 11c involve the interaction of exercise type with several other variables in the study. Specifically, Hypothesis 11a, 11b, and 11c proposed that race would moderate the relationship between assessment center exercise type and candidate reactions in such a way that African-American candidates would perceive selection devices with low written content as being more job-related than White candidates would, perceive selection devices with low written content as providing greater opportunity to perform than White candidates would, and report lower test-taking motivation for selection devices with high written content than White candidates. These hypotheses were tested with a 2 x 3 MANOVA. 59 CHAPTER 3 RESULTS This study addressed numerous issues pertaining to candidate reactions to assessment center exercises. First, candidate reactions to the different types of assessment center exercises were examined to determine if candidates viewed certain types of assessment center exercises differently due to their characteristics. Second, several variables were examined to determine their relationship with candidate reactions to testing and test performance. Third, the interactions between assessment center exercise type, organizational tenure, race, and tenure were investigated to determine the impact of race on candidate reactions to different types of assessment center exercises. Descriptive statistics, intercorrelations, and alpha reliability estimates for all study variables are presented in Table 2. As shown in the table, coefficient alphas for the measures ranged from .69 to .88. Relationship between Type of Assessment Center Exercise and Candidate Reactions Hypotheses 1, 2, 3, 4, and 5 concerned whether candidates differed in their reactions to the situational interview, role play, and writing sample assessment center exercises. In order to test these hypotheses, a one-way repeated measures multivariate analysis of variance (MANOVA) was conducted with the three assessment center exercises as the independent variable and the five candidate reactions as the dependent Table 2. Means, Standard Deviations, Coefficient Alphas, and Intercorrelations among Study Variables Variable M SD N 1 2 3 4 5 6 7 8 9 1. Race (1=White; 2=African-American) 1.47 .73 174 - 2. Gender (1=male; 2=female) 1.09 .45 174 .06 - 3. Organizational tenure (years) 4.51 1.86 174 -.04 -.10 - 4. Level of target position (1=sergeant; 2=lieutenant) 1.47 .50 174 -.07 .06 .59** - 5. Performance-interview 81.56 7.94 174 .00 -.03 -.18 -.19* - 6. Performance- role-play 79.95 10.01 174 .05 -.07 .05 .01 .51** - 7. Performance- writing sample 80.07 10.07 174 -.20** .01 -.08 -.02 .13 .11 - 8. Attitude toward test-interview 3.54 .85 174 .06 -.08 -.08 -.02 .15 .10 .03 (.77) 9. Opportunity to perform-interview 3.57 .94 174 .08 .03 -.32 .03 .25** .14 .11 .76** (.87) 10. Consistency of administration-interview 4.26 .66 174 .15* .06 .09 .13 -.04 .06 -.03 .31** .35** 11. Information known-interview 4.17 .54 174 -.05 .04 .05 .15 -.04 .09 -.09 .33** .35** 12. Job-relatedness-interview 3.24 .86 173 .11 -.04 -.17* -.05 .14 .03 -.02 .75** .69** 13. 
Test-taking motivation-interview 4.49 .54 174 .15* -.04 -.03 -.08 .27** .14 .12 .32** .31** 14. Evaluative history-interview 3.44 .72 174 .14 -.16* -.06 -.15 .01 -.15 -.04 -.16* .12 15. Attitude toward test- role-play 3.47 .79 173 .02 -.07 -.11 -.02 .14 .06 .02 .66* .52** 16. Opportunity to perform- role-play 3.58 .83 173 .01 -.01 -.10 -.06 .20* .13 .08 .53** .64** 17. Consistency of administration- role-play 4.27 .59 173 .11 .07 .01 .06 .09 .13 .00 .26** .36** 18. Information known- role-play 4.11 .59 173 -.02 .05 .12 .22** -.07 .08 .01 .25** .29** 19. Job-relatedness- role-play 3.23 .93 173 .11 -.05 -.04 .04 .09 -.06 -.02 .43** .42** 20. Test-taking motivation- role-play 4.44 .58 173 .19* -.02 -.05 -.08 .23** .21** .04 .26** .25** 21. Evaluative history-role-play 3.18 .57 173 .04 -.08 .14 .23** -.16 -.18* -.22** -.22** -.24** 22. Attitude toward test writing sample 3.41 .76 174 .11 .01 .01 .10 .09 .01 -.11 .54** .44** 23. Opportunity to perform- writing sample 3.50 .83 174 .03 .06 -.01 .04 .21** .05 -.01 .49** .61** 24. Consistency of administration- writing sample 4.32 .54 174 .09 .08 .04 .17 .10 .10 -.02 .26** .29** 25. Job-relatedness- writing sample 4.07 .65 174 .01 .02 .15 .20** .02 .01 -.03 .17* .28** 26. Information known- writing sample 3.18 .74 174 .12 .07 -.13 -.02 .11 .01 -.08 .48** .43** 27. Test-taking motivation writing sample 4.42 .58 174 .18* .01 -.02 -.03 .16* .12 .04 .19* .21** 28. Evaluative history- writing sample 3.73 .68 174 .20** .00 -.39** -.59** .08 -.11 -.09 .07 .06 60 Table 2 continued. Means, Standard Deviations, Coefficient Alphas, and Intercorrelations among Study Variables Variable 10 11 12 13 14 15 16 17 18 19 20 1. Race (1=White; 2=African-American) 2. Gender (1=male; 2=female) 3. Organizational tenure (years) 4. Level of target position (1=sergeant; 2=lieutenant) 5. Performance-interview 6. Performance- role-play 7. Performance- writing sample 8. Attitude toward test-interviw 9. Opportunity to perform-interview 10. Consistency of administration-interview (.75) 11. Information known-interview .47** (.69) 12. Job-relatedness-interview .32** .34** (.82) 13. Test-taking motivation-interview .24** .15* .17* (.72) 14. Evaluative history-interview -.00 -.16* -.12 -.07 (.69) 15. Attitude toward test- role-play .26** .22** .53** .19* .08 (.78) 16. Opportunity to perform- role-play .18* .23** .54** .15* -.05 .69** (.88) 17. Consistency of administration- role-play .57** .24** .29** .16* .10 .41** .33** (.69) 18. Information known- role-play .37** .59** .21** .09 .07 .33** .37** .37** (.86) 19. Job-relatedness- role-play .19* .17* .58** .09 .11 .60** .54** .27** .23** (.79) 20. Test-taking motivation- role-play .23** .14 .15 .72** .08 .25** .24** .25** .17* .17* (.76) 21. Evaluative history-role-play .03 -.06 -.14 -.25** .28** -.21** -.20** -.11 -.04 -.01 -.12 22. Attitude toward test writing sample .18* -.18* .40** .20** .07 .67** .55** .14 .29** .45** .19* 23. Opportunity to perform- writing sample .13 .22** .45** .18* .00 .60** .74** .26** .35** .46** .16* 24. Consistency of administration- writing sample .60** .26** .30** .17* .13 .38** .27** .74** .40** .29** .24** 25. Information known- writing sample .26** .38** .16* .14 .07 .25** .27** .31** .71** .22** .11 26. Job-relatedness- writing sample .22** .13 .59** .07 .10 .61** .52** .27** .29** .65** .03 27. Test-taking motivation writing sample .21** .12 .11 .64** .10 .23** .21** .28** .14 .17* .83** 28. Evaluative history- writing sample .06 -.07 .11 -.02 .42** .09 .11 .11 -.01 .12 .07 61 Table 2 continued. 
Table 2 (continued). Means, Standard Deviations, Coefficient Alphas, and Intercorrelations among Study Variables

Variable 21 22 23 24 25 26 27 28
1. Race (1 = White; 2 = African-American)
2. Gender (1 = male; 2 = female)
3. Organizational tenure (years)
4. Level of target position (1 = sergeant; 2 = lieutenant)
5. Performance-interview
6. Performance-role-play
7. Performance-writing sample
8. Attitude toward test-interview
9. Opportunity to perform-interview
10. Consistency of administration-interview
11. Information known-interview
12. Job-relatedness-interview
13. Test-taking motivation-interview
14. Evaluative history-interview
15. Attitude toward test-role-play
16. Opportunity to perform-role-play
17. Consistency of administration-role-play
18. Information known-role-play
19. Job-relatedness-role-play
20. Test-taking motivation-role-play
21. Evaluative history-role-play (.70)
22. Attitude toward test-writing sample -.04 (.77)
23. Opportunity to perform-writing sample -.23** .68** (.84)
24. Consistency of administration-writing sample -.07 .31** .31 (.71)
25. Information known-writing sample .00 .42** .44** .42** (.73)
26. Job-relatedness-writing sample -.06 .69** .62** .37** .39** (.80)
27. Test-taking motivation-writing sample -.13 .25** .21** .31** .22** .05 (.77)
28. Evaluative history-writing sample .12 .10 .10 .14 .07 .18 .10 (.71)
Note. Coefficients in parentheses on the diagonal are coefficient alphas for the scales; all other coefficients are intercorrelations among the variables.
*p < .05. **p < .01. ***p < .001.

Results of the repeated measures MANOVA indicated a significant difference among the types of exercises for candidate reactions, Λ(10, 680) = .95, p < .05, η² = .03. Next, a series of one-way repeated measures analyses of variance (ANOVAs) was computed to identify the exercise differences for each of the five candidate reactions. Table 3 summarizes the results of these analyses and reports the means and standard deviations on each of the dependent variables for each of the assessment center exercises.

Hypothesis 1 proposed that candidates would perceive the role-play and writing sample exercises as being more job-related than the situational interview. There were no significant differences in the job-relatedness perceptions of the three assessment center exercises, F(2, 344) = .73, p > .05, η² = .00. As a result, Hypothesis 1 was not supported. Hypothesis 2 stated that the situational interview and role-play exercises would be perceived as providing candidates with greater opportunity to perform than the writing sample. Again, there were no significant differences in the opportunity to perform perceptions among the three assessment center exercises, F(2, 344) = 1.46, p > .05, η² = .01. Therefore, Hypothesis 2 was not supported. Hypothesis 3 proposed that the writing sample would be perceived as more consistently administered than the situational interview and role-play exercises. The univariate analysis revealed no significant differences in the consistency of administration perceptions among the three assessment center exercises, F(2, 344) = 1.34, p > .05, η² = .01. As a result, Hypothesis 3 was not supported.
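All analyses in this study were run in SPSS 13.0; purely for illustration, the follow-up univariate tests described above can be sketched in a few lines of Python. The long-format data file and its column names (candidate, exercise, job_relatedness) are hypothetical stand-ins for the study's variables, not the actual data set.

```python
# Minimal sketch of a one-way repeated measures ANOVA with exercise type as
# the within-subject factor, analogous to the follow-up tests reported above.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long format: one row per candidate x exercise combination,
# where exercise is "interview", "role_play", or "writing_sample".
df = pd.read_csv("reactions.csv")

result = AnovaRM(
    data=df,
    depvar="job_relatedness",   # repeat for each of the five reaction scales
    subject="candidate",
    within=["exercise"],
).fit()
print(result)  # F test for the within-subject effect of exercise type
```

Repeating the call for each of the five reaction scales would reproduce the pattern of univariate F tests summarized in Table 3; the Scheffé comparisons reported below would require a separate post hoc routine.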
Table 3. Differences in Promotion Candidates' Reactions to Assessment Center Exercises

Situational interview Role-play Writing sample
Candidate reactions M SD M SD M SD F a η²
Job-relatedness 3.24 .86 3.23 .93 3.18 .74 .73 .00
Opportunity to perform 3.57 .94 3.58 .83 3.50 .83 1.46 .01
Consistency of administration 4.27 .66 4.27 .59 4.32 .54 1.34 .01
Information known 4.17a .55 4.11a,b .59 4.07b .65 3.13* .02
Test-taking motivation 4.50a .54 4.50a .54 4.44a,b .58 2.58† .02
Note. N = 173. Repeated measures MANOVA Λ(10, 680) = .95, p < .05, η² = .03. The higher the mean score, the more positive the candidate reactions. Means that do not share a common subscript differ at p < .05.
a One-way repeated measures analysis of variance.
† p < .10. *p < .05.

Hypothesis 4 predicted that the situational interview and role-play exercises would be perceived as providing more information about the assessment process and its content than the writing sample. The repeated measures ANOVA revealed differences in the perceptions of information known among the three assessment center exercises, F(2, 344) = 3.13, p < .05, η² = .02. Having established that there was a significant overall difference between the assessment center exercises for information known, Scheffé multiple comparison tests were conducted to identify which particular exercises were judged to be significantly different. Results of the multiple comparison test indicated that the situational interview (M = 4.17, SD = .55) was perceived as providing more information than the writing sample (M = 4.07, SD = .65), but there was no difference (p > .05) in the perceptions of information known between the role-play (M = 4.11, SD = .59) and the writing sample (M = 4.07, SD = .65) or between the role-play (M = 4.11, SD = .59) and the situational interview (M = 4.17, SD = .55). As a result, Hypothesis 4 was not supported.

Hypothesis 5 proposed that the test-taking motivation for the role-play exercise and the writing sample would be greater than that for the situational interview. The repeated measures ANOVA revealed only marginally significant differences in test-taking motivation for the three assessment center exercises, F(2, 344) = 2.58, p < .10, η² = .02. Results of Scheffé multiple comparison tests failed to indicate any significant differences in test-taking motivation among the three assessment center exercises. Thus, Hypothesis 5 was not supported.

Relationship Among Test-taking Motivation, Exercise Performance, and Job-relatedness

Hypothesis 6 stated that perceived job-relatedness would moderate the relationship between test-taking motivation and test performance in such a way that candidates would have greater motivation to perform well on highly job-related tests, resulting in higher test performance on such tests. Hypothesis 6 was tested using hierarchical regression. In the first step of the hierarchical regression, job-relatedness and test-taking motivation were entered. In the second step, the test-taking motivation × job-relatedness cross-product term was entered. The results are presented in Table 4.

For the situational interview, the set of variables entered in Step 1 was significant, R² = .11, F(3, 163) = 6.50, p < .001. The addition of the interaction term in Step 2 did not explain a significant amount of variance in test performance beyond the main effects, ΔR² = .00, p > .05. For the role-play exercise, Step 1 was also significant, R² = .06, F(3, 161) = 3.01, p < .05. The addition of the interaction term in Step 2 did not explain a significant amount of variance in test performance beyond the main effects, ΔR² = .01, p > .05. For the writing sample exercise, Step 1 was not significant, R² = .01, F(3, 163) = .70, p > .05. The addition of the interaction term in Step 2 did not explain a significant amount of variance in test performance beyond the main effects, ΔR² = .01, p > .05.

In summary, the hierarchical regression failed to indicate that job-relatedness moderated the relationship between motivation and performance for any type of exercise. As a result, Hypothesis 6 was not supported. However, the standardized betas for test-taking motivation were significant for the situational interview (β = .24, p < .01) and role-play (β = .22, p < .01) exercises. Test-taking motivation was positively associated with performance on both the situational interview and the role-play exercises.

Table 4. Hierarchical Moderated Regression Results for Job-relatedness and Test-taking Motivation Predicting Candidate Performance for Three Assessment Center Exercises

Situational interview Role-play Writing sample
Candidate reactions ΔR² β ΔR² β ΔR² β
Step 1:
Job-relatedness .10 -.10 -.08
Test-taking motivation .24** .22** .04
ΔR² after Step 1 .11*** .05 .01
Step 2:
Test-taking motivation × job-relatedness -.28 -1.08 .96
ΔR² after Step 2 .00 .01 .01
Overall R² .11** .06* .02
Adjusted R² .09 .04 -.01
Note. N = 173. *p < .05. ** p < .01. *** p < .001.
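As an illustration only, the two-step procedure used to test Hypothesis 6 can be sketched as follows; the data frame and the column names (performance, job_relatedness, motivation) are hypothetical, and the study itself used SPSS rather than Python.

```python
# Sketch of the hierarchical moderated regression: Step 1 enters the main
# effects, Step 2 adds the cross-product term, and the increment in variance
# explained by the interaction is tested.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("interview_scores.csv")  # hypothetical wide-format file

step1 = smf.ols("performance ~ job_relatedness + motivation", data=df).fit()
step2 = smf.ols("performance ~ job_relatedness * motivation", data=df).fit()

delta_r2 = step2.rsquared - step1.rsquared          # change in R-squared
f_stat, p_value, _ = step2.compare_f_test(step1)    # F test of the increment
print(f"Delta R2 = {delta_r2:.3f}, F = {f_stat:.2f}, p = {p_value:.3f}")
```

The Hypothesis 7 analysis described next follows the same pattern, with the four procedural justice perceptions in Step 1, attitude toward testing in Step 2, and the four cross-product terms in Step 3.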
Relationship Among Attitude Towards Testing, Selection Procedural Justice Perceptions, and Exercise Performance

Hypothesis 7 stated that attitude toward assessment center exercise type would moderate the relationship between selection procedural justice perceptions and exercise performance such that more positive test-taking attitudes would be associated with more positive procedural justice perceptions and higher test performance. Hypothesis 7 was tested using a hierarchical regression procedure, and the results are presented in Table 5. In Step 1, job-relatedness, consistency of administration, information known, and opportunity to perform were entered. Then, attitude toward testing was entered in Step 2. Finally, the interaction terms attitude toward testing × opportunity to perform, attitude toward testing × consistency of administration, attitude toward testing × job-relatedness, and attitude toward testing × information known were entered in Step 3.

For the situational interview, the selection procedural justice perception variables entered in Step 1 were associated with performance in the situational interview, R² = .12, F(5, 161) = 4.57, p < .001. The addition of attitude towards testing in Step 2 did not explain a significant amount of variance in performance in the situational interview beyond the main effects (ΔR² = .00, p > .05). In Step 3, the addition of the interaction terms did not explain a significant amount of variance in performance in the situational interview beyond the main effects either (ΔR² = .02, p > .05).

For the role-play exercises, the variables entered in Step 1 were marginally significant, R² = .06, F(5, 159) = 1.86, p < .10. The addition of attitude towards testing in Step 2 did not explain a significant amount of variance in performance in the role-play exercises beyond the main effects (ΔR² = .00, p > .05). In Step 3, the addition of the interaction terms did not explain a significant amount of variance in performance in the role-play exercises beyond the main effects either (ΔR² = .02, p > .05).
For the writing sample exercise, the first equation predicting exercise performance was not significant, R² = .01, F(5, 161) < 1, p > .05. The addition of attitude towards testing in Step 2 did not explain a significant amount of variance in performance in the writing sample beyond the main effects (ΔR² = .01, p > .05). Moreover, the addition of the interaction terms in Step 3 did not explain a significant amount of variance in performance in the writing sample beyond the main effects either (ΔR² = .03, p > .05).

The results did not support the proposition that attitude toward assessment center exercise type moderates the relationship between selection procedural justice perceptions and exercise performance. Therefore, Hypothesis 7 was not supported. However, the beta weights for job-relatedness on the role-play exercises (β = -.20, p < .05) and for opportunity to perform on the situational interview (β = .37, p < .001) and role-play exercises (β = .20, p < .05) were significant. In summary, job-relatedness was negatively associated with performance on the role-play exercises, while opportunity to perform was positively associated with performance on the situational interview and role-play exercises.

Table 5. Hierarchical Moderated Regression Results for Attitude Toward Testing and Candidates' Reactions to Assessment Center Exercises Predicting Candidate Performance for Three Assessment Center Exercises

Situational interview Role-play Writing sample
Candidate reactions ΔR² β ΔR² β ΔR² β
Step 1:
Job-relatedness -.06 -.20* -.12
Consistency of administration -.08 .12 .01
Information known -.08 .00 -.02
Opportunity to perform .37*** .20* .07
ΔR² after Step 1 .12*** .06† .01
Step 2:
Attitude toward testing -.07 -.00 -.16
ΔR² after Step 2 .00 .00 .01
Step 3:
Attitude toward testing × Opportunity to perform .55 .11 -.65
Attitude toward testing × Consistency of administration .32 .34 .11
Attitude toward testing × Job-relatedness .61 -.71 1.50
Attitude toward testing × Information known -1.04 -1.03 -1.01
ΔR² after Step 3 .02 .02 .03
Overall R² .14* .07 .05
Adjusted R² .09 .01 -.01
Note. N = 173. † p < .10. *p < .05. ** p < .01. *** p < .001.

Interaction of Assessment Center Exercise Type and Organizational Tenure

Hypothesis 8 stated that organizational tenure would moderate the relationship between assessment center exercise type and candidate reactions in such a way that candidates with more experience would perceive situational interviews and role-play exercises as higher in opportunity to perform, information known, job-relatedness, and consistency of administration than writing samples in comparison with less-experienced candidates. To test this hypothesis, moderated multiple regressions were conducted according to the procedures described by Aguinis (2003) using hierarchical regression. As recommended by Aguinis, dummy coding was used to indicate the type of assessment center exercise. Two variables, "interview" and "role-play," were created and coded to indicate the type of exercise. The writing sample was designated as the comparison group. Results of the moderated regressions are presented in Table 6.
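A hedged sketch of this dummy-coded analysis appears below; the long-format data frame and its column names are hypothetical, and treating the three exercise ratings as separate rows mirrors the pooled degrees of freedom reported for these tests.

```python
# Sketch of the Hypothesis 8 moderated regression: exercise type is dummy
# coded with the writing sample as the comparison group, and the tenure x
# exercise cross-products are added in a second step (cf. Aguinis, 2003).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("reactions.csv")  # columns: candidate, exercise, tenure, opportunity

# Treatment coding omits "writing_sample", yielding "interview" and
# "role_play" dummy variables, as described above.
base = "opportunity ~ tenure + C(exercise, Treatment(reference='writing_sample'))"
step1 = smf.ols(base, data=df).fit()
step2 = smf.ols(
    base + " + tenure:C(exercise, Treatment(reference='writing_sample'))",
    data=df,
).fit()

f_stat, p_value, _ = step2.compare_f_test(step1)  # test of the interaction block
print(f"Delta R2 = {step2.rsquared - step1.rsquared:.3f}, "
      f"F = {f_stat:.2f}, p = {p_value:.3f}")
```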
For opportunity to perform, the variables entered in Step 1 were not significant, R² = .01, F(4, 515) = .66, p > .05. The addition of the interaction terms in Step 2 did not explain a significant amount of variance in candidates' perceptions beyond the main effects, ΔR² = .00, F(2, 513) = .16, p > .05. For consistency of administration, the variables entered in Step 1 were significant, R² = .02, F(4, 515) = 2.72, p < .05. The addition of the interaction terms in Step 2 did not explain a significant amount of variance in candidates' perceptions beyond the main effects, ΔR² = .00, F(2, 513) = .51, p > .05. For information known, the variables entered in Step 1 were significant, R² = .03, F(4, 515) = 4.44, p < .01. The addition of the interaction terms in Step 2 did not explain a significant amount of variance in candidates' perceptions beyond the main effects, ΔR² = .00, F(2, 512) = .73, p > .05. For job-relatedness, the variables entered in Step 1 were not significant, R² = .01, F(4, 514) = 1.33, p > .05. The addition of the interaction terms in Step 2 did not explain a significant amount of variance in candidates' perceptions beyond the main effects, ΔR² = .01, F(2, 512) = 1.26, p > .05.

Table 6. Hierarchical Moderated Regression Results for Candidate Organizational Tenure and Assessment Center Exercise Type Predicting Candidates' Perceptions of Opportunity to Perform, Consistency of Administration, Information Known, and Job-relatedness

Opportunity to perform Consistency of administration Information known Job-relatedness
Variables ΔR² β ΔR² β ΔR² β ΔR² β
Step 1:
Organizational tenure -.06 .04 .04 -.11**
Exercise type-situational interview .04 -.05 .08 .03
Exercise type-role play .04 -.04 .03 .03
ΔR² after Step 1 .00 .02* .03** .01
Step 2:
Organizational tenure × exercise type-situational interview -.06 .09 -.16 -.10
Organizational tenure × exercise type-role play -.08 -.05 -.05 .11
ΔR² after Step 2 .00 .02 .00 .00
Overall R² .01 .02 .04* .01
Adjusted R² -.01 .01 .02 .00
Note. N = 173. *p < .05. ** p < .01.

In summary, these results indicated that the interaction of assessment center exercise type and organizational tenure was not significant. Therefore, Hypothesis 8 was not supported. However, the beta weight for organizational tenure was significant for job-relatedness (β = -.11, p < .01). Therefore, in this study, job-relatedness was found to be negatively related to organizational tenure.

Relationship Between Evaluative History and Level of Target Position and Reactions to Assessment Center Exercises

Hypothesis 9 proposed that past success on similar exercises would be positively related to the SPJS dimensions of perceived job-relatedness, opportunity to perform, attitude towards testing, information known, and test-taking motivation. Hypothesis 10 stated that level of target position would also be positively related to the same selection procedural justice perceptions in such a way that candidates for positions at a higher level would react more favorably (i.e., higher selection procedural justice perceptions) to the selection procedures. Specifically, it was expected that candidates for lieutenant would perceive all selection devices as higher in opportunity to perform, information known, job-relatedness, treatment, and consistency of administration than candidates for sergeant. To test these hypotheses, multiple regression analyses were conducted. Evaluative history and level of target position were entered as independent variables, and five separate multiple regression analyses were conducted for the five SPJS dependent variables. The results for these hypotheses are presented in Table 7.
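Because the same two predictors are regressed on each SPJS dimension, the five analyses can be sketched as a simple loop; the column names are hypothetical, and the variables would need to be standardized first to reproduce standardized betas like those reported in Table 7.

```python
# Sketch of the five parallel regressions for Hypotheses 9 and 10: one OLS
# model per SPJS dimension, with evaluative history and position level as
# predictors. Standardize all columns first if standardized betas are wanted.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("candidates.csv")  # hypothetical file, one row per candidate

outcomes = ["job_relatedness", "opportunity_to_perform",
            "attitude_toward_testing", "information_known",
            "test_taking_motivation"]
for dv in outcomes:
    fit = smf.ols(f"{dv} ~ evaluative_history + position_level", data=df).fit()
    print(dv, fit.params.round(2).to_dict(), f"R2 = {fit.rsquared:.2f}")
```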
Although Hypothesis 9 predicted that evaluative history would be positively related to all of the selection procedural justice dimensions examined, the standardized betas revealed that evaluative history was related only to perceptions of opportunity to perform (β = -.09, p < .05) and attitude towards testing (β = -.10, p < .05). Moreover, these beta weights were negative, which is the opposite of what was predicted. In other words, candidates who had performed better on similar exercises in the past had more negative attitudes toward testing and perceptions of opportunity to perform than did candidates who had performed poorly in the past. Therefore, Hypothesis 9 was not supported.

Concerning Hypothesis 10, results of the regression analysis indicated that level of target position was related to several candidate reactions. Although the hypothesis predicted that level of target position would be positively related to all of the selection procedural justice dimensions examined, the standardized betas revealed that level of target position was negatively related to opportunity to perform (β = -.02, p < .10) and test-taking motivation (β = -.20, p < .10) and positively related to information known (β = .28, p < .05). In other words, candidates for lieutenant had more negative perceptions of opportunity to perform, lower test-taking motivation, and more positive perceptions of information known than did candidates for sergeant. Thus, Hypothesis 10 was also not supported.

Table 7. Multiple Regression Results for Candidate Evaluative History and Level of Target Position Predicting Candidates' Perceptions of Job-relatedness, Opportunity to Perform, Attitude Toward Testing, Information Known, and Test-taking Motivation

Job-relatedness Opportunity to perform Attitude toward testing Information known Test-taking motivation
Candidate variables β β β β β
Evaluative history -.00 -.09* -.10* -.01 -.04
Position level a .01 -.02† .00 .28* -.20†
R² .00 .01 .01 .04*** .01
Adjusted R² -.01 .01 .00 .03 .00
Note. N = 173. a Position level was coded as 1 = sergeant and 2 = lieutenant. † p < .10. *p < .05. ** p < .01. *** p < .001.

Interaction Between Assessment Center Exercise Type and Race

Hypotheses 11a, 11b, and 11c proposed that race would moderate the relationship between assessment center exercise type and candidate reactions in such a way that African-American candidates would (a) perceive selection devices with low written content as being more job-related than White candidates would, (b) perceive selection devices with low written content as providing greater opportunity to perform than White candidates would, and (c) report lower test-taking motivation for selection devices with high written content than White candidates would. In order to test these hypotheses, a 2 (promotion candidate race) × 3 (selection procedure type) repeated measures MANOVA was conducted for the three dependent variables: perceived job-relatedness, opportunity to perform, and test-taking motivation. The independent variables were selection procedure type (situational interview, role-play, and writing sample) and promotion candidate race (African-American and White). Results of the MANOVA are presented in Table 8.

Results of the MANOVA indicated a main effect for race, Λ = .65, p < .001, η² = .35, but the main effects for type of selection procedure, Λ = .97, p > .05, η² = .01, and the interaction of race and type of selection procedure, Λ = .98, p > .05, η² = .01, were not significant. Because the interaction terms including race were not significant, Hypotheses 11a-11c were not supported. In order to explore the significant main effect for candidate race, separate one-way repeated measures analyses of variance were calculated for candidate race using each of the three dependent variables.
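One such follow-up, a mixed-design analysis with race as the between-subjects factor and exercise type as the within-subjects factor, might be sketched as follows; pingouin is simply one library that supports this design, and the data file and column names are hypothetical.

```python
# Sketch of a follow-up mixed ANOVA: race (between-subjects) x exercise type
# (within-subjects) for a single reaction measure, here job-relatedness.
import pandas as pd
import pingouin as pg

df = pd.read_csv("reactions.csv")  # columns: candidate, race, exercise, job_relatedness

aov = pg.mixed_anova(
    data=df,
    dv="job_relatedness",
    within="exercise",
    between="race",
    subject="candidate",
)
print(aov[["Source", "F", "p-unc", "np2"]])  # main effects and interaction
```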
In terms of promotion candidate race, there were differences in candidate perceptions of job-relatedness, F(1, 324) = 60.74, p < .001, η² = .20, opportunity to perform, F(1, 324) = 42.45, p < .001, η² = .12, and test-taking motivation, F(1, 324) = 105.38, p < .001, η² = .24. Examination of the means presented in Table 9 revealed that, in terms of perceived job-relatedness, African-American candidates (M = 3.46, SD = .89) perceived the selection measures as having greater job-relatedness than did Whites (M = 3.08, SD = .79). African-Americans also viewed the selection procedures as providing them with greater opportunity to perform (M = 3.74, SD = .83) than did Whites (M = 3.45, SD = .87). Finally, African-Americans (M = 4.64, SD = .51) reported a higher level of test-taking motivation than White candidates (M = 4.34, SD = .57) did.

The univariate analysis of variance also revealed that test-taking motivation differed significantly by assessment center exercise type, F(2, 324) = 3.06, p < .05. Scheffé multiple comparison tests were conducted to identify the particular exercises for which candidates reported greater test-taking motivation. Results of the multiple comparison tests failed to detect any significant differences in test-taking motivation among the three individual assessment center exercises.

Table 8. Analysis of Interaction Between Type of Assessment Center Exercise and Promotion Candidate Race for Job-relatedness, Opportunity to Perform, and Test-taking Motivation

ANOVAs for candidate reactions: Job-relatedness, Opportunity to perform, Test-taking motivation
Predictors MANOVA Wilks' Λ F η² F η² F η²
Exercise type .97 1.13 .01 2.08 .01 3.06* .02
Race .65*** 60.74*** .16 42.45*** .12 105.38*** .24
Exercise type × race .98 .43 .00 1.28 .01 1.30 .01
Note. N = 173. *p < .05. ** p < .01. *** p < .001.

Table 9. Means and Standard Deviations of White and African-American Candidates' Reactions to Assessment Center Exercises

Job-relatedness Opportunity to perform Test-taking motivation
Candidate race N M SD M SD M SD
White 104 3.08 .79 3.45 .87 4.34 .57
African-American 62 3.46 .89 3.74 .83 4.64 .51
Note. The higher the mean score, the more positive the candidate reactions.

CHAPTER 4

DISCUSSION

The central focus of the current study was to examine the reactions of actual job candidates to three common assessment center exercises (i.e., a situational interview, a writing sample, and role-play exercises) used to make real promotion decisions. In doing so, this study also examined the antecedents of applicant reactions to these assessment center exercises by examining the association of candidates' test-taking motivation, attitude towards testing, race, organizational tenure, level of target position, and evaluative history with both exercise performance and selection procedural justice perceptions.

This study indicated that different types of simulation exercises, for the most part, did not elicit different reactions from candidates. There were no significant differences in the candidates' perceptions of the three exercise types in terms of job-relatedness, consistency of administration, and opportunity to perform. However, candidates viewed the situational interview more positively in terms of information known compared to the writing sample. Additionally, this study revealed that level of target position was negatively associated with opportunity to perform and test-taking motivation, but positively associated with information known.
In other words, candidates competing for jobs higher in the organizational hierarchy (i.e., lieutenant vs. sergeant) viewed the assessment center exercises as lower in opportunity to perform and test-taking motivation, but higher in information known, when compared to candidates competing for lower-level jobs (i.e., sergeant candidates). This study also revealed that African-American and White candidates did not perceive the situational interview, role-play exercises, and writing sample differently in terms of job-relatedness, opportunity to perform, and test-taking motivation. However, African-American candidates reported higher perceptions of job-relatedness, opportunity to perform, and test-taking motivation in comparison with White candidates for all exercises.

Relationship between Assessment Center Exercise Type and Candidate Reactions

One of the most important goals of this study was to examine how candidates reacted to different assessment center exercises, specifically situational interviews, role-play exercises, and writing samples. As explained earlier, a common strategy employed by past researchers has been to treat candidate reactions as a global or unidimensional construct. In contrast, this study examined whether candidates viewed tests as fair in some ways and unfair in others by examining candidate reactions to each assessment center exercise type along numerous attitudinal dimensions. Based on the characteristics of the assessment center exercises, it was proposed that candidates would react differently to the three different assessment center exercises.

Hypothesis 1 proposed that role-play and writing sample exercises would be viewed as more job-related due to their high degree of physical and psychological fidelity compared with the rather low-fidelity situational interview. However, this proposition was not supported. There was no significant difference in perceptions of job-relatedness among the three exercise types. Hypothesis 2 proposed that situational interviews and role-play exercises would be perceived as providing greater opportunity to perform than the writing sample because of the somewhat free-flowing, candidate-driven nature of situational interviews and role-play exercises. However, this hypothesis was not supported. The situational interview, role-play exercise, and writing sample exercise were all viewed similarly in terms of perceptions of opportunity to perform.

Hypothesis 3 proposed that the writing sample exercise would be viewed as more consistently administered in comparison with the situational interview and the role-play exercise because it was mass administered. However, the analyses indicated that there was no difference in consistency of administration perceptions among assessment center exercise types. Therefore, Hypothesis 3 was not supported. Hypothesis 4 stated that the situational interview and writing sample would be perceived as providing greater information (i.e., sample exercises/questions, dimensions/KSAs measured, and preparation tips) about the selection process than the role-play exercise. The results indicated that the situational interview was perceived as providing more information than the writing sample, but there was no difference in the perceptions of information known between the role-play and the writing sample or between the role-play and the situational interview. As a result, Hypothesis 4 was not supported. This result is the opposite of what was hypothesized and is difficult to explain.
Perhaps the situational interview was viewed as most familiar by candidates because it most closely resembles the selection device most frequently used for employment selection, the unstructured interview (Gatewood & Feild, 2001), while the writing sample may have been the least familiar to candidates, as the role-play exercise is somewhat similar to the situational interview. In other words, perhaps the situational interview and writing sample represent opposite ends of the spectrum in terms of familiarity, which influenced candidates' perceptions of information known about the exercises. Familiarity with selection devices can influence candidate reactions. Brockner, Ackerman, and Fairchild (2001) suggested that candidates' familiarity with selection procedures can increase perceptions of legitimacy. For example, Steiner and Gilliland (1996) found that applicants prefer unstructured interviews to other procedures such as cognitive ability tests. While this may be due to other factors, the unstructured interview is familiar to almost all candidates and likely expected, which may partially account for positive candidate reactions to it (Truxillo, Steiner, & Gilliland, 2004). Similar explanations have been offered concerning the less negative reactions found towards drug testing (Mastrangelo, 1997).

Hypothesis 5 proposed that candidates would report greater test-taking motivation for the role-play exercise and writing sample than for the situational interview. Reported test-taking motivation for the role-play, writing sample, and situational interview was virtually the same. These results seem to indicate that the candidates did not differ significantly in test-taking motivation across different assessment center exercises. Therefore, Hypothesis 5 was not supported.

Taken together, the results of testing these hypotheses seem to indicate that the candidates did not perceive situational interviews, role-play exercises, and writing samples differently in terms of job-relatedness, consistency of administration, opportunity to perform, or test-taking motivation. These results are contrary to past research indicating that mean differences exist between perceptions of different types of tests (Kluger & Rothstein, 1993; Kravitz, Stinson, & Chavez, 1994; Rynes & Connerley, 1993). Ryan and Ployhart (2000) called for researchers to clarify what characteristics (e.g., method of assessing, transparency of assessed constructs, physical features, structure) give rise to these differences in perceptions. Perhaps these results are in part due to the fact that the exercises in this study were not different enough. Past research finding differences in reactions to different types of tests has examined perceptions of extremely disparate types of selection devices (e.g., cognitive ability tests vs. situational interviews, biographical questionnaires vs. cognitive ability tests). In this study, the assessment center exercises were different, but all exercises were simulations or performance tests, and all were the result of an extensive development process. Perhaps candidates do not make such fine distinctions between tests in forming perceptions and view all tests in the same "family" as being similar in terms of fairness.
Furthermore, research suggests that candidates do not react negatively to selection devices that are well developed, job-related, and used in selection processes in which the procedures are appropriately applied, decisions are explained, and candidates are treated respectfully and sensitively (Ryan & Tippins, 2004). Another possible reason for this result is the candidates' familiarity with the selection procedures. Candidates' experience or familiarity with selection procedures could have an important influence on perceptions of fairness (Gilliland, 1993). Furthermore, familiarity with selection processes (i.e., tests) can increase perceptions of their legitimacy (Brockner, Ackerman, & Fairchild, 2001). The police department involved in this study was a long-time client of the Center for Business, and most candidates had competed in numerous assessment centers developed and administered by the Center. Therefore, their familiarity with the assessment center exercise types and their legitimacy would have been very high, certainly higher than in most situations. Perhaps the absence of differences is less a function of the exercises and more a function of the reputation of, and candidates' familiarity with, the Center's work.

Relationship Between Test-taking Motivation, Exercise Performance, and Job-relatedness

Hypothesis 6 posited that perceived job-relatedness would moderate the relationship between test-taking motivation and test performance in such a way that candidates would have greater motivation to perform well on highly job-related tests, resulting in higher test performance on such tests. Analyses indicated that job-relatedness did not moderate the relationship between motivation and performance for any type of exercise. As a result, Hypothesis 6 was not supported. However, the results did reveal that test-taking motivation was positively associated with performance for both the situational interview and role-play exercises. While these results are somewhat consistent with research indicating that test-taking motivation is positively related to test performance (Arvey et al., 1990; Chan et al., 1997; Sanchez, Truxillo, & Bauer, 2000; Schmit & Ryan, 1992), it is puzzling that this relationship was not present for performance on the writing sample exercise. However, these results are not completely surprising. Although the test-taking motivation-test performance relationship has been found consistently in student samples (e.g., Chan & Schmitt, 1997; Chan et al., 1997; Schmit & Ryan, 1992), field studies such as the present study have produced mixed results. For example, Arvey et al. (1990) reported that the relationship between test-taking motivation and performance was significant for a work sample test but not for two written tests. Additionally, in a separate sample of managers, Arvey et al. failed to find a significant test-taking motivation-performance relationship. Future research should investigate the possibility that this relationship is stronger for different types of tests (i.e., written tests vs. performance tests, personality tests vs. cognitive ability tests) in field settings.
Relationship Between Attitude Towards Testing, Selection Procedural Justice Perceptions, and Exercise Performance Hypothesis 7 stated that attitude toward assessment center exercise type would moderate the relationship between selection procedural justice perceptions and test performance such that more positive test-taking attitudes will be associated with more positive procedural justice perceptions and higher test performance. The results of this study did not support the proposition that attitude toward assessment center exercise type moderated the relationship between selection procedural justice perceptions and test performance. While the moderating effects of attitude towards the assessment center exercises were not significant, opportunity to perform was positively related to performance in the situational interview and role play exercises. This is an important finding as no other study in the literature has established this specific relationship. Furthermore, this finding is important because while clear linkages between candidate reactions and performance on cognitive ability tests have been established, linkages 87 between candidate reactions and performance on devices with which candidates have less experience have not been established (Bell, Ryan, & Wiechmann, 2004). It seems commonsensical that the situational interview and role play exercises would be viewed as more free-flowing and less restrictive, thus providing candidates greater opportunity to demonstrate their competencies (Arvey & Sackett, 1993) and the ability to exert control in a selection situation (Schuler, 1993). Furthermore, exercises such as role-plays and situational interviews provide the opportunity to express oneself directly to the interviewer or assessor rather than indirectly through test questions (Gilliland, 1993). Similarly, it seems logical that opportunity to perform would be related to performance, as it would allow candidates more freedom to emphasize strengths and minimize weaknesses. It has been suggested that voice, which is similar to opportunity to perform, is related to effort or performance in team sport settings (Jordan, Gillientine, & Hunt, 2004). However, no study could be found that demonstrated a link between candidates? perceptions of opportunity to perform and subsequent performance in a selection context. One result more difficult to explain is the finding that job-relatedness was negatively related to performance in the role-play exercises. In other words, the more job relevant candidates viewed the role-play exercise, the worse candidates performed on the role-play exercise. Job-relatedness has been shown to be positively related to performance (Chan et al., 1997; Chan & Schmitt, 1997). In fact, Chan and Schmitt suggested that changing the method of testing to a format that is more concrete and realistic increases test performance. However, this was not the case in this study. Job- relatedness was not related to performance in the situational interview or the writing sample, while it was negatively related to performance in the role-play exercise. This is 88 confusing and warrants further investigation to determine exactly how and in what situation is job-relatedness positively associated with exercise performance. 
Interaction of Type of Assessment Center Exercise and Organizational Tenure

Hypothesis 8 stated that organizational tenure would moderate the relationship between assessment center exercise type and candidate reactions in such a way that candidates with more organizational tenure would perceive situational interviews and role-play exercises as higher in opportunity to perform, information known, job-relatedness, and consistency of administration than writing samples in comparison with less-experienced candidates. The results indicated that the interaction of assessment center exercise type and organizational tenure was not significant. However, the negative beta weight for organizational tenure was significant for job-relatedness. In other words, the more organizational tenure candidates had, the lower their perceptions of job-relatedness. This is counterintuitive and difficult to explain. One would think that the job-relatedness of exercises would become more evident as organizational tenure increases because of the amount of job knowledge that more experienced job candidates possess. However, these results do not support this proposition.

While this result is difficult to explain, written comments provided by candidates completing the survey offer some insight. On the questionnaire, candidates were allowed to write in additional comments concerning anything not covered by the survey items. Numerous candidates indicated that while they felt the selection process was reasonable and fair, it failed to recognize the on-the-job experience of candidates. Several cited examples of how someone might "have all the answers" on the selection procedures but fail to apply that knowledge on the job. For example, one candidate complained that the assessment center "measured book smarts but not street smarts that you get from experience." In their minds, this was a serious flaw in the selection procedures because it failed to address an important aspect of the job. Perhaps this attitude, in part, explains the negative relationship between organizational tenure and perceived job-relatedness of the exercises.

Relationship Among Evaluative History, Level of Target Position, and Reactions to Assessment Center Exercises

Hypothesis 9 proposed that past success on similar exercises would be positively related to the SPJS dimensions of perceived job-relatedness, information known, consistency of administration, opportunity to perform, test-taking motivation, and attitude towards testing. Even though Hypothesis 9 predicted that evaluative history would be positively related to all of these dimensions, the results revealed that evaluative history was related to perceptions of opportunity to perform and attitude towards testing only. Moreover, the relationships were negative, which is the opposite of what was predicted. In other words, the more successful candidates had been in past experiences with a particular exercise, the more negative their perceptions of opportunity to perform and attitude towards testing. This is puzzling and difficult to explain, as it is not consistent with prior research.

Hypothesis 10 stated that level of target position would also be positively related to the same selection procedural justice perceptions in such a way that candidates for positions at a higher level would react more favorably (i.e., higher selection procedural justice perceptions) to the selection procedures.
In other words, it was thought that candidates for lieutenant would perceive all selection devices as higher in opportunity to perform, information known, job-relatedness, treatment, and consistency of administration than candidates for sergeant. The results indicated that level of target position was related to several selection procedural justice perceptions. Even though the hypothesis predicted that level of target position would be positively related to all of the SPJS dimensions, level of target position was negatively related to opportunity to perform and test-taking motivation and positively related to information known. In essence, this result indicates that more experienced candidates felt that the assessment center exercises provided less opportunity to demonstrate their skills and competencies and reported lower levels of test-taking motivation, while perceiving the exercises as giving them more information about the assessment center process than did less experienced candidates.

One possible explanation for why candidates for higher-level jobs viewed the assessment center exercises negatively in terms of opportunity to perform and test-taking motivation is the difference in justice expectations as a function of experience with selection procedures. Research has demonstrated that candidates' previous experience with a selection procedure influences perceptions of fairness of that procedure (Kravitz, Stinson, & Chavez, 1994; Ryan, Greguras, & Ployhart, 1996). In most law enforcement agencies, a candidate must have held lower-level ranks in order to be eligible for promotion to lieutenant. In becoming eligible for promotion to lieutenant, a candidate would likely have completed numerous assessment centers to achieve lower-level ranks such as detective, sergeant, etc. Therefore, candidates for lieutenant would more than likely have had more experience with assessment centers than candidates for sergeant, which, in turn, could influence the justice expectations of candidates. Furthermore, it has been suggested that inexperienced candidates will be more tolerant of justice violations because they will not have strong expectations (Gilliland & Steiner, 2001).

Another potential explanation for why candidates for higher-level positions viewed the assessment center exercises negatively in terms of opportunity to perform and test-taking motivation is the relationship between tenure, negative affectivity/cynicism, and turnover. In this and most law enforcement agencies, the level of target position would be closely associated with tenure. Each rank within the department has a minimum requirement that candidates have held the previous rank for a specific period of time (e.g., 3 years as sergeant in order to be eligible for promotion to lieutenant). Therefore, candidates eligible for promotion to lieutenant would, on average, have more tenure than those eligible for promotion to sergeant. In this study, tenure was related to level of target position (r = .59, p < .01). Research has suggested that tenure and age may be related to negative affect, hostile beliefs, anger, and cynicism such that older or more tenured individuals may experience higher levels of negativity or cynicism (Barefoot, Beckham, Haney, Siegler, & Lipkus, 1993). Furthermore, research has suggested that high-tenure individuals are less likely to leave their jobs despite being dissatisfied (Duffy, Ganster, & Shaw, 1998).
Others have suggested that high-tenure employees may have acquired more side bets, sunk costs, or investments in the organization (Becker, 1960; Meyer & Allen, 1984). These individuals may feel trapped in the organization due to what is commonly known as escalation of commitment, which, in turn, causes negative, cynical, or jaded perceptions of the organization and its processes.

Interaction Between Assessment Center Exercise Type and Candidate Race

Hypotheses 11a, 11b, and 11c proposed that race would moderate the relationship between assessment center exercise type and candidate reactions in such a way that African-American candidates would perceive selection devices with low written content as being more job-related than White candidates would, perceive selection devices with low written content as providing greater opportunity to perform than White candidates would, and report lower test-taking motivation for selection devices with high written content than White candidates would. Because the interaction terms including race were not significant, no support was found for these hypotheses. However, the significant main effect for race warranted further investigation.

Follow-up analyses revealed that race was associated with job-relatedness, opportunity to perform, and test-taking motivation. These results are similar to the results of an earlier study of racial differences in reactions to pencil-and-paper tests and situational interviews, in which the reactions of African-American candidates were significantly more positive in comparison with White candidates (Becton, Feild, Giles, & Jones-Farmer, 2005). African-American candidates had more positive perceptions of job-relatedness, opportunity to perform, and test-taking motivation than White candidates did. While this result differs from what was predicted, previous studies providing empirical evidence of racial differences in perceptions of job-relatedness and test-taking motivation have produced mixed results. Schmit and Ryan (1997) and Smither et al. (1993), for example, reported no differences in test-taking motivation or face validity perceptions between African-Americans and Whites. On the other hand, when racial differences in reactions have been identified, the findings typically have indicated lower levels of face validity and test-taking motivation for African-Americans (Chan & Schmitt, 1997).

The finding that Whites gave lower ratings of job-relatedness, opportunity to perform, and test-taking motivation than African-Americans was therefore somewhat surprising. While it is possible that these unanticipated results arose because the study was conducted in a promotion situation, it is more likely that other aspects of the organizational context influenced the results of this study. Racial differences in real-world settings are highly influenced by context (Ryan & Ployhart, 2000). Research in organizations with a history of discriminatory practices and/or the presence of a strong affirmative action program has indicated that these contextual factors influence perceptions of fairness (Ryan, Ployhart, Greguras, & Schmit, 1997; Ryan, Sacco, McFarland, & Kriska, 2000; Schmit & Ryan, 1997; Truxillo & Bauer, 1999). For example, African-Americans have more positive views than Whites do about the fairness of testing in organizational settings where there are strong affirmative action programs and minorities in highly visible leadership positions (Ryan, Ployhart, Greguras, & Schmit, 1997; Schmit & Ryan, 1997).
The present study involved promotion candidates employed in an urban police department that is racially diverse, with over half (54%) of the candidates being non-White. Additionally, the department's leadership includes numerous minorities, especially in the upper levels of management. For example, the current police chief and the previous police chief are African-American. Furthermore, since selection and promotion procedures are often associated with an organization's human resources department, perceived fairness of procedures should be related to perceptions of the human resources department (Truxillo, Steiner, & Gilliland, 2004). The human resources department for the county in which this study occurred was composed mostly of African-American employees, including the human resources director. Perhaps these factors played an important role in the nature of the findings (i.e., African-Americans having more positive reactions than Whites did).

Implications for Research and Practice

This study extended the literature in the area of reactions to assessment in several ways. First, most of the previous studies were conducted in lab settings using either undergraduate students or employees in simulated employment settings. The stakes are certainly higher for actual job candidates, and consequently one might expect the reactions and motivation of actual job candidates to differ from those in lab studies (Arvey et al., 1991). Thus, it is unclear if the findings of much existing research generalize to actual selection contexts. This study was conducted using real job candidates in an actual selection context.

Second, most of the prior field research was conducted in an entry-level context (i.e., applicants attempting to gain entry into the organization). While test reactions of entry-level applicants are certainly important, the reactions of promotion candidates are perhaps even more critical, yet their reactions to various assessment devices have been largely neglected in the literature. Understanding candidate reactions in promotional settings is important for two reasons: (a) the potential negative consequences in the promotional context (e.g., morale, job performance) can be more acutely felt by the organization than those in the entry-level context (e.g., public relations, re-application), and (b) reactions of incumbents are felt throughout the organization (Truxillo & Bauer, 1999). While findings based on undergraduate student reactions may be a good proxy for entry-level applicant reactions, these findings would appear to be less generalizable to promotion situations. Since promotion procedures deal with incumbents who are already part of the organization, it is likely that different mechanisms act to form reactions (Ryan & Ployhart, 2000). Reactions of entry-level and incumbent candidates may differ because individuals within an organization likely possess different information about the organization and selection procedures (i.e., entry-level applicants have relatively little information about the job, whereas incumbent applicants have intimate job knowledge; Ryan & Ployhart, 2000). This study was conducted in a promotion context and provided needed attention to the possibility that perceptions of entry-level and promotion candidates might differ.

Results of this study have several important implications for research and practice.
First, because candidate reactions to the three exercise types did not differ for the most part, these results imply that candidate reactions may not be a function of the components used in a selection procedure. It is possible that other factors that are constant across exercise type (e.g., treatment by administrative staff [Bies & Moag, 1986; Iles & Robertson, 1989] and honesty [Bies & Moag, 1986]) play a more important role in determining candidates' reactions to testing. Candidates' reactions to a selection process are a result of more than simply the instruments per se, and the process elicits reactions that are much more complex than those captured with the measures used in most of the existing research (Chan & Schmitt, 2004). Future research on these dimensions of selection procedural justice perceptions in actual selection situations is needed.

Second, the finding that candidates rated the writing sample lower than the situational interview in regard to information known has implications for practice. Given the significant impact that candidate reactions have on individuals' attitudes and behaviors and on organizational outcomes, organizations should attempt to actively manage candidates' expectations, reactions, and perceptions (Bell, Ryan, & Wiechmann, 2004). Perhaps organizations can better influence or manage candidate reactions to assessment by providing more information about the different components in the selection process, especially those with which candidates likely have the least exposure (i.e., the writing sample). Some support for this assumption is found in the literature. Stone and Kotch (1989) found that attitudes toward drug-testing programs were influenced by the type and amount of a priori information provided.

Third, the finding that candidates' test-taking motivation did not differ across different assessment center exercises seems to indicate that test-taking motivation is a more static construct that is difficult to change or influence. Additionally, it was shown that the test-taking motivation of African-Americans and Whites did not differ as a function of exercise type. This is important because many researchers have suggested that differences in test-taking motivation may be the reason for performance differences between African-Americans and Whites on standardized tests, and that this performance difference can be reduced if the test-taking motivation of African-Americans is improved (McKay & Doverspike, 2001). One of the more popular approaches to improving test-taking motivation is to use different testing formats that are thought to stress the values of African-American culture (e.g., oral and performance tests). These results suggest that such an approach may be ineffective, as there was no difference in test-taking motivation regardless of whether the exercise was verbal or written in nature. However, this study involved promotion candidates, and this approach may be more effective with entry-level candidates.

Fourth, the finding that level of target position was negatively related to opportunity to perform and test-taking motivation and positively related to information known has important implications. Organizations may need to approach managing candidate reactions to testing differently depending on the level of job for which candidates are competing. Furthermore, more and different types of information may need to be provided to candidates at different levels within the organization regardless of the exercise type.
Future research may also need to address how the influence of justice expectations and perceptions varies over time. While some evidence suggests that expectations are resilient, it is probable that candidates test and revise their justice expectations with experience (Bell et al., 2004).

Fifth, the results indicating that the reactions of African-American candidates were more positive than those of White candidates highlight an important consideration. Perhaps the racial differences in candidates' reactions to testing found in previous research (e.g., Chan et al., 1997; Chan & Schmitt, 1997) can be attributed more to the context in which the selection devices were administered than to the actual devices themselves. Therefore, it might be prudent for organizations to invest as much time and money in factors that influence the organizational context as in the development or purchase of selection devices.

Limitations of Present Study and Directions for Future Research

Results of this study constitute several important findings. However, as with any research, there are potential limitations to this study. First, while some important relationships were identified, the data in the present study are cross-sectional and correlational in nature. Correlational designs/data cannot rule out the alternative explanation that antecedent-reaction associations were in fact spurious, arising as independent effects of a common causal (third) variable (Chan & Schmitt, 2004). As a result, causal inferences cannot be made. As Chan and Schmitt suggest, future research should include experimental or quasi-experimental designs that will allow researchers to examine the causal relationships between study variables (e.g., longitudinal studies, pre- and post-test assessment). However, this presents a tradeoff, as it is unlikely that such designs can or will be allowed in actual selection situations, which seriously limits the generalizability of the results.

Second, this study likely suffered from the range restriction normally associated with reactions-to-testing research. That is, only organizations with fair selection processes are inclined to allow a study of their selection procedures (Truxillo, Steiner, & Gilliland, 2004). Certainly, one wonders if similar results would be found in contexts where past discrimination had been pervasive or where the organization has relatively few minority employees. Organizations with these characteristics are generally reluctant to participate in a study of selection procedure fairness. In field studies of reactions to testing, mean ratings of perceptions of selection procedures are positively skewed, with the most negative ratings being near the middle of a Likert rating scale (Truxillo, Steiner, & Gilliland, 2004). Furthermore, the lack of variability in the SPJS, test-taking motivation, and attitude toward testing scales may be an artifact of the field setting (Truxillo, Steiner, & Gilliland, 2004). The lack of variance in these measures presents some problems in discovering relationships between these variables and other variables of interest. Future research should focus on comparing reactions to testing in different organizational contexts (e.g., organizations without strong affirmative action programs vs. organizations with strong affirmative action programs).

Third, it was not possible to separate method and content empirically in this study. One criticism of field studies of reactions to testing is that test method and test content are confounded.
Third, it was not possible to separate method and content empirically in this study. One criticism of field studies of reactions to testing is that test method and test content are confounded. Without controlling for test content across test format (cf. Chan & Schmitt, 1997), the effects of test format cannot be separated from the effects of test content and vice versa. However, this type of design is impractical, if not inappropriate, in actual selection situations, as it might result in over-sampling certain dimensions of the job content domain of the target job. Therefore, while this limitation is recognized, it is a tradeoff in conducting such research in actual selection situations using bona fide job candidates.

Fourth, it is likely that organizational context had a pronounced effect on the results of this study. As mentioned earlier, an interesting area for future research is examining applicant reactions to testing in different organizational contexts. Specifically, reactions of African Americans and Whites could be tested in settings where the racial composition of the organizations varies considerably. Are reactions of African Americans different in settings where African Americans comprise the majority of the employees and/or hold leadership positions (as in the present study) versus settings where African Americans are in the minority? Future research might draw upon the theoretical underpinnings of the similar-to-me effect and social identity theory. Previous research on the similar-to-me effect suggests that individuals prefer people and situations where they believe people are similar to themselves (Clark & Fiske, 1982), and liking has an effect on employment ratings (Cardy & Dobbins, 1986). Perceived similarity between people has been shown to determine likeability and trust in general psychological research (Patzer, 1985) and in employment settings in particular (Kanter, 1977). Similarly, social identity theory suggests that individuals' self-identities are defined partly by their membership in groups that they especially value and find emotionally significant (Tajfel, 1982). When perceiving and evaluating others, individuals are likely to judge more favorably those who hold common group membership, such as membership in a racial group (Tajfel, 1982). Both concepts suggest that members of a particular group (e.g., a racial group) will perceive or evaluate members of the same group more favorably than members of other groups (Prewett-Livingston, Feild, Veres, & Lewis, 1996). Is it possible that a similar type of relationship exists between candidates' perceptions of an organization's racial composition and their perceptions of test fairness? In other words, does the racial composition of the organization affect candidates' reactions such that candidates who are in the racial majority perceive selection procedures as fairer? Future research is needed to determine the effect of organizational context on reactions to the use of various selection methods.

REFERENCES

Aguinis, H. (2003). Regression analysis for categorical moderators. New York: The Guilford Press.
Anastasi, A. (1988). Psychological testing (6th ed.). New York: Macmillan Publishing.
Archambeau, D.J. (1979). Relationships among skill ratings in an assessment center. Journal of Assessment Center Technology, 2, 7-20.
Arvey, R.D., Strickland, W., Drauden, G., & Martin, C. (1990). Motivational components of test taking. Personnel Psychology, 43, 695-716.
Balaban, D. (1997). Personality testing is catching on among U.S. businesses. Kansas City Business Journal, 15, 6.
Bannister, B.D. (1986). Performance outcome feedback and attributional feedback: Interactive effects on recipients' responses. Journal of Applied Psychology, 71(2), 203-210.
Barbera, K.M., Ryan, A.M., Burris Desmarais, L., & Dyer, P.J. (1995). Multimedia employment tests: Effects of attitudes and experiences on validity. Unpublished manuscript.
Baron, H., & Janman, K. (1996). Fairness in the assessment centre. In C.L. Cooper & I.T. Robertson (Eds.), International Review of Industrial and Organizational Psychology, 11, 61-113. London: Wiley.
Bauer, T.N., Maertz, C.P., Dolen, M.R., & Campion, M.A. (1998). Longitudinal assessment of reactions to employment testing and test outcome feedback. Journal of Applied Psychology, 83(6), 892-903.
Bauer, T.N., Truxillo, D.M., Sanchez, R.J., Craig, J.M., Ferrara, P., & Campion, M.A. (2001). Applicant reactions to selection: Development of the selection procedural justice scale (SPJS). Personnel Psychology, 54, 387-419.
Becton, J.B., Feild, H.S., Giles, W.F., & Jones-Farmer, A. (2005). Pencil-and-paper tests versus situational interviews: Racial differences in promotion candidates' test-taking motivation and job relatedness perceptions. Paper to be presented at the annual meeting of the Academy of Management, Honolulu, HI.
Bell, B.S., Ryan, A.M., & Wiechmann, D. (2004). Justice expectations and applicant perceptions. International Journal of Selection and Assessment, 12, 24-38.
Bible, J.D. (1990). When employers look for things other than drugs: The legality of AIDS, genetic, intelligence, and honesty testing in the workplace. Labor Law Journal, 41, 195-213.
Bies, R.J., & Moag, J.S. (1986). Interactional justice: Communication criteria of fairness. Research on Negotiation in Organizations, 1, 43-55.
Borman, W.C., Hanson, M.A., & Hedge, J.W. (1997). Personnel selection. Annual Review of Psychology, 48, 299-337.
Boudreau, J.W., & Rynes, S.L. (1985). The role of recruitment in staffing utility analysis. Journal of Applied Psychology, 70, 354-366.
Breaugh, J.A. (1992). Recruitment: Science and practice. Boston: PWS-Kent Publishing.
Brockner, J., Ackerman, G., & Fairchild, G. (2001). When do elements of procedural fairness make a difference? A classification of moderating influences. In J. Greenberg & R. Cropanzano (Eds.), Advances in organizational justice (pp. 179-212). Stanford, CA: Stanford University Press.
Brockner, J., & Wiesenfeld, B.M. (1996). An integrative framework for explaining reactions to decisions: Interactive effects of outcomes and procedures. Psychological Bulletin, 120, 189-208.
Brockner, J., Tyler, T.R., & Cooper-Schneider, R. (1992). The influence of prior commitment to an institution on reactions to perceived unfairness: The higher they are, the harder they fall. Administrative Science Quarterly, 37, 241-261.
Burnside, B.L. (1982). Subjective appraisal as a feedback tool. Technical Report 604, U.S. Army Research Institute for the Behavioral & Social Sciences.
Bycio, P., Alvares, K.M., & Hahn, J. (1987). Situational specificity in assessment center ratings: A confirmatory factor analysis. Journal of Applied Psychology, 72, 463-474.
Cascio, W.F. (1991). Costing human resources: The financial impact of behavior in organizations. Boston: PWS-Kent.
Campion, J.E., & Arvey, R.D. (1989). Unfair discrimination in the interview. In R.W. Eder & G.R. Ferris (Eds.), The employment interview: Theory, research, and practice (pp. 61-73). Newbury Park, CA: Sage.
Chan, D., & Schmitt, N. (2004). An agenda for future research on applicant reactions to selection procedures: A construct-oriented approach. International Journal of Selection and Assessment, 12(1/2), 9-23.
Chan, D., & Schmitt, N. (1997). Video-based versus paper-and-pencil method of assessment in situational judgment tests: Subgroup differences in test performance and face validity perceptions. Journal of Applied Psychology, 82(1), 143-159.
Chan, D., Schmitt, N., DeShon, R.P., Clause, C.S., & Delbridge, K. (1997). Reactions to cognitive ability tests: The relationship between race, test performance, face validity, and test-taking motivation. Journal of Applied Psychology, 82, 300-310.
Cohen, J., & Cohen, P. (1983). Applied multiple regression/correlation analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.
Coppard, L.C. (1976). Gaming simulations and the training process. In R.L. Craig (Ed.), Training and development handbook (2nd ed.). New York: McGraw-Hill.
Dipboye, R.L. (1985). Some neglected variables in research on discrimination in appraisals. Academy of Management Review, 10(1), 116-127.
Dodd, W.E. (1977). Attitudes towards assessment center programs. In J.L. Moses & W.C. Byham (Eds.), Applying the assessment center method. Elmsford, NY: Pergamon Press.
Donahue, L.M., Truxillo, D.M., Cornwell, J.M., & Gerrity, M.J. (1997). Assessment center validity and behavioral checklists: Some additional findings. Journal of Social Behavior and Personality, 12, 85-108.
Dweck, C.S., & Leggett, E.L. (1988). A social-cognitive approach to motivation and personality. Psychological Review, 95, 256-273.
Duffy, M.K., Ganster, D.C., & Shaw, J.D. (1998). Positive affectivity and negative outcomes: The role of tenure and job satisfaction. Journal of Applied Psychology, 83, 950-959.
Folger, R., & Greenberg, J. (1995). Procedural justice: An interpretive analysis of personnel systems. In K. Rowland & G. Ferris (Eds.), Research in personnel and human resources management (Vol. 3, pp. 141-183). Greenwich, CT: JAI Press.
Folger, R., & Konovsky, M.A. (1989). Effects of procedural and distributive justice on reactions to pay raise decisions. Academy of Management Journal, 32, 115-130.
Fryxell, G.E., & Gordon, M.E. (1989). Workplace justice and job satisfaction as predictors of satisfaction with union and management. Academy of Management Journal, 32, 851-866.
Fulk, J., Brief, A.P., & Barr, S.H. (1985). Trust-in-supervisor and perceived fairness and accuracy of performance evaluations. Journal of Business Research, 13(4), 301-313.
Gatewood, R.D., & Feild, H.S. (2001). Human resource selection. Fort Worth, TX: Harcourt College Publishers.
Gaugler, B.B., Rosenthal, D.B., Thornton, G.C., III, & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72, 493-511.
Gibb, P. (1985). Appraisal goals and controls. Personnel Journal, 64(8), 89-93.
Gilliland, S.W. (1993). The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review, 18, 694-734.
Gilliland, S.W., & Steiner, D.D. (1999). Applicant reactions. In R.W. Eder & M.M. Harris (Eds.), The employment interview handbook (pp. 69-82). Thousand Oaks, CA: Sage Publications.
Greenberg, J. (1986). Determinants of perceived fairness of performance evaluations. Journal of Applied Psychology, 71(2), 340-342.
Greenberg, J. (1987). Reactions to procedural injustice in payment distributions: Do the means justify the ends? Journal of Applied Psychology, 72, 55-61.
Greenberg, J. (1990). Organizational justice: Yesterday, today, and tomorrow. Journal of Management, 16, 399-432.
Greenberg, J. (1990). Employee theft as a reaction to underpayment inequity: The hidden cost of pay cuts. Journal of Applied Psychology, 75, 561-568.
Greenberger, D.B., & Strasser, S. (1991). The role of situational and dispositional factors in the enhancement of personal control in organizations. Research in Organizational Behavior, 13, 111-145.
Hashemian, K. (1978). An experimental investigation of the relationship between test anxiety and memory processes under different motivation conditions (University of Florida, 1977). Dissertation Abstracts International, 38(7-A).
Helms, J.E. (1992). Why is there no study of cultural equivalence in standardized cognitive ability testing? American Psychologist, 47, 1083-1101.
Herriot, P. (1989). Selection as a social process. In M. Smith & I. Robertson (Eds.), Advances in selection and assessment (pp. 171-187). New York: Wiley.
Hendrix, W.H., Robbins, T., Miller, J., & Summers, T.P. (1998). Effects of procedural and distributive justice on factors predictive of turnover. Journal of Social Behavior & Personality, 13(4), 611-633.
Hough, L.M., & Oswald, F.L. (2000). Personnel selection: Looking toward the future-remembering the past. Annual Review of Psychology, 51, 631-664.
Howard, A. (1997). A reassessment of assessment centers: Challenges for the 21st century. Journal of Social Behavior & Personality, 12(5), 13-53.
Howard, A. (1974). An assessment of assessment centers. Academy of Management Journal, 17(4), 115-134.
Huffcutt, A. (1990). Intelligence is not a panacea in personnel selection. The Industrial-Organizational Psychologist, 27, 66-67.
Huffcutt, A., & Arthur, W. (1994). Hunter and Hunter (1984) revisited: Interview validity for entry-level jobs. Journal of Applied Psychology, 79, 184-190.
Huffcutt, A., & Roth, P.L. (1998). Racial group differences in employment interview evaluations. Journal of Applied Psychology, 83, 179-189.
Huffcutt, A., Roth, P.L., & McDaniel, M.A. (1996). A meta-analytic investigation of cognitive ability in employment interview evaluations: Moderating characteristics and implications for incremental validity. Journal of Applied Psychology, 81, 459-473.
Hunter, J.E., & Hunter, R.F. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-99.
International Task Force on Assessment Center Operations (2000). Guidelines and ethical considerations for assessment center operations. 28th International Congress on Assessment Center Methods, San Francisco, CA.
Jensen, A.R. (1980). Bias in mental testing. New York: Free Press.
Jones, J. (1991). Protecting job candidates' and employees' privacy rights: The employee's perspective. Paper presented at the annual meeting of the Society for Industrial and Organizational Psychology, St. Louis, MO.
Jordan, J.S., Gillentine, J.A., & Hunt, B.P. (2004). The influence of fairness: The application of organizational justice in a team sport setting. International Sports Journal, 8, 139-150.
Kanfer, R. (1990). Motivation theory and industrial and organizational psychology. In M.D. Dunnette & L.M. Hough (Eds.), Handbook of industrial and organizational psychology (2nd ed., Vol. 2, pp. 75-170). Palo Alto, CA: Consulting Psychologists Press.
Kanfer, R., Sawyer, J., Earley, P.C., & Lind, E.A. (1987). Participation in task evaluation procedures: The effects of influential opinion expression and knowledge of evaluative criteria on attitudes and performance. Social Justice Research, 1, 235-249.
Kleinmann, M., & Koller, O. (1997). Construct validity of assessment centers: Appropriate use of confirmatory factor analysis and suitable construction principles. Journal of Social Behavior and Personality, 12, 65-84.
Kluger, A.N., & Rothstein, H.R. (1993). The influence of selection test type on applicant reactions to employment testing. Journal of Business and Psychology, 8, 3-25.
Konovsky, M.A., & Cropanzano, R. (1991). Perceived fairness of employee drug testing as a predictor of employee attitudes and job performance. Journal of Applied Psychology, 76, 698-707.
Kravitz, D.A., Stinson, V., & Chavez, T.L. (1996). Evaluations of tests used for making selection and promotion decisions. International Journal of Selection and Assessment, 4, 24-34.
Lance, C.E., Newbolt, W.H., Gatewood, R.D., Foster, M.R., French, N.R., & Smith, D.E. (2000). Assessment center exercise factors represent cross-situational specificity, not method bias. Human Performance, 13(4), 323-353.
Latham, G.P., Saari, L.M., Pursell, E.D., & Campion, M.A. (1980). The situational interview. Journal of Applied Psychology, 65, 422-427.
Lind, E.A., & Tyler, T.R. (1988). The social psychology of procedural justice. New York: Plenum.
Lounsbury, J.W., Bobrow, W., & Jensen, J.B. (1989). Attitudes toward employment testing: Scale development, correlates, and "known-group" validation. Professional Psychology: Research and Practice, 20, 340-349.
Macan, T.H., Avedon, M.J., Paese, M., & Smith, D.E. (1994). The effects of candidates' reactions to cognitive ability tests and an assessment center. Personnel Psychology, 47, 715-738.
Martin, D.C., & Bartol, K. (1986). Training the raters: A key to effective performance appraisal. Public Personnel Management, 15(2), 101-109.
Mastrangelo, P.M. (1997). Do college students still prefer companies without employment drug testing? Journal of Business and Psychology, 11, 325-337.
McCarthy, J.M., & Goffin, R.D. (2003). Is the Test Attitude Survey psychometrically sound? Educational and Psychological Measurement, 63, 446-464.
McDaniel, M.A., Whetzel, D.L., Schmidt, F.L., & Maurer, S.D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599-616.
McFarlin, D.B., & Sweeney, P.D. (1992). Distributive and procedural justice as predictors of satisfaction with personal and organizational outcomes. Academy of Management Journal, 35, 626-637.
McKay, P.F., & Doverspike, D. (2001). African-Americans' test-taking attitudes and their effect on cognitive ability test performance: Implications for public personnel management selection practice. Public Personnel Management, 30(1), 67-75.
McKay, P.F., Doverspike, D., Bowen-Hilton, D., & Martin, Q.D. (in press). Stereotype threat effects on the Raven's scores of African-Americans. Journal of Applied Social Psychology.
Moorman, R.H. (1991). Relationship between organizational justice and organizational citizenship behaviors: Do fairness perceptions influence employee citizenship? Journal of Applied Psychology, 76, 845-855.
Moscoso, S. (2000). Selection interview: A review of validity evidence, adverse impact and applicant reactions. International Journal of Selection and Assessment, 8, 237-247.
Murphy, K.R. (1986). When your top choice turns you down: Effect of rejected job offers on the utility of selection tests. Psychological Bulletin, 99, 133-138.
Murphy, K.R. (1996). Individual differences in behavior: Much more than g. In K.R. Murphy (Ed.), Individual differences and behavior in organizations. San Francisco: Jossey-Bass.
Nevo, B., & Sfez, J. (1985). Candidates' feedback questionnaires. Assessment and Evaluation in Higher Education, 10, 236-249.
Nunnally, J.C. (1978). Psychometric theory. New York: McGraw-Hill.
Nystrom, P.C., & Starbuck, W.H. (1984). Managing beliefs in organizations. Journal of Applied Behavioral Science, 20(3), 277-287.
Ployhart, R.E., & Ryan, A.M. (1997). Toward an explanation of applicant reactions: An examination of organizational justice and attribution frameworks. Organizational Behavior and Human Decision Processes, 72(3), 308-335.
Ployhart, R.E., & Ryan, A.M. (1998). Candidates' reactions to the fairness of selection procedures: The effects of positive rule violations and time of measurement. Journal of Applied Psychology, 83(1), 3-36.
Pursell, E.D., Campion, M.A., & Gaylord, S.R. (1980). Structured interviewing: Avoiding selection problems. Personnel Journal, 59, 907-912.
Reilly, R.R., & Chao, G.T. (1982). Validity and fairness of some alternative employee selection procedures. Personnel Psychology, 35, 1-62.
Reilly, R.R., & Warech, M.A. (1990). The validity and fairness of alternatives to cognitive tests. Berkeley, CA: Commission on Testing and Public Policy.
Robertson, I.T., & Kandola, R.S. (1982). Work sample tests: Validity, adverse impact, and applicant reactions. Journal of Occupational Psychology, 55, 171-183.
Rosse, J.G., Miller, J.L., & Stecher, M.D. (1994). A field study of job candidates' reactions to personality and cognitive ability testing. Journal of Applied Psychology, 79(6), 987-992.
Rosse, J.G., Ringer, R.C., & Miller, J.L. (1996). Personality and drug testing: An exploration of the perceived fairness of alternatives to urinalysis. Journal of Business and Psychology, 10(4), 459-475.
Ryan, A.M., Greguras, G.J., & Ployhart, R.E. (1996). Perceived job-relatedness of physical ability testing for firefighters: Exploring variations in reactions. Human Performance, 9, 219-240.
Ryan, A.M., & Ployhart, R.E. (1998). Test preparation programs in selection contexts: Self-selection and program effectiveness. Personnel Psychology, 51(5), 599-622.
Ryan, A.M., Sacco, J.M., McFarland, L.A., & Kriska, S.D. (2000). Applicant self-selection: Correlates of withdrawal from a multiple hurdle process. Journal of Applied Psychology, 85, 163-179.
Ryan, A.M., & Tippins, N.T. (2004). Attracting and selecting: What psychological research tells us. Human Resource Management, 43, 305-318.
Rynes, S.L. (1992). Recruitment, job choice, and post-hire consequences: A call for new research directions. In M.D. Dunnette & L.M. Hough (Eds.), Handbook of industrial and organizational psychology (2nd ed., Vol. 2, pp. 339-444). Palo Alto, CA: Consulting Psychologists Press.
Rynes, S.L., & Connerley, M.L. (1993). Applicant reactions to alternative selection procedures. Journal of Business and Psychology, 7(3), 261-277.
Sackett, P.R., & Dreher, G.F. (1982). Constructs and assessment center dimensions: Some troubling empirical findings. Journal of Applied Psychology, 67, 401-410.
Salgado, J.F. (1999). Personnel selection methods. In C.L. Cooper & I.T. Robertson (Eds.), International review of industrial and organizational psychology. New York: Wiley.
Sanchez, R.J., Truxillo, D.M., & Bauer, T.N. (2000). Development and examination of an expectancy-based measure of test-taking motivation. Journal of Applied Psychology, 85, 739-750.
Schmidt, F.L. (1988). The problem of group differences in ability test scores in employment selection. Journal of Vocational Behavior, 33, 272-292.
Schmidt, F.L., Greenthal, A.L., Hunter, J.E., Berner, J.G., & Seaton, F.W. (1977). Job sample vs. paper-and-pencil trades and technical tests: Adverse impact and examinee attitudes. Personnel Psychology, 30, 187-197.
Schmidt, F.L., & Hunter, J.E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-274.
Schmit, M.J., & Ryan, A.M. (1997). Applicant withdrawal: The role of test-taking attitudes and racial differences. Personnel Psychology, 50, 855-876.
Schmitt, M. (1996). Individual differences in sensitivity to befallen injustice (SBI). Personality and Individual Differences, 21, 3-20.
Schmitt, N., & Coyle, B.W. (1976). Applicant decisions in the employment interview. Journal of Applied Psychology, 61, 184-192.
Schmitt, N., & Gilliland, S.W. (1992). Beyond differential prediction: Fairness in selection. In D.M. Saunders (Ed.), New approaches to employee management: Fairness in employee selection (Vol. 1, pp. 21-46). Greenwich, CT: JAI Press.
Schmitt, N., Gooding, R.Z., Noe, R.A., & Kirsch, M. (1984). Meta-analyses of validity studies published between 1964 and 1982 and the investigation of study characteristics. Personnel Psychology, 37, 407-421.
Schmitt, N., Rogers, W., Chan, D., Sheppard, L., & Jennings, D. (1997). Adverse impact and predictive efficiency of various predictor combinations. Journal of Applied Psychology, 82, 719-730.
Schuler, H. (1993). Social validity of selection situations: A concept and some empirical results. In H. Schuler, J.L. Farr, & M. Smith (Eds.), Personnel selection and assessment: Individual and organizational perspectives. Hillsdale, NJ: Erlbaum.
Seymour, R.T. (1988). Why plaintiffs' counsel challenge tests, and how they can successfully challenge the theory of "validity generalization." Journal of Vocational Behavior, 33, 331-364.
Singer, M. (1989). Determinants of perceived fairness in selection practices: An organizational justice perspective. Genetic, Social, and General Psychology Monographs, 115, 475-494.
Shore, T.H., Tashchian, A., & Adams, J.S. (1997). The role of gender in a developmental assessment center. Journal of Social Behavior & Personality, 12(5), 191-203.
Skarlicki, D.P., & Folger, R. (1997). Retaliation in the workplace: The roles of distributive, procedural, and interactional justice. Journal of Applied Psychology, 82, 434-443.
Smither, J.W., & Pearlman, K. (1991). Perceptions of job-relatedness of selection procedures among college recruits and recruiting/employment managers. Paper presented at the Sixth Annual Conference of the Society for Industrial and Organizational Psychology, April 1991, St. Louis, MO.
Smither, J.W., Reilly, R.R., Millsap, R.E., Pearlman, K., & Stoffey, R.W. (1993). Applicant reactions to selection procedures. Personnel Psychology, 46, 49-76.
Smither, J.W., Reilly, R.R., Millsap, R.E., Pearlman, K., & Stoffey, R.W. (1996). An experimental test of the influence of selection procedures on fairness perceptions, attitudes about the organization, and job pursuit intentions. Journal of Business and Psychology, 10(3), 297-318.
SPSS, Inc. (2004). SPSS Release 13.1 (1 September, 2004). Chicago: SPSS Inc.
Spychalski, A.C., Quinones, M.A., Gaugler, B.B., & Pohley, K. (1997). A survey of assessment center practices in organizations in the United States. Personnel Psychology, 50(1), 71-90.
Stone, D.L., Gueutal, H.G., & McIntosh, B. (1984). The effects of feedback sequence and expertise of the rater on perceived feedback accuracy. Personnel Psychology, 37(3), 487-506.
Stone, D.L., & Kotch, D.A. (1989). Individuals' attitudes toward organizational drug testing policies and practices. Journal of Applied Psychology, 74, 518-521.
Stone, E.F., & Stone, D.L. (1984). The effects of multiple sources of performance feedback and feedback favorability on self-perceived task competence and perceived feedback accuracy. Journal of Management, 10(3), 371-378.
Stone, E.F., & Stone, D.L. (1990). Privacy in organizations: Theoretical issues, research findings, and protection mechanisms. Research in Personnel and Human Resource Management, 8, 349-411.
Sujak, D.A., Parker, C.P., & Grush, J.E. (1998). The importance of interactional justice: Reactions to organizational drug testing. Paper presented at the 13th Annual Conference of the Society for Industrial and Organizational Psychology, Dallas, TX.
Thornton, G.C., & Byham, W.C. (1982). Assessment centers and managerial performance. New York: Academic Press.
Thornton, G.C. (1990). Assessment centers in human resource management. New York: Addison-Wesley.
Thornton, G.C. (1992). Assessment centers in human resources management. Reading, MA: Addison-Wesley.
Thornton, G.C. (1993). The effect of selection practices on candidates' perceptions of organizational characteristics. In H. Schuler, J.L. Farr, & M. Smith (Eds.), Personnel selection and assessment: Individual and organizational perspectives. Hillsdale, NJ: Lawrence Erlbaum Associates.
Truxillo, D.M., & Bauer, T.N. (1999). Applicant reactions to test score banding in entry-level and promotional contexts. Journal of Applied Psychology, 84(3), 322-339.
Truxillo, D.M., Bauer, T.N., & Sanchez, R.J. (2001). Multiple dimensions of procedural justice: Longitudinal effects on selection system fairness and test-taking self-efficacy. International Journal of Selection and Assessment, 9, 336-349.
Tyler, T.R. (1989). The psychology of procedural justice: A test of the group-value model. Journal of Personality and Social Psychology, 57, 830-838.
Wiechmann, D., & Ryan, A.M. (2003). Reactions to computerized testing in selection contexts. International Journal of Selection and Assessment, 11, 215-229.
Williamson, L.G., Campion, J.E., Malos, S.B., Roehling, M.V., & Campion, M.A. (1997). Employment interview on trial: Linking interview structure with litigation outcomes. Journal of Applied Psychology, 82, 900-912.
Young, J.R. (2003). Researchers charge racial bias on SAT. The Chronicle of Higher Education, 50(7), 34-35.

APPENDIX A

Study Scale Items by Dimension

Information
I understood in advance what the testing process would be like.
I knew what to expect on the test.
I had ample information about what the format of the test would be.

Opportunity to perform
I could really show my skills and abilities through this test.
This test allowed me to show what my job skills are.
This test gives applicants the opportunity to show what they can really do.

Job-relatedness
Doing well on this test means a person can do the job well.
A person who scored well on this test will be a good ______.
The actual content of the test was clearly related to the job of ______.

Consistency
The test was administered to all applicants in the same way.
There were no differences in the way the test was administered to different applicants.
Test administrators made no distinction in how they treated applicants.
Attitude Toward Testing
I think that this kind of test is a fair way to determine people's abilities.
This test was a good reflection of what a person could do in the job.
This test was a good way of selecting people into jobs.

Test-Taking Motivation
Doing well on this test was important to me.
While taking this test, I concentrated and tried to do well.
I pushed myself to do well on this test.

APPENDIX B

CENTER FOR BUSINESS AND ECONOMIC DEVELOPMENT
AUBURN UNIVERSITY MONTGOMERY

CANDIDATE INFORMATION GUIDE
DEKALB COUNTY BUREAU OF POLICE SERVICES
POLICE SERGEANT SELECTION PROCEDURE
MARCH 10-11, 2004

PREPARED BY:
THE CENTER FOR BUSINESS & ECONOMIC DEVELOPMENT
AUBURN UNIVERSITY MONTGOMERY
SOUTH COURT STREET, SUITE 110
MONTGOMERY, ALABAMA 36104
334.244.3700

CANDIDATE INFORMATION GUIDE
DEKALB COUNTY BUREAU OF POLICE SERVICES
POLICE SERGEANT SELECTION PROCEDURE

As a candidate for DeKalb County Police Department (DKPD) Sergeant, you have been invited to participate in the Sergeant Selection Procedure. All candidates will participate in the first and second stages of the selection procedure: the Written Exam and the Structured Oral Interview. Your scores on the Written Exam and the Structured Oral Interview will be used to determine in which band your score places you. The scoring and banding processes are described in more detail in another section of this Guide. A number of candidates, based on the likely number of promotions during the two-year life of the list, will proceed to the third stage, the Role-play Exercises and Writing Sample Exercise. These exercises will be used to rank candidates within the bands from which promotions are likely to be made. This guide is provided to acquaint you with the three phases of the selection procedure. Read this information very carefully. It is very important that you know what to expect before participating in these exercises.

WRITTEN EXAMINATION

Overview

All candidates will participate in the Written Exam. The Written Exam will be given in Decatur Ballroom B at the Holiday Inn Select in Decatur at two different times on Wednesday, April 14, 2004. You will receive your assigned exam time approximately two weeks prior to the exam date. Each group will be allowed three (3) hours to work on the exam. If you are assigned to the morning group, you will not be allowed to leave the test site until the afternoon group has arrived, regardless of the time it actually takes you to finish the exam.

The Written Examination will consist of approximately 140 multiple-choice questions. Candidates will have three (3) hours to complete the test. Each multiple-choice question has only one (1) best correct answer and three (3) other alternatives. The examination contains a surplus of approximately 40 questions that may not be scored. A surplus of items is included in recognition that all items will not be equally effective for assessing a given knowledge. Questions will be statistically analyzed to identify the best questions. Items showing questionable item statistics (e.g., low item reliabilities) will be eliminated. Before the Written Examination is scored, candidates will be allowed to review the test questions and key and to appeal any test question in writing. The time for review and appeals will be announced at the Written Exam. All appeals will be reviewed, and incorrectly keyed items will be re-keyed or deleted from the exam.
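The item screening described above can be illustrated with a short sketch. The example below is hypothetical (the actual analysis, software, and cutoffs used by the test developer are not specified in this guide): it flags items whose corrected item-total correlation falls below an arbitrary threshold, which is one common way of identifying items with low reliabilities.

import numpy as np

def flag_weak_items(responses, min_item_total=0.15):
    # responses: candidates x items matrix of 0/1 scores.
    # Returns indices of items whose corrected item-total correlation
    # falls below min_item_total (an illustrative cutoff, not DKPD's).
    weak = []
    total = responses.sum(axis=1)
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]      # total score excluding item j
        if responses[:, j].std() == 0:      # no variance: item is uninformative
            weak.append(j)
            continue
        r = np.corrcoef(responses[:, j], rest)[0, 1]
        if r < min_item_total:
            weak.append(j)
    return weak

# Demo with simulated data: 200 candidates, 10 items, the last two of which
# do not discriminate between stronger and weaker candidates.
rng = np.random.default_rng(1)
ability = rng.standard_normal(200)
difficulty = rng.uniform(-1.0, 1.0, 10)
discrimination = np.array([1.2] * 8 + [0.0, 0.0])
p_correct = 1.0 / (1.0 + np.exp(-discrimination * (ability[:, None] - difficulty)))
scores = (rng.uniform(size=p_correct.shape) < p_correct).astype(int)

print(flag_weak_items(scores))  # typically flags the two non-discriminating items

In the actual procedure, any statistical screen of this kind would be combined with the candidate review and appeal steps described above before the final key is set.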
The final scoring key for the Written Examination will include only those questions that are not eliminated based on the reviews described above. Candidate scores on the examination will be based on the approximately 100 to 110 remaining items. Decisions regarding which questions to retain and which questions to eliminate will be made before candidate names are identified with each test. Therefore, how these decisions affect any one individual's test score will not be a factor in these decisions. The decisions regarding question retention will be made by the test developer. Subsequently, candidate names will be identified, and the scored items will comprise each candidate's score for this part of the selection process.

The Written Examination will cover the following 49 knowledges and 2 abilities:

K10 Knowledge of traffic control procedures to include the position of the vehicle; use of lights, flares, protective clothing, and hand signals; and ensuring safe traffic flow.
K11 Knowledge of vehicle stop procedures to include traffic violations and known felony stops.
K12 Knowledge of the procedures for responding to domestic disputes.
K13 Knowledge of the general crime prevention patrol procedures to include security checking, identification of stolen vehicles, and variation of patrol routes.
K14 Knowledge of the procedures and guidelines governing radio communication to include radio code systems, phonetic alphabet, and FCC rules and regulations.
K15 Knowledge of the proper use and maintenance of vehicles.
K16 Knowledge of the procedures for the care and maintenance of service weapons.
K17 Knowledge of departmental policy concerning weapons such as duty weapons, second weapons, off-duty weapons, and firing range qualifications to include capabilities and limitations of weapons and qualification guidelines.
K18 Knowledge of the procedures for protecting a motor vehicle accident scene and ensuring scene safety to include warning or re-routing traffic, notifying other services (HERO, fire department, traffic engineers, etc.), crowd control, and protecting evidence at the scene.
K21 Knowledge of the procedures for collecting, preserving, and transporting physical evidence to include packaging, labeling, marking, photographing, documenting, storing, and chain of custody.
K23 Knowledge of procedures for the detention and arrest of suspects to include suspect approach, handcuffing, etc.
K24 Knowledge of field search techniques and positions.
K26 Knowledge of the types of and procedures for conducting line-ups such as physical line-up, photo line-up, and show-up identification.
K27 Knowledge of procedures for protecting crime scenes and conducting the initial crime scene investigation to include evidence preservation and securing the scene.
K29 Knowledge of the laws and policies regarding use of physical force to include the use of deadly force, the minimum physical force required to subdue a person, how force is to be used, guidelines for the progression in the use of force, and documentation following the use of force.
K30 Knowledge of the use of force techniques and equipment such as restraining devices, self-defense, and handcuffing techniques as needed to restrain and apprehend subjects in a manner that is effective and safe for the subject and officer.
K31 Knowledge of applicable laws and court rulings governing arrests with and without a warrant, including considerations of exigent circumstances.
K32 Knowledge of appropriate court rulings governing stopping and searching motor vehicles with and without a warrant to include reasonable suspicion, probable cause, span of control, search incident to an arrest, and inventory searches.
K33 Knowledge of the rules of evidence to include confessions, dying declarations, issues of admissibility, Miranda warnings, confidentiality of information, spontaneous utterance, hearsay, and compulsory testimony.
K34 Knowledge of applicable laws and court rulings governing search and seizure with and without a warrant to include field and protective searches and the difference between full body, span of control, stop-and-frisk (pat down), etc.
K35 Knowledge of the applicable laws and court rulings governing DUI enforcement to include "traffic check" type operations, test administration, standardized field sobriety testing and documentation, and breath and blood testing.
K36 Knowledge of the applicable laws and court rulings governing domestic violence cases to include arrests without a warrant and reading of the Miranda warning. (Titles 15 & 16)
K38 Knowledge of Title 17 of the Criminal Code of Georgia, criminal procedure, and miscellaneous criminal provisions as found in the Georgia Law Enforcement Handbook (Criminal Procedure).
K39 Knowledge of the definition of crime to include the elements of crime necessary to charge specific offenses to include power and authority of arrests, whether a warrant can be obtained, etc.
K40 Knowledge of the classification of various crimes such as felonies and misdemeanors.
K41 Knowledge of the applicable motor vehicle laws governing moving violation enforcement.
K42 Knowledge of the applicable motor vehicle laws governing non-moving violation enforcement.
K44 Knowledge of available resources and programs for the assistance of officers in need (e.g., EAP, Safe Harbor).
K45 Knowledge of basic first aid procedures to include CPR, treatment for shock, treatment for seizures, and pressure dressings to stop bleeding.
K46 Knowledge of self-protection techniques for the prevention of infectious diseases.
K47 Knowledge of departmental personnel policies regarding transfers, leave, overtime, work assignment, rules of conduct, dress codes, and appearance.
K48 Knowledge of departmental disciplinary procedures to include verbal and written counseling requirements and procedures.
K54 Knowledge of the department's chain of command to include policies and procedures governing communications within the chain of command.
K57 Knowledge of department report writing guidelines found in the DeKalb County Employee Manual to include how to select appropriate forms and how to complete forms.
K59 Knowledge of police liability issues including potential civil rights violations and issues of vicarious liability.
K60 Knowledge of the appropriate use of police equipment such as batons, OC, handcuffs, and flashlights as needed to properly effect arrests.
K62 Knowledge of the state and federal laws regarding the use of NCIC as needed to comply with the Privacy Act and aid in investigation.
K64 Knowledge of dispatch procedures as needed to answer calls efficiently.
K65 Knowledge of the DeKalb County Employee Manual as needed to comply with departmental and legal procedures.
K68 Knowledge of the proper use of the Mobile Data Terminal (MDT) as needed to receive and transmit calls and messages and to obtain GCIC/NCIC information as found in the DeKalb County Employee Manual.
K71 Knowledge of special orders, general orders, memos, and other department-issued correspondence as needed to inform subordinates of new rules, provide directives of new procedures, update the DeKalb County Employee Manual, and develop plans of action for special events.
K74 Knowledge of constitutional laws such as Miranda rights, search and seizure, invasion of privacy, arrests made without warrants, right to a speedy trial, and preliminary hearing to avoid violating the rights of individuals and to reduce personal and department liability when making arrests, interviewing suspects, and conducting searches.
K76 Knowledge of Title 16 of the Criminal Code of Georgia, criminal procedure, and miscellaneous criminal provisions as found in the Georgia Law Enforcement Handbook (Crimes and Offenses) as needed to stay within the law when effecting arrests, writing/evaluating report information, evaluating evidence, and obtaining arrest and search warrants.
K77 Knowledge of Title 15 of the Criminal Code of Georgia, criminal procedure, and miscellaneous criminal provisions as found in the Georgia Law Enforcement Handbook (Juvenile Proceedings) as needed to stay within legal guidelines when questioning or detaining juveniles, obtaining petitions, effecting juvenile arrests, or taking juveniles into protective custody.
K78 Knowledge of Title 40 of the Criminal Code of Georgia, criminal procedure, and miscellaneous criminal provisions as found in the Georgia Law Enforcement Handbook (Motor Vehicles and Traffic) as needed to stay within legal guidelines when enforcing traffic laws.
K79 Knowledge of DeKalb County ordinances to include those governing drunkenness in public, loitering for sex and drugs, creating an offensive and hazardous situation, noise, and parking as needed to stay within legal guidelines when effecting arrests, issuing citations, and providing public services.
K94 Knowledge of survival techniques to include weapon retention, use of baton, and use of cover and concealment as needed to prevent injury, save lives, and effect arrests.
K97 Knowledge of the response to threats of explosives and suspicious packages.
K99 Knowledge of the types of harassment (e.g., race, sex) of officers and the policies governing its handling, to include the anti-harassment policy, procedures for addressing complaints, and maintaining a work environment which avoids such harassment as needed to prevent liability and ensure a positive work environment.
A14 Ability to read and follow maps and street guides.
A72 Ability to read and understand written material such as legal bulletins, departmental memos and directives, case laws, updated court rulings, and law enforcement literature.

SOURCE LIST
SERGEANT WRITTEN EXAMINATION
DEKALB COUNTY POLICE DEPARTMENT

1. Basic Law Enforcement Training Course: Peace Officer Liability
2. Code of DeKalb County - Selected Ordinances
   - Sale of Alcohol
   - Public Intoxication
   - Loitering for Sex
   - Loitering for Drugs
   - Noise
   - Minimum/Maximum Speeds in parks, on roads and highways
   - Public Park and Recreation Facility Hours
   - Temporary Outdoor Sales of Merchandise
3. DeKalb County Police Department Employee Manual (updates through 03/01/04)
4. DeKalb County Drug and Alcohol Testing Policies and Procedures. Dated 01/30/97
5. General Order Number DPS 96-01. Domestic Violence Involving Employees. Dated 01/08/96
6. General Order Number 99-02. After Hours Property/Evidence Storage Area. Dated 01/15/99
7. General Order Number PSG 01-1. Fingerprinting/Photographing of Juveniles. Dated 03/20/01
8. General Order Number 2003-11. Changes to DeKalb County Code. Dated 07/18/03
9. Georgia Law Enforcement Handbook (2003-2004 Revision):
   - Chapter 2 - Arrests
   - Chapter 4 - Search and Seizure
   - Chapter 5 - Confessions and Self Incrimination
   - Title 15: Courts
   - Title 16: Crimes and Offenses
   - Title 17: Criminal Procedure
   - Title 24: Evidence
   - Title 40: Motor Vehicles and Traffic
10. Lesson Plans: Specialized Patrol Techniques
11. Basic Law Enforcement Training Course - Universal Precautions
12. DeKalb County Department of Public Safety Basic First Aid:
   - CPR Techniques
   - Shock
13. DeKalb Department of Public Safety Public Safety Signal Card
14. Training Lesson Plans - Officer Survival:
   - Cover Awareness
   - Protective Equipment
   - Verbal Challenge
   - Weapons Maintenance and Training

All employees should have a current DeKalb County Employee Manual. Candidates who do not have a current, updated employee manual must request one through their chain of command. It seems as though most officers have a copy of the 2002 edition of the Georgia Law Enforcement Handbook. All of the questions written from this resource have been verified as accurate in both the 2002 and 2003 editions. If you do not have this resource, you may wish to share with someone else or purchase your own copy. The handbook is available from:

West Group
Attention: Inside Sales C1-10
610 Opperman Drive
Eagan, MN 55123
1-800-328-9352
www.west.thomson.com

and may be purchased with VISA, MasterCard, personal check, or cash for a cost of approximately $49.00 plus tax. A CD-ROM version is also available.

You may download any DeKalb County ordinances at the following website: http://livepublish.municode.com/9/lpext.dll?f=templates&fn=main-j.htm&vid=10637

All of the other source materials listed on page 6 are available from the Chief's office. You may contact Sergeant C. H. Dedrick or Captain P. R. Taylor to obtain copies.

Written questions have been based on information in these sources. No source is listed for the questions designed to measure the ability to interpret maps and street guides or the ability to read and understand written material because there is not one specific source from which the test questions pertaining to these abilities were obtained. Instead, these test questions have been created to allow the candidate to demonstrate possession of these abilities. The candidate should, for instance, expect to read and interpret a map in answering some questions. All questions have been thoroughly developed, reviewed, and approved by incumbent sergeants.

Administration

The tentative administration date for the Sergeant Structured Oral Interview is the week of May 17, 2004. Until the actual number of candidates is determined, we cannot say exactly how many days will be required for administration. We should be able to provide firmer testing dates at the administration of the written exam. However, you may not know your exact testing date or time until a few weeks prior to the interview date. We anticipate that candidates will be divided into four groups, each group appearing on only one of four days. Within a given day, each group will further be divided into a morning group and an afternoon group. For test security reasons, the morning group will not be permitted to leave until those from the afternoon group arrive. It is for that reason that the arrival times listed in the letters you receive will be strictly followed.
We cannot hold the morning group of candidates until candidates who are running late arrive. Therefore, the time requirements will be strictly followed for all arrival times, morning and afternoon. If you are late, you will be disqualified.

As mentioned above, a number of candidates will proceed to the third stage of the selection procedure, the Role-play and Writing Sample Exercises. The date for the administration of this final phase has not been determined at this time. The exact time, date, and location will be announced as soon as the arrangements have been finalized. We anticipate that candidates will be divided into two groups appearing on one of two days. Those groups will be divided into a morning group and an afternoon group, and the morning group will be held until each candidate in the afternoon group arrives at the test site. Since you will most likely be at the testing site for several hours (for the Structured Oral Interview, Writing Sample, and Role-play Exercises), you may wish to bring a book to read.

Panels assessing candidate responses will be comprised of two or three panel members for the Structured Oral Interview and two panel members for the Role-play Exercises. Under no circumstances will you be rated by someone who knows you. The panel members will be selected from other law enforcement agencies to aid in the accuracy of the scoring of each selection procedure component. Each assessor will become familiar with the content of the selection procedures and receive training on scoring the individual exercises.

STRUCTURED ORAL INTERVIEW

The Structured Oral Interview will consist of three or four job-related scenarios. Each scenario describes a problem situation and asks you, the candidate, to explain how you would handle the situation as a DKPD Sergeant. Your responses to these scenarios will be evaluated by a panel of two or three individuals. The content of your answers will be compared to response standards developed for each scenario by incumbent Sergeants in the DeKalb County Police Department. The response standards provide objective and standardized scoring guidelines for the interview panel to use in rating your response. All guidelines are tailored to the DeKalb County Police Department.

Performance Dimensions

A careful analysis of the job of DeKalb County Police Department Sergeant identified many knowledges, skills, and abilities (KSAs) important to successful job performance. The selection procedure components were designed to allow candidates to demonstrate their potential to perform successfully as a Sergeant. In the Structured Oral Interview, you will be evaluated (rated) on five performance dimensions. Each dimension measures KSAs important to the job of a Sergeant. The dimensions and underlying KSAs are provided below. You should familiarize yourself with each of these five performance dimensions.

Problem Analysis

Effectiveness in identifying problem areas, securing relevant information, relating and comparing information from different sources, determining the source of a problem, and implementing task-resolving decisions. This includes developing short- or long-range plans to determine objectives, identify problems, establish priorities, set standards, provide guidelines, and identify resource needs.

A11 Ability to determine if a complaint on an officer describes behavior in violation of department policy and procedures.
A12 Ability to reserve judgment concerning a complaint or problem until all facts are collected.
A13 Ability to identify a method of investigating a complaint that is consistent with DKPD policy and is appropriate to the situation.
A18 Ability to consider multiple sources of evidence, personal perspectives, facts, and points of view when conducting an investigation, making decisions, and choosing a course of action as needed to remain objective.
A19 Ability to respond to situations in a way that does not further aggravate a situation as needed to appropriately handle arguments, personnel problems, poor performance, and negative citizen comments.
A56 Ability to determine when a decision should be referred to or approved by a supervisor.
A62 Ability to determine whether facts are sufficient to support a recommended action such as suggesting a certain level of disciplinary action, issuing a search or arrest warrant, or making an arrest.
A69 Ability to pay attention to details in forming a conclusion or taking an action.
A70 Ability to identify the legal rules and statutes that apply in a situation such as demonstrations, strikes, searches and seizures, traffic stops, and disasters.
A76 Ability to examine the directions and actions of subordinates, peers, and superiors.
A81 Ability to understand what is being communicated in the written messages of other individuals.

Supervisory Ability

The extent to which subordinates are provided with directions and guidance toward the accomplishment of specified performance goals. This includes the ability to set and enforce performance standards, recognize problem behavior, evaluate subordinate work performance, provide guidelines, and monitor subordinate performance in order to provide assistance, extend recognition, discipline, and motivate or counsel. Supervisory Ability differs from Management Ability in that Supervisory Ability is concerned with the work performance and professional development of individuals in one's area of responsibility, whereas Management Ability focuses on allocating personnel and equipment to meet Division or Unit work responsibilities or assignments.

A39 Ability to give orders and assign work.
A73 Ability to make decisions in a timely manner to include setting work priorities when multiple incidents occur at the same time, changing subordinates' work assignments, initiating disciplinary action, referring information up the chain of command to superiors, etc.
A74 Ability to reconsider decisions already made and change assignments and priorities when necessary or when given new information.

Management Ability

The extent to which work is effectively planned, organized, and coordinated for the efficient accomplishment of specified goals. This includes proper assignment of personnel, appropriate allocation and management of resources, recognition of resource limitations, and enforcement of policies. Management Ability differs from Supervisory Ability in that Management Ability is concerned with allocating personnel and equipment to meet Division or Unit work responsibilities or assignments, whereas Supervisory Ability focuses on the work performance and professional development of individuals in one's area of responsibility.

A48 Ability to manage one's time as needed to ensure work responsibilities are accomplished.
A49 Ability to delegate authority and maintain accountability as needed to ensure departmental operations run effectively and efficiently.
A59 Ability to set priorities to include unit activities, individual subordinates' activities, and one's own work assignments as needed to ensure all work activities are accomplished despite competing demands.
A96 Ability to adapt to changes in policies, procedures, and the work environment.
A97 Ability to apply rules, procedures, and policies in a flexible manner to include taking into account a person's individual situation when making a recommendation regarding discipline, considering a citizen's explanation and situation when determining how an incident should be handled, and deciding when to confront subordinates with work problems.
A98 Ability to adjust the use of resources (equipment and manpower) according to shifts in the priority of incidents.
A100 Ability to attend to several situations, problems, and responsibilities at the same time.

Technical & Departmental Knowledge

Demonstrates knowledge and understanding of departmental policies, procedures, and rules and regulations in planning work, monitoring employee performance, disciplining employees, making decisions, giving advice, and responding to situations. This includes utilizing knowledge of the departmental organization to find solutions to problems.

K12 Knowledge of the procedures for responding to domestic disputes.
K21 Knowledge of the procedures for collecting, preserving, and transporting evidence to include packaging, labeling, marking, photographing, documenting, storing, and chain of custody.
K27 Knowledge of procedures for protecting crime scenes and conducting the initial crime scene investigation to include evidence preservation and securing the scene.
K38 Knowledge of Title 17 of the Criminal Code of Georgia, criminal procedure, and miscellaneous criminal provisions as found in the Georgia Law Enforcement Handbook (Criminal Procedure).
K44 Knowledge of available resources and programs for the assistance of officers in need (e.g., EAP, Safe Harbor).
K47 Knowledge of departmental personnel policies regarding transfers, leave, overtime, work assignment, rules of conduct, dress codes, and appearance.
K48 Knowledge of departmental disciplinary procedures to include verbal and written counseling requirements and procedures.
K54 Knowledge of the department's chain of command to include policies and procedures governing communications within the chain of command.
K57 Knowledge of department report writing guidelines found in the DeKalb County Employee Manual to include how to select appropriate forms and how to complete forms.
K59 Knowledge of police liability issues including potential civil rights violations and issues of vicarious liability.
K65 Knowledge of the DeKalb County Employee Manual as needed to comply with departmental and legal procedures.
K97 Knowledge of the proper response to threats of explosives and suspicious packages.
K99 Knowledge of the types of harassment (e.g., race, sex) of officers and the policies governing its handling, to include the anti-harassment policy, procedures for addressing complaints, and maintaining a work environment which avoids such harassment as needed to prevent liability and ensure a positive work environment.
Oral Communication

The clear, unambiguous, and effective expression of oneself through oral means to individuals such as co-workers, other agency employees, the general public, and community groups to ensure the accurate and/or persuasive exchange of information. This includes receiving and comprehending information from another individual in order to respond appropriately.

A78 Ability to provide oral information clearly and concisely to include staying on the subject, paraphrasing information, and using examples as needed to effectively communicate information to a citizen, subordinate, or superior.
A89 Ability to organize facts and present them in the most appropriate and logical order consistent with the purpose of the document.
A90 Ability to identify and summarize key information as needed to write incident report narratives from victim information, communicate important information from written bulletins or court decisions to subordinates, and document subordinate problem behavior.

The Structured Oral Interview Scenarios

Each scenario briefly describes a problem situation you could be expected to handle as a DeKalb County Police Department Sergeant. All scenarios place you in a general supervisory role. The scenarios describe situations you might face as a Sergeant and ask how you would respond. Even though some scenarios may emphasize a particular assignment, in-depth technical knowledge of the specific assignment is not required to respond to the problem situation. Although you may feel some of the scenarios are difficult, the scenarios are not intended to be tricky. The scenarios were designed to be job-related measures of each of the five important performance dimensions.

Each scenario briefly describes a problem situation. The information presented about the problem is usually very limited. Do the best you can, with the limited information available, to explain how you would handle the problem. The interview coordinator will be in the room with you. The coordinator will read the question to you while you read along silently. You will be given a set amount of time (usually seven or eight minutes) to determine how you should respond. You may use as much of the preparation time as you need. When you are ready to respond OR when your preparation time has expired, you will respond to the scenario. You will have an additional set amount of time (usually seven or eight minutes) in which to respond to the scenario, regardless of the amount of time used to prepare. The interview coordinator will not prompt you with additional information or responses to your comments, nor will that person ask follow-up questions. When you have completed a question (or your response time has expired), you will be given another question until you have responded to all Structured Oral Interview scenarios. The interview coordinator will not be someone you know. Also, the interview coordinator will not be evaluating your performance on the interview.

It is very important that you think about each scenario before you begin your response. Think about what you want to say before beginning to speak. You will be allowed to take notes and use these notes to give your answer. Your response should completely describe how you would handle the problem situation. You should make sure to explain the reasons for your decisions or actions. (Do not assume that the panel members will know your reasons. Explain!) If you think there may be more than one way to handle a problem, you should include an explanation of the alternatives you might consider appropriate. Finally, your answers should be very specific and detailed. Explain what you would actually DO in such a situation. The assessor panel will give you credit based on what you say you would do and the reasons you give.
Candidate Instructions and Sample Structured Oral Interview Scenario
Instructions similar to those on the following page will be read to you by an interview coordinator at the beginning of the interview. The sample Structured Oral Interview scenario on page 15 is similar to those that will be used for the Sergeant's Structured Oral Interview. You will be given a scenario and asked how to handle it. The content of the actual scenarios will differ from this sample; they will involve situations that you would encounter as a Sergeant in the DeKalb County Police Department. You should expect scenarios concerning topics such as personnel problems, citizen complaints, domestic disputes, robberies, burglaries, pursuits, complex situations, kidnappings, hostage situations, and personality conflicts. The following sample scenario will give you an idea of what to expect.

SAMPLE CANDIDATE INSTRUCTIONS
SERGEANT STRUCTURED ORAL INTERVIEW
DEKALB COUNTY POLICE DEPARTMENT

Purpose
You are now ready to begin the Structured Oral Interview section of the selection process for DeKalb County Police Department Sergeant. The purpose of this interview is to assess several knowledges, skills, and abilities. The interview will assess your oral communication skills, supervisory ability, management ability, problem analysis skills, and technical and departmental knowledge.

Background Information
Throughout this interview, you should respond the way a Sergeant of the DeKalb County Police Department (DKPD) should respond. The interview will require you to respond to three or four different scenarios. These scenarios describe events that often occur on the job of a Sergeant. Please listen carefully to each scenario. Each scenario depicts a situation that can occur on the job and provides all the information you need in order to respond. After listening to each scenario, tell in detail how you would respond as the Sergeant in the situation described.

Instructions
The interview coordinator will read each scenario aloud while you read along silently. After the interview coordinator finishes reading the scenario, you may take up to 7 minutes to study it. You should not feel that you must use the full 7 minutes. You can use this time to review the scenario silently and take notes. Taking additional time to review a scenario will NOT hurt your rating in any way, nor will it take away from the time you can spend responding. When you are ready to begin responding to the scenario, tell the interview coordinator that you are ready. Your time for responding will begin as soon as you tell the coordinator that you are ready. You can take up to 7 minutes to respond to each scenario; again, you should not feel that you must use the full 7 minutes. The interview coordinator will tell you when you have two minutes remaining. You may look back at the scenario sheet and your notes at any time during your response. It is important that you read each scenario before you respond. Your response to each scenario will determine your rating on the interview. You should review these instructions and the scenarios thoroughly before you begin to respond. Do you have any questions?

SAMPLE SCENARIO
SERGEANT STRUCTURED ORAL INTERVIEW
DEKALB COUNTY POLICE DEPARTMENT

You are a recently promoted Sergeant. It is 2300 hours on Monday.
You are responding to a burglary call at the Bellwood Shopping Center. One of the units in your area is already on the scene. When you arrive, the two officers on the scene relay the information they have gathered. One juvenile suspect is in custody; he was arrested inside one of the stores. He has a large cut on his right shoulder, and it is bleeding heavily. Windows in three stores have been broken out. All three stores are men's clothing retailers. The officers tell you that each of the three stores is missing clothing. Merchandise is lying on the floor in each of the three stores, and clothes racks are disarranged as if someone went through them in a hurry. One cash register in one of the stores has been forced open and is empty. How should you handle this situation? Please be specific and give details.

Note: When you are ready to respond to this scenario, please tell the interview coordinator that you are ready to begin. When you have completed your response, please tell the interview coordinator that you are finished.

ROLE-PLAY EXERCISES
The Role-play Exercises have been developed to simulate typical interactions between Sergeants and other individuals, particularly subordinate personnel and citizens. The exercises consist of two work-related, one-on-one role-play situations involving problems encountered by a Sergeant. In the Citizen Role-play Exercise, you will take the role of the Sergeant and a role-player will take the role of a citizen. In the Subordinate Role-play, you will take the role of a Sergeant and a role-player will take the role of a subordinate. You will be provided with background information explaining the general nature of the situation and will be asked to handle the situation as you think a Sergeant should.

Performance Dimensions
In the Role-play Exercises, you will be evaluated (rated) on the four performance dimensions described below. Some of these four dimensions are also measured in the Structured Oral Interview, and they have the same definitions for both selection procedure exercises. However, different knowledges, skills, and abilities may be included under these dimensions for the Role-play Exercises. Again, you should familiarize yourself with each of these four performance dimensions. The dimensions and underlying KSAs are provided below.

Human Relations
The use of appropriate interpersonal skills which indicate a consideration of the feelings, interests, and needs of employees, representatives of other agencies, and the general public. This includes using tact, building and maintaining rapport and morale, and recognizing stress symptoms in others when interacting in one-on-one situations or with groups to resolve interpersonal conflicts and address complaints.

A15 Ability to establish rapport with others to include citizens, informants, witnesses, officers, and co-workers as needed to build relationships, establish trust, gather information, and facilitate communication.
A19 Ability to respond to situations in a way that does not further aggravate a situation as needed to appropriately handle arguments, personnel problems, poor performance, and negative citizen comments.
A21 Ability to negotiate a resolution to a conflict.
A29 Ability to demonstrate appropriate patience and tact when dealing with confused, distraught, or mentally challenged citizens; angry or slow-learning students; and frustrated subordinates.
A30 Ability to exhibit the appropriate level of firmness with others as needed to arrest suspects, calm emotionally distraught individuals, and address performance problems.
A31 Ability to interact with subordinates in a manner that creates an atmosphere that allows the subordinates to solve their own problems.
A34 Ability to demonstrate interpersonal sensitivity (e.g., sympathy, empathy) when communicating with others such as distraught citizens and subordinates with problems.
A35 Ability to control one's emotions and remain professional when provoked at chaotic incident scenes or during tragic circumstances.
A40 Ability to counsel employees to include providing feedback on subordinate job performance, listening to subordinates' complaints and recommendations, and encouraging subordinates to discuss any personal problems.

Problem Analysis
Effectiveness in identifying problem areas, securing relevant information, relating and comparing information from different sources, determining the source of a problem, and implementing task-resolving decisions. This includes developing short- or long-range plans to determine objectives, identify problems, establish priorities, set standards, provide guidelines, and identify resource needs.

A11 Ability to determine if a complaint on an officer describes behavior in violation of department policy and procedures.
A12 Ability to reserve judgment concerning a complaint or problem until all facts are collected.
A18 Ability to consider multiple sources of evidence, personal perspectives, facts, and points of view when conducting an investigation, making decisions, and choosing a course of action as needed to remain objective.
A20 Ability to evaluate information during face-to-face interactions with people to include detecting physical and verbal responses that suggest deception.
A21 Ability to negotiate a resolution to a conflict.
A62 Ability to determine whether facts are sufficient to support a recommended action such as suggesting a certain level of disciplinary action, issuing a search or arrest warrant, or making an arrest.
A71 Ability to detect errors and inconsistencies in the facts and information contained in written materials and activity reports.
A76 Ability to examine the directions and actions of subordinates, peers, and superiors.

Supervisory Ability
The extent to which subordinates are provided with direction and guidance toward the accomplishment of specified performance goals. This includes the ability to set and enforce performance standards, recognize problem behavior, evaluate subordinate work performance, provide guidelines, and monitor subordinate performance in order to provide assistance, extend recognition, discipline, and motivate or counsel. Supervisory Ability differs from Management Ability in that Supervisory Ability is concerned with the work performance and professional development of individuals in one's area of responsibility, whereas Management Ability focuses on allocating personnel and equipment to meet Division or Unit work responsibilities or assignments.

A41 Ability to give positive reinforcement and use incentives to motivate personnel.
A45 Ability to confront others when they have performance deficiencies or violate a policy, rule, or procedure.
K47 Knowledge of departmental personnel policies regarding transfers, leave, overtime, work assignment, rules of conduct, dress codes, and appearance.
K48 Knowledge of departmental disciplinary procedures to include verbal and written counseling requirements and procedures.
A97 Ability to apply rules, procedures, and policies in a flexible manner to include taking into account a person's individual situation when making a recommendation regarding discipline, considering a citizen's explanation and situation when determining how an incident should be handled, and deciding when to confront subordinates with work problems.
A99 Ability to adjust one's management style (e.g., give orders versus suggest alternatives, closeness of supervision, etc.) to a situation.

Oral Communication
The clear, unambiguous, and effective expression of oneself through oral means to individuals such as co-workers, other agency employees, the general public, and community groups to ensure the accurate and/or persuasive exchange of information. This includes receiving and comprehending information from another individual in order to respond appropriately.

A32 Ability to listen attentively to others to include using appropriate eye contact and body language.
A78 Ability to provide oral information clearly and concisely to include staying on the subject, paraphrasing information, and using examples as needed to effectively communicate information to a citizen, subordinate, or superior.
A80 Ability to understand what is being communicated in the oral messages of other individuals.
A82 Ability to state and explain policies, procedures, and problems in a persuasive manner as needed to enlist support, compliance, and acceptance by subordinates, the public, and the media.
A83 Ability to assess verbal and physical cues to determine whether information has been communicated clearly and understood by recipients.
A84 Ability to adjust communication to the level of understanding of individuals from a wide variety of socioeconomic, educational, and technical (e.g., law enforcement, non-law enforcement) backgrounds.

Exercise Procedure
The Role-play Exercises offer you an opportunity to actually demonstrate what you would do in a particular situation. You will take the part of a DKPD Sergeant while another individual assumes an interactive role (i.e., subordinate or citizen). Before the exercise begins, you will receive general instructions from a panel member. If you have any questions before the Role-play Exercise begins, you should ask that panel member. You will be given background information describing a problem typical of those that may be encountered on the job by a Sergeant in the DeKalb County Police Department. You will have a predetermined amount of time (usually between five and ten minutes) to review the information before beginning the role-play. You should review the background material to determine how you, as a Sergeant, would handle the problem. After you have had time to read the background information, determine the appropriate action to take, and form your plan of action, you will need to go to the door and invite the role-player in. If you do not go to the door to get the role-player, he or she will knock on the door; you must let the role-player in and begin the Role-play Exercise at that time. The role-player will be interacting with you in the Role-play Exercise but will not be evaluating your response. The two panel members will serve as assessors, taking notes during the Role-play Exercises to help them evaluate your responses to the situation. Do not expect to receive feedback from the panel during the Role-play Exercises.
Once the Role-play Exercise has begun, you should treat the role-player as if he or she actually is the person described in the Candidate Background Information for the exercise. The role-player will give standard responses to candidates' actions to further ensure fairness to all candidates. You will be given a total of two Role-play Exercises. The role-play interactions are not timed; however, most role-plays last between five and fifteen minutes.

Candidate Instructions and Sample Role-play Exercise
Instructions similar to those on the following page will be read to you by one of the panel members at the beginning of each Role-play Exercise. The Candidate Background Information on page 21 is similar in structure to the Role-play Exercises you should expect to see. This sheet gives the candidate some background information on the situation to be enacted in the Role-play Exercise. Obviously, the content of the actual Candidate Background Information will differ from the following example; it will relate to a situation commonly encountered by Sergeants in the DeKalb County Police Department. The sample Candidate Background Information will give you an idea of the kind of information a candidate is given before starting each Role-play Exercise.

SAMPLE CANDIDATE INSTRUCTIONS
SERGEANT ROLE-PLAY EXERCISE
DEKALB COUNTY POLICE DEPARTMENT

Purpose
You are now ready to begin the Role-play Exercise section of the selection process for DeKalb County Police Department Sergeant. The purpose of this section is to assess several knowledges, skills, and abilities. The Role-play Exercises will assess your oral communication skills, supervisory ability, human relations skills, and problem analysis skills.

Task
You will complete two Role-play Exercises. In the Role-play Exercises, you will take the role of a DeKalb County Police Department (DKPD) Sergeant. You will act the way that a Sergeant should act in each situation. In each Role-play Exercise, the role-player will take the role of either (1) a citizen or (2) a DKPD subordinate under your command. The role-player will act the way that this citizen or subordinate would act in each situation. Since you are the Sergeant in each situation, you should act toward the role-player the way a Sergeant should act. Your job is to study the Candidate Background Information (and any additional information) that you will receive for each Role-play Exercise. You need to analyze the problem that the Candidate Background Information presents. Then you must decide how you should handle each problem in the role of a Sergeant.

Instructions
In this exercise you should ignore the panel of assessors; they will simply be observing each Role-play Exercise. When the exercise begins, you should treat the role-player according to the role he or she is playing in each exercise. One role-player will be a citizen; the other will be a subordinate under your command. The Candidate Background Information will give you all of the necessary information about each situation. Once the exercise begins, you should NOT step outside your role of Sergeant. Do you have any questions about the Role-play Exercise procedures? We are now ready to proceed with the Role-play Exercise. Here is the Candidate Background Information for you to study. Let me know when you are ready to begin.

SAMPLE BACKGROUND INFORMATION
SERGEANT CITIZEN ROLE-PLAY EXERCISE
DEKALB COUNTY POLICE DEPARTMENT

You are to play the role of a Sergeant in the DeKalb County Police Department.
It is 1615 hours. You are assigned to the West Precinct uniform division, evening watch. You receive a call from your Captain asking you to personally deal with a problem that has come to his attention. The Captain received a call from a friend of the family, describing a problem she had with one of your officers, Officer Mike Stewart. Carol Williams, the friend of the Captain, has become very upset over a situation that occurred last night. As you understand it from the Captain, the complaint involves a problem that occurred when Officer Stewart stopped Ms. Williams. Captain Keeler tells you that Ms. Williams asked to come to his office to discuss the situation and that she has just stepped into his office. He says that he will assure Ms. Williams that you will be happy to discuss the situation with her. He asks that you give them a few minutes, and then he'll bring Ms. Williams to your office. You check and see that today is Officer Stewart's off day. Also, you pull the ticket and find that a citation was issued to Ms. Williams yesterday evening at 2125 for driving under the influence.

Your Task: Proceed with this meeting in your office. Handle the citizen complaint the way that a Sergeant should handle it.

Remember:
1. You are a Sergeant in the Uniform Division of the DeKalb County Police Department. You are assigned to the West Precinct, evening watch.
2. Captain Keeler asked you to meet with Carol Williams, a family friend, about a complaint regarding the way she was treated by one of your officers yesterday evening.
3. Carol Williams' complaint concerns Officer Mike Stewart, who is off today. Officer Stewart stopped her yesterday evening.
4. Captain Keeler will bring Ms. Williams to your office in a few minutes.

Do you have any questions?

WRITING SAMPLE EXERCISE
The Writing Sample Exercise has been designed to measure your written communication skills. This exercise requires candidates to read and review some information, determine the appropriate action, and formulate a response in writing. The instructions request the candidate to produce a writing sample that a Sergeant might be required to write. Each candidate will have the same amount of time in which to write an appropriate response.

It is very important that you think about the writing sample instructions before you begin your final response. Think about what you want to say before you write a final response. You may wish to write down some ideas and/or formulate an initial outline on scratch paper before you begin. Your response should completely address the issues and requests made in the instructions. If you think there may be more than one way to handle a situation, you should include an explanation of the alternatives or follow-up activities you might consider appropriate. The assessor panel will give you credit based on what is written and how it is written. The response guidelines against which your written response will be evaluated have been developed by current Sergeants for the DKPD Sergeant level; thus, you will not be expected to write at the level of a newspaper editor. All information and supplies (i.e., pencils and paper) you may need to complete the exercise will be available at the test site. Dictionaries will also be available for your use. You will not be allowed to bring additional materials into the test room. A Writing Sample Exercise example is presented on the following page.
The Sergeant's Writing Sample will be similar to this exercise in format, length, and level of detail. The actual writing sample task could include anything a Sergeant could be expected to write, such as a recommendation for disciplinary action, letter, progress report, plan of action, or follow-up report.

Performance Dimension
In the Writing Sample Exercise, you will be evaluated (rated) on only one dimension: Written Communication. You should familiarize yourself with the Written Communication dimension definition and underlying KSAs provided below.

Written Communication
The clear, unambiguous, legible, and effective expression of ideas in writing to ensure that readers of varying levels (e.g., co-workers, citizens, attorneys, politicians) can interpret information correctly. This includes not only presenting information in writing, but also obtaining and understanding written information. This encompasses the utilization of proper grammar such as capitalization, punctuation, and spelling at a level needed to compose documents.

A72 Ability to read and understand written material such as legal bulletins, departmental memos and directives, case law, updated court rulings, and law enforcement literature.
A86 Ability to write using appropriate grammar, sentence structure, punctuation, and spelling.
A87 Ability to express oneself accurately in writing to include writing a memo, explaining departmental policy, reconstructing events (e.g., incident report, accident report), and documenting oral statements for later reference.
A88 Ability to write legibly.
A89 Ability to organize facts and present them in the most appropriate and logical order consistent with the purpose of the document.
A90 Ability to identify and summarize key information as needed to write incident report narratives from victim information, communicate important information from written bulletins or court decisions to subordinates, and document subordinate problem behavior.

SAMPLE SCENARIO
SERGEANT WRITING SAMPLE EXERCISE
DEKALB COUNTY POLICE DEPARTMENT

You are a Sergeant with the DeKalb County Police Department. Today is Thursday, August 23rd. Yesterday one of your officers, Mike Reynolds, was involved in an altercation with a citizen, Ms. Annie Potts, at a traffic stop on Highway 290. Officer Reynolds was polite yet firm in his dealings with Ms. Potts; however, she has made a complaint that he was rude and unreasonable. You are being provided with a copy of Officer Reynolds' statement. Lieutenant Jamison has requested that you prepare a letter responding to Ms. Potts. You should review Officer Reynolds' statement and respond appropriately to Ms. Potts.

You have thirty minutes in which to write this letter. The letter should be no longer than two pages; if it is longer, ONLY THE FIRST TWO PAGES WILL BE SCORED. You have been provided with pencils, paper, and Final Response Forms. The letter you wish to be scored MUST appear on the Final Response Forms; only the Final Response Forms will be scored. In the top corner of each page of the Final Response Form there is a space for your assigned two-digit number. Please copy the number from your candidate envelope into these spaces. DO NOT use your name in the letter; please use the name SERGEANT PAT CANDIDATE. Please be specific and give details. Address the issues outlined in the directions. Your letter will be assessed for your written communication skills.
THE SELECTION PROCEDURE
The selection procedure is a highly structured and standardized process for both the candidates and the assessor panel. A number of precautions will be observed in order to ensure that each candidate is given the same opportunity to demonstrate his or her potential. For the Written Exam, every candidate will take the exact same test on the same day. For the Structured Oral Interview, Role-play Exercises, and Writing Sample Exercise, every candidate on a given day will go through the selection process using the exact same procedures. Although these procedures may seem somewhat rigid and inflexible, they are necessary to ensure fairness to all candidates. (Even though a casual and informal interview would be more comfortable for everyone involved, results from such an interview would be much less reliable.)

On different days, parallel forms of the exercises will be used for test security reasons. Thus, different questions of equal difficulty will be given on different days. Parallel Structured Oral Interview scenarios, Role-play Exercises, or Writing Sample scenarios involve the same type of problem and have been developed to have the same difficulty level, but they differ in the specific facts of the situation.

The Structured Oral Interview scenarios will be read to you by an interview coordinator, and your responses will be rated by a panel of assessors. In the Role-play Exercises, a panel member will read you the instructions, answer any questions, and give you the background information. The assessors will then take notes once the role-player enters the room and the Role-play Exercise begins. The assessor panel members will not be permitted to ask you any "follow-up" questions during the interview or comment on your responses during the Role-play Exercise. Only the role-player will respond to your comments during the Role-play Exercise. The rating panel members will take notes during the Structured Oral Interview and Role-play Exercises to help them evaluate your responses. Do not expect to receive any feedback from the panel during the Structured Oral Interview and Role-play Exercises. Although this may seem somewhat unnatural, it helps ensure that candidates are not unfairly encouraged or discouraged and that the exercises are consistent for all candidates.

To further ensure fairness, you will be randomly assigned to an assessor panel. The panels will consist of individuals selected from other law enforcement agencies. Each panel will be diverse with respect to race and gender. Under no circumstances will you be rated by someone who knows you. At this time, we are planning to administer the Writing Sample Exercise on the same day as the Role-play Exercises. The Writing Sample Exercise will be scored at a later time.

SCORING THE EXERCISES
As explained above, for the Written Exam, all items on the exam (approximately 140) will be scored initially. Based on the identification of problematic items from test results or from candidate item challenges, some items will be removed. Thus, a number somewhat smaller than the total number of items on the original test will most likely be used to compute an individual's test score, as illustrated in the sketch below.
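To make the item-removal step concrete, the following sketch shows one way such rescoring could work. It is purely illustrative: the item counts, data, and function name are hypothetical, and the actual rescoring is performed by the test developers.

```python
# Illustrative sketch only: scoring a written exam after flagged items are
# dropped. All names and numbers here are hypothetical, not the department's
# actual scoring code.

def written_exam_score(responses, answer_key, removed_items):
    """Return the number of correct answers among the retained items."""
    retained = [i for i in range(len(answer_key)) if i not in removed_items]
    return sum(1 for i in retained if responses[i] == answer_key[i])

# Example: a 10-item exam (standing in for ~140 items) with item 7 removed
# after candidate challenges; only the remaining 9 items are scored.
key = list("ABCDABCDAB")
answers = list("ABCDABCDBB")  # one wrong answer (item 8)
print(written_exam_score(answers, key, removed_items={7}))  # -> 8
```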
Your performance in the Structured Oral Interview, Role-play Exercises, and Writing Sample (document only) will be evaluated by assessor panels. The panels will be familiar with all the exercises as well as the response standards developed for each situation. Each panel member will rate you on the performance dimension(s) assessed by the exercise. The response standards developed for each exercise component will be used by the panel members as guidelines for rating you on the performance dimensions. The scoring guidelines help ensure that consistent scoring standards are applied to all candidates and that all scoring criteria are tailored to the DeKalb County Police Department. Your responses to each Structured Oral Interview scenario, Role-play Exercise, and Writing Sample Exercise will be rated independently by each panel member on the performance dimensions. A seven-point rating scale will be used, where a "7" represents a "Clearly Superior" response, a "4" represents a "Clearly Acceptable" response, and a "1" represents a "Clearly Unacceptable" response. Your scores on the Structured Oral Interview, Role-play Exercises, and Writing Sample Exercise will be calculated by averaging the raters' dimension scores and then applying the dimension weights determined from the job analysis (see the illustrative sketch following this section). Scores may be standardized to correct for rater, panel, and day effects.

Reporting Your Scores
Your score on the Written Exam and your total score on the Structured Oral Interview will be combined to give you an overall score. The two components will be weighted based on the number and importance of the KSAs contained under the dimensions measured by each exercise. These scores will be banded to produce a list from which promotions will be made. (The manner in which promotions will be made is described in the following paragraph.) Banding is a process that acknowledges some degree of error in the measurement process by treating candidates who score within a given range as equal. Candidates falling within a given band are regarded as having the same score and are therefore considered equal with respect to performance on the test. You will receive written notification of the band in which your score places you.

Ranking Candidates within Bands
Based on the expected number of promotions over the life of the two-year Sergeant list resulting from this procedure, only a portion of those participating in the Written Exam and Structured Oral Interview will be invited to participate in the final phase, the Role-play and Writing Sample Exercises. The exact number will depend on the number of individuals in the top bands; an approximate number based on estimated promotions will be announced at a later date. Candidates from as many bands as necessary to reach that number will be invited to participate. The Role-play and Writing Sample Exercises will be used to rank individuals within those bands. The overall combined score on the Role-play and Writing Sample Exercises will be determined based on the candidate's performance relative to the KSAs included in the dimensions measured by each exercise. Regardless of an individual's performance on the Role-play Exercises and Writing Sample Exercise, an individual cannot move from one band to another; performance on these exercises will only affect position within the band. However, an individual must participate in all components to be eligible for promotion. Any candidate failing to appear for any scheduled exercise will be eliminated from the process. Based on the number of immediate promotions, individuals in the top band(s) may not be required to participate in the Role-play Exercise. As an example, suppose the Chief determined that ten promotions will be made immediately. In that situation, if the analyses resulted in a top band of eight individuals, there would be no reason to rank those individuals within the top band.
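The averaging, weighting, and banding steps above can be made concrete with a minimal sketch. This is one plausible reading of the description, not the department's actual computation: the dimension weights, band width, and function names below are hypothetical, and the sketch works on the raw 7-point metric rather than any standardized metric the procedure may apply.

```python
# Illustrative sketch only: averaging rater dimension scores, applying
# hypothetical dimension weights, and banding. Not the actual DKPD procedure.

from statistics import mean

def exercise_score(ratings, weights):
    """Average each dimension across raters, then apply dimension weights."""
    return sum(weights[dim] * mean(scores) for dim, scores in ratings.items())

def band(score, top_score, band_width):
    """Candidates falling within the same band are treated as scoring equally."""
    return int((top_score - score) // band_width) + 1  # band 1 is highest

# Example: three raters score one candidate on two dimensions (1-7 scale).
ratings = {"Oral Communication": [5, 6, 5],
           "Supervisory Ability": [4, 5, 4]}
weights = {"Oral Communication": 0.6,   # hypothetical weights standing in
           "Supervisory Ability": 0.4}  # for the job-analysis weights
s = exercise_score(ratings, weights)    # ~4.93 on the 7-point metric
print(round(s, 2), band(s, top_score=7.0, band_width=0.5))  # -> 4.93 5
```

A standardization step (for example, z-scoring within panel or administration day) could be layered on top of exercise_score to address the rater, panel, and day effects the procedure mentions.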
Each candidate will receive a letter following the administration of the Structured Oral Interview informing him or her of his or her status in the selection process. Then, following the administration of the Role-play Exercises and Writing Sample Exercise, each candidate will receive information about individual performance on the dimensions measured by the Structured Oral Interview, Role-play Exercises, and Writing Sample Exercise. Candidates who do not participate in the final phase will only receive information about the Structured Oral Interview.

GUIDELINES FOR PARTICIPATING
SERGEANT SELECTION PROCEDURE COMPONENTS
DEKALB COUNTY POLICE DEPARTMENT

• Review the sources for the Written Exam. Focus on those things that are related to the KSAs measured by the Written Exam.

• Written test questions have been developed with the intention of tapping knowledge about information that you should generally know without looking in the book. The test developers and incumbent Sergeants made every attempt to avoid requesting information that you would never be expected to know without access to a written source. For example, it is more important to know the content of a law or act than the specific number of the act or law.

• DO NOT leave any test question unanswered. With some exams, a test taker receives zero (0) points for an unanswered question and actually loses a point (-1) for each question answered incorrectly. That is not how this Written Exam will be scored: points are earned only with correct responses, and a question earns no points whether it is answered incorrectly or left blank. Therefore, it is to your advantage to GUESS when you do not know the correct answer. You have a 1 in 4 chance of guessing correctly, even if you know nothing about the question, and if you can eliminate even one incorrect answer, your odds of answering correctly are even better. (The brief arithmetic sketch below makes this explicit.)
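The guessing advice follows directly from the scoring rule. A minimal arithmetic sketch, assuming four-option items and the no-penalty scoring described above; the function name is illustrative:

```python
# Illustrative arithmetic only: expected points per item under the no-penalty
# scoring rule described in the guidelines, assuming four answer options.

def expected_points(p_correct):
    """Expected points for one item: 1 point if right, 0 if wrong or blank."""
    return p_correct * 1 + (1 - p_correct) * 0

print(expected_points(0.0))    # leave blank:             0.00
print(expected_points(1 / 4))  # blind guess (4 options): 0.25
print(expected_points(1 / 3))  # one option eliminated:   ~0.33
```

Under such a rights-only rule, a guess can never score below a blank, so answering every item is at least as good as leaving any item unanswered.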