RELIABILITY AND CONCURRENT VALIDITY OF THREE SELF-REPORT MEASURES OF TRAUMA EXPOSURE

Except where reference is made to the work of others, the work described in this thesis is my own or was done in collaboration with my advisory committee. This thesis does not include proprietary or classified information.

Benjamin Hammond Carter

Certificate of Approval:

Frank W. Weathers, Chair, Professor, Psychology
Roger K. Blashfield, Professor, Psychology
Bryan D. Edwards, Assistant Professor, Psychology
George T. Flowers, Dean, Graduate School

RELIABILITY AND CONCURRENT VALIDITY OF THREE SELF-REPORT MEASURES OF TRAUMA EXPOSURE

Benjamin Hammond Carter

A Thesis Submitted to the Graduate Faculty of Auburn University in Partial Fulfillment of the Requirements for the Degree of Master of Science

Auburn, Alabama
August 10, 2009

RELIABILITY AND CONCURRENT VALIDITY OF THREE SELF-REPORT MEASURES OF TRAUMA EXPOSURE

Benjamin Hammond Carter

Permission is granted to Auburn University to make copies of this thesis at its discretion, upon request of individuals or institutions and at their expense. The author reserves all publication rights.

Signature of Author

August 10, 2009
Date of Graduation

THESIS ABSTRACT

RELIABILITY AND CONCURRENT VALIDITY OF THREE SELF-REPORT MEASURES OF TRAUMA EXPOSURE

Benjamin Hammond Carter

Master of Science, August 10, 2009
(B.S., Brigham Young University, 2004)

90 Typed Pages

Directed by Frank Weathers

The DSM-IV-TR requires identification of a traumatic event in order for an individual to meet criteria for PTSD. There are currently many self-report measures designed to assess for exposure to traumatic events, but these measures differ widely in content, format, and in their implicit definition of the trauma construct. Many of the measures were developed on an ad hoc basis and few have undergone rigorous psychometric evaluation. This study compared the test-retest reliability and concurrent validity of the three most widely used self-report measures of trauma exposure among a sample of undergraduate college students (N = 126) at a large Southeastern university. The study incorporated a between-groups, test-retest design in which participants completed one of the three measures twice over a 2- to 14-day interval. Participants also completed a detailed trauma history interview, which served as a criterion against which results from the self-report measure were compared. All three measures demonstrated good temporal stability. However, each measure appeared to influence subsequent event reporting on the trauma history interview. These results emphasize the importance of understanding the characteristics of self-report measures of trauma exposure when selecting a measure for use in research or practice.

ACKNOWLEDGEMENTS

Thank you to Dr. Frank Weathers for his mentorship, excitement about this project, and patience. Thank you also to Dr. Roger Blashfield and Dr. Bryan Edwards for their constructive feedback and help developing this document. I greatly appreciate the graduate students and undergraduate research assistants who contributed their time and abilities, as well as the many participants who shared their personal experiences. I express my sincere gratitude to my family, especially my wife, Jana Carter, who has been tremendously supportive. Thank you also to my wonderful children, Caitlyn and Jackson, who brighten every day.
Finally, thank you to my parents, Don & Judy Carter, who have always encouraged me and been examples for me.

Style manual used: Publication Manual of the American Psychological Association, 5th edition (2001)

Computer software used: Microsoft Word 2007; Statistical Package for the Social Sciences (SPSS) 16.0

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
INTRODUCTION
    Changes in the Trauma Construct
    Trauma Assessment
    Differences in Available Measures of Trauma Exposure
    Current Study
    Hypotheses
METHOD
    Design and Analysis
    Participants
    Procedure
    Measures
    Data Coding
RESULTS
    Hypothesis 1
    Hypothesis 2
    Hypothesis 3
    Qualitative Feedback
DISCUSSION
    Hypothesis 1
    Hypothesis 2
    Hypothesis 3
    Limitations and Future Directions
REFERENCES
APPENDIX

LIST OF TABLES
1. PDS, DAPS, and LEC Item Comparison
2. Summary Variable Creation
3. Range of Events Reported on Trauma History Interview
4. Number of Events Reported by Format (Self-report vs. Interview)
5. LEC Individual Items Temporal Stability: Experienced Only
6. LEC Individual Items Temporal Stability: Experienced and Witnessed
7. DAPS Individual Items Temporal Stability
8. PDS Individual Items Temporal Stability
9. Accuracy: Experienced Only
10. Accuracy: Experienced, Witnessed, and Confronted With
11. Discrepancy Attribution Examples

LIST OF FIGURES

1. Distribution of Discrepancy Attributions

INTRODUCTION

As currently formulated in the Diagnostic and Statistical Manual, 4th edition, text revision (DSM-IV-TR; American Psychiatric Association [APA], 2000), posttraumatic stress disorder (PTSD) is a characteristic stress-response syndrome that may develop in response to a traumatic life event. The formal definition of a traumatic event is set out in Criterion A of the PTSD diagnostic criteria, which requires that "the person experienced, witnessed, or was confronted with an event or events that involved actual or threatened death or serious injury, or a threat to the physical integrity of self or others" (APA, 2000, p. 467). Criterion A further requires that the person's response to the traumatic event involve "intense fear, helplessness, or horror." Criterion A thus plays a pivotal role in the conceptualization and assessment of PTSD in that the disorder can be diagnosed only when the precipitating event meets both parts of Criterion A, even if all other criteria are met. However, defining Criterion A and determining whether a given event fulfills it have proven difficult. The definition of Criterion A and the assessment of trauma have been the focus of considerable controversy. Consequently, researchers have taken different approaches to measuring trauma exposure, which has in turn led to the creation of a wide variety of trauma exposure measures. However, many of the most widely used measures were developed on an ad hoc basis and lack sufficient validation. The present study involves a comparison of three widely used self-report measures of trauma exposure in an effort to explore the potential impact that different approaches to the measurement of trauma exposure have on the reliability and relative accuracy of assessment. The three measures were examined with respect to content, structure, and psychometric properties using a between-groups, test-retest design.

Changes in the Trauma Construct

Since the inception of the PTSD diagnostic category in the Diagnostic and Statistical Manual, 3rd edition (DSM-III; APA, 1980), the definition of what constitutes a traumatic event has gradually changed.
As originally conceptualized in DSM-III, Criterion A involved a "recognizable stressor that would evoke significant symptoms of distress in almost everyone" and that would be "outside the range of usual human experience" (APA, 1980, p. 236). DSM-III-R added the subjective condition that the event is "usually experienced with intense fear, terror, and helplessness" and expanded the definition of trauma to include "learning about a serious threat or harm to a close friend or relative" (APA, 1987, p. 248). DSM-IV-TR further expanded the definition of trauma by introducing a two-part definition of Criterion A. Criterion A1 specifies the nature of a traumatic event ("involved actual or threatened death or serious injury, or a threat to the physical integrity of self or others") and various types of exposure ("experienced, witnessed, or was confronted with"), and Criterion A2 requires a subjective response of "intense fear, helplessness, or horror" (APA, 2000, p. 467).

As the definition of trauma has evolved, there has been considerable debate about the role Criterion A should play in the PTSD diagnostic category. For example, researchers have debated the necessary magnitude of a stressor to qualify as a traumatic event (McNally, 2003; Rosen, 2004), the level of exposure required to meet Criterion A (Pfefferbaum, Pfefferbaum, North, & Neas, 2002; Propper, Stickgold, Keeley, & Christmas, 2007; Sabin-Farrell & Turpin, 2003), and the overall need for an etiologic requirement in the conceptualization of PTSD (Bodkin, Pope, Detke, & Hudson, 2007; Breslau & Davis, 1987; Resnick, Kilpatrick, Dansky, Saunders, & Best, 1993; Rosen & Lilienfeld, 2008; Solomon & Canino, 1990). Furthermore, the role of the individual's subjective appraisal of the event (Criterion A2) has been questioned (Breslau & Kessler, 2001; Brewin, Andrews, & Valentine, 2000; Creamer, McFarlane, & Burgess, 2005; Rosen, 2004; Schnurr, Spiro, Vielhauer, Findler, & Hamblen, 2002). Some researchers have recommended including all stressful events as potentially traumatic events (PTEs) rather than distinguishing between Criterion A and non-Criterion A events (Kilpatrick et al., 1998; Maier, 2006). However, much of the research suggesting that the trauma component (Criterion A) is not necessary for a PTSD diagnosis has been limited by inadequate assessment of trauma exposure (e.g., Gold, Marx, Soler-Baillo, & Sloan, 2004).

Trauma Assessment

Reliable assessment of trauma exposure is necessary for understanding the role of specific traumatic events and aspects of event exposure as risk factors for psychopathology and health problems (Dohrenwend, 2006; Goodman, Corcoran, Turner, Yuan, & Green, 1998; Krinsley & Weathers, 1995). Similarly, accounting for lifetime trauma exposure may help explain individual differences in response to current traumatic events. Measures of trauma exposure that do not account for multiple types of exposure across the lifetime will miss vital information about the relationship between trauma and PTSD. Given the cost of administering individual trauma history interviews, reliable and valid self-report measures of trauma exposure are important to the advancement of knowledge in the field of traumatic stress research. The importance of improving assessment of trauma exposure has been well documented (Frueh, Elhai, & Kaloupek, 2004; Goodman et al., 1998; Krinsley & Weathers, 1995).
Norris and Hamblen (2004) reviewed existing measures of trauma exposure and highlighted the lack of systematic research comparing self-report measures of trauma exposure. Similarly, Weaver (1998) suggested that utilizing different assessment strategies within the same sample and comparing within-subject differences in reporting represents an important, but rare, step in the progress of traumatic stress research. A review of the literature identified only three published reports comparing methods of trauma event assessment, and each of those studies compared an open-ended "gating" question, such as is utilized by the Structured Clinical Interview for DSM-IV (SCID; First, Spitzer, Gibbon, & Williams, 1996) PTSD module, with a more detailed self-report measure (Elhai, Franklin, & Gray, 2008; Franklin, Sheeran, & Zimmerman, 2002; Weaver, 1998). That the extant literature is nearly devoid of studies comparing methods (or measures) of trauma assessment suggests that additional research on assessing for trauma exposure is needed.

Despite these warnings, the field of traumatic stress research has focused more on, and has made far more progress in, developing measures for assessing PTSD symptoms (Criteria B-F) than for assessing trauma exposure (Weathers & Keane, 2007). Since the inception of PTSD in the DSM-III (APA, 1980), researchers have developed approximately twice as many measures that assess PTSD symptoms exclusively as they have measures that assess trauma exposure, or trauma exposure and PTSD symptoms together (Elhai, Gray, Kashdan, & Franklin, 2005). This is in part due to the way the field of traumatic stress research has developed. Initially it was assumed that only extreme stressors of overwhelming magnitude, such as rape, combat, and severe natural disasters, led to the development of PTSD symptoms. This assumption supported the idea that assessing for trauma exposure was straightforward, as such events are not easily forgotten or confused. However, researchers eventually realized the complexity of the trauma construct and the difficulty that assessing trauma exposure presented (Dohrenwend, Link, Kern, Shrout, & Markowitz, 1990; Frueh et al., 2004; Krinsley & Weathers, 1995). In fact, research on the epidemiology of trauma suggests that it is common for people to experience multiple traumatic events in their lifetime (Kessler, Sonnega, Bromet, Hughes, & Nelson, 1995; Norris, 1992), and that the effect of traumatic events is likely cumulative (Follette, Polusny, Bechtle, & Naugle, 1996; Goodman, Dutton, & Harris, 1997; Schumm, Briggs-Phillips, & Hobfoll, 2006). Furthermore, prior exposure to traumatic events may affect an individual's response to a subsequent traumatic event (Resnick et al., 1993).

Nonetheless, measuring trauma has proven difficult (Krinsley & Weathers, 1995; Monroe, 2008; Resnick, Falsetti, Kilpatrick, & Freedy, 1996). Research on the reliability of self-reported trauma exposure has regularly provided evidence of inconsistency (e.g., Goodman et al., 1998; Roemer, Litz, Orsillo, Ehlich, & Friedman, 1998). For example, Southwick and colleagues (1997) administered a modified version of the Desert Storm Trauma Questionnaire (Southwick et al., 1993) and the Mississippi Scale for Combat-Related Posttraumatic Stress Disorder (Keane, Caddell, & Taylor, 1988) to 62 veterans of Operation Desert Storm approximately 1 month, and again approximately 2 years, after returning from war.
There was a significant mean increase in the number of events reported across administrations (M = .69, SD = 2.18), and 88% of the subjects changed their response to at least one item. In addition, 88 events reported as not experienced at time 1 were reported as experienced at time 2 (compared to 47 in the opposite direction). While some inconsistency is to be expected with any psychological measurement, when reports of trauma exposure change over time the validity of the measurement is weakened. Assuming that trauma exposure items are unambiguous, the response should not change, as the endorsed event either happened or it did not. In attempts to explain inconsistent reporting, many researchers have posited psychological principles such as normal forgetting, psychological state at the time of reporting, change in subjective appraisals of events over time, priming, and response avoidance (e.g., Briere, 1992; Ferguson, Horwood, & Woodward, 2000; McNally, Litz, Prassas, & Shin, 1994; Walker, Skowronski, & Thompson, 2003). However, factors related to measure construction (e.g., ambiguous wording or vague instructions) are also likely to impact reliability of reporting.

Research has also shown inconsistency of reporting across different methods of assessment (see Monroe, 2008). In the field of life events research, many studies have compared life events checklist measures with interview-based measures and have consistently reported significant differences in the information obtained from the different methods (e.g., Duggal et al., 2000; Gorman, 1993; Katschnig, 1986; Lewinsohn et al., 2003; McQuaid et al., 2000; Oei & Zwart, 1986). For example, Lewinsohn et al. (2003) reported on data obtained from 191 adolescents who completed a life stress telephone interview an average of 73 days (SD = 39) after they completed and returned a life stress questionnaire. Lewinsohn and colleagues reported that for life events primarily involving the participant, 67.5% of the events reported on the questionnaire were confirmed in the interview. However, for life events primarily involving other people, only 19.7% were confirmed by the interview. Considering that life events primarily involving others were reported twice as often as events primarily involving the participant, the overall correspondence between questionnaire and interview was below 50%. Duggal et al. (2000) and McQuaid et al. (2000) both reported even lower concordance rates between self-report measures and interviews (39% and 32%, respectively).

While the previous findings refer to life events research that includes events that range widely in magnitude and valence, researchers investigating the assessment of potentially traumatic events have identified similar problems (Hepp et al., 2006; Roemer et al., 1998; Southwick et al., 1997). In a recent review of life stress assessment, Monroe (2008) suggested that three themes are evident in the literature. The first theme involves the memory and recall of potentially traumatic events. A general pattern of increased reporting among combat veterans has been consistently reported (Roemer et al., 1998; Southwick et al., 1997); however, the increase in reporting has not been large. Krinsley et al. (2003) interviewed 76 male military veterans twice over a 7-day interval and reported an average of 10.4 events reported at time 1 and 11.2 events reported at time 2.
They reported that 51% of participants reported more events at time 2, 38% reported fewer events, and 11% reported the same number of events (Krinsley et al., 2003). Krinsley and colleagues concluded that reporting of events is generally stable across time. Thus, it seems that differences in reporting between self-report and interview-based measures are more likely to be explained by differences between the two approaches than by memory difficulties.

The second theme that Monroe (2008) identified concerns how respondents interpret the various aspects of self-report measures, and how respondents interpret the trauma categories or event descriptions in particular. As Dohrenwend (2006) explained, respondents often interpret the categories provided as prompts in self-report measures in highly personal and idiosyncratic ways that can lead to a range of responses, from trivial to catastrophic events. If a respondent interprets the task differently than the researcher intended, or there is significant variability in how respondents interpret the task, there will be substantial error included in the results. That is, when such variability is present, what the researcher intends to measure likely will not match what the respondent infers about the task and subsequently reports.

The third theme expands upon the second to suggest that in addition to interpreting item prompts differently, respondents will incorporate the larger context of the task to try to infer the researcher's intentions in order to respond appropriately. That is, respondents will likely attend to all available information, such as the name of the study, the instructions on the measures, the types of events listed, the examples provided, and any other aspects of the setting, interviewer, or measure that might illuminate the task at hand. In addition to the influences of the setting and aspects of the actual measure, event reporting is often influenced by the respondent's views about what qualifies as stressful or traumatic. For example, respondents who attribute subsequent problems to experiencing a particular event are more likely to endorse that event than are respondents who claim that the event did not affect them. Monroe (2008) explains that recognition of such influences highlights how easily self-report measures of life events can be contaminated by extraneous information.

Differences in Available Measures of Trauma Exposure

As researchers have sought to more accurately and efficiently measure trauma exposure, they have developed a wide variety of trauma exposure measures. Current measures of trauma exposure vary in many ways. Most obvious is the type of traumatic events covered (narrow vs. broad). For example, the Sexual Abuse Questionnaire (SAQ; Lock, Levis, & Rourke, 2005) and the Combat Exposure Scale (CES; Keane et al., 1989) are two measures that assess only for traumas within a particular domain (sexual abuse and combat exposure, respectively). The Evaluation of Lifetime Stressors (ELS; Krinsley, Gallagher, Weathers, Kaloupek, & Vielhauer, 1996) and the Lifetime Trauma and Victimization History (LTVH; Widom, Dutton, Czaja, & DuMont, 2005) are two measures that cover a broad range of traumas over the lifetime.
Some measures, such as the Traumatic Stress Schedule (TSS; Norris, 1990), assess only for high-magnitude or life-threatening events, whereas other measures, such as the Potential Stressor Experiences Inventory (PSEI; Resnick, Falsetti, Kilpatrick, & Freedy, 1996), assess for both low- and high-magnitude stressors. Measures also vary in terms of the depth of coverage, or how specific or broad the item categories are. For example, although the TSS and the Trauma History Questionnaire (THQ; Green, 1996) cover the same broad spectrum of events, the TSS includes only seven specific event categories (items) while the THQ has 23. The TSS has one broad category for robbery, while the THQ has four narrowly defined categories that overlap with the TSS category: mugging, robbery (a theft by force), break-in with the respondent present, and break-in with the respondent absent. Some measures include behaviorally specific items such as: "Before the age of 18, did a man or boy ever put his penis inside any part of your body (mouth, anus, or [for women] vagina) when you didn't want him to?" (Fricker, Smith, Davis, & Hanson, 2003). Other measures provide only broad labels such as sexual assault or rape (e.g., the Traumatic Events Questionnaire [TEQ]; Vrana & Lauterbach, 1994). While some measures assess only for events that happened directly to the respondent (e.g., the Detailed Assessment of Posttraumatic Stress [DAPS]; Briere, 2001), others include events that the respondent witnessed or learned about (e.g., the THQ; Green, 1996). Measures also vary in the extent to which they assess for the respondent's subjective appraisal of the event, the extent of follow-up details elicited for each event, the time necessary for administration, the number of examples provided for each item category, and the level of psychometric research support. The multiple differences in the available trauma exposure measures highlight the complexity of measuring trauma exposure.

There is no consensus self-report measure of trauma exposure. One study surveying 227 trauma researchers and clinicians showed that the most widely used measure, the Posttraumatic Stress Diagnostic Scale (PDS; Foa, 1996), was used by only 16% of the sample for clinical purposes and by only 11% of the sample for research purposes (Elhai et al., 2005). The same study identified 12 different self-report measures of trauma exposure in use among the 227 respondents; a recent PsycINFO search identified at least 18 different self-report trauma exposure measures published since 1990. Most of these measures were rationally derived and were created on an ad hoc basis to be used in a particular study with a particular population. Many have not been adequately validated, and available psychometric support is often very limited (Gray et al., 2004). A review of the literature revealed that only ten available measures have published test-retest reliability evidence for reported trauma exposure, and of those measures only five present test-retest reliability information for individual items (see Goodman et al., 1998; Gray et al., 2004; Green, 1996; Kubany et al., 2000; McHugo et al., 2005).

Current Study

Taken together, research on the reliability of trauma exposure reporting and a review of the diversity of trauma exposure measures suggest that more work is needed to refine trauma exposure assessment. The current study sought to contribute to this process of refinement by examining the performance of three existing trauma exposure measures.
The purpose of this study was to examine the implications of differences in content and format for trauma assessment by comparing the test-retest reliability and concurrent validity of three widely used trauma assessment measures: the Life Events Checklist (LEC; taken from the Clinician-Administered PTSD Scale; Blake et al., 1995), the Detailed Assessment of Posttraumatic Stress (DAPS; Briere, 2001), and the Posttraumatic Stress Diagnostic Scale (PDS; Foa, 1996). The three measures were selected because they are widely used, were designed to assess a broad range of events over the lifetime, were designed for use with a general population, and yet have distinct differences in content and format. In addition, there is limited published research on the psychometric properties of the trauma assessment portion of each measure. Each of the three measures is described below. The content of each measure is displayed in Table 1.

Life Events Checklist. The LEC is a trauma exposure screening measure originally designed to precede administration of the Clinician-Administered PTSD Scale (CAPS; Blake et al., 1990). Developed together with the CAPS by the National Center for PTSD, the LEC was intended to assess for a wide array of PTEs that a respondent may have experienced. The CAPS is designed to evaluate the presence and severity of posttraumatic symptoms that may have followed the event (Blake et al., 1995). The LEC instructs the respondent to select the event that they "consider the worst overall" of all the events that they endorsed, and to provide details about the event. In addition to instructions to briefly describe the worst event in narrative format, the LEC inquires as to how the event was experienced, the degree of life threat associated with the event, whether serious injury or death was threatened or occurred, whether the respondent felt terrified, horrified, or helpless in response to the event, how old the respondent was at the time of the event, and how many times the respondent experienced similar events.

The LEC consists of 17 items. Sixteen items assess exposure to specific categories of traumatic events known to contribute to PTSD or other posttraumatic difficulties (natural disaster, sexual assault, etc.), and the final item, labeled "other," assesses exposure to events that do not fit into one of the 16 specific categories. Many of the categories include sample events to prompt the individual regarding the types of events that may belong in the category. For example, category six is labeled "Physical assault" and includes the following sample events in parentheses: "being attacked, hit, slapped, kicked, beaten up." Respondents can indicate if they have ever experienced an event that they believe fits in a category by checking one or more of the following options: "Happened to me," "Witnessed it," "Learned about it," "Not sure," and "Doesn't apply." Initial instructions indicate that the measure lists "a number of difficult or stressful things that sometimes happen to people" and instruct respondents to consider their entire life when completing the measure. The LEC items were developed through inspection of existing measures of trauma exposure, review of the PTSD literature, and consultation with experts in the field of traumatic stress and PTSD. Following initial item generation, items were revised through consultation with other trauma and PTSD researchers.
Though the LEC is widely used as a screening measure, only one published study has evaluated its psychometric properties (Gray et al., 2004). Regarding reliability of the LEC as a measure of direct trauma exposure (including only events endorsed as "happened to me") among 104 college undergraduates, 7 of the 17 items achieved a kappa of .60 or better over a period of 5 to 14 days. Only one item ("caused serious injury or death to another") failed to achieve a kappa of .40, while the remaining items were above .50. The mean kappa for all items was .61, and the retest correlation was r = .82, p < .001 (Gray et al., 2004).

Detailed Assessment of Posttraumatic Stress. The DAPS (Briere, 2001) is a 104-item self-report measure designed to assess for history of trauma exposure and reactions to past traumatic events such as dissociation, symptoms of PTSD, alcohol and substance abuse, and suicidal ideation. The trauma assessment portion of the measure includes 13 prompts that ask if a particular type of traumatic event has ever happened to the respondent. For the purposes of this study, only responses to the first 13 items will be evaluated. Unlike the LEC, the DAPS does not include category labels. Instead, the DAPS includes full-length prompts such as: "An accident at work or at home, when you were seriously hurt or were afraid you would be hurt or killed?" Also, the DAPS does not allow for events that the respondent witnessed or learned about, with the exception of item 12, which asks about "Seeing someone else get seriously hurt or killed?" All but one of the DAPS trauma assessment items include the phrase "when you were seriously hurt or were afraid you would be hurt or killed." Respondents are instructed to select the experience that currently bothers them the most, and answer the remaining questions based on the experience they selected. They are also instructed to provide a brief written narrative describing the experience they selected. Eight questions assess Criterion A2 by asking the respondent to rate their experience in terms of their fear, helplessness, horror, guilt, shame or humiliation, disgust, and fear of death during or after the event on a 5-point scale. The remainder of the measure is comprised of items designed to assess DSM-IV-TR PTSD Criteria B-F.

The DAPS was normed on a sample of more than 400 respondents from the general population (Briere, 2001). Individual scale scores on the DAPS are converted to T scores and can be compared to group norms to determine the severity and clinical importance of the score. The DAPS also includes two validity scales that help identify individuals who over- or underreport psychological symptoms. In terms of PTSD diagnostic status, the DAPS effectively approximates the diagnostic results of the CAPS. In a study in which participants were administered both the DAPS and CAPS, the DAPS showed good sensitivity (.88) and specificity (.86), as well as a good diagnostic efficiency rating (.87; Briere, 2001). The DAPS manual does not provide any information, nor is there any published research, regarding the reliability or validity of the trauma assessment portion of the measure.

Posttraumatic Stress Diagnostic Scale. The PDS (Foa, 1996) is a 49-item measure on which respondents identify whether or not they have "lived through or witnessed a very stressful and traumatic event at some point in their lives." A list of 11 categories of events (some with sample events following the category label) is provided, as well as one "other" category.
Again, for the purposes of this study only responses to the first 12 items will be evaluated. The respondent is instructed to select an event that they experienced that "bothers you the most" and to provide a brief written narrative describing the event they selected. Following the written narrative, six questions prompt for more information about the event, including when the event occurred and how the participant responded to the event (Criterion A2). The remainder of the PDS includes items designed to assess for subsequent PTSD symptoms experienced in the last month, with respect to the event they identified as their worst event.

The trauma exposure portion of the PDS originally consisted of 10 categories of traumatic situations, and two additional categories were added based on feedback that the authors solicited from 15 experts in PTSD-related research. The feedback was qualitative in nature and was related to the range of event categories and the phrasing of the items (Foa, Cashman, Jaycox, & Perry, 1997). The diagnostic accuracy of the PDS has been evaluated by comparing PTSD diagnostic status generated from the PDS with PTSD diagnostic status from the Structured Clinical Interview for DSM-III-R (SCID; Williams et al., 1992). The PDS showed good sensitivity (.89) and good specificity (.75), with a kappa of .65 and 82% agreement, when compared with the SCID PTSD diagnosis (Foa, 1996). Similar to the DAPS, the PDS manual does not include information regarding the reliability and validity of the trauma assessment portion of the measure. However, it does suggest good internal consistency reliability for each of the three PTSD symptom cluster scales (reexperiencing α = .84, avoidance α = .88, and hyperarousal α = .86; Foa, 1996).

The primary difference between the three measures involves the level of exposure to potentially traumatic events that each measure is designed to include. That is, the DAPS instructs respondents to endorse only events that they experienced, whereas the PDS instructs respondents to report on events that they "lived through or witnessed." In addition to events experienced or witnessed, the LEC includes a response option for events that the respondent "learned about happening to someone close to you." The three measures also differ in terms of the types of events included. While there is considerable overlap of event type across the three measures (see Table 1 for a visual display of the overlap in categories), there are as many as six event categories included on the LEC but not on the DAPS or PDS. One of the event categories included in the LEC but excluded from the DAPS and PDS, the sudden unexpected death of a loved one, has been shown to frequently precipitate PTSD (Breslau, 2002). In one large epidemiological study, the sudden unexpected death of a loved one was reported by 60% of the sample and accounted for 31% of all PTSD cases in the sample (Breslau et al., 1998). These and other differences between the measures may lead to differences in reporting among respondents.

Hypotheses

The present study used a between-groups design to compare the test-retest reliability and concurrent validity of three self-report trauma exposure measures. In addition, the study elicited and evaluated participant feedback on the three measures,
This feedback was intended to highlight the process participants used to complete the measures and to provide vital information about how various aspects of the measures are interpreted by participants, and how those aspects affect what is reported. The following hypotheses guided data analysis. Hypothesis 1. It was hypothesized that each self-report measure of trauma exposure would show moderate to high test-retest reliability (e.g., r = .70 to .90) for total number of items endorsed at time 1 and at time 2, and for individual items endorsed at time 1 and at time 2. In the development of the Stressful Life Events Screening Questionnaire (SLESQ; Goodman et al., 1998), the correlation between the number of events reported at time 1 and at time 2 was r = .89, and the individual item kappas ranged from .31 to 1.00 (median k = .73). Similarly high reliability has been demonstrated in other measures; developers of the TLEQ reported an average test-retest correlation for all items of .84 (Kubany et al., 2000). In a previous validation of the LEC, test-retest reliability for individual items endorsed ranged from .37 to .84, with an average kappa of .61 (Gray et al., 2004). In previous validation studies of the DAPS and PDS, test-retest reliability for total number of events, individual items, and selection of worst event has not been reported. The current study measured the total number of events endorsed at time 1 and at time 2, and reported the correlation between those two totals. The study also included estimates of Cohen?s Kappa for individual items though this calculation may be artificially low due to low base rates for some items (e.g., Exposure to a toxic substance, Combat). For this reason, percent agreement was reported as well. The kappa statistic 18 was also used to calculate agreement between the event selected as the worst at time 1 and at time 2. Hypothesis 2. It was also hypothesized that in comparison with the structured trauma history interview, the LEC would demonstrate higher sensitivity, or fewer false negatives, than the DAPS and PDS. That is, there would be few events reported on the trauma history interview that were not reported on the LEC. Similarly, it was hypothesized that the DAPS would demonstrate higher specificity, or fewer false positives, than the LEC or PDS. That is, there would be few events reported on the DAPS that were not also reported on the trauma history interview. This hypothesis deals with the accuracy of trauma exposure reporting on the self-report measures. Hypothesis 3. Given the wider range of traumatic event categories, and the opportunity to endorse multiple levels of exposure, it was hypothesized that the LEC would most accurately predict the participant?s worst event as determined by the trauma history interview. In order to evaluate each measure from the participants? perspective, qualitative feedback regarding the general format, specific items, and discrepancies between the self- report measure and the interview was solicited. The recorded feedback was reviewed and a coding system was developed to summarize and quantify the qualitative feedback. Because this was an exploratory question, no specific hypothesis was tested. METHOD Design and Analysis This study employed a between-groups, test-retest design in which participants were assigned to one of three groups through block randomization. The three groups were 19 defined by which trauma exposure measure (the independent variable) was assigned, either the LEC, the DAPS, or the PDS. 
The dependent variables were reliability coefficients, for consistency of event reporting across administrations, in the form of Pearson?s r and Cohen?s kappa. Cohen?s kappa also served as a measure of concurrent validity as a means of calculating the rate that events were reported on both a self-report measure and on the criterion measure, a trauma history interview. Participants Participants were male and female undergraduates recruited from psychology courses at Auburn University through an announcement posted on Sona Systems, an online research management system. Participants self-identified as having experienced ?stressful live events? and all participants were accepted for inclusion in the study regardless of the nature or number of the stressful life events they endorsed. Participants were compensated with documentation of their participation to be used for extra-credit in their psychology courses. The Auburn University Institutional Review Board approved this study. The initial recruited sample consisted of 116 participants, of whom 91 (78%) completed both phases of the study. The final sample (n = 91) was predominantly female (n = 66; 73%), and Caucasian (n = 69; 78%). Participants? ages ranged from 18 to 32 (M = 20.26; SD = 2.38). Most were full-time students (n = 83; 96%), and single (n = 80; 92%). The distribution in education status of participants was 35% in their freshman year (n = 31), 25% in their sophomore year (n = 22), 18% in their junior year (n = 16), and 19% in their senior year (n = 17). 20 Procedure The study included two phases. The first phase (time 1) was a questionnaire session in which participants reviewed and signed the informed consent document, completed one of three self-report trauma exposure and PTSD measures, and completed two measures of depression and anxiety symptoms. An undergraduate research assistant, with graduate student supervision, typically conducted the questionnaire sessions. A standard script guided session administration to ensure consistency of administration across sessions. The second phase (time 2) was an interview session that took place between 2 and 14 days (M = 7.57; SD = 3.93) after the questionnaire session. Participants repeated the same trauma exposure and PTSD self-report measure that they completed in the first session; they also completed a detailed trauma history interview and a structured diagnostic interview for PTSD. The trauma history interview served as a comprehensive measure of lifetime trauma exposure, and, for the purposes of this study, was the criterion against which results from the self-report measures were compared. The interviewers were blind to the results of either administration of the self-report measure at the time of the interviews. At the end of the interview session, participants were debriefed regarding the purposes of the study and regarding their responses to both the self-report measures and the trauma-history interview. At this point, the interviewer reviewed the results of the original self-report measure, identified any discrepancies between the participant?s responses on the self-report measure and their responses in the trauma history interview, and prompted the participant to discuss the discrepancies. The interviewer recorded the participant?s responses verbatim when possible. Participants were then debriefed 21 regarding the purposes of the study and common reactions to discussing stressful life events. 
Participants were then debriefed regarding common reactions to discussing stressful life events and were given a referral list that provided contact information for the mental health resources available in the community. The interviews were conducted by graduate students under the supervision of Dr. Weathers. Graduate students were trained in proper administration of the interview protocol, and Dr. Weathers observed and co-rated selected interview sessions to ensure that the interviews were conducted in a valid, consistent manner.

Measures

In the questionnaire session (time 1), participants completed one of the three self-report trauma exposure and PTSD measures described above. The measures in the questionnaire session were administered in the following order: demographics form, self-report trauma exposure and PTSD measure, Beck Anxiety Inventory (BAI; Beck, Steer, & Brown, 1993), and Beck Depression Inventory-Second Edition (BDI-II; Beck, Steer, & Brown, 1996). Following a review of the informed consent, the interview session (time 2) included these measures in the following order: self-report trauma exposure and PTSD measure, Life Events Checklist Interview (LEC-I; an interview version of the LEC), and Clinician-Administered PTSD Scale (Blake et al., 1990).

Self-report trauma exposure and PTSD symptom measures. Trauma history was assessed using one of the three measures described previously: the LEC, the DAPS, or the PDS. Because the LEC is designed to serve as a screening measure for a diagnostic interview, it focuses only on identifying the potentially traumatic life events that the individual has experienced and does not inquire about PTSD symptoms. For this reason, a DSM-IV-TR-correspondent measure of PTSD symptoms, the PTSD Checklist (PCL; Weathers, Litz, Herman, Huska, & Keane, 1993), was administered together with the LEC. The specific version used, the PCL-S, is a 17-item self-report measure that assesses each of the 17 DSM-IV-TR symptoms of PTSD (Weathers et al., 1993). With the event they identified on the LEC as their worst event in mind, participants indicated how much they were bothered by each PTSD symptom in the past month using a five-point scale (1 = not at all to 5 = extremely). The PCL has demonstrated good internal consistency (alpha = .94) and temporal stability (retest r = .88, 1-week interval) among college students (Ruggiero, Del Ben, Scotti, & Rabalais, 2003). Among motor vehicle accident and sexual assault victims, PCL total scores correlated strongly with total scores from a structured PTSD diagnostic interview (CAPS; r = .929; Blanchard, Jones-Alexander, Buckley, & Forneris, 1996).

Measures of anxiety and depression. The Beck Anxiety Inventory (BAI) is a 21-item measure of anxiety symptoms (e.g., unable to relax, fear of losing control, heart racing; Beck, Steer, & Brown, 1993). Participants endorsed the degree to which they had been bothered by each symptom during the past week on a scale of 0 (Not at All) to 3 (Severely). During development of the BAI, Beck et al. (1988) reported full-scale internal consistency and test-retest reliability estimates of .92 and .75, respectively. The BAI correlated highly (r = .81) with the Anxiety subscale of the Symptom Checklist-90-Revised (Steer, Ranieri, Beck, & Clark, 1993) and moderately (r = .72) with the State-Trait Anxiety Inventory (Kabacoff, Segal, & Hersen, 1997), suggesting good convergent validity.
The Beck Depression Inventory-Second Edition (BDI-II) is a 21-item measure of current symptoms of depression (e.g., loss of pleasure, suicidal thoughts, changes in sleeping patterns; Beck, Steer, & Brown, 1996). Ten items correspond to the DSM-IV diagnostic criteria. Participants endorsed the degree to which they had been bothered by each symptom during the past two weeks on a scale of 0 to 3. Internal consistency alpha coefficients for the BDI-II have ranged from 0.89 (Steer et al., 2000) to 0.92 (Beck et al., 1996) in adult psychiatric samples, and from 0.89 (Steer & Clark, 1997; Whisman et al., 2000) to 0.93 (Beck et al., 1996) in college samples. A high correlation between BDI-II total scores and Reynolds Adolescent Depression Scale (Reynolds, 1987) total scores among adolescents (r = .84) suggests good convergent validity (Krefetz, Steer, Gulab, & Beck, 2002).

Both the BAI and BDI-II were included in this study to facilitate a comparison of levels of anxiety and depression between the three groups. This comparison was intended to verify that the groups did not differ in any way unrelated to the independent variable. Due to the potential for intense emotional reactions in response to recalling traumatic experiences, the BDI-II was also included to identify any participants in need of immediate assistance (e.g., participants who endorsed suicidal ideation or intent).

Interview trauma exposure and PTSD symptom measures. The second phase of the study involved the readministration of the original self-report trauma exposure measure, as well as two structured interviews. The first interview was the LEC-I, an interview version of the LEC, the Criterion A assessment portion of the CAPS (Blake et al., 1990). The LEC-I was followed by administration of the CAPS.

The LEC-I is a trauma exposure interview that assesses whether the participant has ever experienced, witnessed, or learned about (limited to events involving a close friend or family member) one of 17 types of potentially traumatic events. The event categories are identical to those of the LEC. Initially, the participant was asked only if they had ever experienced a particular event within each of the 17 event categories. After all 17 event categories were probed, the interviewer revisited only the events that the participant endorsed and inquired about details relating to the event. The interviewer was instructed to prompt the participant for a brief narrative explanation of the event, how old the participant was at the time, who the perpetrator was (if applicable), and how many times the event happened. Information about the degree of life threat, threat of serious injury, or actual injury was obtained, as well as information about the individual's emotional response to the event. Participants were instructed to report on the worst event if they had experienced more than one event that applied in a particular category. Participants were then asked to choose the worst event overall, or the event that had caused them the most problems.

The CAPS is a semi-structured diagnostic interview generally considered the "gold standard" for the assessment of PTSD. It has been the primary diagnostic or outcome measure in over 200 published empirical studies and has been translated into at least 10 languages (Weathers, Keane, & Davidson, 2001). Blake et al. (1990) originally reported excellent inter-rater reliability (r = .92 to .99) among combat veterans for all three subscales of the CAPS (reexperiencing, avoidance, and arousal).
Weathers et al. (2001) reported evidence of reliability among multiple populations, with coefficient alphas ranging from .73 to .94 and 2- to 3-day test-retest reliability ranging from .78 to .98. Strong correlations with existing measures of PTSD (r = .91 with the Mississippi Scale for Combat-Related PTSD; r = .89 with the PTSD module of the SCID) support the convergent validity of the CAPS among combat veterans as well (Keane et al., 1988; King, Keskin, King, & Weathers, 1998). Among motor vehicle accident victims, Blanchard et al. (1995) reported a strong correlation (r = .93) between the total score of the PCL and the CAPS.

For the administration of the CAPS, participants were instructed to respond to questions with reference to only the event they selected as their worst event during the LEC-I. The CAPS includes questions about the frequency and intensity of each of the 17 DSM-IV-TR PTSD symptoms in the past month, as well as an interviewer rating regarding how related the symptom is to the selected event. The CAPS also assesses for the onset, duration, and global severity of symptoms, as well as functional impairment. Interviewers rated each symptom on a 5-point scale for both frequency and intensity. These ratings were used to generate a PTSD diagnosis as well as continuous summary scores for each of the PTSD symptom clusters.

Data Coding

In order to evaluate the previous hypotheses, it was necessary to prepare the data for analysis. Data were prepared in three ways in order to facilitate data analysis: worst event coding, summary variable creation, and qualitative feedback category creation.

Worst event coding. Upon completion of the trauma exposure portion of each measure, participants were asked to identify their worst event, or the event that bothers them the most. Because the LEC prompts for a written description of the participant's worst item, but does not link that description with a particular item (e.g., item number or category name), it was necessary to determine, from the written narrative, if the participant was describing the same event at time 1 and at time 2. Initially, two raters read the time 1 worst event narratives, independent of each other, and assigned a primary event category (item number 1-17) to the written narrative. Upon completion of all 30 cases, the raters then read the time 2 worst event narratives and assigned a primary event category. The raters achieved perfect agreement on the event classification ratings. Next, each rater independently examined the LEC time 1 and time 2 narratives simultaneously, blind to previously assigned event categories, and rated whether or not the narratives were referring to the same event within a particular event category. This rating was also applied to the narratives provided on the DAPS and PDS. This rating was made to address the possibility that a participant could have selected the same event category across administrations, but referred to two distinct events within the same event category (e.g., two separate car accidents). Again, raters achieved perfect agreement and no participants described different events within the same category as their worst event.

Summary variable creation. Because each of the measures differed in terms of format and organization (e.g., event categories and exposure levels; see Table 1), a direct, item-by-item comparison of each of the measures and the trauma history interview could not be made.
Therefore, it was necessary to create summary trauma event categories to allow for a comparison between measures. Similar self-report items were collapsed to create a small number of summary variables. The primary researcher evaluated the content of the three self-report measures and grouped similar items together. Following feedback from the research team, changes were made to the groupings that resulted in seven summary variables: accident (ACC), natural disaster (NAD), physical assault and abuse (PAA), sexual assault and abuse (SAA), combat (COM), life-threatening illness or injury (ILL), and other (OTH). Four additional summary variables represent the items that the measures did not have in common: imprisonment (IMP), torture (TOR), sudden unexpected death of a close friend or family member (SUD), and harm you caused to somebody else (HAR). The first seven summary variables are the primary comparison variables, while the last four are the secondary comparison variables. See Table 2 for a listing of the individual items that contributed to each summary variable.

Qualitative feedback categories. Following administration of the trauma history interview, the interviewer reviewed the original self-report measure and identified any discrepancies between the events reported on that measure and the events reported on the interview. The interviewer informed the participant of any discrepancies and asked the participant to explain them. The interviewer recorded any discrepancy attributions provided by participants. In order to identify patterns in the discrepancy attributions, each recorded attribution was collected into an electronic document so that it could be reviewed separately from the measure and file on which it was originally recorded. The primary researcher then reviewed each of the attributions and attempted to create a category label that reflected the content of the attribution. This process resulted in 22 initial categories that were then reviewed by the research team and distilled into nine final categories. The categories included: remembered on interview, forgot on interview, event too minor, exposure level, reluctant, mistake, limited categories, emotional state, "good participant," and reevaluated. A research team member and a research assistant independently coded the discrepancy attributions using the final nine categories. The two researchers then compared their codings and discussed any differences. They created a consensus coding, represented graphically in Figure 1.

RESULTS

This study compared the temporal stability and concurrent validity of three self-report measures of trauma exposure by randomly assigning participants to complete one of three trauma exposure measures. Participants completed each measure twice over an interval that averaged 7.57 days (SD = 3.93) and completed both a trauma history interview (the criterion measure) and a PTSD diagnostic interview. The age, gender, and race of participants did not differ significantly by group membership. The type and frequency of events reported in the trauma history interview are displayed in Table 3.

Prior to evaluating the reliability and concurrent validity of each screener, the final groups were evaluated for any differences that might influence the results. BAI and BDI-II scores were compared between the three groups using a one-way analysis of variance (ANOVA), as sketched below.
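The following is a minimal sketch of this kind of group-equivalence check. The file and column names are hypothetical, and the thesis lists SPSS 16.0 as the software actually used for the analyses; the sketch simply illustrates a one-way ANOVA across the three measure groups.

```python
# Sketch of the group-equivalence check: one-way ANOVAs comparing BAI and BDI-II
# totals across the LEC, DAPS, and PDS groups. Hypothetical file and column names.
import pandas as pd
from scipy import stats

df = pd.read_csv("questionnaire_session.csv")  # columns assumed: group, bai_total, bdi_total

for outcome in ["bai_total", "bdi_total"]:
    # One sample of scores per measure group (LEC, DAPS, PDS)
    samples = [scores.dropna() for _, scores in df.groupby("group")[outcome]]
    f_stat, p_value = stats.f_oneway(*samples)
    print(f"{outcome}: F = {f_stat:.2f}, p = {p_value:.3f}")
```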
Results were nonsignificant for both the BAI, F(2, 88) = .359, p = .699, and the BDI-II, F(2, 88) = .49, p = .614, suggesting that the groups did not differ meaningfully in current anxiety and depression symptoms. The three groups were also compared on test-retest interval using a one-way ANOVA. Again, results were nonsignificant, F(2, 88) = .654, p = .522, suggesting that the retest interval did not differ meaningfully between groups. The impact of interval length on test-retest reliability was also examined by calculating correlations between the total number of events reported at time 1 and at time 2 for two groups: participants with a test-retest interval between two and five days and participants with a test-retest interval greater than 12 days. A visual inspection indicated that correlations increased from the short-interval group to the long-interval group for each measure, suggesting that longer intervals did not negatively impact reliability.

Hypothesis 1

To evaluate the test-retest reliability of the total number of items endorsed at time 1 and at time 2 for each measure, Pearson correlations were calculated. Results from the LEC (n = 30) were evaluated in three ways: including only events that the participant reported experiencing directly; including events either experienced or witnessed; and including events that the participant either experienced directly, witnessed, or learned about happening to someone close to them. Allowing only for direct exposure, the LEC exhibited good temporal stability (r = .85, p < .001) for the total number of events reported across administrations. Including events either experienced or witnessed lowered temporal stability (r = .76, p < .001), as did including events either experienced, witnessed, or learned about (r = .79, p < .001). Reliability for the total number of events endorsed across administrations was good for both the DAPS (n = 30; r = .82, p < .001) and the PDS (n = 31; r = .81, p < .001).

Consistency of reporting across administrations was also evaluated for individual items in each of the three measures using Cohen's kappa. Percent agreement was reported as well, both because of the observed low base rates within certain categories of trauma and because very low or very high base rates produce low kappas despite moderate or high percent agreement (Langenbucher, Labouvie, & Morgenstern, 1996). For this analysis, LEC results are reported in two ways: Table 5 includes only events that were directly experienced, and Table 6 includes events either experienced or witnessed. As seen in Table 5, kappa values for directly experienced events reported on the LEC ranged from .28 to 1.0, with an average kappa of .74. Only four items had kappa values below .70. Percent agreement values ranged from .66 to 1.0, with only one item below 75% agreement (sudden, unexpected death of someone close to you). Similarly, kappa values for events either experienced directly or witnessed on the LEC ranged from .35 to 1.0, with an average kappa of .68. Percent agreement values ranged from .66 to 1.0, with only one item below 75% agreement (physical assault). Kappa values for individual items endorsed on the DAPS (see Table 7) ranged from .28 to .81 (M = .66), with four items below .70. Percent agreement values ranged from .80 to .93. Kappa values for individual items endorsed on the PDS (see Table 8) ranged from .46 to 1.0 (M = .76), with three items below .70. Percent agreement values ranged from .90 to 1.0.
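To make the computations behind these statistics concrete, the following is a minimal sketch in Python, shown here only for illustration (the endorsement and total-score vectors are hypothetical, not study data):

from scipy.stats import pearsonr

def percent_agreement(t1, t2):
    # Proportion of participants whose endorsement is identical at both administrations.
    return sum(a == b for a, b in zip(t1, t2)) / float(len(t1))

def cohens_kappa(t1, t2):
    # Chance-corrected agreement for a single dichotomous item (1 = endorsed, 0 = not).
    n = float(len(t1))
    po = percent_agreement(t1, t2)
    pe = (sum(t1) / n) * (sum(t2) / n) + ((n - sum(t1)) / n) * ((n - sum(t2)) / n)
    return (po - pe) / (1 - pe)

item_t1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]    # hypothetical item-level endorsements
item_t2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
totals_t1 = [3, 5, 2, 7, 4, 6, 1, 8, 3, 5]  # hypothetical total event counts
totals_t2 = [4, 5, 2, 6, 4, 7, 1, 7, 2, 5]

print("kappa = %.2f, agreement = %.2f"
      % (cohens_kappa(item_t1, item_t2), percent_agreement(item_t1, item_t2)))
r, p = pearsonr(totals_t1, totals_t2)
print("test-retest r = %.2f (p = %.3f)" % (r, p))

The same kappa could equivalently be computed from the four cell counts of the 2 x 2 tables summarized in Tables 5 through 8.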
Upon completion of the trauma exposure portion of each measure, participants were asked to identify their worst event, or the event that bothered them the most. The reliability of this designation was examined using both Cohen's kappa and percent agreement. For agreement of worst-event designation across administrations, the LEC achieved a kappa of .85 (p < .001) and a percent agreement of .86. The DAPS achieved a kappa of .71 (p < .001) and a percent agreement of .73, and the PDS achieved a kappa of .89 (p < .001) and a percent agreement of .90.

Groups were also compared on the total number of events reported on the self-report measure, the total number of events reported during the trauma history interview, and the total number of Criterion A events reported during the trauma history interview. Means, standard deviations, one-way ANOVAs, and Tukey Honestly Significant Difference (HSD) tests for these comparisons are reported in Table 4. The analysis was significant for the total number of events reported on the self-report measure, F(2, 88) = 78.08, p < .001, for the total number of events reported on the trauma history interview, F(2, 88) = 9.64, p < .001, and for the total number of Criterion A events reported during the trauma history interview, F(2, 88) = 6.06, p = .003. Tukey HSD tests (α = .05) indicated that participants reported significantly more events on the LEC than on the DAPS or PDS, which did not differ significantly from each other. Similarly, LEC participants reported significantly more events on the trauma history interview than did DAPS or PDS participants, and LEC participants reported significantly more Criterion A events on the trauma history interview than did PDS participants.

Hypothesis 2

The second hypothesis was designed to test the accuracy of event reporting on the self-report measures as compared with the trauma history interview. Because each measure allows events to be recorded at different levels of exposure (e.g., experienced directly, witnessed, or confronted with), accuracy analyses were performed in two ways. Table 9 includes only events that were experienced directly, and Table 10 includes events that were experienced directly, witnessed, or confronted with. For events experienced directly only, the LEC achieved the highest average kappa (.53) and percent agreement (.88). Average kappa values for the DAPS and PDS were .32 and .26, respectively, and percent agreement values for the DAPS and PDS were .77 and .83, respectively. For all three levels of exposure, the PDS achieved the highest average kappa (.35) and percent agreement (.80). Average kappa values for the LEC and DAPS were .25 and .29, respectively, and percent agreement values for the LEC and DAPS were .73 and .75, respectively.

Hypothesis 3

The third hypothesis stated that the LEC would most accurately identify participants' worst events as determined by the LEC-I. In 73% of LEC cases (n = 22), the identified worst event remained the same from the self-report measure to the interview, compared with only 50% of DAPS cases (n = 15) and 48% of PDS cases (n = 15). Among LEC participants, three indicated that their worst event was endorsed in the "other" category on the self-report measure and four indicated that their worst event was endorsed in the "other" category on the interview. No DAPS participants selected the "other" category when identifying their worst event on the self-report measure, but five did so on the LEC-I.
Among PDS participants, five selected the "other" category as their worst event on the self-report measure, but only one did so on the interview.

Qualitative Feedback

A number of themes emerged from the qualitative feedback (see Figure 1). The most common discrepancy attribution involved minimizing the event, that is, not reporting the event because the participant did not judge it to be severe enough to report. This category of attributions, however, reflected two different patterns of reporting: participants in the LEC group often minimized, and therefore did not report, events on the trauma history interview, whereas DAPS and PDS participants often provided this attribution for events that they did not report on the self-report measure but did report during the interview. The second most common attribution described participants who reported an event on the trauma history interview but not on the self-report measure because they concluded that the event did not belong in any of the categories provided on the self-report measure. The third most common attribution for discrepant reporting was that the event in question did not happen directly to the participant, and the participant felt that the self-report measure did not allow for reporting events that were witnessed or that participants were confronted with (e.g., a serious accident, the suicide of a family member, the sudden loss of a loved one). This category of discrepancy attributions was limited entirely to the DAPS and PDS groups. See Table 11 for examples of discrepancy attributions.

DISCUSSION

The purpose of this study was to compare the test-retest reliability and concurrent validity of three commonly used self-report measures of trauma exposure. This comparison was made after highlighting the differences between the measures. It was hypothesized that differences in content (e.g., the types of potentially traumatic events included on the measure) and differences in exposure level would contribute to differences in reliability and concurrent validity.

Hypothesis 1

The hypothesis that each of the measures would show moderate to good test-retest reliability was supported, but with some variability in the results. Including only events directly experienced resulted in the highest reliabilities. For the LEC, including events witnessed or confronted with lowered the reliability. Krinsley and colleagues (2003) reported similar findings and suggested that events that are directly experienced are subjectively experienced as more traumatic and thus more easily recalled and reported. This explanation seems to fit the present study, as the events that met Criterion A were most often events that the participant had directly experienced.

Prior to a discussion of the individual item reliabilities, it is important to acknowledge a general trend in the results. Participants who received the PDS reported the fewest events, compared to participants who received the DAPS or the LEC (see Table 4). This finding appeared to influence event reporting on the trauma history interview as well. Participants who received the LEC reported more events on the trauma history interview than participants who received the DAPS, and both reported more events than participants who received the PDS. The same pattern is also evident in the number of events fulfilling Criterion A that were reported on the interview.
This puzzling finding may have multiple explanations. It is possible that interviewers were not blind to the results of the original screener prior to administering the trauma history interview; however, interviewers were explicitly instructed not to look at the original measure before the interview. Even if an interviewer did view the results of the initial self-report measure, the trauma history interview still clearly asks about each category of events, and prior knowledge of a participant's reported trauma exposure should not preclude the participant from reporting, or the interviewer from recording, events. It is also possible that interviewers formed a priori opinions about response patterns corresponding to each of the measures; that is, interviewers may have assumed that participants who received the DAPS would report fewer events. However, such an assumption would not preclude interviewers from asking the questions, nor would it preclude participants from responding appropriately. Because this study employed a block-randomized design rather than full randomization, it is also possible that the groups were formed with preexisting differences in levels of trauma exposure. While such a result is not likely, it cannot be ruled out based on the experimental design.

A more likely explanation is that participants were primed by the measure they received to form assumptions about the "type" of traumatic events the study was interested in. A brief visual inspection of the screeners (see Table 1) and their instructions suggests that the PDS and DAPS set a higher threshold for event severity than does the LEC. Research on the cognitive aspects of survey methodology indicates that how a question is asked can have a significant impact on responses and that respondents take cues about what a researcher is interested in from the response alternatives available (Schwarz, 1999). It is likely that when participants began completing the original measure, they formed enduring assumptions about the type of events that were appropriate to report. During the interview, participants often prefaced their responses with something similar to, "This probably isn't what you are looking for, but..." That is, they assumed that the researcher was looking for reported events that shared certain characteristics or met a certain threshold. It is possible that such information was conveyed by the name of the study (Assessment of Stressful Life Events) or by the researcher, but it is more likely that various aspects of the self-report measure contributed to the assumptions that participants formed about the study.

Because so few events were endorsed on the PDS, a comparison of the reliability of individual items across all three measures was not warranted. However, for the LEC (under both scoring approaches) and the DAPS, there were patterns of high and low reliability. For example, items assessing sexual assault consistently demonstrated the highest reliability, and items assessing witnessing serious injury or death to others consistently demonstrated the lowest reliability. Similarly, exposure level appeared to moderate the reliability of individual items such as natural disaster and life-threatening illness or injury: when witnessing these events was allowed, their reliability was among the lowest of all events, but when direct exposure was required, their reliability was among the highest.
Such findings suggest that participants have difficulty reliably conceptualizing what it means to have witnessed particular types of events.

Hypothesis 2

While comparing the concurrent validity of the three trauma exposure measures was a primary goal of this study, creating an appropriate vehicle for comparison proved difficult. Because the measures differed in exposure level and individual item content, it was necessary to combine some items that did not correspond perfectly with items on the other measures or on the interview. Because of this method, comparisons are not precise at the item level, although the newly created summary variables did allow for comparison across measures. It is important to recall, however, that a given primary summary variable may draw on as many as five individual items from one measure and as few as two from another (e.g., PAA: five DAPS items versus two LEC or PDS items), or four items from one measure and only one from another (e.g., ACC: four LEC items versus one PDS item; see Table 2). Such an imbalance in item contributions gives a slight advantage to the measure with more "input" items, as there are more opportunities for a participant to report an event and thus a higher likelihood of agreement between the measure and the interview. In addition, some of the summary variables could not be compared because base rates of endorsement were too low (e.g., COM, IMP, TOR). The secondary summary variables were also not appropriate for between-group comparison, as some received input from only two of the three measures (e.g., the DAPS does not have a category for imprisonment).

With the previous caveats in mind, there were some clear patterns in the concurrent validity data. For example, when all three levels of exposure were included, the PDS appeared to correspond best with the interview, while the LEC demonstrated the poorest correspondence. The correspondence between the DAPS and the interview was hurt by the results for two summary variables: ILL (life-threatening illness or injury) and SUD (sudden unexpected death of a close friend or family member). A single DAPS item ("Seeing someone else get seriously hurt or killed") contributed to both ILL and SUD but did not fit well within either category. Aside from the poor correspondence on ILL and SUD, the DAPS performed as well as the PDS did.

Surprisingly, there were more discrepancies between the LEC and the trauma history interview than between the other two measures and the interview. Not surprisingly, the discrepancies all occurred in the same pattern: participants often reported events on the self-report measure but did not report the same events on the trauma history interview because they believed the events were too minor. In opposite fashion, the majority of discrepancies for DAPS and PDS participants involved not reporting an event on the self-report measure but reporting it on the interview. The number of events reported increased from self-report measure to interview for both the DAPS and PDS, suggesting that these two measures set a much higher threshold for event severity than did the LEC. This finding was also supported by qualitative feedback from LEC participants, who often reported events on the self-report measure but not on the interview because they felt the events were not severe enough.
While the finding that the screening measure may have confounded event reporting on the trauma history interview blurs the interpretation of the concurrent validity results, it also suggests caution when implementing a trauma exposure measure in research or clinical practice, as the measure may influence subsequent reporting of trauma exposure.

Hypothesis 3

Another method for evaluating accuracy focused on the consistency of worst event designation from self-report measure to interview. The DAPS and PDS each correctly identified participants' worst events roughly half the time, whereas the LEC correctly identified the worst event for roughly three quarters of participants. Typically, a participant's worst event has additional relevance in assessing PTSD because symptoms are linked to an etiological event. Therefore, identification of the most severe events that a participant has experienced takes priority over identification of all other events. In this study, the DAPS and PDS performed poorly with regard to worst event identification. However, it should be noted that the sample selected for this study had a restricted range of traumatic events, with more mild to moderate events than severe events. Given a sample with a wider range of PTEs, one would expect the DAPS and PDS to improve in worst event identification.

Though this study was not designed to quantitatively evaluate the impact of differences in measure content and format, it appears that such differences did affect trauma reporting. With few exceptions (see Goodman et al., 1998; Kubany et al., 2000), developers of existing measures of trauma exposure have paid insufficient attention to establishing the content validity of their measures. In their article on content validity in psychological assessment, Haynes, Richard, and Kubany (1995) define content validity as "the degree to which elements of an assessment instrument are relevant to and representative of the targeted construct for a particular assessment purpose" (p. 238). Some of the findings of this study suggest that the content of the measures selected for use here is not fully representative of the trauma construct. The pattern of findings suggests that the DAPS and PDS create an overly restrictive definition of trauma that may lead to decreased reporting in both the self-report and interview formats. This conclusion is supported by the qualitative feedback provided by participants, particularly the reports that participants could not find categories within the DAPS and PDS that fit their events. Similarly, a visual inspection of the three trauma exposure measures suggests that they are qualitatively different and that each seems to define trauma differently. It is suggested that in any future revision of the current measures, the steps for establishing adequate content validity outlined in Haynes et al. (1995) be completed to ensure that the measure accurately represents the trauma construct as outlined in the DSM-IV-TR (APA, 2000).

Limitations and Future Directions

The interpretability of the results of the current study is limited by the size and characteristics of the sample. With only 30 participants in each group, it was difficult to see clear patterns of responding across the individual items of each measure. Furthermore, because this was not a clinical sample, trauma exposure was limited, and other research has demonstrated that more severe events are more reliably reported (Krinsley et al., 2003).
The small group sizes and nonclinical sample resulted in some items being endorsed by too few participants to evaluate statistically. While these limitations significantly affected the analysis of concurrent validity and the prediction of PTSD diagnostic status, the current sample did allow for an adequate test-retest reliability analysis. There was also evidence that the prevalence rate of PTSD in the current sample (n = 6, 7%; as determined using the F1/I2 scoring rule on the CAPS) is similar to prevalence rates (7-12%) from major epidemiological studies (Kessler et al., 1995; Resnick et al., 1993).

An additional limitation of this study involved the lack of reliability data collected for administration of the trauma history interview and the PTSD diagnostic interview. Although each researcher received group training for this study and individual training for a prior study, the lack of interrater reliability data for the two interviews makes it difficult to rule out the influence of the individual interviewer. However, researchers were provided with a verbatim script and detailed instructions to follow for each session, and any questions were addressed and resolved together as a research team. While the 2- to 14-day test-retest period is likely too short to evaluate the impact of memory on trauma reporting, it did allow for an evaluation of the impact of differences in the measures themselves. Furthermore, the same retest interval has been used in other studies of the reliability of event reporting (Goodman et al., 1999; Krinsley et al., 2003; Mueser et al., 2001).

It should also be noted that one of the measures of interest (the LEC) was identical in content and structure to the criterion measure (the LEC-I). The decision to include the LEC was made because of its status as one of the most commonly used measures of trauma exposure (Elhai et al., 2005). Similarly, the LEC-I was used because it is the most comprehensive trauma history interview available. However, the LEC-I proved to be inadequate as a criterion measure, as event reporting on it appeared to be influenced by the original self-report measure. Surprisingly, the DAPS and PDS performed well relative to the LEC. Future research should explicitly examine the impact that a screening measure may have on subsequent trauma reporting with a larger, more diverse sample, so that the relationship between events reported on the screening measure and events reported on the criterion measure can be more closely examined. The same question could also be examined with a within-subjects design in which all participants receive each of the measures.

The results of this study suggest that the LEC, DAPS, and PDS all provide temporally stable estimates of trauma exposure and that reporting about witnessed events is less reliable than reporting about directly experienced events. The results also suggest that trauma exposure reporting is influenced by the measure used to assess it. The three measures selected for this study varied considerably, and each appeared to influence reporting. Thus, it is incumbent on the user of these measures to be aware of their characteristics and to determine whether those characteristics match the intended application. Consistent with the authors' original intent (Blake et al., 1990), the LEC appears to function best as a broad screener, allowing for reports of a wide range of potentially traumatic events.
The DAPS and PDS appear to set a higher threshold for event severity and therefore elicit fewer reports of trauma exposure. The use of each measure appears justified so long as measure characteristics match the desired function of the measure. That is, the LEC may be most validly utilized as a screening measure that precedes detailed follow-up, whereas the PDS and DAPS may be utilized in settings where determining PTSD diagnostic status is prioritized over assessment of cumulative trauma exposure. Further discussion about how to conceptualize and measure trauma exposure, and attention to how the content and format of trauma assessment measures affect reporting, will be important to improving understanding of trauma exposure and its correlates.

REFERENCES

American Psychiatric Association. (1980). Diagnostic and statistical manual of mental disorders (3rd ed.). Washington, DC: Author.
American Psychiatric Association. (1987). Diagnostic and statistical manual of mental disorders (3rd ed., rev.). Washington, DC: Author.
American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text rev.). Washington, DC: Author.
Beck, A., Epstein, N., Brown, G., & Steer, R. (1988). An inventory for measuring anxiety: Psychometric properties. Journal of Consulting and Clinical Psychology, 56, 893-897.
Beck, A. T., Steer, R. A., & Brown, G. (1993). Beck Anxiety Inventory: Manual. San Antonio, TX: Psychological Corporation.
Beck, A. T., Steer, R. A., & Brown, G. K. (1996). Beck Depression Inventory-II manual (2nd ed.). San Antonio, TX: Psychological Corporation.
Blanchard, E. B., Jones-Alexander, J., Buckley, T., & Forneris, C. A. (1996). Psychometric properties of the PTSD Checklist (PCL). Behaviour Research and Therapy, 34, 669-673.
Blake, D. D., Weathers, F. W., Nagy, L. M., Kaloupek, D. G., Gusman, F. D., Charney, D. S., & Keane, T. M. (1995). The development of a Clinician-Administered PTSD Scale. Journal of Traumatic Stress, 8, 75-90.
Blake, D. D., Weathers, F. W., Nagy, L., Kaloupek, D. G., Klauminzer, G., & Charney, D. S. (1990). A clinician rating scale for assessing current and lifetime PTSD: The CAPS-1. The Behavior Therapist, 18, 187-188.
Bodkin, J. A., Pope, H. G., Detke, M. J., & Hudson, J. I. (2007). Is PTSD caused by traumatic stress? Journal of Anxiety Disorders, 21, 176-182.
Breslau, N., & Davis, G. C. (1987). Posttraumatic stress disorder: The stressor criterion. The Journal of Nervous and Mental Disease, 175, 255-264.
Breslau, N., Kessler, R. C., Chilcoat, H. D., Schultz, L. R., Davis, G. C., & Andreski, P. (1998). Trauma and posttraumatic stress disorder in the community: The 1996 Detroit Area Survey of Trauma. Archives of General Psychiatry, 55, 626-632.
Breslau, N., & Kessler, R. C. (2001). The stressor criterion in DSM-IV posttraumatic stress disorder: An empirical investigation. Biological Psychiatry, 50, 699-704.
Breslau, N. (2002). Epidemiologic studies of trauma, posttraumatic stress disorder, and other psychiatric disorders. Canadian Journal of Psychiatry, 47, 923-929.
Brewin, C. R., Andrews, B., & Valentine, J. D. (2000). Meta-analysis of risk factors for posttraumatic stress disorder in trauma-exposed adults. Journal of Consulting and Clinical Psychology, 68, 748-766.
Briere, J. (1992). Methodological issues in the study of sexual abuse effects. Journal of Consulting and Clinical Psychology, 60, 196-203.
Briere, J. (2001). Detailed Assessment of Posttraumatic Stress (DAPS). Odessa, FL: Psychological Assessment Resources.
Creamer, M., McFarlane, A. C., & Burgess, P. (2005). Psychopathology following trauma: The role of subjective experience. Journal of Affective Disorders, 86, 175-182.
Dohrenwend, B. P. (2006). Inventorying stressful life events as risk factors for psychopathology: Toward resolution of the problem of intracategory variability. Psychological Bulletin, 132, 477-495.
Dohrenwend, B. P., Link, B. G., Kern, R., Shrout, P. E., & Markowitz, J. (1990). Measuring life events: The problem of variability within event categories. Stress Medicine, 6, 179-187.
Elhai, J. D., Franklin, C. L., & Gray, M. J. (2008). The SCID PTSD module's trauma screen: Validity with two samples in detecting trauma history. Depression and Anxiety, 25, 737-741.
Elhai, J. D., Gray, M. J., Kashdan, T. B., & Franklin, C. L. (2005). Which instruments are most commonly used to assess traumatic event exposure and posttraumatic effects? A survey of traumatic stress professionals. Journal of Traumatic Stress, 18, 541-545.
First, M. B., Spitzer, R. L., Gibbon, M., & Williams, J. B. (1996). Structured Clinical Interview for DSM-IV Axis I Disorders, Clinician Version (SCID-CV). Washington, DC: American Psychiatric Press.
Foa, E. B. (1996). Posttraumatic Stress Diagnostic Scale manual. United States of America: National Computer Systems.
Foa, E. B., Cashman, L., Jaycox, L., & Perry, K. (1997). The validation of a self-report measure of posttraumatic stress disorder: The Posttraumatic Diagnostic Scale. Psychological Assessment, 9, 445-451.
Follette, V., Polusny, M., Bechtle, A., & Naugle, A. (1996). Cumulative trauma: The impact of child sexual abuse, adult sexual assault, and spouse abuse. Journal of Traumatic Stress, 9, 25-35.
Fricker, A. E., Smith, D. W., Davis, J. L., & Hanson, R. F. (2003). Effects of context and question type on endorsement of childhood sexual abuse. Journal of Traumatic Stress, 16, 265-268.
Frueh, B., Elhai, J., & Kaloupek, D. (2004). Unresolved issues in the assessment of trauma exposure and posttraumatic reactions. In G. M. Rosen (Ed.), Posttraumatic stress disorder: Issues and controversies (pp. 63-84). New York, NY: John Wiley & Sons.
Gold, S. D., Marx, B. P., Soler-Baillo, J. M., & Sloan, D. M. (2005). Is life stress more traumatic than traumatic stress? Journal of Anxiety Disorders, 19, 687-698.
Goodman, L. A., Corcoran, C., Turner, K., Yuan, N., & Green, B. L. (1998). Assessing traumatic event exposure: General issues and preliminary findings for the Stressful Life Events Screening Questionnaire. Journal of Traumatic Stress, 11, 521-542.
Goodman, L. A., Dutton, M., & Harris, M. (1997). The relationship between violence dimensions and symptom severity among homeless, mentally ill women. Journal of Traumatic Stress, 10, 51-70.
Goodman, L. A., Thompson, K. M., Weinfurt, K., Corl, S., Acker, P., Mueser, K. T., et al. (1999). Reliability of reports of violent victimization and posttraumatic stress disorder among men and women with serious mental illness. Journal of Traumatic Stress, 12, 587-599.
Gray, M., Litz, B., Hsu, J., & Lombardo, T. (2004). Psychometric properties of the Life Events Checklist. Assessment, 11, 330-341.
Green, B. L. (1996). Psychometric review of Trauma History Questionnaire (Self-Report). In B. H. Stamm (Ed.), Measurement of stress, trauma, and adaptation. Lutherville, MD: Sidran Press.
Haynes, S. N., Richard, D. C., & Kubany, E. S. (1995). Content validity in psychological assessment: A functional approach to concepts and methods. Psychological Assessment, 7, 238-247.
Kabacoff, R. I., Segal, D. L., & Hersen, M. (1997). Psychometric properties and diagnostic utility of the Beck Anxiety Inventory and the State-Trait Anxiety Inventory with older adult psychiatric outpatients. Journal of Anxiety Disorders, 11, 33-47.
Keane, T. M., Caddell, J. M., & Taylor, K. L. (1988). Mississippi Scale for Combat-Related Posttraumatic Stress Disorder: Three studies in reliability and validity. Journal of Consulting and Clinical Psychology, 56, 85-90.
Keane, T. M., Fairbank, J., Caddell, J. M., Zimering, R., Taylor, K. L., & Mora, C. (1989). Clinical evaluation of a measure to assess combat exposure. Psychological Assessment: A Journal of Consulting and Clinical Psychology, 1, 53-55.
Kessler, R. C., Sonnega, A., Bromet, E., Hughes, M., & Nelson, C. B. (1995). Posttraumatic stress disorder in the National Comorbidity Survey. Archives of General Psychiatry, 52, 1048-1060.
Kilpatrick, D. G., Resnick, H. S., Freedy, J. R., Pelcovitz, D., Resick, P., Roth, S., & van der Kolk, B. (1998). Posttraumatic stress disorder field trial: Evaluation of the PTSD construct-Criteria A through E. In T. Widiger, A. Frances, H. Pincus, R. Ross, M. First, W. Davis, & M. Kline (Eds.), DSM-IV sourcebook (pp. 803-844). Washington, DC: American Psychiatric Press.
King, D. W., Leskin, G. A., King, L. A., & Weathers, F. W. (1998). Confirmatory factor analysis of the Clinician-Administered PTSD Scale: Evidence for the dimensionality of posttraumatic stress disorder. Psychological Assessment, 10, 90-96.
Krefetz, D. G., Steer, R. A., Gulab, N. A., & Beck, A. T. (2002). Convergent validity of the Beck Depression Inventory-II with the Reynolds Adolescent Depression Scale in psychiatric inpatients. Journal of Personality Assessment, 78, 451-460.
Krinsley, K. E., Gallagher, J. G., Weathers, F. W., Kaloupek, D. G., & Vielhauer, M. (1997). Reliability and validity of the Evaluation of Lifetime Stressors questionnaire. Unpublished manuscript.
Krinsley, K. E., Gallagher, J. G., Weathers, F. W., Kutter, C. J., & Kaloupek, D. G. (2003). Consistency of retrospective reporting about exposure to traumatic events. Journal of Traumatic Stress, 16, 399-409.
Krinsley, K. E., & Weathers, F. W. (1995). The assessment of trauma in adults. PTSD Research Quarterly, 6, 1-6.
Kubany, E., Haynes, S., Abueg, F., Manke, F., Brennan, J., & Stahura, C. (1996). Development and validation of the Trauma-Related Guilt Inventory (TRGI). Psychological Assessment, 8, 428-444.
Kubany, E., Leisen, M., Kaplan, A., Watson, S., Haynes, S., Owens, J., et al. (2000). Development and preliminary validation of a brief broad-spectrum measure of trauma exposure: The Traumatic Life Events Questionnaire. Psychological Assessment, 12, 210-224.
Langenbucher, J., Labouvie, E., & Morgenstern, J. (1996). Measuring diagnostic agreement. Journal of Consulting and Clinical Psychology, 64, 1285-1289.
Lock, T., Levis, D., & Rourke, P. (2005). Assessment: The Sexual Abuse Questionnaire: A preliminary examination of a time and cost efficient method in evaluating the presence of childhood sexual abuse in adult patients. Journal of Child Sexual Abuse, 14, 1-26.
Maier, T. (2006). Post-traumatic stress disorder revisited: Deconstructing the A-criterion. Medical Hypotheses, 66, 103-106.
McHugo, G., Caspi, Y., Kammerer, N., Mazelis, R., Jackson, E., Russell, L., et al. (2005). The assessment of trauma history in women with co-occurring substance abuse and mental disorders and a history of interpersonal violence. The Journal of Behavioral Health Services & Research, 32, 113-127.
McNally, R. J. (2003). Progress and controversy in the study of posttraumatic stress disorder. Annual Review of Psychology, 54, 229-252.
McNally, R. J., Litz, B., Prassas, A., & Shin, L. (1994). Emotional priming of autobiographical memory in post-traumatic stress disorder. Cognition & Emotion, 8, 351-367.
Mol, S. L., Arntz, A., Metsemakers, J. M., Dinant, G. J., Vilters-van Montfort, P. P., & Knottnerus, J. A. (2005). Symptoms of post-traumatic stress disorder after non-traumatic events: Evidence from an open population study. British Journal of Psychiatry, 186, 494-499.
Mueser, K. T., Rosenberg, S. D., Fox, L., Salyers, M. P., Ford, J. D., & Carty, P. (2001). Psychometric evaluation of trauma and posttraumatic stress disorder assessments in persons with severe mental illness. Psychological Assessment, 13, 110-117.
Norris, F. (1990). Screening for traumatic stress: A scale for use in the general population. Journal of Applied Social Psychology, 20, 1704-1718.
Norris, F. (1992). Epidemiology of trauma: Frequency and impact of different potentially traumatic events on different demographic groups. Journal of Consulting and Clinical Psychology, 60, 409-418.
Norris, F., & Hamblen, J. (2004). Standardized self-report measures of civilian trauma and PTSD. In Assessing psychological trauma and PTSD (2nd ed., pp. 63-102). New York, NY: Guilford Press.
Pfefferbaum, B., Pfefferbaum, R. L., North, C. S., & Neas, B. R. (2002). Does television viewing satisfy criteria for exposure in posttraumatic stress disorder? Psychiatry, 65, 306-309.
Propper, R. E., Stickgold, R., Keeley, R., & Christman, S. D. (2007). Is television traumatic? Dreams, stress and media exposure in the aftermath of September 11, 2001. Psychological Science, 18, 334-340.
Resnick, H., Falsetti, S., Kilpatrick, D., & Freedy, J. (1996). Assessment of rape and other civilian trauma-related PTSD: Emphasis on assessment of potentially traumatic events. In Theory and assessment of stressful life events (pp. 235-271). Madison, CT: International Universities Press.
Resnick, H. S., Kilpatrick, D. G., Dansky, B. S., Saunders, B. E., & Best, C. L. (1993). Prevalence of civilian trauma and posttraumatic stress disorder in a representative national sample of women. Journal of Consulting and Clinical Psychology, 61, 984-991.
Reynolds, W. M. (1987). Reynolds Adolescent Depression Scale: Professional manual. Odessa, FL: Psychological Assessment Resources.
Roemer, L., Litz, B., Orsillo, S., Ehlich, P., & Friedman, M. (1998). Increases in retrospective accounts of war-zone exposure over time: The role of PTSD symptom severity. Journal of Traumatic Stress, 11, 597-605.
Rosen, G. M. (2004). Traumatic events, criterion creep, and the creation of pretraumatic stress disorder. The Scientific Review of Mental Health Practice, 3, 39-42.
Ruggiero, K. J., Del Ben, K., Scotti, J. R., & Rabalais, A. E. (2003). Psychometric properties of the PTSD Checklist, civilian version. Journal of Traumatic Stress, 16, 495-502.
Sabin-Farrell, R., & Turpin, G. (2003). Vicarious traumatization: Implications for the mental health of health workers? Clinical Psychology Review, 23, 449-480.
Schnurr, P. P., Spiro, A., Vielhauer, M. J., Findler, M. N., & Hamblen, J. L. (2002). Trauma in the lives of older men: Findings from the Normative Aging Study. Journal of Clinical Geropsychology, 8, 175-187.
Schumm, J., Briggs-Phillips, M., & Hobfoll, S. (2006). Cumulative interpersonal traumas and social support as risk and resiliency factors in predicting PTSD and depression among inner-city women. Journal of Traumatic Stress, 19, 825-836.
Solomon, S. D., & Canino, G. J. (1990). Appropriateness of DSM-III-R criteria for posttraumatic stress disorder. Comprehensive Psychiatry, 31, 227-237.
Southwick, S. M., Morgan, A., Nagy, L. M., Bremner, D., Nicolaou, A. L., Johnson, D. R., Rosenheck, R., & Charney, D. S. (1993). Trauma-related symptomatology in veterans of Operation Desert Storm: A preliminary report. American Journal of Psychiatry, 150, 1524-1528.
Southwick, S. M., Morgan, C., Nicolaou, A. L., & Charney, D. S. (1997). Consistency of memory for combat-related traumatic events in veterans of Operation Desert Storm. American Journal of Psychiatry, 154, 173-177.
Steer, R. A., Ranieri, W. F., Beck, A. T., & Clark, D. A. (1993). Further evidence for the validity of the Beck Anxiety Inventory with psychiatric outpatients. Journal of Anxiety Disorders, 7, 195-205.
Steer, R., & Clark, D. (1997). Psychometric characteristics of the Beck Depression Inventory-II with college students. Measurement and Evaluation in Counseling and Development, 30(3), 128-136.
Steer, R., Rissmiller, D., & Beck, A. (2000). Use of Beck Depression Inventory-II with depressed geriatric inpatients. Behaviour Research and Therapy, 38, 311-318.
Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93-105.
Vrana, S., & Lauterbach, D. (1994). Prevalence of traumatic events and post-traumatic psychological symptoms in a nonclinical sample of college students. Journal of Traumatic Stress, 7, 289-302.
Walker, W., Skowronski, J., & Thompson, C. (2003). Life is pleasant--and memory helps to keep it that way! Review of General Psychology, 7, 203-210.
Weathers, F. W. (1993, October). Psychometric measures for the assessment and diagnosis of PTSD. Paper presented at the annual meeting of the International Society for Traumatic Stress Studies, Research and Methodology Pre-Meeting Institute, San Antonio, TX.
Weathers, F. W., Litz, B. T., Herman, D. S., Huska, J. A., & Keane, T. M. (1993, October). The PTSD Checklist: Reliability, validity, and diagnostic utility. Paper presented at the annual meeting of the International Society for Traumatic Stress Studies, San Antonio, TX.
Weathers, F. W., & Keane, T. M. (2007). The Criterion A problem revisited: Controversies and challenges in defining and measuring psychological trauma. Journal of Traumatic Stress, 20, 107-121.
Weathers, F. W., Keane, T. M., & Davidson, J. R. (2001). Clinician-Administered PTSD Scale: A review of the first ten years of research. Depression and Anxiety, 13, 132-156.
Weaver, T. L. (1998). Method variance and sensitivity of screening for traumatic stressors. Journal of Traumatic Stress, 11, 181-185.
Whisman, M. A., Perez, J. E., & Ramel, W. (2000). Factor structure of the Beck Depression Inventory-Second Edition (BDI-II) in a student sample. Journal of Clinical Psychology, 56, 545-551.
Widom, C. S., Dutton, M. A., Czaja, S. J., & DuMont, K. A. (2005). Development and validation of a new instrument to assess lifetime trauma and victimization history. Journal of Traumatic Stress, 18, 519-531.
Williams, J., Gibbon, M., First, M., & Spitzer, R. (1992). The Structured Clinical Interview for DSM-III-R (SCID): II. Multisite test-retest reliability. Archives of General Psychiatry, 49, 630-636.
APPENDIX

Table 1
PDS, DAPS, and LEC Item Comparison

Accident
  PDS: #1 Serious accident, fire, or explosion (for example, an industrial, farm, car, plane, or boating accident)
  DAPS: #1 An accident or crash involving a car, motorcycle, plane, boat, or other vehicle, when you were seriously hurt or were afraid you would be hurt or killed?; #3 An accident at work or at home, when you were seriously hurt or were afraid you would be hurt or killed?
  LEC: #2 Fire or explosion; #3 Transportation accident (for example, car accident, boat accident, train wreck, plane crash); #4 Serious accident at work, home, or during recreational activity; #5 Exposure to toxic substance (for example, dangerous chemicals, radiation)

Natural Disaster
  PDS: #2 Natural disaster (for example, tornado, hurricane, flood, or major earthquake)
  DAPS: #2 A hurricane, tornado, flood, earthquake, explosion, or fire, when you were seriously hurt or were afraid you would be hurt or killed?
  LEC: #1 Natural disaster (for example, flood, hurricane, tornado, earthquake)

Assault
  PDS: #3 Non-sexual assault by a family member or someone you know (for example, being mugged, physically attacked, shot, stabbed, or held at gunpoint); #4 Non-sexual assault by a stranger (for example, being mugged, physically attacked, shot, stabbed, or held at gunpoint)
  DAPS: #4 Someone hitting, choking, or beating you (including someone you lived with or were married to), when you were seriously hurt or were afraid you would be hurt or killed (at any time in your life, including your childhood)?; #5 Someone threatening to injure you or do something sexual to you against your will, although they didn't actually do anything to you, when you were afraid you would be hurt or killed?; #6 Someone shooting or stabbing you, or trying to shoot or stab you, when you were seriously hurt or were afraid you would be hurt or killed?; #8 Being held-up, robbed, or mugged, when you were seriously hurt or were afraid you would be hurt or killed?
  LEC: #6 Physical assault (for example, being attacked, hit, slapped, kicked, beaten up); #7 Assault with a weapon (for example, being shot, stabbed, threatened with a knife, gun, bomb)

Abuse
  DAPS: #13 Did an adult ever hit or beat you or in some other way physically hurt you enough that you had scratches, bruises, cuts, or some other injury before you were 16 years old?

Sexual Assault
  PDS: #5 Sexual assault by a family member or someone you know (for example, rape or attempted rape); #6 Sexual assault by a stranger (for example, rape or attempted rape)
  DAPS: #9 Someone doing something sexual to you against your will (for example, rape, sexual assault, or unwanted sexual contact), or making you do something sexual, that caused you to be seriously hurt or afraid you would be hurt or killed?
  LEC: #8 Sexual assault (rape, attempted rape, made to perform any type of sexual act through force or threat of harm); #9 Other unwanted or uncomfortable sexual experience

Sexual Abuse
  PDS: #8 Sexual contact when you were younger than 18 with someone who was 5 or more years older than you (for example, contact with genitals, breasts)
  DAPS: #10 Someone doing something sexual to you against your will (even if you were not hurt or afraid you would be hurt) or making you do something sexual before you were 16 years old?

Combat
  PDS: #7 Military combat or a war zone
  DAPS: #7 Being in a war, when you were seriously hurt or were afraid you would be hurt or killed?
  LEC: #10 Combat or exposure to a war-zone (in the military or as a civilian)

Imprisonment
  PDS: #9 Imprisonment (for example, prison inmate, prisoner of war, hostage)
  LEC: #11 Captivity (for example, being kidnapped, abducted, held hostage, prisoner of war)

Torture
  PDS: #10 Torture

Illness
  PDS: #11 Life-threatening illness
  LEC: #12 Life-threatening illness or injury

Suffering
  LEC: #13 Severe human suffering

Sudden Death
  LEC: #14 Sudden, violent death (for example, homicide, suicide); #15 Sudden, unexpected death of someone close to you

Caused Harm to Others
  LEC: #16 Serious injury, harm, or death you caused to someone else

Other
  PDS: #12 Other traumatic event
  DAPS: #11 Some other experience that caused you to be seriously hurt or made you fear that you might be seriously hurt or killed?; #12 Seeing someone else get seriously hurt or killed?
  LEC: #17 Any other very stressful event or experience

Note. PDS = Posttraumatic Stress Diagnostic Scale; DAPS = Detailed Assessment of Posttraumatic Stress; LEC = Life Events Checklist.

Table 2
Summary Variable Creation

Summary variable: contributing item numbers by measure.
ACC: LEC 2, 3, 4, 5; DAPS 1, 3; PDS 1
NAD: LEC 1; DAPS 2; PDS 2
PAA: LEC 6, 7; DAPS 4, 5, 6, 8, 13; PDS 3, 4
SAA: LEC 8, 9; DAPS 9, 10; PDS 5, 6, 8
COM: LEC 10; DAPS 7; PDS 7
ILL: LEC 12; DAPS 12; PDS 11
OTH: LEC 17; DAPS 11; PDS 12
IMP: LEC 11; PDS 9
TOR: LEC 13; PDS 10
SUD: LEC 14, 15; DAPS 12
HAR: LEC 16

Note. ACC = Accident; NAD = Natural Disaster; PAA = Physical Assault & Abuse; SAA = Sexual Assault & Abuse; COM = Combat; ILL = Life-Threatening Illness or Injury; OTH = Other; IMP = Imprisonment; TOR = Torture & Severe Human Suffering; SUD = Sudden Unexpected Death of a Close Friend or Family Member; HAR = Caused Harm to Others.

Table 3
Range of Events Reported on Trauma History Interview

Event category: endorsement rate n (%); Criterion A events n (%); selection as worst event n (%).
Natural disaster: 53 (58.2); 14 (15.4); 5 (5.5)
Fire or explosion: 26 (28.5); 5 (5.5); 3 (3.3)
Transportation accident: 74 (81.3); 32 (35.2); 16 (17.6)
Serious accident at work, home, or during recreational activity: 32 (35.1); 6 (6.6); 2 (2.2)
Exposure to toxic substance: 5 (5.5); 0 (0.0); 0 (0.0)
Physical assault: 36 (39.5); 15 (16.5); 5 (5.5)
Assault with a weapon: 15 (16.5); 6 (6.6); 1 (1.1)
Sexual assault: 27 (29.7); 13 (14.3); 6 (6.6)
Other unwanted sexual experience: 22 (24.2); 12 (13.2); 2 (2.2)
Combat or war-zone exposure: 15 (16.5); 5 (5.5); 0 (0.0)
Captivity: 5 (5.5); 0 (0.0); 0 (0.0)
Life-threatening illness or injury: 45 (49.5); 17 (18.7); 11 (12.1)
Severe human suffering: 9 (9.9); 3 (3.3); 3 (3.3)
Sudden, violent death: 22 (24.2); 13 (14.3); 8 (8.8)
Sudden, unexpected death: 47 (51.6); 20 (22.0); 19 (20.9)
Serious injury or death you caused to somebody else: 2 (2.2); 0 (0.0); 0 (0.0)
Other: 28 (30.8); 5 (5.5); 10 (11.0)

Note. N = 91.

Table 4
Number of Events Reported by Format (Self-Report vs. Interview)

Reported on self-report measure: LEC M = 9.9a (SD = 4.2); DAPS M = 2.9b (SD = 2.1); PDS M = 1.7b (SD = 1.1); F(2, 88) = 78.083***, eta-squared = .64
Reported on interview: LEC M = 6.7a (SD = 2.8); DAPS M = 4.7b (SD = 2.5); PDS M = 3.9b (SD = 2.3); F(2, 88) = 9.636***, eta-squared = .18
Reported on interview (Criterion A): LEC M = 4.5a (SD = 2.4); DAPS M = 3.5ab (SD = 2.4); PDS M = 2.6b (SD = 1.8); F(2, 88) = 6.066**, eta-squared = .12

Note. LEC = Life Events Checklist; DAPS = Detailed Assessment of Posttraumatic Stress; PDS = Posttraumatic Stress Diagnostic Scale. Means in the same row that do not share subscripts differ at p < .05 in the Tukey Honestly Significant Difference comparison. * p < .05. ** p < .01. *** p < .001.
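As an illustration of how the item-to-summary-variable mapping in Table 2 can be applied, the following is a minimal sketch in Python. The scoring rule shown here, counting a summary variable as endorsed if any of its constituent items is endorsed, is an assumption made for illustration, and the example endorsements are hypothetical; item numbers follow the LEC column of Table 2.

# Mapping from summary variables to LEC item numbers, taken from Table 2.
LEC_SUMMARY_MAP = {
    "ACC": [2, 3, 4, 5], "NAD": [1], "PAA": [6, 7], "SAA": [8, 9],
    "COM": [10], "ILL": [12], "OTH": [17], "IMP": [11], "TOR": [13],
    "SUD": [14, 15], "HAR": [16],
}

def collapse(endorsed_items, item_map):
    # Score a summary variable as endorsed (1) if any of its items was endorsed.
    return {summary: int(any(item in endorsed_items for item in items))
            for summary, items in item_map.items()}

# Example: a hypothetical participant endorsing LEC items 1, 3, and 15
# would be scored as positive on NAD, ACC, and SUD.
print(collapse({1, 3, 15}, LEC_SUMMARY_MAP))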
Table 5
LEC Individual Items Temporal Stability: Experienced Only

Cell counts are given in the order time 1 not endorsed/time 2 not endorsed, time 1 not endorsed/time 2 endorsed, time 1 endorsed/time 2 not endorsed, time 1 endorsed/time 2 endorsed; followed by overall agreement, decreased reporting, increased reporting, and kappa (values left blank in the original table are omitted).

Natural disaster: 15, 1, 4, 9; 83%, 14%, 3%, kappa = .64***
Fire or explosion: 24, 0, 1, 5; 97%, 3%, 0%, kappa = .89***
Transportation accident: 6, 0, 1, 23; 97%, 3%, 0%, kappa = .90***
Serious accident at work, home, or during recreational activity: 20, 0, 2, 7; 93%, 7%, 0%, kappa = .83***
Exposure to toxic substance: 30, 0, 0, 0; 100%, 0%, 0%
Physical assault: 18, 2, 4, 6; 80%, 13%, 7%, kappa = .53**
Assault with a weapon: 25, 0, 2, 1; 93%, 7%, 0%, kappa = .47**
Sexual assault: 27, 0, 0, 3; 100%, 0%, 0%, kappa = 1.0***
Other unwanted sexual experience: 23, 0, 0, 7; 100%, 0%, 0%, kappa = .91***
Combat or exposure to a war-zone: 30, 0, 0, 0; 100%, 0%, 0%
Captivity: 30, 0, 0, 0; 100%, 0%, 0%
Life-threatening illness or injury: 26, 0, 0, 3; 100%, 0%, 0%, kappa = 1.0***
Severe human suffering: 30, 0, 0, 0; 100%, 0%, 0%
Sudden, violent death: 30, 0, 0, 0; 100%, 0%, 0%
Sudden, unexpected death of loved one: 14, 5, 5, 6; 66%, 17%, 17%, kappa = .28
Serious injury, harm, or death you caused: 29, 0, 1, 0; 97%, 3%, 0%
Any other very stressful event: 16, 1, 3, 10; 87%, 10%, 3%, kappa = .72

Note. LEC = Life Events Checklist. * p < .05. ** p < .01. *** p < .001.

Table 6
LEC Individual Items Temporal Stability: Experienced and Witnessed

Cell counts and statistics are reported in the same order as in Table 5.

Natural disaster: 8, 2, 5, 14; 76%, 17%, 7%, kappa = .50**
Fire or explosion: 16, 1, 1, 12; 94%, 3%, 3%, kappa = .86***
Transportation accident: 2, 2, 2, 24; 84%, 10%, 6%, kappa = .42*
Serious accident at work, home, or during recreational activity: 14, 0, 4, 11; 87%, 13%, 0%, kappa = .73***
Exposure to toxic substance: 28, 0, 0, 2; 100%, 0%, 0%, kappa = 1.0***
Physical assault: 11, 3, 6, 10; 70%, 20%, 10%, kappa = .41*
Assault with a weapon: 18, 0, 4, 6; 86%, 14%, 0%, kappa = .66***
Sexual assault: 25, 1, 0, 4; 97%, 0%, 3%, kappa = .87***
Other unwanted sexual experience: 22, 0, 1, 7; 97%, 3%, 0%, kappa = .91***
Combat or exposure to a war-zone: 29, 1, 0, 0; 97%, 0%, 3%
Captivity: 29, 0, 0, 1; 100%, 0%, 0%, kappa = 1.0***
Life-threatening illness or injury: 11, 4, 3, 11; 76%, 10%, 14%, kappa = .52**
Severe human suffering: 21, 1, 4, 4; 83%, 13%, 3%, kappa = .52**
Sudden, violent death: 26, 1, 2, 1; 90%, 6%, 3%, kappa = .35*
Sudden, unexpected death of loved one: 10, 5, 2, 13; 77%, 6%, 17%, kappa = .53**
Serious injury, harm, or death you caused: 29, 0, 0, 1; 100%, 0%, 0%, kappa = 1.0***
Any other very stressful event: 16, 1, 3, 10; 87%, 10%, 3%, kappa = .72***

Note. LEC = Life Events Checklist. * p < .05. ** p < .01. *** p < .001.
Table 7
DAPS Individual Items Temporal Stability

Cell counts and statistics are reported in the same order as in Table 5.

Transportation accident: 11, 2, 1, 16; 90%, 3%, 7%, kappa = .79***
Natural disaster: 22, 1, 1, 6; 93%, 3%, 3%, kappa = .81***
Accident: 23, 2, 2, 3; 86%, 7%, 7%, kappa = .52**
Physical assault: 20, 1, 2, 7; 90%, 7%, 3%, kappa = .75***
Threatened injury: 20, 0, 3, 7; 90%, 10%, 0%, kappa = .75***
Shooting/stabbing: 30, 0, 0, 0; 100%, 0%, 0%
Combat: 29, 0, 1, 0; 97%, 3%, 0%
Robbery: 30, 0, 0, 0; 100%, 0%, 0%
Rape: 22, 2, 0, 6; 93%, 0%, 7%, kappa = .81***
Sexual assault (prior to age 16): 25, 1, 1, 3; 93%, 3%, 3%, kappa = .71***
Other: 22, 3, 3, 2; 80%, 10%, 10%, kappa = .28
Witnessing serious injury or death: 13, 1, 5, 11; 80%, 17%, 3%, kappa = .60***
Physical abuse (prior to age 16): 22, 1, 3, 4; 87%, 10%, 3%, kappa = .58***

Note. DAPS = Detailed Assessment of Posttraumatic Stress. * p < .05. ** p < .01. *** p < .001.

Table 8
PDS Individual Items Temporal Stability

Cell counts and statistics are reported in the same order as in Table 5.

Serious accident: 19, 0, 2, 9; 93%, 7%, 0%, kappa = .85***
Natural disaster: 11, 0, 3, 16; 90%, 10%, 0%, kappa = .79***
Non-sexual assault (by family member): 28, 0, 1, 1; 97%, 3%, 0%, kappa = .65***
Non-sexual assault (by stranger): 27, 1, 1, 1; 93%, 3%, 3%, kappa = .46*
Sexual assault (by family member): 27, 1, 0, 2; 97%, 0%, 3%, kappa = .78***
Sexual assault (by stranger): 26, 1, 1, 2; 93%, 3%, 3%, kappa = .63***
Combat: 30, 0, 0, 0; 100%, 0%, 0%
Sexual abuse (prior to age 18): 28, 0, 0, 2; 100%, 0%, 0%, kappa = 1.0***
Imprisonment: 29, 0, 1, 0; 97%, 3%, 0%
Torture: 30, 0, 0, 0; 100%, 0%, 0%
Life-threatening illness: 24, 1, 0, 5; 97%, 0%, 3%, kappa = .88***
Other: 22, 2, 0, 6; 93%, 0%, 6%, kappa = .81***

Note. PDS = Posttraumatic Stress Diagnostic Scale. * p < .05. ** p < .01. *** p < .001.
Table 9
Accuracy: Experienced Only

Cell counts are given in the order screener not endorsed/interview not endorsed, screener not endorsed/interview endorsed, screener endorsed/interview not endorsed, screener endorsed/interview endorsed; followed by overall agreement, PPV, NPV, and kappa (values left blank in the original table are omitted).

ACC
  LEC: 3, 1, 2, 24; 90%, PPV 92%, NPV 75%, kappa = .61***
  DAPS: 5, 8, 1, 16; 70%, PPV 94%, NPV 38%, kappa = .35*
  PDS: 13, 7, 4, 7; 65%, PPV 64%, NPV 65%, kappa = .27

NAD
  LEC: 16, 1, 4, 9; 83%, PPV 69%, NPV 94%, kappa = .65***
  DAPS: 16, 7, 1, 6; 73%, PPV 86%, NPV 70%, kappa = .43**
  PDS: 11, 1, 5, 14; 81%, PPV 74%, NPV 92%, kappa = .62***

PAA
  LEC: 18, 1, 8, 3; 70%, PPV 27%, NPV 95%, kappa = .25
  DAPS: 11, 1, 7, 11; 73%, PPV 61%, NPV 92%, kappa = .49**
  PDS: 26, 1, 3, 1; 87%, PPV 25%, NPV 96%, kappa = .27

SAA
  LEC: 23, 0, 2, 5; 93%, PPV 71%, NPV 100%, kappa = .79***
  DAPS: 18, 4, 0, 8; 87%, PPV 100%, NPV 82%, kappa = .71***
  PDS: 23, 3, 2, 3; 84%, PPV 60%, NPV 88%, kappa = .45*

COM
  LEC: 30, 0, 0, 0; 100%
  DAPS: 29, 0, 1, 0; 97%
  PDS: 31, 0, 0, 0; 100%

ILL
  LEC: 26, 1, 0, 3; 97%, PPV 100%, NPV 96%, kappa = .84***
  DAPS: 11, 3, 15, 1; 40%, PPV 6%, NPV 79%, kappa = -.14
  PDS: 24, 1, 5, 1; 81%, PPV 16%, NPV 96%, kappa = .17

OTH
  LEC: 14, 3, 8, 5; 63%, PPV 38%, NPV 82%, kappa = .22
  DAPS: 18, 7, 4, 1; 63%, PPV 20%, NPV 72%, kappa = .07
  PDS: 21, 4, 5, 1; 71%, PPV 16%, NPV 84%, kappa = .01

IMP
  LEC: 30, 0, 0, 0; 100%
  DAPS: 30, 0, 0, 0; 100%
  PDS: 29, 1, 1, 0; 94%, NPV 97%, kappa = -.03

TOR
  LEC: 30, 0, 0, 0; 100%
  DAPS: 28, 2, 0, 0; 93%
  PDS: 29, 2, 0, 0; 94%, 6%

SUD
  LEC: 16, 3, 6, 5; 70%, PPV 45%, NPV 84%, kappa = .32
  DAPS: 13, 1, 14, 2; 50%, PPV 13%, NPV 93%, kappa = .05
  PDS: 24, 7, 0, 0; 77%

HAR
  LEC: 29, 0, 1, 0; 97%, NPV 100%
  DAPS: 29, 1, 0, 0; 97%
  PDS: 31, 0, 0, 0; 100%

Note. ACC = Accident; NAD = Natural Disaster; PAA = Physical Assault & Abuse; SAA = Sexual Assault & Abuse; COM = Combat; ILL = Life-Threatening Illness or Injury; OTH = Other; IMP = Imprisonment; TOR = Torture & Severe Human Suffering; SUD = Sudden Unexpected Death of a Loved One; HAR = Caused Harm to Others. LEC = Life Events Checklist; DAPS = Detailed Assessment of Posttraumatic Stress; PDS = Posttraumatic Stress Diagnostic Scale. In the original table, italics designated measures that did not contribute to a summary variable. PPV = positive predictive value; NPV = negative predictive value. * p < .05. ** p < .01. *** p < .001.
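The agreement statistics in Tables 9 and 10 can be derived from each screener-by-interview 2 x 2 table. The following is a minimal sketch in Python, shown for illustration only and using the standard definitions of these indices; the cell counts in the example are taken from the LEC accident (ACC) row of Table 9.

def accuracy_stats(a, b, c, d):
    # a = endorsed on screener and interview, b = screener only,
    # c = interview only, d = endorsed on neither.
    n = float(a + b + c + d)
    agreement = (a + d) / n                                  # overall agreement
    ppv = a / float(a + b) if (a + b) else None              # positive predictive value
    npv = d / float(c + d) if (c + d) else None              # negative predictive value
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / (n * n)   # chance agreement
    kappa = (agreement - pe) / (1 - pe) if pe < 1 else None
    return agreement, ppv, npv, kappa

# LEC ACC cells from Table 9: 24 endorsed on both, 2 on the screener only,
# 1 on the interview only, 3 on neither; reproduces roughly (.90, .92, .75, .61).
print(accuracy_stats(a=24, b=2, c=1, d=3))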
Table 10

Accuracy: Experienced, Witnessed, and Confronted With

Variable   Measure   -/-   -/+   +/-   +/+   Overall agreement   PPV    NPV    Kappa
ACC        LEC         0     1     0    29         97%           100%     --    --
           DAPS        2    11     0    17         63%           100%    15%   .17
           PDS        10    10     0    11         68%           100%    50%   .42**
NAD        LEC         2     0     8    20         73%            71%   100%   .25*
           DAPS       15     8     1     6         70%            86%    65%   .38*
           PDS        11     1     1    18         94%            95%    92%   .84***
PAA        LEC         3     3     9    15         60%            63%    50%   .09
           DAPS        8     4     5    13         70%            72%    67%   .38*
           PDS        24     3     1     3         87%            75%    89%   .53**
SAA        LEC         9     3     9     9         60%            50%    75%   .23
           DAPS       17     5     0     8         83%           100%    77%   .65***
           PDS        19     7     0     5         77%           100%    73%    --
COM        LEC        16     3     4     7         77%            64%    84%   .49**
           DAPS       26     3     0     1         90%           100%    90%   .37**
           PDS        30     1     0     0         97%             --    97%    --
ILL        LEC         4     3     7    16         67%            70%    57%   .22
           DAPS        8     6    11     5         43%            31%    57%   -.11
           PDS        15    10     1     5         65%            83%    60%   .28
OTH        LEC        11     4     7     8         63%            53%    73%   .27
           DAPS       16     9     2     3         63%            60%    64%   .03
           PDS        21     4     5     1         71%            17%    84%   .01
IMP        LEC        18     1     8     3         70%            27%    95%   .25
           DAPS       30     0     0     0        100%             --     --    --
           PDS        29     1     1     0         94%             --    97%   -.03
TOR        LEC        18     1     9     2         67%            18%    95%   .15
           DAPS       27     3     0     0         90%             --     --    --
           PDS        28     3     0     0         90%             --    90%    --
SUD        LEC         4     2     3    21         83%            88%    67%   .51**
           DAPS       10     4     8     8         60%            50%    71%   .21
           PDS        12    19     0     0         39%             --     --    --
HAR        LEC        26     1     3     0         87%            10%     3%   -.05
           DAPS       29     1     0     0         97%             --     --    --
           PDS        31     0     0     0        100%             --     --    --

Note. ACC = Accident; NAD = Natural Disaster; PAA = Physical Assault & Abuse; SAA = Sexual Assault & Abuse; COM = Combat; ILL = Life Threatening Illness or Injury; OTH = Other; IMP = Imprisonment; TOR = Torture & Severe Human Suffering; SUD = Sudden Unexpected Death of a Loved One; HAR = Caused Harm to Others. LEC = Life Events Checklist; DAPS = Detailed Assessment of Posttraumatic Stress; PDS = Posttraumatic Stress Diagnostic Scale. PPV = Positive Predictive Value; NPV = Negative Predictive Value. Cell counts are defined as in Table 9. In the original table, italics designated measures that did not contribute to the summary variable. -- = not reported. * p < .05. ** p < .01. *** p < .001.

Table 11

Discrepancy Attribution Examples

LEC
"I didn't report it there because I didn't think [the event] really affected me at all."
"The titles for [LEC] categories were not descriptive enough. Also, some events I reported on [the LEC] did not seem serious enough to talk about orally. The title 'physical assault' does not seem to fit with abuse [her event]. It didn't trigger my memory for the abuse incidents the first time I filled out the measure."
She initially chose her grandmother's death as her worst event because it was the "most convenient"; later she changed it to her sexual assault.

DAPS
"They were cut and dry questions, either yes or no, for me half of the sexual assault category was true but the other half wasn't because I didn't feel like I was going to die."
Did not endorse a car accident because she was not seriously hurt; she said she thought her car accident was not as serious as the abuse or the rape: "It felt like it wasn't as serious as the survey portrayed--like it wasn't what it was looking for."
"... questionnaire did not prompt for [life-threatening illness or injury] ... it didn't cross my mind while completing [the DAPS]."
Participant did not report her eventual worst event (father's suicide) because "it didn't ask about it"; "I was thinking about [father's suicide] but on #12 (Seeing someone else get seriously hurt or killed) it refers to seeing it, but I didn't see it."

PDS
"I think I had a lot more stressful things that what you listed. I think you could word things a little differently. I didn't mark #6 because in my mind that's rape and only rape and I wouldn't consider what I went through a sexual assault. I feel like if somebody else went through what I went through it could be called sexual assault but I feel like what I went through would diminish the severity of the category if I were to call it sexual assault. Losing a friend in a motorcycle accident didn't cross my mind because I was only thinking about what I had gone through, not something happening to someone else. Part of me feels like I can't write the whole story on that little line (provided in PDS)."
Participant did not endorse the "Life-threatening illness" or "Other traumatic event" categories (even though her identified worst event on the interview was her father's death) because the death of her father did not quite seem like a "prototypically traumatic event" and because she did not want to write or think about it. She selected #5 (attempted rape) as her worst event even though the "Life-threatening illness" category made her think about her dad (and start crying).
Participant did not report a car accident because "the examples listed made me not think of smaller-scale [stressors] like the car accident."

Note. LEC = Life Events Checklist; DAPS = Detailed Assessment of Posttraumatic Stress; PDS = Posttraumatic Stress Diagnostic Scale.

Figure 1. Distribution of discrepancy attributions. LEC = Life Events Checklist; DAPS = Detailed Assessment of Posttraumatic Stress; PDS = Posttraumatic Stress Diagnostic Scale. [Bar chart not reproduced; the y-axis shows Frequency (0 to 10) and the x-axis lists the attribution categories: Remembered on Interview, Forgot on Interview, Event too Minor, Exposure Level, Reluctant, Mistake, Limited Categories, Emotional State, "Good Participant", and Reevaluated Category, with separate bars for the LEC, DAPS, and PDS.]