Electronic Theses and Dissertations


A Comparative Analysis of Predictive Model and Large Language Model Approaches to Workplace Competency Assessment


Metadata Field: Value [Language]

dc.contributor.advisor: Fan, Jinyan
dc.contributor.author: Couvillion, Isabelle
dc.date.accessioned: 2025-12-04T20:43:34Z
dc.date.available: 2025-12-04T20:43:34Z
dc.date.issued: 2025-12-04
dc.identifier.uri: https://etd.auburn.edu/handle/10415/10098
dc.description.abstract: Workplace competencies serve as foundational components of organizational effectiveness, directly influencing talent selection, employee development, and strategic planning. Traditionally, these competencies are assessed through structured interviews and human ratings, which, although well established, are time-consuming, resource-intensive, and subject to rater bias. Recent advances in artificial intelligence, including predictive models and large language models (LLMs), offer promising alternatives by enabling scalable, standardized, and efficient assessment of competencies. The purpose of this dissertation was threefold: (a) to examine the psychometric properties of predictive model-derived competency scores (Study 1), (b) to evaluate the impact of four prompting strategies (i.e., zero-shot direct prompt, few-shot direct prompt, zero-shot chain-of-thought prompt, and few-shot chain-of-thought prompt) on the psychometric properties of LLM-inferred competency scores (Study 2), and (c) to compare the psychometric properties of the predictive model-derived and LLM-inferred competency scores. Participants in Sample 1 were 636 full-time employees recruited from Prolific, whose data were used to train the predictive models. Participants in Sample 2 were 103 full-time employees recruited from various part-time MBA classes in China, whose data were used to cross-validate the predictive models and for LLM scoring. Results indicated that although both predictive model-derived and LLM-inferred competency scores demonstrated good psychometric properties, LLM-inferred competency scores outperformed predictive model-derived competency scores in terms of convergent validity, discriminant validity, and incremental criterion-related validity. These findings suggest that LLMs may be a promising tool for scalable, efficient, and valid workplace competency assessment. Theoretical and practical implications as well as future research directions are discussed. [en_US]
dc.rights: EMBARGO_GLOBAL [en_US]
dc.subject: Psychological Sciences [en_US]
dc.title: A Comparative Analysis of Predictive Model and Large Language Model Approaches to Workplace Competency Assessment [en_US]
dc.type: PhD Dissertation [en_US]
dc.embargo.length: MONTHS_WITHHELD:60 [en_US]
dc.embargo.status: EMBARGOED [en_US]
dc.embargo.enddate: 2030-12-04 [en_US]
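
Note on the abstract's methodology: the dissertation itself is embargoed, so its exact prompts are not available. As a rough illustration of what the four prompting strategies named in the abstract typically look like, the Python sketch below builds minimal prompt templates for scoring one interview response on a single competency. Everything in it (the competency "teamwork", the 1-5 scale, the template wording, and the worked examples) is an assumption for illustration, not material from the study.

# A minimal sketch of the four prompting strategies named in the abstract.
# All wording, the example response, and the "teamwork" competency are
# illustrative assumptions; the dissertation's actual prompts are embargoed.

DIRECT_EXAMPLE = (
    "Response: 'I organized weekly check-ins so the team stayed aligned.'\n"
    "Teamwork score (1-5): 4\n\n"
)

COT_EXAMPLE = (
    "Response: 'I organized weekly check-ins so the team stayed aligned.'\n"
    "Reasoning: The candidate created a recurring structure that kept the "
    "whole team coordinated, which is strong evidence of teamwork.\n"
    "Teamwork score (1-5): 4\n\n"
)

def zero_shot_direct(response: str) -> str:
    # Direct request for a score: no examples, no reasoning step.
    return (
        "Rate the following interview response on teamwork (1-5).\n"
        f"Response: {response}\nScore:"
    )

def few_shot_direct(response: str) -> str:
    # Same direct request, preceded by one scored example.
    return DIRECT_EXAMPLE + zero_shot_direct(response)

def zero_shot_cot(response: str) -> str:
    # Chain of thought: ask the model to reason before scoring.
    return (
        "Rate the following interview response on teamwork (1-5). "
        "Think step by step about the evidence, then give a score.\n"
        f"Response: {response}\nReasoning:"
    )

def few_shot_cot(response: str) -> str:
    # Chain of thought preceded by a worked example that includes reasoning.
    return COT_EXAMPLE + zero_shot_cot(response)

if __name__ == "__main__":
    sample = "I mediated a disagreement between two teammates."
    for build in (zero_shot_direct, few_shot_direct, zero_shot_cot, few_shot_cot):
        print(f"--- {build.__name__} ---\n{build(sample)}\n")

In a design like the one the abstract describes, prompts of this shape would be sent to an LLM and the returned scores evaluated psychometrically; only the example-and-reasoning scaffolding differs across the four conditions.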
