A Set-Theoretic Perspective for Evaluating, Training, and Interpreting Language Models


Metadata fields (field: value [language]):

dc.contributor.advisor: Kanti Karmaker, Shubhra (Santu)
dc.contributor.author: Bansal, Naman
dc.date.accessioned: 2024-08-13T13:32:56Z
dc.date.available: 2024-08-13T13:32:56Z
dc.date.issued: 2024-08-13
dc.identifier.uri: https://etd.auburn.edu//handle/10415/9467
dc.description.abstract [en_US]:

This thesis investigates the training, evaluation, and interpretation of language models from a set-theoretic perspective. We start by evaluating language models through the lens of Semantic Overlap Summarization (SOS), where models generate a single summary from multiple alternative narratives to capture the common information these narratives convey. To this end, we conducted a systematic study by borrowing the popular ROUGE metric from the text summarization literature and discovered that ROUGE is unsuitable for evaluating overlap summaries. Consequently, we conducted human annotations to create 200 document-level and 1,518 sentence-level ground-truth overlap labels. Our experiments demonstrate that a sentence-wise annotation technique with three overlap labels – Absent (A), Partially-Present (PP), and Present (P) – achieves a higher correlation with human judgment and greater inter-rater agreement than the ROUGE metric. These labels, which measure the overlap between sentences from the reference and model-generated summaries, are grounded in set-theoretic principles. Building on this overlap-based labeling scheme, we introduce SEM-F1 (Semantic F1), a new sentence-level, precision-recall-style automated evaluation metric. Inspired by the set-theoretic notion of overlap, SEM-F1 provides a more accurate and intuitive assessment of summary quality, reflecting the semantic commonality between generated and reference summaries. (Illustrative sketches of the labeling scheme, the metric, and the data generation technique appear after the record below.)

One challenge in training language models through the lens of semantic overlap is the lack of large-scale datasets. To address this, we propose a novel data generation technique inspired by set theory. By partitioning a document into two overlapping segments and employing an abstractive summarizer to generate summaries for these segments, we create synthetic training samples that embody the essential semantic overlap. This method enables the generation of extensive, domain-specific synthetic datasets, leveraging pretrained summarizers to produce noisy yet valuable examples. Our experiments demonstrate that fine-tuning sequence-to-sequence models on these set-theory-inspired synthetic datasets significantly enhances their performance in generating overlap summaries, highlighting the effectiveness of our approach for training language models in this domain.

Regarding interpretability, intuitively interpreting the latent semantic space of existing sentence encoders remains an open challenge, so we introduce a novel, task-independent framework inspired by classical set theory to address it. We formulate six criteria and examine seven classical and six Large Language Model (LLM)-based sentence embeddings through the lens of fundamental set-like operations. Our objective is to determine whether a set-theoretic perspective can provide intuitive insights into these representations, independent of the specific task. Our experimental findings consistently show that SBERT models produce the most interpretable embeddings according to our set-theory-inspired framework, even surpassing LLMs in interpretability. Furthermore, we propose a benchmark dataset comprising approximately 192K samples corresponding to three set-like text operators.

In summary, this thesis demonstrates the impact of applying set-theoretic principles to various aspects of language model training, evaluation, and interpretation.
By integrating set-theoretic concepts into the evaluation of semantic overlap, the development of novel synthetic data generation techniques for model training, and the interpretation of sentence embeddings, we have made substantial contributions to enhancing the quality and understanding of language models. Our approach not only addresses existing challenges in training and evaluating models but also provides new insights into the interpretability of their latent representations.
dc.rights: EMBARGO_GLOBAL [en_US]
dc.subject: Computer Science and Software Engineering [en_US]
dc.title: A Set-Theoretic Perspective for Evaluating, Training, and Interpreting Language Models [en_US]
dc.type: PhD Dissertation [en_US]
dc.embargo.length: MONTHS_WITHHELD:60 [en_US]
dc.embargo.status: EMBARGOED [en_US]
dc.embargo.enddate: 2029-08-13 [en_US]
dc.contributor.committee: Rahman, Akond
dc.contributor.committee: Gupta, Ashish
dc.contributor.committee: Zhou, Yang
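
The abstract's sentence-wise annotation scheme assigns each summary sentence one of three overlap labels: Absent (A), Partially-Present (PP), or Present (P). As a toy illustration only, the sketch below aggregates such labels into a document-level overlap score; the 0.0/0.5/1.0 weights and the mean aggregation are assumptions made for this example, not the dissertation's actual protocol.

```python
# Toy aggregation of sentence-level overlap labels (A / PP / P) into a
# document-level score. The 0.0 / 0.5 / 1.0 weights are an illustrative
# assumption, not the dissertation's actual scoring rule.
from enum import Enum

class OverlapLabel(Enum):
    ABSENT = 0.0             # content missing from the other summary
    PARTIALLY_PRESENT = 0.5  # content partially covered
    PRESENT = 1.0            # content fully covered

def document_overlap_score(labels: list[OverlapLabel]) -> float:
    """Mean of per-sentence overlap weights; 0.0 for an empty label list."""
    return sum(l.value for l in labels) / len(labels) if labels else 0.0

# Example: a summary with three annotated sentences.
print(document_overlap_score(
    [OverlapLabel.PRESENT, OverlapLabel.PARTIALLY_PRESENT, OverlapLabel.ABSENT]
))  # 0.5
```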
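The abstract describes SEM-F1 as a sentence-level, precision-recall-style metric of semantic overlap. Below is a minimal sketch of how such a metric can be computed with off-the-shelf tools; the NLTK sentence splitter, the all-MiniLM-L6-v2 encoder, and the max-similarity matching rule are assumptions for illustration, not necessarily the dissertation's exact design.

```python
# Minimal sketch of a SEM-F1-style metric: sentence-level precision and recall
# from embedding cosine similarity. Requires: pip install nltk sentence-transformers
# and nltk.download("punkt"). The encoder and matching rule are assumptions.
from nltk.tokenize import sent_tokenize
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

def sem_f1(generated: str, reference: str) -> float:
    gen_sents = sent_tokenize(generated)
    ref_sents = sent_tokenize(reference)
    if not gen_sents or not ref_sents:
        return 0.0
    gen_emb = encoder.encode(gen_sents, normalize_embeddings=True)
    ref_emb = encoder.encode(ref_sents, normalize_embeddings=True)
    sim = gen_emb @ ref_emb.T                   # cosine similarities (gen x ref)
    precision = float(sim.max(axis=1).mean())   # best reference match per generated sentence
    recall = float(sim.max(axis=0).mean())      # best generated match per reference sentence
    if precision + recall <= 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

The precision side rewards generated sentences that match something in the reference, while the recall side rewards reference sentences covered by the generation, mirroring the set-intersection reading of summary overlap.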
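Finally, a sketch of the set-theory-inspired synthetic data generation the abstract describes: partition a document into two overlapping segments, summarize each independently, and treat the two summaries as alternative narratives whose common content stems from the shared middle. The BART summarizer, the one-third overlap ratio, and the use of the shared segment's summary as the reference overlap summary are all assumptions made for this sketch.

```python
# Sketch of set-theory-inspired synthetic data generation: two overlapping
# document segments are summarized independently; the summary pair forms one
# training sample whose semantic overlap stems from the shared middle chunk.
# The summarizer choice and the 1/3 overlap ratio are assumptions.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def make_sos_sample(document: str, overlap_ratio: float = 1 / 3) -> dict:
    words = document.split()
    n = len(words)
    cut = int(n * (0.5 + overlap_ratio / 2))    # end of the first segment
    start = int(n * (0.5 - overlap_ratio / 2))  # start of the second segment
    seg_a = " ".join(words[:cut])               # first segment (includes overlap)
    seg_b = " ".join(words[start:])             # second segment (includes overlap)
    shared = " ".join(words[start:cut])         # overlapping middle segment
    sum_a = summarizer(seg_a, max_length=128, truncation=True)[0]["summary_text"]
    sum_b = summarizer(seg_b, max_length=128, truncation=True)[0]["summary_text"]
    target = summarizer(shared, max_length=128, truncation=True)[0]["summary_text"]
    # Assumed sample layout: two noisy alternative narratives plus a reference
    # overlap summary derived from the shared segment.
    return {"narrative_1": sum_a, "narrative_2": sum_b, "overlap_summary": target}
```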
