Using AI Techniques to Understand Text in Different Contexts: Misinformation, COVID-19, and Mental Health
Date: 2023-07-11
Type of Degree: PhD Dissertation
Department: Systems and Technology
Restriction Status: EMBARGOED
Restriction Type: Full
Date Available: 07-11-2025

Abstract
This dissertation consists of three studies. It takes a multi-pronged approach that draws on various forms of online textual data, using Partial Least Squares Structural Equation Modeling (PLS-SEM) and Artificial Intelligence (AI) techniques such as text analytics, natural language processing (NLP), network analytics, and deep learning to understand misinformation research, the dynamics of COVID-19 news topics on Facebook, and how humans evaluate AI-generated and human-generated reviews of mental health apps.

Study 1 provided a systematic, text-analytic literature review of misinformation research. First, this survey presented a framework to explain the creation and spread of misinformation within the AI context. Second, we synthesized four thematic dimensions along which recent misinformation research has been evolving: (1) spread of misinformation, (2) impacts of misinformation, (3) misinformation detection, and (4) mitigation of misinformation. This effort provides guidance for IS and Analytics researchers pursuing this important line of research on developing improved models and analytics approaches that help mitigate the impact of misinformation.

Study 2 examined the evolution of news topics during the COVID-19 pandemic across seven phases of the pandemic cycle. Guided LDA analysis of 30 million English-language Facebook posts collected across all COVID-19 phases helped us identify key terms in each phase. Some topics (terms) ranked among the most relevant in every phase.
These key topics can be grouped into four categories: new case and death counts (e.g., “coronavirus case”, “test positive”, “new case”, and “positive coronavirus”), prevention strategies (e.g., “close contact”, “social distance”, “wash hand”, “stay home”, “home test”, and “test kit”), COVID-19 vaccines (e.g., “coronavirus vaccine”, “coronavirus vaccination”, “second dose”, and “fully vaccinate”), and variants (e.g., “delta variant” and “omicron variant”). Thematic analysis revealed ten dominant themes across the COVID-19 phases. Interestingly, we found that even though some themes (e.g., mitigation and prevention strategies) are prevalent across all phases, the topics within these themes evolve as the COVID-19 cycle progresses. Furthermore, the theme “Severity of COVID-19” became dominant during the middle stages of the pandemic, when COVID-19 showed a downward trend. Counterintuitively, text network analytics uncovered that topics during the early stages of the pandemic were more connected than in the phases when COVID-19 showed an upward trend, indicating that early social media discussions covered a wider range of interrelated topics. Findings from this study could be used to develop better information dissemination portals and content for different stages of a pandemic cycle. The study also guides responsible agencies in synchronizing messaging with pandemic phases and in planning for future pandemics.

Study 3 investigated how app users are stimulated by linguistic cues in online reviews and how they process reviews through different paths of internal states, such as perceived cognitive effort and perceived persuasion motives, which in turn affect perceived review credibility. This study confirmed that consumers’ internal states have different impacts on perceived review credibility depending on whether the reviews are AI-generated or human-generated. For AI-generated reviews, internal states are affected by Emotion only.
For human-generated reviews, however, internal states are affected by Complexity, Emotion, and Uncertainty. These findings open opportunities for future research to incorporate more features of AI-generated fake reviews.
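To illustrate the kind of text network analytics used in Study 2 to compare topic connectedness across pandemic phases, the sketch below builds a simple term co-occurrence network from toy posts and compares graph density between two hypothetical phases. This is a minimal illustration only: the posts, vocabulary, and co-occurrence rule are invented here and are not the dissertation's actual Facebook data or method.

```python
from itertools import combinations

def cooccurrence_density(posts, vocab):
    """Build a term co-occurrence graph over `vocab` and return its density.

    Two terms are linked if they appear together in at least one post;
    density = observed edges / possible edges, a simple connectedness measure.
    """
    edges = set()
    for post in posts:
        # Terms from the vocabulary present in this post (toy substring match).
        present = sorted({t for t in vocab if t in post})
        for a, b in combinations(present, 2):
            edges.add((a, b))
    n = len(vocab)
    possible = n * (n - 1) / 2
    return len(edges) / possible if possible else 0.0

# Invented example posts (not the dissertation's corpus).
vocab = ["vaccine", "mask", "variant", "test"]
early = ["vaccine mask test", "mask variant", "vaccine variant test"]
late = ["vaccine test", "variant"]

print(cooccurrence_density(early, vocab))  # early-phase network is denser
print(cooccurrence_density(late, vocab))
```

A denser network means more topics were discussed together, matching the study's observation that early-phase discussions covered a wider, more interconnected range of topics.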