|dc.description.abstract||Fake news plays an increasingly dominant role in spreading misinformation, influencing people's perceptions and knowledge and distorting their awareness and decision-making. The growth of social media and online forums has spurred the spread of fake news, allowing it to blend easily with truthful information. This study provides a novel text analytics-driven approach to fake news detection for reducing the risks posed by fake news consumption. In this dissertation, we address fake news detection by establishing three analytics models.
In the first part, we describe the framework for the proposed approach and the underlying analytical model, including the implementation details and validation based on a corpus of news data. We collect legitimate and fake news and transform the document-based corpus into a topic- and event-based representation. Fake news detection is then performed using a two-layered approach comprising the detection of fake topics and fake events. The efficacy of the proposed approach is demonstrated through the implementation and validation of a novel FakE News Detection (FEND) system. The proposed approach achieves 92.49% classification accuracy and 94.16% recall with a specified threshold value of 0.6.
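The abstract does not specify FEND's internal scoring, but the two-layered, thresholded classification it describes can be sketched as follows. The Jaccard-style similarity function and all names here are illustrative assumptions, not the system's actual implementation; only the 0.6 threshold and the topic/event layering come from the abstract.

```python
# Illustrative two-layered detection rule (a sketch, not FEND itself).

THRESHOLD = 0.6  # threshold value stated in the abstract


def similarity(candidate, reference):
    """Hypothetical Jaccard overlap between a news item's extracted
    topics/events and those of known legitimate news."""
    a, b = set(candidate), set(reference)
    return len(a & b) / len(a | b) if (a | b) else 0.0


def classify(article_topics, article_events, legit_topics, legit_events):
    # Layer 1: flag the article if its topics diverge from legitimate topics.
    if similarity(article_topics, legit_topics) < THRESHOLD:
        return "fake"
    # Layer 2: check events only for articles that pass the topic layer.
    if similarity(article_events, legit_events) < THRESHOLD:
        return "fake"
    return "legitimate"
```

An article must clear both layers to be labeled legitimate, which matches the two-stage topic-then-event filtering the abstract describes.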
In the second part, we propose a computational approach for detecting fake news in real time. The proposed methodology utilizes event and topic extraction techniques along with a topic-merging mechanism to process real-time news data and reduce the number of topics. This approach includes a two-stage procedure for improved memory management using a streaming framework. We report the findings from several computational experiments that benchmark the proposed methodology in different system settings. Our approach detects fake news more time-efficiently while also yielding a 19.76% reduction in the number of topics and a 26.92% reduction in the number of data clusters compared to other fake news detection systems.
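As a rough illustration of the topic-merging idea described above (folding similar topics together to reduce their number as news streams in), the following sketch uses an assumed keyword-overlap criterion; the dissertation's actual merge mechanism and thresholds are not specified here.

```python
# Sketch of greedy topic merging for a streaming setting. Topics are
# represented as keyword sets; the Jaccard-overlap criterion and the 0.5
# cutoff are assumptions for illustration only.


def merge_topics(topics, overlap=0.5):
    """Greedily fold each incoming topic into the first existing cluster
    whose keyword overlap exceeds `overlap`, reducing the topic count."""
    merged = []
    for t in topics:
        for m in merged:
            inter, union = len(m & t), len(m | t)
            if union and inter / union >= overlap:
                m |= t  # merge the new topic into an existing cluster
                break
        else:
            merged.append(set(t))  # no similar cluster found; keep as new
    return merged


stream = [{"flood", "rain"}, {"flood", "storm", "rain"}, {"election"}]
merged = merge_topics(stream)  # three incoming topics collapse to two
```

Processing topics one at a time like this keeps only the merged clusters in memory, which is the kind of reduction a streaming framework needs.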
Objective and subjective separation (OSS) in text could fundamentally benefit textual affective analysis. Existing OSS approaches, such as extracting perceptual pieces, concentrate mainly on identifying subjectivity. Objectivity learning in language has become a challenging task due to false knowledge and other misinformation propagating over the internet. Finally, this dissertation presents a novel objectivity-subjectivity separation approach for short texts that does not rely on traditional subjective clues, referred to as 'private states'. We accomplish this by leveraging three latent features (subject viewpoint, object viewpoint, and tense) of relational triple sets extracted from sentences. In the model, we propose a group of algorithms to extract these latent features and recognize subjective or objective patterns in datasets. We assess our approach by regrouping the three latent features into three two-element variables and one triple variable and comparing the distributions of these variables between objective and subjective datasets. The results indicate that the model based on our proposed methodology achieves approximately 87.5% accuracy and approximately 97% recall in evaluating extracted objective patterns.||en_US