Electronic Theses and Dissertations


On the Machine Illusion


Metadata

dc.contributor.advisor: Ku, Wei-Shinn
dc.contributor.author: Gong, Zhitao
dc.date.accessioned: 2019-10-14T15:56:15Z
dc.date.available: 2019-10-14T15:56:15Z
dc.date.issued: 2019-10-14
dc.identifier.uri: http://hdl.handle.net/10415/6944
dc.description.abstract: In this work, we empirically study an emerging problem in the machine learning community: adversarial samples. Specifically, we focus on neural networks. The existence of adversarial samples reveals yet another inconsistency in our assumptions about neural networks. An adversarial sample is usually generated by adding very small, carefully chosen noise to a clean data sample, e.g., perturbing a few pixel values in an image or replacing a few words in a sentence. Although adversarial samples are almost the same (visually or semantically) as the clean samples from a human perspective, they trick a well-trained neural network into making wrong predictions with very high confidence. We also show that adversarial samples exist in the real world when objects appear in unusual poses (e.g., a flipped-over school bus). We study this problem from both sides of the coin: defending against adversarial samples and generating them. Concretely, to defend against adversarial samples, we propose a binary classification method that filters them out. It achieves almost perfect accuracy on adversarial samples from seen distributions, but it fails to recognize adversarial samples from unseen distributions. To generate adversarial samples, we first propose a framework that produces text adversarial samples for text classification problems (e.g., sentiment analysis). The framework generates high-quality text adversarial samples; its limitation is that we have no explicit control over semantics and syntax. We then propose another framework that generates image adversarial samples by rendering 3D objects in unusual poses, showing that natural adversarial samples may exist in abundance in the real world. What this dissertation lacks is a theoretical exploration of the problem; we may revisit it as the theory of neural networks matures. (en_US)
dc.subject: Computer Science and Software Engineering (en_US)
dc.title: On the Machine Illusion (en_US)
dc.type: PhD Dissertation (en_US)
dc.embargo.status: NOT_EMBARGOED (en_US)
dc.contributor.committee: Qin, Xiao
dc.contributor.committee: Nguyen, Anh
dc.contributor.committee: Zhou, Yang
dc.contributor.committee: Mao, Shiwen
dc.creator.orcid: 0000-0003-1857-4697 (en_US)
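
The abstract above describes crafting image adversarial samples by adding very small, carefully chosen noise to a clean input. As a concrete illustration of that idea, below is a minimal sketch of one standard attack of this kind, the fast gradient sign method (FGSM) of Goodfellow et al. The dissertation does not state that it uses this particular attack; the function name fgsm, the epsilon value, and the model/tensor shapes here are illustrative assumptions, not the author's implementation.

# Minimal FGSM sketch (PyTorch), assuming `model` is a differentiable
# image classifier, `x` is a batch of images in [0, 1], and `y` holds
# the true class labels.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    # Compute the loss gradient with respect to the input, not the weights.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The "small and carefully chosen noise": a step of size epsilon in the
    # direction of the gradient's sign, clipped back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

For a batch of images x with labels y, fgsm(model, x, y) returns images that typically look unchanged to a human yet often flip the model's prediction with high confidence, which is the phenomenon the dissertation studies.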

