Electronic Theses and Dissertations

Behavior Analysis and Enhancement of Robustness for Deep Neural Networks




Wang, Longwei

Type of Degree

PhD Dissertation

Department

Computer Science and Software Engineering

Restriction Type

Auburn University Users



Despite the remarkable success of machine learning models built on deep neural networks across a variety of problems, including image classification, these models are highly vulnerable to small, carefully chosen modifications of their inputs, known as adversarial examples. While such perturbations often look like simple noise to a human, or are even imperceptible, they cause state-of-the-art models to misclassify input data with high confidence. To close this gap, we investigate the behaviors of deep neural networks, aiming to understand their inner working mechanisms and to improve their classification accuracy.

In the first part of this dissertation, we study the neuron activation behavior of a well-trained classification model. An information-theoretic method is leveraged to examine the behavior of layer-wise neurons in deep neural networks. We discover that in a well-trained classification model, the randomness of a neuron's activation pattern decreases with the depth of the fully connected layers. This finding suggests that the activation patterns of deep layers are more stable than those of shallow layers.

In the second part of the dissertation, we advocate incorporating a diversity of symmetries, such as rotation and scaling, into an existing CNN model to enhance the robustness of deep neural network models. We show that the perturbation-invariance property can be approximated by the various symmetries incorporated into the models. Furthermore, a model equipped with such symmetry enforcement is guaranteed to generalize to input data that are shifted, rotated, or scaled. Importantly, we explore the relationship between the generalization and robustness of deep neural networks, thereby empirically shedding light on the impact of generalization on adversarial robustness.
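The layer-wise analysis in the first part can be illustrated with a small numerical experiment. The sketch below is a minimal, hypothetical setup, assuming a toy random-weight ReLU network rather than the trained classifiers studied in the dissertation: it estimates the empirical Shannon entropy of each layer's binary activation pattern over a batch of inputs. In a well-trained model, this entropy would be expected to shrink with depth; all layer sizes and helper names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected ReLU network with random weights (hypothetical sizes;
# the dissertation studies trained classifiers, where the depth effect emerges).
layer_sizes = [20, 16, 16, 16]
weights = [rng.normal(size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def activation_patterns(x):
    """Binary on/off pattern of each layer's ReLU units for one input."""
    patterns = []
    h = x
    for w in weights:
        h = np.maximum(h @ w, 0.0)              # ReLU activation
        patterns.append((h > 0).astype(np.uint8))
    return patterns

def pattern_entropy(rows):
    """Empirical Shannon entropy (bits) of the distinct patterns in `rows`."""
    _, counts = np.unique(rows, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Collect per-layer activation patterns over a batch of random inputs.
X = rng.normal(size=(500, layer_sizes[0]))
per_layer = [[] for _ in weights]
for x in X:
    for i, pat in enumerate(activation_patterns(x)):
        per_layer[i].append(pat)

entropies = [pattern_entropy(np.array(rows)) for rows in per_layer]
print(entropies)   # one entropy estimate per hidden layer
```

For a trained classifier, comparing these per-layer estimates quantifies how much more stable the deep-layer activation patterns are than the shallow-layer ones.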
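The symmetry-incorporation idea of the second part can be approximated, in its simplest form, by expanding a training batch with exact group operations. The sketch below uses only the discrete group of 90-degree rotations and horizontal reflections, whereas the dissertation also treats continuous rotation and scaling; `augment_with_symmetries` is a hypothetical helper name, not the dissertation's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment_with_symmetries(images):
    """Expand a batch of shape (N, H, W) with the 8 symmetries of the
    square: rotations by 0/90/180/270 degrees, each with and without a
    horizontal flip. Exact operations, so no interpolation error."""
    out = []
    for img in images:
        for k in range(4):                  # 0, 90, 180, 270 degrees
            r = np.rot90(img, k)
            out.append(r)
            out.append(np.fliplr(r))        # add the reflected copy
    return np.stack(out)

batch = rng.normal(size=(8, 28, 28))
augmented = augment_with_symmetries(batch)
print(batch.shape, augmented.shape)         # (8, 28, 28) (64, 28, 28)
```

Each input yields eight symmetric variants, which is one concrete sense in which symmetry operations multiply the effective amount of training data.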
The symmetry operations not only multiply the effective amount of training data but also substantially strengthen the expressive capacity of the network, which in turn bolsters adversarial robustness.

In the third part of the dissertation research, we delve into the development of robust target detection techniques against malicious attacks, based on representation learning. We construct a two-stage framework for fusing information from heterogeneous sensors. The representation-learning stage, a core underpinning, transforms data into a unified form. The nature-encoded fusion implemented in the newly designed framework allows data from different modalities to be processed in a unified probabilistic space. Inherent inter-sensor relationships are exploited: such relationships are envisioned as nature-encoded sensing with heterogeneous sensors. Malicious data-injection attacks may tamper with the sensors of a fusion center or a data-acquisition system, thereby seriously degrading the target detection performance of the sensor fusion system. We demonstrate that iterative belief propagation refines and fuses local individual beliefs to combat such malicious data attacks. Our findings confirm that the belief propagation method furnishes intuitive insight: probabilistic updates can reinforce beliefs with the help of a correlation factor.
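The belief-propagation fusion described above can be sketched on a toy fully connected graph of three binary sensors. The local evidence values, the correlation factor `rho`, and the pairwise agreement potential below are illustrative assumptions, not the dissertation's actual model; the sketch only shows how iterative probabilistic message passing lets correlated neighbors reinforce (here, pull up) a possibly tampered sensor's belief.

```python
import numpy as np

# Hypothetical local evidence from three sensors: P(target present).
# Sensor 3's reading may have been tampered with by a data-injection attack.
local = np.array([0.9, 0.8, 0.3])
rho = 0.7                                   # correlation factor between sensors

# Pairwise potential: neighboring sensors agree with probability rho.
psi = np.array([[rho, 1 - rho],
                [1 - rho, rho]])

# Node potentials over the states [absent, present].
phi = np.stack([1 - local, local], axis=1)

n = len(local)
msgs = np.ones((n, n, 2))                   # msgs[i, j] = message from i to j

for _ in range(20):                         # iterative belief propagation
    new = np.ones_like(msgs)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Product of sensor i's evidence and incoming messages, excluding j.
            prod = phi[i].copy()
            for k in range(n):
                if k not in (i, j):
                    prod *= msgs[k, i]
            m = psi.T @ prod                # marginalize over sensor i's state
            new[i, j] = m / m.sum()
    msgs = new

# Fuse: each sensor's belief combines its evidence with all incoming messages.
beliefs = phi.copy()
for j in range(n):
    for i in range(n):
        if i != j:
            beliefs[j] *= msgs[i, j]
beliefs /= beliefs.sum(axis=1, keepdims=True)
print(beliefs[:, 1])                        # fused P(target present) per sensor
```

Because the first two sensors strongly indicate a target and sensors are assumed correlated, the fused belief for the third sensor rises above its raw reading of 0.3, which is the reinforcement effect the correlation factor provides.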