Beyond The Accuracy In Deep Learning: Robustness, Privacy and Fairness
Date: 2023-11-15
Type of Degree: PhD Dissertation
Department: Computer Science and Software Engineering
Restriction Status: EMBARGOED
Restriction Type: Auburn University Users
Date Available: 11-15-2028

Abstract
While Deep Learning (DL) has achieved great success across various domains, its trustworthiness remains a concern. This dissertation delves into threats associated with robustness, privacy, and fairness, offering novel solutions to enhance the reliability of DL models.

In our first study, we explored the vulnerability of deep learning models and proposed an integrated defense training strategy, IDRGM, for resilient graph matching with two novel objectives to defend against the attacks. A detection technique based on inscribed simplexes in hyperspheres formed by multiple matched nodes is proposed to tackle inter-graph dispersion attacks, and a node separation method based on phase-type distributions is developed to estimate the distribution of perturbed graphs and separate the nodes within the same graph over a wide space.

In our second study, we developed a novel adaptive optimization method for Federated Learning (FL), approached from the perspective of the dynamics of ordinary differential equations (ODEs), to protect the privacy of the data. A momentum-decoupling adaptive optimization method, FedDA, is developed to fully utilize the global momentum on each local iteration and accelerate training convergence.

In our third study, we addressed collaborative fairness in federated learning. A fairness-aware framework, FedFair, is introduced that utilizes class unlearning and network pruning to guarantee collaborative fairness after training. First, we employed a theoretically proven unlearning-based method that prevents the real parameters from being accessed by participants while still enabling their contribution to the training process. Second, we utilized a network pruning method that assists the server in assigning appropriate models to ensure collaborative fairness.
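As a rough illustration of the inter-graph dispersion setting from the first study, the sketch below scores how widely a set of matched node embeddings is spread on the unit hypersphere and flags overly dispersed matches. The function names, the angular-distance score, and the threshold are illustrative assumptions, not IDRGM's actual detection technique.

```python
import numpy as np

def dispersion_score(matched_embeddings: np.ndarray) -> float:
    """Mean pairwise angular distance between matched node embeddings.

    matched_embeddings: (k, d) array, one row per graph in which the node
    was matched; rows are projected onto the unit hypersphere first.
    """
    z = matched_embeddings / np.linalg.norm(matched_embeddings, axis=1, keepdims=True)
    cos = np.clip(z @ z.T, -1.0, 1.0)   # pairwise cosine similarities
    angles = np.arccos(cos)             # pairwise angular distances
    k = len(z)
    return angles[np.triu_indices(k, k=1)].mean()

def flag_dispersed_match(matched_embeddings: np.ndarray, threshold: float = 1.0) -> bool:
    """Hypothetical rule: treat a matched set as a possible inter-graph
    dispersion attack when its mean angular spread exceeds a threshold."""
    return dispersion_score(matched_embeddings) > threshold
```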
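For the second study, here is a minimal sketch of the momentum-decoupling idea behind FedDA, assuming the server ships its global momentum buffers to each client so every local step can blend them with the local gradient. The update rule, hyperparameters, and function names are assumptions for illustration, not the dissertation's exact algorithm.

```python
import torch

def local_round(model, loader, loss_fn, global_momentum,
                lr=0.01, beta=0.9, local_steps=5):
    """One client's local round: the server-provided global momentum buffers
    (one tensor per parameter) are mixed into every local update, rather
    than being applied only once at aggregation time (assumed rule)."""
    params = [p for p in model.parameters() if p.requires_grad]
    for step, (x, y) in enumerate(loader):
        if step >= local_steps:
            break
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, g, m in zip(params, grads, global_momentum):
                # global momentum is used on each local iteration
                p.add_(beta * m + g, alpha=-lr)
```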
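Finally, a hedged sketch of the pruning half of the third study: the server hands lower-contribution participants a more heavily pruned copy of the global model. The proportional pruning rule and the contribution score below are hypothetical; FedFair's actual assignment criterion is not reproduced here.

```python
import copy
import torch.nn as nn
import torch.nn.utils.prune as prune

def assign_model_by_contribution(global_model: nn.Module,
                                 contribution: float) -> nn.Module:
    """Hypothetical rule: prune a fraction (1 - contribution) of the
    smallest-magnitude weights before sending the model, so participants
    with higher contribution scores (in [0, 1]) receive a more complete model."""
    model = copy.deepcopy(global_model)
    amount = min(max(1.0 - contribution, 0.0), 1.0)
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # bake the pruning mask into the weights
    return model
```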