On the Adversarial Robustness of Machine Learning Models on Multi-Graph Scenarios
Type of Degree: PhD Dissertation
Computer Science and Software Engineering
In this work, we study the task of graph matching in an adversarial context under several scenarios. Despite achieving remarkable performance, deep-learning-based graph matching still suffers from small adversarial perturbations. Such a perturbation, usually delivered as a small but carefully designed alteration of the topology of the target graphs, i.e., the addition and deletion of a very small fraction of the edges, can seriously degrade the performance of the graph matching process. To begin our investigation, we designed a density-based, meta-learning-enhanced attack specifically for graph matching and observed high mismatching rates in empirical analysis. We also showed that graph models adversarially trained on perturbations generated by this attack gained additional robustness. The weakness of this method as a defense is that it provides no guarantee of unaffected behavior under attacks with a limited perturbation budget. We therefore went further with Lipschitz networks equipped with a specially designed Kl-Lipschitz Weibull activation, combined with weights constrained to calculated target norms via polar decomposition techniques, to provide provable robustness while preserving the expressiveness of the network and thereby mitigating the inevitable loss of matching rate. We also investigated the potential threats that arise when performing graph matching in a Federated Learning scheme. We propose an algorithm that resolves the central dilemma of this setting: the data privacy constraint requires that graphs not be shared with one another, while the nature of the graph matching problem demands simultaneous knowledge of at least two graphs.
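To make the threat model concrete, the sketch below illustrates a topology attack of the kind the abstract describes: within a fixed perturbation budget, an adversary toggles (adds or deletes) the few edges that most degrade a matching objective. The greedy score-drop heuristic, the `match_score` objective (edge agreement under a node permutation), and all function names here are illustrative assumptions, not the dissertation's actual density-based, meta-learning attack.

```python
import numpy as np

def match_score(A, B, P):
    # Alignment quality of permutation matrix P: the number of edges
    # of A preserved in B under the node mapping, i.e. tr(P A P^T B) / 2.
    return np.trace(P @ A @ P.T @ B) / 2

def greedy_edge_attack(A, B, P, budget):
    # Toggle up to `budget` edges of A, each time choosing the flip
    # that most reduces the matching score -- a tiny stand-in for a
    # budget-limited structural perturbation on the target graph.
    A = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        best, best_drop = None, 0.0
        base = match_score(A, B, P)
        for i in range(n):
            for j in range(i + 1, n):
                A[i, j] = A[j, i] = 1 - A[i, j]   # toggle edge (i, j)
                drop = base - match_score(A, B, P)
                A[i, j] = A[j, i] = 1 - A[i, j]   # undo the toggle
                if drop > best_drop:
                    best, best_drop = (i, j), drop
        if best is None:                           # no flip helps; stop early
            break
        i, j = best
        A[i, j] = A[j, i] = 1 - A[i, j]           # commit the best flip
    return A
```

Even this naive O(n^2) greedy search shows why tiny budgets matter: on small graphs a single well-chosen edge flip already lowers the matching score, which is the degradation a certified (e.g. Lipschitz-constrained) defense must bound.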