On Accelerating Deep Neural Network Mutation Analysis
Date: 2025-04-21
Type of Degree: Master's Thesis
Department: Computer Science and Software Engineering
Abstract
The usage of Deep Neural Networks (DNNs) is increasing rapidly across various domains, necessitating rigorous testing to ensure their validity, usability, and effectiveness. Mutation analysis of DNNs, a technique that injects small changes (mutations) into a model to produce mutants whose detection measures the effectiveness of a test set, has emerged as a powerful approach for assessing model robustness. However, existing mutation analysis techniques for DNNs incur prohibitive computational costs, especially for large real-world models. This creates a critical need to accelerate DNN mutation analysis while preserving the effectiveness of the testing. The primary contribution of this thesis is DEEPMAACC, a novel tool designed to mitigate these computational expenses. DEEPMAACC implements two distinct acceleration methods: neuron clustering and mutant clustering. Both methods use hierarchical agglomerative clustering to group neurons or mutants with similar weights, aiming to improve efficiency while preserving the accuracy of the mutation score. To evaluate DEEPMAACC, this research conducts an empirical study on eight DNN models spanning four popular classification datasets and two DNN architectures. The secondary contribution of this thesis is two additional approaches for accelerating the mutation analysis of DNNs: Random Mutant Selection and Boundary Sample Size Selection. Random Mutant Selection, inspired by Ghanbari et al., serves as a baseline showing that randomly choosing a fixed percentage of mutants does not yield the same outcomes as DEEPMAACC. Boundary Sample Size Selection is a unique approach, inspired by Shen et al., that tests mutants only on decision boundary samples of certain sensitivities.
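The core idea behind both acceleration methods can be illustrated with a small, self-contained sketch. The following is a hypothetical example of hierarchical agglomerative clustering with average linkage, grouping neurons by the similarity of their weight vectors until the closest pair of clusters exceeds a distance threshold; it is not the DEEPMAACC implementation, and the function names and the threshold-based stopping rule are illustrative assumptions.

```python
# Hypothetical sketch (not the DEEPMAACC code): agglomerative clustering
# with average linkage over per-neuron weight vectors. Clusters are merged
# bottom-up until the smallest inter-cluster distance exceeds `threshold`.
import math

def euclidean(a, b):
    # Euclidean distance between two weight vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster_neurons(weights, threshold):
    """weights: list of per-neuron weight vectors.
    Returns a list of clusters, each a list of neuron indices."""
    clusters = [[i] for i in range(len(weights))]
    while len(clusters) > 1:
        best = None  # (distance, i, j) of the closest cluster pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Average-linkage distance: mean pairwise distance
                # between the members of the two clusters.
                d = sum(euclidean(weights[a], weights[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best[0] > threshold:
            break  # remaining clusters are too dissimilar to merge
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters
```

A representative neuron (or mutant) from each resulting cluster can then stand in for the whole group, which is how clustering reduces the number of mutants that must be executed.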