
Interpretable Deep Learning for Diagnosis of Human Brain Disorders using Neuroimaging


Metadata (field: value [language])
dc.contributor.advisor: Deshpande, Gopikrishna
dc.contributor.author: Masood, Janzaib
dc.date.accessioned: 2020-11-17T17:58:46Z
dc.date.available: 2020-11-17T17:58:46Z
dc.date.issued: 2020-11-17
dc.identifier.uri: http://hdl.handle.net/10415/7492
dc.description.abstract: Deep neural networks are increasingly used in neuroimaging research for the diagnosis of brain disorders and the understanding of the human brain. They are complex, data-driven systems that operate as black boxes once trained. Despite their impressive performance, their use in medical applications will be limited unless there is more transparency about how these algorithms arrive at their decisions. Interpretability algorithms help bring transparency to the decision-making of deep neural networks. In this work, we combined the insights generated by multiple interpretable deep learning algorithms to explain our resting-state functional connectivity classifiers. Convolutional neural network classifiers were trained to discriminate between patients and healthy subjects for post-traumatic stress disorder (PTSD), autism spectrum disorder (ASD), and Alzheimer's disease. We used variants of gradient-based and relevance-based interpretability algorithms. Permutation testing and cluster mass thresholding were used to identify the functional connectivity paths that significantly discriminate patients from control subjects. For PTSD, the classifier achieved >90% accuracy, and different interpretability algorithms gave slightly different results, most likely because they make different assumptions about the model and the data. Taking a consensus across methods made the interpretability more robust and brought it into general agreement with prior literature on connectivity alterations underlying PTSD. For ASD, classification performance, and hence interpretability, varied widely across data acquisition sites (56%-94%). Harmonizing data across sites provided an incremental improvement in accuracy, but not enough to make interpretability largely consistent; interpretability was meaningful only for the sites with sufficiently high accuracy. Our results demonstrate that robust interpretability across methods and models requires substantially higher accuracy than is currently achievable on many neuroimaging datasets. This should be a cautionary tale for researchers who want to interpret artificial neural network classifiers in neuroimaging. [en_US]
dc.rights: EMBARGO_NOT_AUBURN [en_US]
dc.subject: Electrical and Computer Engineering [en_US]
dc.title: Interpretable Deep Learning for Diagnosis of Human Brain Disorders using Neuroimaging [en_US]
dc.type: Master's Thesis [en_US]
dc.embargo.length: MONTHS_WITHHELD:5 [en_US]
dc.embargo.status: EMBARGOED [en_US]
dc.embargo.enddate: 2021-03-20 [en_US]
dc.contributor.committee: Denney, Thomas
dc.contributor.committee: Katz, Jeffery
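
As an illustration of the gradient-based interpretability described in the abstract above, here is a minimal, hypothetical PyTorch sketch. The FCClassifier architecture, the 200-region connectivity matrix size, and the saliency_map helper are assumptions made for this example only; they are not the thesis implementation.

```python
# Hypothetical sketch: gradient-based saliency for a functional connectivity
# classifier of the kind described in the abstract. All names and shapes here
# are illustrative assumptions, not the thesis code.
import torch
import torch.nn as nn


class FCClassifier(nn.Module):
    """Toy CNN over a functional connectivity matrix (regions x regions)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.classifier = nn.Linear(8 * 8 * 8, 2)  # patient vs. control logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


def saliency_map(model: nn.Module, fc_matrix: torch.Tensor) -> torch.Tensor:
    """Gradient of the patient-class logit w.r.t. each connectivity value."""
    model.eval()
    x = fc_matrix.clone().requires_grad_(True)  # shape (1, 1, R, R)
    patient_logit = model(x)[0, 1]
    patient_logit.backward()
    return x.grad.abs().squeeze()  # importance score per connection


if __name__ == "__main__":
    model = FCClassifier()
    fc = torch.randn(1, 1, 200, 200)  # placeholder resting-state FC matrix
    print(saliency_map(model, fc).shape)  # torch.Size([200, 200])
```

In the work summarized above, maps of this kind from several gradient- and relevance-based methods would then be aggregated across subjects and thresholded with permutation testing and cluster mass thresholding; that consensus step is not shown in this sketch.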
