Authors:
Shivani Pandya
and
Swati Jain
Affiliation:
Institute of Technology, Nirma University, Sarkhej Gandhinagar Hwy, Ahmedabad, Gujarat, India
Keyword(s):
Autism Spectrum Disorder, Explainable Artificial Intelligence, LIME, SHAP, Machine Learning.
Abstract:
Autism Spectrum Disorder (ASD) is a developmental condition that typically manifests within the first three years of life. Despite the strides made in building accurate autism classification models, particularly on datasets such as AQ-10, the lack of interpretability in these models remains a significant challenge. To address this concern, we employ eXplainable Artificial Intelligence (XAI) techniques, specifically Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), to enhance transparency. Building on the high accuracy achieved with the AQ-10 dataset, our primary aim is to demystify the black-box nature of the machine learning models used for autism classification. LIME provides locally faithful explanations, offering a more nuanced understanding of individual predictions, while SHAP quantifies the contribution of each feature to the model's output. Through instance-based analyses, we leverage these XAI techniques to examine the model's decision-making process at the individual level. Integrating LIME and SHAP not only increases the model's trustworthiness but also fosters a deeper understanding of the factors influencing autism classification. Our results underscore the efficacy of these techniques in unraveling the model's decisions, shedding light on the relevant features and their impact on classification outcomes. This research helps bridge the gap between accuracy and interpretability in machine learning applications, particularly within the realm of autism classification.
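For intuition, the per-feature attributions that SHAP produces can be sketched from first principles. The example below is purely illustrative and is not the authors' implementation: the weights, bias, and respondent's answers are hypothetical stand-ins for a trained AQ-10 classifier, and the exact Shapley values are computed by brute-force coalition enumeration (feasible here because AQ-10 has only ten binary features) rather than with the `shap` library, which approximates the same quantities efficiently for real models.

```python
import math
from itertools import combinations

# Hypothetical stand-in for an AQ-10 classifier: 10 binary screening
# answers, fixed illustrative weights, and a sigmoid output probability.
WEIGHTS = [1.2, 0.8, 0.5, 1.0, 0.3, 0.9, 0.4, 1.1, 0.6, 0.7]
BIAS = -4.0

def model(x):
    z = BIAS + sum(w * xi for w, xi in zip(WEIGHTS, x))
    return 1.0 / (1.0 + math.exp(-z))

def shapley_values(x, baseline):
    """Exact Shapley values by enumerating every feature coalition.

    Features outside a coalition are fixed to the baseline value
    (the single-background convention also used by KernelSHAP)."""
    n = len(x)
    phi = [0.0] * n
    fact = math.factorial
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            # Shapley kernel weight for coalitions of size k.
            weight = fact(k) * fact(n - k - 1) / fact(n)
            for S in combinations(others, k):
                with_i = list(baseline)
                for j in S:
                    with_i[j] = x[j]
                without_i = list(with_i)
                with_i[i] = x[i]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

x = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]   # one respondent's (hypothetical) answers
baseline = [0] * 10                   # all-"no" background respondent
phi = shapley_values(x, baseline)

# Efficiency property: the attributions sum to f(x) - f(baseline).
print(sum(phi), model(x) - model(baseline))
```

The printed pair demonstrates SHAP's efficiency property: the per-answer contributions add up exactly to the gap between this respondent's predicted probability and the baseline prediction, which is what makes the attributions directly comparable across instances.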