The ‘AnnualPrecip’ values range from -2.388 to 5.115, and the value at the studied position is approximately 0.82. According to the PDP plot in Figure 6, the mean response at this point is below -0.25, which agrees with the SHAP results, where f(x) equals 0. This further explains why the model predicted a false 0: both values, 0.4 and -0.25, in the PDP plots (Figure 5 and Figure 6) are far from 1.
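The PDP reading above can be sketched as follows. This is a minimal illustration only: the synthetic dataset and the MLP configuration are assumptions, not the study's data, with feature 0 standing in for ‘AnnualPrecip’. The partial dependence at a point such as 0.82 is the mean model output when the feature is fixed at that value across all instances.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in for the occurrence dataset: feature 0 plays the
# role of 'AnnualPrecip' (standardised, hence the negative values).
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

def partial_dependence_1d(model, X, feature, grid):
    """Partial dependence: mean predicted probability of class 1 when
    `feature` is fixed at each grid value while all other features keep
    their observed values."""
    pdp = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v          # fix the feature of interest
        pdp.append(model.predict_proba(X_mod)[:, 1].mean())
    return np.array(pdp)

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 50)
pdp = partial_dependence_1d(clf, X, feature=0, grid=grid)

# The mean response at a specific point (e.g. 0.82 in the text) is then
# read off the curve at the nearest grid value.
value_at_082 = pdp[np.argmin(np.abs(grid - 0.82))]
```

A misclassification diagnosis like the one above amounts to checking whether this curve stays far from the positive-class region around the instance's feature value.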
6 THREATS TO VALIDITY AND CONCLUSION
This work includes limitations that should be taken into account when evaluating its findings. During the data retrieval phase, only one occurrence dataset was used; incorporating additional data types alongside the tabular dataset may yield better results.
Optimizing the built MLP classifier and building additional black-box models, as well as comparing SHAP with other global and local interpretability techniques, would likely provide better explanations of the misclassified instances.
To conclude, several MLP models were used to study the distribution of Loxodonta africana, and the top-performing model was used to predict the species' occurrence and absence values. Based on the SHAP results, ‘AnnualPrecip’ contributed significantly to the proposed model's output, which is consistent with the fact that the studied species lives in the African savanna, known for its tropical wet and dry climate, where rain falls in a single season and the rest of the year is dry.
SHAP enabled an in-depth analysis of the models and guided the selection of appropriate features, making it a suitable explanation technique for biodiversity experts to consider when making critical decisions.
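The kind of feature ranking SHAP supports can be sketched with a Monte-Carlo approximation of Shapley values, the quantity SHAP estimates. The dataset and model below are illustrative assumptions, not the paper's pipeline; features are ranked by the mean absolute contribution to a single prediction.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Illustrative data and model; sizes are assumptions, not the study's.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)

def shapley_estimate(model, X_bg, x, n_samples=200, seed=0):
    """Monte-Carlo Shapley values for one instance x: for random feature
    orderings, record how much revealing each feature shifts the
    predicted probability, starting from a random background row."""
    rng = np.random.default_rng(seed)
    n_features = x.shape[0]
    phi = np.zeros(n_features)
    for _ in range(n_samples):
        order = rng.permutation(n_features)
        z = X_bg[rng.integers(len(X_bg))].copy()   # random background row
        prev = model.predict_proba(z[None, :])[0, 1]
        for j in order:
            z[j] = x[j]                            # reveal feature j
            cur = model.predict_proba(z[None, :])[0, 1]
            phi[j] += cur - prev
            prev = cur
    return phi / n_samples

phi = shapley_estimate(clf, X, X[0])
ranking = np.argsort(-np.abs(phi))  # features ordered by |contribution|
```

By construction, the contributions of one instance sum (approximately) to the difference between its prediction and the background expectation, which is the additivity property that makes SHAP values comparable across features.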
Future work will attempt to include more black-box models and compare their performance, as well as their interpretability, with the obtained results using different techniques such as SHAP's summary plot, FI, and LIME.
REFERENCES
Abdollahi, A. and Pradhan, B. (2021). Urban vegetation mapping from aerial imagery using explainable AI (XAI). Sensors.
AI, I. (2020). What is interpretability? https://www.interpretable.ai/interpretability/what/.
American Museum of Natural History. Biodiversity informatics. https://www.amnh.org/research/center-for-biodiversity-conservation/capacity-development/biodiversity-informatics.
Barbet-Massin, M., Jiguet, F., Albert, C., and Thuiller, W. (2012). Selecting pseudo-absences for species distribution models: How, where and how many? Methods in Ecology and Evolution.
Daley, B. and Kent, R. (2005). P120 Environmental Science and Management. London.
Brownlee, J. (2017). Gentle introduction to the Adam optimization algorithm for deep learning. https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/.
Brownlee, J. (2020). Hyperparameter optimization with random search and grid search. https://machinelearningmastery.com/hyperparameter-optimization-with-random-search-and-grid-search/.
Cheng, B. and Titterington, D. M. (1994). Neural Networks: A Review from a Statistical Perspective. Statistical Science, 9(1).
Doran, D., Schulz, S., and Besold, T. (2017). What does explainable AI really mean? A new conceptualization of perspectives.
Tuia, D., Kellenberger, B., Beery, S., Costelloe, B. R., Zuffi, S., Risse, B., Mathis, A., Mathis, M. W., van Langevelde, F., Burghardt, T., Kays, R., Klinck, H., Wikelski, M., van Horn, G., Couzin, I. D., Crofoot, M. C., Stewart, C. V., and Berger-Wolf, T. (2022). Perspectives in machine learning for wildlife conservation. Nature Communications.
EcoCommons (2022). Data for species distribution models. https://support.ecocommons.org.au/support/solutions/articles/6000255996-data-for-species-distribution-models.
Fuchs, M. (2021). NN - Multi-layer perceptron classifier (MLPClassifier). https://michael-fuchs-python.netlify.app/2021/02/03/nn-multi-layer-perceptron-classifier-mlpclassifier/.
GBIF (2021). Loxodonta africana (Blumenbach, 1797). https://www.gbif.org/fr/species/2435350.
Gopinath, D. and Kurokawa, D. (2021). The Shapley value for ML models. https://towardsdatascience.com/the-shapley-value-for-ml-models-f1100bff78d1.
Gurney, K. (1997). An Introduction to Neural Networks.
Hakkoum, H., Idri, A., and Abnane, I. (2021). Assessing and comparing interpretability techniques for artificial neural networks breast cancer classification. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization.
Lippman, D. Borda count. https://courses.lumenlearning.com/mathforliberalartscorequisite/chapter/borda-count/.
Liu, M., Han, Z., Chen, Y., Liu, Z., and Han, Y. (2021). Tree species classification of LiDAR data based on 3D deep learning. Measurement, 177:109301.
Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems.
Molnar, C. (2019). Interpretable Machine Learning. https://christophm.github.io/interpretable-ml-book/.
Murphy, A. (2019). Batch size (machine learning). https://radiopaedia.org/articles/batch-size-machine-learning.