
compared to DiPACE. This suggests that DiPACE+ prioritizes CF diversity and feasibility over prediction certainty, which may be desirable depending on application needs.
Overall, DiPACE+ achieves the most balanced performance across the metrics, illustrating its ability to generate CFs that are diverse, realistic, and feasible while maintaining reasonable confidence in the outcome. These results highlight DiPACE+ as a robust solution for CF generation in real-world contexts where multiple qualities, including plausibility and diversity, are essential for actionable insights.
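As a concrete illustration of this multi-metric evaluation, the Python sketch below scores a CF set with simple versions of the four qualities. The definitions used here (mean pairwise distance for diversity, L1 distance for proximity, changed-feature count for sparsity, and nearest-training-neighbor distance as a plausibility proxy) are common choices in the CFX literature and are assumptions for illustration, not the paper's exact formulas.

```python
import numpy as np

def evaluate_cf_set(cfs, x, X_train, tol=1e-6):
    """Score a set of counterfactuals (rows of `cfs`) for a query point `x`.
    Illustrative metric definitions, not necessarily the paper's own."""
    k = len(cfs)
    # Diversity: mean pairwise L2 distance between distinct CFs (higher = more diverse).
    pair_dists = [np.linalg.norm(cfs[i] - cfs[j])
                  for i in range(k) for j in range(i + 1, k)]
    diversity = float(np.mean(pair_dists)) if pair_dists else 0.0
    # Proximity: mean L1 distance from each CF back to the query point.
    proximity = float(np.abs(cfs - x).sum(axis=1).mean())
    # Sparsity: mean number of features changed per CF.
    sparsity = float((np.abs(cfs - x) > tol).sum(axis=1).mean())
    # Plausibility proxy: mean distance to the nearest training instance
    # (smaller values suggest CFs that lie closer to the data manifold).
    nn_dists = [np.linalg.norm(X_train - cf, axis=1).min() for cf in cfs]
    plausibility = float(np.mean(nn_dists))
    return {"diversity": diversity, "proximity": proximity,
            "sparsity": sparsity, "plausibility": plausibility}
```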
4 CONCLUSION
This study introduces DiPACE and DiPACE+, novel algorithms for generating counterfactual explanations that jointly optimize diversity, plausibility, proximity, and sparsity, advancing the field of counterfactual explanation (CFX). By integrating these characteristics into the loss function and using an optimization strategy that combines gradient descent with perturbations, our approach escapes local optima and produces CF sets that are both realistic and actionable. Experimental results on heart disease and credit approval datasets demonstrate that DiPACE+ consistently outperforms existing CFX algorithms in generating diverse and plausible CFs, excelling in particular in scenarios with complex interactions among features. The practical applications of DiPACE+ extend to fields where actionable and realistic CFs are essential, such as healthcare, finance, and user-facing AI systems. For stakeholders such as data scientists and machine learning engineers, DiPACE+ provides deeper insight into model behavior and potential biases, enhancing transparency and interpretability in critical decision-making applications.
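To make the optimization strategy concrete, the PyTorch sketch below shows one way the combined loss and perturbation step could fit together. The loss weights, the placeholder loss terms (including the smooth sparsity proxy), and the stall-detection heuristic are illustrative assumptions, not the published DiPACE+ implementation; `model` is assumed to be a differentiable classifier returning class-1 probabilities.

```python
import torch

def combined_loss(cfs, x, model, target, lambdas):
    """Illustrative combined objective over validity, proximity,
    sparsity, and diversity; placeholder terms, not the paper's."""
    # Proximity: mean L1 distance of each CF to the query point x.
    proximity = torch.norm(cfs - x, p=1, dim=1).mean()
    # Sparsity proxy: smooth approximation of the changed-feature count.
    sparsity = torch.tanh(10.0 * torch.abs(cfs - x)).sum(dim=1).mean()
    # Diversity: mean pairwise distance between CFs (maximized, so negated below).
    k = cfs.shape[0]
    diversity = torch.cdist(cfs, cfs, p=2).sum() / (k * (k - 1) + 1e-8)
    # Validity: push the model's prediction toward the desired class.
    validity = torch.nn.functional.binary_cross_entropy(
        model(cfs).squeeze(-1), target)
    return (validity
            + lambdas["prox"] * proximity
            + lambdas["sparse"] * sparsity
            - lambdas["div"] * diversity)

def generate_cfs(x, model, k=4, steps=500, lr=0.05,
                 perturb_every=100, perturb_scale=0.1):
    """Gradient descent on the combined loss, with periodic random
    perturbations to help escape local optima (a sketch of the
    strategy described above, not the published algorithm)."""
    d = x.shape[-1]
    cfs = (x.repeat(k, 1) + 0.01 * torch.randn(k, d)).requires_grad_(True)
    target = torch.ones(k)  # assumes the desired outcome is class 1
    opt = torch.optim.Adam([cfs], lr=lr)
    lambdas = {"prox": 0.5, "sparse": 0.2, "div": 1.0}  # illustrative weights
    best_loss = float("inf")
    for step in range(steps):
        opt.zero_grad()
        loss = combined_loss(cfs, x, model, target, lambdas)
        loss.backward()
        opt.step()
        if loss.item() < best_loss - 1e-4:
            best_loss = loss.item()
        elif step > 0 and step % perturb_every == 0:
            # No recent improvement: kick the candidates with random noise.
            with torch.no_grad():
                cfs += perturb_scale * torch.randn_like(cfs)
    return cfs.detach()
```

The perturbation step trades additional iterations for a chance to leave a local optimum, which is the convergence-time trade-off noted in the future-work discussion below.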
Future work should aim to improve the convergence efficiency of the optimization strategy. While perturbations are effective for escaping local optima, they can increase convergence time; thus, exploring adaptive or hybrid optimization approaches may yield faster results. Additionally, extending DiPACE+ to handle more complex data types, such as time series and high-dimensional image data, would broaden its applicability. Future research could also focus on developing evaluation metrics that more precisely capture the trade-offs among diversity, plausibility, proximity, and sparsity, as well as assessing DiPACE+’s impact on user trust and understanding in interactive settings.