allowed a more standardized and transparent implementation of ethical conflict resolution practices. This demonstrated the potential of FASTER-AI to institutionalize ethical responsibility. Finally, the decrease in public grievances signals that governance mechanisms of this kind can be crucial to building confidence in AI-supported rule deliberation.
4 CONCLUSIONS AND FUTURE WORK
This paper proposes the FASTER-AI framework to enhance AI trustworthiness in WIS along the dimensions of fairness, transparency, security, robustness, and ethical accountability. Initial case studies conducted across various sectors have demonstrated that adopting FASTER-AI enhances the reliability of AI by providing higher fairness, explainability, security, and ethical trust. However, these initial findings rely on small samples, and further research is needed to validate the adaptation of FASTER-AI to larger and more complex settings.
As AI evolves, FASTER-AI must evolve with it to address emerging challenges. Above all, effective collaboration between academic institutions, industry players, and regulators will be instrumental in establishing common metrics and standards for its evaluation. The long-term implications of deploying FASTER-AI for organizational change, user trust, and regulatory compliance are therefore an important area of future research, possibly through longitudinal studies.
In conclusion, FASTER-AI contributes to the debate on trustworthy AI by delivering a practical framework for WIS, thereby laying the foundation for creating and maintaining trust in AI systems; such trust will be increasingly necessary for the efficacy and societal acceptance of AI technologies.