
Figure 2: Visualization of the state-labeled graph. Notation: mf and tr denote the actions move forward and turn right, respectively.
ACKNOWLEDGEMENTS
This work is supported by NASA grant 80NSSC23M0166 and is part of the NASA EPSCoR Rapid Research Response 2023 grant.