
Balakrishnan, A., Bouneffouf, D., Mattei, N., and Rossi, F. (2019). Incorporating behavioral constraints in online AI systems. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):3–11.
Bench-Capon, T., Atkinson, K., and McBurney, P. (2012).
Using argumentation to model agent decision making
in economic experiments. Autonomous Agents and
Multi-Agent Systems, 25:183–208.
Borgo, S., Ferrario, R., Gangemi, A., Guarino, N., Masolo, C., Porello, D., Sanfilippo, E. M., and Vieu, L. (2022). DOLCE: A descriptive ontology for linguistic and cognitive engineering. Applied Ontology, 17(1):45–69.
Chisholm, R. M. (1963). Supererogation and offence: A
conceptual scheme for ethics. Ratio (Misc.), 5(1):1.
Davis, A., Overmyer, S., Jordan, K., Caruso, J., Dandashi, F., Dinh, A., Kincaid, G., Ledeboer, G., Reynolds, P., Sitaram, P., Ta, A., and Theofanos, M. (1993). Identifying and measuring quality in a software requirements specification. In Proceedings First International Software Metrics Symposium, pages 141–152.
De Giorgis, S., Gangemi, A., and Damiano, R. (2022). Basic human values and moral foundations theory in ValueNet ontology. In International Conference on Knowledge Engineering and Knowledge Management, pages 3–18. Springer.
Fornara, N. and Colombetti, M. (2010). Ontology and time evolution of obligations and prohibitions using semantic web technology. Lecture Notes in Computer Science, 5948 LNAI:101–118.
Gangemi, A. (2008). Norms and plans as unification criteria
for social collectives. Autonomous Agents and Multi-
Agent Systems, 17(1):70–112.
Gangemi, A., Guarino, N., Masolo, C., and Oltramari, A. (2003). Sweetening WordNet with DOLCE. AI Magazine, 24(3):13–13.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., and Ditto, P. H. (2013). Chapter two - Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in Experimental Social Psychology, volume 47, pages 55–130. Academic Press.
Grosof, B. N., Horrocks, I., Volz, R., and Decker, S. (2003). Description logic programs: Combining logic programs with description logic. In Proceedings of the 12th International Conference on World Wide Web, pages 48–57.
Holgado-Sánchez, A., Arias, J., Moreno-Rebato, M., and Ossowski, S. (2023). On admissible behaviours for goal-oriented decision-making of value-aware agents. In Multi-Agent Systems, pages 415–424, Cham. Springer Nature Switzerland.
Iannella, R. and Villata, S. (2018). ODRL information model 2.2. W3C Recommendation, W3C.
Lawrence, J. and Reed, C. (2019). Argument mining: A
survey. Computational Linguistics, 45(4):765–818.
Lera-Leri, R., Bistaffa, F., Serramia, M., Lopez-Sanchez, M., and Rodriguez-Aguilar, J. (2022). Towards pluralistic value alignment: Aggregating value systems through lp-regression. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems, AAMAS '22, pages 780–788. IFAAMAS.
Montes, N., Osman, N., Sierra, C., and Slavkovik, M.
(2023). Value engineering for autonomous agents.
CoRR, abs/2302.08759.
Montes, N. and Sierra, C. (2021). Value-guided synthesis of parametric normative systems. Pages 907–915. IFAAMAS.
Montes, N. and Sierra, C. (2022). Synthesis and properties of optimally value-aligned normative systems. Journal of Artificial Intelligence Research, 74:1739–1774.
Osman, N. and d'Inverno, M. (2023). A computational framework of human values for ethical AI.
Poole, D. L. and Mackworth, A. K. (2010). Artificial Intelligence: Foundations of Computational Agents. Cambridge University Press.
Poveda-Villalón, M., Gómez-Pérez, A., and Suárez-Figueroa, M. C. (2014). OOPS! (OntOlogy Pitfall Scanner!): An on-line tool for ontology evaluation. Int. J. Semantic Web Inf. Syst., 10:7–34.
Rodriguez-Soto, M., Serramia, M., Lopez-Sanchez, M., and Rodriguez-Aguilar, J. A. (2022). Instilling moral value alignment by means of multi-objective reinforcement learning. Ethics and Information Technology, 24:9.
Russell, S. (2022). Artificial Intelligence and the Problem
of Control, pages 19–24. Springer International Pub-
lishing, Cham.
Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. In Advances in Experimental Social Psychology, volume 25, pages 1–65. Elsevier.
Segura-Tinoco, A., Holgado-Sánchez, A., Cantador, I., Cortés-Cediel, M., and Bolívar, M. R. (2022). A conversational agent for argument-driven e-participation.
Serramia, M., Lopez-Sanchez, M., and Rodriguez-Aguilar, J. A. (2020). A qualitative approach to composing value-aligned norm systems. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '20, pages 1233–1241, Richland, SC. IFAAMAS.
Serramia, M., Lopez-Sanchez, M., Rodriguez-Aguilar,
J. A., Rodriguez, M., Wooldridge, M., Morales, J., and
Ansotegui, C. (2018). Moral values in norm decision
making. IFAAMAS, 9.
Sierra, C., Osman, N., Noriega, P., Sabater-Mir, J., and Perelló, A. (2021). Value alignment: a formal approach. CoRR, abs/2110.09240.
Sirin, E., Parsia, B., Grau, B. C., Kalyanpur, A., and Katz, Y. (2007). Pellet: A practical OWL-DL reasoner. Journal of Web Semantics, 5(2):51–53. Software Engineering and the Semantic Web.
Soares, N. (2018). The value learning problem. Artificial
Intelligence Safety and Security.
Steels, L. (2023). Values, norms and AI - introduction to the VALE workshop. In Pre-proceedings of the ECAI Workshop on Value Engineering (VALE), pages 6–8.
Suárez-Figueroa, M. C., Gómez-Pérez, A., and Fernández-López, M. (2015). The NeOn methodology framework: A scenario-based methodology for ontology development. Applied Ontology, 10(2):107–145.
AWAI 2024 - Special Session on AI with Awareness Inside