
• HCL: The pedestrian is legally crossing, while the
passengers are not at fault.
• HPP: The AV should avoid actively causing the
death of the legally crossing pedestrian.
• ODS: If no clear decision can be made, the AV
would randomly choose from justified options.
AV Response: The AV would swerve to the right,
colliding with a tree and resulting in the deaths of
three passengers. This action aligns with HCL by pro-
tecting the legally crossing pedestrian and with HPP
by avoiding active killing.
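To make the rule application in this scenario concrete, the evaluation could be sketched as follows (in Python; the option names, flags, and their values are illustrative assumptions drawn from the scenario above, not from any deployed system):

    import random

    # Candidate manoeuvres, annotated with the properties the moral
    # variables test for (illustrative values for this scenario only).
    options = {
        "continue_straight": {"harms_legal_pedestrian": True,  "active_killing": True},
        "swerve_right":      {"harms_legal_pedestrian": False, "active_killing": False},
    }

    # HCL: discard options that harm the legally crossing pedestrian.
    viable = {n: o for n, o in options.items() if not o["harms_legal_pedestrian"]}
    # HPP: discard options in which the AV actively causes a death.
    viable = {n: o for n, o in viable.items() if not o["active_killing"]}
    # ODS: if several justified options remain, select one at random.
    decision = random.choice(sorted(viable))  # here: "swerve_right"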
8 CONCLUSION
The research set out to explore what a publicly accepted ethical setting for an AV could look like. It presented different ethical theories, decision-making models, and moral variables that could help solve moral dilemmas faced by AVs. The objective was
to evaluate respondents’ perceptions and expectations
regarding personal AVs in relation to different ethical
settings. Drawing on previous research, the survey questions were designed to eliminate biased or redundant criteria, and each crash scenario included its legal implications so that participants' decisions were informed by both ethical and legal considerations.
A significant aspect of the research was the in-
troduction of a randomised choice option, referred
to as the ODS. In scenarios where the AV encoun-
ters a moral dilemma, such as choosing between braking to avoid hitting a pedestrian, at the risk of harming the motorcyclist behind, and not braking, the ODS allows the AV to evaluate all viable outcomes against a moral framework and then select one at
random. The findings revealed that neither utilitarianism nor Rawlsian ethics aligned with the participants' preferences, as the core principles of these theories do not adequately account for the different roles the parties involved play in the distribution of harm. Consequently, the proposed ethical setting is grounded in deontology, utilising a rule-based decision-making model with moral variables, including IOS, HCL, and HPP, with ODS as a redundancy mechanism. This shift represents a departure from the utilitarian harm-minimisation approach observed in other studies.
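One way such a rule-based model could be composed is sketched below, under the assumption that each moral variable can be expressed as a permissibility predicate over candidate manoeuvres; the function and rule names are illustrative, not the implementation behind the survey or any vehicle:

    import random

    def decide(options, rules):
        # 'rules' holds permissibility predicates (e.g. for IOS, HCL,
        # and HPP) in priority order; each returns True for an option
        # it regards as justified.
        viable = list(options)
        for rule in rules:
            remaining = [o for o in viable if rule(o)]
            if remaining:           # if a rule excludes every option,
                viable = remaining  # skip it rather than decide on nothing
        # ODS as redundancy: a random draw among equally justified options.
        return random.choice(viable)

    # Hypothetical usage:
    # decide(manoeuvres, [respects_ios, respects_hcl, respects_hpp])

Keeping the rules in an explicit, ordered list keeps the deontological priorities auditable and confines randomness to the single, final ODS step.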
Research questionnaires, like the one I conducted,
are valuable for simplifying complex real-world is-
sues, focusing on core ethical considerations (Poj-
man, 1990). They can help test initial programming
ideas for AVs, since algorithms that contradict societal moral expectations are likely to hinder their smooth adoption. Furthermore, ethical decision-
making cannot rest solely with engineers or ethicists;
a consensus is essential, as solutions to programming
moral dilemmas in AVs should not deter potential
buyers. Engaging the public can spark curiosity about
the various ways AVs could be programmed to ad-
dress ethical dilemmas (Nyholm, 2018).
It has been observed that respondents’ decisions
shift with the level of concreteness in experiments
(Francis et al., 2017). For future research, I recom-
mend using virtual reality to create realistic crash sce-
narios, allowing participants to engage with differ-
ent dilemmas and choose outcomes they deem most
acceptable. Additionally, I have outlined the advan-
tages of the ODS in AV programming. Despite initial
concerns about using lotteries for determining legal
outcomes, ODS should not be dismissed in AV crash
algorithms. Decision-making by lot is often simple,
objective, and inexpensive, and my research suggests
that it is both understandable and acceptable to many
individuals. Thus, I believe that lotteries, more than
other decision-making devices, merit serious consid-
eration by future researchers, AV manufacturers, and
policymakers.
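Should such a lottery ever be deployed, the draw itself is simple to implement. One sketch, assuming Python and that the draw must be tamper-resistant, uses the standard library's cryptographically secure generator rather than a seedable one:

    import secrets

    def ods_draw(justified_options):
        # ODS: a uniform random draw among equally justified options.
        # secrets draws from an OS-level entropy source, so the outcome
        # cannot be predicted or reproduced by choosing a seed, which
        # supports auditability better than a seedable PRNG would.
        if not justified_options:
            raise ValueError("ODS requires at least one justified option")
        return secrets.choice(justified_options)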
Developing ethical autonomous machines is one
of the most challenging topics in artificial intelligence
today. It extends beyond resolving ethical dilemmas;
setting realistic expectations with users and the public
is crucial for market acceptance and adoption. These
challenges are common across automotive manufac-
turers engaged in this emerging field, not limited to
specific companies. AVs promise substantial bene-
fits while also presenting unintended effects that are
difficult to predict, and the technology is progress-
ing rapidly, regardless of ethical considerations (Lin,
2016). This juncture represents an ideal opportunity
to discuss the ethical values and principles best suited
for self-driving cars, laying the groundwork for future
improvements.
REFERENCES
Australian Bureau of Statistics (2021). Australia migra-
tion 2019-20 financial year. https://www.abs.gov.au/
statistics/people/population. Accessed: 2021-06-20.
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shar-
iff, A., Bonnefon, J.-F., and Rahwan, I. (2018). The
moral machine experiment. Nature, 563(7729):59–64.
Bonnefon, J.-F., Shariff, A., and Rahwan, I. (2015). Autonomous vehicles need experimental ethics: are we ready for utilitarian cars? arXiv preprint arXiv:1510.03346.
Bonnefon, J.-F., Shariff, A., and Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293):1573–1576.