principles and to work collaboratively to identify
constraints, or limits, within which they consider
these trade-offs to be ethically acceptable. We
hypothesise that the benefits of this structured process
are two-fold: firstly, that participation will improve
public understanding of, and willingness to engage
with, the ethical complexities of AS; and secondly, that
AS designers will gain insight into design choices that
could render the autonomous system more acceptable
to the public.
We also present an interactive tool (Figure 1)
which we have created to provide a visual
representation of the outcomes of the EETAS
process. This tool serves as a record of the public
discussion, which can be retained by end-users or
stakeholder organisations and used to illuminate
diverse public perspectives on AS ethics. In addition,
the tool can be used later in the lifecycle to
communicate the autonomous system’s ethical
prioritisations and to increase end-users’
understanding of it.
In Section 2 we present a discussion of existing
literature which considers questions of ethical
prioritisation in autonomous systems. Section 3
contains our description of the EETAS process, while
Section 4 describes an initial pilot study
workshop which demonstrated an indicative trend
between participation in EETAS and enhanced public
understanding of AS. Section 5
identifies our conclusions and some steps for further
work.
2 BACKGROUND
The concept of trade-offs, or risk balancing, between
two desirable properties is well-established as a
research area. Expected utility theory (von Neumann,
1947) describes how an individual’s general attitude
to risk and benefits can change their willingness to
accept particular specified risks. Prospect theory
(Kahneman & Tversky, 1979) allows a more complex
framing of risk perception and risk appetite. The
trolley problem (Foot, 1967) is perhaps the seminal
example of risk balancing, and has informed much of
the public discourse around autonomous vehicle
behaviour.
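To make the contrast concrete, the two framings can be sketched roughly as follows; the notation here is ours and is offered only as an illustration, not as a reproduction of the cited works’ own presentation:

EU(A) = \sum_i p_i \, u(x_i)   % expected utility: a prospect A with outcomes x_i occurring with probabilities p_i, valued through a utility function u
V(A) = \sum_i w(p_i) \, v(x_i)   % prospect theory: outcomes valued as gains or losses relative to a reference point via a value function v, with probabilities transformed by a weighting function w

Informally, prospect theory replaces objective probabilities with subjective decision weights and evaluates outcomes relative to a reference point, which is what supports the richer account of risk perception and risk appetite noted above.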
Beyond this, risk balancing as a concept is well-
explored in autonomous system development. (IET,
2019) describes trade-offs between safety and
security of cyber-physical systems, while
(Akinsanmi, 2021) considers the balancing of public
health, privacy and digital security. Within specific
autonomous domains the concept of prioritising
certain safety or ethical properties has also been
discussed: (Thornton, 2018) describes the tension
between the desire for personal autonomy on the part
of an autonomous vehicle user and the more general
desire for fairness and public safety, while (Lin, 2015)
considers how specific actions on the part of an
autonomous vehicle – e.g. driving closer to another
car in order to give more room to a pedestrian –
transfer risk from one segment of the population
(pedestrians) to another (other drivers). In the field of
healthcare, ethical trade-offs between privacy and
well-being are also common (Lee, 2020), (Martinez-
Martin, 2020).
Other existing work focuses specifically on trade-
offs which affect the design process. (Dobrica, 2002)
presents a comprehensive survey of trade-offs in
complex systems design, while (Goodrich, 2000)
discusses these trade-offs within an autonomous
context, specifically that of collision avoidance
systems. Similarly, (Bate, 2008) considers trade-offs
more generally within safety-critical systems, while
(Menon, 2019) proposes a methodology for
developers of autonomous vehicles to justify and
communicate the ways in which their system design
has been informed by ethical trade-offs.
The benefit of using a tangible element such as the
interactive tool in Figure 1 to test and visualise trade-
offs in real-time is supported by a large body of
literature, including (Schrier, 2019), (Rossi, 2019),
(Larson, 2020). More generally, games have been
shown to be a successful vehicle for engagement with
ethics principles, especially in industry testing.
Examples include Judgment Call (Ballard, 2019), a
game developed to help AI developers identify
ethical questions using design fiction, as well as
MiniCode, a design fiction toolkit developed for near-
future technology designers and developers (Malizia,
2022).
Much of the existing work around autonomous
systems and AI focuses on developers, and is intended
either to provide insight into how a system can be
designed or to serve as guidance on making
ethically justifiable decisions. However, there is
comparatively little work which provides
stakeholders and end-users with an opportunity to
express their concerns around AI ethics, or to inform
the design of a proposed system by providing input
into the perceived acceptability of ethical trade-offs.
The process we describe here addresses this gap.