less, we can still ask whether restricting a machine’s
ability to hide information would produce positive or
negative outcomes.
Example. It is commonly believed that Winston
Churchill was aware the town of Coventry was go-
ing to be bombed before it happened; he chose not to
alert the town, because doing so would make it clear
he was able to decode enemy transmissions. The sug-
gestion is that he increased the chance of victory and
reduced the total number of deaths in the war
by allowing this isolated attack to occur. Note that
this story may not be true, but that is beside the point.
For the moment, assume that the decision attributed
to Churchill was the correct decision from a utilitarian perspective.
Now we modify the scenario slightly, and we as-
sume that Churchill has a smartphone with an intelligent
assistant. The assistant knows everything
about the war, and it also knows about Churchill’s per-
sonal affairs. In particular, the assistant knows that
Churchill’s mother is currently visiting Coventry. If
Churchill finds out that his mother is in Coventry, it
may cause him to make the “incorrect” decision based
on emotion. The assistant therefore decides to hide
this information, which seems to be ethically correct
from a utilitarian perspective.
The preceding example appears to give a scenario
where an intelligent agent would be acting ethically
by hiding information. This is true if we consider
passive information hiding (not volunteering the in-
formation), but it is also true if we consider active in-
formation hiding (if Churchill asks about his mother’s
schedule). One could argue that it would be unethical,
from a utilitarian perspective, to enforce a rule that
requires the assistant to share all information. How-
ever, this situation is not useful for our purposes, as it does not matter
that the assistant is not human. The ethical issues are
the same when we replace the intelligent agent with a
human. We want to focus on cases where the fact that
an agent is computational is important.
2.4 Interchangeable Parts
We define an information-sharing scenario (ISS) to
be a situation in which two agents are communicating
in a way that causes the amount of information held
by each to change. We have just claimed that there
exist information-sharing scenarios where one agent
can improve overall utility by choosing not to divulge
some piece of information to the other. Consider an
ISS where one agent (the hider) is ethically justified in
hiding information from the other agent (the seeker).
We call such a scenario a hiding-justified information
sharing scenario (HJISS). Note that each role in such
a situation can be filled by a human or by an intel-
ligent computing agent. Now consider the class of
HJISSs in which the hider is a human. We say that
such a scenario is human replaceable if we can re-
place the human with an intelligent computing agent
without changing the utilitarian outcomes at all. The
question, therefore, is the following: Does there exist
a human-hider HJISS that is not human replaceable?
In other words, can we imagine a scenario in which
a human would be justified in hiding information, but
an intelligent computing agent would not?
Example. Consider the Churchill example again.
Suppose that Churchill has a human assistant, and that
the assistant informs him that his mother is in Coven-
try. Suppose further that Churchill then prevents the
attack, and goes on to lose the war. One could argue
that the assistant made an ethically poor decision by
sharing the information from a utilitarian perspective.
Years go by, and the assistant is hit by a car and dies.
When the autopsy is attempted, it is discovered that
the assistant is actually an android. The question is
this: Does the fact that the assistant is not a human
affect our view of the decision to inform Churchill
about his mother? It seems that the ethical character
of the decision remains the same. Certainly, from a
utilitarian perspective, the revelation that the decision
was influenced by a machine does not change our per-
spective a great deal.
To be clear, we are taking a human-centric view of
utility. So, regardless of the aggregate used to calculate
the overall utility for a decision, we are only con-
sidering the benefits and the harms done to humans.
From this perspective, the situation we are describing
is actually rather easy to analyze. If we have a human-replaceable
HJISS, then we are really comparing two
scenarios in which only a single agent has changed.
The hider went from being a human to being a com-
puting machine, but everyone else stayed the same.
When we look at a human replaceable HJISS, we
can see that the only variation in utility between the human
and machine versions of the problem is related to
the agent that is hiding information. In the human
version, hiding information may have positive or
negative impacts on that individual human; these
impacts may influence the overall utility of a given
choice. Hence, any difference between the correct
ethical decision for the human and for the computing
agent stems from the hider's self-interest. This is not to say,
of course, that a human decision maker is being unethical
when acting selfishly; sometimes this is the right thing to do.