imperative, which he presents in three distinct
principles:
1: The law of universalizability: this principle
states that an action is right if it can be universalized.
For instance, before breaking a promise or telling a
lie, a person has to ask whether the maxim of that
action could be universalized. Kant puts it this way:
“Act as though the maxim of your action were by
your will to become a universal law of nature.”
2: The Principle of Ends states: “So act as to
treat humanity, whether in your own person or in that
of any other, in every case as an end and never as
merely a means.” To put it differently, this law states
that we should treat human beings as ends in
themselves and never use them merely as a means to
fulfill some other end; individuals should respect
each human being and should not use them as a
commodity to accomplish desired results. Kant holds
that since humans have the rational capacity to
reason, we can use that reason to perform right
actions.
3: The Principle of Autonomy: this principle states
that we are free rational beings and, owing to this
pre-given rationality, we are able to determine the
difference between right and wrong actions. To
differentiate between right and wrong actions, we do
not have to depend upon others. For Kant, the onus
lies on humans to discover and identify the
difference between morally right and wrong actions.
We have to use our ability to reason to help us apply
the categorical imperative and make our own
decisions, rather than relying on someone else to tell
us what to do. Kant puts it this way: “So act that
your will can regard itself at the same time as
making universal law through its maxims.”
Now, putting these ethical theories in the context of
artificial intelligence decision-making, the
differences between the theories would produce
differences in the decisions made. For instance, a
consequentialist AI might judge that killing one evil
human being is a just act if it results in saving
thousands of other human beings. A virtue-theory AI,
by contrast, emphasizes the development of an
individual's moral character traits. An AI following
deontological theory, unlike the consequentialist one,
would regard the act of killing as wrong in itself.
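The contrast between the consequentialist and deontological decision procedures described above can be sketched as toy decision rules. This is a hypothetical illustration, not part of the paper: the maxim list and outcome counts are illustrative assumptions only.

```python
# Hypothetical sketch: the two decision procedures discussed in the text,
# reduced to toy rules. The maxim list below is an illustrative assumption.

def consequentialist_permits(lives_saved: int, lives_lost: int) -> bool:
    """A consequentialist AI permits an action whose net outcome is positive."""
    return lives_saved > lives_lost

# Maxims that, on a crude reading of Kant, fail the universalizability test.
FORBIDDEN_MAXIMS = {"kill", "lie", "break promise"}

def deontological_permits(maxim: str) -> bool:
    """A deontological AI permits an action only if its maxim is
    universalizable, approximated here by a fixed list of forbidden maxims."""
    return maxim not in FORBIDDEN_MAXIMS

# The case from the text: killing one person to save thousands.
print(consequentialist_permits(lives_saved=1000, lives_lost=1))  # True
print(deontological_permits("kill"))                             # False
```

Virtue ethics, being concerned with character rather than rules or outcomes, does not reduce to such a check, which is one reason rule-based encodings are usually drawn from the other two theories.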
5 CONCLUSION
With the advancement of technology and the digital
revolution in place, AI is believed to have rational
thinking capacity insofar as it operates on a pre-given
design of logical reasoning, on the basis of which it
justifies and performs actions. If applied to AI,
deontological theory is the most appropriate, because
it orders and categorizes actions on the basis of the
rational dimensions of the other moral agents, and it
uses a universal law to distinguish right from wrong.
For Kant, humans are distinctly superior to other
beings due to their rational capacities, and this status
would also apply to artificial intelligence. In my
opinion, out of the above-mentioned ethical theories,
AI should be programmed with the deontological
one. Since AI has the potential to make incredibly
complex moral decisions, it is important that humans
be able to identify, in a transparent way, the logic
used in a given decision, so as to accurately
determine the morality of the action in question.
REFERENCES
Abelson, Raziel and Kai Nielsen. “Ethics, History of.” In
Encyclopedia of Philosophy, ed. Donald M. Borchert,
394-439, 2006.
Anderson, M. and Anderson, S. Machine ethics: creating
an ethical intelligent agent. AI Magazine 28(4), 2007.
Bentham, J. Introduction to the Principles of Morals and
Legislation. Blackwell's Political Texts. Blackwell,
Oxford, 1789. Introduction by W. Harrison, 1967.
Williams, Bernard. Negative responsibility: and two
examples. In Utilitarianism: For and Against, pages
97-118, 1973.
Allen, Colin, Gary Varner, and Jason Zinser. Prolegomena
to any future artificial moral agent. Journal of
Experimental and Theoretical Artificial Intelligence,
12(3):251-261, 2000.
Driver, J. The history of utilitarianism. In The Stanford
Encyclopedia of Philosophy, E. N. Zalta, ed., summer
2009.
Lafollette, Hugh, ed. The Blackwell Guide to Ethical
Theory. Malden: Blackwell Publishers Inc., 2000.
Moor, J. H. The nature, importance, and difficulty of
machine ethics. IEEE Intelligent Systems 21(4), 18-21,
2006.
Liao, S. Matthew. A short introduction to the ethics of
artificial intelligence. In Ethics of Artificial Intelligence,
ed. S. Matthew Liao, Oxford University Press, 2020.
Proposed Way to Inculcate Morality in Artificial Intelligence