The Ethics of Robotics in Modern Engineering

A 3,000-word essay on ethical robotics, submitted as part of a university assignment on ‘ethics in engineering’. Originally submitted in December 2012.

Ask an engineer to design a high-yield nuclear bomb and he or she may question the ethics of doing so. But ask the same engineer to design a 3D printer, one capable of producing large components with tolerances sufficient for use in machines, and it may be done without any ethical consideration of the effects on the manufacturing and transport industries which the printer circumvents. Ethics is an often underrated, yet
undeniably integral part of engineering. We must consider the ethical
implications of any innovative technology and cast judgment as to whether it is
right to proceed. I use the word ‘right’ here as a synonym for ‘ethical’, a term I take also to encompass (briefly) socioeconomic and political aspects. One must also realise that a fine balance must be struck between choosing the most ethical course of action and not critically hindering technological progress. In some cases both the pros and cons of a project may have huge implications for society; in these
cases it can be very difficult to determine a suitable course of action. A good
example of this is nuclear fission; while the deaths of hundreds of thousands of Japanese civilians caused by the nuclear bombs dropped on Hiroshima and Nagasaki
can hardly be justified, the same technology then gave the world nuclear power,
which is becoming ever more important as fossil fuel reserves slowly deplete.

Just as the word ‘engineering’ covers a vast
range of sciences, fields of study and technologies, so a comprehensive
discussion of ethics in all of engineering cannot be condensed into a single,
short essay. Thus for the purposes of concision I will be concentrating on the
emerging field of robotics. This essay will consider the impact of robots on a
wide range of aspects of modern society, particularly focussing on case studies
documenting their use in industry, medicine and warfare, before considering a
more distant-future perspective and discussing the ethics associated with
artificial intelligence.

The word ‘robot’ was first used in its current form in English in 1920 by the Czech playwright Karel Čapek, at the suggestion of his brother Josef [1]. It is an anglicisation of the Czech word robota, meaning ‘serf labour’. The basic concept of a robot has remained largely
unchanged since the days of Čapek. Put simply, practical robots are designed to
replace humans. Of course this is not a universal truth, as many technologists
and scientists use robots as a means of research into advanced computing or as
a way of better understanding the abstract notions of intelligence and thought.
That said, almost all of the robots in current use have been designed for a specific purpose other than research into robotics itself, with a booming market for home
robots such as the Roomba autonomous vacuum cleaner. Such has been the
exponential growth of the home robot market recently that the global service
robot population has been estimated at more than 18 million as of 2011 [2].

While modern robots have a wide range of
applications, they mostly perform tasks that fall under the three Ds: dull, dirty
or dangerous. Heavy industry, for example, qualifies for all three of these
categories. Robots on an assembly line (dirty) perform simple, repetitive tasks
(dull), often with high-power tools and machinery (dangerous). It is perhaps no
surprise, therefore, to see that (high-volume) modern car factories are almost
entirely automated; human involvement is largely reserved for quality control, where visual inspection is needed and robot costs begin to exceed
human labour costs. Smaller automotive manufacturers, particularly luxury
marques, will still hand-build their cars. Smaller production volumes and rates
mean automation may be unnecessary, though low production rates and labour
costs tend to drive up the prices of these cars.

This ‘revolution’ in manufacturing (if it can be so called) engenders many moral and ethical questions, which can mostly be
condensed into one: is it ethical to
replace human labourers on a production line with robotic workers? One could
justify it from a utilitarian point of view: the costs involved in purchasing
and running a robot are considerably lower than those for equivalent human
labour output. For example, within a year of buying two industrial robotic arms, Blue Chip Manufacturing in Ohio had already recouped its initial purchase cost through overall savings, and within four years it had saved an estimated $120,000 in manufacturing costs [3].
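To make the utilitarian arithmetic concrete, here is a minimal Python sketch of a simple payback-period calculation. The purchase cost and annual savings used in the example are hypothetical placeholders, not figures from the cited article.

# A minimal sketch of the payback arithmetic described above. The numbers
# are hypothetical placeholders, not figures from the cited article [3].

def payback_period_years(purchase_cost: float, annual_savings: float) -> float:
    """Years of operation needed for savings to cover the purchase cost."""
    if annual_savings <= 0:
        raise ValueError("annual savings must be positive")
    return purchase_cost / annual_savings

# Hypothetical example: an arm costing $25,000 that saves $30,000 a year
# pays for itself in under a year, consistent with the pattern described
# above (cost recouped within a year, ~$120,000 saved over four years).
print(payback_period_years(25_000, 30_000))  # ~0.83 years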
mistakes " the possibility of human error becomes zero (assuming no other
significant human involvement). The increased reliability and efficiency can
also result in higher passenger safety and better build quality in road
vehicles. Does this, however, fully justify automation?
While the use of robots will ultimately produce cheaper, safer cars for
millions of customers worldwide, this surely is little consolation to the
workers who are left jobless. A common counter-argument is to say that such
automation frees up the workers to be more productive in other areas where
robots are not yet advanced enough to work. In an ideal world this would be the
case, but in reality not all workers would be sufficiently qualified or
experienced to simply transfer to a different job.

It is important to note that this is simply
another iteration of a centuries-old debate dating back to at least the start
of the Industrial Revolution in the 1750s. ‘Technological unemployment’, as it has come to be known, is the central argument against allowing technology to reduce the number of human labourers needed, on the grounds that it supposedly leads to higher levels of unemployment. Most importantly, while local unemployment may be
affected by such technology, on a macroeconomic scale no effect is observed on
national unemployment rates. This has led to the use of the term Luddite Fallacy to refer to the false ‘theory’
that technological advances are ultimately injurious to the economy [4]. I
conclude this example by saying that in my opinion, the use of industrial
robots is entirely justified from an ethical standpoint, as it ultimately
benefits the economy and society.

We have so far considered the ethical
implications of using robots as human replacements where they follow a set of
pre-programmed commands and no more. But we must also consider situations where
robots could be relied upon for more. Robots today are increasingly being used
in life-or-death situations where their superhuman precision and/or
expendability are desirable; the most obvious examples are surgical procedures and warfare. Looking ahead to the rapidly advancing
field of artificial intelligence, the question arises: might we someday give robots
the responsibilities of making life-or-death decisions? For example, would we
deploy robots in battlefields with the ability to judge whether a person is an
ally, enemy or innocent bystander, and then decide whether or not to shoot
them? Or would we employ a ‘doctor robot’ with the capacity to decide to turn
off the life-support system of a terminally-ill patient? Fundamentally, these
situations question the trust we
place in robots and their ‘intelligence’. In order to best understand the
ethics of empowering robots in this way and giving them authority, it is
perhaps important to first examine the capabilities of those currently used in
medicine and warfare.

In a Boston hospital, a young girl is
undergoing an operation to remove a kidney blockage. The inch-long incision has
been made in her abdominal wall, and the surgeon is removing the blockage with
a scalpel and tweezers, but he is not standing over her. Instead, he is seated a
few metres away looking at a screen. He is controlling a da Vinci Surgical
System, which is transmitting the live feed to the surgeon’s console via a
stereoscopic camera on the end of one of the machine’s arms, while he precisely
controls the surgical instruments connected to the other three arms [5].
Performed by hand, this operation would likely leave the girl with a 15cm scar
on her abdomen and she would have to spend up to five days recovering in
hospital. With the robotic surgeon, she will return home the following day with
a regular plaster covering the closed incision. Robotic systems are slowly
revolutionising surgical procedures in a way matched only by the nineteenth-century introduction of pre-operative instrument sterilisation. Robots like the
da Vinci system are significantly increasing the success rates of complex,
dangerous and often life-threatening procedures, ranging from delicate cardiac
operations to hysterectomies. As these systems are capable of much greater
precision than humans, both recovery times and risks of complications are also
greatly reduced. But these systems are in no way autonomous. They are called
robotic " and in a sense they are, as they at least partially fulfil the ‘human
replacement’ criterion " but they lack the capacity to function without human
input. They are quite literally ‘remote-control surgeons’.

Similarly, none of the estimated 12,000
military ground robots and 48 UCAVs (unmanned combat aerial vehicles) deployed
in the Middle Eastern warzones [6] can be described as fully autonomous. Like
the da Vinci system, they partially replace the human, removing the person from the possibility of direct harm or death in a warzone, but they must
always have a human controller inputting commands. Granted, the degree of
autonomy in the most advanced military unmanned vehicles is, in some cases,
quite high: for example, there exist UAVs which are capable of autonomous
flight to the point where a ‘pilot’ need only input a set of map co-ordinates
for the UAV to fly to and gather surveillance data. The important point to
make, however, is that there is always a human triggerman behind any unmanned
vehicle attack. This applies to ground robots as well as UAVs; the most basic of these simply comprise a camera and a rifle mounted on a track-driven platform, remotely controlled by a soldier. While there are some in development,
there are currently no military robots capable of autonomously making a kill
decision without human consent.

Consider, though, robots in the near future
that might, in theory at least, be capable of making such decisions. A robot
has no emotions. It does not feel empathy, nor does it know compassion. All
such a robot would be capable of is applying built-in algorithms to a matrix of
pixels (an image) to determine what it should shoot at.
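To make concrete what ‘applying built-in algorithms to a matrix of pixels’ can mean at its crudest, here is a minimal Python sketch that thresholds a toy thermal image to flag warm regions; the threshold value and the ‘image’ are invented for illustration. Note that nothing in such a rule encodes intent or allegiance, which is precisely why the harder distinctions discussed below remain out of reach.

# A minimal sketch (not any deployed system) of a pixel-matrix rule:
# thresholding a thermal image to flag 'hot bodies'. The threshold and
# the toy image below are invented for illustration.
import numpy as np

def detect_hot_regions(thermal_image: np.ndarray, threshold_c: float = 30.0) -> np.ndarray:
    """Return a boolean mask of pixels warmer than the threshold (deg C)."""
    return thermal_image > threshold_c

# A toy 4x4 'thermal image': one warm region in a cold background.
frame = np.array([
    [12.0, 12.5, 13.0, 12.0],
    [12.0, 36.5, 37.0, 12.5],
    [12.5, 36.0, 36.5, 12.0],
    [12.0, 12.0, 12.5, 12.0],
])
print(detect_hot_regions(frame).sum(), "hot pixels detected")  # 4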
Based on this concept, could we ever build a sufficiently complex robot to fully replace humans on the battlefield? Major Daniel Davis (US Army) thinks not: ‘…it is unlikely that any
robotic or artificial intelligence could ever replicate the ability of a
trained fighting man in combat… Suggesting that within the next 12-plus years
technology could exist that would permit life-and-death decisions to be made by
algorithms is delusional’ [7]. It is one thing for an artificial intelligence
to be able to distinguish between hot and cold bodies or animals and humans
(which they have been able to do for some time), but quite another to be able to
distinguish between active combatant and civilian non-combatant. The level of
complexity associated with such distinctions leads me to agree with Davis. But
given the obviously inherent unreliability in predicting future technological
advances, we must consider the ethical course of action if such technology were ever created.

In a presentation to the
advanced weapons department of the US Navy, John Canning addresses the issue of
autonomous weapons, and states rather simply: ‘Let men target men. Let machines
target other machines’ [8]. This suggestion is an easy escape from all the
ethical issues of robots killing humans, or more specifically, the wrong
humans.

The case of military robotics is a complicated one ethically, with no single or obvious answer. However, I believe
that, until the next major paradigm shift in robotics or military technology,
Canning’s proposal is probably the best stance to adopt from an ethical point
of view.

Perhaps at the opposite end of the spectrum,
consider now intelligent robots charged with maintaining our health, instead of
killing us. We have already looked briefly at surgical robots, which benefit us
by having superior precision and surgical capability in the hands of a trained
surgeon. But as the cutting edge of artificial intelligence becomes ever
sharper (as it were), are we facing the eventuality of robotic doctors? If
feeling unwell, would we seek medical advice from a robot instead of a human? After
all, when one goes to a doctor and describes whatever the problem may be, what
does the doctor do? He draws upon his knowledge and experience to make an
educated estimate of what the ailment may be based on the symptoms, and
recommends an appropriate course of treatment. This sort of structured,
ordered logic is perfectly programmable into robots (at least in theory, not
factoring in present-day technological limitations). It is possible to download
Gray’s Anatomy and every medically-related peer-reviewed journal paper onto a hard drive; then, when a patient describes his or her symptoms, the robot can search its database of medical knowledge and diagnose the patient with whatever ailment best fits those symptoms.
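As a toy illustration of this kind of symptom matching, and nothing more, the following Python sketch picks the condition whose known symptoms best overlap a patient’s report. The ‘knowledge base’ and the overlap-count scoring rule are invented for illustration; a genuine diagnostic system would be vastly more sophisticated.

# A toy illustration of symptom matching, not a real diagnostic method.
# The conditions and symptom sets below are invented for illustration.
KNOWLEDGE_BASE = {
    "influenza": {"fever", "cough", "fatigue", "aches"},
    "common cold": {"cough", "sneezing", "sore throat"},
    "food poisoning": {"nausea", "vomiting", "fever"},
}

def best_match(reported_symptoms: set) -> str:
    """Return the condition whose symptom set best overlaps the report."""
    return max(
        KNOWLEDGE_BASE,
        key=lambda condition: len(KNOWLEDGE_BASE[condition] & reported_symptoms),
    )

print(best_match({"fever", "cough", "aches"}))  # influenza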
Of course, as with any artificial intelligence, ‘common sense’ would be lacking; a human doctor would (hopefully)
be able to use common sense to determine that a fever-ridden patient is more
likely to have common influenza than Ebola, for example. In the future,
however, it may also be possible to program a robot with some form of learning
capability, in such a way that it could then, through gathered experience,
‘learn’ common sense. I therefore think that a ‘diagnosis robot’ would be almost ethically acceptable, as long as the patient were not compelled or forced in any way to follow the course of treatment it proposed.

‘Care Robots’, on the other hand, are a
different matter. The concept of these robots is to replace human nurses in
hospitals and care homes for ill and/or elderly patients. On the whole, my
opinion on these (and other similar ‘service robots’) is this: from an ethical
standpoint, they are perfectly acceptable. One could apply the same
Technological Unemployment argument as before, but the counter-arguments remain
just as valid. The important distinction arises when these care robots would have life-and-death power over us. Certainly, I do not think it is at all
ethically acceptable for robots to have the ability to make a conscious
decision to ‘pull the plug’ on a terminally ill patient. Such power would be
open to misuse and abuse and would raise ‘what if’ questions postulating the
possibility of a robot pulling the plug accidentally or even (a sinister
thought) deliberately. However, with the increasing fame of the Dignitas
euthanasia clinic in Switzerland, another question arises: what if a human
asked a robot to end his/her life? I will attempt to answer this in the next
section.

Looking ahead into the slightly more distant
future, let us now consider the ethical implications of advanced artificial
intelligence. While it is currently at a very basic level, it is advancing very
quickly. For example, Google recently connected 16,000 processing cores together to create a neural network (an artificial ‘brain’), then let it watch YouTube videos for three days [9]. After this time, the ‘brain’ had learned how to identify a cat, among many other things, all by itself. This is a good example of
‘machine learning’, and it is a very important ethical issue to consider.
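As a loose analogy, vastly simplified relative to Google’s deep network, the following Python sketch shows the core idea of unsupervised learning: an algorithm discovering groups in unlabelled data entirely by itself. The data and the clustering method (k-means, via scikit-learn) are illustrative stand-ins, not the technique Google used.

# A loose analogy to unsupervised learning: grouping unlabelled points
# into clusters without ever being told what the groups are. Google's
# experiment used a vastly larger deep network; this shows only the idea.
import numpy as np
from sklearn.cluster import KMeans

# Unlabelled 2D points drawn around two centres (a toy stand-in for
# unlabelled video frames).
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5.0, 5.0), scale=0.5, size=(50, 2)),
])

# The algorithm discovers the two groups by itself; no labels supplied.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(model.cluster_centers_)  # approximately (0, 0) and (5, 5)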
While Google say their computer brain was many orders of magnitude less complex than a human brain, imagine the day when humans create an artificial intelligence to rival our own. Some estimates place that date around the year 2050 [10]. What
if, say, these robots then begin to question their subservience to humans and
demand to be treated more equally, as beings of equal intelligence? Without
entertaining the notion of a Terminator-esque robotic revolution, would we have
to give robots rights to go with their responsibilities? Arguably the most
famous attempt at devising a set of laws for intelligent robots is the Three Laws of Robotics, as coined by Isaac Asimov.
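As a playful sketch only, the Three Laws can be read as a strict priority ordering, which the following Python fragment encodes. Real ethical judgement obviously cannot be reduced to three booleans; the oversimplification is precisely where Asimov finds his loopholes.

# A playful sketch of the Three Laws read as a strict priority ordering.
# Real ethical reasoning cannot be reduced to this; that is the point.
def action_permitted(harms_human: bool, ordered_by_human: bool,
                     endangers_self: bool) -> bool:
    """Check a candidate action against the Three Laws, in priority order."""
    if harms_human:
        return False              # First Law: a robot may not injure a human
    if ordered_by_human:
        return True               # Second Law: obey orders, the First Law being satisfied
    return not endangers_self     # Third Law: otherwise, protect its own existence

# An ordered action that endangers the robot is permitted: the Second
# Law outranks the Third.
print(action_permitted(harms_human=False, ordered_by_human=True,
                       endangers_self=True))  # True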
But would these laws be sufficient, primarily to protect humans from harm at the hands of robots, but also vice versa? Asimov uses his many short stories to explore loopholes and shortcomings
of these laws, while Prof. Alan Winfield proposes a set of draft ethics for
roboticists [11], including such statements as ‘robots should not be designed
solely or primarily to kill humans’, and, most importantly, ‘Humans, not
robots, are responsible agents’. It is easy to argue, then, that as soon as robots begin to question their own existence, we could simply build less intelligent robots, or
just stop making those robots altogether. This topic in itself opens up another
huge philosophical discussion that begins to stray from the considerations of
ethical engineering. That said, it would not be inconceivable to live in a
world where intelligent robots are granted similar rights and responsibilities
as humans (think of The Bicentennial Man
by Isaac Asimov). After all, as biomedical engineering innovations continue to
replace faulty organic parts with mechanical prosthetics, the defining line between
human and android is becoming blurred.

To conclude, it is clear that the ethics of robotics will remain a hotly debated topic for many years to come. However, I believe
believe that in the majority of cases, replacing humans with robots can be
entirely ethically justified. Such ‘upgrades’, if you will, result in increased
efficiency and reliability while lowering costs and the risk of human injury or
death. Intelligent robots form a separate ethical question, but I hope that as
such technological advances happen, legislation and ethical codes will be
appropriately updated to prevent any kind of robotic revolution. I personally
would quite like to, one day, be able to have an intellectual discussion with a
robot, but perhaps this is more wishful thinking than serious prediction.
References

[1]: Čapek, K., 1933. About the Word Robot. Translated from the Czech by N. Comrada. [online] Available from: [Accessed 25 November 2012]
[2]: World Robot Population, 2000-2011. [graph, online] Available from: [Accessed 26 November 2012]
[3]: Kimes, M., 2008. Need More Workers? Try a Robot. [online] Available from: [Accessed 27 November 2012]
[4]: Toolkit for Thinking: Luddite Fallacy. [online] Available from: [Accessed 27 November 2012]
[5]: Singer, E., 2010. The Slow Rise of the Robot Surgeon. [online] Available from: [Accessed 28 November 2012]
[6]: Gates, R.M., 2011. Remarks by Secretary Gates at the United States Air Force Academy. [online] Available from: [Accessed 28 November 2012]
[7]: Davis, D.L., 2007. Who Decides: Man or Machine? [online] Available from: [Accessed 28 November 2012]
[8]: Canning, J.S., 2006. A Concept of Operations for Armed Autonomous Systems. [online] Available from: [Accessed 28 November 2012]
[9]: Clark, L., 2012. Google Brain Simulator Identifies Cats on YouTube. [online] Available from: [Accessed 28 November 2012]
[10]: Moravec, H., 2008. Rise of the Robots: The Future of Artificial Intelligence. [online] Available from: [Accessed 28 November 2012]
[11]: Winfield, A., 2012. Robotics: A Very Short Introduction. Oxford: Oxford University Press.