Some scientists say artificial intelligence is moving so fast that humans will be overtaken within 30 years. (Illustration by Pierre-Yves Goavec/Getty Images)


AI advances lead to thorny ethical questions about fate of humans

Computers are learning so fast on their own that it's not clear how they will shape the future of the species that created them


“A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Second, a robot “must obey orders given it by human beings except where such orders would conflict with the First Law.” And third, it “must protect its own existence as long as such protection does not conflict with the First or Second Law.”

These are the famous Three Laws of Robotics written by Isaac Asimov in his 1942 short story Runaround, later popularized in his 1950 collection of science fiction, I, Robot. The Russian-born author speculated that humans would one day have to program intelligent machines with these fundamental values in order to protect themselves from their creations. Think of it as a list of moral imperatives for artificial intelligence (AI) not unlike the Ten Commandments.

Science fiction has long portrayed machines capable of thinking and acting for themselves. But the fiction is rapidly becoming a reality. After all, computing power doubles every couple of years, and more powerful technologies loom on the horizon. Scientists have already developed robots that can reason and display emotion. Although initially designed to support fields such as exploration and public health, these advanced systems have military applications, too: they can select enemy targets consistently and precisely, entirely of their own volition.

As a result, AI has been the subject of serious debate among technologists and philosophers, who muse about this next breakthrough in human history while asking hard ethical questions: Is society changing too quickly? Do our values permit us to create golems of steel and silicon in our likeness? Are we on the verge of outdoing ourselves, relegating our species to the sidelines, if not dooming it altogether?

The whole notion of thinking machines first appeared in Greek mythology. Hephaestus, the god of technology and fire, was said to have forged living beings entirely out of metal. The “golden servants,” described in Homer’s Iliad, were intelligent and vocal, and meant to do Hephaestus’s bidding.

In 1956, John McCarthy coined the term “artificial intelligence.” The American computer scientist argued that computers could one day reason like humans, and his life’s work made him one of the pioneers of AI. So when did the world first glimpse a seemingly rational machine? It happened on a fateful day in the spring of 1997, when IBM’s Deep Blue vanquished the reigning world chess champion, Russia’s Garry Kasparov. The computer reportedly displayed intellectual processes characteristic of humans, such as the ability to reason and learn from past experience, but at a level superior to Kasparov’s.

Even today’s personal computers are more powerful than the electronic brain needed to land the Apollo astronauts on the moon. And the world’s most powerful supercomputer processors are soon expected to shrink to the size of a sugar cube, according to IBM scientists. The underlying trend is known as Moore’s Law, after Intel co-founder Gordon Moore’s observation that the number of transistors on a chip, and with it processing power, doubles roughly every two years (a figure often popularized as 18 months). Looking further ahead, “quantum computing” promises to push AI beyond the limits of conventional computers this decade.
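To see what that doubling implies, here is a minimal back-of-the-envelope sketch in Python (not part of the original article; the 18-month doubling period is simply the popularized figure cited above):

```python
# Back-of-the-envelope sketch: how capability compounds under an assumed
# 18-month doubling time (the popularized Moore's Law figure).

DOUBLING_TIME_YEARS = 1.5  # assumption: the 18-month figure cited above

def growth_factor(years: float) -> float:
    """How many times more powerful a chip becomes after `years` of doubling."""
    return 2 ** (years / DOUBLING_TIME_YEARS)

if __name__ == "__main__":
    for years in (1.5, 5.0, 10.0, 30.0):
        print(f"After {years:>4} years: roughly {growth_factor(years):,.0f}x")
```

At that pace, a processor grows roughly a hundredfold in a decade and about a millionfold in 30 years, the kind of runaway curve that fuels the singularity arguments that follow.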

“The ‘singularity’ is near,” award-winning technologist Ray Kurzweil wrote in his 2005 book of the same name. “Singularity,” a word borrowed from physics and math to describe the point at which a function takes an infinite value, has been recast to mean a point in the future when machine intelligence becomes so overwhelming that we can no longer foresee what comes after it. A kind of apostle of technological singularity, Kurzweil writes that once we pass this milestone, “the destinies of computers and of humankind will be indistinguishable.”

But with every technological advance comes the fear that artificial beings will one day take over. Movies such as 2001: A Space Odyssey and the Terminator and Matrix series have exaggerated these innovations in the form of supercomputers, evil cyborgs — and disaster for humankind.

The idea of a robot Armageddon may be entertaining to moviegoers. But it actually worries computer scientist and mathematician Vernor Vinge, one of the earliest serious thinkers on machine superintelligence. The retired San Diego State University professor is best known for his 1993 essay The Coming Technological Singularity, which argues that the exponential growth in computer technology would change our world more drastically than any previous technological surge.

That same year, at a NASA-sponsored symposium in Cleveland, Vinge explained this scenario with mixed emotions: “Within 30 years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

“Might an AI system be able to bootstrap itself to such a higher and higher level of intelligence?” he asks today. “Yes. In fact, this is one reason why I think a superior intelligence would be an almost immediate consequence of achieving human levels of intelligence with AI.”

The automation of weapons systems, for instance, is an evolving military trend. The U.S. Army aims to convert 30 percent of its ground vehicles into unmanned machines by 2015. The British military last summer unveiled an aircraft designed to use AI. The Taranis prototype, named after the Celtic god of thunder, is unlike most other unmanned planes. Instead of being controlled by humans on the ground, it can fly itself halfway around the world and select enemy targets entirely on its own. “It could then carry out intelligence [and] surveillance . . . and strike with precision weapons,” Squadron Leader Bruno Wood, the Ministry of Defence spokesperson for the Taranis project, told the Globe and Mail.

In July, the South Korean army deployed sentry robots along the southern edge of the Korean Demilitarized Zone. The automatons, developed by a consortium of South Korean firms led by Samsung Techwin Co., use heat and motion detectors to sense possible threats and can fire 40-millimetre automatic grenade launchers at North Korean targets, according to South Korea’s Yonhap News Agency.

Even if the production of AI weren’t motivated by the wish to “wage war more successfully,” no autonomous robot or AI system could ever gain the requisite skill to discriminate between combatants and innocents, says John Leslie, a professor of philosophy at the University of Victoria and a fellow of the Royal Society of Canada.

“In fact, being attacked, defeated and perhaps entirely replaced by computerized monsters is clearly a possibility — and a very unpleasant one,” Leslie says. “While it’s wrong to think that humans are fine the way they are, it is equally wrong to think that the presence of artificial intelligence will inevitably make humans better. Such systems could be at the foundations of a society whose tyranny went far beyond that of George Orwell’s Nineteen Eighty-Four.”

Leslie has gone as far as to predict that super-smart robots could cause the extinction of humankind. He muses that their cybernetic brains could be linked not only to tanks and submarines but also to cranes and other machinery. For starters, robots could begin managing natural resources on their own, removing humans from industry altogether.

“If very clever machines are developed,” Leslie adds, “it seems inevitable that many people would want to be, to some extent, merged with them. This ‘transhumanism,’ for instance, would be as simple as implanting sufficiently small AI inside human heads.”

Others, however, scoff at such predictions, including John Searle, a philosopher at the University of California, Berkeley, whose work centres on consciousness and the mind. He argues that computer thinking is a simulation of one aspect of human thinking, not an all-out emulation of it.

“People don’t know what they mean by ‘intelligence’; it’s an ambiguous term,” Searle says. “If you’re talking about the ability to perform calculations, then for $10 you can buy a pocket calculator that will outperform any mathematician who ever lived. So what? That’s like saying a sledgehammer will drive a nail faster than a human thumb.”

To make his point, Searle describes a room in which a person who speaks no Chinese receives input through a slot in the form of written Chinese characters. At their side is a rulebook, written in a language they do understand, telling them which Chinese characters to arrange on another sheet and pass back out through the slot in response. Following these rules, the person could answer questions correctly in Chinese without having the faintest understanding of the language.
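Here is a toy sketch, in Python, of that rule-following; the two-entry rulebook is invented purely for illustration and is not anything Searle specified:

```python
# A toy sketch of the rule-following described above. The "rulebook" is an
# invented two-entry table, used only to illustrate the idea.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，说得很流利。",    # "Do you speak Chinese?" -> "Yes, fluently."
}

def chinese_room(slip_of_paper: str) -> str:
    """Match the incoming characters against the rules and pass back the answer."""
    return RULEBOOK.get(slip_of_paper, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # fluent output, with no understanding inside the room
```

The program returns fluent Chinese, yet nothing inside it grasps a word of the exchange, which is precisely the point.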

Searle’s Chinese Room analogy is designed to show the limits of mechanical ingenuity compared with the complexity of human intelligence. It’s also meant to show the boundaries of the famous Turing Test, devised in 1950 by English mathematician Alan Turing. His basic argument was that if a machine could give answers in conversation indistinguishable from those given by a person, “fooling the observer into thinking it was human,” it could then be said to be thinking.

“At the moment, there’s no such thing as a commercially made conscious computer,” Searle points out. “Until you can design robots with consciousness, then you don’t have anything remotely similar to human intelligence. And secondly, without consciousness, you don’t have that central feature of autonomy. The existing computers can only do what you tell them to do.”

The prospect of AI running amok doesn’t alarm philosopher James Mensch either. “The human brain does not work according to well-defined rules, so how can an artificial one ever be devised?” asks Mensch, a professor at St. Francis Xavier University in Nova Scotia. “Also, in order for computers to have complex thoughts — or even emotions — they would have to have our evolutionary, biological inheritance.”

For more than 50 years, intelligence has been defined in terms of the algorithms that both humans and machines can perform, Mensch has written. “I would like to raise some doubts about this paradigm in artificial intelligence research. Intelligence, I believe, does not just involve the working of algorithms. It is founded on flesh’s ability to move itself, to feel itself, and to engage in the body projects that accompanied our learning a language.”

Nevertheless, technologists are busy fleshing out the idea of “friendly AI” in order to safeguard humanity. The theory goes like this: if AI computer code is steeped in pacifist values from the very beginning, super-intelligence won’t rewrite itself into a destroyer of humans. “We need to specify every bit of code, at least until the AI starts writing its own code,” says Michael Anissimov, media director for the Singularity Institute for Artificial Intelligence, a San Francisco think-tank dedicated to the advancement of beneficial technology. “This way, it’ll have a moral goal system more similar to Gandhi than Hitler, for instance.”

Kurzweil, who sits on the board of advisers of the Singularity Institute, has said that the potential benefits make it “impossible to turn our backs on artificial intelligence.” In The Age of Spiritual Machines, published in 1999, he writes that people often go through three stages in examining the impact of future technology: “awe and wonderment; then a sense of dread over a new set of grave dangers that accompany these new technologies; and finally, the realization that the only responsible path is to realize technology’s promise while managing its peril.”

In his 1985 novel Robots and Empire, Isaac Asimov was careful to add one more edict to his list of commandments for machines: the Zeroth Law of Robotics, which applies to humanity as a whole. It explicitly states, “A robot may not injure humanity, or, through inaction, allow humanity to come to harm.”

The question remains, however, whether compassion for people is adequately developed within humankind to ensure that an intelligence spawned from ours will comply.

***

This story first appeared in The United Church Observer’s February 2011 issue with the title “Artificial Intelligence.”

Kevin Spurgaitis is a journalist in Toronto.
