Artificial Intelligence: the ability of computers to perform functions that normally require human intelligence—Encarta World English Dictionary, 1999
Cyborg: a fictional being that is part human, part robot—Encarta World English Dictionary, 1999
Michio Kaku, author of “Visions: How Science Will Revolutionize the 21st Century”, predicted that sometime beyond 2050, AIs would acquire consciousness and self-awareness. MIT-trained artificial intelligence guru and transhumanist Ray Kurzweil agreed in his 1999 book “The Age of Spiritual Machines” that sentient robots were indeed a near-term possibility: “The emergence of machine intelligence that exceeds human intelligence in all of its broad diversity is inevitable.” Kurzweil asserted that the most basic vital characteristics of organisms—such as self-replication, morphing, self-regeneration, self-assembly, and the holistic nature of biological design—can eventually be achieved by machines.
Examples include self-maintaining solar cells that replace messy fossil fuels and body-cleaning and organ-fixing nanobots.
Now that we are poised on the threshold of a new era of machine and human evolution, Kurzweil's vision of a cyborg future in which humans fuse with machines in what he calls a “post-biological world” appears seductive. However, Bruce Sterling presents us with a depressing counter-vision: citing Japan's rapidly growing elderly population and its serious shortage of caretakers, Japanese roboticists envision walking wheelchairs and mobile arms that manipulate and fetch. “The peripherals may be dizzyingly clever gizmos from the likes of Sony and Honda,” says Sterling, “but the CPU is a human being: old, weak, vulnerable, pitifully limited, possibly senile.” The possible mayhem generated by this scenario is limited only by our imaginations.
Predictions of a cyborg future have also prompted concerns and dark forecasts of governments misusing brain implants and other aspects of AI to monitor and control citizens' behavior. Bill Joy, the cofounder of Sun Microsystems, wrote in his April 2000 Wired article “Why the Future Doesn’t Need Us”: “Our most powerful 21st century technologies—robotics, genetic engineering, and nanotech—are threatening to make humans an endangered species.” Joy cited Unabomber Theodore Kaczynski's dystopian scenario to warn us of the consequences of the unbridled technological advance of GNR (Genetics-Nanotech-Robotics). Joy then cautioned: “We have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.” He drove the point home by suggesting that “a bomb is blown up only once—but one bot can become many, and quickly get out of control.” Joy also warned that nanotechnological devices in the hands of terrorists or an unscrupulous military could become the ultimate genocidal weapon, created to be selectively destructive, affecting only a certain geographical area or group of genetically distinct people. Kurzweil countered: “People often go through three stages in examining the impact of future technology: awe and wonderment at its potential to overcome age-old problems, then a sense of dread at a new set of grave dangers that accompany these new technologies, followed, finally and hopefully, by the realization that the only viable and responsible path is to set a careful course that can realize the promise while managing the peril.”
The perils described by Joy would result largely from unethical actions of humans—not machines. So, how will we make a robot behave? How will we pass on the best ethics to machines and manage them, when, according to Bruce Sterling, “we've never managed that trick with ourselves”? Nanotechnologist J. Storrs Hall astutely states: “We have never considered ourselves to have moral duties to our machines, or them to us. All that is about to change.” On the question of morality, SF author Vernor Vinge, with a hierarchy of superhuman intelligences presumably in mind, referred to I.J. Good’s Meta-Golden Rule: “Treat your inferiors as you would be treated by your superiors.” What ethics and morals will we instill in our thinking machines? And what will they, in turn, teach us about what it means to be human?
Will ethics alone be sufficient to ensure the benevolence of AI? What other means might we employ to control machine intelligence, which, according to Kurzweil, will surpass the brain power of our entire human race by 2060? Although Kurzweil believes that the evolution of smart machines will run a natural course toward moral responsibility, he does support “fine-grained relinquishments” such as a “moratorium on the development of physical entities that can self-replicate in a natural environment, a ban on self-replicating physical entities that contain their own codes for self-replication and a design called Broadcast Architecture, which would require entities to obtain self-replicating codes from a centralized secure server that would guard against undesirable replication,” such as that alluded to by Joy.
Is it possible that an AI community would strive as an autopoietic system to assemble its disparate parts into a cohesive "organism" to achieve “wholeness”? Perhaps it is the destiny of all intelligent beings to seek "wholeness" within their community and a place in the universe. Perhaps we will finally realize ours through the community of machines we make.
If you haven't read them, you might be interested in my previous posts on artificial intelligence: Part 1: neural implants and Part 2: invisible computers.
Recommended Reading:
Hall, J. Storrs. 2001. “Ethics for Machines.” KurzweilAI.net (http://www.kurzweilai.net/), July 5, 2001.
Joy, Bill. 2000. “Why the Future Doesn’t Need Us.” Wired, April 2000.
Kurzweil, Ray. 2001. “The Law of Accelerating Returns.”
Munteanu, Nina. 2004. “AI: Changing Us, Changing Them.” Strange Horizons (www.strangehorizons.com, archived 23/08/2004).
Sterling, Bruce. 2004. “Robots and the Rest of Us.” Wired, May 2004.
Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” Vision-21, NASA, 1993.
Nina Munteanu is an ecologist and internationally published author of novels, short stories and essays. She coaches writers and teaches writing at George Brown College and the University of Toronto. For more about Nina’s coaching & workshops visit www.ninamunteanu.me. Visit www.ninamunteanu.ca for more about her writing.
This sort of thing was complete science fiction not too long ago, but now seems more like science.
Where ARE you getting your images?! They're great!
Quite right, Jean-Luc...Most of the articles I cite are from 10 years ago too...The fields of technology and biology are merging into something wondrous and quite frankly frightening. Bio-technology is an awesome field that uses genetics, neuroscience, biochemistry, cybernetics and even ecology to create a new world. My book, Darwin's Paradox, explores some of these...consequences and future possibilities...
Virginia...I'll never tell!...
The implications of this are staggering.
For me, the biggest issue is that you can't teach a machine morality or empathy. If we're talking machines fused with human beings, maybe it wouldn't be so bad as long as the human brain kept an ethical check on the computer brain. But if there are no moral stops in place, why would a machine bother to think about the ethical implications of anything it does?
It's also interesting that you brought up the health care situation in Japan. Have you seen this story? I lived in Osaka for a little while and this story makes me shudder. Can you imagine being in Japan and needing emergency care, only to be turned away at every hospital? I can't say I'm surprised that the Japanese would look to any alternative to traditional treatments.
Wow, SQT...that's an awful story (thanks for the link). It certainly makes the brain reel with staggering possibilities of SF horror, doesn't it?...We are definitely living in interesting times.