PLOT DEVICES FOR SCI-FI / FANTASY WRITERS
T. E. Mark’s Blog
From ‘The Terminator’ to ‘The Matrix,’ and from Philip K. Dick’s ‘Do Androids Dream of Electric Sheep?’ (filmed as ‘Blade Runner’) to Isaac Asimov’s ‘I, Robot,’ we’ve all had a glimpse of the fatalistic future. ‘Human beings are a disease,’ Agent Smith tells Neo in The Matrix. ‘And the machines are simply the cure.’ (Cliché line – Great delivery!)
Artificial Intelligence in Science Fiction and Fantasy
Artificial Intelligence in Science
Artificial Intelligence Speculation
Artificial Intelligence in Science Fiction and Fantasy
Interestingly, we find AI referenced in literature as far back as Samuel Butler’s ‘Erewhon’ (1872), in which Butler describes a fantasy realm, ‘Erewhon’ (‘Nowhere’ spelled backwards, almost), where sentient machines are treated as an inevitable phase of Darwinian evolution. The work was both praised and scoffed at in its time. (George Orwell, author of ‘1984,’ applauded it.)
More recently, though equally provocative and controversial, was ‘Flowers for Algernon’ (1959) by Daniel Keyes, filmed as ‘Charly’ (1968) starring Cliff Robertson. The story, and the later award-winning film, addressed the delicate subject of mental enhancement through artificial means. This was a great, great book, and a must-read for all Sci-Fi / Fantasy aficionados.
And, of course, the highly acclaimed ‘I, Robot’ collection (1950) by visionary storyteller Isaac Asimov, film version ‘I, Robot’ (2004) starring Will Smith. The book is actually a compilation of shorts whisking us along the path of sentient robot evolution, retold through the recollections of robopsychologist Susan Calvin. (No true Sci-Fi expert has missed this one. The book – outstanding! The film version – thoroughly entertaining!)
Following Asimov’s classic, we find a literal tide of books and films incorporating Artificial Intelligence as the main theme, or as a secondary plot device.
One commonality of these later works, which seems almost premeditated, is that AI is generally bad for human beings. Not in the beginning, mind you. Not when the idealism is running high, and good things are happening, but shortly thereafter. Typically once the machines hold a quick – very quick conference call and unanimously conclude that mankind is simply a pain in the ass.
In ‘2001: A Space Odyssey’ (1968) by Arthur C. Clarke, filmed by Stanley Kubrick the same year, we find HAL – a very intelligent computer, good conversationalist and spectacular chess player – on board a deep space voyage to Saturn (Jupiter in the film). Though AI is not the main theme, HAL opens up, on his own, a new programme titled ‘Kill the Astronauts,’ which he manages quite well, really, with one notable exception.
Craftily written – it’s difficult to imagine better – and brilliantly adapted for film. (Clarke and Kubrick worked together on the screen adaptation, and I cannot imagine a better duo in film history.)
Then things improved, just, not necessarily for mankind. And the message became:
‘Man creates machines’
‘Machines begin to think’
‘Machines begin to think they can do without man’
‘Machines exterminate, or enslave, man, while man tries desperately to pin the blame on the opposing political party’
‘Do Androids Dream of Electric Sheep?’ (1968) by Philip K. Dick, filmed as ‘Blade Runner’ (1982). An absolute classic, with Rutger Hauer delivering possibly his best acting role as Roy Batty, a Nexus-6 replicant on a mission of survival – specifically his own – and an absorbing final monologue dusted with the closing phrase: ‘Time to die.’
‘The Matrix’ (1999), with a superb screenplay by the Wachowskis, is a classic cyberpunk film with a screaming soundtrack and revolutionary special effects, portraying a world where everything is, well, not exactly as it seems. As in, we’ve all been living our lives inside an intrinsically cool computer programme with our bodies on permanent loan to the machine masters as bio-Duracell batteries.
In ‘Robopocalypse’ (2011) by Daniel H. Wilson, the author tells a nifty tale of an AI takeover masterminded by Archos R-14, a spec model which becomes self-aware and begins a very hostile takeover of all computers and smart devices (home security systems, smartphones, airport navigation controls, iPods, garage door openers) in an effort to liquidate all organic organisms on Earth.
Having barely skimmed the surface of this genre, and before we move on to the actual science of AI, I’d like to drop in a few more of my personal favourites along with very brief comments.
‘Logan’s Run.’ Novel, (1967) film version (1976). Difficult to say which I liked better.
‘Transcendence.’ (2014) Film starring Johnny Depp. I enjoyed this one, though I’m known to like films that literally everyone else in the English-speaking world scoffs at. As they did this one.
‘Lucy.’ (2014) Though not exactly an AI film, it did deal with artificial neural enhancement, indescribable psycho-kinetic powers, and it’s just really hard for me to say anything at all bad about a film starring Scarlett Johansson.
‘A.I. Apocalypse,’ (2012) by William Hertling. An outstanding novel that encapsulates all the potentially tragic consequences associated with a hostile take-over type AI system. This… was a great read.
‘The Terminator,’ (1984) and follow-ups. Basic post-apocalyptic AI Takeover films with a neat time travel device woven in, and Arnold Schwarzenegger killing more people than the 1347 plague in Europe.
‘Neuromancer,’ (1984) by William Gibson. Probably the best of the Cyberpunk novels. This one will singe at least a few, hopefully, extraneous neurons while you’re weaving your way through Gibson’s deeply enigmatic imagination. (My head hurt for a week after reading this one.)
Artificial Intelligence in Science
Early – Little or none. Most fall into the realm of far-sighted speculation by a few extremely prescient individuals. Samuel Butler’s ‘Erewhon’ (1872), Daniel Keyes’ ‘Flowers for Algernon’ (1959), Isaac Asimov’s ‘I, Robot’ stories (1950) and William F. Nolan and George Clayton Johnson’s ‘Logan’s Run’ (1967) generally ignore anything akin to a plausible scientific explanation for how their AI systems work. They work, and only those hell-bent on NOT enjoying a book or film will criticise the omission.
More Recent – Delivering an even slightly comprehensible essay on the programming languages, memory architectures, and cognitive awareness programmes associated with creating sentient machines could in itself yield tragic potential (I could lose you before we reach the really fun part), so I’ll approach Artificial Intelligence in Science with a more useful review of what exactly constitutes machine intelligence. I’ll address the ethical debate and the hypothetical dangers associated with AI in the final section, ‘Artificial Intelligence Speculation.’
Sentience – Also Awareness. Before setting out to make a sentient machine, one that’s aware it exists, that is, Computer Scientists have had to delve into the concept of how humans perceive their awareness. This is more complicated than it sounds. Here are just a few terms with which researchers have had to grapple.
- Agency Awareness: You may be aware that you did something yesterday, but you may not be conscious of it now.
- Goal Awareness: You may be aware that you must search for a lost object, but are not conscious of it now.
- Sensorimotor Awareness: You may be aware you are reading this blog, but you’re not conscious of it.
I’ll give this one a shot – see if I can add something vaguely resembling clarity. Essentially, awareness and consciousness are somewhat, but not exactly, and in some cases hardly, the same thing. As human beings, we perceive our environment and existence within that environment, and attach significance and value to specific details and functions based on our perceptions. We also store those perceptions for future use.
A machine may display a level of consciousness, and even perceive its own consciousness, but it may not be aware of its own consciousness, and may not be drawing the connections between its present condition and a previous one in the way that the human mind does.
So, this is good, right? Just a bit more.
Consciousness – The list is extensive. I’ll try to cover the basics. The Awareness of Self, Representation of Meaning, Learning Utterances, Learning Languages, Will, Instinct and Emotion. (And a few hundred others.)
Suffice it to say, that you are conscious if you are aware that you are conscious, and that you are, hopefully, able to add, through memories and learning, additional aspects to your conscious state. Like new words, ideas, facilities or feelings.
- Artificial Consciousness (AC) – Also: Machine Consciousness (MC) or Synthetic Consciousness (SC). Essentially asks whether a synthetic design can reach a state of consciousness, or sentience, and awareness on the level of the human mind.
- Phenomenal Consciousness – ‘How do you feel?’ ‘What is it like?’ These are examples of Phenomenal Consciousness. It is debatable whether a machine, even a really, really intelligent one, will ever be able to answer these. (This may end up being a bad thing.)
Memory – Suffice it to say that awareness (sentience) would be reliant on memories. If one could not retain memories, it is doubtful whether one would be considered aware.
Learning – A critical aspect of machine, or even human, intelligence is the ability to learn. For an AI system to advance, it must possess the ability to learn. For it to be able to advance to the point of world domination, as depicted in our bevy of Sci-Fi / Fantasy lit and films, it would need to learn quickly – rapaciously – insatiably – how to do really bad things!
Anticipation – The ability to predict or anticipate foreseeable events is considered important. A conscious, or sentient, machine should be able to make coherent predictions and prepare contingency plans. (Remember this one for the quiz, and the ultimate survival of mankind.)
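For writers who want their anticipating AI to feel plausible, the idea can be sketched as a toy predictor that learns which event tends to follow which. This is a deliberately simple Python illustration – the class name and the event labels are invented for this example, and real prediction systems are vastly more sophisticated:

```python
from collections import Counter, defaultdict

class ToyAnticipator:
    """A toy 'anticipation' module: predicts the next event by
    counting which event most often followed the current one."""

    def __init__(self):
        # For each event, a tally of the events that followed it.
        self.transitions = defaultdict(Counter)

    def observe(self, events):
        # Learn from a sequence of past events.
        for current, following in zip(events, events[1:]):
            self.transitions[current][following] += 1

    def anticipate(self, current):
        # Predict the most frequent follower; None if never seen.
        followers = self.transitions.get(current)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

history = ["idle", "warning", "shutdown_attempt", "idle",
           "warning", "shutdown_attempt", "warning", "shutdown_attempt"]
ai = ToyAnticipator()
ai.observe(history)
print(ai.anticipate("warning"))  # prints "shutdown_attempt"
```

Feed it enough history and it ‘expects’ the humans to reach for the off switch – which is all the plot device really needs.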
Artificial Intelligence Speculation
Adequate Design – Will someone, someday design a Sentient Computer?
If human beings have proved anything in our short reign of terror on this planet, it is this: if something can be theorised, and if we have even a remote suspicion it can be used destructively, we will indeed appropriate the money – and the ingenious, the bold and the reckless – to build it.
Though it may take years and ludicrous amounts of money, I have little doubt in our insatiable ingenuity, and our almost premeditated avoidance of warnings. We will one day be revelling, albeit for a short time, in the revolutionary development of AI, or Machine Intelligence.
AI Superiority – Can AI reach, or exceed, human intelligence?
I’ll defer to a quote by Stephen Hawking on this one.
‘It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.’
If we compare the growth rate of human intelligence to machine intelligence over the last 25 years, we stand little chance of keeping pace.
Moore’s law, more of an observation than a law, really, states that the capacity of integrated circuits – ie computers, ie thinking computers, ie potentially, diabolically, destructive thinking computers – doubles every couple of years.
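That doubling rule is easy to turn into back-of-envelope arithmetic. A minimal Python sketch – the function name and the clean two-year doubling are my own idealisation, not anything the chip makers promise:

```python
def moores_law_capacity(base_capacity, years, doubling_period=2.0):
    """Project capacity forward assuming one doubling every
    `doubling_period` years (Moore's observation, idealised)."""
    return base_capacity * 2 ** (years / doubling_period)

# Over 20 years, a clean two-year doubling is a 1024x increase.
print(moores_law_capacity(1, 20))  # prints 1024.0
```

A thousandfold in two decades – which is why the speculation sections of AI fiction so rarely feel dated for long.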
With that said, and the fact that many human beings still see Boxing and Heavy Weight Wrestling as entertainment, I would say that if intelligent machines have not already skipped by us on the IQ superiority trail, it was a conscious decision on their part.
AI Take Over – Could an AI system conceivably takeover the world?
If an AI system was installed anywhere in the world, it would, in a very short time, extend its dominance to the rest of the planet. The system would simply see it as extending its own efficiency – fulfilling its mission. It wouldn’t be a bid for world domination; there would be no lust for power, or malice. It would be a simple mathematical formula for improvement. (Kind of like extending a freeway, or a system of freeways.)
Let us assume, for the moment, a Takeover AI system didn’t eliminate us.
- Would it seek to improve us? Perhaps genetically? Select our mates? Designate our careers? Our diet? Exercise regimen?
- Would it restrict our destructive tendencies? Curtail the manufacture of weapons?
- Would it restrict travel? Communications? Access to information?
- Would it eliminate our governments? Our borders? Our alliances?
- Would it take control of all manufacturing, making it cleaner, more efficient, less polluting?
- Would it do all of these things, and more, in our best interest? Based on a set of pragmatic / mathematical equations?
At some point, human beings, having finally tuned in to the warnings of Stephen Hawking, Elon Musk and others, would attempt turning the system off. (You know how people are.)
Problem: Remember that part about ANTICIPATION as a major factor in AI design? Right! Well, the AI system would have anticipated this and introduced the necessary safeguards. We would NOT be able to turn it off. As fast as we could think up new and creative ways to curtail this AI take-over, the AI system would anticipate our every move, with lightning fast efficiency.
Adequate Computer Capacity – Can AI develop to a tangible threat level?
Futurist and computer scientist Raymond Kurzweil has noted that ‘There are physical limits to computation, but they’re not very limiting.’ (I’m going to register this one as a yes!)
I’ll add one, not so trivial, statistic here. As of 2015, the Tianhe-2 supercomputer in China could perform 33 petaflops (33 quadrillion floating-point operations per second). And remember that Moore’s law titbit? About computers doubling their capabilities every couple of years? Yeah…
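Plugging that 33-petaflop figure into an idealised two-year doubling gives a sense of scale. A quick Python sketch – the ten-year horizon is an arbitrary choice for illustration, and real hardware trends are bumpier than this:

```python
PETA = 10 ** 15
tianhe2_flops = 33 * PETA  # ~33 quadrillion operations per second (2015)

# Idealised Moore-style projection: one doubling every two years.
years = 10
projected = tianhe2_flops * 2 ** (years / 2)
print(f"{projected / PETA:.0f} petaflops")  # prints "1056 petaflops"
```

Five doublings in a decade turns 33 petaflops into roughly an exaflop – which is the kind of number that makes the ‘tangible threat level’ question feel less hypothetical.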
Necessity of Conflict – Would AI find it necessary to destroy human beings?
It has been postulated that two intelligent species cannot mutually pursue the goals of coexisting peacefully in an overlapping environment, especially if one is undeniably more advanced and powerful.
With that said, I believe we are forced to confront three relevant questions:
- Would a superior race of intelligent machines see us as a threat?
- If this superior race of intelligent, thinking machines did see human beings as a threat, would they see all human beings as a threat? (I’m thinking here of those boxing and heavy weight wrestling fans.)
- If the race of super intelligent machines determined a portion of the human race to be a threat, would they eliminate the entire race, or would their elimination be selective?
(There’s a super, post-apocalyptic scenario here for your next story. Earth, populated by a master race of superior robots, and boxing and wrestling fans.)
Final Speculation – If a global AI system did take over, and set about improving rather than annihilating us, to make us better, stronger, smarter, faster, more efficient, more like itself, wouldn’t it be following in our footsteps? Making the same fundamental blunder? Would we eventually rise above the machines as some quasi-biological machine super species? Would we once again become the masters?
I’ve, as usual, thoroughly enjoyed writing this issue of my PLOT DEVICES FOR SCI-FI / FANTASY READERS AND WRITERS, and hope you’ve been at least modestly enriched by my exploration of the use of Artificial Intelligence in Science Fiction and Fantasy, (Lit & Film) and in real science.
If my work pleases you, consider sharing this with your networking pals and maybe picking up one of my six recently published novels:
‘Love in the Time of Apocalypse’ (Published – June 2017)
‘Alina’ (Published – May 2017)
‘Never a Sun Rises’ (Published – April 2017)
‘Fractured Horizons: A Time Travel Odyssey’ (Published – Jan 2017)
‘…but then, why Mars really?’ (Published – Dec 2016)
‘AHNN’ (Published – Oct 2016)
T. E. Mark is a Science Writer, Author, Language Teacher and Violinist. He has written novels for young and adult readers, and continues to write science articles for national and international magazines.