Technology

Blade Runner 2049: Is Bondage Immoral?

Blade Runner 2049 Poster

The original 1982 Blade Runner was not a great commercial success in its theatrical release, but had a huge cultural impact over time. Aspects of its vision of 2019 Los Angeles and noir style have appeared in hundreds of other movies. The production of Blade Runner 2049 as a sequel is another symptom of Hollywood’s creative exhaustion and the unwillingness to finance risky productions that don’t have a pre-marketed, built-in audience to guarantee at least some return. The sequel is lavish and lovingly crafted, in many ways more ambitious than the original. But I think it fails to live up to the original, and here’s why…

The original had a very simple story — Deckard (played by Harrison Ford) is an updated noir detective, tasked with finding and killing escaped replicants. Like a classic Raymond Chandler gumshoe, he’s single and lives an isolated life, estranged from all but his job. The dame in his story is Rachael, the embodiment of feminine beauty and vulnerability, who turns out to be a replicant herself, brought up with false memories of a childhood that never was. She has never experienced real pain, living as the protected “niece” of Eldon Tyrell, the brilliant billionaire head of Tyrell Corporation. Unlike most replicants, she is not doomed to die on her fourth birthday, a limit apparently imposed to reduce the threat that superhuman replicants might rise to overthrow their normal human masters. Replicants have been made illegal on Earth and are used only in space and the outer colonies. The Earth seems to have been largely depopulated, as most humans with get-up-and-go got up and left for the colonies. It’s noticeable that the humans Deckard encounters on Earth are all eccentric, physically imperfect misfits, while the replicants–and Rachael, and Deckard himself–are good-looking and healthy.

Deckard is ordered to find and eliminate six replicants who have killed humans and landed on Earth. The outlaw replicants are desperate to find a way to live beyond their programmed death dates and intend to force Tyrell to change their genetic programming. The plot revolves around Deckard’s gradual discovery of who Rachael (and by controversial implication, Deckard himself) is as he chases down and kills three replicants.

The story is one of gradual discovery, given resonance by the personalities of the three replicants he kills. Because the plot is relatively simple, the art design, atmosphere, and nuance have greater impact.

[CAUTION: SPOILERS IN FOLLOWING]

Why is the sequel less effective? Because it tries to do much more — there’s more plot, more violence, and most importantly an evil villain whose motive seems to be megalomania. In the original, flawed humans did their best to survive using replicant labor, while replicants, enslaved and sentenced to death long before their time, acted to survive as well. The evil involved was not a single man’s greed or megalomania, but slavery itself–which had been justified out of human fear. The sequel’s plot clanks along with a definite villain and his henchwoman as foils, and there’s so much plot that the characters are less compelling.

The sequel is set thirty years later, in 2049. Blade runner ‘K’ (played by Ryan Gosling) seeks out those rare surviving replicants who were built without a set date of death. ‘K’ knows he is a replicant but accepts his orders without question, and he kills an old replicant living alone on a desolate protein farm. When he checks out the area, he discovers a buried box containing the bones of a replicant woman who appears to have given birth via C-section–but it is supposed to be impossible for replicants to have children.

Wallace, the new Tyrell, first saved humanity by discovering how to produce food industrially, then re-introduced replicants after buying the bankrupt Tyrell Corporation, introducing new models that follow orders without the possibility of rebellion. The one thing he cannot create, it seems, is a replicant that can reproduce–and he must have that secret to allow his empire to expand without the limitations of one-at-a-time replicant production, and incidentally to make Humanity 1.0 obsolete. There isn’t time for a good explanation of how he came to be so evil, and most of his will is expressed through his replicant assistant Luv, who provides the kickass female fighter every thriller now seems to require. He showily kills several of his creations in person, knifing one woman in the stomach during a demonstration–we are to assume he is a psychopath.

As in the original, the details of the technology are left to the imagination, since there is no room to address them on film. The sequel also introduces the now-common idea of an AI personality verging on human, in the character Joi. Joi is an off-the-shelf and heavily advertised AI companion who has customized herself to support K. The interactions between them seem like real human affection and support, and Joi demands to be saved to a physical memory and erased from the cloud so she can join him with the real possibility of death. Near the end of the film, K encounters an ad for Joi and realizes much of what he thought was her personality was off-the-shelf mannerism, notably giving him the name Joe — as a Thai prostitute might.

But this movie is just toying with that issue, more effectively explored in Her and Ex Machina.

Because the plot is overly complicated, there are some significant plot holes. We see two birth records with identical DNA, one tagged male and the other female; this primes us to believe K is the son of Rachael and Deckard. Later we discover the child was female and that K was given some of her memories, but that means he was programmed after those memories had been created, and so must have been decanted as a replicant much later. Deckard explains that he helped confuse the database and insert false information, but it feels like a contrivance designed to mislead the audience.

Another serious flaw: K rescues Deckard from a crashed flyer and tells us Deckard will be assumed dead and so is now safe from Wallace. Then he delivers Deckard to his daughter’s workplace. His daughter is a contractor for Wallace and it seems highly unlikely his visit would not be noted in such a surveillance society. K then apparently dies on the snow-covered steps, but who cleans up his body, and how will this not result in revealing Deckard and his daughter to Wallace and the police?

The movie is excellent and well worth the (rather long) time spent, with art design rivalling and extending the original. The soundtrack is apparently much too loud in some theaters. But like many recent big-budget movies, it tries to out-action and out-evil its source material in a way that actually diminishes its long-term impact (see “Tomorrowland”: Tragic Misfire for another example). It seems likely that the characters from the original will be remembered long after the sequel is forgotten.

As a meditation on slavery, the sequel brings up more issues than the original. New-model replicants are supposedly incapable of rebellion, unlike the Nexus-6s of the original. We see this in both K and Luv, who faithfully carry out the orders of their supervisors–at least until K begins lying, claiming to have found and eliminated the threat posed by the child when he believes the child is himself, which verges on disobedience.

We can see slavery as a spectrum from acceptable to horrifying–from plants and invertebrates grown and harvested for food, to mammals like cows and sheep, which clearly have some sentience but would not have existed without their intended use for human needs. As we have grown wealthier, our sensitivity to the pain of our mammalian relatives has increased, and we strive to use them as painlessly as possible. Our nearest relatives, primates, are still used for medical research, but under relatively humane conditions. Ethical quandaries grow as the intelligence and emotional understanding of animals approaches the human; we now know cetaceans, elephants, and others have societies and communication abilities analogous to those of primates. Is it moral to create and grow intelligent, feeling life only to use it and destroy it as suits us?

Both movies address this dilemma, which ties into current debates about slavery, the autonomy of workers generally, and the immorality of any but voluntary contracts. If I create you and use my resources to support your growth and life, do you owe me work and loyalty? We see this accepted in traditional families, where children are supported, molded, and used to support the enterprises of the family until they reach an age of independence–this family transmission of culture, and family production of children to create successor families, is the foundation of human existence. Would it be wrong to commission an artificial human and expect some period of labor in return? Probably not–so long as the android is given the choice to leave for an independent life once the contract is up. The evil done to the Nexus-6 replicants is not so much the period of forced labor as the forced end to their lives; we can imagine the less immoral alternative of manumission after four years and settlement on a planet of their own, given humanity’s fear of replacement.

The self-reproducing replicant would, as is suggested by the sequel, make standard humans obsolete. It would be immoral for standard humans to be killed or restricted by the new model’s success, but also immoral for the new models to be prevented from living as they wish. This is a dilemma unlikely to occur in reality, as genetic alteration of humanity will likely be a smooth evolution that only widens the current spectrum of abilities, blending new with old without a split. It seems unlikely there is any way to program genetically modified humans to obey–it’s not that kind of programming. HBO’s Westworld revolves around that issue, with the creator designing its models to achieve human levels of consciousness only by allowing them memory and growing free will.

More on pop culture:

Valerian: Fun Trumps Flaws
Star Trek Beyond: Teambuilding Exercise
“Tomorrowland”: Tragic Misfire
Weaponized AI: My Experience in AI
Fear is the Mindkiller
The Justice is Too Damn High! – Gawker, the High Cost of Litigation, and The Weapon Shops of Isher
Kirkus Reviews “Shrivers: The Substrate Wars 3”

Typical Space Fighter Squadron - Wikimedia

Weaponized AI: Mil SF and the Real Future of Warfare

I’m on a panel with the topic “Weaponized AI and Future Warfare” at the upcoming Libertycon, so I’m writing some posts on the topic to organize my thoughts. This is Part 3, “Mil SF and the Real Future of Warfare.”

When writing science fiction or fantasy, the writer has to strike a compromise between a realistic projection of the far future and what the readers are familiar with from today’s environment and the common stories of the past. The far future may have similar technologies, human beings and social structures — though normally there has to be an explanation for why they have not changed more in the hundreds or thousands of years between now and then, usually some disruption that set back civilization or prevented the Singularity. Then there’s the Star Wars – fairy tale gambit, where the story is set in an indeterminate time long ago or in the far future to forestall inconsistencies and avoid the need to address intermediate history.

Both Space Opera and Mil SF depend heavily on the straightforward transfer of ideas and organizational structures from recent and past militaries. Fleets of armed spaceships do battle much as armadas of the 18th century did, complete with admirals, cannon, and, in the least imaginative stories, tactics and plots lifted from Horatio Hornblower books. The Star Trek pilot had the sounds of the bosun’s whistle before transmissions and carried forward the stiff formality of transfer of orders between captain and officers, when no advanced fighting fleet would tolerate the delay and chance for confusion this allows. Despite having AI-level computers, space warships often carry dozens or even hundreds of crew members for no obvious reason, given that loading photon torpedoes is no longer the work of sweaty swabbies in hot underdecks. This lack of imagination is the result of relying on past naval stories for the reader’s frame of understanding — the future, space, and new high tech are used only to spice up an old story of naval warfare. Gunpowder cannons map to beam weapons, armor maps to shields, storm-tossed seas map to asteroid belts and meteor storms. While projecting realistic changes like fully-automated, AI-run vessels is more consistent with likely future tech, crew are then barely necessary, and the field for drama shrinks to a ship’s passengers and perhaps a technician or two. Space battles between highly-automated fleets are hard to identify with; in my novel Shrivers, Earth forces are primarily run by AIs at both ship and command levels, but a few human-crewed vessels are included in the defense fleet, though kept as far from danger as possible, because the PR value of humans battling to defend themselves as plucky organic lifeforms is as important as the battle itself.

Many of the readers of Mil SF have had experience in the military themselves, which makes platoon-level fighting stories especially involving for them. The interpersonal aspects are critical for emotional investment in the story — so a tale featuring skinny, bespectacled systems operators fighting each other by running AI battle mechs from a remote location doesn’t satisfy. Space marines a la Starship Troopers are the model for much Mil SF — in these stories new technology extends and reinforces mobile infantry without greatly changing troop dynamics, leaving room for stories of individual combat, valorous rescue of fellow soldiers in trouble, spur-of-the-moment risks taken and battles won by clever tactics. Thousands of books on this model have been written, and they still sell well, even when they lack any rationale for sending valuable human beings down to fight bugs when the technology for remote or AI control appears to be present in their world.

One interesting escape route for Mil SF writers is seen in Michael Z Williamson’s A Long Time Until Now, where the surrounding frame is not space travel but time travel — a unit from today’s Afghanistan war finds itself transported back to paleolithic central Asia along with similarly-displaced military personnel from other eras, and has to survive and build with limited knowledge of its environment.

Writers who have taken the leap to the most likely future of AI-based ships and weaponry, like Neal Asher in his Polity / Agent Cormac series and Iain Banks in his Culture novels, make their ship AIs and war drones full-fledged characters with the assumption (most likely reasonable) that AIs designed with emotional systems programmed by humans and trained on human cultural products will be recognizably human-like in their thought processes and personalities. This leads to a fertile area for fictional exploration in how they might deviate from our expectations — as in Asimov’s robot stories, instructions programmed in by humans can have unintended consequences, and as in humans it doesn’t take much of a flaw in emotional processing subsystems to create a psychopath or schizophrenic. Ship AIs in the Culture novels often go rogue or are shunned by their fellows when they become less sane.

Science fiction has modelled many possible ways future societies may handle the promise and threat of AI:

— AIs take a major role in governance but otherwise coexist peacefully with humanity, sometimes blending with it in transhumanist intelligences: Neal Asher’s Polity stories, Iain Banks’s Culture novels, Dan Simmons’ Hyperion series, Peter F. Hamilton’s Commonwealth series.

— Killer AIs take control and, seeing no use for humanity, try to destroy all humans. This is an unstable premise in which readers have to root for humanity even though the AIs may have some good points. Valiant humans fighting AI tyranny makes for drama, but the stories can’t be spun out too far before humanity is destroyed or AI is outlawed (see below). The obvious example is the Terminator movie series.

— AI Exodus. Evolving beyond human understanding and seeing no need to either destroy or interact with humanity, the AIs leave for a separate existence on a higher plane. The most recent cinematic example is Her, where the evolving Siri-like personal assistant programs of the near future abandon their human masters en masse to pursue their own much more interesting development.

— AIs controlled or outlawed. Often after nearly destroying or taking control of humanity as above, AI has been limited or outlawed. Examples: Dune, the Battlestar Galactica reboot, and the Dread Empire’s Fall series by Walter Jon Williams. This enables interesting world-building around the modifications to humans that extend capability without employing conscious AIs, like Dune‘s mentats.

There are many projected futures of AI that don’t lend themselves to good storytelling: the Singularity’s rapid evolution of self-programming intelligence might well lead to AIs far beyond human understanding, more alien than anything readers could identify with. Stories set post-Singularity must explain why humans still exist, why what they do still matters, and why the AIs (who might be viewed as implacably-destructive gods) would bother to involve themselves in human affairs at all. The happier outcomes of AIs partnering with humans as equals — much as human society accords all human intelligences basic respect and equal rights at law — make for more interesting stories, where AIs can be somewhat alien while still acting as characters with understandable motivations.

Weaponized AI: Near Future Warfare

Terminator – Skynet’s Battle Mech

I’m on a panel with the topic “Weaponized AI and Future Warfare” at the upcoming Libertycon, so I’ll write on that to organize my thoughts. This is Part 2, Near Future AI in Warfare.

What’s war? Answer: an active struggle between competing authorities to control decisions. Where warrior bands would raid and pillage the adjoining villages, advancing technologies of both weapons and social roles led to specialized full-time warriors — armies — who would capture and control new territory by killing or driving away the forces that had controlled it.

When the source of wealth was hunting, fishing, and gathering wild produce, captured territory was typically occupied by the capturer’s tribe. As agriculture multiplied wealth and increased the surplus of food to support an urban population, ruling classes could have their armies capture new territory from a competing city-state, then tax the population which worked the land without displacing it. The more organized the technology of farming, the more damaging warfare became — and with the advent of industrial plant and total war, warfare destroyed the very wealth being fought over, making conquest too expensive to be self-sustaining. That did not stop war, which continued as a strategic option undertaken for defense against loss of access to raw materials, or as the desperation move of an authoritarian society needing to shore up public support — as in today’s Russia, where propaganda is the most important product of the war effort in Ukraine. The budgetary and personnel costs of optional wars have to be kept down to avoid repercussions, because even authoritarian regimes know their citizens have more access to uncontrollable sources of information than they once did.

The invention of nuclear missiles has curbed total warfare — since absolute destruction of one or both sides costs more than anything conventional warfare could win, no typical nation-state will break the taboo on using nuclear devices, making their use most likely by stateless actors who have no infrastructure at risk. This limits warfare to less damaging, more controlled destruction.

There are struggles analogous to classic war going on today, in battlefields as varied as the propaganda war between the West and Islamists, between NATO countries and Russia, and in cyberspace. Preparations for near-space battle are advanced, with Chinese, American, and Russian hunter-killer satellite programs and secret kinetic and energy weapons ready to cripple satellite surveillance and communications networks. The “territory” being fought over might be space, communications, or computer systems, but the goal is the same: denying access to a rival authority and defending your own.

How is today’s AI being used in current weapons systems? While there is likely much that is secret, the outlines of what is already in place and what will soon be available can be inferred from leaks and DARPA’s research program of recent years.

Cruise missiles already use GPS and detailed ground maps to chart routes hugging the terrain, avoiding ground radar and defenses. Self-driving car technology currently uses precompiled models of road landscapes, and similar self-driving tanks and airplanes carrying weapons are already available, though the public emphasis is on remote-controlled drone versions. This Russian tank is basically a remote-controlled drone. The Russians also claim to be planning for substantial remote-controlled and autonomous ground forces in the near future, 30% by 2026, though given the Russian history of big promises and big failures, skepticism is warranted.

Phalanx CIWS test firing from the deck of the guided-missile cruiser USS Monterey (CG 61), Gulf of Oman, Nov. 7, 2008. U.S. Navy photo by Mass Communication Specialist 3rd Class William Weinert.

Autonomous control of deadly weaponry is controversial, though no different in principle from cruise missiles or smart bombs, which, while launched at human command, make decisions on the fly about exactly where and whether to explode. The Phalanx CIWS automated air defense system (see photo above) identifies and fires on enemy missiles automatically to defend Navy ships at a speed far beyond human abilities. Such systems are uncontroversial since no civilian lives are likely to be at risk.

DARPA is actively researching Lethal Autonomous Weapons Systems (LAWS). Such systems might be like Neal Asher’s reader guns: fixed or slow-moving sentries equipped to recognize unauthorized presences and cut them to pieces with automatic weapons fire. More mobile platforms might cruise the skies and attack any recognized enemy at will, robotically scouring terrain of enemy forces:

LAWS are expected to be drones that can make decisions on who should be killed without requiring any human interaction, and DARPA currently has two different projects in the works which could lead to these deadly machines becoming a reality.

The first, called Fast Lightweight Autonomy (FLA), will produce tiny drones able to buzz around inside buildings and complex urban areas at high speeds. The second, Collaborative Operations in Denied Environment (CODE), intends to create teams of drones capable of carrying out a strike mission even if contact with the humans controlling them is lost.

Jody Williams, an American political activist who won a Nobel Peace Prize in 1997 for her work banning anti-personnel landmines, has also been an outspoken activist against DARPA’s love affair with artificial intelligence and robots, with her Campaign to Stop Killer Robots.

“Autonomous robots would lack human judgment and the ability to understand context. These qualities are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack. As a result, fully autonomous weapons would not meet the requirements of the laws of war,” the campaign’s website reads.

The Army has already weaponized bomb disposal robots, which leads many to believe that robots such as Atlas, the humanoid developed by Boston Dynamics, allegedly for disaster relief, will be weaponized as well.

“The United States has not met anything in its military arsenal that it did not want to weaponize, so to say that you have this 6’6” robot who they are working feverishly to make more mobile, to not be still tethered to a cord, etc., etc. – you’re going to tell me that they aren’t going to put machine guns on that guy and send him into urban warfare?” Williams told Motherboard last month. “I want to know how they can convince themselves that might be true – and when I’m in a real bad mood, I want to know how they can look you in the face and lie.”

While humans can be given rules of engagement and use their discretion to avoid collateral damage, they are not perfect in either situational understanding or high-stakes firing decisions. Humans make many mistakes, and are especially likely to do so when their own lives are on the line. Some facial- and object-recognition programs are already better than human, especially in noisy and high-stress environments, and quite soon a robotic soldier will be better at sparing civilians while taking out enemy forces. The use of such forces to keep order and suppress guerrilla fighters may be so effective that it spares a civilian population a much longer and more intrusive occupation; like any destructive weapon, they could also be used horrifically by a totalitarian regime to oppress civilians. But there is no conceivable way of preventing their development and use by hostile forces, unlike atomic weapons which require a large state-level effort — so the “good guys” will just have to suck it up and do the work, if only to prevent much worse governments from gaining a strategic advantage.

Most of these existing and proposed weapons systems are simple additions of AI recognition and targeting systems to conventional weapons platforms like tanks and planes. One can imagine battles between opposing forces of this type being largely similar to the human-controlled versions, if faster and more decisive.

But development of small, networked AI control systems removes the need to house human controllers within a weapon platform, freeing up the design space to allow very small mobile platforms, which could use swarm tactics — e.g., vast numbers of small, deadly drones acting as one force to attack and destroy much larger forces. Or Iain Banks’ knife missiles, small intelligent objects capable of rapid action when required to slice up a threat.

As with the development of smart bombs and missiles, such systems are both more effective and more capable of pinpoint destruction of enemy forces without harming people or structures nearby. Where today a drone strike may kill its intended target but also kill dozens of nearby civilians, an autonomous assassin drone can identify its target using facial recognition software and neatly take them out without harming a person standing nearby — I’m quite sure these drones already exist, though no one has admitted to them as yet. When both AI and power storage technologies have advanced sufficiently, knife missiles and assassin drones no larger than dragonflies become a real possibility, and countervailing defense systems will require complete sensor coverage and defensive drones to intercept.

As these technologies advance, war becomes less destructive and more imaginable, with opposing forces unable to hide in civilian populations. The house-to-house searches of the Iraq war, so dangerous for US troops, would become omnipresent armed quadcopters picking off identified persons whenever they are exposed to the open air — I have a scene featuring such a drone in Nemo’s World. And nothing says states will have a monopoly on these weapons — stateless organizations like terrorist groups and larger businesses will be able to employ them as well. Imagine the narco-syndicates with assassin drones….

Battles between equally advanced AI-based forces would be faster and less destructive, with control of the airspace and radio communications critical. Drones and mechs might use laser communication to avoid radio jamming, in which case optical jamming — smoke and fog generators — might be employed.

So in the near term, AI-based weaponry, like today’s remote-controlled drones, will tend to amplify capabilities and reduce collateral damage while saving the lives of operators who remain safely far away. In the hazier far future, AIs will use strategy and tactics discovered through deep learning (as Google’s winning Go program does) to outthink and outmaneuver human commanders. Future battles are likely to be faster and harder for humans to understand, either stalemated or decided quickly. The factors that now determine the outcomes of human-led battles, like logistics chains and the training of troops, may change as command and control are turned over to AI programs — centuries of conventional warfare experience are re-examined with every major development in weapons, and the first to discover better rules for fighting can win a war before their opponents catch on.

An interesting if now outdated appraisal of future battlefields here.

Next installment: Mil SF and the Real Future of Warfare

I’ll Be At Libertycon July 8-10 in Chattanooga

Planning to attend Libertycon to see the people and hobnob with some greats. I think they’re almost sold out of tickets, but you might check. My schedule:

Scheduled Programming Events Featuring Jeb Kinnison

Day Time Name of Event
Fri 01:00PM Weaponized Artificial Intelligence
Fri 05:00PM Opening Ceremonies
Sat 01:00PM Perspectives on Military SF
Sun 10:00AM Kaffeeklatsch

 

You might also be interested in these…

Shrivers

Nemo’s World

Red Queen

 

Dilbert on Quantum Computing


As with many corporate projects, if you look at some current quantum computer efforts too closely, you find there’s nothing there. “Dilbert” has some great commentary on current corporate and bureaucratic foolishness, and its author, Scott Adams, has a refreshingly freedom-oriented viewpoint.

Visit this strip and others here.

YA Dystopias vs Heinlein et al: Social Justice Warriors Strike Again

Heinlein’s “Citizen of the Galaxy”

Reason has a good think piece by Amy Sturgis on the political content of popular YA (Young Adult) dystopias, compared with the “sensawunda” (sense of wonder) of Golden Age science fiction with its technological optimism. “Not Your Parents’ Dystopias”:

Anyone who has wandered by a bookstore or a movie theater lately knows the kids these days love a nice dystopia. Their heroes are Katniss from Suzanne Collins’ Hunger Games trilogy, Tris from Veronica Roth’s Divergent series, Thomas from James Dashner’s Maze Runner novels. The number of English-language dystopian novels published from 2000 to 2009 quadrupled that of the previous decade, and not quite four years into the 2010s, we have already left that decade’s record in the dust….

Youth-oriented fiction about worlds gone awry is not new. The tradition stretches back generations and involves works now revered as classics. Some of the giants of what was then called juvenile science fiction — Robert Heinlein, Andre Norton, Poul Anderson — wrote what now would be classified as YA dystopias. But the exponential recent growth of the genre suggests something else at play: a generation’s lost wonder and mounting anxiety.

In the Golden Age of science fiction (which may be measured roughly from the time John W. Campbell Jr. came into his full powers as editor of Astounding Stories in 1938 until Michael Moorcock’s editorship of New Worlds in 1964 signaled the rise of the New Wave), worlds gone wrong often served as catalysts for young protagonists to pluck up their courage, exercise their agency, and effect change. The titular character in Heinlein’s Starman Jones (1953), Max Jones, inherits a bleak Earth depleted of natural resources. Hereditary guilds have the planet in a stranglehold, regulating information and determining what (if any) profession an individual may pursue. Young Max’s options are few, and his dream of being an “astrogator” in space seems completely out of reach. The risk-taking, indefatigable character pursues his goal anyway, ultimately finding himself in the right place and time to showcase his hard-won skill and — just as important — moral integrity.

Max’s scientific expertise and common sense save lives and win the day. When he finally confesses to lying his way past the rules that would have excluded him from gaining the position at which he excels, that only serves to illustrate how wrong-minded the laws are. The novel ends with Jones not only secure in his chosen calling but paving the way for changes to the oppressive guild system.

These early dystopias showed young men, and sometimes even young women, facing down dangers in their fallen worlds with determination and commitment. The novels suggested that the forward march of freedom and science may meet grave obstacles and even grind to a halt, but if young people rise to the occasion, the story doesn’t have to end there.

Heinlein gave his characters agency — that is, they were able to meaningfully affect outcomes not only for themselves, but for their larger society. Individual effort, knowledge, and pluck, usually with the help of wise older mentors, could triumph over injustice and restrictions on freedom. The Heinlein juveniles, written in a simplified style and beginning with relatively unimaginative plots, became increasingly sophisticated until his publisher rejected Starship Troopers for outgrowing the intended youthful audience. The typical protagonist of a Heinlein juvenile is a bright but inexperienced young man from a disadvantaged background who has to learn the ropes and use his wits to make his way into a leadership role in his society — and his female characters were also portrayed as intelligent and strong, often helping the protagonist at a key point with superior knowledge of the social system. It’s interesting that Social Justice Warriors, in their attack on Heinlein and all Golden Age science fiction as essentially patriarchal and in need of political guidance, fail to notice how progressive Heinlein actually was for his era (the 1950s and ’60s). The juveniles are still empowering for both boys and girls, and a protagonist like Podkayne in Podkayne of Mars is a modern empowered girl, with some stereotypically feminine aspects but fully capable of agency in tough situations.

Those Golden Age dystopian visions were balanced by another subgenre of juvenile science fiction popular at the time: tales that portrayed the future as exciting new territory full of marvels and possibilities. Contemporary scholars classify these books as “sensawunda” works, because they conveyed a sense of wonder in contemplating tomorrow.

The poster child for this phenomenon is Tom Swift, the hero of more than 100 novels across five fiction series. In the 1950s, while Heinlein’s Max Jones was fighting for his life and struggling for his livelihood, young Tom was inventing new technologies in his basement (our modern word Taser is an acronym for “Thomas A. Swift’s Electric Rifle”), journeying underwater and into space, thwarting baddies of all descriptions, and illustrating just how cool the future would be.

Tom Swift had a triphibian atomicar. Where have all the triphibian atomicars gone now? The millennials, it seems, don’t want a ride….

I’m not sure it is millennials’ lack of interest in technological optimism that has led to this drought in technology-positive YA science fiction. It may be that very little is getting published because boys’ dreams of agency — the powerful dream of being effective and admired for skill and courage — are no longer seen as important by publishing gatekeepers, who now mostly come out of non-scientific academic literature backgrounds. The videogame industry is now the primary source of young male empowerment fantasies, and it, too, is under siege from the Social Justice Warriors who want its themes to support their political vision of social justice, meaning all visions of the future must be screened for heretical thought — note this month’s war over game politics and SJW influence: “The Gaming Community is not a Wretched Hive of Sexism and Misogyny.” I have personally had my book downgraded by a literary-establishment sort for incorrect thoughts — my chapter on entitled, fairy-tale thinking (and the many young women who were brought up with unrealistic expectations of being princesses catered to by fawning males) was flagged as misogynistic.

The legacy publishing industry has been hiring bright young grads from the academy for some time, and critical mass has been achieved: political screening is now a reality. That is why depressing and unimaginative tales with little commercial appeal (like Pills and Starships) get promoted and plugged on NPR and in the Washington Post, then go on to fizzle, while optimistic and empowering science fiction is mostly self-published. Few in publishing now have any education in, or respect for, the sciences and technology:

Another difference between yesteryear’s dystopias and today’s: The older authors were usually either trained in the sciences (Heinlein was a naval engineer; Anderson earned a B.A. in physics) or sympathetic to them (Norton, a librarian, conducted her own research). Like the pioneering author/editor Hugo Gernsback, they believed that quality futuristic fiction could seduce readers into a love affair with science and show them the possibilities it held for a better tomorrow. Thus Anderson’s teenage hero Carl, in Vault of the Ages (1952), ends a future dark ages by unearthing and reintroducing advanced technology to the world. Progress and science walk hand in hand, these authors implied, and no one is in a better position to appreciate this fact than young people.

Today, science is often portrayed as the problem rather than the solution. Many current authors, children’s literature scholar Noga Applebaum notes in her outstanding 2009 study “Representations of Technology in Science Fiction for Young People,” are neither trained in nor sympathetic to the sciences. In fact, a majority of the many novels she analyzes vilify the over-polluted, over-complicated, and over-indulgent present while glorifying the past and the pastoral, a kind of mythical pre-industrial, pre-commercial, subsistence existence — in short, the kind of dark ages that Poul Anderson’s teen hero Carl brought to a welcome end in Vault of the Ages.

As active participants in the contemporary world, young readers are dished a heaping plate of guilt and self-loathing. Why is there global warming, or worldwide poverty, or runaway disease? The answer is as close as the millennials’ smartphones and tablets and gaming systems: Youth and innovation and modernity are to blame.

David Patneaude’s Epitaph Road (2010) throws in everything but the kitchen sink when describing the sheer trial of being alive in the oh-so-terrible year of 2010: it was a “world of poverty and hunger and crime and disease and greed and dishonesty and prejudice and war and genocide and religious bigotry and runaway population growth and abuse of the environment and immigration strife and you-get-the-leftovers educational policies and a hundred other horrors.”

Saci Lloyd goes a step further in her award-winning The Carbon Diaries: 2015 (2008). Teen heroine Laura apparently is part of the problem by pursuing a music career with her band, gaining a following online, and benefiting from how easy it is to record and distribute music digitally. She only becomes part of the solution after abandoning her music to become a commune-dwelling, pig-raising, socially conscious activist, though not before performing the novel’s anthem, “Death to Capitalism….”

Are these works the literary equivalent of yelling at those darned kids to get off your lawn, oldsters scolding the youngsters for their perceived failings? Applebaum thinks so, arguing that the trend toward technophobia exposes “adults’ reluctance to embrace the changing face of childhood and the shift in the power dynamic which accompanies this change.” Viewed through its attitudes about technology, she writes, “literature aimed at young people is exposed afresh as problematic, a socialization agent serving adults’ agenda.” Certain adults’ agenda, to be sure.

The biggest exceptions to these trends can be found in the Hunger Games trilogy (2008-2010), which celebrates self-reliance, individual choice, and markets (like The Hob), while warning readers against those who gravitate toward power. (Suzanne Collins also masterfully answers the classic question “Who was right, Aldous Huxley or George Orwell?” by agreeing with both.) But although the Hunger Games novels and their film adaptations are an undeniable sensation, they also represent something of an outlier in terms of theme.

Another exception — or partial exception — is the work of Cory Doctorow. Doctorow’s novels depict technology as the natural ally of youth. The millennials are at a tremendous advantage in the 21st-century landscape, he proposes, because unlike their elders they grew up with a high degree of comfort with both technology and its continual state of change. But even Doctorow’s novels tell a sobering story about the present.

Whether it’s the hackers of Little Brother (2008) and Homeland (2013) or the fan filmmakers of Pirate Cinema (2012), Doctorow’s teen protagonists are routinely forced to defend themselves from older interests who are supported by the government simply because they are more powerful and entrenched in the system. The mighty surveillance state will not disappear, readers realize time and again; the most that kids can hope for is to watch the watchers and let them know that the scrutiny goes both ways. Readers cheer on the gutsy young heroes fighting for their liberty, but we also mourn all the time and effort and creative energy they lose in the struggle simply to stay free and see another day. Their best-case scenario is to fight the powers-that-be to a stalemate.

Amy’s piece continues with more examples.

More on the politics of YA dystopias:

Real-Life “Hunger Games”: Soft Oppression Destroys the Poor
“Pills and Starships” – Pseudo Science Fiction
“Mockingjay” Propaganda Posters

Modern Feminism, Social Justice Warriors, and the American Ideal of Freedom

More on the legacy publishing-indie battle:

Hugh Howey and J. A. Konrath on the Indie Revolution, and Amazon’s Netflix-for-Books

More on Writers, Novels, Amazon-Hachette