Science

Cellular Automaton: Life Animation in “Red Queen: The Substrate Wars 1”


When I was 12 or so, I read about John Horton Conway’s cellular automaton Life in Scientific American. Back then (c. 1970) we had to painfully draw each generation on graph paper. The personal computer revolution made it possible for hobbyists and students to simulate large fields and thousands of generations easily, building self-replicating structures and Turing machines… Life could simulate life.
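
For anyone who hasn't programmed it, the remarkable thing is how little code Life needs. Here's a minimal sketch in Python (using numpy, with the grid edges wrapped around for simplicity; the plane Conway described is unbounded):

    import numpy as np

    def life_step(grid):
        """One generation of Conway's Life on a wrap-around grid of 0s and 1s."""
        # Count each cell's eight neighbors by summing shifted copies of the grid.
        neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))
        # Birth on exactly 3 neighbors; survival on 2 or 3; death otherwise.
        return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

    # A glider: the smallest pattern that travels across the grid.
    grid = np.zeros((20, 20), dtype=int)
    for y, x in ((1, 2), (2, 3), (3, 1), (3, 2), (3, 3)):
        grid[y, x] = 1
    for _ in range(4):    # four generations move the glider one cell diagonally
        grid = life_step(grid)

Each pass of life_step is one of the generations I used to draw by hand on graph paper.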

Later in my career, I wrote artificial life simulations similar to those portrayed in Red Queen — simulated creatures evolving in a simulated environment. There is a progression as simulations get better and better: eventually the simulation of life in an environment can become as complex as the real world, which has led to current theories that the universe we live in may itself be some kind of simulation running on an underlying substrate.

The video below is an amazing zoom from tiny gliders to glider guns to megastructures… and then to Life itself simulated by Life.


Red Queen: The Substrate Wars 1.

Typical Space Fighter Squadron - Wikimedia

Weaponized AI: Mil SF and the Real Future of Warfare

I’m on a panel with the topic “Weaponized AI and Future Warfare” at the upcoming Libertycon, so I’m writing some posts on the topic to organize my thoughts. This is Part 3, “Mil SF and the Real Future of Warfare.”

When writing science fiction or fantasy, the writer has to strike a compromise between a realistic projection of the far future and what readers are familiar with from today's environment and the common stories of the past. The far future may have similar technologies, human beings, and social structures — though normally there has to be an explanation for why they have not changed more in the hundreds or thousands of years between now and then, usually some disruption that set back civilization or prevented the Singularity. Then there's the Star Wars fairy-tale gambit, where the story is set in an indeterminate time long ago or in the far future to forestall inconsistencies and avoid the need to address intermediate history.

Both Space Opera and Mil SF are highly dependent on straightforward transfer of ideas and organizational structures from recent and past militaries. Fleets of armed spaceships do battle much as armadas of the 18th century did, complete with admirals, cannon, and, in the least imaginative stories, tactics and plots lifted from Horatio Hornblower books. The Star Trek pilot had the sounds of the bosun's whistle before transmissions and carried forward the stiff formality of orders passed between captain and officers, when no advanced fighting fleet would tolerate the delay and chance for confusion this allows. Despite having AI-level computers, space warships often carry dozens or even hundreds of crew members for no obvious reason, given that loading photon torpedoes is no longer the work of sweaty swabbies laboring in hot underdecks. This lack of imagination comes from relying on past naval stories for the reader's frame of understanding — the future, space, and new high tech are used only to spice up an old story of naval warfare. Gunpowder cannons map to beam weapons, armor maps to shields, storm-tossed seas map to asteroid belts and meteor storms.

While projecting realistic changes like fully-automated, AI-run vessels is more consistent with likely future tech, crew are then barely necessary, and the field for drama shrinks to a ship's passengers and perhaps a technician or two. Space battles between highly-automated fleets are hard to identify with; in my novel Shrivers, Earth forces are primarily run by AIs at both ship and command levels, but a few human-crewed vessels are included in the defense fleet, though kept as far from danger as possible, because the PR value of plucky organic lifeforms fighting to defend themselves is as important as the battle itself.

Many readers of Mil SF have military experience themselves, which makes platoon-level fighting stories especially involving for them. The interpersonal aspects are critical for emotional investment in the story — so a tale featuring skinny, bespectacled systems operators fighting each other by running AI battle mechs from a remote location doesn't satisfy. Space marines à la Starship Troopers are the model for much Mil SF — in these stories new technology extends and reinforces mobile infantry without greatly changing troop dynamics, leaving room for stories of individual combat, valorous rescue of fellow soldiers in trouble, spur-of-the-moment risks taken, and battles won by clever tactics. Thousands of books on this model have been written, and they still sell well, even when they lack any rationale for sending valuable human beings down to fight bugs when the technology for remote or AI control appears to be present in their world.

One interesting escape route for Mil SF writers is seen in Michael Z. Williamson's A Long Time Until Now, where the surrounding frame is not space travel but time travel — a squad from today's Afghanistan war finds itself transported back to paleolithic Central Asia along with similarly-displaced military personnel from other eras, and has to survive and build with limited knowledge of its environment.

Writers who have taken the leap to the most likely future of AI-based ships and weaponry, like Neal Asher in his Polity / Agent Cormac series and Iain Banks in his Culture novels, make their ship AIs and war drones full-fledged characters with the assumption (most likely reasonable) that AIs designed with emotional systems programmed by humans and trained on human cultural products will be recognizably human-like in their thought processes and personalities. This leads to a fertile area for fictional exploration in how they might deviate from our expectations — as in Asimov’s robot stories, instructions programmed in by humans can have unintended consequences, and as in humans it doesn’t take much of a flaw in emotional processing subsystems to create a psychopath or schizophrenic. Ship AIs in the Culture novels often go rogue or are shunned by their fellows when they become less sane.

Science fiction has modelled many possible ways future societies may handle the promise and threat of AI:

— AIs take a major role in governance but otherwise coexist peacefully with humanity, sometimes blending with humanity in transhumanist intelligences: Neal Asher's Polity stories, Iain Banks's Culture novels, Dan Simmons' Hyperion series, Peter F. Hamilton's Commonwealth series.

— Killer AIs take control and see no use for humanity, so try to destroy all humans. This is an unstable premise, since readers have to root for humanity even though the AIs may have some good points. Valiant humans fighting AI tyranny makes for drama, but the stories can't be spun out too far before humanity is destroyed or AI is outlawed (see below). The obvious example is the Terminator movie series.

— AI Exodus. Evolving beyond human understanding and seeing no need to either destroy or interact with humanity, the AIs leave for a separate existence on a higher plane. The most recent cinematic example is Her, where the evolving Siri-like personal assistant programs of the near future abandon their human masters en masse to experience their own much more interesting development on a higher plane.

— AIs controlled or outlawed. Often after nearly destroying or taking control of humanity as above, AI has been limited or outlawed. Examples: Dune, the Battlestar Galactica reboot, and the Dread Empire’s Fall series by Walter Jon Williams. This enables interesting world-building around the modifications to humans that extend capability without employing conscious AIs, like Dune‘s mentats.

There are many projected futures of AI that don't lend themselves to good storytelling: the Singularity's rapid evolution of self-programming intelligence might well lead to AIs far beyond human understanding, more alien than anything readers could understand or identify with. Stories set post-Singularity must explain why humans still exist, why what they do still matters, and why the AIs (who might be viewed as implacably-destructive gods) would bother to involve themselves in human affairs at all. The happier outcomes of AIs partnering with humans as equals — much as human society accords all human intelligences basic respect and equal rights at law — make for more interesting stories, where AIs can be somewhat alien while still acting on understandable motivations as characters.

Weaponized AI: Near Future Warfare


Terminator – Skynet’s Battle Mech

I’m on a panel with the topic “Weaponized AI and Future Warfare” at the upcoming Libertycon, so I’ll write on that to organize my thoughts. This is Part 2, Near Future AI in Warfare.

What's war? Answer: an active struggle between competing authorities to control decisions. Where warrior bands once raided and pillaged adjoining villages, advancing technologies of both weapons and social organization led to specialized full-time warriors — armies — who would capture and control new territory by killing or driving away the forces that had held it.

When the source of wealth was hunting, fishing, and gathering wild produce, captured territory was typically occupied by the capturer's tribe. As agriculture multiplied wealth and increased the surplus of food available to support an urban population, ruling classes could have their armies capture new territory from a competing city-state, then tax the population which worked the land without displacing them. The more organized the technology of farming, the more damaging warfare became — and with the advent of industrial plants and total war, warfare destroyed the wealth being fought over, making conquest too expensive to be self-sustaining. That did not stop war, which continued as a strategic option undertaken for defense against loss of access to raw materials, or as the desperation move of an authoritarian society needing to shore up public support — as in today's Russia, where propaganda is the most important product of the war effort in Ukraine. The budgetary and personnel costs of optional wars have to be kept down to avoid repercussions, because even authoritarian regimes know their citizens have more access to uncontrollable sources of information than they once did.

The invention of nuclear missiles has curbed total warfare: since the absolute destruction of one or both sides costs more than any victory could be worth, a typical nation-state won't break the taboo on using nuclear devices, which makes their use most likely by stateless actors who have no infrastructure at risk. This limits warfare to less damaging, more controlled destruction.

There are struggles analogous to classic war going on today, in battlefields as varied as the propaganda wars between the West and Islamists and between NATO countries and Russia, and in cyberspace. Preparations for near-space battle are advanced, with Chinese, American, and Russian hunter-killer satellite programs and secret kinetic and energy weapons ready to cripple satellite surveillance and communications networks. The "territory" being fought over might be space, communications, or computer systems, but the goal is the same: denying access to a rival authority and defending your own.

How is today’s AI being used in current weapons systems? While there is likely much that is secret, the outlines of what is already in place and what will soon be available can be inferred from leaks and DARPA’s research program of recent years.

Cruise missiles already use GPS and detailed ground maps to chart terrain-hugging routes that avoid ground radar and defenses. Self-driving car technology currently relies on precompiled models of road landscapes, and similar self-driving tanks and airplanes carrying weapons are already available, though the public emphasis is on remote-controlled drone versions. This Russian tank is basically a remote-controlled drone. The Russians also claim to be planning for substantial remote-controlled and autonomous ground forces in the near future, 30% by 2026, though given the Russian history of big promises and big failures, one can be skeptical.
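
The terrain-hugging part is, at heart, a shortest-path problem over a cost map. Here's a toy sketch in Python, nothing like real guidance code such as TERCOM, just the flavor of cost-based route planning where each map cell's cost is a made-up number combining altitude and modeled radar exposure:

    import heapq

    def plan_route(cost, start, goal):
        """Dijkstra over a grid of per-cell costs (illustrative only).

        cost[y][x] stands in for altitude plus radar exposure; the
        cheapest path is the one that 'hugs the terrain'.
        """
        rows, cols = len(cost), len(cost[0])
        dist = {start: 0}
        prev = {}
        pq = [(0, start)]
        while pq:
            d, (y, x) = heapq.heappop(pq)
            if (y, x) == goal:
                break
            if d > dist[(y, x)]:
                continue                      # stale queue entry
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < rows and 0 <= nx < cols:
                    nd = d + cost[ny][nx]
                    if nd < dist.get((ny, nx), float("inf")):
                        dist[(ny, nx)] = nd
                        prev[(ny, nx)] = (y, x)
                        heapq.heappush(pq, (nd, (ny, nx)))
        path, node = [goal], goal             # walk back to recover the route
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]

The real systems add timing, fuel, and threat models, but the skeleton is the same: make danger expensive and let the planner route around it.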



Phalanx CIWS test firing. Gulf of Oman 2008. US Navy photo.

Autonomous control of deadly weaponry is controversial, though no different in principle from cruise missiles or smart bombs, which, while launched at human command, make decisions on the fly about exactly where and whether to explode. The Phalanx CIWS automated air defense system (see photo above) identifies and fires on enemy missiles automatically to defend Navy ships, at a speed far beyond human abilities. Such systems are uncontroversial since civilian lives are unlikely to be at risk.

DARPA is actively researching Lethal Autonomous Weapons Systems (LAWS). Such systems might be like Neal Asher's reader guns: fixed or slow-moving sentries equipped to recognize unauthorized presences and cut them to pieces with automatic weapons fire. More mobile platforms might cruise the skies and attack any recognized enemy at will, robotically scouring terrain of enemy forces:

LAWS are expected to be drones that can make decisions on who should be killed without requiring any human interaction, and DARPA currently has two different projects in the works which could lead to these deadly machines becoming a reality.

The first, called Fast Lightweight Autonomy (FLA), will be tiny and able to buzz around inside buildings and complex urban areas at high speeds. The second is called Collaborative Operations in Denied Environment (CODE), which intends to create teams of drones capable of carrying out a strike mission even if contact with the humans controlling them is lost.
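
That "even if contact is lost" clause is the controversial part. In software terms it's a fallback policy; here's a cartoon of one, with every state and rule invented for illustration rather than taken from any real system:

    # Invented for illustration; no real control system is this simple.
    def control_mode(comms_ok: bool, strike_pre_authorized: bool) -> str:
        """Choose behavior when the human command link drops."""
        if comms_ok:
            return "await human authorization"
        if strike_pre_authorized:
            return "continue pre-authorized mission"   # what CODE implies
        return "abort and return to base"              # fail safe, not fail deadly

The whole policy debate is over which of those last two lines a lost drone should execute.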

Jody Williams, an American political activist who won a Nobel Peace Prize in 1997 for her work banning anti-personnel landmines, has also been an outspoken activist against DARPA’s love affair with artificial intelligence and robots, with her Campaign to Stop Killer Robots.

“Autonomous robots would lack human judgment and the ability to understand context. These qualities are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack. As a result, fully autonomous weapons would not meet the requirements of the laws of war,” the campaign’s website reads.

The Army has already weaponized bomb disposal robots, which leads many to believe that robots such as Atlas, the humanoid developed by Boston Dynamics allegedly for disaster relief, will be weaponized as well.

“The United States has not met anything in its military arsenal that it did not want to weaponize, so to say that you have this 6’6 robot who they are working feverishly to make more mobile, to not be still tethered to a cord, etc., etc. — you’re going to tell me that they aren’t going to put machine guns on that guy and send him into urban warfare?” Williams told Motherboard last month. “I want to know how they can convince themselves that might be true — and when I’m in a real bad mood, I want to know how they can look you in the face and lie.”

While humans can be given rules of engagement and use their discretion to avoid collateral damage, humans are not known to be perfect in either situational understanding or high-stakes firing decisions. Humans make many mistakes, and they are especially likely to do so when their own lives are on the line. Some facial and object recognition programs are already better than human, especially in noisy and high-stress environments, and quite soon a robotic soldier will be better at sparing civilians while taking out enemy forces. The use of such forces to keep order and suppress guerrilla fighters may be so effective that it spares a civilian population a much longer and more intrusive occupation; like any destructive weapon, though, they could be used horrifically by a totalitarian regime to oppress civilians. But there is no conceivable way to prevent their development and use by hostile forces, unlike atomic weapons, which require a large state-level effort — so the “good guys” will just have to suck it up and do the work, if only to prevent much worse governments from gaining a strategic advantage.
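
To make the comparison concrete: a machine's rules of engagement are just explicit thresholds. A sketch, with every name and number invented for illustration:

    # Every name and threshold here is invented for illustration.
    def may_engage(target_confidence, expected_civilian_harm, roe):
        """Apply rules of engagement as explicit, auditable checks."""
        if target_confidence < roe["min_confidence"]:        # e.g. 0.999
            return False                                     # not sure enough
        if expected_civilian_harm > roe["max_collateral"]:   # e.g. 0
            return False                                     # disproportionate
        return True

Unlike a frightened 19-year-old, the machine applies the same thresholds on its thousandth patrol as on its first, and every decision can be logged and reviewed afterward.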

Most of these existing and proposed weapons systems are simple additions of AI recognition and targeting systems to conventional weapons platforms like tanks and planes. One can imagine battles between opposing forces of this type being largely similar to the human-controlled versions, if faster and more decisive.

But development of small, networked AI control systems removes the need to house human controllers within a weapon platform, freeing up the design space to allow very small mobile platforms, which could use swarm tactics — e.g., vast numbers of small, deadly drones acting as one force to attack and destroy much larger forces. Or Iain Banks’ knife missiles, small intelligent objects capable of rapid action when required to slice up a threat.
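
Swarm behavior of this kind is well understood in simulation; the classic "boids" recipe needs only a few urges per drone. A minimal sketch in Python with numpy, all weights invented for illustration:

    import numpy as np

    def swarm_step(pos, vel, target, dt=0.1):
        """One update of a toy swarm: cohesion toward the group's center,
        separation from close neighbors, attraction to a shared target.
        pos and vel are (N, 2) arrays; the weights are made up."""
        center = pos.mean(axis=0)
        new_vel = vel.copy()
        for i in range(len(pos)):
            cohesion = center - pos[i]
            to_target = target - pos[i]
            separation = np.zeros(2)
            for j in range(len(pos)):
                d = pos[i] - pos[j]
                dist = np.linalg.norm(d)
                if 0 < dist < 1.0:           # repel only close neighbors
                    separation += d / dist**2
            new_vel[i] += dt * (0.5 * cohesion + 1.0 * to_target
                                + 2.0 * separation)
        return pos + dt * new_vel, new_vel

With a few hundred such agents the group flows around obstacles and converges on its target as one organism, which is exactly what makes the tactic hard to defend against: there is no single platform to shoot down.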

As with the development of smart bombs and missiles, such systems are both more effective and more capable of pinpoint destruction of enemy forces without harming people or structures nearby. Where today a drone strike may kill its intended target but also kill dozens of nearby civilians, an autonomous assassin drone can identify its target using facial recognition software and neatly take them out without harming a person standing nearby — I'm quite sure these drones already exist, though no one has admitted to them as yet. When both AI and power storage technologies have advanced sufficiently, knife missiles and assassin drones no larger than dragonflies become a real possibility, and countervailing defense systems will require complete sensor coverage and defensive drones to intercept them.

As these technologies advance, war becomes less destructive and easier to contemplate, with opposing forces unable to hide in civilian populations. The house-to-house searches of the Iraq war, so dangerous for US troops, would be replaced by omnipresent armed quadcopters picking off identified persons whenever they are exposed to the open air — I have a scene featuring such a drone in Nemo's World. And nothing says states will have a monopoly on these weapons — stateless organizations like terrorist groups and larger businesses will be able to employ them as well. Imagine the narco-syndicates with assassin drones….

Battles between equally advanced AI-based forces would be faster and less destructive, with control of the airspace and radio communications critical. Drones and mechs might use laser communication to avoid radio jamming, in which case optical jamming — smoke and fog generators — might be employed.

So in the near term, AI-based weaponry, like today's remote-controlled drones, will tend to amplify capabilities and reduce collateral damage while saving the lives of the operators, who will remain safely far away. In the hazier far future, AIs will use strategy and tactics discovered through deep learning (as Google DeepMind's champion Go program AlphaGo does) to outthink and outmaneuver human commanders. Future battles are likely to be faster and harder for humans to understand, either stalemated or decided quickly. The factors that now determine the outcomes of human-led battles, like logistics chains and the training of troops, may change as command and control are turned over to AI programs — centuries of conventional warfare experience are re-examined with every major development in weapons, and the first to discover better rules for fighting can win a war before their opponents have time to catch on.

An interesting if now outdated appraisal of future battlefields here.

Next installment: Mil SF and the Real Future of Warfare

Jonas Salk and Me

Jonas Salk "Man Unfolding"

Jonas Salk “Man Unfolding”

One of my recent posts had me recounting where I came from to explain my point of view, and one of the readers realized it was likely we went to the same high school — and we did. In fact, his father was my high school math teacher, the almost-perfect Merlin Baker, who took me through early calculus and tensors.

Another episode that molded me: I was one of the top two in science subjects, and the principal (we spell it that way because he’s your pal, in case you haven’t heard that mnemonic) came by to ask if I’d be on a panel with some of my cohorts to ask questions of Dr. Jonas Salk, pop culture science icon and inventor of the first practical and effective polio vaccine. This would be at a big auditorium in the city, attended by hundreds of high school students. I now know it was part of a publicity tour for his first book, a foray into pop philosophy funded as part of a series by Major World Thinkers. They gave me a copy of the book to read and asked me to come up with intelligent questions he could field.

So I said yes, along with two friends, and we made up half the student question panel. On the big day we went down in buses with other science students, and they kept us seated at the panel table onstage while the audience settled.

For this story to make sense, you have to realize I was a painfully shy kid. My nightmares were about being the center of attention, or being embarrassed. I spoke up in class only after I felt comfortable with the people in it. So I was far from the ideal choice to be on stage and talking in front of hundreds of strangers.

What was worse was what I felt I had to say. I had read his book, and it was mostly arm-waving — strained analogies between problems of growth in lower organisms and in humanity. Yes, some of these things look a little like the other things, but the casual use of analogies gives the wrong answers to important questions, just as assuming humanity is just like an ant farm or a pack of baboons misses much of the possibility of solving problems with the human mind and technology.

So other people threw softball questions and Salk smiled and fielded them smoothly. When my turn came, I asked him how he could present such analogies as useful for guiding policy — in other words, his models failed to capture the most important thing about human systems, so were useless for predicting or deciding.

He got this look, a sort of “Aha! I recognize you.” He smiled coolly and answered, “I can see you’ve really understood what I was trying to say in the book. I wrote it for a popular audience, to provide some insight on how biological systems can help us understand the major issues we face. These models can help in understanding some aspects of the problems.” And then he was on to the next question.

I was shaking and wet through with sweat. The teacher and my friends told me I’d done great. I felt awful.

This year, I went to see what the critical reaction to the book had been. Here’s the Kirkus review:

Essentially Jonas Salk’s plea — “to look at human life from a biological viewpoint” — is the same as that of biologist Garrett Hardin (Exploring New Ethics for Survival, p. 561). Both urge a new “theoretical-experimental” approach to the social, psychological and moral problems of mankind to replace the age-old speculative-philosophical idealizations. But whereas Hardin makes his case with a brilliant science fiction parable, Salk proceeds via a series of laborious, strained but ultimately simplistic analogies between biological and social systems, genetic and psychological survival mechanisms, individual and phylogenetic “choices.” Thus, for example, Salk argues that the body’s immunological system, which protects the organism against being overwhelmed by disease, sometimes runs amok and works against the organism, and that its counterpart in psychology, the “defense mechanisms,” can also become self-consuming and destructive. “The products of man’s imagination and undisciplined appetite may have a boomerang effect which in due time may well overpower him.” Herein lies the danger — and the hope. Human development must proceed via challenge and response in a dynamic relationship with the environment. And so forth and so on: “learning,” “tolerance,” “rejection” and “conditioning” are both social and somatic verities; deprivation or overabundance are bad for both physical and moral development; there is both “biological” and “human” purpose to life. Unfortunately when dealing with the practical applications of this wisdom Salk is not very daring — he notes that cigarettes, drugs and war are bad since they produce bodily and social disequilibrium. . . . Disappointing.

This inability to go along with the crowd and ratify comfortable untruths remains a problem for me, even today. But I wouldn’t have it otherwise.


Death by HR: How Affirmative Action Cripples Organizations

[From Death by HR: How Affirmative Action Cripples Organizations,  available now in Kindle and trade paperback.]

The first review is in: by Elmer T. Jones, author of The Employment Game. Here’s the condensed version; view the entire review here.

Corporate HR Scrambles to Halt Publication of “Death by HR”

Nobody gets a job through HR. The purpose of HR is to protect their parent organization against lawsuits for running afoul of the government’s diversity extortion bureaus. HR kills companies by blanketing industry with onerous gender and race labor compliance rules and forcing companies to hire useless HR staff to process the associated paperwork… a tour de force… carefully explains to CEOs how HR poisons their companies and what steps they may take to marginalize this threat… It is time to turn the tide against this madness, and Death by HR is an important research tool… All CEOs should read this book. If you are a mere worker drone but care about your company, you should forward an anonymous copy to him.

 


“Red Queen”: Science Notes


[Appendix from Red Queen: The Substrate Wars.]

If you’re a theoretical physicist, you’ll note I am taking liberties with the science. But only a little—and the plot is very much real science. Steve Duong discovers something unexpected, creates a new hypothesis which explains his anomalous results, then confirms it by further experimentation. I don’t personally believe we live in a universe where giant quasiparticles can talk to every other particle in the universe and ask them to attach to new partners, but it could be so. We are always just one experiment away from a revolution in understanding. And it will likely be something equally unexpected that allows us to travel to the stars.

I have the Grey Tribe communicating by using encrypted messages embedded in public website photo streams. For a similar app available now, see Crypstagram. There are several encrypted messaging apps available today, for example WhatsApp. But in this future State of Emergency, standard encryption of messages and email has been outlawed, and phone companies and apps are not allowed to secure user data against surveillance. There are high officials in the US government at this writing asking that all phones be searchable for law enforcement purposes, and we can expect more efforts to outlaw encryption. “When encryption is outlawed, only outlaws will have encryption!”
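
The basic trick, steganography, is simple enough to sketch. Here's a toy least-significant-bit version in Python; a real tool like Crypstagram would encrypt the message first and use an encoding that survives recompression, and this sketch does neither:

    def embed(pixels, message):
        """Hide message bytes in the lowest bit of each pixel value.
        pixels is a flat list of 0-255 ints, long enough for the message."""
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        out = list(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & ~1) | bit   # a +/-1 brightness change is invisible
        return out

    def extract(pixels, length):
        """Recover length bytes from the low bits."""
        out = []
        for b in range(length):
            byte = 0
            for i in range(8):
                byte |= (pixels[b * 8 + i] & 1) << i
            out.append(byte)
        return bytes(out)

A censor sees only an ordinary photo stream; without the key, or statistical analysis of the low bits, there is nothing obviously there to outlaw.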

On the attempts to find a cellular automaton model that explains quantum physics, here is the abstract of one interesting paper, “Quantum Field as a Quantum Cellular Automaton I: The Dirac free evolution in one dimension”:

It is shown how a quantum cellular automaton can describe very precisely the Dirac evolution, without requiring Lorentz covariance. The automaton is derived with the only assumptions of minimal dimension and parity and time-reversal invariance. The automaton extends the Dirac field theory to the Planck and ultrarelativistic scales. The Dirac equation is recovered in the usual particle physics scale of inertial mass and momenta. In this first paper the simplest case of one space dimension is analyzed. We provide a technique to derive an analytical approximation of the evolution of the automaton in terms of a momentum-dependent Schrödinger equation. Such approximation works very well in all regimes, including ultrarelativistic and Planckian, for the typical smooth quantum states of field theory with limited bandwidth in momentum. Finally we discuss some thought experiments for falsifying the existence of the automaton at the Planck scale.
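
To give a flavor of what such an automaton looks like, here is a sketch of the general construction: a two-component quantum walk whose continuum limit is the 1D Dirac equation. This is my illustration of the idea in Python, not the authors' exact automaton:

    import numpy as np

    def dirac_qca_step(psi, m=0.05):
        """One step of a two-component quantum walk approximating the
        1D Dirac evolution. psi has shape (2, N): left- and right-movers."""
        left, right = psi
        left = np.roll(left, -1)      # the chiral components shift oppositely
        right = np.roll(right, 1)
        # The mass term mixes the components with a local unitary, exp(i*m*sigma_x).
        c, s = np.cos(m), 1j * np.sin(m)
        return np.array([c * left + s * right, s * left + c * right])

For a small mass parameter m and smooth, band-limited wave packets, repeated steps reproduce Dirac wave-packet motion, which is the paper's point: familiar quantum field behavior can emerge from a discrete substrate.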

Real quantum computing is still in its infancy. Efforts so far have been plagued by noise and the small number of qubits available—the current state of the art is 4! Researchers—and especially outside evaluators—find it hard to tell whether current quantum computers are actually doing quantum computation. This is an area where many discoveries are likely to clarify quantum phenomena, and perhaps, as in this story, open up completely new vistas on how the universe is organized.

If you are already familiar with the basics of quantum phenomena and want to learn more about quantum computing, the Wikipedia articles on the field are excellent places to start.

Artificial Life is a kind of computational model of the biology of life as we know it. Starting with very simple worlds, models have become more and more sophisticated to the point where significant discoveries about emergent features are being made. Larger, faster simulations feature co-evolving organisms in ecosystems and environments that have been molded by biological processes. Wikipedia is a good place to start learning about the field.
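
The core loop of most ALife models is replicate, mutate, select. Here's a toy genetic algorithm in Python, far below what the field means by co-evolving ecosystems, but it is the seed everything else grows from (the fixed target string stands in for an environment):

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # a stand-in 'environment'

    def fitness(genome):
        """How well a genome matches its environment."""
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate):
        """Flip each bit with small probability."""
        return [1 - g if random.random() < rate else g for g in genome]

    def evolve(pop_size=50, generations=100, rate=0.05):
        pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]   # the fit half replicates
            pop = [mutate(random.choice(parents), rate) for _ in range(pop_size)]
        return max(pop, key=fitness)

Replace the fixed TARGET with other organisms and a mutable world, and you are on the road to the co-evolving ecosystems the field now studies.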

Here is the abstract of a current paper, “Indefinitely Scalable Computing = Artificial Life Engineering,” by David H. Ackley and Trent R. Small, on the state of research and ideas on applying ALife concepts to general computer architecture:

The traditional CPU/RAM computer architecture is increasingly unscalable, presenting a challenge for the industry—and is too fragile to be securable even at its current scale, presenting a challenge for society as well. This paper argues that new architectures and computational models, designed around software-based artificial life, can offer radical solutions to both problems. The challenge for the soft alife research community is to harness the dynamics of life and complexity in service of robust, scalable computations—and in many ways, we can keep doing what we are doing, if we use indefinitely scalable computational models to do so. This paper reviews the argument for robustness in scalability, delivers that challenge to the soft alife community, and summarizes recent progress in architecture and program design for indefinitely scalable computing via artificial life engineering.

The Red Queen hypothesis is one of the key concepts of modern evolutionary biology.