I’m on a panel with the topic “Weaponized AI and Future Warfare” at the upcoming Libertycon, so I’ll write on that to organize my thoughts.
First, who am I to have an opinion?
Around 1982, I was attempting to break in as a writer, tiring of whiteout and lusting after the word processors that were becoming available. I built a CP/M system and video board from kits, then housed it in a filing cabinet I had found on the street and scrubbed out with vinegar to mostly remove the smell of cat piss. With that and a 300-baud modem, I started hanging out on the MIT machines — OZ, AI, MC, Deep Thought. The AI Lab at that time was happy to host anyone with a brain and a modem who could contribute in some way, and the machines were open to guests. I found the modem dialins and started poking around and creating accounts. The operating system allowed “wheels” (administrator-privileged people) to monitor and break in on user sessions, so one late night I found myself being interrogated by the now-famous Richard Stallman, aka “rms” in Unix style, who wanted to know what I was doing and why.

Having passed the riddle of the Pigpen-Sphinx, I applied to return to MIT to finish my final year, which I spent in EECS studying with Hal Abelson, Gerry Sussman, Barbara Liskov, Steve Ward, Randy Davis, and others. My contact with the AI Lab itself was mostly visiting and logging into their machines, but I soaked up much of what was going on at the time.

Thirty years later, the parts of then-current AI that are in common use are just thought of as software (one example being the postal service’s ability to recognize addresses), while the still-ongoing AI work is mostly ideas current in 1984, extended and magnified by greater hardware capabilities. (As a side note, my profs of 1984 are mostly still in place at MIT, over thirty years later.)
My first jobs after MIT were as a junior team member during the AI boom of 1985-87, working on projects at BBN and Symbolics funded by DARPA under the Strategic Computing Initiative, intended as a response to the competitive threat of Japan’s Fifth Generation project. Most of the AI community in the US thought the Japanese were naive in believing they could quickly leapfrog their way to machine intelligence — the Japan of the 1980s was prone to accept the work of flim-flam artists posing as researchers, as China is today. But no one in the US wanted to turn down funding for nifty projects….
When immediate results weren’t forthcoming and DARPA leadership changed after 1987, the plug was rather abruptly pulled on government and private funding for AI research, and the $100K Symbolics Lisp Machine workstations we all used were deprecated. They were replaced by much cheaper Sun workstations running a new Lisp compiler that appeared to be faster for the money, even though the Suns lacked the fabulous programming environment of the specialized Lisp Machines. Research teams disbanded, companies like Symbolics collapsed into bankruptcy, and the ideas we were pursuing again became academic pursuits on shoestring budgets. Programming environments as good as the Lisp Machine’s didn’t reappear for another decade.
The peak of the bubble was probably the IJCAI (International Joint Conference on AI) of 1985 at UCLA, where many of America’s largest corporations sent teams of their brightest to learn about the business-changing potentials of AI. This was not something computer scientist types were used to, and the dark-suited gaggles from IBM and GM mixed with rumpled academics in the audiences for the actual talks.
I was at BBN at the time, and we presented a paper on Multilisp running on our Butterfly multiprocessor, which had up to 256 Motorola 68000 processor boards. Each board had local memory, and a packet-switching backplane network routed memory fetches to either local or distant physical memory (photos below of the packet-switching interconnect chip from DARPA’s MOSIS fab).
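As a very loose illustration of that memory scheme, here is a conceptual sketch in Python. The board count matches the Butterfly, but the per-board size, the interleaving rule, and the function names are made up for the example; the real machine did this in the switch hardware, not in software.

```python
# Conceptual sketch only: one shared address space interleaved across boards,
# with each fetch satisfied from local memory or by a request over the switch.
NODES = 256                # up to 256 processor boards
WORDS_PER_NODE = 4096      # made-up per-board memory size for the example

# One list of words per board stands in for each board's local memory.
memories = [[0] * WORDS_PER_NODE for _ in range(NODES)]

def locate(global_addr):
    """Map a global address to (board, local offset); simple interleaving here."""
    return global_addr % NODES, global_addr // NODES

def fetch(my_node, global_addr):
    node, offset = locate(global_addr)
    if node == my_node:
        return memories[node][offset]     # fast local access
    return remote_read(node, offset)      # request/reply across the backplane

def remote_read(node, offset):
    # Stand-in for a packet sent through the switching network to another board.
    return memories[node][offset]

print(fetch(my_node=0, global_addr=12345))
```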
Multilisp was a package borrowed from an MIT professor and ported by me and colleagues to the Butterfly environment. Its key concept was called the future, a container carrying the result of a computation that one or more processors would work on in parallel; the future could be passed around as an object, and a consumer suspended only if it needed the result before it was ready. The effect is much like lazy evaluation — we can talk about and pass around the result of a computation without actually looking at it until the moment its value is required.
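For a feel of how that works in practice, here is a rough sketch of the same pattern in modern Python, using its standard concurrent.futures module. This is an analogy, not Multilisp itself, but the shape is the same: the work starts in parallel, the future is handed around as an ordinary object, and nothing blocks until someone finally asks for the value.

```python
# Rough analogy to a Multilisp future, using Python's standard library.
from concurrent.futures import ThreadPoolExecutor

def expensive(n):
    # Stand-in for a long-running computation.
    return sum(i * i for i in range(n))

with ThreadPoolExecutor() as pool:
    # Start the computation; 'fut' is a placeholder we can pass around
    # while a worker thread does the real work in parallel.
    fut = pool.submit(expensive, 5_000_000)

    # ...other work can proceed here without waiting...

    # The caller blocks only at the moment the value is actually needed,
    # and only if the result isn't ready yet.
    print(fut.result())
```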
My special interest was parallelizing resource-optimizing compilers, which I called PROCs; these would use the same principle, but structure the computation for parallelism in advance. I wrote up a proposal, but I was naive in the Japanese way — my proposal was more hope and arm-waving than a concrete plan, so it was rejected.
Meanwhile, after the IJCAI conference talk by Geoff Hinton on neural networks, I found a big package of neural network simulation programs and started rewriting it for the Butterfly. When I ran the idea by my boss, he pointed out I had funded work to do and really couldn’t work on that in my free time, so I gave that up. Geoff Hinton moved to Google in 2013, and Google has now released its package of deep learning software for public experimentation, so thirty years later we are finally getting widespread use of neural networks — the idea had always been good, but the human ability to stick with it and build on advances was limited by slow machines, continual platform obsolescence, and personnel changes. Now we have widespread commercial application of deep learning and big-money private research going into it, and results are everywhere — facial and scene recognition, language translation, medical imaging diagnosis… and, presumably, secret military work.
DARPA research was not super-secret — on the contrary, the academics and think tanks that do DARPA research typically make their work public to advance the field. Most universities and institutes refuse truly secret research, which goes against their personnel rules and academic standards. But DARPA funds basic and applied research that does lead to secret work on weapons and military systems using the knowledge gained, everything from coolant suits and troop communications networks to autonomous drones and battle mechs. One of the other projects at BBN while I was there was SIMNET — a tank battle simulator over the Internet. I wasn’t working on it, but I got to play with it a few times. Like the undersea sub-sound-sensing network we also worked on, SIMNET did require security clearances and was used directly for training, but the ideas developed for it made it into civilian life as networked game engines:
The SIMNET-D (Developmental) program used simulation systems developed in the SIMNET program to perform experiments in weapon systems, concepts, and tactics. It became the Advanced Simulation Technology Demonstration (ADST) program. It fostered the creation of the Battle Labs across the US Army, including the Mounted Warfare TestBed at Ft Knox, KY, the Soldier Battle Lab at Ft Benning, GA, the Air Maneuver Battle Lab at Ft Rucker, AL, the Fires Battle Lab at Ft Sill, OK….
One of the primary developers of the network for SIMNET, Rolland Waters, founded RTIME, Inc. in 1992 to provide network engines to the game industry. Sony (SCEA) bought RTIME in 2000 as the basis for their PS2 online game network. Other startups out of the BBN / Delta Graphics team include:
— MetaVR, Inc (W. Garth Smith), simulation and training, GIS systems
— MaK Technologies (Warren Katz and John Morrison), which continues to provide simulation software
— Reality by Design, Inc (Joanne West Metzger and Paul Metzger), simulation and training software and systems
— Zipper Interactive (Brian Soderberg), which developed the SOCOM PS2 game series and was also purchased by SCEA
— Wiz!Bang (Drew Johnston), another game developer. Drew Johnston is currently the Product Unit Manager (PUM) for the Windows Gaming Platform team at Microsoft.
The funding drought for AI research after 1987 is sometimes called the AI Winter, though the cycle of disillusionment and subsequent research funding cuts happened several times. The current boom seems more permanent, relying less on hopes and dreams and bringing forth commercially important applications.
There’s a detailed history of BBN and its many groundbreaking projects here. Like Xerox PARC and Bell Labs, BBN was a research hothouse disrupted by changes in business and increasing competition, which made blue-sky R&D a luxury and left academic centers unchallenged.
Next installment: Smart weapons, autonomous weapons, knife missiles and assassin drones…..