Science fiction is something that could happen – but you usually wouldn’t want it to
– Arthur C. Clarke
In a lecture delivered in 1948, mathematician John von Neumann first described the potential for machines to build perfect copies of themselves using material sourced from the world around them. Almost immediately, people began to worry about what might happen if such machines never stopped replicating. Writing decades later about the science of molecular nanotechnology – in other words, microscopically small man-made machines – Eric Drexler gave this worry a name: Gray Goo.
A Gray Goo scenario works something like this: Imagine a piece of self-replicating nanotechnology manufactured for a purely benevolent reason – say, a micro-organism designed to clean up oil slicks by consuming them and secreting some benign by-product. So far, so good. Except the organism can’t reliably distinguish between the carbon atoms in the oil slick and the carbon atoms in the sea vegetation, ocean fauna, and human beings around it. Flash forward a few thousand generations – perhaps not a very long time in our imagined micro-organism’s life cycle – and everything on Earth containing even a speck of carbon has been turned into a benign, gray, and gooey by-product of its digestive process.
In Gray Goo processes, the stakes are high and the threat of ruin is always imminent, which is why they make for ripping good science fiction yarns. But stories of Gray Goo also carry an important moral lesson: A single strategy – or process, or idea – devised with good intention but applied indiscriminately, or clumsily, can lead to catastrophically negative consequences that are virtually unstoppable. And while Gray Goo events sound exotic, once we set aside the talk of runaway nanotechnology, we can begin to see the ingredients of these events in our everyday world. Among them are idealistic agents with imperfect knowledge, poorly thought-out strategies, a fertile medium in which to spread error, and systemic, self-reinforcing cascade effects.
No domain of human life is impervious to the risk of Gray Goo processes, but some are better media for it than others. A general rule of thumb: the larger, more complex, and more interconnected a domain is, the more susceptible it is to Gray Goo. Few domains fit this description better than biological ecosystems – a lesson humans have had to learn and relearn many times over the last few hundred years, as we have introduced species into foreign ecosystems with often disastrous results.
The introduction of cane toads into Australia is an emblematic example of our failure to foresee the obvious risks of Gray Goo processes. After conventional methods of pest control failed to reduce the population of beetles that were damaging sugar cane crops, in 1935 the Bureau of Sugar Experiment Stations imported cane toads from Hawaii and released them into the wild in hopes that they would prey on the insects. Since their release, cane toads have multiplied rapidly and spread across northern Australia. Current estimates put their number at over 200 million, and they have been blamed for spreading diseases that harm local fauna. A final indignity: the toads appear to have had no real impact on the beetle populations they were meant to control.
Another domain rife with the potential for Gray Goo processes is the world of finance. Postmortems of the 2008 financial crisis often describe it, in risk scholar Nassim Taleb’s parlance, as a Black Swan: an exceedingly rare, high-impact event that seems obvious only with the benefit of hindsight. Rare and high-impact, yes – but not, strictly speaking, beyond our predictive powers. At its heart was the practice of sub-prime lending: a kind of Gray Goo process built on a heady mix of good intentions (spreading homeownership), a credit environment that enabled the easy deployment and replication of high-risk loans, and the toxic accumulation of a by-product – unrepayable debt.
Taleb has recently turned his attention to Gray Goo-like processes, especially as they apply to present-day genetically modified organisms. He believes the risks inherent in creating and distributing genetically modified organisms are not well understood or appreciated by researchers in the field, who are typically trained in the biological sciences rather than in statistics or risk analysis. They are therefore apt to deploy traditional decision-making strategies focused on mitigating the risk of harm, when in fact they should be basing their decisions on the risk of total and irreversible ruin.
One of the key effects of globalization and the interconnectedness enabled by information technology is that all human systems are becoming more complex, and so more susceptible to the Gray Goo effects we notice in biological ecosystems. Along with the potential for fatal asteroid strikes, gamma ray bursts, and supervolcano eruptions, the Future of Humanity Institute at Oxford University – a think tank dedicated to assessing risks that might end human life on Earth – lists such human-led endeavors as artificial intelligence, anthropogenic climate change, and molecular nanotechnology among its key areas of research. The threat of Gray Goo and the risk of ruin, it seems, are on the rise. Whether we are up to the challenge of rethinking risk and considering carefully how our choices affect our increasingly complex world remains to be seen.
Avoiding Gray Goo Scenarios
Let your mind consider the worst possible outcome. Are you fundamentally grappling with the risk of harm (falling short of quarterly earnings, for instance) or the risk of ruin (the destruction of your brand’s equity or consumer goodwill)?
Don’t let righteousness cloud your judgment. Acting in good faith and having benevolent aims isn’t enough. No pharmaceutical company ever set out to design a drug that kills its customers. As the old proverb warns: “The road to hell is paved with good intentions.”
Be honest with yourself about how little control you have. Once you put the wheels in motion, how easy is it to apply a braking mechanism, assuming you have one? And if you don’t have one, don’t you think it would be prudent if you did, just in case?
Don’t use a hammer to do a screwdriver’s job. Devising a multiplicity of tools and processes to address different challenges takes more effort, but it minimizes the likelihood that you’ll end up doing irreparable harm through trying to be expedient.
Walled gardens are good. Use them. If the Bureau of Sugar Experiment Stations had introduced cane toads into a secure enclosure to test their adaptation to the Australian environment before unleashing them into the wild, maybe the continent wouldn’t now be overrun by this obnoxious species.
As Depicted in Science Fiction
Cat’s Cradle, Kurt Vonnegut
Vonnegut imagined a polymorph of water called Ice-Nine that would immediately convert all water it came in contact with into a super-stable ice-like formation. Originally developed to help military troops march over mucky terrain, the polymorph ends up converting all the water on Earth to Ice-Nine, creating an eternal winter.
Star Trek
In the Star Trek universe, a race of rapacious, zombie-like cybernetic creatures called the Borg transform any life form they encounter into more Borg by injecting them with microscopic nanoprobes. Their intentions are, from their perspective, benevolent: to uplift primitive cultures into the Borg collective so that they can join its quest to “achieve perfection.” The result, however, is a horrifying form of technological collectivism that completely subsumes the individual and turns them into an agent of unspeakable terror.
Featured in the MISC 2015: The Creative Process Issue.
Jayar La Fontaine is a senior foresight strategist at Idea Couture. He is based in Toronto, Canada.