Secretive Societies


Watching Conan and Andy Richter do their “In the Year 2000” skit before the year 2000 always made me laugh. The pair would make ridiculous predictions, and I’d chuckle to myself, confident that such predictions were only for the sake of comedy.

To that end, as my birthday present in the seventh grade, my parents found a Mickey Mouse watch at Disney World that digitally counted down the days to the new millennium. Unbeknownst to them, that watch sparked my love affair with the Absurd as I pondered the symbolic nature of an hourglass juxtaposed with Mickey Mouse.

As a tween, it was easier for me to believe in doomsday theories than to consider an alternate fate for mankind. With The X-Files demonstrating the likelihood of shadow governments and hostile alien takeovers, I did not spend much time worrying about my future or, more abstractly, our future. What had been a vague encroaching darkness was now quantified, consumable, and attached to my wrist in all manner of utility and futility.

A decade before I was born, Aurelio Peccei had eloquently posed the only rhetorical question worth asking of the commons: “Why do we have this general and incurable moral, political, social, psychological, economic and ecological crisis which, in different forms, subtle or explosive, touches us all, developed or not, making us lose our bearings and pushing us towards dark futures?” While I believe in The Limits to Growth, I grimaced when I realized Peccei was grounded in what I’ve come to think of as the early stages of a technocratic savior complex.

I believe that social science fiction is a primer in critical thinking. Putting that belief aside, whenever I hear about Peccei’s Club of Rome, I have to wonder if he had read Isaac Asimov’s Foundation series and imagined himself as a Hari Seldon type. Though I’ve never seen an explicit character analysis, Kaya Tolon’s 2011 dissertation at Iowa State University, “The American futures studies movement (1965-1975): its roots, motivations, and influences,” makes the connection that Seldon and Peccei are part of the same movement.

Such a connection isn’t a huge leap. Consider how Hari Seldon uses statistical models of the Galactic Empire to develop the “Seldon Plan” (a complex algorithm that would function to reshape human history), and how the Club of Rome’s mathematical model of the “World Problematique” intends to plan for humanity’s future: both seem to follow the same guiding principle.

Without venturing into rumors of secretive societies, I’m able to imagine this concept: a way of looking at history, or big data, or network theories... and being able to precisely calculate & conjure the required changes. Without the usual cloud of haze, it’s the grand realization of anyone who uses the term “big data” without laughing.

If you apply the technocratic savior complex to modern social science fiction, not much has changed in the, excuse my laughter, formulaic narrative. Small groups of companies, political leaders, scientists, genius types, or hacktivists harness some technology or technology-enhanced structure to pull the strings themselves. This technocratic savior complex is most recently demonstrated in the Canadian series Continuum, where (spoiler alert) a teenage hacktivist has hijacked multiple timelines, big data, and broad systemic structural forces to write and rewrite the outcome toward an unknown agenda.

Evgeny Morozov has done a lot of impressive and well-articulated legwork on this subject. In fully establishing ‘cyber-utopianism’ and ‘solutionism’, he has described the inherent problems with the technologist savior complex: the attempted hijacking of broader ethical scrutiny by spreading the belief that technology is breaking global structures, and the attempt to solve for them in the context of crowdsourced involvement, creating an illusion that the process doesn’t have a structure.

The TED talk that says if only you would buy this app, play this game, or blast this message out to all your followers is the persuasive fetishism of actionable insights. Messages claim we’d have solved it already if only more people were downloading these apps, ignoring the scientists behind the curtain and the money spent on advertising, designing, developing, and engaging the masses in fun & creative ways. While not a bad way to grow a new generation of scientists, I fear the credit trailers of the future that kids will moan are interfering with their game time, even as those credits show the human element of scientific research. “I’d have solved cancer by now if these damn scientists didn’t waste my time getting their names in the credits,” my daughter will moan as she reinforces the system that gave her a free download token but made her believe that research & planning are yesteryear. Instead of wanting to become a scientist and cure cancer, she will be mad that her phone battery ran out before saving her score.

In the context of human solutions, I look to the movie WarGames. To set the stage, a systems engineer at NORAD suggests that the nation’s missile silos should “take the men out of the loop” in an attempt to remove the doubt of mutual assured destruction. A young hacker, played by Matthew Broderick, unintentionally plays a game of global thermonuclear war and escalates the dialogue to a pièce de résistance I’ve imagined opening a Cyber Fast Track submission to DARPA with: “General, you are listening to a machine! Do the world a favor and don’t act like one.” Though that lesson resonates with the audience of WarGames, it doesn’t seem to resonate with the technologist savior complex.

At present, technologist saviors have intervened in systemic processes the world over. And though in many cases this has been for the better, they are now missing the point. Technology is a tool. Interventions and solutions should not be the execution of a predictive algorithm or well-oiled machine. Sissela Bok observed that “whatever matters to human beings, trust is the atmosphere in which it thrives.” No matter how much klout or analyzed mind share (how many likes) your TED talks have, I don’t believe that signifies trust in an automated and executable idea. Nor would the detection, finely tuned adjustment, and subsequent automation of an emerging operational pattern be able to calculate what matters to human beings. But the puppeteering of the machinator trying to obtain an obfuscated power from behind the curtain? Especially when they die, leaving behind ’80s Robot from the Muppets? On a mission to provide Tab, ’80s Robot believes he knows what we like and offers it to us algorithmically.

In this way, our technologist saviors are automating neocolonialism. They’re promoting a solution that attempts to algorithmically force the tide of human history. Technohegemonic allusions aside, I see this scenario as removing the humans from the direction of humanity. As building the altar around low-hanging fruit rather than seeking a shared process.

Imagine my focus shifting to another time, when my watch’s digital screen will have run through its preset countdown. Say, #post2015, I’ll open my eyes and find Mickey Mouse’s arms still swinging and time still moving. An app? A piece of hardware? A deus ex machina. What will I have lost by trusting in the false hope of a solution that I’m not a part of? I hope my reaction is to laugh, to really ROFL, at the Absurdist in me. Waiting for my game to load, designed by Godot himself in a different time. Probably before the year 2000.