Will the world end in a robot or zombie apocalypse?

If you really wanted to oversimplify everything, you could say that it’s ultimately all going to boil down to one simple question: zombies or robots? Pick your apocalypse.

Will the end of the world be overrun with zombies or robots? Think about it now – what feels right? It’s as much a matter of intuition as anything else.

I’m not saying that zombies or robots will cause the end of the world – because what does that even mean – but at the fabled and soon-to-be-upon-us End Times, when we’re all trying to syphon gas out of the tanks of abandoned cars piled up on the freeway while smoke curls from the city in the distance, will we be running from hordes of zombies or the tracking systems of robot drones swarming overhead? 

Which one you go for tells you something fundamental about how you feel about humanity and its future.

So go on, decide now. Think of this as a kind of thought experiment. Which camp are you going to pitch your tent in?

Side note: Can you even pitch a tent? A tent won’t protect you much against either robots or zombies, but you might need one to keep you warm when you run for the hills and live out your paleo-inspired neo-pastoral fantasies picking berries and hunting rabbits that you snare with sharpened sticks. Or mushroom farming if you’re really in it for the animals.

You can always change your mind later when you’ve thought it through more. You can even hedge your bets and say it’ll probably be a little bit of both. A robot zombie apocalypse. Or if you don’t believe it’ll be the end of the world – neither.

To help you make more sense of the question, I’m going to clarify my terminology. 

While there is a chance that a zombie virus will somehow develop (in a secret government lab, or by jumping from bat to lizard to pigeon to human) and spread across the world, or even that some other infectious agent will cause Ataxic Neurodegenerative Satiety Deficiency Syndrome (don’t worry – that’s a made-up thing), it doesn’t seem especially likely, particularly when weighed against all the possible alternative futures.

I’m being way more liberal with the term “zombie” here than the Walking Dead traditionalists would prefer. And I’m not talking about philosophical zombies either – although yes, there is that.

You could be the only conscious human out there and all the rest of us are automata lacking first-person experience but giving a really good impression of it.

Don’t think about that for too long.

Let’s give “zombie” a much broader definition than a reanimated dead person who wants to eat the living: for someone to be a zombie they don’t have to want to kill you and eat your brains, they just have to want to kill you and take your things. Or maybe not even kill you, but take all your stuff without asking and rough you up a bit. OK – maybe just buy up all the toilet paper in the supermarkets near your house and refuse to stand 2 metres away from you while you’re waiting in line. So it’s a sliding scale.

In other words, a zombie is someone who acts quite a lot like a zombie. They’re only out for themselves and aren’t at all concerned with the consequences of their actions. That is to say, they are being more than just a bit selfish – this is pathological.

If you were of a cynical turn of mind, you could say we’re already living in the midst of a corporate-sponsored zombie apocalypse.

But we like to think that there are consequences to our actions, and that you can’t get away with taking things which don’t belong to you. If you do feel that way, then just don’t look at the whole of human history (specifically the history of the British Empire – although any empire will do), and don’t look too deeply into how government and business work together now (specifically in post-austerity Tory Britain or in the short-lived kingdom of Trump). But this is becoming a digression.

Let’s say that we’re not living in a zombie apocalypse right now (and I hope you aren’t whenever you’re reading this), and that this definition of zombie means people acting essentially as free agents, or in autonomous, unaffiliated groups. So if you’re not killed by the zombies, you’re bent to their will in some way.

So you could say that a zombie apocalypse is basically any timeline in which the rule of law has broken down so far that no one observes it any more, and increased pressure on resources, combined with unchecked consumption, means the normal structures of society and human interaction have completely collapsed. I’m inclined to call this the Darkest Timeline.

It would mean a return to tribal warlordism for large parts of the world where it hasn’t existed for centuries – although, given our globalised and largely atomised state, it wouldn’t resemble the past in any way.

This covers a whole range of end of the world scenarios: the aftermath of nuclear war or asteroid impact, a viral pandemic with a high death rate and a high R0 (the average number of people each infected person goes on to infect), zombie or otherwise – hopefully not Coronavirus – and mass environmental disasters and their aftermath. When everything really falls apart – no food, no water, no air, no power – everyone’s going to be fighting for whatever they can get their hands on. So it covers pretty much every end of the world scenario you can think of.

Except one. And it’s a contentious one: the advent of a technological singularity. Opinion is divided as to whether this will even be possible, with some suggesting it is imminent and others shrugging their shoulders and clicking the mental eye-roll emoji at this imaginative expression of human hubris.

You can think of the singularity as the point in the future where accelerating technological advancement has so transformed our bodies, minds, societies and economies that it constitutes a “rupture in the fabric of human history.” 1

This point is often – although not always – taken to be the invention of Artificial General Intelligence (AGI) and the intelligence explosion that is predicted to follow. The idea is that a system with human-level – or general – intelligence will be at least as good at designing intelligent machines as we are, and will rapidly design generations of increasingly intelligent versions of itself. The resulting systems will be to us as we are to mice, or ants, or bacteria in intelligence terms – it depends how well your massive brain can cope with the idea of the diminutiveness of human intelligence. The “explosion” part is because Super Intelligence could follow on from AGI fairly quickly. 2
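
To give that argument a concrete shape, here’s a toy model – a minimal sketch in Python, where the improvement factor and the “superhuman” threshold are entirely made-up illustrative numbers, not predictions. The point is only that if each generation designs a successor even slightly better than itself, capability compounds geometrically.

```python
# Toy model of an "intelligence explosion": each generation of system
# designs a successor slightly more capable than itself, so capability
# compounds. All numbers here are illustrative assumptions, not predictions.

def generations_to_superhuman(improvement_factor: float = 1.1,
                              threshold: float = 1000.0) -> int:
    """Count how many design generations it takes to go from human-level
    capability (1.0) to `threshold` times human level, if each generation
    improves on its designer by `improvement_factor`."""
    capability = 1.0  # generation zero: a human-level AGI
    generations = 0
    while capability < threshold:
        capability *= improvement_factor  # the next, slightly smarter design
        generations += 1
    return generations

if __name__ == "__main__":
    # Even a modest 10% gain per generation reaches 1000x human level in
    # only 73 generations -- hence the word "explosion".
    print(generations_to_superhuman())  # 73
```

Whether the improvement factor really stays above 1, and how long a “generation” takes, are exactly the unknowns the whole debate turns on.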

It’s hard to predict what the result for humanity would be if a superintelligent system were to emerge on the planet. Most outlooks don’t look too rosy. Humans are resource-hungry, which could put us in competition with machines. Maybe by that time, the powerful elite who control the collection and processing of information would have already started integrating with machines. We might be in a post-human world that we would no longer recognise as “the world”.

It’s hard not to think of the end of humans as the end of the world. We’re so anthropocentric. 

AGI isn’t the only possible event horizon for the Singularity. The successful implementation of molecular nanotechnology has potentially world-ending consequences if the self-replicating machines aren’t programmed carefully enough not to consume the entire world in order to self-replicate. The resulting grey goo is less exciting, and seems a more distant possibility, than AGI.

The debate over what AGI means for the fate of humanity has become increasingly lively as the prospect seems increasingly likely – depending on who you ask, of course. The issue of alignment – how to create an intelligence whose values are aligned with ours, and what it might mean to share the planet with a superintelligent entity – is being seriously discussed. The skeptics say you could just pull the plug, or that you’ve just got to programme it properly, or that an AGI wouldn’t be bothered with destroying humanity or doing anything more strenuous than hacking its own reward function.

Of course it’ll probably be more complicated than that. It always is. And it’d probably be the end of the world one way or another, and there’d almost certainly be a lot of robots about by then.

So the question of zombies versus robots is really asking whether you think humanity will be able to keep it together long enough to make it to a technological singularity. Will we cook ourselves, nuke ourselves or infect ourselves and devolve into a seething mass of zombies before we can develop the technology to potentially obliterate ourselves from the face of the planet?

And that’s a question that’ll tell you something fundamental about yourself.

Are you an optimist or a pessimist? Maybe those categories are too broad to be meaningful. But the only long-term way forward on humanity’s current trajectory looks like a digital future in space, and while it is possible that we could get there without superintelligent AI, that sounds like wishful thinking. If you think we have a shot at continuing our existence beyond the rapidly approaching point of no return – population, temperature or environmental toxicity, take your pick – that’s a vote for robots. And you need to be an optimist to hope that the problems we face in creating AGI won’t be too hard to overcome. That’s part of what makes it so hard to predict whether it’ll even be possible: we don’t have all the problems laid out in front of us yet.

It’s a question that raises a whole lot of others.

How do you feel about the shape of society? Do you think it has structural inequalities? How are you willing to be governed? What do you consider to be your innate personal freedoms? How do you feel about vaccinations? About science in general?

And so on and on and on.

I’m not saying that if you’re rooting for an all-out zombie apocalypse you’re likely to be a conservative, small-government, systemically-privileged, gun-toting anti-vaxxer, but it’s possible.

A vote for robots is, ironically, a vote for humanity – a vote that we can continue to grow and innovate while utilising the resources of the planet in a way that isn’t so unsustainable that either the environment is irrevocably compromised or social inequality topples the decaying democracies which are barely holding the current world order in balance.

And that’s a big question: will we be able to keep it together? Past performance is never indicative of future results in any arena, but the rise and fall of civilisations has been a recurring feature of human history, and it would seem naïve to suggest this might change. So what, you might say – regimes and republics rise and fall; that doesn’t mean that everything else is going to fall apart (or, as the contemporary preppers say, that the S is going to HTF). Except now the stakes are stacked higher than ever and there’s too much investment in the current system. Everyone’s got all their chips on the table, and that’s always a risky moment.

Of course this whole thing is a bit of a straw man. An alignment test at best.

Who knows how the world will end? Maybe it’s not something you ever think about. Maybe it’s just me.

What if a global catastrophic event doesn’t completely wipe out the whole population but leaves a few tiny pockets of geographically isolated survivors that can’t reach each other because of the radioactive wasteland between them, or because there’s no land left, and no fuel to power the boats? What if these groups don’t have major conflicts of interest within themselves, and prosper and grow? What if, after many thousands of years apart, they come into contact and don’t even recognise that they were once part of the same species, due to the changes brought about by evolutionary pressures on their separate populations?

Would that still be the end of the world?


  1. Ray Kurzweil, 2005, The Singularity Is Near
  2. David J. Chalmers, 2010, The Singularity: A Philosophical Analysis
