The Existential Horrors of the Fermi Paradox

Brace for Impact, We are About to Plunge into Chaos

Krisztián Pintér, 2020
pinterkr@gmail.com


Calm Before the Storm

At the end of the 19th century, science was close to completion, with only a handful of open questions remaining. Those questions seemed insignificant in the grand scheme of things, and the overwhelming majority of people didn't even know about them. It took only a few decades before these relatively small issues, like black body radiation or the reference frame in which light travels, led to a complete rewrite of all physics, giving birth to the Theory of Relativity and Quantum Mechanics. The new world in which we are living today is not a pleasant place. Classical physics was accessible to the average individual; the new theories are not. Classical physics was mostly intuitive; the new physics is not. Barely a handful of people have the mental capacity to work with these constructs, and whether they actually feel them intuitively, or are just able to carry out calculations, is unsure, maybe even to them. The transition wasn't smooth in the least, with notable people doubting or downright denying the new physics as late as the '70s. I personally hate the new physics, and I think most people either hate it or fear it. The new physics is just inhuman.

The reason I'm telling this story is that we are living in similar times. Small cracks, tiny questions that have eluded answers for many decades, threaten to bring down our whole understanding of the universe. Most people are not even aware of these issues, and those that are consider them a curiosity at most. As we are going to explain in this essay, our current state of mind is the stage of denial, while the world as we know it shatters into chaos.

The Problem

To put it briefly, our understanding of the universe has grown considerably in the past decades, to the point that we are essentially sure that the universe must be full of signs of intelligent civilizations. However, none of these signs are observable, despite our best efforts. In this section, we are going to explain what signs we are looking for, and why we think they should be there.

Path of Technology

Human civilization is characterized by growth. The most obvious kind is population. Recent developments suggest a setback, but there are compelling arguments for it being temporary. If nothing else, the arrival of radical life extension will bring about unprecedented potential for population increase.

Even more importantly, there is growth in our use of energy, both in total and per person. This is true even though efficiency also increases with time. Even with better efficiency, people use more and more power, directly and indirectly, as time goes on. It is relatively easy to see why: efficiency and energy are both costly, but their prices go down with advances in technology. As the price goes down, we tend to use more. Hence, as long as energy production gets cheaper, we can expect to see more energy being sold. Also, if efficiency becomes cheaper, we gain more results from each unit of energy, which makes energy a more appealing purchase, leading, again, to increasing rather than decreasing usage. Consider a bag of cookies: whether its price drops or its quality, the value it brings, goes up, you'll see increased purchases.

Human civilization also grows in the scale of its activities. Early humans might have intentionally set the savannah on fire, a large scale event indeed, but one that happens on its own anyway. Later we built dikes and dams, removed forests, drained swamps. Recently, we changed the composition of the atmosphere. This is not hubris. Changing things is what enables us to survive and enhance our lives. There is an endless list of changes to be made, and we are capable of only so much at any given stage of technology. Those that are not very important, or are too expensive, will remain in the realm of dreams until the capability arrives. But none of the items on this list is unattainable; it only takes some more time to get there.

There are sentiments deeming growth undesirable or harmful. These views disregard the fact that growth is not a self-serving goal. Every single step along the way solves a problem, removes a hardship, brings joy and satisfaction. Disappearing diseases and expanding life goals mark the path of technology. It is very unlikely that human civilization will ever stop expanding. We're going to discuss growth, or the lack thereof, in later sections.

Let's contemplate for a second what this entails for the far future. The basic principle is: if growth happens, everything possible and beneficial will eventually be done. There will be a time when the Asteroid Belt will be extensively mined, and then a time when it will be entirely mined out. There will be a time when most planets will be converted into raw materials for space stations. There will be a time when solar panels capture all of the Sun's output (an arrangement often referred to as a Dyson Swarm). There will be a time when humans spread to nearby solar systems. And then beyond.

The Scale of the Universe

It is utterly impossible to comprehend the scale of the universe, both in size and in time. There are hundreds of billions of stars in the Milky Way galaxy alone, and around a trillion galaxies in the observable universe. As far as we know, most stars harbor a system of multiple planets. And that does not even include rogue stars and rogue planets, lone wanderers not participating in the twirling game with their peers. This vast stage came into existence more than 13 billion years ago, and it didn't take too long for stars and planets to form. Just as a comparison, life on Earth went from single cells to the space age in under four billion years. Or, for another example, we expect to get from the first radio emission to visiting a nearby star within a few hundred years, a thousand tops. Getting from where we are today to a Sun-shading supercivilization takes maybe three to five thousand years, assuming a modest 1% increase in energy use per year.
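That last figure can be sanity-checked with a back-of-the-envelope calculation. The numbers below are round assumptions I am supplying for illustration, not values from this essay: roughly 18 TW of current human power consumption and roughly 3.8e26 W for the Sun's total output.

```python
import math

# Rough assumptions (order-of-magnitude values, not precise measurements):
current_power = 18e12   # watts: present-day human power consumption, ~18 TW
solar_output = 3.8e26   # watts: total luminosity of the Sun
growth_rate = 0.01      # a modest 1% increase in energy use per year

# Solve current_power * (1 + growth_rate)**years == solar_output for years:
years = math.log(solar_output / current_power) / math.log(1 + growth_rate)
print(round(years))  # on the order of three thousand years
```

A 1% annual growth rate multiplies consumption by about 2.7 every century, so even this enormous gap closes in a historically short time.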

The universe is a vast stage for life to emerge. If only one in a billion planets develops life, that is still hundreds just in our neighborhood, and nearly uncountably many in the observable universe. Life almost can't fail. Should the circumstances on a planet be fit for life, any number of extinction events merely postpones the inevitable. You can wipe out 90% of a planet's life just to see it regrow in a few dozen million years, a blink of an eye on galactic timescales.
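A quick sanity check of that "one in a billion" arithmetic, with planet counts that are rough, order-of-magnitude assumptions of mine:

```python
# Order-of-magnitude assumptions, not measured values:
planets_in_galaxy = 3e11    # a few hundred billion planets in the Milky Way
planets_in_universe = 1e24  # roughly a trillion galaxies' worth of planets
p_life = 1e-9               # the pessimistic one-in-a-billion odds of life

print(planets_in_galaxy * p_life)    # hundreds of living worlds in our galaxy
print(planets_in_universe * p_life)  # about 1e15 in the observable universe
```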

Probability Theory and the Drake Equation

How many civilizations do we expect to see? This question inspired the somewhat oversimplified Drake equation, an attempt to calculate the number of technological civilizations by multiplying the number of potentially life-harboring planets by the probability of one actually developing life. There is a little problem with that equation though, namely that every single factor in it is mostly unknown. However, we can still reason about the equation at a higher level.

At its core, the Drake equation boils down to a very large number being multiplied by a very small probability. What can we say about such a product in general? To make it more intuitive, let's use dice throwing as an example. If we throw a regular die, we have a 1/6 probability of getting lucky and rolling a six. Throw two dice at once, and your chances are close to 1/3. Throw ten, and you most likely get at least one six, and quite possibly more than one. To mimic the Drake situation better, imagine a special die that has millions or trillions of sides. Getting the lucky number has a probability of one in a million or one in a trillion. But you also throw a lot of them at once.

Now imagine you ask one of your friends to pick a type of die: a thousand, a million, or a trillion sides. Ask another friend to pick the number of dice to throw. They don't coordinate; they make their picks independently at random. How probable do you think it is that they pick numbers for which the expected number of lucky throws is close to one? It is next to impossible. If the number of sides exceeds the number of dice by a factor of a thousand, you expect no lucky throws at all. If it is smaller by a factor of a thousand, you'd expect thousands.
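The dice reasoning can be written down directly: with n dice of s sides each, the expected number of lucky throws is n/s, and the chance of at least one is 1 - (1 - 1/s)^n. The sketch below just illustrates the sensitivity; the specific counts are arbitrary.

```python
def p_at_least_one(sides: int, dice: int) -> float:
    """Probability that at least one die shows the lucky face."""
    return 1 - (1 - 1 / sides) ** dice

# Ordinary dice, as in the text:
print(p_at_least_one(6, 2))   # about 0.31, close to 1/3
print(p_at_least_one(6, 10))  # about 0.84, at least one six is likely

# Uncoordinated picks: when sides and dice differ by a factor of a
# thousand, the expected count dice/sides is nowhere near one.
print(10**9 / 10**12)  # 1000x more sides than dice: expect no luck at all
print(10**12 / 10**9)  # 1000x more dice than sides: expect about a thousand
```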

With the Drake equation, the number of dice is the number of planets, and the number of sides is the reciprocal of the probability of a planet developing a technological civilization. The number of dice is in the septillion range, a million billion billion, a number we usually never even encounter, and one clearly outside of anything I claim to be able to imagine. How likely is it, in your view, that the probability of one planet developing a civilization somehow exactly compensates for this large number, say within a factor of a million or so?

It is safe to conclude that although the result of the Drake equation is unknown, we don't expect it to be near one. We expect it to be either practically zero, or in the trillions and quadrillions. But we know that it is not zero, because we know about one technological civilization: us. So it boils down to either our existence being extremely unlikely, or there being zillions of technological civilizations all over the place.

Bonfires in the Night

One obvious solution that comes to mind is that they are out there, we just haven't found them yet. Or perhaps we just didn't recognize them. Alas, there are a number of problems with this idea. The famous SETI program admits that we don't have the capability to detect a civilization similar to ours even in the closest star systems. Our telescopes are rudimentary compared to what would be necessary to observe planets.

As we discussed in the first section though, a technological civilization grows pretty fast. Suppose there are a thousand other civilizations in the Milky Way, and a similar number in each neighboring large galaxy. How probable is it that they're all younger than us, or at most similarly advanced? Not one of them is two or three thousand years ahead? Because if any one of them is, the signs of such a civilization are observable, and in fact pretty hard to miss.

According to thermodynamics, all used energy ends up as waste heat eventually. When we said the entire energy output of the Sun is captured with photovoltaic panels, we didn't imply the light is gone. It is just transformed: converted from visible light into other forms of energy, and then, at the end, into heat. The only way to get rid of that heat without being boiled alive is to radiate it away as infrared radiation. So what we are looking for is an object that emits as much energy as a star, but is relatively cold. Relatively, as it can be glowing red hot, but still much colder than a star. There are no natural objects like this in the universe, so discovering one would be the exact opposite of challenging. Yet we've never observed such an object. And that is just the beginning. We also expect civilizations to spread, which means we don't expect to find one such object, we expect to find an entire cluster of them, with a larger amount of heat coming from the center, and gradually lower amounts coming from the periphery. We are expecting regions of space with no visible stars, but thousands of stars' worth of heat radiation.

But this is still not the full picture. Those civilizations know about us. It does not take a lot of technology to build supertelescopes that can easily map the surfaces of planets at the far end of the galaxy. It is also trivial to map all of them in parallel: building billions of telescopes is a tiny task for a civilization of any significance, since automated robots can do the job, and sophisticated algorithms can evaluate the images without any effort from a living being. Our galaxy is 100,000 light years across. The closest galaxies are a few million light years away. It means that everyone in the neighborhood has to know about primates. They probably know about humans too, and have seen early humans learning to tame fire. But even if they lack the resolution for such fine details, they definitely know about the rich and abundant life this planet harbors. They should have sent a message, or more likely, they should already be here to pay us a visit.

And before you say they have sent messages but we are not able to detect them with our inferior technology, rest assured, it is trivially easy to send a message in a way that's impossible to miss. Yes, our detectors can't pick up a human-like civilization even a few light years away by its radio noise or tiny spacecraft. But a deliberate communication attempt, especially coming from a race just a few hundred years more advanced, is as easy to see as it gets. Hiding entire stars, or creating light beams that dwarf supernovae, are relatively easy tasks, and all you need to see them is a telescope from centuries ago.

They're just not there. The universe is empty and dead. And this is not at all possible.

Some Proposed Solutions That Don't Work

Numerous solutions have been proposed to address the Fermi Paradox. Most of them can be ruled out relatively easily, despite their popularity. In this section, we are going to dismiss some well known hypotheses.

Recklessness

Many people believe, almost hope, that civilization is a dead end, and that Mankind will soon destroy itself. There are many reasons why this is not a good solution to the Fermi Paradox. For one, which is maybe a little subjective, it requires a particularly pessimistic view of ourselves. The problems at hand do not look that bad, and we are actually working on solutions.

But a much stronger argument is that nothing we know of today that has any remote possibility of wreaking great havoc in the foreseeable future is capable of destroying human civilization. The worst it can do is set us back a few hundred years. Granted, we don't want our civilization set back a few hundred years, but in the grand scheme of things, it is a tiny blip, and does not change the overall course. Let's quickly go through a few calamities some people envision.

The end of the Cold War brought about a significant drop in the risk of nuclear war. But new international conflicts and new powers arise, so the danger is not over yet. What makes this possibility much less of an issue than typically presented is the fact that the effects of an all-out nuclear war have been greatly exaggerated. Our current understanding is that such an event would not significantly affect the biosphere, and would not even destroy human civilization, although it would destroy most of our large scale infrastructure.

Climate change is very similar in many respects. We should not forget that our technology develops fast in parallel with climate change, and thus we have quite a good reason to be optimistic: we can hope to find solutions to problems as they emerge. Most of these problems are foreseen, and we already contemplate possible solutions. Sea level rise can't cause major problems, droughts are straightforward to solve with irrigation, and floods can cause local issues, but overcoming them is only a matter of resources. The only risk is the faint possibility of something really bad happening extremely fast. But even so, large scale destruction is practically impossible; the worst case scenario is a major economic setback. In the context of the Fermi Paradox, it is not enough to show that a certain situation can, however unlikely, end a civilization. If a minority of civilizations encounter such a sudden extinction-level change to the climate, we expect to see a tiny bit fewer survivors. But we are not seeing a tiny bit fewer, we are seeing none.

A hostile AI scenario might be the most realistic doomsday possibility. However, it does not solve the Fermi Paradox in any way. Imagine not us, but a nearby civilization being the victim of its own hubris, eliminated by its own creations. Where are the creations then? When the AIs stand victorious, why are they not building their own empire? Why are they not expanding on their own? Suppose one such AI simply lacks the motivation to do so. But all of them? Each and every civilization gets replaced by its own AI servants, and none of those AIs decides to multiply and explore? We are back to square one.

To conclude this section, we don't see any reasonable way in which we could destroy ourselves in the near future, or in a more distant one. As our capabilities grow, so does our resilience. And even if we manage to do the unimaginable and end our adventure, this must be considered an unlikely fringe event, not the norm. We can't expect every single civilization to screw up before reaching interstellar levels. Past a certain size, a civilization is effectively impossible to eradicate, even intentionally.

The Cyclic Wipe

In the early universe, and in the early ages of the solar system, powerful events were more frequent. That was the time of quasars, the time of rapid star formation, early meteor showers, and similar events incompatible with life, which requires a relatively long period of environmental stability. Our universe, as well as most of the star systems in modern galaxies, is a much friendlier place today. But what if there are still major events that purge many solar systems, galaxies, or even galaxy clusters of life on a regular basis?

While we can't explicitly reject this idea, since its very core is that the effect is unknown, we can find a number of arguments against the existence of such events, or rather against their capacity to explain the Fermi Paradox.

What might that event be? Life is not so easy to eradicate. Life on Earth survived quite a number of cataclysmic events, ranging from meteor impacts, changes in orbital geometry, and the ever-increasing energy output of the Sun, to events caused by life itself, like the Great Oxidation Event. Life reacted to these events with extreme resilience. The source of this resilience is the incredible diversity life exhibits. From the bottom of the ocean to the heights of the atmosphere, you find lifeforms. Thus if any one of these biomes survives, life survives. A total extinction event must be extremely radical and far reaching in its effects. Alternatively, you can have a more modest event striking much more frequently, like every few million years. That's probably enough to keep life from reaching complex forms, especially intelligence.

As of yet, we've never seen such an event. The most energetic phenomena in the universe are gamma-ray bursts, hypernovae, and merging black holes. These are indeed quite the spectacle, but fortunately, the incredible distances between stars and galaxies render them only locally devastating. We've never observed anything scarier, which is a problem if we are to assume such events are regular. One would expect an event that sterilizes entire planets every few billion years, and thus happens thousands of times every year in the nearby galaxies and trillions of times across the entire Observable Universe, to have been on our radar for quite some time by now. Smaller but more frequent events should show up even more often.
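Those rates follow from simple division. The counts below are my own round, order-of-magnitude estimates, chosen only to show the magnitudes involved:

```python
# Round, order-of-magnitude assumptions:
stars_per_galaxy = 2e11      # star systems per large galaxy
years_between_hits = 3e9     # each system sterilized "every few billion years"
nearby_galaxies = 50         # rough count of galaxies in our neighborhood
galaxies_in_universe = 1e12  # observable universe, rough count

per_galaxy_rate = stars_per_galaxy / years_between_hits  # events/year/galaxy

print(per_galaxy_rate * nearby_galaxies)       # thousands of events per year nearby
print(per_galaxy_rate * galaxies_in_universe)  # tens of trillions per year overall
```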

In addition, we don't only look for the event itself. Where are the signs, the aftermath, the scorched systems, the stripped stars, the deformed galaxies, the remnant gas clouds? What kind of thorough devastation can happen without leaving visible marks behind?

Observability is not our only concern. Where are the SOS signals? Where are the alarms? When other civilizations first face the great calamity, surely at least some of them care to send out signals to warn their brethren of the danger. These messages should be as frequent as lightning in a storm, and there should be a variety of them. The exact opposite of what we are seeing, which is the sheer lack of any messages whatsoever.

But the problem goes even deeper. Why would any civilization wait for a great calamity before starting to communicate, once they are able to? At least some of them want to talk to other intelligent species, as we discussed earlier. Acquiring the ability to broadcast messages to nearby or even faraway galaxies is thus one of the abilities the cyclic wipe must render unreachable: it must hit frequently enough to prevent such civilizations from forming. But this is a problem, because we are almost there. Under this hypothesis, we should not be here. We are at most a few thousand years from being able to easily send powerful radio and light signals to essentially everywhere in the universe. If this event is supposed to reliably wipe out every civilization before it becomes visible, this time it did a remarkably bad job, and took quite some risks. This is not how you reach a success rate of 100%.

Staying Home

Degrowth believers will quickly explain to you that expansion is actually barbaric, and that our civilization will eventually mature into living in harmony with nature, on a single planet. The counterarguments to this are twofold.

First, we argue that this is blatantly false. There is nothing harmful or evil in building space infrastructure. Once space infrastructure is built, expansion into the solar system is a logical consequence. At this point, the whole argument of living in harmony with nature makes no sense anymore. One can raise this point in relation to destroying ecosystems and exhausting the resources of the planet. But expansion in the space age does not require exhausting earthbound resources; there's plenty elsewhere. We are already on the brink of large scale space exploitation, so the claim that we need to give up expansion to save Earth is clearly not backed up by evidence. Space based resources will be available pretty soon. After that, what reason is there not to go out?

But there is a much deeper flaw in this reasoning, and it is one that comes up again and again with the Fermi Paradox: I understand that you think that way, but does everyone else too? Can you imagine millions and millions of civilizations around the universe, all developing this exact mindset?

And not only all civilizations, but all members of all civilizations. Once a race acquires the capability to live in space, as we will within a hundred years, what exactly stops a small group of rogue vagabonds from just leaving, and setting up camp right outside the territory of their kind? Can we expect nobody to do that, or every civilization to be willing and able to squash such efforts, again and again, for thousands of years? I can tell you, you don't have much chance of peacefully convincing me to stay on this globe.

Hidden Visitors

Or not so hidden, as indicated by numerous depictions in ancient tombs and on tablets. They did visit us in the past, and might be lurking around right now, but they're not talking to us, some claim. This argument, unfortunately, explains very little.

First, it does not explain where their empire is, and why we are not seeing it. They might hide their spaceships or stations, and disguise themselves down here. But their interstellar crawl should be easily visible from galaxies away.

Second, all the so-called evidence is either laughable, or at best vague and unconvincing. Ancient depictions and modern UFO sightings are in most cases trivially explainable, and when they are not, they don't resemble any reasonable alien technology. It is questionable why they would fly around in little devices in the first place. An array of orbital observatories would have sufficed for most of human history, and in recent centuries, backing off just a little would have guaranteed invisibility. You might want to acquire specimens for some reason, but the number of abductions must be incredibly high for us to spot them so often. And they could grow as many humans as they need in a vial, once a single cell is acquired. They could have done that in the stone age, without risking being caught in a photo. We expect aliens to be way more advanced than what is on display in these alleged sightings.

Third, they have no good reason to be here without contacting us, as will be explained in the next section.

The Prime Directive

The Prime Directive was an invention of the television series Star Trek. It was a mere writing trick, designed to explain why the Milky Way is full of ancient alien races in the not so distant future, yet we are not running into them today. As is often the case with writing tricks, this one isn't more than handwaving: it is not a very good explanation, nor a very good directive for that matter.

Just like the ancient visitors concept, it does not explain why we are not seeing large scale interstellar activity anywhere. It would be rather odd to expect civilizations to keep a low profile just to comply with the Prime Directive. Especially because a significant interplanetary empire is achievable well before advanced interstellar technology, so there might be cases where the enforcers need to somehow secretly force a civilization to stop scaling out, while still not being allowed to contact it directly.

The Prime Directive is also inhumane, and makes no sense. Imagine an entire assortment of aliens lurking around, watching us wage wars, die from disease, get decimated by bad weather or climate change, vulnerable to natural disasters. And they don't do anything, because we don't possess the ability to travel stellar distances. But as soon as someone builds an FTL drive in one corner of the planet, suddenly contact is allowed, and everyone else can get help. Why could they not get help before? Most people will still have no access to interstellar space travel, so why can they get help after?

The Prime Directive is so unnatural and unlikely to exist that we can expect virtually no civilizations to adhere to it, let alone all of them. But for it to work as a solution to the Fermi Paradox, we would need all of them to adhere to it unanimously, down to the last person.

Possible Solutions We Hope Are False

So we went through a few hypothetical solutions that don't seem to work. Do we have anything that can work? Yes, but as we will soon see, there is nothing to celebrate about them. None of the options are particularly reassuring, to say the least. Let's see a few.

Tyrant Race

What if there is a race that makes sure everyone keeps a low profile or dies? They visit planets that harbor life, and if they find any technological civilization there, they threaten it into abstaining from growing big, or just plain murder it. The reason can be either fear of potential competitors, or some kind of righteous motive. According to this narrative, as we try to grow bigger, eventually we will hear about either their "offer" or their missiles, whichever is their preferred method of convincing. There are some major flaws to this hypothesis though.

First, their motives can't really be malicious. If the plan is to eliminate competition, that can be done from afar, using concentrated beams of heat or essentially any kind of particles, as well as barrages of self-guided relativistic kill missiles. These are cheap, and can be deployed at the first signs of life evolving. Why would you keep plants and animals around if you are willing to murder any intelligent civilization that shows up? The purging of Earth has been due for a good few billion years at this point, so they're rather late. But even if they merely intend to engage in discussion, starting it now would be timely, because we are entering the phase of detectability. Their silence is odd, but not entirely implausible. Perhaps they wait for the last possible moment, in the hope that we decide against growth on our own.

Second, that's a rather risky endeavor. Even if such a race decides to expand, contrary to its own preaching, it can still only do so at a limited speed. A million years lets them cover a million light years; a billion years still covers just a small fraction of the entire universe, which might as well be infinite for all we know. This very moment, a civilization can form a few hundred million light years outside the empire of our villains, and grow into a very tough nut to crack before they even see it. How would they know what news light delivers next year? Can it be that something even older and bigger than them exists? In addition, by the time the tyrants learn about such a rival, the rival has also learned about the shameful acts they committed. Everything is visible in space. As a rule of thumb, you probably don't want to commit genocide on a grand scale if you have any sense of self preservation. However, you might not have one.

Third, it is quite hard to imagine such a coherent intent surviving even thousands of years, let alone millions. Just imagine that some cult in the middle ages comes up with the idea of doing something. Suppose they can spread over the world, and convert everyone to their religion. How long before cracks appear in this machinery? How long before people start to question whether all this effort is indeed the best way of spending our time here? No ideology has survived two thousand years unchanged so far. Granted, there might be entities more stable, more strictly aligned to goals. But for any system to survive with a stable goal for such a long time, it has to be deliberately built for the purpose, and even so, it is a stretch. Moreover, whoever designed the system must disappear, as they themselves represent a changing element in it. So they either had to die, or perhaps they turned themselves into this single purpose machine, by altering their minds to be more static. A gruesome way for a civilization to commit suicide. And such a static entity might not fare well against the unimaginable variety of emerging species: adaptive intelligence is required to tackle unforeseeable challenges, but adaptivity is exactly what we wanted to avoid.

In the end, a precise, stable, relentless murdering entity is not very likely to form, or to persist long enough to make a difference. Which explains why they're not here. But we can't entirely rule out this scenario. There is a slight chance of a civilization turning itself into a single purpose supermechanism that can maintain its integrity for hundreds of millions of years, enforcing its principles with an iron grip, while staying just flexible enough to stay ahead in the game. Unfortunately, this is one of the least scary possibilities, as we will soon see.

Creation

If the world was specifically created for us, that explains a few things. Alas, the situation is not so easy, because we still have questions to answer.

Why is the universe so big? Do we really need trillions of galaxies all around the place? Maybe God's intention is that one day we conquer all of them. But according to cosmology, large parts of the universe will always be inaccessible to us, not just practically, but theoretically. The universe does not have an observable boundary. Rather, due to the General Theory of Relativity and the large scale structure of the universe, we are physically incapable of seeing farther than a certain distance, a phenomenon referred to as the Cosmic Horizon. On top of that, as the universe expands, large areas move outside the Horizon, disappearing from us forever. Thus we see a lot of stuff today that, by the time we could get there, will have irreversibly disappeared. Why would God make that stuff? What is it for?

And why is the universe so weird? Most people are utterly incapable of grasping the Theory of Relativity or Quantum Theory. If God is anything like us, why would he create a universe that is so unintuitive and alien to human thinking?

The Creator has to be either crazy or completely outside of our understanding. Which is basically the same thing, as far as we are concerned. In both cases, we can't meaningfully infer or predict anything about him with any level of certainty. God is a mystery, and his actions are as predictable as a dice roll. Forget about the Bible; that thing is way too clear and simple to bear any resemblance to reality.

The Simulation Hypothesis

At its core, the Simulation Hypothesis is no different from a Creator. Quite literally, someone created this world, with us inside of it. The only difference, namely that the Creator is not some mythical figure but regular dudes, is inconsequential. Unfortunately, the Simulation Hypothesis suffers from much the same problems. Why is the simulation so big, and why does it have inaccessible regions? Why is the universe so weird?

The Simulation Hypothesis raises some additional questions. Does the simulation mimic the real world? Are our physical laws, and perhaps even the general structure of the universe, faithful to the real thing? Or did the designers come up with different rules? If the laws of physics are the real laws, that at least explains their weirdness. But it is still unclear why anyone would simulate a universe this big. And also, why they would simulate a world in the first place.

One obvious solution, as in the case of creation, is craziness. If the designer is totally alien to us, then it is no surprise if the design makes no sense. In fact, there is a proposal for such a designer: a Boltzmann Brain. Quantum Theory posits that matter can come into existence temporarily, with a small probability. For electrons and photons, very simple particles, the probability is actually considerable, and they pop in and out of existence all the time. It is only a matter of probability for more complex systems to pop into existence too. Atoms, even molecules could theoretically materialize out of nothing, although you might have to wait much longer than the universe is expected to exist in its current form. But no matter how complex an object is, it is only a matter of time before it shows up. Some even propose that the Big Bang itself was such a fluctuation.
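The "only a matter of time" point is really just expected-value arithmetic. A minimal sketch, with an assumed, arbitrarily tiny per-year probability (the numbers here are made up purely for illustration):

```python
# Toy illustration: however improbable a fluctuation is per unit time,
# over an unbounded timeline the expected number of occurrences grows
# without bound. The probability below is an arbitrary assumption.
p_per_year = 1e-70  # assumed chance per year of some complex fluctuation

for t in (1e10, 1e80, 1e150):  # years available in an eternal universe
    expected = p_per_year * t  # expected number of occurrences in t years
    print(f"over {t:.0e} years: ~{expected:.0e} expected events")
```

With enough time, even an event with a one-in-10^70-per-year chance is not just possible but expected in overwhelming numbers.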

But herein comes the interesting part. As far as we understand, the universe will exist as an entirely empty place forever. The era when there are stars and planets will be a blink of an eye in the timeline of the future. Black holes will dominate the landscape for much longer, but eventually they evaporate too, and then there will be nothing, for eternity. Which means there is plenty of time for entire brains to randomly materialize out of the vacuum.
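For a sense of scale, here is a toy comparison of those eras. The figures are rough order-of-magnitude estimates commonly quoted in discussions of the far future, assumed here purely for illustration:

```python
# Rough order-of-magnitude timescales in years; illustrative assumptions,
# not precise values.
STELLIFEROUS_ERA = 1e14   # roughly when the last stars burn out
BLACK_HOLE_ERA = 1e100    # roughly when the largest black holes evaporate

# The star-bearing era as a fraction of the black hole era:
fraction = STELLIFEROUS_ERA / BLACK_HOLE_ERA
print(f"stars exist for ~{fraction:.0e} of the black hole era")
```

Even the black hole era is nothing compared to the empty eternity that follows it, which is where the Boltzmann Brain argument gets its force.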

What if such a brain designed our simulation? Well, that's pretty bad news actually. Boltzmann Brains are the craziest creatures of all. Evolved creatures have some sort of reason; they have to have some common characteristics. They probably value life, at least their own. They evolved to deal with the real physical world, thus they understand it and fit it. In order to survive, they need some level of consistency and goal orientation. Most life evolves surrounded by other life forms, thus they learn to respect and communicate. If our creature is intelligent, we can expect a whole lot more from it, perhaps even friendliness, helpful attitude and empathy. Boltzmann Brains don't have any of that. They did not evolve, and they don't have any consistency. They lack morals, reason, goals, or any values a human would share.

Either way, Boltzmann Brain or otherwise, the sysadmins of our simulation must be some sort of crazy. They can do whatever they desire, and what they desire is anyone's guess.

Technological Suicide

We discussed how the current doomsday predictions fail to scare us in any way, and that we don't see anything that could possibly end human civilization. There might still be ways for this to work, but only under the most extreme assumptions. In order for any such doomsday scenario to explain the Fermi Paradox, it has to have some truly horrifying properties. As a reminder, we are looking for explanations that apply to all technological civilizations, without exception. Let's see what that implies.

The Technological Suicide must come without warning. By the time any civilization learns about it, it must be too late to avoid. As we discussed, we don't need radically different technology than what we have today. Thus if we can foresee, even with a slight chance, the deadliness of a technology, we can avoid using it and still grow to galactic scale. Many feared that the LHC might accidentally create a tiny black hole that could eventually consume the entire Earth and kill us all. It was laughed off by most scientists, and with good reason. But the point here is that we are cautious, and we see danger where there really isn't any. There must be civilizations even more cautious than us, and they might opt not to try a technology until absolutely sure they can do it safely. This was just an example of how unexpected the Technological Suicide needs to be. Way more unexpected than a tiny black hole formed by a particle accelerator. It must be expected by nobody at all, not even the most cautious folks out there.

In addition, the Technological Suicide must be absolutely lethal; even major devastation doesn't qualify, because if just a fraction of the civilization in question survives, they might remember the calamity that brought them down, and avoid it next time. The Technological Suicide must thoroughly destroy every single member of a race, to such a degree that a potential second civilization forming on the same planet can't even figure out what happened. If it is at all possible for another race to rise up on the same planet, that is.

Moreover, the Technological Suicide must be entirely inevitable. It is only a matter of time before it hits, no matter how peaceful, resourceful, smart, or resilient your culture is, whatever your religion, principles, or world views are. You might be small or big, slow or fast; the Technological Suicide inevitably comes, and kills you.

Finally, the Technological Suicide is imminent. Thanks to the exponential function, we can expect the growth of human civilization to leave a very visible mark on the solar system in a relatively short timespan. According to the Technological Suicide argument, that's not going to happen. Some time between two hundred and a thousand years from now, it has to strike. Perhaps sooner, if we take into account the possible differences in the order in which different races develop different technologies.
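The exponential-growth claim can be made concrete with a toy calculation. Assuming, purely for illustration, a modest 2% annual growth in energy use, it takes only on the order of 1,500 years to climb from humanity's present output to the total output of a star, a jump of roughly 13 orders of magnitude:

```python
import math

# Toy illustration of the exponential-growth argument.
# All figures are rough assumptions, not measurements.
CURRENT_POWER = 2e13   # watts: rough present-day human energy use (~20 TW)
SOLAR_OUTPUT = 3.8e26  # watts: approximate luminosity of the Sun
GROWTH_RATE = 0.02     # assumed 2% annual growth in energy use

# Years needed to grow from current use to the output of a whole star:
years = math.log(SOLAR_OUTPUT / CURRENT_POWER) / math.log(1 + GROWTH_RATE)
print(f"~{years:.0f} years to star-level energy use")
```

Even with much slower growth the timescale stays in the low thousands of years, which is why an expanding civilization is expected to become visible from afar so quickly.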

Dwell for a second on that last argument. As we said, we are almost ready to spread over the solar system. What if we stop everything right now, except what is needed to become an interstellar race? The Technological Suicide argument tells us that it still won't work. Something will destroy us before we can grow big enough to be visible from afar. What possible secrets might lurk beneath?

A Word of Hope

Not so optimistic, huh? Enslaved by crazy overlords or eradicated by mysterious forces. Who would have thought a hundred years ago that incomprehensible science is just a walk in the park compared to the true depths of the abyss we are about to look into.

Is there any hope? Well, it certainly does not hurt to have some. Other than that, we can't tell much.