Mars Will Not Save Us. Here Is What Might.
The multiplanetary argument rests on a comfortable assumption: that the problems that could make Earth uninhabitable will simply not follow us. They will.
The argument goes like this: Earth is fragile. A single sufficiently large asteroid, an engineered pandemic, a runaway climate event, a nuclear exchange — any of these could end our species. The solution, then, is to become multiplanetary. To put a second copy of humanity somewhere else, insuring against the catastrophic failure of the first. Mars is close enough, similar enough to Earth in basic chemistry, and just hostile enough to make us work for it. Elon Musk has been saying this for two decades. A generation of engineers has organized their careers around it.
There is something deeply seductive about this argument. It is also, on closer inspection, incomplete in ways that matter.
The Case For
Carl Sagan made a version of this argument long before it became a billionaire’s press release. In Pale Blue Dot, published in 1994, he wrote about the fragility of a civilization confined to a single point in space. Not as a reason for despair but as a reason for urgency. Everything we have loved, every institution we have built, every idea we have had — all of it is vulnerable in a way that our ancestors could not conceive of and that we have not yet fully metabolized.
The asteroid risk is real. The Chicxulub impactor, 66 million years ago, ended the Cretaceous in perhaps a matter of weeks. Its equivalent today would end us. The probability of a civilization-ending impact in any given century is low — estimates range from 0.01% to 0.001% — but a low probability compounded over geological time approaches certainty. We are simply choosing which timescale to care about.
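The compounding claim can be made concrete with a back-of-envelope calculation. A minimal sketch, assuming independent centuries with a constant per-century impact probability (the specific probability used below is the low end of the estimates above, chosen for illustration):

```python
def impact_probability(p_per_century: float, centuries: int) -> float:
    """Probability of at least one civilization-ending impact over
    a span of centuries, assuming each century is an independent
    trial with the same per-century impact probability."""
    return 1.0 - (1.0 - p_per_century) ** centuries

# Low end of the estimates above: 0.001% per century.
p = 0.00001

# Over a hundred centuries (10,000 years): still a remote risk.
print(impact_probability(p, 100))

# Over a million centuries (100 million years): near certainty.
print(impact_probability(p, 1_000_000))
```

The point of the sketch is the asymmetry: the per-trial risk never changes, yet the cumulative risk is set almost entirely by how long you let the clock run.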
The pandemic risk is real and accelerating. SARS-CoV-2 killed seven million people and disrupted the entire global economy — and it was, by the standards of pandemic history, a relatively mild pathogen. A pathogen with the lethality of Ebola and the transmissibility of measles has never existed in nature. It could, engineered or evolved.
Climate risk operates on a slower but more certain timeline. Outside the most aggressive mitigation scenarios, we are on track for two to three degrees of warming by 2100. The systems we are disrupting — thermohaline circulation, monsoon patterns, polar ice masses — are not linear in their responses. Tipping points cascade.
Against all of this, the multiplanetary argument seems reasonable. If one house burns down, it is good to have another.
The Problem With the Argument
The problem is not with multiplanetary expansion as an eventual goal. It is with the implicit premise that the problems that could make Earth uninhabitable will not follow us.
They will.
Consider what we are proposing to take to Mars. We are proposing to take human beings — a species with a 200,000-year history of tribalism, violence, resource competition, and short-term optimization. We are proposing to take the same cognitive architecture that produced the climate crisis, the same social structures that produced the inequality crisis, the same political psychology that produced the governance crisis. We are proposing to run the same software on different hardware and expect a different result.
This is not cynicism. It is a straightforward observation about the biology of the species we are trying to save.
The scenarios in which a Mars colony fails are overwhelmingly not meteor strikes or solar flares. They are the same scenarios in which any isolated, resource-constrained, power-unequal human community fails: internal conflict, institutional breakdown, the emergence of hierarchies that extract from rather than serve the group. We have run this experiment on Earth many thousands of times. The results are consistent.
There is also a second problem: even a successful Mars colony does not address the underlying vulnerability. A Mars colony of a few thousand people, entirely dependent on Earth for spare parts, medicine, seeds, and expertise, is not a backup copy of civilization. It is a very expensive hostage. By most estimates, a self-sufficient Mars colony would require a minimum population on the order of a million people, with full technological redundancy across every domain of knowledge and production. At current launch costs and trajectories, this is a project measured in centuries, not decades.
In the time it takes to build a genuinely self-sufficient off-world civilization, the risks we are supposedly insuring against will have had ample time to materialize.
What The Research Actually Suggests
The most robust interventions against existential risk are not interplanetary. They are institutional, biological, and epistemic. They happen here.
Pandemic preparedness: A 2021 paper in Nature Medicine estimated that the cost of achieving robust global pandemic surveillance and response infrastructure — the kind that could contain a novel pathogen within weeks rather than months — is approximately $10–15 billion per year. For context, humanity spends roughly $2 trillion per year on military expenditure. The gap between what we spend on killing each other and what we spend on surviving together is not a resource problem. It is a priority problem.
Biosecurity: The most dangerous scenario for engineered pandemic risk is not state-level bioweapons programs but the democratization of synthetic biology tools. CRISPR and related technologies have dropped the cost and expertise threshold for genetic engineering by orders of magnitude. The same tools that will transform medicine will, without adequate governance, make dangerous pathogens more accessible. This is a regulation and monitoring problem, not an engineering problem. It does not require Mars.
Governance of AI risk: The consensus among researchers who take AI risk seriously is that the primary risk is not a sudden “Terminator” scenario but a gradual misalignment — systems that pursue goals that are subtly, then catastrophically, wrong in ways humans cannot easily detect or correct. The mitigation is interpretability research, governance frameworks, and the development of international coordination mechanisms. Again: solvable here, on Earth, with existing resources.
Climate adaptation: The 2022 IPCC report made clear that even in the most ambitious mitigation scenarios, significant adaptation will be required. But it also made clear that the technologies for mitigation — solar, wind, grid storage, electrification — are already cost-competitive and scaling rapidly. The obstacle is not technology. It is the political economy of transition: stranded assets, incumbent industries, and the governance of distributional consequences.
The pattern in all of these is the same: the bottleneck is not hardware. It is the capacity for collective action at civilizational scale.
What Might Actually Save Us
The philosopher William MacAskill, in What We Owe the Future, argues for what he calls “longtermism” — the position that we should weight the wellbeing of future generations heavily in our moral calculations. On longtermist grounds, the most important interventions are those that reduce the probability of civilizational collapse, because a collapsed civilization forecloses all future value.
His analysis, and that of the researchers at the Global Priorities Institute at Oxford and the Center for Human-Compatible AI at Berkeley, consistently points to the same cluster of interventions:
Improving the quality of global governance. The systems through which we make collective decisions — democratic institutions, international organizations, treaty frameworks — were designed for a world in which most consequential decisions could be made at the national level. They are structurally inadequate for problems that are global in scope and require coordination across competing interests. Institutional reform is not glamorous. It is also not optional.
Investing in biosecurity and pandemic preparedness. The gap between what is needed and what is spent is quantifiable and closeable. The political will to close it is intermittent. Making it consistent is a tractable problem.
Developing robust AI governance. The question is not whether to develop AI but how to develop it in ways that are interpretable, auditable, and aligned with human values. This requires sustained collaboration between technical researchers and policymakers — a combination that does not come naturally to either group.
Building cultural and epistemic resilience. The social technologies we use to share information, form beliefs, and coordinate action are under significant stress. Disinformation, epistemic fragmentation, and the erosion of shared factual baselines are not merely cultural problems. They are civilizational vulnerabilities. A society that cannot agree on facts cannot respond coherently to existential threats.
None of these is as satisfying as a rocket launch. None of them produces photographs of red landscapes that inspire a generation. They require sustained investment in the slow, unsexy work of institution-building, governance reform, and cultural change.
They are also, by the available evidence, more likely to work.
The Argument’s Real Value
None of this means Mars is worthless. As a long-horizon goal — as a destination for a civilization that has already solved its existential governance problems — becoming multiplanetary makes sense. As a way of giving ambitious people something difficult and meaningful to work on, it is irreplaceable. The engineering advances produced in pursuit of Mars will benefit Earth directly: materials science, life support, renewable energy, medicine in extreme conditions.
What it should not be is a substitute for the harder work of becoming a civilization that deserves to survive.
The multiplanetary dream assumes that we are worth copying. That the copy we make will be better than the original. That the act of going somewhere new will free us from the patterns we carry in our bodies, our institutions, and our psychology.
We have made this assumption before. Every colonial project in human history was premised on it. Every new city, every new country, every fresh start — all of them premised on the idea that if we could just get away from what was broken, we could build something better.
Sometimes we did. More often, we rebuilt what we had left behind.
The question is not whether to go to Mars. The question is who goes, and in what condition. A species that has learned to govern itself, to protect its most vulnerable, to coordinate across difference and time — that species would build something worth having on another planet.
A species that has not learned these things will build, on Mars, what it has always built: a mirror.