Why the far future matters

From Milliongenerations
Revision as of 20:40, 19 November 2023 by Michael

Initial thoughts based on Michael's blog post for IF for Intergenerational Fairness Day Nov 2023. Please feel free to improve and correct the thoughts below.

Our task concerning the future is not to foresee it but to make it possible.

(Antoine de Saint-Exupéry - ‘Citadelle’)

Long time

Picture by Dave Price, CC-BY-SA-2.0

A bird flying high above a certain hill in England's Southwest is treated to a peculiar view: white outlines of a galloping horse in the grass below, roughly the size of a football field. Left untended for a decade or two, these lines would grow over and disappear. Yet this ‘Uffington White Horse’ has been around for about three thousand years. Who made it, or why, has long been forgotten. But every year people continue to maintain it. Something that has been around for a long time clearly matters.

This page serves to argue that the far future matters today and that considering it can be very useful. The far future might seem abstract, but the dangers facing this generation are pressing. We live at a pivotal time. Our capabilities burden us with a responsibility no previous generation has had. The vast majority of all who will ever be alive could live in the future. That future could be astronomically long, and there could be an enormous number of sentient beings. Their lives could be even better than ours. Their existence, however, is not inevitable. Very little stops us from harming them. Our legal systems give no rights to those who do not yet exist, and it is hard to feel compassion for them. There is no automatic safety net to ensure that progress continues safely. Our own actions are by far the biggest risks. Failing to survive progress safely would be catastrophic: it might well cause human extinction, and maybe even end all sentient life on this planet. That would be an unimaginable tragedy, because of all who would perish, and even worse because of the lives prevented. We can and must make a sincere effort to make it likely that there actually will be anyone in the future. Thinking through our role in securing a long succession of life can guide our efforts and puts our own conflicts in perspective.


Our universe seems to be about 14 billion years old. Physics so far doesn’t tell us why it started. The sun and the earth have existed for about one third of that time. The earth is by far the most hospitable place within an unimaginably large distance - at least ten light years, possibly much more. The earth could remain a very hospitable place for at least many hundreds of millions of years. The future could, and should, be very long.

Modern humans have lived here for about 200,000 years, able to collaborate and to acquire, refine and pass on knowledge and tools. The resulting exponential progress has become increasingly visible in the last few centuries, and particularly in the last few decades.


We, who now live on this planet, are very fortunate. Progress allows 8 billion of us to live, and most of us live lives that are more comfortable and longer than at any point in history. While our societies can certainly still be improved, and not all serve their populations equally well, on a global scale we have never had it better. It helps to be grateful. Since antiquity, however, there have been people longing for a glorious past where everything was better. That never made sense, except during brief periods of extreme local turmoil, and the fallacy has never been as wrong as it is today. The past has one big advantage: it had a future. That is what we need to make likely for our own time.

For millennia, many have awaited an end of the world. So far that hasn’t happened. Never before have we ourselves been powerful enough to bring it about. Now we are, and our technical capabilities keep increasing. Our time is special.


Some claim that humanity will soon go extinct, or even that it should. Careful what you wish for. The perishing of all would dwarf the worst historical atrocities that we rightly deplore. It would prevent anyone in the future from benefiting from what we know and from what might yet be discovered. It is uncertain whether other species would benefit from our absence.

It has been suggested that adding a life is morally neutral, so that nonexistence of future beings would not matter. That strikes me as very counterintuitive. I regard being able to perceive as the ultimate privilege. What better way to show gratitude for our existence than to strive to let others enjoy that privilege, too? Preferably beings that can build on knowledge of earlier generations and pass it on refined. Life on this planet should be possible for a very long time. Why not also the passing down of information between sentient beings? What can we do to make that likely?

Avoiding extinction

Ensuring the future certainly requires avoiding extinction. An act of God, or of simulation overlords, might end or change all life as we know it at any time. There is nothing we could do about that. These efforts will focus on what might be within our control.

It makes sense to focus on the important things. Extinction from natural causes is very unlikely in the next millennia. The dinosaurs found out that extinction by asteroid, perhaps aided by volcanism, is certainly possible - but only eventually: dinosaurs ruled the earth for nearly 200 million years. NASA has looked at the sky: really dangerous impacts are extremely unlikely, and we might even soon be able to reduce this remote threat further. Nature could have cards up its sleeve, but few consider it malicious. Colonizing the Moon or Mars would create a backup against some of these remote natural threats, but it won’t protect against our own folly - a far bigger danger.

Surprisingly, despite serious searches since the 1960s, we see no examples of life elsewhere in the universe. We might of course be the first to try to sustain progress over long periods. Or surviving progress might be very difficult, limiting advanced societies to brief periods with a terrible end.

Many technologies we’ve invented favour offence over defence. As progress makes everyone more powerful, this turns our conflicts into terrible risks. We need to defuse our conflicts, focus on developing primarily defensive technologies, and regulate those that could be used offensively.

Mutually assured destruction on hair-trigger alert hardly sounds like a safe bet for long-term survival. We have had several lucky escapes from nuclear Armageddon (I can only write this because of, for example, Vasily Arkhipov’s cool-headedness in 1962 and Stanislav Petrov’s disobedience in 1983). A nuclear winter might not kill all of humanity, but very few should expect to survive, certainly in the Northern hemisphere. Yet the decades-old prospect of burning most of us and then starving the rest does not seem terrible enough for us to find ways out of our conflicts. Organisations committed to reducing the risk of nuclear war have struggled to maintain funding and influence in recent years. Since humans clearly have the potential for violence, and since the path to leadership is a struggle for one and all, we can’t expect the personalities of our leaders to be naturally peaceful. We need to empower organisations that reduce conflicts and implement structures that make it impossible for anyone to gain power by playing to some groups at the expense of others. True, I don’t know how to abolish nuclear weapons, nor conflicts. But the long-term picture can help us put conflicts in perspective: in a few generations, any descendants will be common descendants. There is only one kind of human on this planet!

There might be even worse threats than our conflicts: risks that could plausibly kill not just most of us, but everyone. People have in the past tried to use bombs as well as chemical and biological weapons to bring down societies. Every year, progress reduces the IQ required to kill everyone; it makes it ever easier to be a terrorist. Natural pandemics can be brutal, but they haven’t wiped all of us out in our long past, making it unlikely that they will now. Biotechnology, however, might very well come up with pathogens that could. Suicidal misanthropes would become a very real threat that 007 or other heroes are unlikely to save us from for long. Sensible proposals to counter that growing risk exist and should be adopted, such as monitoring airplane sewage for exponentially proliferating gene sequences, pandemic preparedness, and not researching candidate natural pathogens.
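To make the sewage-monitoring idea concrete, here is a minimal sketch of how one might flag a gene sequence whose read counts grow exponentially across evenly spaced sampling dates, by fitting a line to the logarithm of the counts. This is my own illustration, not part of any actual surveillance system; the function name, thresholds, and data are hypothetical, and real pipelines would be far more involved.

```python
import math

def flags_exponential_growth(counts, min_fold=2.0, min_r2=0.9):
    """Return True if a sequence's sample counts look exponential.

    counts: read counts for one gene sequence across evenly spaced
    sampling dates (hypothetical data). Exponential growth means the
    log of the counts grows linearly, so we fit a line to the logs
    and check its slope and goodness of fit.
    """
    if len(counts) < 3 or any(c <= 0 for c in counts):
        return False
    logs = [math.log(c) for c in counts]
    n = len(logs)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(logs) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, logs))
    slope = sxy / sxx
    # Coefficient of determination (R^2) of the log-linear fit
    syy = sum((y - my) ** 2 for y in logs)
    r2 = (sxy ** 2) / (sxx * syy) if syy > 0 else 0.0
    # A slope of at least log(min_fold) means the counts multiply by
    # at least min_fold per sampling interval
    return slope >= math.log(min_fold) and r2 >= min_r2

# A rapidly multiplying series should be flagged; a flat one should not.
print(flags_exponential_growth([10, 25, 60, 150, 380]))  # True
print(flags_exponential_growth([50, 48, 52, 49, 51]))    # False
```

The point of the sketch is only that such screening is computationally trivial; the hard parts are sequencing logistics, distinguishing novel pathogens from benign background, and acting on an alarm in time.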

Artificial general intelligence, no longer far-off science fiction, might well be an even more urgent threat. The arrival of a superior intelligence on this planet might solve many problems, but it might also endanger a lot more than jobs and could well lead to human extinction. Some regard it as a welcome and inevitable step in evolution, a new species to supersede humans. Others see a path to a long future in which a benevolent superintelligence wrests control from us to enforce an equitable and lasting society. The absence of cosmic examples, however, should again encourage us to be cautious before signing away our legacy to the greed of investors. Being sentient might be possible for any sufficiently complicated information-processing entity. But we know very little about that. We might also be in possession of a gift that doesn’t transfer easily. I regard sentience as a necessary part of any goal I want to strive for. For me, there would be no point in a universe full of technological marvels if there were no one to witness it. Fortunately, regulation for safe artificial intelligence has received much attention recently, as at Bletchley Park last month. Yet a lot remains to be done; it is not at all clear what effective measures could look like. We only have one chance to get this right. And we certainly can’t expect to remain in control once a more capable entity arrives.

A long future: existential security

Learning how to maintain a lasting civilisation requires even more than avoiding the biggest risks of extinction. A lot remains to be done. Just societies are more stable. Many young people grow up in comparative luxury but can’t afford housing and seem disillusioned and disempowered. Planetary boundaries may depend on how things are done, but they are real. Climate change, resource depletion, mass extinction of species and inequality require attention, not a regulatory pause. For decades we have known that business as usual is no longer an option, but we have little to show for it. Growth of anything, in a long-term context, is necessarily brief. We don’t know how far things can grow; we can’t stop growth and mustn’t try. But we do need to develop economic theories and social structures that could one day do without it.

Considering the long view can help us. It helps us take into account those who could yet be, and lets us realize that they could be the vast majority of all who will ever be alive. Jonas Salk said that the question we need to ask ourselves is ‘Are we being good ancestors?’. This perspective might not be the only thing we should worry about, but it certainly helps to realize that those who do not yet exist are the majority. We should prioritise their interests and protect them. That can only be done one step at a time: each generation needs to make the world safer for the one after it. But imagine for once that over a million years there still are some who know what we know, and a lot more. We should seriously consider that there is a long future ahead, that we are only at the beginning of a very long tradition, one generation after another. And all those generations might even bother to maintain long-term art like the Uffington White Horse, so that those after them can enjoy it, too.

Further reading