<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
<br>
<ul class="art-meta">
<li>25 Feb 2021</li>
<li>The New York Review of Books</li>
<li>Jim Holt</li>
</ul>
<h1>The Power of Catastrophic Thinking</h1>
<div class="clear">
<div class="art-layout-a-2x" id="testArtCol_a"> <span
class="art-object art-mainimage" id="artObjectWrap"
style="height: 60.8em;"><a><img
src="https://t.prcdn.co/img?regionguid=d351eb11-2832-45d9-abc1-0bc93d123d29&scale=176&file=9khf2021022500000000001001®ionKey=Fw4d%2b5%2fQFNEq9D9h%2f6g4BA%3d%3d"
id="artObject" width="289" height="359"></a></span><span
class="art-imagetext">Toba Khedoori: Untitled
(Clouds—Drawing), 2004–2005</span>
<p><b>The Precipice: Existential Risk and the Future of Humanity<br>
by Toby Ord.<br>
Hachette, 468 pp., $30.00; $18.99 (paper; to be published in March)</b></p>
<p> T. S. Eliot, in his 1944 essay “What Is a Classic?,”
complained that a new kind of provincialism was becoming
apparent in our culture: “a provincialism, not of space, but
of time.” What Eliot had in mind was provincialism about the
past: a failure to think of dead generations as fully real.
But one can also be guilty of provincialism about the future:
a failure to imagine the generations that will come after us,
to take seriously our responsibilities toward them. </p>
<p> In 1945, not long after Eliot wrote that essay, the first
atomic bomb was exploded. This made the matter of
provincialism about the future all the more acute. Now,
seemingly, humanity had acquired the power to abolish its own
future. A decade later Bertrand Russell and Albert Einstein
issued a joint manifesto warning that nuclear weaponry posed
the risk of imminent human extinction, of “universal death.”
(In a letter to Einstein, Russell also predicted that the same
threat would eventually be posed by biological warfare.) </p>
<p> By the early 1980s, more precise ideas were being put
forward about how this could occur. In 1982 Jonathan Schell,
in a much-discussed series of articles in The New Yorker
(later published as a book, The Fate of the Earth), argued
that nuclear war might well result in the destruction of the
ozone layer, making it impossible for human life to survive on
earth. In 1983 Carl Sagan and four scientific colleagues
introduced the “nuclear winter” hypothesis, according to which
firestorms created by a nuclear exchange, even a limited one,
would darken the upper atmosphere for years, causing global
crop failures, universal famine, and human extinction—an
alarming scenario that helped move Ronald Reagan and Mikhail
Gorbachev to negotiate reductions in their countries’ nuclear
arsenals. Neither Schell nor Sagan was a philosopher. Yet each
raised a philosophical point: with the advent of nuclear
weapons and other dangerous new technologies, we ran the risk
not only of killing off all humans alive today, but also of
depriving innumerable generations of the chance to exist.
Humanity’s past has been relatively brief: some 300,000 years
as a species, a few thousand years of civilization. Its
potential future, by contrast, could extend for millions or
billions of years, encompassing many trillions of sentient,
rational beings yet to be born. It was this future—the
adulthood of humanity—that was now in jeopardy. “If our
species does destroy itself,” Schell wrote, “it will be a
death in the cradle—a case of infant mortality.” </p>
</div>
<div class="art-layout-b-2x" id="testArtCol_b">
<p> The idea that potential future lives as well as actual ones
must be weighed in our moral calculus was soon taken up by
professional philosophers. In 1984 Derek Parfit published his
immensely influential treatise Reasons and Persons, which, in
addition to exploring issues of rationality and personal
identity with consummate subtlety, also launched a new (and currently
flourishing) field of moral philosophy known as “population
ethics.”1 At its core is this question: How ought we to act
when the consequences of our actions will affect not only the
well-being of future people but their very existence? </p>
<p> It was on the final pages of Reasons and Persons that Parfit
posed an arresting hypothetical. Consider, he said, three
scenarios: </p>
<p> (1) World peace. </p>
<p> (2) A nuclear war that kills 99 percent of the world’s
population. </p>
<p> (3) A nuclear war that kills 100 percent of the world’s
population. </p>
<p> Clearly, he observed, (2) is worse than (1), and (3) is
worse than (2). But which is the greater of the two moral
differences? Most people, Parfit guessed, would say the
difference between (1) and (2) is greater than the difference
between (2) and (3). He disagreed. “I believe that the
difference between (2) and (3) is very much greater,” he
wrote. Killing off that last one percent, he observed, would
mean destroying the entire future of humanity—an inconceivably
vast reduction in the sum of possible human happiness. </p>
<p> 1 This field is sometimes also called “population axiology,”
from the Greek word for “value,” axía. </p>
<p> Toby Ord, the author of The Precipice, studied at Oxford
under Parfit (who died in 2017) and calls him his “mentor.”
Today Ord too is a philosopher at Oxford and among the most
prominent figures who think deeply and systematically about
existential risks to humanity.2 Ord is a model of the engaged
thinker. In addition to his academic work in applied ethics,
he has advised the World Health Organization, the World Bank,
and the British government on issues of global health and
poverty. He helped start the “effective altruism” movement and
founded the organization Giving What We Can, whose members
have pledged more than $2 billion to “effective charities.”
(Their donations to charities that distribute malaria nets
have already saved more than two thousand lives.) The
society’s members are governed by a pledge to dedicate at
least a tenth of what they earn to the relief of human
suffering, which grew out of a personal commitment that Ord
had made. He has now made a further pledge to limit his
personal spending to £18,000 a year and give away the rest.
And he tells us that he has “signed over the entire advance
and royalties from this book to charities helping protect the
long-term future of humanity.” </p>
<p> 2 Others include Nick Bostrom, who directs the Future of
Humanity Institute at Oxford (and who was profiled in The New
Yorker in 2015); Martin Rees, Britain’s astronomer royal and
the author of Our Final Hour (2003); and John Leslie, a
Canadian philosopher whose book The End of the World (1996)
furnished the first analytical survey of the full range of
human-extinction possibilities. </p>
<p> Ord is, in short, an admirable man. And The Precipice is in
many ways an admirable book. In some 250 brisk pages, followed
by another 200 or so pages of notes and technical appendices,
he gives a comprehensive and highly readable account of the
evidence bearing on various human extinction scenarios. He
tells harrowing stories of how humanity has courted
catastrophe in the past—nuclear close calls, deadly pathogens
escaping labs, and so forth. He wields probabilities in a
cogent and often counterintuitive manner. He surveys current
philosophical thinking about the future of humanity and
addresses issues of “cosmic significance” with a light touch.
And he lays out an ambitious three-step “grand strategy” for
ensuring humanity’s flourishing into the deep future—a future
that, he thinks, may see our descendants colonizing entire
galaxies and exploring “possible experiences and modes of
thought beyond our present understanding.” </p>
<p> These are among the virtues of The Precipice. Against them,
however, must be set two weaknesses, one philosophical, the
other analytical. The philosophical one has to do with the
case Ord makes for why we should care about the long-term
future of humanity—a case that strikes me as incomplete. Ord
confesses that as a younger man he “sometimes took comfort in
the idea that perhaps the outright destruction of humanity
would not be bad at all,” since merely possible people cannot
suffer if they never come into existence. His reasons for
changing his mind—for deciding that safeguarding humanity’s
future “could well be our most important duty”—turn out to be
a mixture of classical utilitarian and “ideal goods”–based
considerations that will be familiar to philosophers. But he
fails to take full account of why the future disappearance of
humanity should matter to us, the living, in the here and now;
why we should be motivated to make sacrifices today for
potential future people who, if we don’t make those
sacrifices, won’t even exist. From this philosophical
weakness, which involves a why question, stems an analytical
weakness, which involves a how much question: How much should
we be willing to sacrifice today in order to ensure humanity’s
long-term future? Ord is ethically opposed to the economic
practice of “discounting,” which is a way of quantitatively
shrinking the importance of the far future. I’m with him
there. But this leaves him with a difficulty that he does not
quite acknowledge. If we are obliged to weigh the full
(undiscounted) value of humanity’s potential future in making
our decisions today, we are threatened with becoming moral
slaves to that future. We will find it our duty to make
enormous sacrifices for merely potential people who might
exist millions of years from now, while scanting the welfare
of actual people over the next few centuries. And the
mathematics of this, as we shall see, turn out to be perverse:
the more we sacrifice, the more we become obliged to
sacrifice. </p>
<p>This is not merely a theoretical problem. It leads to a
distorted picture of how we should distribute our present
moral concerns, suggesting that we should be relatively less
worried about real and ongoing developments that will gravely
harm humanity without wiping it out completely (like climate
change), and relatively more worried about notional threats
that, however unlikely, could conceivably result in human
extinction (like rogue AI). Ord does not say this explicitly,
but it is implied by his way of thinking. And it should give
us pause. </p>
<p> What is the likelihood that humanity will survive even the
present century? In 1980 Sagan estimated that the chance of
human extinction over the next hundred years was 60
percent—meaning that humanity had less than even odds of
making it beyond 2080. A careful risk analysis, however,
suggests that his estimate was far too pessimistic. Ord’s accounting
puts the existential risk faced by humanity in the current
century at about one in six: much better, but still the same
odds as Russian roulette. He arrives at this estimate by
surveying the “risk landscape,” whose hills and peaks
represent the probabilities of all the various threats to
humanity’s future. This landscape turns out to have some
surprising features. </p>
<p> What apocalyptic scenario looms largest in your mind? Do you
imagine the world ending as the result of an asteroid impact
or a stellar explosion? In a nuclear holocaust or a global
plague caused by biowarfare? The former possibilities fall
under the category of “natural” risks, the latter under
“anthropogenic” (human-caused) risks. Natural risks have
always been with us: ask the dinosaurs. Anthropogenic risks,
by contrast, are of relatively recent vintage, dating from the
beginning of the atomic era in 1945. That, Ord says, was when
“our rapidly accelerating technological power finally reached
the threshold where we might be able to destroy ourselves,” as
Einstein and Russell warned at the time. </p>
<p> Which category, natural or anthropogenic, poses the greater
threat to humanity’s future? Here it is not even close. By
Ord’s reckoning, the total anthropogenic risk over the next
century is a thousand times greater than the total natural
risk. In other words, humanity is far more likely to commit
suicide than to be killed off by nature. It has thus entered a
new age of unsustainably heightened risk, what Ord calls “the
Precipice.” </p>
<p> We know that the extinction risk posed by natural causes is
relatively low because we have plenty of actuarial data.
Humans have been around for about three thousand centuries. If
there were a sizable per-century risk of our perishing because
of a nearby star exploding, or an asteroid slamming into the
earth, or a supervolcanic eruption blackening the sky and
freezing the planet, we would have departed the scene a long
time ago. So, with a little straightforward math, we can
conclude that the total risk of our extinction by natural
causes over the next century is no more than one in 10,000.
(In fact, nearly all of that risk is posed by the
supervolcanic scenario, which is less predictable than an
asteroid impact or stellar explosion.) If natural risks were
all that we had to worry about, Homo sapiens could expect to
survive on earth for another million years—which, not
coincidentally, is the longevity of a typical mammalian
species. </p>
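<p> The actuarial logic here can be made concrete with a rough
calculation. The sketch below is an illustration of the
reasoning rather than Ord’s own method; it assumes a constant
per-century risk and roughly three thousand centuries of human
history. </p>
<pre><code>
# A back-of-envelope check of the actuarial argument. If natural
# extinction risk were r per century, the chance of our having
# survived ~3,000 centuries would be (1 - r) ** 3000.

def survival_probability(risk_per_century: float, centuries: int = 3000) -> float:
    """Probability of surviving `centuries` at a constant per-century risk."""
    return (1 - risk_per_century) ** centuries

print(survival_probability(1 / 10_000))  # ~0.74: consistent with our track record
print(survival_probability(1 / 1_000))   # ~0.05: surviving this long would be a fluke

# At one in 10,000 per century, expected survival is about 10,000
# more centuries, i.e. roughly a million years, the typical
# lifespan of a mammalian species.
print(1 / (1 / 10_000))  # 10000.0 centuries
</code></pre>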
<p> Over, then, to the anthropogenic category. Here, as Ord
observes, we have hardly any data for calculating risks. So
far, we’ve survived the industrial era for a mere 260 years
and the nuclear era for 75. That doesn’t tell us much, from a
statistical point of view, about whether we’ll get through
even the next century. So we have to rely on scientific
reasoning. And such reasoning suggests that the greatest
human-made dangers to our survival are not what you might
think. </p>
<p> Start with the seemingly most obvious one: nuclear war. How
could that result in the absolute extinction of humanity? It
is often claimed that there are enough nuclear weapons in the
world today to kill off all humans many times over. But this,
as Ord observes, is “loose talk.” It arises from naively
extrapolating from the destruction visited on Hiroshima. That
bomb killed 140,000 people. Today’s nuclear arsenal is
equivalent to 200,000 Hiroshima bombs. Multiply these two
numbers, and you get a death toll from an all-out nuclear war
of 30 billion people—about four times the world’s current
population. Hence the “many times over” claim. Ord points out
that this calculation makes a couple of big mistakes. First,
the world’s population, unlike Hiroshima’s, is not densely
concentrated but spread out over a wide land area. There are
not nearly enough nuclear weapons to hit every city, town, and
village on earth. Second, today’s bigger nuclear bombs are
less efficient at killing than the Hiroshima bomb was.3 A
reasonable estimate for the death toll arising from the local
effects of a full-scale nuclear war—explosions and firestorms
in large cities—is 250 million: unspeakable, but a long way
from the absolute extinction that is Ord’s primary worry. </p>
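<p> The arithmetic behind the “many times over” claim, and the
reason it misleads, can be spelled out in a few lines. The
sketch below uses the round numbers quoted above; the
two-thirds scaling is the rule cited in footnote 3. </p>
<pre><code>
# The naive extrapolation Ord calls "loose talk."
hiroshima_deaths = 140_000        # deaths from the Hiroshima bomb
arsenal_in_hiroshimas = 200_000   # today's arsenal, in Hiroshima-equivalents

naive_toll = hiroshima_deaths * arsenal_in_hiroshimas
print(f"{naive_toll:,}")          # 28,000,000,000: the review's "30 billion" figure

# Two reasons it fails: people are spread out rather than packed
# into cities, and blast damage grows sublinearly with yield
# (roughly as yield ** (2/3)), so one big bomb destroys far less
# than the same tonnage split into Hiroshima-sized bombs.
print(1_000 ** (2 / 3))           # ~100: a 1,000x bigger bomb, only ~100x the damage
</code></pre>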
<p> That leaves the global effects of nuclear war to consider.
Fallout? Spreading deadly radiation across the entire surface
of the earth would require a nuclear arsenal ten times the
size of the current one. Destruction of the ozone layer? This
was the danger cited by Schell in The Fate of the Earth, but
the underlying theory has not held up. Nuclear winter? Here
lies the greatest threat, and it is one that Ord examines in
fascinating (if depressing) detail, before coming to the
conclusion that “nuclear winter appears unlikely to lead to
our extinction.” As for the chance that it would lead merely
to the unrecoverable collapse of civilization—another form of
“existential catastrophe”—he observes that New Zealand at
least, owing to its coastal location, would likely survive
nuclear winter “with most of their technology (and
institutions) intact.” A cheerful thought. </p>
<p> All told, Ord puts the existential risk posed by nuclear war
over the next century at one in one thousand, a relatively
small peak in the risk landscape.3 </p>
<p> 3 The blast damage scales up as the two-thirds power of the
bomb’s kilotonnage—a fun fact for those who, like Herman Kahn,
enjoy thinking about the unthinkable. </p>
<p> So whence the rest of the one-in-six risk figure he arrives
at? Climate change? Could
global warming cause unrecoverable collapse or even human
extinction? Here too, Ord’s prognosis, though dire, is not so
dire as you might expect. On our present course, climate
change will wreak global havoc for generations and drive many
nonhuman species to extinction. But it is unlikely to wipe out
humanity entirely. Even in the extreme case where global
temperatures rise by as much as 20 degrees centigrade, there
will still be enough habitable land mass, fresh water, and
agricultural output to sustain at least a miserable remnant of
us. </p>
<p> There is, however, at least one scenario in which climate
change might indeed spell the end of human life and
civilization. Called the “runaway greenhouse effect,” this
could arise—in theory—from an amplifying feedback loop in
which heat generates water vapor (a potent greenhouse gas) and
water vapor in turn traps heat. Such a feedback loop might
raise the earth’s temperature by hundreds of degrees, boiling
off all the oceans. (“Something like this probably happened on
Venus,” Ord tells us.) The runaway greenhouse effect would be
fatal to most life on earth, including humans. But is it
likely? Evidence from past geological eras, when the carbon
content of the atmosphere was much higher than it is today,
suggests not. In Ord’s summation, “It is probably physically
impossible for our actions to produce the catastrophe—but we
aren’t sure.” </p>
<p> So he puts the chance of existential doom from climate
change over the next century at one in one thousand—not quite
negligible, but still a comparatively small peak in the risk
landscape. He assigns similarly modest odds to our being
doomed by other types of environmental damage, like resource
depletion or loss of biodiversity. (For me, one of the saddest
bits in the book is the claim that humans could survive the
extinction of honeybees and other pollinators, whose
disappearance “would only create a 3 to 8 percent reduction in
global crop production.” What a world.) </p>
<p> If neither nuclear war nor environmental collapse accounts
for the Russian roulette–level threat of doom we supposedly
face over the next century, then what does? In Ord’s analysis,
the tallest peaks in the existential risk landscape turn out
to be “unaligned artificial intelligence” and “engineered
pandemics.” </p>
<p> Start with the lesser of the two: pandemic risk. Natural
pandemics have occurred throughout the existence of the human
species, but they have not caused our extinction. The worst of
them, at least in recorded history, was the Black Death, which
came to Europe in 1347 and killed between one quarter and one
half of its inhabitants. (It also ravaged the Middle East and
Asia.) The Black Death “may have been the greatest catastrophe
humanity has seen,” Ord observes. Yet by the sixteenth century
Europe had recovered. In modern times, such “natural”
pandemics are, because of human activities, in some ways more
dangerous: our unwholesome farming practices make it easy for
diseases to jump from animals to humans, and jet travel
spreads pathogens across the globe. </p>
<p> Still, the fossil record suggests that there is only a tiny
per-century chance that a natural pandemic could result in
universal death: about one in 10,000, Ord estimates. </p>
<p> Factor in human mischief, though, and the odds shorten
drastically. Thanks to biotechnology, we now have the power to
create deadly new pathogens and to resurrect old ones in more
lethal and contagious forms. As Ord observes, this power will
only grow in the future. What makes biotech especially
dangerous is its rapid “democratization.” Today, “online DNA
synthesis services allow anyone to upload a DNA sequence of
their choice then have it constructed and shipped to their
address.” A pandemic that would wipe out all human life might
be deliberately engineered by “bad actors” with malign intent
(like the Aum Shinrikyo cult in Japan, dedicated to the
destruction of humanity). Or it might result from
well-intentioned research gone awry (as in 1995 when
Australian scientists released a virus that unexpectedly
killed 30 million rabbits in just a few weeks). Between
bioterror and bio-error, Ord puts the existential risk from an
“engineered pandemic” at one in thirty: a major summit in the
risk landscape. </p>
<p> That leaves what Ord deems the greatest of all
existential threats over the next century: artificial
intelligence. And he is hardly eccentric in this judgment.
Fears about the destructive potential of AI have been raised
by figures like Elon Musk, Bill Gates, Marvin Minsky, and
Stephen Hawking.4 </p>
<p> How might AI grow potent enough to bring about our doom? It
would happen in three stages. First, AI becomes able to learn
on its own, without expert programming. This stage has already
arrived, as was demonstrated in 2017 when the AI company
DeepMind created a neural network that learned to play
Kasparov-level chess on its own in just a few hours. Next, AI
goes broad as well as deep, rivaling human intelligence not
just in specialized skills like chess but in the full range of
cognitive domains. Making the transition from specialized AI
to AGI—artificial general intelligence—is the focus of much
cutting-edge research today. Finally, AI comes not just to
rival but to exceed human intelligence—a development that,
according to a 2016 survey of three hundred top AI
researchers, has a fifty-fifty chance of occurring within four
decades, and a 10 percent chance of occurring in the next five
years. </p>
<p> But why should we fear that these ultra-intelligent machines,
assuming they do emerge, will go rogue on us? Won’t they be
programmed to serve our interests? That, as it turns out, is
precisely the problem. As Ord puts it, “Our values are too
complex and subtle to specify by hand.” No matter how careful we
are in drawing up the machine’s “reward function”—the rule-like
algorithm that steers its behavior—its actions are bound to
diverge from what we really want. Getting AI in sync with human
values is called the “alignment problem,” and it may be an
insuperable one. Nor have AI researchers figured out how to
make a system that, when it notices that it’s misaligned in
this way, updates its values to coincide with ours instead of
ruthlessly optimizing its existing reward function (and
cleverly circumventing any attempt to shut it down). What
would you command the superintelligent AI system to do?
“Maximize human happiness,” perhaps? The catastrophic result
could be something like what Goethe imagined in “The
Sorcerer’s Apprentice.” And AI wouldn’t need an army of robots
to seize absolute power. It could do so by manipulating humans
to do its destructive bidding, the way Hitler, Stalin, and
Genghis Khan did. </p>
<p> 4 There are also prominent skeptics—like Mark Zuckerberg,
who has called Musk “hysterical” for making so much of the
alleged dangers of AI. </p>
</div>
<div class="art-layout-b-2x"> <span class="art-object
art-mainimage" id="artObjectWrap" style="height: 32em;"><a><img
src="https://t.prcdn.co/img?regionguid=fbc1deeb-2a4e-487e-bbd8-526cdb833765&scale=176&file=9khf2021022500000000001001®ionKey=PcLv8INb4subwnmWdOMtjg%3d%3d"
id="artObject" width="555" height="367"></a></span>
<p> “The case for existential risk from AI is clearly
speculative,” Ord concedes. “Indeed, it is the most
speculative case for a major risk in this book.” But the
danger that AI, in its coming superintelligent and misaligned
form, could wrest control from humanity is taken so seriously
by leading researchers that Ord puts the chance of its
happening at one in ten: by far the highest peak in his risk
landscape.5 Add in some smaller peaks for less well understood
risks (nanotechnology, high-energy physics experiments,
attempts to signal possibly hostile extraterrestrials) and
utterly unforeseen technologies just over the horizon—what
might be called the “unknown unknowns”—and the putative risk
landscape is complete. </p>
<p> So what is to be done? The practical proposals Ord lays out
for mitigating existential risk—greater vigilance, more
research into safer technologies, strengthening international
institutions—are well thought out and eminently reasonable.
Nor would they be terribly expensive to implement. We
currently spend less than a thousandth of a percent of gross
world product on staving off technological self-destruction—not
even a hundredth of what we spend on ice cream. Just raising
our expenditure to the ice cream threshold, as Ord suggests,
would go far in safeguarding humanity’s long-term potential. </p>
<p> But let’s consider a more theoretical issue: How much should
we be willing to pay in principle to ensure humanity’s future?
Ord does not explicitly address this question. Yet his way of
thinking about the value of humanity’s future puts us on a
slippery slope to a preposterous answer. </p>
<p> Start, as he does, with a simplifying assumption: that the
value of a century of human civilization can be captured by
some number V. To make things easy, we’ll pretend that V is
constant from century to century. (V might be taken to
quantify a hundred years’ worth of net human happiness, or of
cultural achievement, or some such.) Given this assumption,
the longer humanity’s future continues, the greater its total
value will be. If humanity went on forever, the value of its
future would be infinite. But this is unlikely: eventually the
universe will come to some sort of end, and our descendants
probably won’t be able to survive that. And in each century of
humanity’s existence there is some chance that our species
will fail to make it to the next. In the present century, as
we have seen, Ord puts that chance at one in six. Let’s
suppose—again, to simplify—that this risk level remains the
same in the future: a one in six risk of doom per century.
Then humanity’s expected survival time would be another six
centuries, and the value of its future would be V multiplied
by six. That is, the expected value of humanity’s future is
six times the value of the present century. </p>
<p> 5 As far as I can tell, he arrives at this one-in-ten number
by assuming, in broad agreement with the AI community, that
the chance of AI surpassing human intelligence in the next
century is 50 percent, and then multiplying this number by the
probability that the resulting misalignment will prove
catastrophic, which he seems to put at one in five. </p>
<p> Now suppose we could take actions today that would
enduringly cut this existential risk in half, from one in six
down to one in twelve. How would that affect the expected
value of humanity’s future? The answer is that the value would
double, going from 6V (the old expected value) to 12V (the new
expected value). That’s a net gain of six centuries’ worth of
value! So we should be willing to pay a lot, if necessary, to
reduce risk in this way. </p>
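<p> The expected-value arithmetic of the last two paragraphs can
be written out explicitly. This is a minimal sketch under the
same simplifying assumptions: a constant risk p per century and
a constant value V per century, so that the expected value of
the future comes to V divided by p. </p>
<pre><code>
# Expected value of humanity's future under the simplifying
# assumptions above: constant extinction risk p per century and
# constant value V per century. The expected number of future
# centuries is 1/p, so the expected value is V/p.

def expected_future_value(p: float, V: float = 1.0) -> float:
    """Expected value of humanity's future, in units of V."""
    return V / p

print(expected_future_value(1 / 6))    # 6.0  -> 6V at Ord's one-in-six risk
print(expected_future_value(1 / 12))   # 12.0 -> halving the risk doubles the value
</code></pre>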
<p> And the math gets worse. Suppose that we could somehow
eliminate all anthropogenic risk. We might achieve this, say,
by going Luddite and stamping out each and every potentially
dangerous technology, seeking fulfillment instead in an
Arden-like existence of foraging for nuts and berries, writing
lyric poems, composing fugues, and proving theorems in pure
mathematics. Then the only existential risks remaining would
be the relatively tiny natural ones, which come to one in ten
thousand per century. So the expected value of humanity’s
future would go from 6V to 10,000V—a truly spectacular gain.
How could we not be obliged to make whatever sacrifice this
might entail, given the expected payoff in the increased value
of humanity’s future? </p>
<p> Clearly there is something amiss with this reasoning. Ord
would say—indeed, does say—that humanity
needs risky technologies like AI if it is to flourish, so
“relinquishing further technological progress is not a
solution.” But the problem is more general than that. The more
we do to mitigate risk, the longer humanity’s expected future
becomes. And by Ord’s logic, the longer that future becomes,
the more its potential value outweighs the value of the
present. As we push the existential risk closer and closer to
zero, expected gains in value from the very far future become
ever more enormous, obliging us to make still greater
expenditures to ensure their ultimate arrival. This
combination of increasing marginal costs (to reduce risk) and
increasing marginal returns (in future value) has no stable
equilibrium point short of bankruptcy. At the limit, we should
direct 100 percent of our time and energy toward protecting
humanity’s long-term future against even the remotest
existential threats—then wrap ourselves in bubble wrap, just
to be extra safe. </p>
<p> When a moral theory threatens to make unlimited demands on
us in this way, that is often taken by philosophers as a sign
there is something wrong with it. (This is sometimes called
the “argument from excessive sacrifice.”) What could be wrong
with Ord’s theory? Why does it threaten to
make the demands of humanity’s future on us unmanageable?
Perhaps the answer is to be sought in asking just why we value
that future—especially the parts of it that might unfold long
after we’re gone. What might go into that hypothetical number
V that we were just bandying about? </p>
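<p> The perverse dynamic described above can be seen by letting
the residual risk shrink. Under the same assumptions as the
previous sketch, the expected future value V/p grows without
bound as p approaches zero. </p>
<pre><code>
# As the residual per-century risk p is pushed toward zero,
# expected future value V/p grows without bound, so each further
# reduction promises a still larger payoff.

V = 1.0  # value of one century of civilization, in arbitrary units

for p in (1 / 6, 1 / 12, 1 / 10_000, 1 / 1_000_000):
    print(f"risk {p:.6f} per century -> expected future value {V / p:,.0f} V")

# Output: 6 V (Ord's current estimate), 12 V (risk halved),
# 10,000 V (natural risks only), 1,000,000 V, and so on without limit.
</code></pre>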
<div class="art-layout-b-2x" id="testArtCol_b">
<p> Philosophers have traditionally taken two views of this
matter. On one side, there are the classical utilitarians,
who hold that all value ultimately comes down to happiness.
For them, we should value humanity’s future because of its
potential contribution to the sum of human happiness. All
those happy generations to come, spreading throughout the
galaxy! Then there are the more Platonic philosophers, who
believe in objective values that transcend mere happiness.
For them, we should value humanity’s future because of the
“ideal goods”—knowledge, beauty, justice— with which future
generations might adorn the cosmos. (The term “ideal goods”
comes from the nineteenth-century moral philosopher Henry
Sidgwick, who had both utilitarian and Platonizing
tendencies.) </p>
<p> Ord cites both kinds of reasons for valuing humanity’s
future. He acknowledges that there are difficulties with the
utilitarian account, particularly when considerations of the
quantity of future people are balanced against the quality
of their lives. But he seems more comfortable when he doffs
his utilitarian hat and puts on a Platonic one instead. What
really moves him is humanity’s promise for achievement—for
exploring the entire cosmos and suffusing it with value. If
we and our potential descendants are the only rational
beings in the universe—a distinct possibility, so far as we
know—then, he writes, “responsibility for the history of the
universe is entirely on us.” Once we have reduced our
existential risks enough to back off from the acute danger
we’re currently in—the Precipice—he encourages us to
undertake what he calls “the Long Reflection” on what is the
best kind of future for humanity: a reflection that, he
hopes, will “deliver a verdict that stands the test of
eternity.” </p>
<p> Ord’s is a very moralizing case for why we should
care about humanity’s future. It cites values—both
utilitarian happiness and Platonic ideal goods—that might
be realized many eons from now, long after we and our
immediate descendants are dead. And since values do not
diminish because of remoteness in time, we are obligated to
take those remote values seriously in our current
decision-making. We must not “discount” them just because
they lie far over the temporal horizon. That is why the
future of humanity weighs so heavily on us today, and why we
should make the safeguarding of that future our greatest
duty, elevating it in importance above all nonexistential
threats—such as world poverty or climate change. Though Ord
does not explicitly say that, it is the conclusion to which
his reasoning seems to commit him. </p>
<p> As a corrective, let’s try to take a nonmoralizing view of
the matter. Let’s consider reasons for caring about
humanity’s future that do not depend on value-based
considerations, whether of happiness or ideal goods. How
would our lives today change if we knew that humanity was
doomed to imminent extinction—say, a century from now? That
is precisely the question that the philosopher Samuel
Scheffler posed in his 2012 Tanner Lectures at Berkeley,
later published in his book Death and the Afterlife.6
Suppose we discovered that the world was guaranteed to be
wiped out in a hundred years’ time by a nearby supernova. Or
suppose that the whole human race was suddenly rendered
infertile, so that no new babies could be born.7 How would
the certain prospect of humanity’s absolute extinction, not
long after your own personal extinction, make you feel? </p>
<p> It would be “profoundly depressing”—so, at least,
Scheffler plausibly maintains. And the reason is that the
meaning and value of our own lives depend on their being
situated in an ongoing flow of generations. Humanity’s
extinction soon after we ourselves are gone would render our
lives today in great measure pointless. Whether you are
searching for a cure for cancer, or pursuing a scholarly or
artistic project, or engaged in establishing more just
institutions, a threat to the future of humanity is also a
threat to the significance of what you do. True, there are
some aspects of our lives—friendship, sensual pleasures,
games—that would retain their value even in an imminent
doomsday scenario. But our long-term, goal-oriented projects
would be robbed of their point. “Most of the time, we don’t
think much about the question of humanity’s survival one way
or the other,” Scheffler observes: </p>
<p> 6 Reviewed in these pages by Thomas Nagel, January 9, 2014. </p>
<p> 7 Something of the sort threatens to happen in the P. D.
James novel Children of Men (Faber and Faber, 1992). </p>
</div>
</div>
<div class="art-layout-b-2x"><br>
<p> </p>
</div>
</div>
</body>
</html>