<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="css-ov1ktg">
<div class=" css-qlfk3j">
<div class="rail-wrapper css-a6hloe">
<div class=" css-ac4z6z"><br>
</div>
</div>
</div>
</div>
<div id="root">
<div class="css-wp58sy">
<div class="css-fmnleb">
<div class="css-ov1ktg">
<div width="718" class="css-1jllois">
<header class="css-d92687">
<h1 class="css-19v093x">Facebook Is a Doomsday Machine</h1>
<div class="css-1x1jxeu">
<div class="css-7kp13n">By</div>
<div class="css-7ol5x1"><span class="css-1q5ec3n">Adrienne
LaFrance</span></div>
<div class="css-8rl9b7">theatlantic.com</div>
<div class="css-zskk6u">19 min</div>
</div>
<div class="css-1890bmp"><a
href="https://getpocket.com/redirect?url=https%3A%2F%2Fwww.theatlantic.com%2Ftechnology%2Farchive%2F2020%2F12%2Ffacebook-doomsday-machine%2F617384%2F"
target="_blank" class="css-1neb7j1">View Original</a></div>
</header>
<div class="css-429vn2">
<div role="main" class="css-yt2q7e">
<div id="RIL_container">
<div id="RIL_body">
<div id="RIL_less">
<div lang="en">
<div>
<div>
<div>
<div>
<p>The architecture of the modern web
poses grave threats to humanity.
It’s not too late to save ourselves.</p>
</div>
</div>
<figure>
<div> <img
src="https://cdn.theatlantic.com/assets/media/video/upload/facebookdoomsday2.jpg"
alt=""></div>
</figure>
</div>
</div>
<div>
<section>
<p>T<span>he Doomsday Machine</span> was
never supposed to exist. It was meant to
be a thought experiment that went like
this: Imagine a device built with the
sole purpose of destroying all human
life. Now suppose that machine is buried
deep underground, but connected to a
computer, which is in turn hooked up to
sensors in cities and towns across the
United States.</p>
<p>The sensors are designed to sniff out
signs of the impending apocalypse—not to
prevent the end of the world, but to
complete it. If radiation levels suggest
nuclear explosions in, say, three
American cities simultaneously, the
sensors notify the Doomsday Machine,
which is programmed to detonate several
nuclear warheads in response. At that
point, there is no going back. The
fission chain reaction that produces an
atomic explosion is initiated enough
times over to extinguish all life on
Earth. There is a terrible flash of
light, a great booming sound, then a
sustained roar. We have a word for the
scale of destruction that the Doomsday
Machine would unleash: <em>megadeath</em>.</p>
</section>
<section>
<p>Nobody is pining for megadeath. But
megadeath is not the only thing that
makes the Doomsday Machine petrifying.
The real terror is in its autonomy, this
idea that it would be programmed to
detect a series of environmental inputs,
then to act, without human interference.
“There is no chance of human
intervention, control, and final
decision,” wrote the military strategist
Herman Kahn in his 1960 book, <em>On
Thermonuclear War</em>, which laid out
the hypothetical for a Doomsday Machine.
The concept was to render nuclear war
unwinnable, and therefore unthinkable.</p>
<p>Kahn concluded that automating the
extinction of all life on Earth would be
immoral. Even an infinitesimal risk of
error is too great to justify the
Doomsday Machine’s existence. “And even
if we give up the computer and make the
Doomsday Machine reliably controllable
by decision makers,” Kahn wrote, “it is
still not controllable enough.” No
machine should be that powerful by
itself—but no one person should be
either.</p>
<p>The Soviets really did make a version
of the Doomsday Machine during the Cold
War. They nicknamed it “Dead Hand.” But
so far, somewhat miraculously, we have
figured out how to live with the bomb.
Now we need to learn how to survive the
social web.</p>
<p>P<span>eople tend</span> to complain
about Facebook as if something recently
curdled. There’s a notion that the
social web was once useful, or at least
that it could have been good, if only we
had pulled a few levers: some moderation
and fact-checking here, a bit of
regulation there, perhaps a federal
antitrust lawsuit. But that’s far too
sunny and shortsighted a view. Today’s
social networks, Facebook chief among
them, were built to encourage the things
that make them so harmful. It is in
their very architecture.</p>
</section>
<section>
<p>I’ve been thinking for years about what
it would take to make the social web
magical in all the right ways—less
extreme, less toxic, more true—and I
realized only recently that I’ve been
thinking far too narrowly about the
problem. I’ve long wanted Mark
Zuckerberg to admit that <a
href="https://twitter.com/AdrienneLaF/status/910493155421822976">Facebook
is a media company</a>, to take
responsibility for the informational
environment he created in the same way
that the editor of a magazine would. (I
pressed him on this <a
href="https://www.theatlantic.com/technology/archive/2018/05/mark-zuckerberg-doesnt-understand-journalism/559424/">once</a>
and he laughed.) In recent years, as
Facebook’s mistakes have compounded and
its reputation has tanked, it has become
clear that negligence is only part of
the problem. No one, not even Mark
Zuckerberg, can control the product he
made. I’ve come to realize that Facebook
is not a media company. It’s a Doomsday
Machine.</p>
<p><a
href="https://www.theatlantic.com/technology/archive/2019/05/chris-hughess-call-break-facebook-isnt-enough/589138/">Read:
Breaking up Facebook isn’t enough</a></p>
<p>The social web is doing exactly what it
was built for. Facebook does not exist
to seek truth and report it, or to
improve civic health, or to hold the
powerful to account, or to represent the
interests of its users, though these
phenomena may be occasional by-products
of its existence. The company’s early
mission was to “give people the power to
share and make the world more open and
connected.” Instead, it took the concept
of “community” and sapped it of all
moral meaning. <a
href="https://www.theatlantic.com/magazine/archive/2020/06/qanon-nothing-can-stop-what-is-coming/610567/">The
rise of QAnon,</a> for example, is one
of the social web’s logical conclusions.
That’s because Facebook—along with
Google and YouTube—is perfect for
amplifying and spreading disinformation
at lightning speed to global audiences.
Facebook is an agent of government
propaganda, targeted harassment,
terrorist recruitment, emotional
manipulation, and genocide—a
world-historic weapon that lives not
underground, but in a
Disneyland-inspired campus in Menlo
Park, California.</p>
</section>
<section>
<p>The giants of the social web—Facebook
and its subsidiary Instagram; Google and
its subsidiary YouTube; and, to a lesser
extent, Twitter—have achieved success by
being dogmatically value-neutral in
their pursuit of what I’ll call <em>megascale</em>.
Somewhere along the way, Facebook
decided that it needed not just a very
large user base, but a tremendous one,
unprecedented in size. That decision set
Facebook on a path to escape velocity,
to a tipping point where it can harm
society just by existing. </p>
<p>Limitations to the Doomsday Machine
comparison are obvious: Facebook cannot
in an instant reduce a city to ruins the
way a nuclear bomb can. And whereas the
Doomsday Machine was conceived of as a
world-ending device so as to forestall
the end of the world, Facebook started
because a semi-inebriated Harvard
undergrad was bored one night. But the
stakes are still life-and-death.
Megascale is nearly the existential
threat that megadeath is. No single
machine should be able to control the
fate of the world’s population—and
that’s what both the Doomsday Machine
and Facebook are built to do.</p>
<p>The cycle of harm perpetuated by
Facebook’s scale-at-any-cost business
model is plain to see. Scale and
engagement are valuable to Facebook
because they’re valuable to advertisers.
These incentives lead to design choices
such as reaction buttons that encourage
users to engage easily and often, which
in turn encourage users to share ideas
that will provoke a strong response.
Every time you click a reaction button
on Facebook, an algorithm records it,
and sharpens its portrait of who you
are. The hyper-targeting of users, made
possible by reams of their personal
data, creates the perfect environment
for manipulation—by advertisers, by
political campaigns, by emissaries of
disinformation, and of course by
Facebook itself, which ultimately
controls what you see and what you don’t
see on the site. Facebook has enlisted a
corps of approximately 15,000
moderators, people <a
href="https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona">paid
to watch unspeakable things</a>—murder,
gang rape, and other depictions of
graphic violence that wind up on the
platform. Even as Facebook has insisted
that it is a value-neutral vessel for
the material its users choose to
publish, moderation is a lever the
company has tried to pull again and
again. But there aren’t enough
moderators speaking enough languages,
working enough hours, to stop the
biblical flood of shit that Facebook
unleashes on the world, because 10 times
out of 10, the algorithm is faster and
more powerful than a person. At
megascale, this algorithmically warped
personalized informational environment
is extraordinarily difficult to moderate
in a meaningful way, and extraordinarily
dangerous as a result.</p>
</section>
<section>
<p>These dangers are not theoretical, and
they’re exacerbated by megascale, which
makes the platform a tantalizing place
to experiment on people. Facebook has
conducted social-contagion <a
href="https://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/">experiments</a>
on its users without telling them.
Facebook has acted as <a
href="https://www.theatlantic.com/technology/archive/2016/02/facebook-and-the-new-colonialism/462393/">a
force for digital colonialism</a>,
attempting to become the de facto (and
only) experience of the internet for
people all over the world. Facebook <a
href="https://www.theatlantic.com/technology/archive/2014/11/how-facebook-could-skew-an-election/382334/">has
bragged</a> about its ability to
influence the outcome of elections.
Unlawful militant groups <a
href="https://www.buzzfeednews.com/article/ryanmac/facebook-moderators-call-arms-not-enforced-kenosha">use
Facebook to organize</a>. Government
officials use Facebook <a
href="https://www.buzzfeednews.com/article/craigsilverman/facebook-ignore-political-manipulation-whistleblower-memo">to
mislead</a> their own citizens, and to
tamper with elections. Military
officials have <a
href="https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html">exploited</a>
Facebook’s complacency to carry out
genocide. Facebook inadvertently <a
href="https://apnews.com/article/f97c24dab4f34bd0b48b36f2988952a4">auto-generated</a>
jaunty recruitment videos for the
Islamic State featuring anti-Semitic
messages and burning American flags.</p>
<p><a
href="https://www.theatlantic.com/technology/archive/2018/05/mark-zuckerberg-doesnt-understand-journalism/559424/">Read:
Mark Zuckerberg doesn’t understand
journalism</a></p>
<p>Even after U.S. intelligence agencies <a
href="https://www.dni.gov/files/documents/ICA_2017_01.pdf">identified
Facebook</a> as a main battleground
for information warfare and foreign
interference in the 2016 election, the
company has failed to stop the spread of
extremism, hate speech, propaganda,
disinformation, and conspiracy theories
on its site. Neo-Nazis <a
href="https://www.buzzfeednews.com/article/christopherm51/neo-nazi-group-facebook">stayed
active on Facebook</a> by taking out
ads even after they were formally
banned. And it wasn’t until October of
this year, for instance, that Facebook
announced it would remove groups, pages,
and Instagram accounts devoted to
QAnon, as well as any posts denying the
Holocaust. (Previously Zuckerberg had
defended Facebook’s decision not to
remove disinformation about the
Holocaust, saying of Holocaust deniers,
“I don’t think that they’re <em>intentionally</em>
getting it wrong.” He <a
href="https://www.vox.com/2018/7/18/17588116/mark-zuckerberg-clarifies-holocaust-denial-offensive">later
clarified</a> that he didn’t mean to
defend Holocaust deniers.) Even so,
Facebook routinely sends emails to users
recommending the newest QAnon groups.
White supremacists and deplatformed MAGA
trolls may flock to smaller social
platforms such as Gab and Parler, but
these platforms offer little aside from
a narrative of martyrdom without
megascale.</p>
</section>
<section>
<p>In the days after the 2020 presidential
election, Zuckerberg authorized a tweak
to the Facebook algorithm so that
high-accuracy news sources such as NPR
would receive preferential visibility in
people’s feeds, and hyper-partisan pages
such as <em>Breitbart News</em>’s and
Occupy Democrats’ would be buried, <a
href="https://www.nytimes.com/2020/11/24/technology/facebook-election-misinformation.html">according
to <em>The New York Times</em></a>,
offering proof that Facebook could, if
it wanted to, turn a dial to reduce
disinformation—and offering a reminder
that Facebook has the power to flip a
switch and change what billions of
people see online.</p>
<p>The decision to touch the dial was
highly unusual for Facebook. Think about
it this way: The Doomsday Machine’s
sensors detected something harmful in
the environment and chose not to let its
algorithms automatically blow it up
across the web as usual. This time a
human intervened to mitigate harm. The
only problem is that reducing the
prevalence of content that Facebook
calls “bad for the world” also reduces
people’s engagement with the site. In
its experiments with human intervention,
the <em>Times</em> reported, Facebook
calibrated the dial so that <em>just
enough</em> harmful content stayed in
users’ news feeds to keep them coming
back for more.</p>
<p>F<span>acebook’s stated mission</span>—to
make the world more open and
connected—has always seemed, to me,
phony at best, and imperialist at worst.
After all, today’s empires are born on
the web. Facebook is a borderless
nation-state, with a population of users
nearly as big as China and India
combined, and it is governed largely by
secret algorithms. Hillary Clinton <a
href="https://www.theatlantic.com/politics/archive/2020/01/hillary-clinton-mark-zuckerberg-is-trumpian-and-authoritarian/605485/">told
me</a> earlier this year that talking
to Zuckerberg feels like negotiating
with the authoritarian head of a foreign
state. “This is a global company that
has huge influence in ways that we’re
only beginning to understand,” she said.</p>
</section>
<section>
<p>I recalled Clinton’s warning a few
weeks ago, when Zuckerberg defended the
decision not to suspend Steve Bannon
from Facebook after he argued, in
essence, for the beheading of two senior
U.S. officials, the infectious-disease
doctor Anthony Fauci and FBI Director
Christopher Wray. The episode got me
thinking about a question that’s
unanswerable but that I keep asking
people anyway: How much real-world
violence would never have happened if
Facebook didn’t exist? One of the people
I’ve asked is Joshua Geltzer, a former
White House counterterrorism official
who is now teaching at Georgetown Law.
In counterterrorism circles, he told me,
people are fond of pointing out how good
the United States has been at keeping
terrorists out since 9/11. That’s wrong,
he said. In fact, “terrorists are
entering every single day, every single
hour, every single minute” through
Facebook.</p>
<p>The website that’s perhaps best known
for encouraging mass violence is the
image board 4chan—which was followed by
8chan, which then became 8kun. These
boards are infamous for being the sites
where multiple mass-shooting suspects
have shared manifestos before homicide
sprees. The few people who are willing
to defend these sites unconditionally do
so from a position of free-speech
absolutism. That argument is worthy of
consideration. But there’s something
architectural about the site that merits
attention, too: There are no algorithms
on 8kun, only a community of users who
post what they want. People use 8kun to
publish abhorrent ideas, but at least
the community isn’t pretending to be
something it’s not. The biggest social
platforms claim to be similarly neutral
and pro–free speech when in fact no two
people see the same feed.
Algorithmically tweaked environments
feed on user data and manipulate user
experience, and not ultimately for the
purpose of serving the user. Evidence of
real-world violence can be easily traced
back to both Facebook and 8kun. But 8kun
doesn’t manipulate its users or the
informational environment they’re in.
Both sites are harmful. But Facebook
might actually be worse for humanity.</p>
</section>
<section>
<p><a
href="https://www.theatlantic.com/technology/archive/2020/04/how-facebooks-ad-technology-helps-trump-win/606403/">Read:
How Facebook works for Trump</a></p>
<p>“What a dreadful set of choices when
you frame it that way,” Geltzer told me
when I put this question to him in
another conversation. “The idea of a
free-for-all sounds really bad until you
see what the purportedly moderated and
curated set of platforms is yielding …
It may not be blood onscreen, but it can
really do a lot of damage.”</p>
<p>In previous eras, U.S. officials could
at least study, say, Nazi propaganda
during World War II, and fully grasp
what the Nazis wanted people to believe.
Today, “it’s not a filter bubble; it’s a
filter shroud,” Geltzer said. “I don’t
even know what others with personalized
experiences are seeing.” Another expert
in this realm, Mary McCord, the legal
director at the Institute for
Constitutional Advocacy and Protection
at Georgetown Law, told me that she
thinks 8kun may be more blatant in terms
of promoting violence but that Facebook
is “in some ways way worse” because of
its reach. “There’s no barrier to entry
with Facebook,” she said. “In every
situation of extremist violence we’ve
looked into, we’ve found Facebook
postings. And that reaches <em>tons</em>
of people. The broad reach is what
brings people into the fold and
normalizes extremism and makes it
mainstream.” In other words, it’s the
megascale that makes Facebook so
dangerous.</p>
<p>L<span>ooking back</span>, it can seem
like Zuckerberg’s path to world
domination was inevitable. There’s the
computerized version of Risk he coded in
ninth grade; his long-standing interest
in the Roman empire; his obsession with
information flow and human psychology.
There’s the story of his first bona fide
internet scandal, when he hacked into
Harvard’s directory and lifted photos of
students without their permission to
make the hot-or-not-style website
FaceMash. (“Child’s play” was how
Zuckerberg <a
href="https://www.thecrimson.com/article/2003/11/4/hot-or-not-website-briefly-judges/">later
described</a> the ease with which he
broke into Harvard’s system.) There’s
the disconnect between his lip service
to privacy and the way Facebook actually
works. (Here’s Zuckerberg in a private
chat with a friend years ago, on the
mountain of data he’d obtained from
Facebook’s early users: “I have over
4,000 emails, pictures, addresses …
People just submitted it. I don’t know
why. They ‘trust me.’ Dumb fucks.”) At
various points over the years, he’s
listed the following interests in his
Facebook profile: Eliminating Desire,
Minimalism, Making Things, Breaking
Things, Revolutions, Openness,
Exponential Growth, Social Dynamics,
Domination.</p>
</section>
<section>
<p>Facebook’s megascale gives Zuckerberg
an unprecedented degree of influence
over the global population. If he isn’t
the most powerful person on the planet,
he’s very near the top. “It’s insane to
have that much speechifying, silencing,
and permitting power, not to mention
being the ultimate holder of algorithms
that determine the virality of anything
on the internet,” Geltzer told me. “The
thing he oversees has such an effect on
cognition and people’s beliefs, which
can change what they do with their
nuclear weapons or their dollars.”</p>
<p>Facebook’s <a
href="https://www.theatlantic.com/ideas/archive/2019/09/facebook-outsources-tough-decisions-speech/598249/">new
oversight board</a>, formed in
response to backlash against the
platform and tasked with making
decisions concerning moderation and free
expression, is an extension of that
power. “The first 10 decisions they make
will have more effect on speech in the
country and the world than the next 10
decisions rendered by the U.S. Supreme
Court,” Geltzer said. “That’s power.
That’s real power.”</p>
<p>In 2005, the year I joined Facebook,
the site still billed itself as an
online directory to “Look up people at
your school. See how people know each
other. Find people in your classes and
groups.” That summer, in Palo Alto,
Zuckerberg gave an <a
href="https://www.youtube.com/watch?v=--APdD6vejI">interview</a>
to a young filmmaker, who later posted
the clip to YouTube. In it, you can see
Zuckerberg still figuring out what
Facebook is destined to be. The
conversation is a reminder of the
improbability of Zuckerberg’s youth when
he launched Facebook. (It starts with
him asking, “Should I put the beer
down?” He’s holding a red Solo cup.)
Yet, at 21 years old, Zuckerberg
articulated something about his company
that has held true, to dangerous effect:
Facebook is not a single place on the
web, but rather, “a lot of different
individual communities.”</p>
</section>
<section>
<p>Today that includes QAnon and other
extremist groups. Back then, it meant
mostly juvenile expressions of identity
in groups such as “I Went to a Public
School … Bitch” and, at Harvard,
referencing the neoclassical main
library, “The We Need to Have Sex in
Widener Before We Graduate Interest
Group.” In that 2005 interview,
Zuckerberg is asked about the future of
Facebook, and his response feels, in
retrospect, like a tragedy: “I mean,
there doesn’t necessarily have to be
more. Like, a lot of people are focused
on taking over the world, or doing the
biggest thing, getting the most users. I
think, like, part of making a difference
and doing something cool is focusing
intensely … I mean, I really just want
to see everyone focus on college and
create a really cool college-directory
product that just, like, is very
relevant for students and has a lot of
information that people care about when
they’re in college.”</p>
<p><a
href="https://www.theatlantic.com/technology/archive/2019/02/first-time-atlantic-wrote-about-facebook/581902/">Read:
What we wrote about Facebook 12 years
ago</a></p>
<p>The funny thing is: This localized
approach is part of what made megascale
possible. Early constraints around
membership—the requirement at first that
users attended Harvard, and then that
they attended any Ivy League school, and
then that they had an email address
ending in .edu—offered a sense of
cohesiveness and community. It made
people feel more comfortable sharing
more of themselves. And more sharing
among clearly defined demographics was
good for business. In 2004, Zuckerberg
said Facebook ran advertisements only to
cover server costs. But over the next
two years Facebook completely upended
and redefined the entire advertising
industry. The pre-social web destroyed
classified ads, but the one-two punch of
Facebook and Google decimated local news
and most of the magazine
industry—publications fought in earnest
for digital pennies, which had replaced
print dollars, and social giants scooped
them all up anyway. No news organization
can compete with the megascale of the
social web. It’s just too massive.</p>
</section>
<section>
<p>The on-again, off-again Facebook
executive Chris Cox once talked about
the “magic number” for start-ups, and
how after a company surpasses 150
employees, things go sideways. “I’ve
talked to so many start-up CEOs that
after they pass this number, weird stuff
starts to happen,” he <a
href="https://qz.com/846530/something-weird-happens-to-companies-when-they-hit-150-people/">said</a>
at a conference in 2016. This idea comes
from the anthropologist Robin Dunbar,
who argued that 148 is the maximum
number of stable social connections a
person can maintain. If we were to apply
that same logic to the stability of a
social platform, what number would we
find?</p>
<p>“I think the sweet spot is 20 to 20,000
people,” the writer and internet scholar
Ethan Zuckerman, who has spent much of
his adult life thinking about <a
href="https://www.theatlantic.com/technology/archive/2014/08/advertising-is-the-internets-original-sin/376041/">how
to build a better web</a>, told me.
“It’s hard to have any degree of real
connectivity after that.”</p>
<p>In other words, if the Dunbar number
for running a company or maintaining a
cohesive social life is 150 people, the
magic number for a functional social
platform is maybe 20,000 people.
Facebook now has <a
href="https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/#:~:text=With%20over%202.7%20billion%20monthly,network%20ever%20to%20do%20so."><em>2.7
billion</em></a> monthly users.</p>
<p>On the precipice of Facebook’s
exponential growth, in 2007, Zuckerberg
said something in an interview with the
<em>Los Angeles Times</em> that now
takes on a much darker meaning: “The
things that are most powerful aren’t the
things that people would have done
otherwise if they didn’t do them on
Facebook. Instead, it’s the things that
would never have happened otherwise.”</p>
</section>
<section>
<p>O<span>f the many</span> things humans
are consistently terrible at doing,
seeing the future is somewhere near the
top of the list. This flaw became a
preoccupation among Megadeath
Intellectuals such as Herman Kahn and
his fellow economists, mathematicians,
and former military officers at the Rand
Corporation in the 1960s.</p>
<p>Kahn and his colleagues helped invent
modern futurism, which was born of the
existential dread that the bomb ushered
in, and hardened by the understanding
that most innovation is horizontal in
nature—a copy of what already exists,
rather than wholly new. Real invention
is extraordinarily rare, and far more
disruptive.</p>
<p>The logician and philosopher Olaf
Helmer-Hirschberg, who overlapped with
Kahn at Rand and would later co-found
the Institute for the Future, arrived in
California after having fled the Nazis,
an experience that gave his desire to
peer into the future a particular kind
of urgency. He argued that the
acceleration of technological change had
established the need for a new
epistemological approach to fields such
as engineering, medicine, the social
sciences, and so on. “No longer does it
take generations for a new pattern of
living conditions to evolve,” he <a
href="https://www.rand.org/content/dam/rand/pubs/papers/2008/P3558.pdf">wrote</a>,
“but we are going through several major
adjustments in our lives, and our
children will have to adopt continual
adaptation as a way of life.” In 1965,
he wrote a book called <em>Social
Technology</em> that aimed to create a
scientific methodology for predicting
the future.</p>
</section>
<section>
<p><a
href="https://www.theatlantic.com/technology/archive/2020/06/facebook-silicon-valley-trump-silence/612877/">Read:
The silence of the never Facebookers</a></p>
<p>In those same years, Kahn was dreaming
up his own hypothetical machine to
provide a philosophical framework for
the new threats humanity faced. He
called it the Doomsday Machine, and also
the Doomsday-in-a-Hurry Machine, and
also the Homicide Pact Machine. Stanley
Kubrick famously borrowed the concept
for the 1964 film <em>Dr. Strangelove</em>,
the cinematic apotheosis of the fatalism
that came with living on hair-trigger
alert for nuclear annihilation.</p>
<p>Today’s fatalism about the brokenness
of the internet feels similar. We’re
still in the infancy of this century’s
triple digital revolution of the
internet, smartphones, and the social
web, and we find ourselves in a
dangerous and unstable informational
environment, powerless to resist forces
of manipulation and exploitation that we
know are exerted on us but remain mostly
invisible. The Doomsday Machine offers a
lesson: We should not accept this
current arrangement. No single machine
should be able to control so many
people.</p>
<p>If the age of reason was, in part, a
reaction to <a
href="https://www.theatlantic.com/magazine/archive/2020/01/before-zuckerberg-gutenberg/603034/">the
existence of the printing press</a>,
and 1960s futurism was a reaction to the
atomic bomb, we need a new philosophical
and moral framework for living with the
social web—a new Enlightenment for the
information age, and one that will carry
us back to shared reality and
empiricism.</p>
<p>Andrew Bosworth, one of Facebook’s
longtime executives, has compared
Facebook to sugar—in that it is
“delicious” but best enjoyed in
moderation. In a memo originally posted
to Facebook’s internal network last
year, he argued for a philosophy of
personal responsibility. “My grandfather
took such a stance towards bacon and I
admired him for it,” Bosworth wrote.
“And social media is likely much less
fatal than bacon.” But viewing Facebook
merely as a vehicle for individual
consumption ignores the fact of what it
is—a network. Facebook is also a
business, and a place where people spend
time with one another. Put it this way:
If you owned a store and someone walked
in and started shouting Nazi propaganda
or recruiting terrorists near the cash
register, would you, as the shop owner,
tell all of the other customers you
couldn’t possibly intervene?</p>
</section>
<section>
<p>Anyone who is serious about mitigating
the damage done to humankind by the
social web should, of course, consider
quitting Facebook and Instagram and
Twitter and any other algorithmically
distorted informational environments
that manipulate people. But we need to
adopt a broader view of what it will
take to fix the brokenness of the social
web. That will require challenging the
logic of today’s platforms—and first and
foremost challenging the very concept of
megascale as a way that humans gather.
If megascale is what gives Facebook its
power, and what makes it dangerous,
collective action against the web as it
is today is necessary for change. The
web’s existing logic tells us that
social platforms are free in exchange
for a feast of user data; that major
networks are necessarily global and
centralized; that moderators make the
rules. None of that need be the case. We
need people who dismantle these notions
by building alternatives. And we need
enough people to care about these other
alternatives to break the spell of
venture capital and mass attention that
fuels megascale and creates fatalism
about the web as it is now.</p>
<p>I still believe the internet is good
for humanity, but that’s despite the
social web, not because of it. We must
also find ways to repair the aspects of
our society and culture that the social
web has badly damaged. This will require
intellectual independence, respectful
debate, and the same rebellious streak
that helped establish Enlightenment
values centuries ago.</p>
<p>We may not be able to predict the
future, but we do know how it is made:
through flashes of rare and genuine
invention, sustained by people’s time
and attention. Right now, too many
people are allowing algorithms and tech
giants to manipulate them, and reality
is slipping from our grasp as a result.
This century’s Doomsday Machine is here,
and humming along.</p>
<p>It does not have to be this way.</p>
</section>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="css-10y0cgg"><br>
</div>
</div>
</div>
</div>
</body>
</html>