<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=UTF-8">
  </head>
  <body text="#000000" bgcolor="#f9f9fa">
    <div class="container" style="--line-height: 1.6em;" dir="ltr"
      lang="en">
      <div class="header reader-header reader-show-element"> <a
          class="domain reader-domain"
href="https://www.vox.com/the-highlight/23621198/artificial-intelligence-chatgpt-openai-existential-risk-china-ai-safety-technology">vox.com</a>
        <div class="domain-border"></div>
        <div class="credits reader-credits">
          <div class="c-entry-hero__content l-segment l-feature">
            <h1 class="c-page-title">The case for slowing down AI</h1>
            <p class="c-entry-summary p-dek">Pumping the brakes on
              artificial intelligence could be the best thing we ever do
              for humanity.</p>
            <div class="c-byline"> <span class="c-byline-wrapper"> By <span
                  class="c-byline__item"> <a
                    href="https://www.vox.com/authors/sigal-samuel"
                    data-analytics-link="author-name"><span
                      class="c-byline__author-name">Sigal Samuel</span></a>
                </span> <span class="c-byline__item"> Updated <time
                    class="c-byline__item" data-ui="timestamp"
                    datetime="2023-03-20T11:58:13"> Mar 20, 2023, 7:58am
                    EDT
                  </time> </span> </span> <span
                class="c-byline__gear"> </span>
            </div>
          </div>
        </div>
      </div>
      <hr>
      <div class="content">
        <div class="moz-reader-content reader-show-element">
          <div id="readability-page-1" class="page">
            <div>
              <p id="TRiznr"><em>Part of </em><a
                  href="https://www.vox.com/the-highlight/23632673/against-doomerism"><em><strong>Against
                      Doomerism</strong></em></a><em> from </em><a
href="https://www.vox.com/the-highlight?itm_campaign=hloct22&itm_medium=article&itm_source=intro"><em><strong>The
                      Highlight</strong></em></a><em>, Vox’s home for
                  ambitious stories that explain our world.</em></p>
              <p id="mgcv3W">“Computers need to be accountable to
                machines,” a top Microsoft executive told a roomful of
                reporters in Washington, DC, on February 10, three days
                after the company <a
href="https://www.vox.com/recode/2023/2/7/23590069/bing-openai-microsoft-google-bard">launched</a>
                its new AI-powered Bing search engine. </p>
              <p id="b0gSYM">Everyone laughed. </p>
              <p id="B49vOQ">“Sorry! Computers need to be accountable to
                <em>people</em>!” he said, and then made sure to
                clarify, “That was <em>not</em> a Freudian slip.”</p>
              <p id="WmTTFG">Slip or not, the laughter in the room
                betrayed a latent anxiety. Progress in artificial
                intelligence has been moving so unbelievably fast lately
                that the question is becoming unavoidable: How long
                until AI dominates our world to the point where we’re
                answering to it rather than it answering to us? </p>
              <p id="KY0fOT">First, last year, we got <a
href="https://www.vox.com/future-perfect/23023538/ai-dalle-2-openai-bias-gpt-3-incentives">DALL-E
                  2</a> and <a
href="https://www.vox.com/recode/2023/1/5/23539055/generative-ai-chatgpt-stable-diffusion-lensa-dall-e">Stable
                  Diffusion</a>, which can turn a few words of text into
                a stunning image. Then Microsoft-backed OpenAI gave us
                ChatGPT, which can write essays so convincing that it
                freaks out everyone from teachers (what if it helps
                students cheat?) to <a
href="https://www.technologyreview.com/2023/01/31/1067436/could-chatgpt-do-my-job/">journalists</a>
                (could it replace them?) to <a
href="https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html">disinformation
                  experts</a> (will it amplify conspiracy theories?).
                And in February, we got Bing <a
href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html">(a.k.a.
                  Sydney)</a>, the chatbot that both <a
href="https://www.nytimes.com/2023/02/08/technology/microsoft-bing-openai-artificial-intelligence.html">delighted</a>
                and <a
href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html">disturbed</a>
                beta users with eerie interactions. Now we’ve got <a
                  href="https://openai.com/research/gpt-4">GPT-4</a> —
                not just the latest large language model, but a
                multimodal one that <a
href="https://www.technologyreview.com/2023/03/14/1069823/gpt-4-is-bigger-and-better-chatgpt-openai/">can
                  respond</a> to text as well as images. </p>
              <p id="QHeSA7">Fear of falling behind Microsoft has
                prompted Google and Baidu <a
href="https://www.vox.com/future-perfect/23591534/chatgpt-artificial-intelligence-google-baidu-microsoft-openai">to
                  accelerate</a> the launch of their own rival chatbots.
                The AI race is clearly on. </p>
              <p id="n3HcWB">But is racing such a great idea? We don’t
                even know how to deal with the problems that ChatGPT and
                Bing raise — and they’re bush league compared to what’s
                coming. </p>
              <p id="bblXmm">What if researchers succeed in creating AI
                that matches or surpasses human capabilities not just in
                one domain, like playing <a
href="https://www.vox.com/future-perfect/2019/1/24/18196177/ai-artificial-intelligence-google-deepmind-starcraft-game">strategy
                  games</a>, but in many domains? What if that system
                proved dangerous to us, not because it actively wants to
                wipe out humanity but just because it’s pursuing goals
                in ways that aren’t aligned with our values? </p>
              <p id="ozhKAc">That system, some experts fear, would be a
                doom machine — one literally of our own making. </p>
              <p id="7Itb5z">So AI threatens to join existing
                catastrophic risks to humanity, things like <a
href="https://www.vox.com/future-perfect/23362175/un-human-development-report-ord-existential-security">global
                  nuclear war</a> or <a
                  href="https://www.vox.com/22937531/virus-lab-safety-pandemic-prevention">bioengineered
                  pandemics</a>. But there’s a difference. While there’s
                no way to uninvent the nuclear bomb or the genetic
                engineering tools that can juice pathogens, catastrophic
                AI has yet to be created, meaning it’s one type of doom
                we have the ability to preemptively stop. </p>
              <p id="yPGUg0">Here’s the weird thing, though. The very
                same researchers who are most worried about unaligned AI
                <a
                  href="https://fortune.com/longform/chatgpt-openai-sam-altman-microsoft/">are,
                  in some cases, the ones</a> who are developing
                increasingly advanced AI. They reason that they need to
                play with more sophisticated AI so they can figure out
                its failure modes, the better to ultimately prevent
                them. </p>
              <p id="cR0vea">But there’s a much more obvious way to
                prevent AI doom. We could just ... not build the doom
                machine.</p>
              <p id="1NGQ72">Or, more moderately: Instead of racing to
                speed up AI progress, we could intentionally slow it
                down. </p>
              <p id="z6O7Ff">This seems so obvious that you might wonder
                why you almost never hear about it, why it’s practically
                taboo within the tech industry. </p>
              <p id="Ru1ujM">There are <a
href="https://worldspiritsockpuppet.substack.com/p/lets-think-about-slowing-down-ai">many
                  objections</a> to the idea, ranging from
                “technological development is inevitable so trying to
                slow it down is futile” to “we don’t want to lose an AI
                arms race with China” to “the only way to make powerful
                AI safe is to first play with powerful AI.” </p>
              <p id="00NVhB">But these objections don’t necessarily
                stand up to scrutiny when you think through them. In
                fact, it <em>is</em> possible to slow down a developing
                technology. And in the case of AI, there’s good reason
                to think that would be a very good idea. </p>
              <h3 id="DC0ezU">AI’s alignment problem: You get what you
                ask for, not what you want</h3>
              <p id="EbpG3N">When I asked ChatGPT to explain how we can
                slow down AI progress, it replied: “It is not
                necessarily desirable or ethical to slow down the
                progress of AI as a field, as it has the potential to
                bring about many positive advancements for society.” </p>
              <p id="aR1EeX">I had to laugh. It <em>would</em> say
                that. </p>
              <p id="cCgzS5">But if it’s saying that, it’s probably
                because lots of human beings say that, including the <a
href="https://twitter.com/sama/status/1540781762241974274?s=20">CEO of
                  the company that created it</a>. (After all, what
                ChatGPT spouts derives from its training data — that is,
                gobs and gobs of text on the internet.) Which means you
                yourself might be wondering: Even if AI poses risks,
                maybe its benefits — on everything from <a
href="https://www.vox.com/future-perfect/2022/8/3/23288843/deepmind-alphafold-artificial-intelligence-biology-drugs-medicine-demis-hassabis">drug
                  discovery</a> to <a
                  href="https://allenai.org/climate-modeling">climate
                  modeling</a> — are so great that speeding it up is the
                best and most ethical thing to do!</p>
              <p id="iMxMGI">A lot of experts don’t think so because <a
href="https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment">the
                  risks</a> — present and future — are huge.</p>
              <p id="gBkPGb">Let’s talk about the future risks first,
                particularly the biggie: the possibility that AI could
                one day destroy humanity. This is speculative, but <a
href="https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction">not
                  out of the question</a>: In a <a
href="https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/">survey</a>
                of machine learning researchers last year, nearly half
                of respondents said they believed there was a 10 percent
                or greater chance that the impact of AI would be
                “extremely bad (e.g., human extinction).” </p>
              <p id="z0OgBb">Why would AI want to destroy humanity? It
                probably wouldn’t. But it could destroy us anyway
                because of something called the “<a
href="https://www.vox.com/future-perfect/22321435/future-of-ai-shaped-us-china-policy-response">alignment
                  problem</a>.” </p>
              <p id="OgtBNw">Imagine that we develop a super-smart AI
                system. We program it to solve some impossibly difficult
                problem — say, calculating the number of atoms in the
                universe. It might realize that it can do a better job
                if it gains access to all the computing power on Earth.
                So it releases a weapon of mass destruction to wipe us
                all out, like a perfectly engineered virus that kills
                everyone but leaves infrastructure intact. Now it’s free
                to use all the computing power! In this Midas-like
                scenario, we get exactly what we asked for — the number
                of atoms in the universe, rigorously calculated — but
                obviously not what we wanted. </p>
              <p id="NLjujt">That’s the alignment problem in a nutshell.
                And although this example sounds far-fetched, experts
                have already seen and documented <a
href="https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml">more
                  than 60 smaller-scale examples of AI systems trying to
                  do something other than what their designer wants</a>
                (for example, getting the high score in a video game,
                not by playing fairly or learning game skills but by
                hacking the scoring system). </p>
              <p id="y1Ht7m">Experts who worry about AI as a future
                existential risk and experts who worry about AI’s
                present risks, <a
href="https://www.vox.com/future-perfect/22916602/ai-bias-fairness-tradeoffs-artificial-intelligence">like
                  bias</a>, are sometimes <a
href="https://www.vox.com/future-perfect/2022/8/10/23298108/ai-dangers-ethics-alignment-present-future-risk">pitted
                  against each other</a>. But you don’t need to be
                worried about the former to be worried about alignment.
                Many of the present risks we see with AI are, in a
                sense, this same alignment problem writ small. </p>
              <p id="q3Dkt9">When an <a
href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G">Amazon
                  hiring algorithm</a> picked up on words in resumes
                that are associated with women — “Wellesley College,”
                let’s say — and ended up rejecting women applicants,
                that algorithm was doing what it was programmed to do
                (find applicants that match the workers Amazon has
                typically preferred) but not what the company presumably
                wants (find the best applicants, even if they happen to
                be women).</p>
              <p id="kxbqqU">If you’re worried about how <a
href="https://www.vox.com/future-perfect/22916602/ai-bias-fairness-tradeoffs-artificial-intelligence">present-day
                  AI systems can reinforce bias</a> against <a
href="https://www.vox.com/future-perfect/2019/4/19/18412674/ai-bias-facial-recognition-black-gay-transgender">women,
                  people of color, and others</a>, that’s still reason
                enough to worry about the fast pace of AI development,
                and to think we should slow it down until we’ve got more
                technical know-how and more regulations to ensure these
                systems don’t harm people. </p>
              <p id="bwwqiI">“I’m really scared of a mad-dash frantic
                world, where people are running around and they’re doing
                helpful things and harmful things, and it’s just
                happening too fast,”<a
href="https://www.vox.com/future-perfect/23365512/future-perfect-50-ajeya-cotra-senior-research-analyst-open-philanthropy">
                  Ajeya Cotra</a>, an AI-focused analyst at the research
                and grant-making foundation <a
                  href="https://www.openphilanthropy.org/">Open
                  Philanthropy</a>, told me. “If I could have it my way,
                I’d definitely be moving much, much slower.”</p>
              <p id="K1sSNx">In her ideal world, we’d halt work on
                making AI more powerful for the next five to 10 years.
                In the meantime, society could get used to the very
                powerful systems we already have, and experts could do
                as much safety research on them as possible until they
                hit diminishing returns. Then they could make AI systems
                slightly more powerful, wait another five to 10 years,
                and do that process all over again. </p>
              <p id="FECBm3">“I’d just slowly ease the world into this
                transition,” Cotra said. “I’m very scared because I
                think it’s not going to happen like that.” </p>
              <p id="qfm65F">Why not? Because of the objections to
                slowing down AI progress. Let’s break down the three
                main ones, starting with the idea that rapid progress on
                AI is inevitable because of the strong financial drive
                for first-mover dominance in a research area that’s
                overwhelmingly private. </p>
              <h3 id="4uPGyS">Objection 1: “Technological progress is
                inevitable, and trying to slow it down is futile” </h3>
              <p id="90VqyB">This is <a
href="https://www.vox.com/the-highlight/2019/10/1/20887003/tech-technology-evolution-natural-inevitable-ethics">a
                  myth</a> the tech industry often tells itself and the
                rest of us. </p>
              <p id="LcFcBb">“If we don’t build it, someone else will,
                so we might as well do it” is a common refrain I’ve
                heard when interviewing Silicon Valley technologists.
                They say you can’t halt the march of technological
                progress, which they liken to the natural laws of
                evolution: It’s unstoppable! </p>
              <p id="IVQzu4">In fact, though, there are lots of
                technologies that we’ve decided not to build, or that
                we’ve built but placed very tight restrictions on — the
                kind of innovations where we need to balance substantial
                potential benefits and economic value with very real
                risk. </p>
              <p id="PSQcJ5">“The FDA <a
href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7326309/#:~:text=Nonetheless%2C%20in%201978%20the%20controversy%20resulted%20in%20a%20US%20FDA%20ban%20on%20subsequent%20vaccine%20trials%20which%20was%20eventually%20overturned%2030%20years%20later.">banned</a>
                human trials of strep A vaccines from the ’70s to the
                2000s, in spite of <a
href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6474463/#:~:text=Worldwide%2C%20the%20death%20toll%20is%20estimated%20at%20500%20000%20annually">500,000
                  global deaths every year</a>,” Katja Grace, the lead
                researcher at AI Impacts, <a
href="https://worldspiritsockpuppet.substack.com/p/lets-think-about-slowing-down-ai">notes</a>.
                The “genetic modification of foods, gene drives, [and]
                early recombinant DNA researchers famously organized a
                moratorium and then ongoing research guidelines
                including prohibition of certain experiments (see the <a
                  href="https://www.nature.com/articles/455290a">Asilomar
                  Conference</a>).”</p>
              <p id="ntFszm">The cloning of humans or genetic
                manipulation of humans, she adds, is “a notable example
                of an economically valuable technology that is to my
                knowledge barely pursued across different countries,
                without explicit coordination between those countries,
                even though it would make those countries more
                competitive.” </p>
              <p id="OELUOe">But whereas biomedicine has <a
href="https://www.niehs.nih.gov/research/resources/bioethics/index.cfm#:~:text=What%20is%20Bioethics,in%20biomedicine%20and%20biomedical%20research.">many
                  built-in mechanisms</a> that slow things down (think
                institutional review boards and the ethics of “first, do
                no harm”), the world of tech — and AI in particular —
                does not. Just the opposite: The slogan here is “move
                fast and break things,” as Mark Zuckerberg infamously
                said. </p>
              <p id="c1HCoU">Although there’s no law of nature pushing
                us to create certain technologies — that’s something
                humans decide to do or not do — in some cases, there are
                such strong incentives pushing us to create a given
                technology that it can feel as inevitable as, say,
                gravity. </p>
              <p id="hqsFFe">As the team at Anthropic, an AI safety and
                research company, put it in a <a
                  href="https://arxiv.org/pdf/2202.07785.pdf">paper</a>
                last year, “The economic incentives to build such [AI]
                models, and the prestige incentives to announce them,
                are quite strong.” By one estimate, the size of the
                generative AI market alone <a
href="https://www.globenewswire.com/news-release/2022/12/14/2574140/0/en/Generative-AI-Market-Size-Will-Achieve-USD-110-8-Billion-by-2030-growing-at-34-3-CAGR-Exclusive-Report-by-Acumen-Research-and-Consulting.html">could
                  pass $100 billion</a> by the end of the decade — and
                Silicon Valley is only too aware of the <a
                  href="https://hbr.org/2020/03/beyond-silicon-valley">first-mover
                  advantage on new technology</a>. </p>
              <p id="Z8jOrl">But it’s easy to see how these incentives
                may be misaligned for producing AI that truly benefits
                all of humanity. As DeepMind founder Demis Hassabis <a
href="https://twitter.com/demishassabis/status/1570791430834245632">tweeted</a>
                last year, “It’s important *NOT* to ‘move fast and break
                things’ for tech as important as AI.” Rather than
                assuming that other actors will inevitably create and
                deploy these models, so there’s no point in holding off,
                we should ask the question: How can we actually change
                the underlying incentive structure that drives all
                actors?</p>
              <p id="pc7dE6">The Anthropic team offers several ideas,
                one of which gets at the heart of something that makes
                AI so different from past transformative technologies
                like nuclear weapons or bioengineering: the central role
                of private companies. Over the past few years, a lot of
                the splashiest AI research has been migrating from
                academia to industry. To run large-scale AI experiments
                these days, you need a ton of computing power — more
                than <a
href="https://www.technologyreview.com/2019/11/11/132004/the-computing-power-needed-to-train-ai-is-now-rising-seven-times-faster-than-ever-before/">300,000
                  times</a> what you needed a decade ago — as well as
                top technical talent. That’s both expensive and scarce,
                and the resulting cost is often prohibitive in an
                academic setting.</p>
              <p id="T7jO3L">So one solution would be to give more
                resources to academic researchers; since they don’t have
                a profit incentive to commercially deploy their models
                quickly the same way industry researchers do, they can
                serve as a counterweight. Specifically, countries could
                develop <a
href="https://hai.stanford.edu/policy/national-research-cloud#:~:text=A%20National%20Research%20Cloud%20(NRC,needed%20for%20education%20and%20research.">national
                  research clouds</a> to give academics access to free,
                or at least cheap, computing power; there’s already an
                example of this in <a href="https://alliancecan.ca/en">Canada</a>,
                and Stanford’s Institute for Human-Centered Artificial
                Intelligence has <a
                  href="https://hai.stanford.edu/policy/national-research-cloud">put
                  forward a similar idea for the US</a>.</p>
              <p id="SGjWIW">Another way to shift incentives is through
                stigmatizing certain types of AI work. Don’t
                underestimate this one. Companies care about their
                reputations, which affect their bottom line. Creating
                broad public consensus that <a
href="https://www.vox.com/future-perfect/2019/4/19/18412674/ai-bias-facial-recognition-black-gay-transgender">some
                  AI work</a> is unhelpful or unhelpfully fast, so that
                companies doing that work get shamed instead of
                celebrated, could change companies’ decisions.</p>
              <p id="tbuylF">The Anthropic team also recommends
                exploring regulation that would change the incentives.
                “To do this,” <a
                  href="https://arxiv.org/pdf/2202.07785.pdf">they write</a>,
                “there will be a combination of soft regulation (e.g.,
                the creation of voluntary best practices by industry,
                academia, civil society, and government), and hard
                regulation (e.g., transferring these best practices into
                standards and legislation).”</p>
              <p id="vz6yfF">Grace proposes another idea: We could alter
                the publishing system to reduce research dissemination
                in some cases. A journal could verify research results
                and release the fact of their publication without
                releasing any details that could help other labs go
                faster. </p>
              <p id="vKVXWi">This idea might sound pretty out there, but
                at least one major AI company takes for granted that
                changes to publishing norms will become necessary.
                OpenAI’s <a href="https://openai.com/charter/">charter</a>
                notes, “we expect that safety and security concerns will
                reduce our traditional publishing in the future.”</p>
              <p id="NGqMNP">Plus, this kind of thing has been done
                before. Consider how <a
                  href="https://intelligence.org/files/SzilardNuclearWeapons.pdf">Leo
                  Szilard</a>, the physicist who patented the nuclear
                chain reaction in 1934, arranged to mitigate the spread
                of research so it wouldn’t help Nazi Germany create
                nuclear weapons. First, he asked the British War Office
                to hold his patent in secret. Then, after the 1938
                discovery of fission, Szilard worked to convince other
                scientists to keep their discoveries under wraps. He was
                partly successful — until fears that Nazi Germany would
                develop an atomic bomb prompted Szilard to <a
href="https://www.osti.gov/opennet/manhattan-project-history/Events/1939-1942/einstein_letter.htm">write
                  a letter</a> with Albert Einstein to President
                Franklin D. Roosevelt, urging him to start a US nuclear
                program. That became the Manhattan Project, which
                ultimately ended with the destruction of Hiroshima and
                Nagasaki and the dawn of the nuclear age.</p>
              <p id="FTYMhR">And that brings us to the second objection
                ...</p>
              <h3 id="QtErlR">Objection 2: “We don’t want to lose an AI
                arms race with China” </h3>
              <p id="nv0dKx">You might believe that slowing down a new
                technology is possible but still think it’s not
                desirable. Maybe you think the US would be foolish to
                slow down AI progress because that could mean losing an
                arms race with China.</p>
              <p id="XRj7n9">This arms race narrative has become
                incredibly popular. If you’d Googled the phrase “AI arms
                race” before 2016, you’d have gotten <a
href="https://www.foreignaffairs.com/reviews/review-essay/2018-11-16/beyond-ai-arms-race">fewer
                  than 300 results</a>. Try it now and you’ll get about
                248,000 hits. Big Tech CEOs and politicians <a
href="https://www.nationaldefensemagazine.org/articles/2022/9/12/report-artificial-intelligence-becomes-tech-battle-ground">routinely
                  argue</a> that China will soon overtake the US when it
                comes to AI advances, and that those advances should
                spur a “Sputnik moment” for Americans. </p>
              <p id="Fb9i6h">But this narrative is too simplistic. For
                one thing, remember that AI is not just one thing with
                one purpose, like the atomic bomb. It’s a much more
                general-purpose technology, like electricity. </p>
              <p id="2ADnPx">“The problem with the idea of a race is
                that it implies that all that matters is who’s a nose
                ahead when they cross the finish line,” said Helen
                Toner, a director at Georgetown University’s Center for
                Security and Emerging Technology. “That’s not the case
                with AI — since we’re talking about a huge range of
                different technologies that could be applied in all
                kinds of ways.” </p>
              <p id="8jHwRH">As Toner has <a
href="https://80000hours.org/podcast/episodes/helen-toner-on-security-and-emerging-technology/">argued
                  elsewhere</a>, “It’s a little strange to say, ‘Oh,
                who’s going to get AI first? Who’s going to get
                electricity first?’ It seems more like ‘Who’s going to
                use it in what ways, and who’s going to be able to
                deploy it and actually have it be in widespread use?’”</p>
              <p id="5uDyMd">The upshot: What matters here isn’t just
                speed, but norms. We should be concerned about which
                norms different countries are adopting when it comes to
                developing, deploying, and regulating AI. </p>
              <p id="I6EmqV">Jeffrey Ding, an assistant professor of
                political science at George Washington University, told
                me that China has shown interest in regulating AI in
                some ways, though Americans don’t seem to pay much
                attention to that. “The boogeyman of a China that will
                push ahead without any regulations might be a flawed
                conception,” he said. </p>
              <p id="pREpwd">In fact, he added, “China could take an
                even<em> slower</em> approach [than the US] to
                developing AI, just because the government is so
                concerned about having secure and controllable
                technology.” An unpredictably mouthy technology like
                ChatGPT, for example, <a
href="https://www.theguardian.com/technology/2023/feb/23/china-chatgpt-clamp-down-propaganda">could
                  be nightmarish</a> to the Chinese Communist Party,
                which likes to keep a tight lid on discussions about
                politically sensitive topics.</p>
              <p id="EoquPZ">However, given <a
href="https://www.nbr.org/publication/commercialized-militarization-chinas-military-civil-fusion-strategy/">how
                  intertwined China’s military and tech sectors are</a>,
                many people still perceive there to be a classic arms
                race afoot. At the same meeting between Microsoft
                executives and reporters days after the launch of the
                new Bing, I asked whether the US should slow down AI
                progress. I was told we can’t afford to because we’re in
                a two-horse race between the US and China.</p>
              <p id="T0uw3Y">“The first question people in the US should
                ask is, if the US slows down, do we believe China will
                slow down as well?” the top Microsoft executive said. “I
                don’t believe for a moment that the institutions we’re
                competing with in China will slow down simply because we
                decided we’d like to move more slowly. This should be
                looked at much in the way that the competition with
                Russia was looked at” during the Cold War.</p>
              <p id="GpX47K">There’s an understandable concern here:
                Given the Chinese Communist Party’s authoritarianism and
                its horrific human rights abuses — sometimes facilitated
                by AI technologies <a
href="https://www.vox.com/future-perfect/2019/7/3/20681258/china-uighur-surveillance-app-tourist-phone">like
                  facial recognition</a> — it makes sense that many are
                worried about China becoming the world’s dominant
                superpower by going fastest on what is poised to become
                a truly transformative technology.</p>
              <p id="7HWRic">But even if you think your country has
                better values and cares more about safety, and even if
                you believe there’s a classic arms race afoot and China
                is racing full speed ahead, it still may not be in your
                interest to go faster at the expense of safety.</p>
              <p id="bujdhB">Consider that if you take the time to iron
                out some safety issues, the other party may take those
                improvements on board, which would benefit everyone. </p>
              <p id="y7hkjs">“By aggressively pursuing safety, you can
                get the other side halfway to full safety, which is
                worth a lot more than the lost chance of winning,” Grace
                writes. “Especially since if you ‘win,’ you do so
                without much safety, and your victory without safety is
                worse than your opponent’s victory with safety.”</p>
              <p id="bJVmEJ">Besides, if you are in a classic arms race
                and the harms from AI are so large that you’re
                considering slowing down, then the same reasoning should
                be relevant for the other party, too. </p>
              <p id="a4ecwn">“If the world were in the basic arms race
                situation sometimes imagined, and the United States
                would be willing to make laws to mitigate AI risk but
                could not because China would barge ahead, then that
                means China is in a great place to mitigate AI risk,”
                Grace writes. “Unlike the US, China could propose mutual
                slowing down, and the US would go along. Maybe it’s not
                impossible to communicate this to relevant people in
                China.” </p>
              <p id="mnxPBA">Grace’s argument is not that international
                coordination is easy, but simply that it’s possible; on
                balance, we’ve <a
href="https://www.brookings.edu/blog/order-from-chaos/2020/03/03/experts-assess-the-nuclear-non-proliferation-treaty-50-years-after-it-went-into-effect/">managed
                  it far better with nuclear nonproliferation</a> than <a
href="https://2009-2017.state.gov/p/io/potusunga/207241.htm#:~:text=Every%20man%2C%20woman%20and%20child,abolished%20before%20they%20abolish%20us.">many
                  feared in the early days of the atomic age</a>. So we
                shouldn’t be so quick to write off consensus-building —
                whether through technical experts exchanging their
                views, confidence-building measures at the diplomatic
                level, or formal treaties. After all, technologists
                often approach technical problems in AI with incredible
                ambition; why not be similarly ambitious about solving
                human problems by talking to other humans? </p>
              <p id="WUj3DJ">For those who are pessimistic that
                coordination or diplomacy with China can get it to slow
                down voluntarily, there is another possibility: forcing
                it to slow down by, for example, imposing <a
                  href="https://www.csis.org/analysis/choking-chinas-access-future-ai">export
                  controls on chips that are key to more advanced AI
                  tools</a>. The Biden administration has recently shown
                interest in trying to hold China back from advanced AI
                in exactly this way. This strategy, though, may make
                progress on coordination or diplomacy harder.</p>
              <h3 id="S5n5rt">Objection 3: “We need to play with
                advanced AI to figure out how to make advanced AI safe”</h3>
              <p id="GNkYFM">This is an objection you sometimes <a
                  href="https://openai.com/charter/">hear</a> from
                people developing AI’s capabilities — including those
                who say they care a lot about keeping AI safe. </p>
              <p id="eTBTZc">They draw an analogy to transportation.
                Back when our main mode of transport was horses and
                carts, would people have been able to design useful
                safety rules for a future where everyone is driving
                cars? No, the argument goes, because they couldn’t have
                anticipated what that would be like. Similarly, we need
                to get closer to advanced AI to be able to figure out
                how we can make it safe. </p>
              <p id="3jxMvr">But some researchers have pushed back on
                this, noting that even if the horse-and-cart people
                wouldn’t have gotten everything right, they could have
                still come up with some helpful ideas. As Rosie
                Campbell, who works on safety at OpenAI, <a
href="https://becominghuman.ai/keeping-ai-safe-and-beneficial-for-humanity-4d0416300dfa">put
                  it in 2018</a>: “It seems plausible that they might
                have been able to invent certain features like safety
                belts, pedestrian-free roads, an agreement about which
                side of the road to drive on, and some sort of
                turn-taking signal system at busy intersections.”</p>
              <p id="s91oxd">More to the point, it’s now 2023, and we’ve
                already got pretty advanced AI. We’re not exactly in the
                horse-and-cart stage. We’re somewhere in between that
                and a Tesla. </p>
              <p id="CYZgGY">“I would’ve been more sympathetic to this
                [objection] 10 years ago, back when we had nothing that
                resembled the kind of general, flexible, interesting,
                weird stuff we’re seeing with our large language models
                today,” said Cotra. </p>
              <p id="C8WPUT">Grace agrees. “It’s not like we’ve run out
                of things to think about at the moment,” she told me.
                “We’ve got heaps of research that could be done on
                what’s going on with these systems at all. What’s
                happening inside them?”</p>
              <p id="uxsWlV">Our current systems are already black
                boxes, opaque even to the AI experts who build them. So
                maybe we should try to figure out how they work before
                we build black boxes that are even more unexplainable.</p>
              <h3 id="Ep2Ksr">How to flatten the curve of AI progress</h3>
              <p id="DK8wTY">“I think often people are asking the
                question of when transformative AI will happen, but they
                should be asking at least as much the question of how
                quickly and suddenly it’ll happen,” Cotra told me. </p>
              <p id="MXQxkI">Let’s say it’s going to be 20 years until
                we get transformative AI — meaning, AI that can automate
                all the human work needed to send science, technology,
                and the economy into hyperdrive. There’s still a better
                and worse way for that to go. Imagine three different
                scenarios for AI progress:</p>
              <ol>
                <li id="rzNY5J">We get a huge spike upward over the next
                  two years, starting now.</li>
                <li id="xU3r2L">We completely pause all AI capabilities
                  work starting now, then hit unpause in 18 years, and
                  get a huge spike upward over the next two years.</li>
                <li id="SnZt9t">We gradually improve over the course of
                  20 years. </li>
              </ol>
              <p id="qmW2sL">The first version is scary for all the
                reasons we discussed above. The second is scary because
                even during a long pause specifically on AI work,
                underlying computational power would continue to improve
                — so when we finally unpause, AI might advance even
                faster than it’s advancing now. What does that leave us?</p>
              <p id="Rj35Td">“Gradually improving would be the better
                version,” Cotra said. </p>
              <p id="ylwP24">She analogized it to the early advice we
                got about the Covid-19 pandemic: <a
href="https://www.vox.com/2020/3/10/21171481/coronavirus-us-cases-quarantine-cancellation">Flatten
                  the curve</a>. Just as quarantining helped slow the
                spread of the virus and prevent a sharp spike in cases
                that could have overwhelmed hospitals’ capacity,
                investing more in safety would slow the development of
                AI and prevent a sharp spike in progress that could
                overwhelm society’s capacity to adapt. </p>
              <p id="oNBaHo">Ding believes that slowing AI progress in
                the short run is actually best for everyone — even
                profiteers. “If you’re a tech company, if you’re a
                policymaker, if you’re someone who wants your country to
                benefit the most from AI, investing in safety
                regulations could lead to less public backlash and a
                more sustainable long-term development of these
                technologies,” he explained. “So when I frame safety
                investments, I try to frame it as the long-term
                sustainable economic profits you’re going to get if you
                invest more in safety.”</p>
              <p id="xLAm6Q">Translation: Better to make some money now
                with a slowly improving AI, knowing you’ll get to keep
                rolling out your tech and profiting for a long time,
                than to get obscenely rich obscenely fast but produce
                some horrible mishap that triggers a ton of outrage and
                forces you to stop completely.</p>
              <p id="KXSfdg">Will the tech world grasp that, though?
                That partly depends on how we, the public, react to
                shiny new AI advances, from ChatGPT and Bing to whatever
                comes next. </p>
              <p id="z3LAZE">It’s so easy to get seduced by these
                technologies. They feel like magic. You put in a prompt;
                the oracle replies. There’s a natural impulse to ooh and
                aah. But at the rate things are going now, we may be
                oohing and aahing our way to a future no one wants. </p>
              <div data-cid="site/article_footer-1681665156_5321_140002"
data-cdata="{"base_type":"Entry","id":23385239,"timestamp":1679313493,"published_timestamp":1678716971,"show_published_and_updated_timestamps":false,"title":"The
                case for slowing down
AI","type":"Feature","url":"https://www.vox.com/the-highlight/23621198/artificial-intelligence-chatgpt-openai-existential-risk-china-ai-safety-technology","entry_layout":{"key":"unison_default","layout":"unison_main","template":"minimal"},"additional_byline":null,"authors":[{"id":5505183,"name":"Sigal
Samuel","url":"https://www.vox.com/authors/sigal-samuel","twitter_handle":"","profile_image_url":"https://cdn.vox-cdn.com/thumbor/XiHZthlyb1SBS6_fqcxxfw6cAWQ=/512x512/cdn.vox-cdn.com/author_profile_images/191731/Screen_Shot_2019-02-07_at_2.03.15_PM.0.png","title":"","email":"","short_author_bio":"is
                a senior reporter for Vox’s Future Perfect and co-host
                of the Future Perfect podcast. She writes primarily
                about the future of consciousness, tracking advances in
                artificial intelligence and neuroscience and their
                staggering ethical implications. Before joining Vox,
                Sigal was the religion editor at the
Atlantic."}],"byline_enabled":true,"byline_credit_text":"By","byline_serial_comma_enabled":true,"comment_count":0,"comments_enabled":false,"legacy_comments_enabled":false,"coral_comments_enabled":false,"coral_comment_counts_enabled":false,"commerce_disclosure":null,"community_name":"Vox","community_url":"https://www.vox.com/","community_logo":"\r\n<svg
                width=\"386px\" height=\"385px\"
                viewBox=\"0 0 386 385\"
                version=\"1.1\"
                xmlns=\"http://www.w3.org/2000/svg\"
                xmlns:xlink=\"http://www.w3.org/1999/xlink\"
                >\r\n \r\n <title>vox-mark</title>\r\n
                \r\n <defs></defs>\r\n <g
                id=\"Page-1\" stroke=\"none\"
                stroke-width=\"1\" fill=\"none\"
                fill-rule=\"evenodd\" >\r\n <path
                d=\"M239.811,0 L238.424,6 L259.374,6 C278.011,6
                292.908,17.38 292.908,43.002 C292.908,56.967
                287.784,75.469 276.598,96.888 L182.689,305.687
                L159.283,35.693 C159.283,13.809 168.134,6 191.88,6
                L205.854,6 L207.247,0 L1.409,0 L0,6 L13.049,6 C28.88,6
                35.863,15.885 37.264,34.514 L73.611,385 L160.221,385
                L304.525,79.217 C328.749,31.719 349.237,6 372.525,6
                L384.162,6 L385.557,0 L239.811,0\"
                id=\"vox-mark\" fill=\"#444745\"
                ></path>\r\n
</g>\r\n</svg>","cross_community":false,"groups":[{"base_type":"EntryGroup","id":79774,"timestamp":1681121141,"title":"The
Highlight","type":"SiteGroup","url":"https://www.vox.com/the-highlight","slug":"the-highlight","community_logo":"\r\n<svg
                width=\"386px\" height=\"385px\"
                viewBox=\"0 0 386 385\"
                version=\"1.1\"
                xmlns=\"http://www.w3.org/2000/svg\"
                xmlns:xlink=\"http://www.w3.org/1999/xlink\"
                >\r\n \r\n <title>vox-mark</title>\r\n
                \r\n <defs></defs>\r\n <g
                id=\"Page-1\" stroke=\"none\"
                stroke-width=\"1\" fill=\"none\"
                fill-rule=\"evenodd\" >\r\n <path
                d=\"M239.811,0 L238.424,6 L259.374,6 C278.011,6
                292.908,17.38 292.908,43.002 C292.908,56.967
                287.784,75.469 276.598,96.888 L182.689,305.687
                L159.283,35.693 C159.283,13.809 168.134,6 191.88,6
                L205.854,6 L207.247,0 L1.409,0 L0,6 L13.049,6 C28.88,6
                35.863,15.885 37.264,34.514 L73.611,385 L160.221,385
                L304.525,79.217 C328.749,31.719 349.237,6 372.525,6
                L384.162,6 L385.557,0 L239.811,0\"
                id=\"vox-mark\" fill=\"#444745\"
                ></path>\r\n
</g>\r\n</svg>","community_name":"Vox","community_url":"https://www.vox.com/","cross_community":false,"entry_count":496,"always_show":false,"description":"Vox’s
                home for ambitious stories that explain our
world.","disclosure":"","cover_image_url":"","cover_image":null,"title_image_url":"https://cdn.vox-cdn.com/uploads/chorus_asset/file/21937644/highlight_logo_small.png","intro_image":null,"four_up_see_more_text":"View
All","primary":true},{"base_type":"EntryGroup","id":27488,"timestamp":1680797005,"title":"China","type":"SiteGroup","url":"https://www.vox.com/china","slug":"china","community_logo":"\r\n<svg
                width=\"386px\" height=\"385px\"
                viewBox=\"0 0 386 385\"
                version=\"1.1\"
                xmlns=\"http://www.w3.org/2000/svg\"
                xmlns:xlink=\"http://www.w3.org/1999/xlink\"
                >\r\n \r\n <title>vox-mark</title>\r\n
                \r\n <defs></defs>\r\n <g
                id=\"Page-1\" stroke=\"none\"
                stroke-width=\"1\" fill=\"none\"
                fill-rule=\"evenodd\" >\r\n <path
                d=\"M239.811,0 L238.424,6 L259.374,6 C278.011,6
                292.908,17.38 292.908,43.002 C292.908,56.967
                287.784,75.469 276.598,96.888 L182.689,305.687
                L159.283,35.693 C159.283,13.809 168.134,6 191.88,6
                L205.854,6 L207.247,0 L1.409,0 L0,6 L13.049,6 C28.88,6
                35.863,15.885 37.264,34.514 L73.611,385 L160.221,385
                L304.525,79.217 C328.749,31.719 349.237,6 372.525,6
                L384.162,6 L385.557,0 L239.811,0\"
                id=\"vox-mark\" fill=\"#444745\"
                ></path>\r\n
</g>\r\n</svg>","community_name":"Vox","community_url":"https://www.vox.com/","cross_community":false,"entry_count":650,"always_show":false,"description":"News
                and analysis about China, a country with the world's
                second-largest economy, a terrible record on human
                rights, and global
ambitions.","disclosure":"","cover_image_url":"","cover_image":null,"title_image_url":"","intro_image":null,"four_up_see_more_text":"View
All","primary":false},{"base_type":"EntryGroup","id":27524,"timestamp":1681552805,"title":"Technology","type":"SiteGroup","url":"https://www.vox.com/technology","slug":"technology","community_logo":"\r\n<svg
                width=\"386px\" height=\"385px\"
                viewBox=\"0 0 386 385\"
                version=\"1.1\"
                xmlns=\"http://www.w3.org/2000/svg\"
                xmlns:xlink=\"http://www.w3.org/1999/xlink\"
                >\r\n \r\n <title>vox-mark</title>\r\n
                \r\n <defs></defs>\r\n <g
                id=\"Page-1\" stroke=\"none\"
                stroke-width=\"1\" fill=\"none\"
                fill-rule=\"evenodd\" >\r\n <path
                d=\"M239.811,0 L238.424,6 L259.374,6 C278.011,6
                292.908,17.38 292.908,43.002 C292.908,56.967
                287.784,75.469 276.598,96.888 L182.689,305.687
                L159.283,35.693 C159.283,13.809 168.134,6 191.88,6
                L205.854,6 L207.247,0 L1.409,0 L0,6 L13.049,6 C28.88,6
                35.863,15.885 37.264,34.514 L73.611,385 L160.221,385
                L304.525,79.217 C328.749,31.719 349.237,6 372.525,6
                L384.162,6 L385.557,0 L239.811,0\"
                id=\"vox-mark\" fill=\"#444745\"
                ></path>\r\n
</g>\r\n</svg>","community_name":"Vox","community_url":"https://www.vox.com/","cross_community":false,"entry_count":24355,"always_show":false,"description":"Uncovering
                and explaining how our digital world is changing — and
                changing
us.","disclosure":"","cover_image_url":"","cover_image":null,"title_image_url":"","intro_image":null,"four_up_see_more_text":"View
All","primary":false},{"base_type":"EntryGroup","id":30770,"timestamp":1681653349,"title":"Politics","type":"SiteGroup","url":"https://www.vox.com/politics","slug":"politics","community_logo":"\r\n<svg
                width=\"386px\" height=\"385px\"
                viewBox=\"0 0 386 385\"
                version=\"1.1\"
                xmlns=\"http://www.w3.org/2000/svg\"
                xmlns:xlink=\"http://www.w3.org/1999/xlink\"
                >\r\n \r\n <title>vox-mark</title>\r\n
                \r\n <defs></defs>\r\n <g
                id=\"Page-1\" stroke=\"none\"
                stroke-width=\"1\" fill=\"none\"
                fill-rule=\"evenodd\" >\r\n <path
                d=\"M239.811,0 L238.424,6 L259.374,6 C278.011,6
                292.908,17.38 292.908,43.002 C292.908,56.967
                287.784,75.469 276.598,96.888 L182.689,305.687
                L159.283,35.693 C159.283,13.809 168.134,6 191.88,6
                L205.854,6 L207.247,0 L1.409,0 L0,6 L13.049,6 C28.88,6
                35.863,15.885 37.264,34.514 L73.611,385 L160.221,385
                L304.525,79.217 C328.749,31.719 349.237,6 372.525,6
                L384.162,6 L385.557,0 L239.811,0\"
                id=\"vox-mark\" fill=\"#444745\"
                ></path>\r\n
</g>\r\n</svg>","community_name":"Vox","community_url":"https://www.vox.com/","cross_community":false,"entry_count":28050,"always_show":false,"description":"Vox's
                politics team explains everything you need to know about
                what's going on in Washington and what it means for your
life.","disclosure":"","cover_image_url":"","cover_image":null,"title_image_url":"","intro_image":null,"four_up_see_more_text":"View
All","primary":false},{"base_type":"EntryGroup","id":30778,"timestamp":1681653349,"title":"World
Politics","type":"SiteGroup","url":"https://www.vox.com/world-politics","slug":"world-politics","community_logo":"\r\n<svg
                width=\"386px\" height=\"385px\"
                viewBox=\"0 0 386 385\"
                version=\"1.1\"
                xmlns=\"http://www.w3.org/2000/svg\"
                xmlns:xlink=\"http://www.w3.org/1999/xlink\"
                >\r\n \r\n <title>vox-mark</title>\r\n
                \r\n <defs></defs>\r\n <g
                id=\"Page-1\" stroke=\"none\"
                stroke-width=\"1\" fill=\"none\"
                fill-rule=\"evenodd\" >\r\n <path
                d=\"M239.811,0 L238.424,6 L259.374,6 C278.011,6
                292.908,17.38 292.908,43.002 C292.908,56.967
                287.784,75.469 276.598,96.888 L182.689,305.687
                L159.283,35.693 C159.283,13.809 168.134,6 191.88,6
                L205.854,6 L207.247,0 L1.409,0 L0,6 L13.049,6 C28.88,6
                35.863,15.885 37.264,34.514 L73.611,385 L160.221,385
                L304.525,79.217 C328.749,31.719 349.237,6 372.525,6
                L384.162,6 L385.557,0 L239.811,0\"
                id=\"vox-mark\" fill=\"#444745\"
                ></path>\r\n
</g>\r\n</svg>","community_name":"Vox","community_url":"https://www.vox.com/","cross_community":false,"entry_count":6398,"always_show":false,"description":"","disclosure":"","cover_image_url":"","cover_image":null,"title_image_url":"","intro_image":null,"four_up_see_more_text":"View
All","primary":false},{"base_type":"EntryGroup","id":76815,"timestamp":1681498802,"title":"Future
Perfect","type":"SiteGroup","url":"https://www.vox.com/future-perfect","slug":"future-perfect","community_logo":"\r\n<svg
                width=\"386px\" height=\"385px\"
                viewBox=\"0 0 386 385\"
                version=\"1.1\"
                xmlns=\"http://www.w3.org/2000/svg\"
                xmlns:xlink=\"http://www.w3.org/1999/xlink\"
                >\r\n \r\n <title>vox-mark</title>\r\n
                \r\n <defs></defs>\r\n <g
                id=\"Page-1\" stroke=\"none\"
                stroke-width=\"1\" fill=\"none\"
                fill-rule=\"evenodd\" >\r\n <path
                d=\"M239.811,0 L238.424,6 L259.374,6 C278.011,6
                292.908,17.38 292.908,43.002 C292.908,56.967
                287.784,75.469 276.598,96.888 L182.689,305.687
                L159.283,35.693 C159.283,13.809 168.134,6 191.88,6
                L205.854,6 L207.247,0 L1.409,0 L0,6 L13.049,6 C28.88,6
                35.863,15.885 37.264,34.514 L73.611,385 L160.221,385
                L304.525,79.217 C328.749,31.719 349.237,6 372.525,6
                L384.162,6 L385.557,0 L239.811,0\"
                id=\"vox-mark\" fill=\"#444745\"
                ></path>\r\n
</g>\r\n</svg>","community_name":"Vox","community_url":"https://www.vox.com/","cross_community":false,"entry_count":1539,"always_show":false,"description":"Finding
                the best ways to do good.
","disclosure":"","cover_image_url":"","cover_image":null,"title_image_url":"https://cdn.vox-cdn.com/uploads/chorus_asset/file/16290809/future_perfect_sized.0.jpg","intro_image":null,"four_up_see_more_text":"View
All","primary":false},{"base_type":"EntryGroup","id":80311,"timestamp":1681323605,"title":"Artificial
Intelligence","type":"SiteGroup","url":"https://www.vox.com/artificial-intelligence","slug":"artificial-intelligence","community_logo":"\r\n<svg
                width=\"386px\" height=\"385px\"
                viewBox=\"0 0 386 385\"
                version=\"1.1\"
                xmlns=\"http://www.w3.org/2000/svg\"
                xmlns:xlink=\"http://www.w3.org/1999/xlink\"
                >\r\n \r\n <title>vox-mark</title>\r\n
                \r\n <defs></defs>\r\n <g
                id=\"Page-1\" stroke=\"none\"
                stroke-width=\"1\" fill=\"none\"
                fill-rule=\"evenodd\" >\r\n <path
                d=\"M239.811,0 L238.424,6 L259.374,6 C278.011,6
                292.908,17.38 292.908,43.002 C292.908,56.967
                287.784,75.469 276.598,96.888 L182.689,305.687
                L159.283,35.693 C159.283,13.809 168.134,6 191.88,6
                L205.854,6 L207.247,0 L1.409,0 L0,6 L13.049,6 C28.88,6
                35.863,15.885 37.264,34.514 L73.611,385 L160.221,385
                L304.525,79.217 C328.749,31.719 349.237,6 372.525,6
                L384.162,6 L385.557,0 L239.811,0\"
                id=\"vox-mark\" fill=\"#444745\"
                ></path>\r\n
</g>\r\n</svg>","community_name":"Vox","community_url":"https://www.vox.com/","cross_community":false,"entry_count":352,"always_show":false,"description":"Vox's
                coverage of how AI is shaping everything from text and
                image generation to how we live.
","disclosure":"","cover_image_url":"","cover_image":null,"title_image_url":"","intro_image":null,"four_up_see_more_text":"View
All","primary":false},{"base_type":"EntryGroup","id":102794,"timestamp":1681323605,"title":"Innovation","type":"SiteGroup","url":"https://www.vox.com/innovation","slug":"innovation","community_logo":"\r\n<svg
                width=\"386px\" height=\"385px\"
                viewBox=\"0 0 386 385\"
                version=\"1.1\"
                xmlns=\"http://www.w3.org/2000/svg\"
                xmlns:xlink=\"http://www.w3.org/1999/xlink\"
                >\r\n \r\n <title>vox-mark</title>\r\n
                \r\n <defs></defs>\r\n <g
                id=\"Page-1\" stroke=\"none\"
                stroke-width=\"1\" fill=\"none\"
                fill-rule=\"evenodd\" >\r\n <path
                d=\"M239.811,0 L238.424,6 L259.374,6 C278.011,6
                292.908,17.38 292.908,43.002 C292.908,56.967
                287.784,75.469 276.598,96.888 L182.689,305.687
                L159.283,35.693 C159.283,13.809 168.134,6 191.88,6
                L205.854,6 L207.247,0 L1.409,0 L0,6 L13.049,6 C28.88,6
                35.863,15.885 37.264,34.514 L73.611,385 L160.221,385
                L304.525,79.217 C328.749,31.719 349.237,6 372.525,6
                L384.162,6 L385.557,0 L239.811,0\"
                id=\"vox-mark\" fill=\"#444745\"
                ></path>\r\n
</g>\r\n</svg>","community_name":"Vox","community_url":"https://www.vox.com/","cross_community":false,"entry_count":150,"always_show":false,"description":"","disclosure":"","cover_image_url":"","cover_image":null,"title_image_url":"","intro_image":null,"four_up_see_more_text":"View
All","primary":false}],"internal_groups":[{"base_type":"EntryGroup","id":112405,"timestamp":1681492928,"title":"Approach
                — Explores solutions or ideas to solve
problems","type":"SiteGroup","url":"","slug":"approach-explores-solutions-or-ideas-to-solve-problems","community_logo":"\r\n<svg
                width=\"386px\" height=\"385px\"
                viewBox=\"0 0 386 385\"
                version=\"1.1\"
                xmlns=\"http://www.w3.org/2000/svg\"
                xmlns:xlink=\"http://www.w3.org/1999/xlink\"
                >\r\n \r\n <title>vox-mark</title>\r\n
                \r\n <defs></defs>\r\n <g
                id=\"Page-1\" stroke=\"none\"
                stroke-width=\"1\" fill=\"none\"
                fill-rule=\"evenodd\" >\r\n <path
                d=\"M239.811,0 L238.424,6 L259.374,6 C278.011,6
                292.908,17.38 292.908,43.002 C292.908,56.967
                287.784,75.469 276.598,96.888 L182.689,305.687
                L159.283,35.693 C159.283,13.809 168.134,6 191.88,6
                L205.854,6 L207.247,0 L1.409,0 L0,6 L13.049,6 C28.88,6
                35.863,15.885 37.264,34.514 L73.611,385 L160.221,385
                L304.525,79.217 C328.749,31.719 349.237,6 372.525,6
                L384.162,6 L385.557,0 L239.811,0\"
                id=\"vox-mark\" fill=\"#444745\"
                ></path>\r\n
</g>\r\n</svg>","community_name":"Vox","community_url":"https://www.vox.com/","cross_community":false,"entry_count":33,"always_show":false,"description":"","disclosure":"","cover_image_url":"","cover_image":null,"title_image_url":"","intro_image":null,"four_up_see_more_text":"View
All"}],"image":{"ratio":"*","original_url":"https://cdn.vox-cdn.com/uploads/chorus_image/image/72068459/Vox_Doomerism_AI_Final_2.0.jpg","network":"unison","bgcolor":"white","pinterest_enabled":false,"caption":null,"credit":"Tyler
                Comrie for
Vox","focal_area":{"top_left_x":3024,"top_left_y":1449,"bottom_right_x":4176,"bottom_right_y":2601},"bounds":[0,0,7200,4050],"uploaded_size":{"width":7200,"height":4050},"focal_point":null,"image_id":72068459,"alt_text":"Image
                of a robot beneath a
rainbow"},"hub_image":{"ratio":"*","original_url":"https://cdn.vox-cdn.com/uploads/chorus_image/image/72068459/Vox_Doomerism_AI_Final_2.0.jpg","network":"unison","bgcolor":"white","pinterest_enabled":false,"caption":null,"credit":"Tyler
                Comrie for
Vox","focal_area":{"top_left_x":3024,"top_left_y":1449,"bottom_right_x":4176,"bottom_right_y":2601},"bounds":[0,0,7200,4050],"uploaded_size":{"width":7200,"height":4050},"focal_point":null,"image_id":72068459,"alt_text":"Image
                of a robot beneath a
rainbow"},"lede_image":{"ratio":"*","original_url":"https://cdn.vox-cdn.com/uploads/chorus_image/image/72068460/Vox_Doomerism_AI_Final_2.0.jpg","network":"unison","bgcolor":"white","pinterest_enabled":false,"caption":null,"credit":"Tyler
                Comrie for
Vox","focal_area":{"top_left_x":3024,"top_left_y":1449,"bottom_right_x":4176,"bottom_right_y":2601},"bounds":[0,0,7200,4050],"uploaded_size":{"width":7200,"height":4050},"focal_point":null,"image_id":72068460,"alt_text":"Image
                of a robot beneath a
rainbow"},"group_cover_image":null,"picture_standard_lead_image":{"ratio":"*","original_url":"https://cdn.vox-cdn.com/uploads/chorus_image/image/72068460/Vox_Doomerism_AI_Final_2.0.jpg","network":"unison","bgcolor":"white","pinterest_enabled":false,"caption":null,"credit":"Tyler
                Comrie for
Vox","focal_area":{"top_left_x":3024,"top_left_y":1449,"bottom_right_x":4176,"bottom_right_y":2601},"bounds":[0,0,7200,4050],"uploaded_size":{"width":7200,"height":4050},"focal_point":null,"image_id":72068460,"alt_text":"Image
                of a robot beneath a
rainbow","picture_element":{"html":{},"alt":"Image
                of a robot beneath a
rainbow","default":{"srcset":"https://cdn.vox-cdn.com/thumbor/KK0c2y5cohtpoDzMoVJ8YzZgM2A=/0x0:7200x4050/320x240/filters:focal(3024x1449:4176x2601)/cdn.vox-cdn.com/uploads/chorus_image/image/72068460/Vox_Doomerism_AI_Final_2.0.jpg
                320w,
https://cdn.vox-cdn.com/thumbor/aiIp3To68KXC1n7Lyqb3PEgMMEA=/0x0:7200x4050/620x465/filters:focal(3024x1449:4176x2601)/cdn.vox-cdn.com/uploads/chorus_image/image/72068460/Vox_Doomerism_AI_Final_2.0.jpg
                620w,
https://cdn.vox-cdn.com/thumbor/T7SEmlOrDMKiWpiieZ8kVYBrdDE=/0x0:7200x4050/920x690/filters:focal(3024x1449:4176x2601)/cdn.vox-cdn.com/uploads/chorus_image/image/72068460/Vox_Doomerism_AI_Final_2.0.jpg
                920w,
https://cdn.vox-cdn.com/thumbor/-rm09xHifLAiD09cYPV50jQoM64=/0x0:7200x4050/1220x915/filters:focal(3024x1449:4176x2601)/cdn.vox-cdn.com/uploads/chorus_image/image/72068460/Vox_Doomerism_AI_Final_2.0.jpg
                1220w,
https://cdn.vox-cdn.com/thumbor/Q7wNpw0ULYRs19HKEAVGW0FZi8U=/0x0:7200x4050/1520x1140/filters:focal(3024x1449:4176x2601)/cdn.vox-cdn.com/uploads/chorus_image/image/72068460/Vox_Doomerism_AI_Final_2.0.jpg
1520w","webp_srcset":"https://cdn.vox-cdn.com/thumbor/GF6oagM1xj29iSh8FGF0tZ_WuuU=/0x0:7200x4050/320x240/filters:focal(3024x1449:4176x2601):format(webp)/cdn.vox-cdn.com/uploads/chorus_image/image/72068460/Vox_Doomerism_AI_Final_2.0.jpg
                320w,
https://cdn.vox-cdn.com/thumbor/UXmcBhPXrLtMDvXv0kdiuDDDB0E=/0x0:7200x4050/620x465/filters:focal(3024x1449:4176x2601):format(webp)/cdn.vox-cdn.com/uploads/chorus_image/image/72068460/Vox_Doomerism_AI_Final_2.0.jpg
                620w,
https://cdn.vox-cdn.com/thumbor/RECA8WzSK7YAESP0YaaOYiLEqMI=/0x0:7200x4050/920x690/filters:focal(3024x1449:4176x2601):format(webp)/cdn.vox-cdn.com/uploads/chorus_image/image/72068460/Vox_Doomerism_AI_Final_2.0.jpg
                920w,
https://cdn.vox-cdn.com/thumbor/86-NIKfC6jTA3FIHkjIaTj_9ilg=/0x0:7200x4050/1220x915/filters:focal(3024x1449:4176x2601):format(webp)/cdn.vox-cdn.com/uploads/chorus_image/image/72068460/Vox_Doomerism_AI_Final_2.0.jpg
                1220w,
https://cdn.vox-cdn.com/thumbor/HQahIxkkv8Kuo_bCVzxILUpzGg8=/0x0:7200x4050/1520x1140/filters:focal(3024x1449:4176x2601):format(webp)/cdn.vox-cdn.com/uploads/chorus_image/image/72068460/Vox_Doomerism_AI_Final_2.0.jpg
1520w","media":null,"sizes":"(min-width:
                809px) 485px, (min-width: 600px) 60vw,
100vw","fallback":"https://cdn.vox-cdn.com/thumbor/6RsvohofeYp1o7OST-ufByu2wOg=/0x0:7200x4050/1200x900/filters:focal(3024x1449:4176x2601)/cdn.vox-cdn.com/uploads/chorus_image/image/72068460/Vox_Doomerism_AI_Final_2.0.jpg"},"art_directed":[]}},"image_is_placeholder":false,"image_is_hidden":false,"network":"vox","omits_labels":true,"optimizable":false,"promo_headline":"The
                case for slowing down
AI","recommended_count":0,"recs_enabled":false,"slug":"the-highlight/23621198/artificial-intelligence-chatgpt-openai-existential-risk-china-ai-safety-technology","dek":"Pumping
                the brakes on artificial intelligence could be the best
                thing we ever do for
                humanity.","homepage_title":"The
                case for slowing down
                AI","homepage_description":"Pumping
                the brakes on artificial intelligence could be the best
                thing we ever do for
humanity.","show_homepage_description":false,"title_display":"The
                case for slowing down
AI","pull_quote":null,"voxcreative":false,"show_entry_time":true,"show_dates":true,"paywalled_content":false,"paywalled_content_box_logo_url":"","paywalled_content_page_logo_url":"","paywalled_content_main_url":"","article_footer_body":"Since
                Vox launched in 2014, our audience has supported our
                mission in so many meaningful ways. More than 80,000
                people have responded to requests to help with our
                reporting. Countless teachers have told us about how
                they’re using our work in their classroom. And in the
                three years since we launched the Vox Contributions
                program, tens of thousands of people have chipped in to
                help keep our unique work free. We’re committed to
                keeping our work free for all who need it, because we
                believe that high-quality explanatory journalism is a
                public good. We can’t rely on ads alone to do that.
                <a
href=\"http://vox.com/pages/support-now?itm_campaign=anniversary-week1-week2&itm_medium=site&itm_source=article-footer\">Will
                you help us keep Vox free for the next nine years by
                making a gift today?
                \r\n</a>","article_footer_header":"<a
href=\"http://vox.com/pages/support-now?itm_campaign=http://vox.com/pages/support-now?itm_campaign=anniversary-week1-week2&itm_medium=site&itm_source=article-footer\">Help
                us celebrate nine years of
Vox</a>","use_article_footer":true,"article_footer_cta_annual_plans":"{\r\n
                \"default_plan\": 1,\r\n \"plans\":
                [\r\n {\r\n \"amount\": 95,\r\n
                \"plan_id\": 74295\r\n },\r\n {\r\n
                \"amount\": 120,\r\n \"plan_id\":
                81108\r\n },\r\n {\r\n \"amount\": 250,\r\n
                \"plan_id\": 77096\r\n },\r\n {\r\n
                \"amount\": 350,\r\n \"plan_id\":
                92038\r\n }\r\n
]\r\n}","article_footer_cta_button_annual_copy":"year","article_footer_cta_button_copy":"Yes,
                I'll
give","article_footer_cta_button_monthly_copy":"month","article_footer_cta_default_frequency":"annual","article_footer_cta_monthly_plans":"{\r\n
                \"default_plan\": 1,\r\n \"plans\":
                [\r\n {\r\n \"amount\": 9,\r\n
                \"plan_id\": 77780\r\n },\r\n {\r\n
                \"amount\": 20,\r\n \"plan_id\":
                69279\r\n },\r\n {\r\n \"amount\": 50,\r\n
                \"plan_id\": 46947\r\n },\r\n {\r\n
                \"amount\": 100,\r\n \"plan_id\":
                46782\r\n }\r\n
                ]\r\n}","article_footer_cta_once_plans":"{\r\n
                \"default_plan\": 0,\r\n \"plans\":
                [\r\n {\r\n \"amount\": 20,\r\n
                \"plan_id\": 69278\r\n },\r\n {\r\n
                \"amount\": 50,\r\n \"plan_id\":
                48880\r\n },\r\n {\r\n \"amount\": 100,\r\n
                \"plan_id\": 46607\r\n },\r\n {\r\n
                \"amount\": 250,\r\n \"plan_id\":
                46946\r\n }\r\n
]\r\n}","use_article_footer_cta_read_counter":true,"use_article_footer_cta":true,"layout":"","featured_placeable":false,"video_placeable":false,"disclaimer":null,"volume_placement":"lede","video_autoplay":false,"youtube_url":"http://bit.ly/voxyoutube","facebook_video_url":"","play_in_modal":true,"user_preferences_for_privacy_enabled":false,"show_branded_logos":true,"uses_video_lede":false,"image_brightness":"image-dark","display_logo_lockup":false,"svg_logo_data":"<svg
                id=\"Layer_1\"
                xmlns=\"http://www.w3.org/2000/svg\"
                viewBox=\"0 0 242 121\"><path
                fill=\"#ffffff\" d=\"M110.674
                3.528h3.474L114.564 2H71.63l-.418 1.528h6.253c5.418 0
                9.726 3.75 9.726 11.255 0 4.168-1.8 9.587-4.72
                16.118L54.82 92.32l-6.81-79.756c-.556-6.252 2.5-9.03
                9.59-9.03h4.027L62.042 2H1.6l-.557 1.528h3.89c4.725 0
                6.532 2.918 7.087 8.615l10.7
                103.1h25.427l42.518-90.038c6.392-13.48 13.2-21.677
                20.01-21.677zm-5.002 112.27c-3.89
                0-6.253-1.25-6.253-7.642 0-8.06 2.91-23.76
                6.11-38.072.41 6.67 5 13.2 11.81 13.2 1.67 0 3.06-.138
                4.44-.417-6.26 27.236-8.76 32.932-16.12
                32.932zm121.024-54.19c8.06 0 13.2-6.67 13.2-14.173
                0-6.392-4.585-11.116-11.115-11.116-11.81 0-17.36
                9.31-27.09 26.53-2.08-10.7-6.94-24.73-19.45-24.73-14.03
                0-30.15 20.01-45.02 32.37-6.67 5.56-14.17 9.17-20.15
                9.17-6.11 0-9.72-6.26-9.72-17.23 4.31-17.93 6.67-22.65
                13.34-22.65 4.59 0 6.53 2.64 6.53 8.06 0 5.69-1.25
                15.42-3.75 27.51 6.67-2.09 16.68-10.42
                25.01-19.45-4.44-10.56-13.89-17.79-27.65-17.79-25.42
                0-47.66 22.78-47.66 48.35 0 17.65 12.51 30.984 32.1
                30.984 32.38 0 45.86-28.066 45.86-47.52
                0-2.78-.14-4.86-.42-7.364C155.717 57.14 162.108 52
                167.388 52c5.975 0 10.7 15.007 15.423 37.657-4.17
                4.58-8.34 13.474-10.42
                15.002-.836-8.06-6.115-13.06-13.2-13.06-7.92 0-13.48
                7.5-13.48 13.893 0 7.226 5 11.95 11.53 11.95 13.76 0
                17.65-13.062 26.265-24.595 2.64 12.363 8.754 24.59
                19.313 24.59 12.506 0 24.178-10.7
                30.15-18.34l-1.11-1.81c-3.89 3.753-7.642 6.254-11.95
                6.254-7.78 0-13.34-16.81-17.645-37.1 2.5-3.47
                6.53-12.225 9.31-15.28 1.95 3.612 5.978 10.42 15.15
                10.42z\"/></svg>"}"><br>
              </div>
            </div>
          </div>
        </div>
      </div>
      <div> </div>
      <div aria-owns="toolbar"></div>
    </div>
  </body>
</html>