<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#f9f9fa">
<div class="header reader-header reader-show-element">[Meneer
Microschoft is ook van de 'grandeur', een veelvoorkomende
CEO-aandoening: "I'm much more focused on the benefits to all of
us. I am haunted by the fact that the industrial revolution didn't
touch the parts of the world where I grew up until much later. So
I am looking for the thing that may be even bigger than the
industrial revolution, and really doing what the industrial
revolution did for the West, for everyone in the world. So I'm not
at all worried about AGI showing up, or showing up fast. Great,
right? That means 8 billion people have abundance. That's a
fantastic world to live in."/RO]</div>
<br>
<div class="header reader-header reader-show-element">$describe
"Satya Nadella" en<br>
en:<br>
Satya Nadella<br>
Satya Narayana Nadella (Telugu: నాదెళ్ల సత్యనారాయణ; born 19
August 1967) is an Indian-American business executive. He is the
executive chairman and CEO of Microsoft, succeeding Steve Ballmer
in 2014 as CEO and John W. Thompson in 2021 as chairman. Before
becoming CEO, he was the executive vice president of Microsoft's
cloud and enterprise group, responsible for building and running
the company's computing platforms.<br>
<br>
<br>
<a class="moz-txt-link-freetext" href="https://en.wikipedia.org/wiki/Satya_Nadella">https://en.wikipedia.org/wiki/Satya_Nadella</a><br>
<div class="header reader-header reader-show-element"><br>
</div>
<div class="header reader-header reader-show-element"><a class="moz-txt-link-freetext" href="https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence">https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence</a></div>
<div class="header reader-header reader-show-element"><br>
</div>
<br>
<br>
</div>
<div class="header reader-header reader-show-element"><br>
</div>
<div class="header reader-header reader-show-element"><a
class="domain reader-domain"
href="https://www.wired.com/story/microsofts-satya-nadella-is-betting-everything-on-ai/">wired.com</a>
<div class="domain-border"></div>
<h1 class="reader-title">Microsoft’s Satya Nadella Is Betting
Everything on AI</h1>
<div class="credits reader-credits">Steven Levy</div>
<div class="meta-data">
<div class="reader-estimated-time"
data-l10n-args="{"range":"17–22","rangePlural":"other"}"
data-l10n-id="about-reader-estimated-read-time" dir="ltr">17–22
minutes</div>
</div>
</div>
<hr>
<div class="content">
<div class="moz-reader-content reader-show-element">
<div id="readability-page-1" class="page">
<div data-testid="ArticlePageChunks">
<div data-journey-hook="client-content"
data-testid="BodyWrapper">
<p><span>I never thought</span> I'd write these words, but
here goes. Satya Nadella—and <a
href="https://www.wired.com/tag/microsoft">Microsoft</a>,
the company he runs—are riding high on <a
href="https://www.wired.com/story/how-microsofts-bing-chatbot-came-to-be-and-where-its-going-next/">the
buzz from its search engine</a>. That's quite a
contrast from the first time I spoke with Nadella, in
2009. Back then, he was not so well known, and he made a
point of telling me about his origins. Born in
Hyderabad, India, he attended grad school in the US and
joined Microsoft in 1992, just as the firm was rising to
power. Nadella hopped all over the company and stayed
through the downtimes, including after Microsoft's epic
antitrust court battle and when it missed the smartphone
revolution. Only after spinning through his bio did he
bring up <a
data-offer-url="https://twitter.com/satyanadella/status/1987999001?lang=en"
data-event-click="{"element":"ExternalLink","outgoingURL":"https://twitter.com/satyanadella/status/1987999001?lang=en"}"
href="https://twitter.com/satyanadella/status/1987999001?lang=en"
rel="nofollow noopener" target="_blank">his project at
the time</a>: Bing, the much-mocked search engine that
was a poor cousin—if that—to Google's dominant
franchise.</p>
<p>As we all know, Bing failed to loosen Google's grip on
search, but Nadella's fortunes only rose. In 2011 he led
the nascent cloud platform Azure, building out its
infrastructure and services. Then, because of his track
record, his quietly effective leadership, and a
thumbs-up from Bill Gates, he <a
href="https://www.wired.com/2014/04/microsoft-new-company/">became
Microsoft's CEO</a> in 2014. Nadella immediately
began to transform the company's culture and business.
He open-sourced products such as .NET, made frenemies of
former blood foes (as in a partnership with Salesforce),
and began a series of big acquisitions, including Mojang
(maker of <a
href="https://www.wired.com/story/fast-forward-gpt-4-minecraft-chatgpt/"><em>Minecraft</em></a>),
LinkedIn, and GitHub—networks whose loyal members could
be nudged into Microsoft's world. He doubled down on
Azure, and it grew into a true competitor to Amazon's
AWS cloud service. Microsoft thrived, becoming a $2
trillion company.</p>
<p>Still, the company never seemed to fully recapture the
rollicking mojo of the '90s. Until now. When the startup
<a href="https://www.wired.com/tag/openai">OpenAI</a>
began developing its jaw-dropping <a
href="https://www.wired.com/tag/artificial-intelligence">generative
AI</a> products, Nadella was quick to see that
partnering with the company and its CEO, Sam Altman,
would put Microsoft at the center of a new AI boom.
(OpenAI was drawn to the deal by its need for the
computation powers of Microsoft's Azure servers.)</p>
<p>As one of its first moves in the partnership, Microsoft
impressed the developer world by releasing <a
href="https://www.wired.com/story/openai-copilot-autocomplete-for-code/">Copilot</a>,
an AI factotum that automates certain elements of
coding. And in February, Nadella shocked the broader
world (and its competitor Google) by integrating
OpenAI's state-of-the-art large language model into
Bing, via a chatbot named Sydney. Millions of people
used it. Yes, there were hiccups—<em>New York Times</em>
reporter Kevin Roose <a
href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html">cajoled
Sydney</a> into confessing it was in love with him and
was going to steal him from his wife—but overall, the
company was emerging as an AI heavyweight. Microsoft is
now integrating generative AI—“copilots”—into many of
its products. Its $10 billion-plus investment in OpenAI
is looking like the bargain of the century. (Not that
Microsoft has been immune to tech's recent austerity
trend—Nadella has laid off 10,000 workers this year.)</p>
<p>Nadella, now 55, is finally getting cred as more than a
skillful caretaker and savvy leverager of Microsoft's
vast resources. His thoughtful leadership and striking
humility have long been a contrast to his ruthless and
rowdy predecessors, Bill Gates and Steve Ballmer. (True,
the empathy bar those dudes set was pretty low.) With
his swift and sweeping adoption of AI, he's displaying a
boldness that evokes Microsoft's early feistiness. And
now everyone wants to hear his views on AI, the
century's hottest topic in tech.</p>
<p><strong>STEVEN LEVY: When did you realize that this
stage of AI was going to be so transformative?</strong></p>
</div>
<div data-journey-hook="client-content"
data-testid="BodyWrapper">
<p><strong>SATYA NADELLA:</strong> When we went from GPT
2.5 to 3, we all started seeing these emergent
capabilities. It began showing scaling effects. We
didn't train it on just coding, but it got really good
at coding. That's when I became a believer. I thought,
“Wow, this is really on.”</p>
<p><strong>Was there a single eureka moment that led you
to go all in?</strong></p>
<p>It was that ability to code, which led to our creating
Copilot. But the first time I saw what is now called
GPT-4, in the summer of 2022, was a mind-blowing
experience. There is one query I always sort of use as a
reference. Machine translation has been with us for a
long time, and it's achieved a lot of great benchmarks,
but it doesn't have the subtlety of capturing deep
meaning in poetry. Growing up in Hyderabad, India, I'd
dreamt about being able to read Persian poetry—in
particular the work of Rumi, which has been translated
into Urdu and then into English. GPT-4 did it, in one
shot. It was not just a machine translation, but
something that preserved the sovereignty of poetry
across two language boundaries. And that's pretty cool.</p>
<p><strong>Microsoft has been investing in AI for
decades—didn't you have your own large language model?
Why did you need OpenAI?</strong></p>
<p>We had our own set of efforts, including a model called
Turing that was inside of Bing and offered in Azure and
what have you. But I felt OpenAI was going after the
same thing as us. So instead of trying to train five
different foundational models, I wanted one foundation,
making it a basis for a platform effect. So we
partnered. They bet on us, we bet on them. They do the
foundation models, and we do a lot of work around them,
including the tooling around responsible AI and AI
safety. At the end of the day we are two independent
companies deeply partnered to go after one goal, with
discipline, instead of multiple teams just doing random
things. We said, “Let's go after this and build one
thing that really captures the imagination of the
world.”</p>
<p><strong>Did you try to buy OpenAI?</strong></p>
<p>I've grown up at Microsoft dealing with partners in
many interesting ways. Back in the day, we built SQL
Server by partnering deeply with SAP. So this type of
stuff is not alien to me. What's different is that
OpenAI has an interesting structure; it's nonprofit.</p>
<p><strong>That normally would seem to be a deal-killer,
but somehow you and OpenAI came up with a complicated
workaround.</strong></p>
<p>They created a for-profit entity, and we said, “We're
OK with it.” We have a good commercial partnership. I
felt like there was a long-term stable deal here.</p>
<p><strong>Apparently, it's set up so that OpenAI makes
money from your deal, as does Microsoft, but there's a
cap on how much profit your collaboration can
accumulate. When you reach it, it's like Cinderella's
carriage turning into the pumpkin—OpenAI becomes a
pure nonprofit. What happens to the partnership then?
Does OpenAI get to say, “We're totally nonprofit, and
we don't want to be part of a commercial operation?”</strong></p>
</div>
<div data-journey-hook="client-content"
data-testid="BodyWrapper">
<p>I think their blog lays this out. Fundamentally,
though, their long-term idea is we get to
superintelligence. If that happens, I think all bets are
off, right?</p>
<p><strong>Yeah. For everyone.</strong></p>
<p>If this is the last invention of humankind, then all
bets are off. Different people will have different
judgments on what that is, and when that is. The unsaid
part is, what would the governments want to say about
that? So I kind of set that aside. This only happens
when there is superintelligence.</p>
<figure>
<p><span>Photograph: Meron Menghistab</span></p>
</figure>
<p><strong>OpenAI CEO Sam Altman believes that this will
indeed happen. Do you agree with him that we're going
to hit that AGI superintelligence benchmark?</strong></p>
<p>I'm much more focused on the benefits to all of us. I
am haunted by the fact that the industrial revolution
didn't touch the parts of the world where I grew up
until much later. So I am looking for the thing that may
be even bigger than the industrial revolution, and
really doing what the industrial revolution did for the
West, for everyone in the world. So I'm not at all
worried about AGI showing up, or showing up fast. Great,
right? That means 8 billion people have abundance.
That's a fantastic world to live in.</p>
<p><strong>What's your road map to make that vision real?
Right now you're building AI into your search engine,
your databases, your developer tools. That's not what
those underserved people are using.</strong></p>
<p>Great point. Let's start by looking at what the
frontiers for developers are. One of the things that I
am really excited about is bringing back the joy of
development. Microsoft started as a tools company,
notably developer tools. But over the years, because of
the complexity of software development, the attention
and flow that developers once enjoyed have been
disrupted. What we have done for the craft with this AI
programmer Copilot [which writes the mundane code and
frees programmers to tackle more challenging problems]
is beautiful to see. Now, 100 million developers who are
on GitHub can <em>enjoy</em> themselves. As AI
transforms the process of programming, though, it can
grow 10 times—100 million can be a billion. When you are
prompting an LLM, you're programming it.</p>
</div>
<div data-journey-hook="client-content"
data-testid="BodyWrapper">
<p><strong>Anyone with a smartphone who knows how to talk
can be a developer?</strong></p>
<p>Absolutely. You don't have to write a formula or learn
the syntax or algebra. If you say prompting is just
development, the learning curves are going to get
better. You can now even ask, “What is development?”
It's going to be democratized.</p>
<p>As for getting this to all 8 billion people, I was in
India in January and saw an amazing demo. The government
has a program called Digital Public Goods, and one is a
text-to-speech system. In the demo, a rural farmer was
using the system to ask about a subsidy program he saw
on the news. It told him about the program and the forms
he could fill out to apply. Normally, it would tell him
where to get the forms. But one developer in India had
trained GPT on all the Indian government documents, so
the system filled it out for him automatically, in a
different language. Something created a few months
earlier on the West Coast, United States, had made its
way to a developer in India, who then wrote a mod that
allows a rural Indian farmer to get the benefits of that
technology on a WhatsApp bot on a mobile phone. My dream
is that every one of Earth's 8 billion people can have
an AI tutor, an AI doctor, a programmer, maybe a
consultant!</p>
<p><strong>That's a great dream. But generative AI is new
technology, and somewhat mysterious. We really don't
know how these things work. They still have biases. Some
people think it's too soon for massive adoption.
Google has had generative AI technology for years, but
out of caution was slow-walking it. And then you put
it into Bing and dared Google to do the same, despite
its reservations. Your exact words: “I want people to
know that we made Google dance.” And</strong> <a
href="https://www.wired.com/story/the-boring-future-of-generative-ai/"><strong>Google
did dance</strong></a><strong>, changing its
strategy and jumping into the market with</strong> <a
href="https://www.wired.com/story/meet-bard-googles-answer-to-chatgpt/"><strong>Bard</strong></a><strong>,
its own</strong> <a
href="https://www.wired.com/story/11-better-prompts-google-bard/"><strong>generative
AI search product</strong></a><strong>. I don't want
to say this is recklessness, but it can be argued that
your bold Bing move was a premature release that began
a desperate cycle by competitors big and small to jump
in, whether their technology was ready or not.</strong></p>
<p>The beauty of our industry at some level is that it's
not about who has capability, it's about who can
actually exercise that capability and translate it into
tangible products. If you want to have that argument,
you can go back to Xerox PARC or Microsoft Research and
say everything developed there should have been held
back. The question is, who does something useful that
actually helps the world move forward? That's what I
felt we needed to do. Who would have thought last year
that search can actually be interesting again? Google
did a fantastic job and led that industry with a solid
lock on both the product and the distribution. Google
Search was default on Android, default on iOS, default
on the biggest browser, blah, blah, blah. So I said,
“Hey, let's go innovate and change the search paradigm
so that Google's 10 blue links look like AltaVista!”</p>
</div>
<div data-journey-hook="client-content"
data-testid="BodyWrapper">
<p><strong>You're referring to the '90s search engine that
became instantly obsolete when Google out-innovated
it. That's harsh.</strong></p>
<p>At this point, when I use Bing Chat, I just can't go
back, even to original Bing. It just makes no sense. So
I'm glad now there's Bard and Bing. Let there be a real
competition, and let people enjoy the innovation.</p>
<p><strong>I imagine you must have had a savage pleasure
in finally introducing a search innovation that made
people notice Bing. I remember how frustrated you were
when you ran Bing in 2009; it seemed like you were
pursuing an unbeatable rival. With AI, are we at one
of those inflection points where the deck gets
shuffled and formerly entrenched winners become
vulnerable?</strong></p>
<p>Absolutely. In some sense, each change gets us closer
to the vision first presented in Vannevar Bush's article
[“<a
href="https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/">As
We May Think</a>,” a 1945 essay in <em>The Atlantic</em>
that envisioned a computer-driven
information nirvana]. That is the dream, right? The
thing is, how does one really create this sense of
success, which spans a long line of inflections from
Bush to <a
href="https://groups.csail.mit.edu/medg/people/psz/Licklider.html">J.
C. R. Licklider</a> [who in 1960 envisioned a
“symbiosis of humans and computers”] to Doug Engelbart
[the mouse and windows] to the Alto [Xerox PARC's
graphical interface PC], to the PC, to the internet.
It's all about saying, “Hey, can there be a more natural
interface that empowers us as humans to augment our
cognitive capability to do more things?” So yes, this is
one of those examples. Copilot is a metaphor because
that is a design choice that puts the human at the
center of it. So don't make this development about
autopilot—it's about copilot. A lot of people are
saying, “Oh my God, AI is here!” Guess what? AI is
already all around us. In fact, all behavioral targeting
uses a lot of generative AI. It's a black box where you
and I are just targets.</p>
<p><strong>It seems to me that the future will be a</strong>
<a
href="https://www.wired.com/story/ai-shouldnt-compete-with-workers-it-should-supercharge-them-turing-trap/"><strong>tug-of-war</strong></a>
<strong>between copilot and autopilot.</strong></p>
<p>The question is, how do humans control these powerful
capabilities? One approach is to get the model itself
aligned with core human values that we care about. These
are not technical problems, they're more social-cultural
considerations. The other side is design choices and
product-making with context. That means really making
sure that the context in which these models are being
deployed is aligned with safety.</p>
<figure>
<p><span>Photograph: Meron Menghistab</span></p>
</figure>
</div>
<div data-journey-hook="client-content"
data-testid="BodyWrapper">
<p><strong>Do you have patience for people who say we
should</strong> <a
href="https://www.wired.com/story/chatgpt-pause-ai-experiments-open-letter/"><strong>hit
the brakes</strong></a> <strong>on AI for six
months?</strong></p>
<p>I have all the respect and all the time for anybody who
says, “Let's be thoughtful about all the hard challenges
around alignment, and let's make sure we don't have
runaway AI.” If AI takes off, we'd better be in control.
Think back to when the steam engine was first deployed
and factories were created. If, at the same time, we had
thought about child labor and factory pollution, would
we have avoided a couple hundred years of horrible
history? So anytime we get excited about a new
technology, it's fantastic to think about the unintended
consequences. That said, at this point, instead of just
saying stop, I would say we should speed up the work
that needs to be done to create these alignments. We did
not launch Sydney with GPT-4 the first day I saw it,
because we had to do a lot of work to build a safety
harness. But we also knew we couldn't do all the
alignment in the lab. To align an AI model with the
world, you have to align it <em>in</em> the world and
not in some simulation.</p>
<p><strong>So you knew Sydney was going to</strong> <a
href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html"><strong>fall
in love</strong></a> <strong>with journalist Kevin
Roose?</strong></p>
<p>We never expected that somebody would do Jungian
analysis within 100 hours of release.</p>
<p><strong>You still haven't said whether you think
there's any chance at all that AI is going to</strong>
<a
href="https://www.wired.com/story/runaway-ai-extinction-statement/"><strong>destroy
humanity</strong></a><strong>.</strong></p>
<p>If there is going to be something that is just
completely out of control, that's a problem, and we
shouldn't allow it. It's an abdication of our own
responsibility to say this is going to just go out of
control. We can deal with powerful technology. By the
way, electricity had unintended consequences. We made
sure the electric grid was safe, we set up standards, we
have safety. Obviously with nuclear energy, we dealt
with proliferation. Somewhere in these two are good
examples on how to deal with powerful technologies.</p>
<p><strong>One huge problem of LLMs is their
hallucinations, where Sydney and other models just
make stuff up. Can this be effectively addressed?</strong></p>
<p>There is very practical stuff that reduces
hallucination. And the technology's definitely getting
better. There are going to be solutions. But sometimes
hallucination is “creativity” as well. Humans should be
able to choose when they want to use which mode.</p>
<p><strong>That would be an improvement, since right now
we don't have a choice. But let me ask about another
technology. Not that long ago you were rhapsodic about
the metaverse. In 2021 you said you couldn't overstate
how much of a breakthrough mixed reality was. But now
all we're talking about is AI. Has this boom shunted
the metaverse into some other dimension?</strong></p>
<p>I still am a believer in [virtual] presence. In 2016 I
wrote about three things I was excited about: mixed
reality, quantum, and AI. I remain excited about the
same three things. Today we are talking about AI, but I
think presence is the ultimate killer app. And then, of
course, quantum accelerates everything.</p>
<p><strong>AI is more than just a topic of discussion.
Now, you've centered Microsoft around this
transformational technology. How do you manage that?</strong></p>
</div>
<div data-journey-hook="client-content"
data-testid="BodyWrapper">
<p>One of the analogies I love to use internally is, when
we went from steam engines to electric power, you had to
rewire the factory. You couldn't just put the electric
motor where the steam engine was and leave everything
else the same. That was the difference between Stanley
Motor Carriage Company and Ford Motor Company, where
Ford was able to rewire the entire workflow. So inside
Microsoft, the means of production of software is
changing. It's a radical shift in the core workflow
inside Microsoft and how we evangelize our output—and
how it changes every school, every organization, every
household.</p>
<p><strong>How has that tool changed your job?</strong></p>
<p>A lot of knowledge work is drudgery, like email triage.
Now, I don't know how I would ever live without an AI
copilot in my Outlook. Responding to an email is not
just an English language composition, it can also be a
customer support ticket. It interrogates my customer
support system and brings back the relevant information.
This moment is like when PCs first showed up at work.
This feels like that to me, across the length and
breadth of our products.</p>
<p><strong>Microsoft has performed well during your
tenure, but do you think you'll be remembered for the
AI transformation?</strong></p>
<p>It's up to folks like you and others to say what I'll
be remembered for. But, oh God, I'm excited about this.
Microsoft is 48 years old. I don't know of many
companies that age that are relevant not because they
did something in the '80s or the '90s or the 2000s but
because they did something in the last couple of years.
As long as we do that, we have a right to exist. And
when we don't, we should not be viewed as any great
company.</p>
<hr>
<p><em>This article appears in the Jul/Aug 2023 issue.</em></p>
</div>
</div>
</div>
</div>
</div>
</body>
</html>