Episode Transcript
Stephen DUBNER: Are you having fun in your job?
Satya NADELLA: I’m loving every day of it, Stephen.
Most C.E.O.s of big technology firms are not loving every day right now. They’ve been facing all sorts of headwinds and backlash. But you can see why Satya Nadella might be the exception. He has worked at Microsoft for more than 30 years, nearly 10 as C.E.O. At the start of the personal-computer era, Bill Gates’s Microsoft was a behemoth, eager to win every competition and crush every rival. But the internet era put the company on its heels; newer firms like Google, Facebook, and Amazon were more nimble, more innovative — and maybe hungrier. Jeff Bezos of Amazon would reportedly refer to Microsoft as “a country club.”
But under Nadella, Microsoft has come roaring back. He invested heavily in what turned out to be big growth areas, like cloud computing. Microsoft has always been in the business of acquiring other companies — more than 250 over its history — but some of the biggest acquisitions have been Nadella’s: LinkedIn, Nuance Communications and, if regulators allow, the gaming firm Activision Blizzard. And there have been many more key acquisitions, like GitHub, where computer programmers store and share their code. Once again, Microsoft is a behemoth, the second-most valuable company in the world, trailing only Apple; its stock price is up nearly 50 percent since the start of 2023.
But that’s not even the reason why Microsoft has been all over the news lately. They’re in the news because of their very splashy push into artificial intelligence, in the form of ChatGPT, the next-level chatbot created by a firm called OpenAI. Microsoft has invested $13 billion in OpenAI, for a reported 49 percent stake in the company, and they quickly integrated OpenAI’s tech into many of their products — including the Microsoft search engine Bing.
For years, Bing was thought of as something between footnote and joke, running a very distant second to Google. But suddenly, Bing with ChatGPT is on the move, and Google is trying to play catchup, with its own chatbot, called Bard. So how, exactly, did Satya Nadella turn the country club into a bleeding-edge tech firm with a valuation of more than two-and-a-half trillion dollars?
NADELLA: Our mission, Stephen, is to empower every person and every organization on the planet to achieve more. And so as the world around us achieves more, we make money.
DUBNER: I like that. I mean, I assume you actually believe that. You’re not just saying that, are you?
NADELLA: No, hundred percent. You have to have a business model that is aligned with the world around you doing well.
Today on Freakonomics Radio: we speak with Satya Nadella about the blessings and perils of A.I.; we talk about Google and Heidegger; about living with pain; and about Microsoft’s succession plan.
NADELLA: We will take succession seriously.
* * *
I spoke with Satya Nadella one afternoon earlier this month. I was in New York, and he was in his office at Microsoft’s headquarters near Seattle.
NADELLA: It is fantastic to have a conversation again.
We first interviewed Nadella in 2017, for a series called “The Secret Life of a C.E.O.” Even then, he was extremely excited about A.I. At the time, Microsoft was high on a mixed-reality headset called the HoloLens.
NADELLA: Think about it: your field of view, right — what you see — is a blend of the analog and digital. The ability to blend analog and digital is what we describe as mixed reality. There are times when it will be fully immersive. That’s called virtual reality. Sometimes when you can see both the real world and the artificial world. That’s what is augmented reality. But to me, that’s just a dial that you set. I mean, just imagine if your hologram was right here interviewing me, as opposed to just on the phone.
Back then, Nadella cautioned there was still a lot of work to do.
NADELLA: Ultimately, I believe in order to bring about some of these magical experiences in A.I. capability, we will have to break free of some of the limits we’re hitting of physics, really.
The limits of physics haven’t been broken yet. And the HoloLens has not been the hit that Microsoft was hoping for. But Nadella’s devotion to A.I. is paying off, big-time, in the form of ChatGPT, which quickly captured the imagination of millions. GPT stands for Generative Pre-trained Transformer, and ChatGPT is what is known as a large language model, or L.L.M. It takes in vast amounts of data from all over the internet so it can “learn” how to read and answer questions very much like a human — but a really, really smart human, or perhaps a million smart humans. And the more we ask ChatGPT to answer questions or summarize arguments or plan itineraries, the more finely tuned it gets — which proves, at the very least, that we humans are still good for something. The current iteration is called GPT-4. And what’s the relationship between ChatGPT and Bing?
NADELLA: Basically, Bing is part of ChatGPT and Chat is part of Bing, so in either way it doesn’t matter which entry point you come to, you will have Bing.
DUBNER: So Satya, I asked ChatGPT for some help in this interview. I said I’m a journalist interviewing Satya Nadella and I want to get candid and forthright answers. You know, I just didn’t want corporate boilerplate. And what Chat told me was to do my homework — which I did; I usually do that. To ask open-ended questions, which I typically try to do. But one that hung me up a little bit was, I need to build rapport. Now, we have a relatively short time together today. Are there any shortcuts to building rapport?
NADELLA: Yeah, what’s your knowledge of cricket?
DUBNER: Oh, I blew it. I knew that you’re a big cricketer. You played as a kid. I knew you cared more about cricket than schoolwork as a kid. But no, I blew it.
NADELLA: That’s too bad. Because there’s a World Test championship starting tomorrow. I was going to ask you about it, but, hey, look, your love for economics builds me an instant rapport.
DUBNER: I’d like you to walk us through Microsoft’s decision to bet big on OpenAI, the firm behind ChatGPT. There was an early investment of $1 billion, but then much, much more since then. I’ve read that you were pretty upset when the Microsoft Research Team came to you with their findings about OpenAI’s L.L.M., large language model. They said that they were blown away at how good it was and that it had surpassed Microsoft’s internal A.I. research project with a much smaller research team in much less time. Let’s start there. I’d like you to describe that meeting. Tell me if what I’ve read, first of all, is true. Were you surprised and upset with your internal A.I. development?
NADELLA: Yeah, I think that this was all very recent. This is after GPT-4 was very much there, and then that was just mostly me pushing some of our teams as to, “Hey, what did we miss? You got to learn…” You know, there are a lot of people at Microsoft who got it and did a great job of, for example, betting on OpenAI and partnering with OpenAI. And to me four years ago, that was the idea. And then as we went down that journey, I started saying, “Okay, let’s apply these models for product-building.” Models are not products. Models can be part of products. The first real product effort which we started was GitHub Copilot. And, quite frankly, the first attempts on GitHub Copilot were hard because the model was not that capable. But it is only once we got to GPT-3 when it started to learn to code that we said, “Oh wow, this emergent phenomena, the scaling effects of these transformer models are really showing promise.”
Nadella may be underplaying the tension between Microsoft and OpenAI — at least according to a recent Wall Street Journal article called “The Awkward Partnership Leading the A.I. Boom.” It describes “conflict and confusion behind the scenes.” And, because the OpenAI deal is a partnership and not an acquisition, the Journal piece makes the argument that Microsoft has “influence without control,” as OpenAI is allowed to partner with Microsoft rivals. Still, you get the sense that Nadella is excited about the competitive momentum ChatGPT has given Microsoft — as you can tell from this next part of our conversation:
DUBNER: Google still handles about 90 percent of online global search activity. An A.I. search-enabled model is a different kind of search, plainly, than what Google has been doing. Google’s trying to catch up to you now. How do you see market share in search playing out via Bing, via ChatGPT, in the next five and ten years? And I’m curious to know how significant that might be to the Microsoft business plan overall.
NADELLA: This is a very general purpose technology, right? So beyond the specific use cases of Bing Chat or ChatGPT, what we have are reasoning engines that will be part of every product. In our case, they’re part of Bing in ChatGPT, they’re part of Microsoft 365, they’re part of Dynamics 365. And so in that context, I’m very excited about what it means for search. After all, Google, as you said, rightfully, they’re dominant in search by a country mile, and we’ve hung in there over the decade. We’ve been at it to sort of say, “Hey, look, our time will come where there will be a real inflection point in how search will change.” We welcome Bing versus Bard as competition. It’ll be like anything else, which is so dominant in terms of share and also so dominant in terms of user habit. We also know that defaults matter, and obviously Google controls the default on Android, default on iOS, default on Chrome. And so they have a great structural position. But at the same time, whenever there is a change in the game, it is all up for grabs again to some degree, and I know it’ll come down to users and user choice. We finally have a competitive angle here, and so we’re going to push it super-hard.
DUBNER: What are some of your favorite uses, personal or professional, for ChatGPT?
NADELLA: The thing that I’ve talked about which I love is the cross-lingual understanding, that’s kind of my term for it. You can go from, you know, Hindi to English or English to Arabic or what have you, and they’ve done a good job. If you take any poetry in any one language, and translate it into another language — in fact, if you even do multiple languages. So, my favorite query was, I said I always as a kid growing up in Hyderabad, India, I said I wanted to read Rumi translated into Urdu and translated into English, and in one shot it does it. But the most interesting thing about that is it captures the depth of poetry. So, it finds somehow in that latent space meaning that’s beyond just the words and their translation. That I find is just phenomenal.
DUBNER: This amazes me. You’re saying — you, the C.E.O. of a big tech firm — is saying that one of the highest callings of ChatGPT or a large language model, is the translation of poetry. I love it. I mean, I know you love poetry, but what excites you more about that than more typical business, societal, political, economic applications?
NADELLA: I mean, I love a lot of things, you know? I remember my father trying to read Heidegger in his forties and struggling with it. And I’ve attempted a thousand times and failed. And, you know, he’s written this essay somebody pointed me to. Somebody said, “Oh, you got to read that because after all, there’s a lot of talk about A.I. and what it means to humanity.” And I said, let me read it. But I must say, you know, going and asking ChatGPT or Bing Chat to summarize Heidegger is the best way to read Heidegger.
According to ChatGPT, Heidegger himself would not have been a fan of A.I. “In Heidegger’s view,” Chat tells us, “technology, including A.I., can contribute to what he called the ‘forgetting of being.’” And Heidegger is hardly alone. After all, philosophy and poetry will likely not be the main use cases for A.I. So, coming up, we talk about potential downsides of an A.I. revolution, and the degree to which Microsoft cares.
NADELLA: I want all 200,000 people in Microsoft working on products to think of A.I. safety.
I’m Stephen Dubner, this is Freakonomics Radio, we’ll be right back.
* * *
Last month, a group of leaders from across the tech industry issued a terse, one-sentence warning: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The extinction they’re talking about is human extinction. Among the signatories were Sam Altman, the C.E.O. of OpenAI, and two senior Microsoft executives. Altman, Satya Nadella, and other executives from firms working on A.I. recently met with President Biden to talk about how the new technology should be regulated. I asked Nadella where he stands on that issue.
NADELLA: I think the fact that we’re having the conversation simultaneously about both the potential good that can come from this technology in terms of economic growth that is more equitable, and what have you, and at the same time that we are having the conversation on all the risks both here and now and the future risk, I think is a super healthy thing. Somebody gave me this analogy, which I love: just imagine when the steam engine first came out. If we had a conversation both about all the things that the steam engine can do for the world and the industrial production and the Industrial Revolution and how it would change livelihoods and at the same time we were talking about pollution and factory filth and child labor, we would have avoided more than 100 years plus of terrible history. So then it’s best to be grounded on what’s the risk framework look like. If A.I. is used to create more disinformation, that’s a problem for our democracy and democratic institutions. Second, if A.I. is being used to create cyberattacks or bioterrorism attacks, that’s a risk. If there is real-world harms that are on bias, that’s the risk. Or employment displacement, that’s a risk. So let’s just take those four. In fact, those were the four even the White House was upfront on and saying, “Hey, look, how do we really then have real answers to all these four risks?” So in terms of, for example, take disinformation, can we have techniques around watermarking that help verify where did the content come from? When it comes to cyber, what can we do to ensure that there is some regime around how these frontier models are being developed? Maybe there is licensing, I don’t know. This is for regulators to decide.
Microsoft itself has been working on provisions to best govern A.I. For instance, safety brakes for A.I. systems that control infrastructure like electricity or transportation; also, a certain level of transparency so that academic researchers can study A.I. systems. But what about the big question: the doomsday scenario, wherein an A.I. system gets beyond the control of its human inventors?
NADELLA: Essentially, the biggest unsolved problem is how do you ensure both at sort of a scientific understanding level and then the practical engineering level that you can make sure that the A.I. never goes out of control? And that’s where I think there needs to be a CERN-like project where both the academics, along with corporations and governments, all come together to perhaps solve that alignment problem and accelerate the solution to the alignment problem.
DUBNER: But even a CERN-like project after the fact, once it’s been made available to the world, especially without watermarks and so on, does it seem a little backwards? Do you ever think that your excitement over the technology led you and others to release it publicly too early?
NADELLA: No, I actually think — first of all, we are in very early days, and there has been a lot of work. See, there’s no way you can do all of this just as a research project. And we spent a lot of time. In fact, if anything, that — for example, all the work we did in launching Bing Chat and the lessons learned in launching Bing Chat is now all available as a safety service — which, by the way, can be used with any open-source model. So that’s, I think, how the industry and the ecosystem gets better at A.I. safety. But at any point in time, anyone who’s a responsible actor does need to think about everything that they can do for safety. In fact, my sort of mantra internally is the best feature of A.I. is A.I. safety.
DUBNER: I did read, though, Satya, that as part of a broader — a much broader layoff — earlier this year, that Microsoft laid off its entire ethics and society team, which presumably would help build these various guardrails for A.I. From the outside, that doesn’t look good. Can you explain that?
NADELLA: Yeah, I saw that article too. At the same time, I saw all the headcount that was increasing at Microsoft, because — it’s kind of like saying, “Hey, should we have a test organization that is somewhere on the side?” I think the point is that the work that A.I. safety teams are doing has now become such a mainstream, critical part of all product-making that we’ve actually, if anything, doubled down on it. So I’m sure there was some amount of reorganization, and any reorganization nowadays seems to get written about, and that’s fantastic. We love that. But to me, A.I. safety is like saying “performance” or “quality” of any software project. You can’t separate it out. I want all 200,000 people in Microsoft working on products to think of A.I. safety.
One particular concern about the future of A.I. is how intensely concentrated the technology is, within the walls of a relatively few firms and institutions. The economists Daron Acemoglu and Simon Johnson recently published a book on this theme called Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. And here’s what they wrote in a recent New York Times op-ed: “Tech giants Microsoft and Alphabet/Google have seized a large lead in shaping our potentially A.I.-dominated future. This is not good news. History has shown us that when the distribution of information is left in the hands of a few, the result is political and economic oppression. Without intervention, this history will repeat itself.” Their piece was called “Big Tech Is Bad. Big A.I. Will Be Worse.” You could argue we are fortunate to have a C.E.O. as measured as Satya Nadella leading the way at Microsoft. But of course he won’t be there forever.
* * *
Satya Nadella grew up in India, where his father was a Marxist civil servant. Satya really did want to be a professional cricketer; and he wasn’t a great student. But he did go on to study electrical engineering at university, then came to the States for a master’s degree in computer science. He worked for a couple years at Sun Microsystems, then joined Microsoft in 1992. He also got an M.B.A. from the University of Chicago. In his early years at Microsoft, he worked on the operating system Windows NT. Windows products were how Microsoft made its big money, and the majority of the world’s desktop computers still run on Windows. But Microsoft famously missed the shift to mobile computing, and from 2000 to 2013, under C.E.O. Steve Ballmer, they saw their stock price fall by more than 40 percent.
Jeffrey SONNENFELD: Satya Nadella inherited a lot of problems.
That is Jeffrey Sonnenfeld. Officially, he’s a professor of management studies at Yale, and he runs the Yale Chief Executive Leadership Institute. Unofficially, he’s known as one of the world’s leading authorities on C.E.O.s. Last year, he published a list of the best American C.E.O.s; Satya Nadella was number one.
SONNENFELD: Rather than try to come up with long lists of ways of vilifying predecessors, what Nadella did is he was able to be on a frontier at this exact same moment as the early investors in OpenAI, as well as in reinventing their own artificial intelligence opportunities. So that Bing, surprise to all, might soar past everybody. He got people excited about building a new future, investing $25 billion in R&D each year. That’s perhaps twice as much as the average pharma company invests. And that’s amazing for an I.T. company to do that.
A big part of Nadella’s success came from expanding Microsoft’s footprint in cloud computing, with their Azure platform.
SONNENFELD: Their footprint across the board in enterprise software was flourishing, where he knew how to invest in Azure and a commercial cloud business where his revenues grew 42 percent over the past year.
I asked Nadella himself if he’d been surprised by how valuable cloud computing has become for Microsoft.
NADELLA: Both surprised and not surprised in the following sense. We were leaders in client server. But while we were leaders in client server — you know, Oracle did well, IBM did well. And so in fact, it shaped even my thinking of how the cloud may sort of emerge, which is that it’ll have a similar structure. There will be at least two to three players who will be at scale, and there will still be many other smaller niche players perhaps. So in that sense, it is not that surprising. What has been surprising is how big and expansive the market is, right? Let’s think about it. Like, yeah, we sold a few servers in India, but oh my God, did I think that cloud computing in India would be this big? No. The market is much bigger than I originally thought.
DUBNER: I have a fairly long and pretentious question to ask you. There are economists and philosophers and psychologists who argue that most of us still operate under a scarcity mindset that might have been appropriate on the savanna a million years ago. But now we live in an era of abundance. So, you know, rather than competing for scarce resources, we should collaborate more to grow the overall resource pool. From what I know about your time as C.E.O. at Microsoft, it seems you have embraced the collaborative model over the competitive model — one example being how nicely Microsoft now plays with Apple devices, whereas the previous administration didn’t even want Microsoft employees owning Apple devices. So I’d like to hear your thoughts generally on this idea of collaboration versus competition, and scarcity versus abundance.
NADELLA: That’s a broad — that’s a very deep question. I mean, at the macro level, Stephen, I actually do believe that the best technique humanity has come up with to create, I would say, economic growth and growth in our well-being as humanity is through cooperation. So let’s start there, right? So the more countries cooperate with countries, people cooperate with people, corporations cooperate with other corporations, the better off we are. And then at a micro level, I think you want to be very careful in how you think about zero-sum games, right? I think we overstate the number of zero-sum games that we play. In many cases, I think growing your overall share of the pie is probably even more possible when the pie itself is becoming bigger. So I’ve always approached it that way. That’s kind of how I grew up, actually, at Microsoft. And so, you know, all of what we have done in the last, whatever, close to ten years has been to look at the opportunity set first as something that expands the opportunity for all players. And in there being competitive.
DUBNER: Were there people within the firm, though, who said or felt, “Wait a minute, I know you’re the new C.E.O. and I know you have a new way of doing things, but Google is our enemy. Apple is our enemy. We can’t do that.” Did you have pushback?
NADELLA: Yeah. I mean, look, it’s a very fierce, competitive industry. And even if we didn’t think of them as our competitors, our competitors probably think of us as competitors. But I think at the end of the day, I think it helps to step back and say, you know, it doesn’t mean that you back away from some real zero-sum competitive battles, because after all that’s kind of what fosters innovation and that’s what creates consumer surplus and opportunity. And so that’s all fine. But at the same time, leaders in positions like mine have to also be questioning what’s the way to create economic opportunity. And sometimes, you know, construing it as zero sum is probably the right approach, but sometimes it’s not.
DUBNER: So, Microsoft is a huge company and huge companies get bigger by acquisition typically. Let’s go through a couple. I know you tried a few times to buy Zoom. You haven’t succeeded yet. You’re still in the middle of trying to acquire Activision. That’s tied up in the U.S., at least, in an F.T.C. lawsuit. A few years ago, I read, you tried to buy TikTok. You called those negotiations “the strangest thing I’ve ever worked on.” What was so strange about that?
NADELLA: Look, at least let me talk to all the acquisitions that we did that actually have succeeded, and we feel thrilled about it, right? Whether it’s LinkedIn or GitHub or Nuance or ZeniMax or Minecraft — these are all things that we bought. I feel that these properties are better off after we acquired them because we were able to innovate, and then make sure that we stayed true to the core mission of those products, and those customers who depended on those products.
DUBNER: What about TikTok, though? What was so strange about that negotiation or those conversations?
NADELLA: Everything. First of all, I mean, just to be, you know, straight about it, TikTok came to us because they at that time sort of said, “Hey, we need some help in thinking about our structure.” And given what at that time at least was perceived by them as some kind of a restructuring that the United States government was asking —
DUBNER: They needed a U.S. partner, in other words, yes?
NADELLA: Yeah, and so at that point we said, look, if that is the case that you want to separate out your U.S. operations or worldwide operations, we would be interested and we engaged in a dialogue. And it was just, let’s just say, an interesting summer that I spent on it.
DUBNER: Okay. So not long ago, Satya, you became the chair of the Microsoft board in addition to C.E.O. Now, a lot of corporate governance people hate the idea of one person having both jobs. I asked ChatGPT about it. What’s the downside? One potential conflict of interest, ChatGPT told me, is the roles of C.E.O. and board chair can sometimes be at odds. The C.E.O. is typically focused on the day-to-day, yadda, yadda, but there can be potential conflicts of interest. Can you give an example of one conflict that you’ve had, or maybe you haven’t, which would give the corp.-governance people even more headache?
NADELLA: The reality is we have a lead independent director, a fantastic lead independent director in Sandi Peterson — I mean, she has the ultimate responsibility of hiring and firing me. That said, I think the chair role, as I see it, is more about me being able to sort of, you know, having been close to ten years in my role, to use my knowledge of what it is that Microsoft’s getting done in the short and the long run, to be able to coordinate the board agendas and make sure that the topics that we are discussing are most helpful for both the board and the management team. And so it’s kind of as much about program managing the board versus being responsible for the governance of the board. And the governance of the board is definitely with the independent directors.
DUBNER: Can you name a time when the board voted down a big idea of yours?
NADELLA: I don’t know there is a particular vote that they voted me down, but I take all of the board feedback on any idea that I or my management team has. We have a good format, where every time we get together we kind of do a left-to-right, I’ll call it, overview of our business, and we have a written doc, which basically is a living document which captures our strategy and performance. And having that rich discussion where you can benefit from the perspective of the board and then change course based on that perspective is something that I look forward to and I welcome.
DUBNER: Now, the last time we spoke, which was several years ago, you talked about how the birth of your son Zain changed you a great deal. He was born with cerebral palsy. And you said that empathy didn’t come naturally to you — certainly not compared to your wife — but that over time, being a parent to a child with a severe handicap was a powerful experience for you on many levels. I was so sorry to read that Zain died not long ago in just his mid-twenties. So my deepest condolences on that, Satya. I’m also curious to know if or how his death has changed you as well.
NADELLA: No, I appreciate that, Stephen. It’s probably — it’s hard, Stephen, for me to even reflect on it that much. It’s been, you know, for both my wife and me, in some sense, he was the one sort of constant that gave us a lot of purpose, I would say, in his short life. And so I think — you know, I think we are still getting through it. And it’ll, I think, take time. But I’ll just say the thing that I perhaps have been most struck by is what an unbelievable support system that got built around us in even the local community around Seattle. At his memorial, I look back at it, all the people who came, right, all the therapists, the doctors, the friends, the family, the colleagues at work. I even was thinking about it, right, after all, Zain was born when I was working at Microsoft, and he passed when I was working at Microsoft, and everything, even from the benefits programs of Microsoft to the managers who gave me the flexibility. I think that sort of was a big reminder to me that all of us have things happen in our lives, sometimes things like pandemics or the passing of a loved one or the health issues of elderly parents, and we get by because of the kindness of people around us and the support of communities around us. And so if anything, both my wife and I have been super, super thankful to all the people and the institutions that were very much part of his life and thereby a part of our lives.
DUBNER: You are a young man still, 55 years old, but you’ve been at Microsoft a long time now, been C.E.O. almost ten years. I’m curious about a succession plan, especially — I don’t know if you watched the HBO show Succession. Do you watch Succession, Satya? Or no?
NADELLA: I watched, I think, the first season a bit, and I was never able to get back to it.
DUBNER: Okay. So I’ll give you a small spoiler. It doesn’t go well. And their succession plan turns out to be — I think the technical term is “total s*** show.” Okay? So I am curious if your succession plan will be somewhat more orderly than the succession plan on Succession.
NADELLA: Obviously, the next C.E.O. of Microsoft is going to be appointed by the lead independent directors of Microsoft, and not by me. But to your point, it’s a board topic, and we have a real update on it every year, as it should be. And I take that as a serious job of mine. Like, one of the things that I always say is, long after I’m gone from Microsoft, if Microsoft’s doing well, then maybe I did a decent job. Because I always think about the strength of the institution long after the person is gone is the only way to measure the leader. I’m very, very suspicious of people who come in and say, “Before me, it was horrible. And during my time it was great. And after me, it is horrible.” I mean, that’s — first of all, it means you didn’t do anything to build institutional strength. So, yes, I take that job that I have in terms of surfacing the talent and having the conversation with the board of directors seriously. And, you know, when the time comes, I’m pretty positive that they will have a lot of candidates internally, and they’ll look outside as well. And so, yes, we will take succession seriously.
That was Satya Nadella, C.E.O. of Microsoft. His intelligence, I think you’ll agree, doesn’t feel artificial at all.
* * *
Freakonomics Radio is produced by Stitcher and Renbud Radio. This episode was produced by Zack Lapinski and mixed by Greg Rippin, with help from Jeremy Johnston. Our staff also includes Alina Kulman, Daria Klenert, Eleanor Osborne, Elsa Hernandez, Emma Tyrrell, Gabriel Roth, Jasmin Klinger, Julie Kanfer, Katherine Moncure, Lyric Bowditch, Morgan Levey, Neal Carruth, Rebecca Lee Douglas, Ryan Kelley, and Sarah Lilley. Our theme song is “Mr. Fortune,” by the Hitchhikers; all the other music was composed by Luis Guerra.
Sources
- Satya Nadella, chairman and C.E.O. of Microsoft.
- Jeffrey Sonnenfeld, professor of leadership studies and founding president of the Chief Executive Leadership Institute at Yale University.
Resources
- “Big Tech Is Bad. Big A.I. Will Be Worse,” by Daron Acemoglu and Simon Johnson (The New York Times, 2023).
- Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, by Daron Acemoglu and Simon Johnson (2023).
- “Statement on A.I. Risk,” by the Center for A.I. Safety (2023).
- “The Awkward Partnership Leading the A.I. Boom,” by Tom Dotan and Deepa Seetharaman (The Wall Street Journal, 2023).
- “Microsoft Lays Off Team That Taught Employees How to Make A.I. Tools Responsibly,” by Zoe Schiffer and Casey Newton (The Verge, 2023).
- “How Microsoft Became Innovative Again,” by Behnam Tabrizi (Harvard Business Review, 2023).
- “A Tech Race Begins as Microsoft Adds A.I. to Its Search Engine,” by Cade Metz and Karen Weise (The New York Times, 2023).
- “Biggest C.E.O. Successes and Setbacks: 2022’s Triumphs and 2023’s Challenge,” by Jeffrey Sonnenfeld and Steven Tian (Fortune, 2022).
- “The Question Concerning Technology,” by Martin Heidegger (1954).
Extras
- “The Secret Life of a C.E.O.,” series by Freakonomics Radio (2018-23).
- “Will A.I. Make Us Smarter?” by People I (Mostly) Admire (2023).
- “Max Tegmark on Why Superhuman Artificial Intelligence Won’t be Our Slave,” by People I (Mostly) Admire (2021).
- “Extra: Satya Nadella Full Interview,” by Freakonomics Radio (2018).