Episode Transcript

If you’re a regular listener to this podcast, then you probably know that I’m deeply frustrated by our current educational system, especially when it comes to mathematics. So I’m very excited about today’s conversation with Conrad Wolfram, a mathematician and entrepreneur who, of anyone I’ve ever met, has the greatest insight into rethinking the way we should teach math and other subjects as well.

WOLFRAM: We’re trying to mimic real life ‘cause part of what I think’s really important in education is get experience, accelerated experience, for what you’re going to face in real life. 

Welcome to People I (Mostly) Admire, with Steve Levitt.

Conrad Wolfram isn’t just a complainer. He’s a doer. He’s built a radical new math curriculum that bears almost no resemblance to what you and I experienced in high school. And amazingly, he actually convinced the country of Estonia to adopt major pieces of his curriculum.

*      *      *

LEVITT: So I have to laugh when I think back to the first time that we met, Conrad. It was at a conference devoted to the teaching of math. And it was my first conference on the topic. And I was quite surprised by how emotional people were about math and how it was taught. The question of whether students should be, say, put into different tracks according to their current level of math understanding — that would lead to heated debates. And everyone was suggesting their own favorite tweak to the current way that we teach math, mostly quite incremental. And everyone else would complain, “No, that’s a terrible change to make.” Okay, then it’s your time to present your ideas. You don’t look like a revolutionary. And with your British accent, you don’t really sound like a revolutionary. But would you agree with my characterization that you are roughly the most radical person talking about the teaching of math today?

WOLFRAM: Yes. And it’s quite shocking to me, actually, that it’s so shocking. Basically, in the end, I’m saying: “New machinery came along and the real world fundamentally changed for this reason, but we forgot to change the subject that’s mainstream to get people educated for this.” Which doesn’t seem that revolutionary.

LEVITT: And when you’re saying new machinery, you’re talking about computers?

WOLFRAM: Yeah. And the hint is in the name, right? They compute things for you, which they do much better than people at this point. So let’s use them and step up a gear to the next level for humans. I’ve looked around the world since I’ve been really focused on this, 10 or 15 years, and I can’t find anybody else who’s thinking this. They’re thinking about how do you take the existing subject matter and change the pedagogy to get people better grades in that subject matter. But they’re not thinking, hey, maybe the subject matter, the mainstream thing we’ve built this all on, isn’t right or isn’t optimized.

LEVITT: So just to reiterate, your most basic premise is that for 100, 150 years, we’ve more or less been teaching the same thing in more or less the same way — as if the world hasn’t changed. But starting in the ‘60s and the ‘70s, computers came along. Everything else in the world changed dramatically, everything except teaching math. Is that a fair assessment of how you think about things?

WOLFRAM: Absolutely. Math has had a funny place in mainstream education for a long, long time, certainly since the 19th century. It, along with — certainly in European countries — Latin and the classics, was the intellectual bedrock; if you wanted an intellectual academic education, you had to get your math straight. And you had to get your Latin straight. Calculating things was important at some basic level, and still is for many things. But the thing that changed is, as you say, we got these machines, and these machines are arguably the most amazing turnaround of any endeavor, in the sense that they’ve taken the hardest aspect of mathematics, which is this process by which you answer questions, and they’ve mechanized the hardest bit of it, which is the actual calculating. They’ve mechanized it better than anybody, I think, could ever have imagined — 50 or 100 years ago, anyway. We’ve had this fundamental shift in the outside world, which is powering everything forward, and in education, the great preparation for this, the thing that’s held up as the central subject to help people power through the A.I. age — we left that standing.

LEVITT: As a society, we have been brainwashed into equating math with computation, right? Well, what is math other than somebody gives me two numbers and I multiply or divide them? Or I’m given by the teacher an algebraic equation and I simplify it. To everyday people, that’s what math is. But you seem to think of it as a process for answering questions that matter. Could you walk us through the four steps that you lay out in your book of the process you call math?

WOLFRAM: I think of math as a subject for everyone. It’s a way of getting answers to questions, possibly the most successful process of problem solving that humans have ever invented. So here’s roughly how it works: You start with a problem that you’re trying to define. A very simple problem, in early primary, might be: “I have 12 bits of candy,” or sweets as we’d call them in Britain, “and I have three friends, and I want to fairly distribute those 12 candies amongst the three friends.” So that’s the problem I’ve defined. That’s step one.

LEVITT: I think it’s even more important than the way you laid it out, because in the example you gave, the students are told what the point is. The point is I have 12, I want to distribute fairly. It’s not easy, often, in the real world to define your problem. You’re faced with an infinite set of problems you could try to answer. Understanding what the real question is — what to measure — that’s hard. In the educational system, we punt on that. We say, that’s not the student’s problem. That’s someone else’s problem. But the whole point of education is to prepare people to be those other people, those grownups who actually have to define that problem.

WOLFRAM: It’s a catastrophic problem. I’d go even further than you have. So one of the problems that’s happening at the moment is we apply math, computation, data science — all of these I put in the same bucket — to solve these problems, but we don’t define the problem we’re trying to solve accurately to what we actually want. And so in the end what happens is we solve, often, a simplistic problem that doesn’t relate to the real world that we wanted to get the answer for. And so we come out with an answer that was technically correct for the definition of the problem, but it was the wrong definition. One of the typical ways this happens is you get very simplistic metrics for measuring very complex things. And as long as somehow the metrics are hit, people say, “Oh, that’s wonderful.” Somehow this person performed, but it isn’t performing. And I think this is causing pretty catastrophic errors across society.

LEVITT: Do you have a favorite example of how misdefinition has led people astray?

WOLFRAM: A good example, perhaps relevant to this conversation, is grades in exams. So if you take a math exam, I know the British ones better, we call them A-levels here, pre-college, and as long as you get an A star in your math exam, somehow you’re good to go. But those exams have become more and more simplistic in what they’re measuring and more and more out of line with what we need in the real world. As we have more ability to compute complex problems and think about harder things and get computers to help us, most of the problems humans are faced with are more open-ended. And I’ve often joked, the number of times I’ve been faced with a problem as C.E.O. of a company, where it’s like, “Hey, you’ve got five answers to choose between. One of them you know is correct, and the other four you know are wrong;” or, “The answer is yes or no;” or, “The answer is three” — is pretty limited. Normally it’s a much more messy problem. And so one of the things that’s gone wrong is, because we want to make exams easy to mark and metrics easy to compute, we’ve ended up with simplistic questions to achieve that, which don’t match the real world.

LEVITT: So the first step is you define the problem that you care about. And your second step is to abstract. What do you do in that step?

WOLFRAM: You turn the human language, English or whatever, into this magical abstract way of representing the problem. It means that you can take totally disparate problems in the world and you can turn them into the same piece of logical mathematics. What you’re doing in step two is you’re trying to set up the question in this abstract form. You’re not trying to get the answer yet; you’re just trying to set up the question. And this uses quite a lot of skill, because there are all these tools, like division and multiplication and matrices and machine learning, that you could use to set up an abstraction that you think you’ll be able to get a good answer from. So this is a tricky step. This is a step where you potentially need imagination in what to do. So to take the example you were talking about earlier, where we have candy to distribute amongst our friends: in the end, that’s a division abstraction. Twelve divided by three is basically what you’re probably trying to set up, at least as the first round of what you do here. One thing to point out about current math in schools is that you’re almost never doing either step one or step two. You’re given something to then calculate. You’re given a division sum. You’re given an equation. You’re given a multiplication to do.

LEVITT: You could even improve our system marginally if you just had students working on a large range of problems at the same time. So they at least had to look at a problem and do some thinking about which of, say, 50 tools they need to apply to this problem. But it’s not the way we teach. We teach, “Okay, now we’re going to learn how to solve quadratics. And you’re going to use this tool to do it.”

WOLFRAM: Fifty times over. Yeah.

LEVITT: If you ever see a quadratic five years later, you’d never remember which tool, because you didn’t have any practice trying to pull the tool out of the tool kit.

WOLFRAM: And another thing to point out is computers have meant that we have a huge tool set that we could abstract to. It’s a whole vocabulary. And at school right now, what we’re doing is we’re saying, “You’re only allowed to use the tools that you can calculate by hand.” So that’s like a tiny fraction of the possible tool sets to abstract to. Actually, what we’re training people to do is to take a very tiny tool set and only work with that tool set. So it’s, like, completely wrong.

LEVITT: Yeah, the ones the humans can calculate. The computer could do it, but—

WOLFRAM: Correct. And some of the techniques are only post-computer. So machine learning, an obvious one to talk about, is everywhere right now. Where is it in our math curricula? I’ve asked many people: Which math curricula around the world have machine learning in them? Basically zilch.

LEVITT: You focused on the choice of the tool, so mapping from words into a mathematical tool. Within economics, there’s an enormous parallel to what you’re talking about. But in economics, we have a second problem, which is: The world is really complicated and the questions we’re trying to answer are inherently complex. And so the second stage of abstraction is not just picking a tool. It’s also deciding what elements of the problem are worth keeping in a model and which ones to discard. And so this second stage is almost everything in economics because the choices you make here influence very dramatically the kinds of answers you come up with. Do you see that as less of a problem in mathematics than economics because they are better defined?

WOLFRAM: No, because part of the problem there is I see economics as an example application of mathematics or computation, as do I see modeling the pandemic, as do I see any of these things where you’re deploying basically computational thinking on the problem. And essentially what we want mathematics and computation for is exactly those complex, messy problems. How we abstract, what we abstract to, what we think is possible, determines to a large extent the kinds of answers we can get. And there’s another part to the whole thing, which was that — we’ll come on to step three at some point — but the idea of calculating was incredibly expensive up until computers. It was the most expensive part of this four-step process. And part of what you were trying to do in step two was minimize the cost of step three because only humans could do it, they could do a very limited amount of it, and it was very slow. But that’s completely changed. So now, when you abstract, you might abstract in multiple ways at once. You might say, well, actually, I’m going to compare the outcome of four or five different approaches, all in parallel. Because when I get to the point of trying to work out the answer from them, often that’s cheap because I have a machine doing it. So the whole approach can be potentially totally different and allows us to, in a sense, throw in much messier types of problems and see what happens without necessarily knowing the techniques in detail that we need to get the answers.

LEVITT: So step three — so we’ve defined a problem. We’ve abstracted it and translated it into mathematical lingo. Step three is computing the answer. And we don’t need to talk too much about that, right? Anything you want to say about step three?

WOLFRAM: Well, I mean, one of the difficulties of our current age that we’re entering, the A.I. age as I call it, is knowing what you have to know about step three. So you want to leave it to a machine. That’s what I’m advocating. But obviously the machine can produce garbage. Now that might be because you’ve put garbage in step two or one or whatever. Or it might be because it screws up in some way. So there’s a question of: Do you have a sense of when machine learning does a good job? When you start out doing something fairly new, you need to know less about the insides because you’re not typically at the limits. I often compare this to driving. If you’re learning how to drive a car, a modern car, a lot of automation built in, you need to know stuff about the road, but not much about how the car works, honestly. If you become a NASCAR or Formula One driver or something, you end up knowing a lot more about the detailed insides, because you’re much closer to the limit, and the detailed mechanics of how it does it become more important. But yes, as you say, mostly you don’t need to know a lot about its insides.

LEVITT: And then the fourth and final step is what you called interpretation. Describe that.

WOLFRAM: So it’s the reverse, sort of, of step two, which is that you’re de-abstracting. You’re saying, “X equals three” — that was the answer we got out of our calculation in step three. Actually, X equals four, I think, in this case, because I had three friends. Did that answer the question I posed reasonably, sensibly? Or have I made some slip in my thinking somewhere along the way, or in the calculation? And so I would say there’s a step four A and B. A is: right, okay, we got the answer, we de-abstracted it to say, “Four meant four candies per person.” And B is the question: Is that reasonable? Well, suppose I’d started with 11, not 12. Then we’ve got some fraction as the answer here. Now, would that have been reasonable? Well, it depends what sort of candies you’ve got. Can you divide them? Is it reasonable to divide them? So there’s this kind of interpretation. And what that may lead to is that you go around the whole four steps again. So you say, “The answer wasn’t really good enough, or I made a mis-assumption or something, so I’m actually going to run the process again and hone it to a point where I believe I’ve got the answer sufficiently good and I feel like I’ve got my assumptions sufficiently accurate to what I wanted.” And so we sometimes paint this picture of a kind of helix, where you’re going around these steps multiple times until you feel it’s a good enough answer.
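The four steps on the candy example can be sketched in a few lines of code. The function and labels below are my own illustration of the process as described, not part of Wolfram’s curriculum:

```python
from fractions import Fraction

def share_candy(candies, friends):
    """Walk the candy problem through the four steps."""
    # Step 1 (define): how do I split `candies` fairly among `friends`?
    # Step 2 (abstract): "fair sharing" becomes a division, candies / friends.
    # Step 3 (compute): let the machine do the calculating, keeping an exact fraction.
    share = Fraction(candies, friends)
    # Step 4 (interpret): a whole number is fine; a fraction forces the question
    # of whether the candies can actually be split, and maybe another loop
    # around the helix with a redefined problem.
    is_whole = share.denominator == 1
    return share, is_whole

print(share_candy(12, 3))  # → (Fraction(4, 1), True): four candies per friend
print(share_candy(11, 3))  # → (Fraction(11, 3), False): now what?
```

Using exact fractions rather than floating point keeps step four honest: the code reports precisely when the answer fails to de-abstract cleanly back into whole candies.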

LEVITT: So this interpretation step, when you take a simple problem like dividing the candy, it’s maybe harder to see than when you take an economic problem. Because what we see from step three is “what” in economics, like “how much,” but what matters usually in science is “why.” And “why” doesn’t usually pop out of steps one, two, and three directly. It’s a human endeavor to try to interpret what you observe in the data and then tell stories that are potentially causal around them. So in economics, I think this step four is really critical. That in many ways is the point at which you distinguish who’s a really good economist from just an average economist.

WOLFRAM: Yeah, I totally agree with that. Although of course the setup and the definition of the abstraction make a huge difference to how effectively you can do step four.

We’ll be right back with more of my conversation with Conrad Wolfram after this short break.

*      *      *

LEVITT: So you define a problem, you abstract, you compute, and then you interpret. I will say, from my own experience in economics, the only one of these four that doesn’t really matter, interestingly, is compute. So, like, I’m terrible — I’m not good at math computation. It is never a problem because, within economics, that is the skill that is rewarded in college and then in Ph.D. programs. So we have the most incredible overabundance of computational talent you could ever imagine. And there are many economists who aren’t very good at the other steps. But except for me, almost all of them are good at computation. I never worried about it; I always knew I could find a co-author — no matter how hard the computation piece was, that co-author could figure it out, no problem. And my own view was I’m not going to put any effort into developing those kinds of skills. I’ll put all my effort into these other three steps. And I think for me, it worked out really well.

WOLFRAM: Yeah, no, that’s right. And that’s why you’ve had a different perspective on many things from many others. But it’s a funny thing in the world, I mean, not just in economics, though economics is a good example of this. If you look at, where has math or computation been successful? It was successful for hundreds of years in physics in particular because we had problems of the planets going around the sun and that kind of thing. And it was very amenable to things that could be beautifully abstracted very neatly to something that you could calculate. So like accountancy, physics — biology was pretty much a nonstarter because too much data, too messy. Economics has been this funny mix because it seemed like it might be quite neat, but in reality, that’s often a pretty poor representation of what’s really going on, and the interpretation is very complicated, etc. So actually that was a funny one in the middle. And then we have all these areas which are brand new in a sense — there’s programming itself — which are born out of this age of mechanized calculating. But the point really is now, in the outside world, every decision in every walk of life has a kind of computational element potentially to it. We see this a lot in our democracies. If you’re voting, the arguments that are put forward to you are often much more quantitative in nature than they were 50, 100 years ago. Certainly during the pandemic, if we’d had the same pandemic, so to speak, 50 years earlier, I don’t think you’d have been seeing graphs and charts and discussion of risks in the way that we did. But the difficulty we’ve got is that most of our populations are unable to assimilate that because they haven’t had any education in it. And when they’re presented with this, they don’t really know how to distinguish an expert who’s talking garbage from an expert who’s perfectly sensible.

LEVITT: So Conrad, we’ve been talking about this approach as if it’s specific to math, but it must have important implications for the teaching of all subjects, right?

WOLFRAM: Yes. Take the real world as you project it will be and take it back to education so that our humans, our students, can get the maximum experience ready for life. The reason math is so much at the center of this is because it’s changed so fast. And it’s so central. I think that will happen with some other subjects as A.I. unfolds, but it hasn’t yet to the same extent. The real thing that needs changing here is the set of incentives, the ecosystem, so that subject change, not pedagogical change, can occur. Any pedagogy that helps you get better grades, that gets lots of funding — private sector or indeed public sector. Anything which tries to actually change the subject to match the real world, it’s almost the opposite — everything is locked down. And I think people are starting to realize there’s a major problem. I think it’ll become much more apparent in the next few years.

LEVITT: You don’t just complain about math. You’re trying to do something about it. You have built a radical new math curriculum that tries to teach the subject in a totally different way. Could you maybe talk about some example modules that are part of it? They were the most engaging math content I’ve ever seen. And it really shook me into the reality of how powerful this could be if it ever became how we taught math. So could you talk about some of your favorite modules you’ve developed?

WOLFRAM: At the outset of this, I said to my team, “I want to build a math curriculum that assumes computers exist.” Seems obvious, but we couldn’t find one. And so the way we went about this was to say, “Look, why don’t we start with problems that we think might be engaging to a student?” Something they might care about in their life at that moment, and see if math can be helpful or not. For example, we came up with, can you spot a cheat? Can you work out who’s cheating and who isn’t? And the idea is, start with a problem, a very open-ended problem. We have a very scaffolded kind of module to work through with a student who’s at an early stage of understanding this stuff. To give you one example out of that module, there’s a step where we get half the students in the class to toss a coin in reality and half to cheat by typing H and T or heads and tails into the computer without doing the experiment. And then we send the data to the teacher and we do a little bit of analysis on the data and we advise the teacher whether we thought student A cheated or not. They’re amazed when, typically, it’s obvious which one’s cheated. And they start to ask questions about it. Gosh, that’s weird. How did you know I cheated? And it’s to do with the patterns in the data. You tend to type too evenly when you type it in by hand, as opposed to doing experiments.

LEVITT: People are really bad at mimicking randomness. People get very uncomfortable with long strings of heads, even though long strings of heads occur in nature. And it’s almost impossible for a person who’s creating a string of heads and tails from their head to make it look very much like true flips.
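The “too even” pattern the module exploits can be sketched with a crude run-length check. The threshold below is a rough rule of thumb for roughly 100 flips, not a rigorous statistical test, and the detection used in the actual module may differ:

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical consecutive outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def looks_hand_typed(flips, threshold=5):
    """Flag a sequence as suspiciously even if it never produces a long streak.

    In about 100 genuine fair-coin flips, a run of 5 or more identical
    outcomes is very likely; hand-typed sequences tend to alternate too often.
    """
    return longest_run(flips) < threshold

hand_typed = list("HTTHTHHTHT" * 10)  # an invented, too-even sequence
print(looks_hand_typed(hand_typed))   # → True: flagged as a likely cheat

random.seed(42)
genuine = [random.choice("HT") for _ in range(100)]  # simulated real flips
print(longest_run(genuine))  # long streaks show up naturally
```

A real classroom version would want a proper test (e.g., comparing run-length distributions), but even this one-number summary usually separates typed sequences from real ones.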

WOLFRAM: Indeed. That’s an early way into discussing how we do credit card fraud detection, and a lot of other things where it’s to do with patterns in data. Now, of course, we can only do that sort of thing if there’s a computer there because you can’t do all this by hand. And it’s a very different sort of approach, right? You wouldn’t see that in a normal math curriculum because it’s not got all these pieces there. It’s not the approach one uses. So what we’ve been trying to do is go from a problem through a sort of story as to how you solve that problem through these four steps, and very deliberately saying which step you’re at — are you defining, are you abstracting, etc. Then we sometimes zoom out to a sort of primer where we say, “Look, we can talk about this in English, but actually, equations have been invented. So let’s talk a bit about what equations are and why they might be useful. Then let’s come back and use them because it happens to be useful as a tool in this case.” And then what we want to do next is say, “Okay, you’ve got the idea, very scaffolded, of how to do this. Now let’s set you on a project where we’re just going to remind you of these four steps, but we’re going to give you a slightly more open-ended thing for you to do by yourself with a little bit of help to guide you.” Because where we want people to be in the end is: Here’s a problem — economics or mechanical engineering or life. Here’s a computer. Can you deploy computation to help you get a better answer to this problem? That’s where we want to be. And that’s the least scaffolded. Here’s a problem, go figure out what to do. So we’ve had, can I spot a cheat? Another one we did was how fast can you cycle around a particular cycle race? Let’s do some image processing on whether you’ve got low handlebars or high handlebars, and how that affects your aerodynamics. Is that the biggest effect or is it rolling resistance?
So one of the things we’re also trying to do is throw in far more complicated models for students to get used to using and assimilating and being skeptical of than we’re asking them to make. Real life, you’re thrown very hard models, and the question is, do you believe any of them? Are they true? Can you work with them?

LEVITT: One of the things I like to do on my problem sets with my university students is to include all sorts of extraneous information, parameters, facts in the problems, because they’re so trained that if a word appears in a problem, it must be used, but in life, of course, that’s not true at all. It’s very disruptive to their thinking to have to work through and to actually realize — “Wait, this can’t matter at all.” Super important for thinking, but not in our rote training. It’s something they’ll almost never see in the classroom.

WOLFRAM: Yeah, and catastrophically reinforced in our assessment systems. Years and years ago, I’d seen somebody describe how to write a good math assessment. And it said, “It’s very important you don’t give them any extra information that’s not needed,” because then you wouldn’t be assessing the right thing. But hey, ho, that doesn’t match the real world. That’s another typical thing which happens, which is people think fairness in assessment is to do with reproducibility of marking. But actually, of course, fairness is also to do with whether it matches the real world. And as you point out, extraneous information is everywhere and is perhaps an even bigger problem now than before, because we can do so much more working out that you don’t know what’s extraneous. You can potentially include things that seemed extraneous into your model and you can still compute with them. So it’s a very complicated business for people to get experience in.

LEVITT: Another of the modules you put up that day was one where you took a photo of yourself or someone in the audience and facial recognition technology would make an assessment of which of the Harry Potter characters your face most closely resembled. And I remember that was just intriguing. In particular because you would never introduce facial recognition into a typical math sequence until the very end, because it’s some of the most complex computation you’ll ever do in these neural nets, and very hard to explain. But in your world, where it’s about asking questions — how sensitive is the facial recognition software to changing your expression, or putting on a hat, or whatnot — a whole different set of questions comes to the fore, which I found inherently interesting.

WOLFRAM: One of the great things we can do is completely reorder the curriculum based on conceptual complexity. Machine learning is a great example. The idea of just using machine learning — not building new neural nets, but just using it — should be really early, should be somewhere in primary education because that’s how kids learn quite a lot of the time themselves.

LEVITT: There were a couple of example exam questions, and one that caught my attention was — it’s a visual; it’s got a bunch of sliders where you can control different dimensions. And the question is about buying life insurance. In some sense, there was a complex model underlying what the value of life insurance was and what the best life insurance product would be for the student. And by moving around the different sliders, they had to understand the relationships between the variables and come up with the right contract to purchase and an explanation for why that was. Now, that is just completely and totally different from anything any kid is doing in the classroom today.

WOLFRAM: We’re trying to mimic real life ‘cause part of what I think is really important in education is get experience, accelerated experience, for what you’re going to face in real life. That is today’s real life. You’ve got complex models, you’ve got offers from many different companies for life insurance or whatever it is. And the question is, which one do you need to get? How do you not get fooled by it? Can you assess it in an intelligent way? What we really desperately need is open-ended assessments that ask fuzzy questions. So, typical other question I might put is: here’s the data from two versions of a website, which one is performing better? Well, depends what you mean by better. It depends what you’re trying to optimize. So those are the sorts of things I think of as computational questions that we need answered.
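The website question can be sketched with made-up numbers. The traffic figures and the two metrics below are purely hypothetical, chosen so that “which version is better” flips depending on what you optimize:

```python
# Hypothetical traffic data for two versions of a website (illustrative only).
versions = {
    "A": {"visits": 1000, "signups": 50, "revenue": 400.0},
    "B": {"visits": 1000, "signups": 40, "revenue": 520.0},
}

def signup_rate(stats):
    """Fraction of visits that converted to a signup."""
    return stats["signups"] / stats["visits"]

def revenue_per_visit(stats):
    """Average revenue earned per visit."""
    return stats["revenue"] / stats["visits"]

# "Which one is performing better?" depends entirely on the chosen metric:
best_by_signups = max(versions, key=lambda name: signup_rate(versions[name]))
best_by_revenue = max(versions, key=lambda name: revenue_per_visit(versions[name]))

print(best_by_signups)  # → A (5% signup rate beats 4%)
print(best_by_revenue)  # → B ($0.52 per visit beats $0.40)
```

The arithmetic is trivial; the open-ended part, as Wolfram says, is deciding what “better” should mean before you compute anything.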

LEVITT: So my reaction to your ideas about radically changing the way we teach math, it’s: Well, sure, it would be awesome to make that change, but good luck! The tiniest changes are so hard to make in math curriculum. You’ll never convince anyone. But you pulled off something that I would say is close to a miracle. I don’t know how, but you convinced the country of Estonia to adopt your approach about a decade ago. How in the world did that come to be?

WOLFRAM: I mean, it actually didn’t take as much convincing as you might think. Estonia has been a very innovative country. And the minister at the time was a physicist and was pretty radical. He knew about some of my ideas, and within 10 minutes, it’s like, “Yeah, I’m convinced we need to do this. The question is how.” It was interesting also because Estonia did pretty well in international league tables of math.

LEVITT: So they have an amazing education system, which seems to make it even more surprising that the Minister of Education would want to shake things up.

WOLFRAM: His attitude was, it wasn’t fit for purpose, didn’t make sense to him. He agreed. It’s a shame that in most countries, the ecosystem of education has got so locked against subject change. In a way, it’s worse the more “important” the subject is, in quotes. So math is right at the top of that, which makes it more locked against subject change than the side subjects. And one of the things I’ve argued a lot is that across our countries, we’ve got to set up the educational ecosystem so that subject change is allowed to happen, because we’ve got a very rapidly changing outside world. We’re going to see a lot of changes to all sorts of subjects because of modern A.I. So the way you do history in the real world will no doubt change a lot. We don’t know quite how yet. We’ve known how maths has changed for decades, and we still haven’t reacted to that. So we better get our act together, both for maths itself, but also as a pre-warning, so to speak, for changing other subjects.

LEVITT: I’ve worked with a lot of companies and it’s often very easy to get the boss to say, “Yes, that’s a change I want to make.” But then when you actually get into the layers of implementation, not very much ever happens. Were you actually successful in getting your curriculum implemented on the ground in Estonia in a big way?

WOLFRAM: Not in as big a way as I'd like. The mistake I made at the time was we were going to have phase two as really fundamentally reforming all the assessments. And by the time we got to phase two, the minister had changed, the government had changed. They weren't against it, but it wasn't their priority. That was something I learned. We didn't manage to push that as hard as we might have. Phase one was, can we change the subject in the classroom? There was enthusiasm on the ground. One of the problems we had, of course, was that teachers were going to be panicked. Obviously we put on training and things to help, and actually the modules that I was describing that we built, we have a teacher and a student version. The initial idea of that was, how are we going to help the teachers teach it? Well, why don't we just have a teacher edition, which tells them what they could do at different points in the lesson and writes it all out. What really drove the teachers forward in many cases, though, was that the students understood why they were in the lesson and what they were doing. As in, if you say to somebody, "Can I spot a cheat? Can we figure out how to do this?" you're kind of on the same page about what you're trying to do. If you set somebody a quadratic equation to solve, it's like, "Uh, don't really understand why I'm doing this." And they've got a point. So there was a lot of drive and energy the teachers found they got from student engagement, because the students understood what it was they were doing. We got some initial group of schools to do early trials of this and tested the students. And actually, amusingly, quite a lot of the students did as well or even better on their traditional math exams than the control groups who hadn't done it.

LEVITT: Even though in principle, they weren’t spending all of that time on the rote things. They were spending their time on these more creative elements. It’s an amazingly valuable outcome because the challengers say, “Look, the kids aren’t going to learn the things that I, the critic of Conrad Wolfram, think are important.” But if they learn that anyway, almost by osmosis because they’re engaged in the material, then a whole slate of arguments against you just fall away.

WOLFRAM: They were having some traditional math stuff at the same time in most cases, and some of these were quite good students. But I think that, to some extent, that's the case. They understand why they're doing it. One of the most shocking things after my TED Talk years and years ago: the most common comment I had was, "Oh gosh, this is the first guy who's explained to me why math has anything to do with my life. I've been learning math for 12 years of my life at school. I have no idea how it has anything to do with what I'll do." And I think the fact that there was a connection here between what math is for and something they might want to solve gave them some impetus for learning the mechanical stuff, even if they didn't really need it. There are a lot of people who are completely turned off by the abstraction of maths at the outset. They just don't understand why they're doing it. And, for whatever reason, they're not motivated to push through that. Some of those people, if you find the right latching point in terms of problems they're interested in, it's extraordinary how good they are as computational thinkers, once you've got them engaged in something they care about. And so I think there are a lot of people right now we've just massively put off by starting with some abstract equation to solve, or whatever it is. And they don't see the point of that, so they just switch off. So they never get to the point of feeling why they would use the math. And I think that's something that we demonstrated in Estonia.

LEVITT: So you probably got the wrong idea when Estonia adopted your curriculum so readily. I suspect, just knowing what I know about education, that you haven’t had a whole lot of other successes in the last 10 years. Is that right? Is it discouraging?

WOLFRAM: It’s frustrating. My projection or hope was that we’d find Estonia and a few other small countries. I didn’t think the U.S. would suddenly run to this. I didn’t think the U.K. would suddenly run to this. The good news is that I think the discourse is completely different to what it was 10 years ago. In many countries, it’s just obvious that there’s a problem and there’s got to be some sort of change. The difficulty is how do you make that change? I think the other thing that’s pepped this up actually a bit is the whole generative A.I. business in the last year and a half. What that’s done is it’s got people to look again at the subject matter and to ask these questions about: what are we doing in education? So looking again has then put the spotlight back on what I’ve been saying. I mean, there are different detailed problems in most countries about making change to something as central as math. Now, Estonia is nice and small, and they’re agile, or they certainly were, I’m not sure. That may have come off the boil a little bit in recent times, but certainly they have been very agile, and that’s how they’ve thought they would win. The U.S. with regard to math has slightly different problems to, for example, the U.K. So, I mean, the sort of problem in the U.S. is it’s turned into a massive political battle.

LEVITT: Perhaps the most heated has been this very contentious math war going on in California. And this math war pits a group of people who would like to put data science into the high school math curriculum in place of Algebra 2. It seems like it wouldn’t be that controversial, but it’s got everybody so upset. Look, you and I and most sensible people think that if we had more data science, more of the kind of thinking we’ve been talking about today, it would be great. In California, that idea got coupled with this notion that the current math framework is inequitable, in the sense that the privileged kids are doing really well in the current system, especially in Algebra 2 and calculus. Many kids just can’t get through Algebra 2 because it’s abstract, because it’s boring. I don’t know what the reasons are, but it doesn’t make sense to a lot of people, and so it gets in the way of them going to college. And so the reformers thought, look, we can take care of two things at once. We’ll deal with this equity issue at the same time as we deal with the fact that the curriculum, Algebra 2, isn’t nearly so relevant to kids’ lives as data science. And that, I think, has really backfired because it’s given the traditionalists a foothold to say, “No, no, we shouldn’t do data science because we’re doing it for all the wrong reasons. It’s just about dumbing down. It’s just about removing barriers.” It’s been interesting, but discouraging to watch. My own view is that there’s no way in the short run that adding data science into math is going to be a powerful equalizing force. And the reason for that is that to do data science well is incredibly hard. We don’t even really understand how to teach data science. We’re just learning that. I’m certain that as data science or computation becomes very important in the math curriculum, it will trickle down from the most elite private schools, eventually down to public schools. 
All of these things always benefit the privileged first before they have broader returns. And so I actually think the premise of the reformers is wrong, not just maybe the political miscalculation that’s led to so much backlash from the traditionalists.

WOLFRAM: I think that's very cleanly put from my understanding of it. They're saying we don't want it dumbed down. And I agree with them. We don't want it dumbed down. And I don't know whether what was proposed was doing any of that. But I agree that aligning those two issues, equity and the subject we actually need, is a huge error. The closest parallel I see to this, in a very different scenario, was Latin in places like the U.K. In the 1950s or so, if you were in any elite school and even some non-elite schools, you would learn Latin as pretty much a bedrock of your education. You couldn't go to a top university without having a Latin A-level, which was the exam you took beforehand. So if you were going to read biology at Oxford, you needed a Latin A-level. Sounds absolutely crazy now. I think that's quite similar to what we've got with math at the moment. It's assumed this importance, even though it doesn't represent the actuality of what you need. Even in the 1950s in the U.K., people weren't literally speaking Latin to each other. The big difference with math is that we've got this other, very related subject: this modern version. This math has become all-empowering in the real world, unlike what Latin was. And yet we're not moving to that. So it's good news for people who support math. It's become a more and more powerful subject. It's just somewhat different.

You’re listening to People I (Mostly) Admire with Steve Levitt and his conversation with Conrad Wolfram. After this short break, they’ll return to talk about Conrad’s business Wolfram Research.

*      *      *

In addition to trying to change math, Conrad, along with his brother Stephen, runs a pretty successful business called Wolfram Research. I want to learn more about that, and also see if Conrad's got a clearer idea about what to do with his next 20 years than I do.

LEVITT: So everything we’ve been talking about so far, it’s really just a hobby. It’s not your actual job. You have a real job running the European portion of a company appropriately called Wolfram Research. And listeners might know of or have used your products like Mathematica or Wolfram Alpha. Could you explain what your products do for those who are unfamiliar?

WOLFRAM: Our idea is we believe in the power of computation to deliver answers. And we’ve been trying to build the ultimate way to assist people to do that over 36 years or so. What we’ve been trying to do is build a way of allowing any computation to happen — put together all the possible algorithms in the cleanest structure, this massive ecosystem of capability to deliver accurate answers with as much automation as possible. 

LEVITT: At the heart of this company is a programming language called Wolfram. And in the programming world, there's a bifurcation between closed-source and open-source languages. Can you explain what those two terms mean and the logic that went into making Wolfram a closed-source language?

WOLFRAM: Sure. Open source is really a plethora of different business models which relate to the idea that it’s free to use in some sense, and it’s open to change, and you can see all the innards of it and make changes as you want them. In most cases, you can just download something, use it immediately, and off you go.

LEVITT: Python, R, JavaScript, Ruby. Those are all examples of open-source languages.

WOLFRAM: That's correct. Our approach is a bit different: we've deliberately curated a large number of capabilities. We've tried to build those at a high level, to piece them together in a very ordered way. Now, why have we done that? Because in the end, after an initial go with open source, where everybody contributes from around the community and adds things in, it starts to become quite messy at some point, because you need some sort of structure that really hangs these things together when it gets complicated. I think we've got six-and-a-half-thousand functions, and having those all consistent, so you can immediately go from one to the other, so you can operate at a very high level, takes a bunch of effort, work, and time to make work. And we think we have by far the fastest way to actually solve problems using computation because of that. Now, a very interesting development with that is deploying modern large language models, L.L.M.s, to write code. 'Cause the thing that's happened in the last year or so is that you can potentially utter something in English and get some code written. And it's really exciting now because you can write a piece of Python out of an L.L.M., or you can instruct it to write a piece of Wolfram Language. And the Wolfram Language is incredibly short and clean and easy to understand and edit. For the human to follow on from that and make changes and use the abstract code is actually much easier. After a period where open source was everything, I think people are becoming a bit more skeptical, as it gets a bit messier, of how well it delivers results. If you look, for example, compared to R, we have many capabilities I think are really clean there, and things can just get done a lot quicker. Our language is openly available to look at; it's the setup of it that's open. You can do many things for free with it, just not everything.

LEVITT: I could imagine a skeptic saying, “Hey, at the start of this episode, I got the impression that Conrad was trying to change math education because he cared about kids. But now, I see he’s just doing it to get rich. He just wants to sell more subscriptions to his Wolfram computing company.” How do you respond to that?

WOLFRAM: What I would say initially is, “Oh my gosh, there are easier ways to try and improve our business than this,” right? 

LEVITT: I would say you’re either a really bad business person, because, my god, trying to change education is the most difficult task, and the number of subscriptions you could sell — I can’t really think of a worse path.

WOLFRAM: Right. It’s pretty bad as an idea for generating business per se. Yeah. So that isn’t my motivation, although it’s a perfectly reasonable question to ask because I think we need to understand for everybody, rather than exclude people because you somehow think that they have some edge to get, some incentive to go. I think it’s much better to just be honest about the incentive. If you’re a professor of math for 30 years, you have things to drive you forward to keep the thing perhaps as it is.

LEVITT: Sure, it might not be financial, but there’s prestige and other things you want to maintain.

WOLFRAM: Indeed. So the answer is, I haven't done this to generate more business for us. It's because I actually do want to change things. The reason I was in a place where I could see this needed doing was because I've been right in the middle of this revolution of computation that Wolfram has been part of driving. And I've also got some perspective on what happens in education, and almost nobody has that in that generality. And I think that's why I saw this problem: there's such a massive chasm between the two, when other people haven't really seen this. If you think math is a mainstream subject for everyone, the everyone is mostly not mathematicians. I know for our Mathematica, only 3 or 4 percent of its users classify themselves as mathematicians. The curriculum should be set by a very wide range of people who are not just math. They're physics or they're C.E.O.s or whatever. And so actually that breadth is not being represented fully in who's setting the curricula. And I think, again, that's played into how the problem has arisen, and why I can see that and other people can't. So I think it's more that I was in the right place because of the work we've been doing. And, yeah, it's not a great way to make money, I'm afraid.

LEVITT: You’re an extremely smart person. You care deeply about ideas. You’re quite curious. And you’ve also happened to make some money along the way. And you and I are almost the same age. And I think we find ourselves in a not entirely dissimilar situation right now. And personally, I feel I’m at something of a crossroads. I’m trying to figure out, what will I do with the rest of my life? Do you feel like that as well?

WOLFRAM: Very interesting you should mention that. Yeah, to some extent. I'm wondering whether that's an effect of the world as it is right now. It seems very difficult to make a lot of things happen in the world that seem like they ought to happen. Maybe it's always been like that. But on the other hand, I look over things and I see, gosh, it's amazing what one can do today that one couldn't do 30 years ago, just the technology that's available. The fun one can have with understanding things, getting information. We've now come to an era where there's a kind of assumption that anyone potentially could do anything in a developed country. Now, that's great in most ways, but unfortunately it also means that it gives one an awful lot of options, and that in itself can be quite difficult to handle. And I find that myself: even though we've both had some amount of success in some directions, that doesn't feel as satisfying. If only we could have done this, it might have worked better in this way. If you haven't succeeded fully in what you're trying to do, then somehow it's your personal failure rather than the system that's let you down. That's sometimes what I feel anyway.

LEVITT: I probably should feel more disappointment about my own failures, but my direct experience — so I’ve spent a lot of time trying to change the world. I haven’t succeeded very much. I guess one reaction is to say, “Wow, I should have done better.” And the other is to say, “God, it’s really hard to change the world. It’s not that surprising to me that I failed to do it.” But the crossroads for me then is, given how hard it is to change the world, should I keep trying or should I just go sip piña coladas on an island or something like that? It doesn’t sound like you’ve yet gotten to that piña colada on the island approach to life.

WOLFRAM: My setup is that I get pretty low and irritated when I don't have enough to do. And I know my own psychology well enough that I somewhat rule that out. Now, what the "doing" is may not be high intellectual work, right? But I know I have to be active. And so I rule that out, but it doesn't necessarily give me a full direction as to, hey, what's next?

I'd love to do a little listener poll. If you have a minute, send me an email with three pieces of information: your gender, whether you consider yourself a math person, and whether you are for or against replacing the traditional math curriculum with Conrad's new math curriculum. So again, those three pieces of information: your gender, whether you consider yourself a math person, and whether you're for or against replacing the traditional math curriculum with Conrad's new math curriculum. And I'll tally up the data and report back the results in two weeks.

LEVITT: So this is the point in the show where I welcome on my producer Morgan to take listener questions.

LEVEY: Hi, Steve. So a few weeks ago, we had psychologist Ellen Langer on the show, who talked a lot about the benefits of mindfulness. And we had some great listener questions for Ellen Langer. So we sent them to her, and I’m going to read the questions and the responses that Dr. Langer gave.

LEVITT: Great.

LEVEY: The first question was from a listener named Kay, and Kay wanted to know if the research had shown that older people who socialize with younger people live longer than older people who socialize with people their own age or older. So, Dr. Langer generally agrees, but also thinks that it depends on the particular people with whom you're spending your time. She says that research has shown there is a positivity effect that comes with age. So if you're with others your age who are generally more positive and likely to be more mindful, then the people your own age might be preferable. But if that group of people, the older group, are opting out of things that you think would be fun, because they mindlessly assume they're too old, then the younger group would be a better choice. So basically what Langer's saying is that whatever encourages you to be mindful and more youthful will lead to better health and longevity.

LEVITT: I live in a complete fantasy world, where I continue to be young, even though I'm not young at all. And when I'm around people my own age, every single time, I'm completely shocked. I think that these people look and act like my parents. But then I look in the mirror, which I try not to do very much, and I'm like, my God, no, that's me. I'm old too. But somehow, in the absence of direct evidence, I, in my head, can convince myself that I'm still young and sprightly. So that's the self-delusion that leads me to much prefer young people to people my own age.

LEVEY: So another question we had from a listener and this one came from Susan. And Susan is a left-handed person. Are you left-handed, Steve? I don’t even know.

LEVITT: No, I’m not. Thank God I’m not — I wouldn’t want to die nine years early like left-handed people die.

LEVEY: Yeah, I've heard that old wives' tale, too. This question and Dr. Langer's answer touch on this. Susan is left-handed, and she believes that left-handers process the world differently because the world is really built for right-handed people. There are more right-handed scissors. There are more right-handed desks.

LEVITT: Cars are set up for right-handed people, not left-handed people, for instance.

LEVEY: Yeah, that might be even more impactful than not having left-handed scissors. So lefties are often forced to learn workarounds, and because of this, she says that they're more mindful of process than right-handed people. Right-handed people can just mindlessly do things on autopilot because the world accommodates them. And Susan wants to know if this is associated with any health benefits. Because, as Dr. Langer has told us, living a mindful life is better for your overall physical and mental health. Dr. Langer wrote that she has found lefties to be more mindful, for all the reasons Susan mentioned. She says that since longevity is influenced by mindfulness, she would assume that left-handed people tend to be healthier, live longer, and be more creative. So Langer's response seems to defy the old wives' tale that left-handed people die nine years younger than right-handed people.

LEVITT: So after Ellen gave that response, I took a few minutes to look into it, because I wondered where the research came from that convinces everybody that being left-handed is one of the biggest risks for early mortality. What I found is that the initial studies were published in fancy journals like Nature, but since they were published, they've been completely debunked.

LEVITT: It turned out there was a very subtle mistake in the methodology these researchers used. They found a set of people who had died, and then they contacted the families of the people who died, and they asked whether the deceased were left-handed or right-handed. And then they got the ages, and what they found is that the age at death of the people who were left-handed was, on average, nine years younger than the age at death of the people who were right-handed. Okay, that sounds convincing. But what the researchers missed is that the identification of people as right- or left-handed is something that's changed a lot over time. For many decades, for centuries even, people who were born left-handed would be trained out of it and remain right-handed for their entire life. So when these researchers looked at the age of people dying, they were missing the fact that yes, left-handers were dying young, but that was because there were a lot more young people calling themselves left-handers than there were old people calling themselves left-handers, thanks to this cultural component that had made being left-handed much less common earlier in time. This is exactly what Conrad Wolfram and I were talking about: the kind of common sense and data literacy and ability to think about how to formulate and answer a problem that these researchers botched, in this case, because they didn't understand an important fact about the world, which is that the share of left-handed people had changed over time. But it also relates to how you, as a consumer of research, think about the world. And I think people who had taken Conrad Wolfram's math class would have a much better chance at seeing through this mistake than people who've been trained in the traditional math background.
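A quick simulation makes the cohort artifact Levitt describes concrete. This is a hypothetical sketch with made-up numbers (the 2 percent and 12 percent identification rates and the 1940 cutoff are assumptions for illustration, not figures from the original studies): every simulated person gets the same lifespan distribution regardless of handedness, but people born in later cohorts are more likely to be labeled left-handed, just as Levitt says happened culturally.

```python
import random

random.seed(0)

def simulate_deaths(n=100_000, study_year=1990):
    """Simulate a set of deaths observed in `study_year`.

    Everyone shares the same lifespan distribution, so true handedness
    has zero effect on age at death. The only twist: the chance of being
    *identified* as left-handed is higher for later birth cohorts,
    mimicking how older generations were retrained as right-handers.
    """
    ages, labels = [], []
    for _ in range(n):
        age = min(max(random.gauss(72, 12), 0), 105)  # age at death
        birth_year = study_year - age
        # Hypothetical rates: older cohorts rarely self-identify as left-handed
        p_left = 0.02 if birth_year < 1940 else 0.12
        labels.append("left" if random.random() < p_left else "right")
        ages.append(age)
    return ages, labels

ages, labels = simulate_deaths()
mean = lambda xs: sum(xs) / len(xs)
left = [a for a, l in zip(ages, labels) if l == "left"]
right = [a for a, l in zip(ages, labels) if l == "right"]
print(f"mean age at death, left-labeled:  {mean(left):.1f}")
print(f"mean age at death, right-labeled: {mean(right):.1f}")
```

Because young deaths disproportionately come from cohorts where left-handed identification is common, the left-labeled group's mean age at death comes out several years lower, even though handedness has no effect on lifespan anywhere in the simulation. That is exactly the survivorship-of-labels mistake the debunking papers pointed out.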

LEVEY: Kay and Susan, thank you so much for your questions. Dr. Langer, thank you so much for your responses. If you have a question for us, or a question for Conrad Wolfram, you can send us an email. We read every email that's sent and we look forward to reading yours.

Next week we will feature a bonus episode in the feed, and given all of the attention to Caitlin Clark and the WNBA, what better time to replay one of my all-time favorite episodes, with basketball great Sue Bird. In two weeks, we're back with a brand-new episode featuring veterinarian Thomas Hildebrandt. Thomas has been leading the charge to save the northern white rhino from extinction. There are exactly two northern white rhinos left on the planet, and they're both female. He's also involved with efforts to bring the woolly mammoth back to life. I have to say, the creativity that Thomas brings to these problems is really incredible.

HILDEBRANDT: So the mammoth is selling much better than the northern white rhino. I would put it a little bit side by side by flying to the moon, because the mammoth is a very attractive animal from the appearance — what you can’t really say to a rhino, at least for many people, and that is the selling point.

As always, thanks for listening and we’ll see you back soon.

*      *      *

People I (Mostly) Admire is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and The Economics of Everyday Things. All our shows are produced by Stitcher and Renbud Radio. This episode was produced by Morgan Levey with help from Lyric Bowditch, and mixed by Jasmin Klinger. We had research assistance from Daniel Moritz-Rabson. Our theme music was composed by Luis Guerra. Thanks for listening.

LEVITT: Since Charles Manson it has come to have a very different definition in the United States.



  • Conrad Wolfram, strategic director and European cofounder/C.E.O. of Wolfram Research, and founder of


