We recently solicited your questions for Sanjoy Mahajan, author of Street-Fighting Mathematics. As usual, you came up with some excellent questions, about everything from the methodology of educated guessing to Mahajan’s business model (the PDF of his book is available for free). And Mahajan’s answers are terrific — thorough, thoughtful, and even funny (you can tell he likes Richard Feynman even before he says so). By the end of this Q&A you’ll know a bit more about the Gulf Oil spill, the economics of publishing, and the relationship between stair-climbing and eating jelly doughnuts. Thanks to all of you for the questions and especially to Mahajan for the great answers.
How does one improve his ability to make educated guesses? Is it just something one is born with? – Bobby
In college I took a class where we figured out if we tend to overestimate or underestimate when we make an educated guess. Then from that we were able to find out the percent that we tended to over or underestimate and theoretically this would lead us to make more accurate educated guesses. Is this the same process you use? And is it the same one the book recommends to be a better educated guesser? – Brian
What are the most important factors that improve estimating? For example, I would assume simple geometry is one (understanding area, volume, proportion), but what others are crucial? – Gary
Talent, it now seems, is made rather than born. That is the conclusion of a large body of research on world-class experts, from grandmaster chess players and concert pianists to championship golfers. This research, the subject of a previous interview in this blog, “The Science of Genius: A Q&A With Author David Shenk,” represents one of the most exciting conclusions of psychology research—for it implies that we can dramatically increase our skill in many areas, including in educated guessing.
This research has identified a particular type of practice — deliberate practice — as essential for improving expertise. In deliberate practice, one receives specific feedback on what to improve, whether through self-study or with a coach, and uses that feedback to make targeted improvements. Deliberate practice is contrasted with the usual habits of practice, such as how I tried to improve at chess: I simply played lots of chess games, with the result that at age 41 I am hardly better at chess than at age 11. For educated guessing, a powerful type of deliberate practice is first to make educated guesses and then to check the intermediate steps in the calculation to find where in particular the estimates were most inaccurate.
Furthermore, in teaching educated guessing, I find that developing skill in it requires mastering a repertoire of reasoning tools, just as a carpenter masters a repertoire of physical tools. Because the reasoning tools seem to be so important, I organize my teaching around the reasoning tools. For applied mathematics, which is the focus of Street-Fighting Mathematics, the reasoning tools for getting quick answers are as follows: dimensional analysis, easy cases, lumping, pictorial proofs, successive approximation, and reasoning by analogy. Science and engineering estimations are the subject of my next book, Street-Fighting Tools for Science and Engineering; there I find nine tools to be particularly useful (several of which overlap with those for mathematics).
Here is an example that illustrates the most important science-and-engineering estimation tool—divide-and-conquer reasoning—and shows how to build deliberate practice into one’s thinking. Suppose you want to estimate how much oil the U.S. consumes (in barrels per year). Without expertise in oil economics, a straight-off guess (before making a detailed estimate) is pretty much a wild guess. My wild guess is that anywhere from 1 million to 1 trillion barrels per year sounds reasonable.
Now it’s time to refine that wide estimate by using divide-and-conquer reasoning. Here, I’ll divide the estimate into two parts: (1) the amount of oil consumed by passenger vehicles (cars, pickup trucks, SUVs, etc.); and (2) the ratio of all oil consumption to passenger-vehicle consumption. The final estimate is the product of these two estimates.
However, the first part is still too hard for me to guess accurately. Thus, I refine it into simpler factors: (1) the number of passenger vehicles in the U.S.; (2) the miles each vehicle is driven per year; (3) the typical gas mileage of a passenger vehicle (miles per gallon); and (4) the number of gallons in a barrel.
Here are my estimates for these four subparts:
The number of passenger vehicles in the United States: The U.S. population is 300 million, and maybe there is one vehicle per person, so there are probably 300 million (passenger) vehicles.
The miles each vehicle is driven per year: from looking for a used car many years ago, I remember that cars driven 10,000 miles per year or less were considered “low mileage.” So I’ll estimate that a typical passenger vehicle is driven 15,000 miles per year.
The typical gas mileage of a passenger vehicle (miles per gallon): small cars get maybe 40 miles per gallon on the highway; SUVs get maybe 15 miles per gallon. I’ll use 25 miles per gallon as a typical value.
The number of gallons in a barrel: I’ll pretend (to give a spot for deliberate practice later) that I have no idea about this value, and I make a semi-wild guess of 500 gallons per barrel.
Having divided, I now hope to conquer by combining all the subestimates:
300 million vehicles * 15,000 miles/vehicle/year * 1 gallon/25 miles * 1 barrel/500 gallons = 360 million barrels per year (for passenger vehicles).
I need one more estimate: the ratio of all oil consumption to passenger-vehicle consumption. On the one hand, passenger vehicles are an important oil user, so the ratio should be near 1. On the other hand, I can think of many other important uses (planes, trains, heating, electricity generation, fertilizer, and plastics), so the ratio should be much higher. My compromise is that passenger vehicles use 50 percent of all oil consumed, so the ratio in question is 2. My final estimate for U.S. oil consumption is therefore 720 million barrels per year.
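The divide-and-conquer arithmetic above can be sketched in a few lines of Python; every input is one of the rough guesses from the estimate, including the deliberately wild 500-gallon barrel:

```python
# Divide-and-conquer estimate of U.S. oil consumption,
# using the rough guesses made in the text.
vehicles = 300e6             # guessed U.S. passenger vehicles
miles_per_year = 15_000      # miles driven per vehicle per year
miles_per_gallon = 25        # typical gas mileage
gallons_per_barrel = 500     # the semi-wild guess (true value: 42)
all_to_passenger = 2         # passenger vehicles use ~half of all oil

passenger_barrels = (vehicles * miles_per_year
                     / miles_per_gallon / gallons_per_barrel)
total_barrels = passenger_barrels * all_to_passenger
# passenger_barrels -> 360 million barrels/year
# total_barrels     -> 720 million barrels/year
```

Swapping in the true value of 42 gallons per barrel would multiply the result by roughly 12, which is exactly the factor-of-10 discrepancy found in the check that follows.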
Now it’s time to check the final estimate and, in order to apply deliberate practice, to check the intermediate steps. The actual oil consumption is 19.5 million barrels per day, which is 7.2 billion barrels per year (2008 estimate, from the CIA World Factbook). My estimate is very low—by a factor of 10. That error is too large for me, even with my low standards.
It’s time to check the individual estimates in order to find out what went wrong.
The number of passenger vehicles is 254 million (2007 data, U.S. Bureau of Transportation Statistics). My estimate of 300 million is too high but only by 20 percent.
The number of miles driven per vehicle per year is roughly 11,500 (2001 estimate, U.S. Department of Transportation, National Household Travel Survey). My estimate of 15,000 is roughly 30 percent too high.
The typical gas mileage is roughly 20 miles per gallon (2007 data, U.S. Bureau of Transportation Statistics). My estimate of 25 miles per gallon is high, but by only 20 percent. Furthermore, this error compensates for a portion of the preceding errors. That compensation is an additional benefit of divide-and-conquer reasoning: by splitting the problem into many parts, you give the various errors a chance to cancel each other.
The number of gallons per barrel is 42. My estimate (really, guess) of 500 gallons per barrel is far too high, by roughly a factor of 10. Aha, that explains the discrepancy between my estimate and the true oil consumption. (My estimate of the ratio between all oil consumption and passenger-vehicle consumption turns out to be quite accurate.)
That is the specific feedback on where the estimation can be improved. To use that feedback for deliberate practice, I ask myself, “How could I estimate the volume of a barrel?” Two methods come to mind.
First, I’ve seen barrels of tar at roadside construction sites. Perhaps they are oil barrels. Their volume can be divided into three factors (divide and conquer again!): depth, width, and height. These barrels are about 3 feet or 1 meter high. Their width and depth are about 1.5 feet or 0.5 meters. Thus, the volume is 1 meter * 0.5 meters * 0.5 meters (pretending that the barrels are square instead of circular in cross section, which is equivalent to pretending that pi equals 4). The product comes to 0.25 cubic meters or 250 liters or roughly 65 gallons. That estimate, although higher than the true value of 42 gallons, is a significant improvement over my guess of 500 gallons.
Here is the second method, which MIT students taught me: Oil costs about $80 per barrel, and gasoline costs about $2 per gallon, so a barrel contains about 40 gallons!
How many barrels of crude have been leaking into the Gulf of Mexico every day? – jimi
I regret that I have no special knowledge of the rate at which oil has been spilling into the gulf, beyond what I read in the newspapers. But the question brings up two issues applicable to estimation in general.
The first issue is rounding and accuracy. In a natural-history museum, a guide was showing the visitors an ancient insect preserved in amber. “How old is that insect?” asked a visitor. “1,000,007 years,” said the guide. How can the age be known so precisely, the visitors wondered. “Because it was 1 million years old when I started here 7 years ago.”
Several early news reports about the spill had a similar flaw. Some reported a flow estimate of 5,000 barrels per day. However, other contemporaneous reports quoted the flow as 210,000 gallons per day. Because a barrel (of oil) is 42 gallons, the two numbers are numerically equivalent. However, they are psychologically different. The gallons-per-day figure of 210,000 includes a second nonzero digit (the “1” in the “21”), implying that it is based on quite accurate measurements. The number 210,000 suggests an accuracy of a few percent. In contrast, the 5,000 (for barrels per day) suggests merely that the “5” is somewhat reliable but promises little more. The suggested accuracy is perhaps 20 percent—a much more plausible conclusion, especially as some current flow estimates range up to 50,000 barrels per day.
The second issue is how to make a quantity meaningful. Quantities with dimensions (such as dimensions of barrels or gallons per day) are not meaningful in themselves. Is 5,000 barrels a day large or small? It’s hard to know. Even harder to decide: Is 210,000 gallons a day large or small? To make it almost impossible to decide, express the time in years: Is 80 million gallons a year a large or a small amount?
Because such quantities have dimensions, they do not receive a meaning until we compare them with a related quantity that has the same dimensions. To that end, compare the 5,000 barrels a day (or roughly 200,000 gallons per day) with another famous oil spill, the Exxon Valdez in Prince William Sound, Alaska. The supertanker spilled (conservatively) 11 million gallons of oil. Call it 10 million gallons. That means every 50 days (or roughly 2 months), the Gulf receives another Exxon Valdez worth of oil. (That comparison is based on using 5,000 barrels a day instead of current and usually higher flow estimates.)
What about the problem of bias in “street math”? It seems to me that rigor is the safeguard against bias hijacking estimates for manipulative ends — I remember during the presidential campaign that estimates of Palin rally attendance became controversial along these lines of street estimates — and also, much more troubling, is the ‘guesstimation’ that went on in the financial industry, where the banks colluded with ratings agency guesstimates on securities which were rigged to overinflate prices, thereby exploiting investor bias during a boom time (becoming blinded to the downside)? – frankenduf
Street-fighting reasoning can help develop financial judgment, a quality useful for spotting financial charlatans. The street-fighting technique of “easy cases” can, for example, help estimate loan payments. A key idea of easy-cases reasoning is to think about the extreme cases of a problem; these cases are usually easiest to understand, and their analysis helps build intuitions and ways of thinking.
Among loans, one extreme is the short-term or zero-interest loan. Near this extreme is a 10-year loan at 6 percent annual interest. (All loans are compounded monthly and repaid in equal monthly payments.) For concreteness, imagine that the principal is $120,000, in order to make the arithmetic easier. The approximate, easy-cases reasoning is as follows: if the interest rate were zero (the fully extreme case), the 120 payments would each be 1/120th of the principal, or $1,000.
Even when the interest rate is not zero, but still small, the principal-only payment of $1,000 per month will be the main portion of the payment. How can you tell whether the interest rate is small? Multiply the interest rate by the loan term and compare the result to 100 percent. Here, 6 percent per year times 10 years gives 60 percent, which is somewhat smaller than 100 percent, so the approximation that the payment is mostly principal is not too bad.
To improve the preceding zeroth approximation, estimate the interest. In the zero-interest (or all-principal) approximation, the principal declines at a constant rate from $120,000 to $0; therefore, the average principal balance is $60,000. A 6 percent annual interest means a 0.5 percent monthly interest, and 0.5 percent of $60,000 is $300. Thus, the interest will be roughly $300 per month—making the total payment $1,300 per month. (The actual payment is very close: $1,332 per month.)
The other extreme is the long-term loan (or, if the principal is negative, an annuity). For a loan near this extreme, consider the same principal of $120,000 but loaned at 12 percent annual interest for 30 years. Now the interest rate times the loan term is 360 percent, significantly larger than the 100 percent border between the extremes. In this extreme, the payments are mostly interest. The monthly interest is 1 percent, so the monthly payment is 1 percent of $120,000, or $1,200. This estimate is the zeroth approximation. It is already quite accurate—the true payment is $1,234 per month.
For the next approximation, I’ll just state the procedure without giving a proof. Because the zeroth approximation accounted only for the interest, the correction should increase the payment. The increase is estimated as follows:
Multiply the interest rate times the loan period. Here, the result is 360 percent.
Convert it to a number by dividing it by 100 percent. Here, the result is 3.6.
Raise e (the number 2.718…) to that number, and then take the reciprocal. Here, that means computing 1/e^3.6, which is roughly 0.027.
Convert that number to a percentage by multiplying it by 100 percent. Here, the result is 2.7 percent.
Increase the interest-only estimate by this amount. Here that gives $1,232 (instead of the true payment of $1,234). This corrected estimate is extremely accurate!
As another example with a more realistic interest rate, imagine the same principal ($120,000) loaned at 6 percent annual interest for 30 years (a typical fixed-rate mortgage in the U.S.). The interest-only payment estimate is $600 per month. The interest rate times the loan period is 180 percent or 1.8. The reciprocal of e^1.8 is approximately 0.17 or 17 percent. Thus, we increase the $600 per month by 17 percent, giving $702. Not bad: The true payment is $720.
Do authors pay you to advertise their books here? – Imad Qureshi
Please ballpark the amount of money you will make from this book. Does the fact that it is available in several free, pre-publication forms online affect this amount, and if so by how much? Would making its final form available free electronically affect it further, and if so by how much? – Quin
How much revenue did you lose by having this incredible visibility moment without having an available Kindle or iPad book or other e-copy? How much of that is recoverable? – Sandy
As far as I know, authors do not pay to have books featured here. [Ed.: No, they don’t; but thanks for the idea!] Those decisions rightly belong with the editors, based on what they find interesting and think will interest readers. The editors then ask a prospective author if he or she would answer questions about the book and related topics.
I was glad to do so. I did not know whether it would increase sales, for the publisher and I have made the book’s PDF file freely available. You may wonder why we did such a crazy thing.
I am fortunate to have a job where I get paid to develop and share knowledge. I find inspiration in these words of Thomas Jefferson: “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.” (Letter to Isaac McPherson, 13 August 1813.)
I therefore wanted Street-Fighting Mathematics to be freely available. That required an open-minded publisher, for this mode of publishing entails risk. Publishing includes fixed costs such as copy-editing, editorial work, typesetting, publicity, and setting up the printing press. These costs are incurred even if no books are sold. But what if everyone reads the book online without buying the printed version? The publisher loses its shirt!
MIT Press gladly agreed to take the risk. They published the book, in print and online, under a Creative Commons NonCommercial ShareAlike license—the same license used by MIT’s OpenCourseWare. Roughly speaking, the license gives everyone permission to share verbatim or modified copies non-commercially. Meanwhile, MIT Press retains the commercial rights and sells a nicely printed and bound hard copy.
Using the Creative Commons license and thereby making the PDF file freely available may have cost revenue. On the other hand, it may increase the book’s visibility and increase revenue. It also makes buyers of the print version more likely to be satisfied, because they can read it for free online (and even print it if they choose) and decide if it is what they are looking for.
MIT Press’s risk was lessened by modern methods of typesetting. In the old days, an author handed the publisher a hand- or typewritten manuscript. The publisher turned the manuscript into page proofs. The author corrected the proofs, and with luck all the corrections were correctly incorporated into the metal blocks of type, resulting in a typeset book. The process was expensive, and mathematical text, called “penalty copy,” was especially expensive.
That process has changed greatly. The entire book (except for the cover) was typeset by me, and I gave it to MIT Press as a PDF file. They copy-edited it; I entered the changes into my files and produced the final PDF file. If I wanted page proofs, I just printed my PDF file. Most of the typesetting expense has disappeared, and the typesetting is now often done entirely by the authors. This development reduces the fixed costs of publishing, and makes it easier for a publisher to risk using a free license.
In my case, I typeset the book using ConTeXt, which is based on the TeX typesetting system. TeX is one of the earliest pieces of free software. I find it fitting that free software helped enable, at least in this case, the creation and distribution of freely available content.
I know this makes me a philistine, but of what value is pi over, say, establishing that pi is exactly 3.14? That is, what would be lost or gained if instead of an endless number, we established that pi is this much and no more or less? Would our circles be clearly distorted? Would some mathematical principles tumble? – Wondering
The numerical error in declaring pi to be 3.14 is small: about 0.05 percent. To make that amount concrete, imagine a modified circle—one whose circumference is 3.14 (instead of pi) times its diameter. To make this new kind of circle, snip out a tiny piece of a true circle. If the modified circle’s diameter is 6 inches (15 centimeters), the snipped segment measures one-quarter of a millimeter—the thickness of two or three sheets of paper. Depending on the problem, this error may be small enough to ignore.
However, the conceptual problems with declaring pi to be 3.14 are more significant. It would make mathematics inconsistent by using conflicting definitions of pi: (1) as the ratio of circumference to diameter; and (2) as 3.14 exactly. Every area of mathematics that uses pi (differential equations, number theory, and much else) would now be inconsistent and the formerly clean statements would become incorrect or ambiguous.
That said, in the first stage of almost any analysis, one can profitably use pi equals 3.14 or even more extreme approximations, including pi equals 3. Imagine that you have a sheet of regular paper to turn into a cylindrical cover for a pencil holder, and the pencil holder has a diameter of about 5 inches. The approximation that pi equals 3 is enough to show that the 11-inch-long sheet of paper cannot wrap around the pencil holder, because 3*5 inches = 15 inches. Using more decimal places—for example, 3.1 or 3.14—wouldn’t change the circumference enough to make the project possible. Using approximations such as pi equals 3 or 3.14 (or sometimes pi equals 4 or even 1!) is very useful in the early stages of a project or analysis.
And that’s the purpose of street-fighting reasoning methods—to help you start, for often that is the hardest part. The motto: don’t just stand there, estimate something. Make enough assumptions to get started. You cannot lose by trying!
Enrico Fermi was famous for his ‘back-of-the-envelope’ calculations, including estimating the yield of nuclear bomb tests where he would drop torn bits of paper and observe how far they were displaced. So, my question: aside from chaotic and supercritical systems, are there any other areas in which mathematical guesstimates are not useful? – Lystraeus
These are Fermi questions—made famous by the physicist Enrico Fermi: one question he asked physics students was how many piano tuners are there in Chicago? He also estimated the blast from an atom bomb by how far some scraps of paper he threw up in the air were displaced from his observation point. These are common types of questions in physics PhD qualifying exams. Do you give Fermi any credit? – Dr J
Fermi was a master of this kind of analysis. The Nobel Prize-winning physicist Richard Feynman, who was himself a master of it, said (in his book, Surely You’re Joking, Mr. Feynman!) that Fermi was even better at it than he was.
How did Fermi become so skilled? Partly from the way he tried to understand physics. In the years just following World War II, physics underwent huge changes, many due to the development of quantum electrodynamics. Because Europe was physically devastated by the war, and because many of the leading physicists fled Europe for America, America became the scientific center of the world. Leading centers in America included Berkeley and Caltech on the West Coast and Princeton and Harvard on the East Coast. Conferences on both coasts meant lots of cross-country travel—which in those days meant taking the train or driving. On those trips, Fermi would sit in the back of the car and think about physics. He would pick one area of physics and review in his mind all that he understood about it, and try to figure out ways of thinking that made the results obvious. Before the distractions of email, the Internet, and cell phones were invented, that meant days of concentration and deliberate practice (for more on deliberate practice, see the above answer to the questions about improving one’s educated guessing).
One area that is unsuitable for approximations is computing the small difference of large numbers. Then, small errors in the large numbers turn into big changes in the difference. For example, showing that energy is conserved—important in the development of physics—usually means finding the difference between the energy that goes in and the energy that goes out. A slight error in either energy significantly changes their difference, and can change one’s conclusions about energy conservation.
Does the cost of a precise calculation add commensurate value? – wild guess
It depends! If a rough calculation indicates that a project is likely to be feasible, then a more precise calculation is worth doing. If the rough calculation indicates that the project is almost certainly infeasible, then a more precise calculation is probably wasted. For example, if a first, rough estimate of the cost to build a new bridge across the Hudson River comes up with “roughly $10 billion, give or take maybe a factor of 2,” yet the available funds are a few hundred million dollars, then there’s little point in refining the estimates with a detailed business plan.
I work on the 7th floor. How many additional calories will I expend if, for the next four years, I take the stairs instead of the elevator? – Jordan
Time to estimate! The energy required to raise an object—pretend it is me—to a height is the object’s mass times the earth’s gravitational strength (“g”) times the height. My mass is 60 kilograms; the strength of gravity is 10 meters per second squared; and 7 floors is roughly 20 meters. The required energy is their product, which is 12,000 Joules. This energy is, however, just the mechanical energy.
Because the human “engine” is only about 25% efficient (internal-combustion engines are also about 25% efficient), the total energy required is a factor of 4 greater: 48,000 Joules. Each Calorie (with a capital C) is about 4,000 Joules, so the energy required to walk up the stairs is 12 Calories. I could use this value to answer the question, but it would give me just a very large number of Calories, and I would not immediately know whether that number is large or small.
To make this energy more meaningful, I compare it against another relevant energy. A useful estimation fact: a moderate-sized jelly doughnut provides 1 million Joules or about 250 Calories. That would be enough to climb the stairs 20 times. Thus, one jelly doughnut provides enough energy to climb the stairs (every weekday) for a month. Equivalently, climbing the stairs for a month will burn off the Calories from one jelly doughnut.
The distinction between calories (1 calorie is roughly 4 Joules) and Calories (1 Calorie is roughly 4,000 Joules) is sometimes ignored. The results are never pretty. One Internet diet plan that I saw many years ago recommended eating all kinds of candy and sugar because walking just a short distance would burn away the calories. The plan “worked” only because the computations of the energy provided by candy bars used Calories, whereas the computations of the energy required for walking used calories—and the distinction was ignored. The bogus factor of 1,000 thereby gained made the diet look easy.