Our Daily Bleg: Does “640K” Really Belong to Bill Gates?

Last week, Fred R. Shapiro, editor of The Yale Book of Quotations, inaugurated Our Daily Bleg with a request to learn the true source of the quote “Read my lips.”

A consensus has yet to be reached on the origin, but your thoughtful comments (to which Fred replied) made some headway — and possibly helped out Netflix.

Fred will keep blegging for quotes here on Thursdays. (You can send your own blegs to: bleg@freakonomics.com.) Here’s his next request:

Our Daily Bleg
by Fred R. Shapiro

There seems to be a strong correlation between interest in technology and interest in quotations, and many of the emblematic sayings of our time are computer sayings, such as “information wants to be free.”

Bartlett’s Familiar Quotations ignores computer culture almost completely (in 1989 the editor said, “There ought to be something about computers and artificial intelligence. Surely somebody somewhere said something memorable.”). In my own book, The Yale Book of Quotations, I tried to pay special attention to computer-related quotes, and, in order to continue this effort in the next edition, I have begun a Computer Quotations Project seeking contributions of information about famous technological adages.

Recently I posted a list of inquiries about famous computer quotations on the Times’s Bits blog. One of the quotes I asked about was “640K ought to be enough for anybody,” attributed to Bill Gates (the earliest I have found this credited to Gates was in 1990). There were over 100 comments posted, including one that, on its face, appears to be by Mr. Gates himself:

The statement I made about memory space was that we need about one new bit of addressable memory every two years or so. We did our best to get the 68,000 to be used in the IBM P.C. because that would have simplified the address space issues a lot.

The schedule was six months too late for IBM. The VAX already had a clean 32 bit address space. The history is far more complicated in terms of the x86 memory space because we supported both Extended and Enhanced memory (bank switching). At no time was the software the limiting factor — it was always the hardware going from 20 bits to 24 bits segmented (Os/2 and Windows exploited this) and finally 32 bits linear.

I have always found it amusing that that quote is attributed to me but you can read interviews I gave about address space from the 1970’s talking about the growing need for address bits over time. 64 bits is nice but even that will run out.

– Posted by billg

Are any readers of this blog able to verify with Microsoft or Mr. Gates himself whether this was an authentic posting by him? Also, can anyone discover any evidence of the “640K quote” prior to 1990?


dnl2ba

I *have* seen another Bill Gates denial of having said 640k should be enough for anyone. Here's a Wired article from 1997 (albeit with a broken link):
http://www.wired.com/politics/law/news/1997/01/1484

Or, just trawl Google for other pages that have the same denial:
http://www.google.com/search?q="silly+quotation+attributed+to+me+that+says"

Ryan

"Artificial Intelligence is neither."

One of my professors in college told me that when I suggested I might want to work on AI for my Master's. That pretty much ended that idea, particularly after I took a graduate level AI class and found out he was right.

Fred Shapiro

#1 may well be correct. Although William Safire's 1988 column about "read my lips" was quite well-researched and accurate, he may have unintentionally popularized a Dirty Harry connection for the phrase.

Hovie

I'll bet that the NY Times's William Safire is to blame for crediting Dirty Harry with the phrase "read my lips." Albeit unintentionally!

On Sep. 4, 1988, Safire published an article on the use of "read my lips" by GHW Bush:

http://query.nytimes.com/gst/fullpage.html?res=940DE3D71F3AF937A3575AC0A96E948260

In this article he says "read my lips" is similar to "make my day," a phrase used by Reagan. He doesn't confuse the two phrases in the article, but he places them in very close context. I have a feeling that some people read this article incorrectly or remembered it wrong, thus giving rise to the erroneous Dirty Harry credit.

Spoon

@A programmer

If everything continues doubling every two years, and I currently have 2^34 bytes of memory (not including video memory, which shares the same address space), then it should only take 60 years before we need more than 64 bits of addressable memory. Perhaps you think we'll address 64-bit blocks instead of 8-bit chunks? Well, that would only give us 6 additional years, and high-end servers today already come with the amount of memory I'll have in 6 years, so it doesn't really buy us those six years (assuming things stay at the same pace) :/
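
A quick sanity check of that arithmetic, sketched in Python (the 2^34-byte starting point and the two-year doubling period are Spoon's assumptions, not established figures):

import math

# Spoon's assumptions: 2^34 bytes (16 GiB) today, capacity doubling every 2 years.
start_bits = 34
target_bits = 64
years_per_doubling = 2

# Each doubling adds one address bit.
print((target_bits - start_bits) * years_per_doubling)  # 60 years to outgrow 64 bits

# Addressing 64-bit (8-byte) words instead of bytes adds log2(8) = 3 doublings.
print(int(math.log2(8)) * years_per_doubling)           # only 6 extra years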

A programmer

"64 bits is nice but even that will run out."

Wrong prediction. Let's do the maths: how much time would it take for a supercomputer to access all of it?

Imagine that I invent a 1000 GHz processor (current processors don't go above 5 GHz, 200 times slower) that can write 1 KB with each cycle, making an incredible bandwidth of about 1 million GB/s. Even then, it would take about 5 hours simply to set the whole memory (2^64 bytes).
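
A minimal check of those figures, assuming the commenter's hypothetical 1000 GHz, 1-KB-per-cycle processor:

cycles_per_second = 1000e9                       # 1000 GHz
bytes_per_cycle = 1e3                            # 1 KB per cycle
bandwidth = cycles_per_second * bytes_per_cycle  # ~1e15 B/s, i.e. ~1 million GB/s

seconds_to_fill = 2 ** 64 / bandwidth            # touch every one of 2^64 bytes once
print(seconds_to_fill / 3600)                    # ~5.1 hours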

How much space would we need? Current chip technology is at around 30 nm, so imagine that I invent a memory cell that is only 1 square nm. 10^18 such cells on a flat chip would make exactly 1 square meter, and I would need roughly 16 m^2 to cover the 64-bit range. Even if I managed to stack these cells in 10,000 layers, I would still need about 4 cm by 4 cm. The 10,000 layers add a third dimension, but let's imagine the stack is only 4 cm deep too, making a cube.

Therefore the longest path from one end of the cube to the other and back would be a little less than 14 cm. At the speed of light, that's about 0.5 ns.

This means the 1000 GHz processor is no better than a 2 GHz one when accessing this memory. If you want to do any serious work (other than simply writing values in sequence), it takes much more than the previous 5 hours to exhaust the whole space: it is 500 times slower, so you need 5 times 500 = 2500 hours, or more than 100 days.

And all of this assumes the processor is sufficiently superscalar to perform a complex operation on 1 KB of data at once. And let's hope you don't get a BSOD during those 100 days.
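
Here is the same geometric argument worked out numerically (again using the commenter's assumed 1 nm^2 cells, 10,000 layers, and the 5-hour bandwidth-bound figure from above):

import math

cells = 2 ** 64                        # one cell per addressable byte
flat_area_m2 = cells * (1e-9) ** 2     # 1 nm^2 per cell -> ~18 m^2 in a single layer
layers = 10_000
side_m = math.sqrt(flat_area_m2 / layers)
print(side_m)                          # ~0.043 m, roughly the 4 cm cube edge

c = 3e8                                # speed of light, m/s
round_trip_s = 2 * side_m * math.sqrt(3) / c   # corner-to-corner and back
print(round_trip_s * 1e9)              # ~0.5 ns, i.e. an effective ~2 GHz access rate

slowdown = 1000e9 * round_trip_s       # ~500x slower than the ideal 1000 GHz
print(5 * slowdown / 24)               # ~100 days for latency-bound work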

Conclusion: 64-bit addresses are a safe bet. 16 exabytes should really be enough for anybody.


Kumar Venkateswar

@A programmer

You're making the assumption that the architecture of systems will remain unchanged, for instance that we're talking about a single-processor architecture. Making assumptions like that leads to assertions like the one we're discussing.

Instead, think about the kinds of problems that would require an address space larger than 16 exabytes of storage, and the rate at which demand for solutions to those problems is increasing.

If it was Bill Gates making that statement, that's what he was thinking about.

Nikhil Punnoose

@ A Programmer
Come on, dude. If there's anything we've learned, it's that predictions about the future that underestimate human ingenuity are never something to put money on.

www.ramblingperfectionist.wordpress.com

Becca

"Are any readers of this blog able to verify with Microsoft or Mr. Gates himself whether this was an authentic posting by him?"

You tell us. You're the ones with the IP address!

Ryan

Nikhil,
Richard Feynman made a (long-term) prediction on the limits of computers (with atoms/molecules acting as gates) that I'm pretty sure will hold.

Fred Shapiro

Response to #9: I believe the IP address is inconclusive.

Ethan

I doubt Bill Gates would put a comma in "68,000" when referring to the Motorola 68000 series of microprocessors. It wasn't the Intel 8,086 either. But it's hardly conclusive proof.

A programmer

Answers to #6, #7 and #8: Technology has its physical limits, even if the rate of progress of computers over the last few decades has been incredible.

We are strongly limited by:
1) the minimum atomic size of transistors and memory cells, and quantum effects
2) the speed of light
3) electromagnetic problems (current leakage, power consumption, frequency limits, ...)
4) thermal problems (how do you cool the 4x4x4 cm memory cube efficiently?)

All the assumptions I made in my calculations were very "generous" toward future technology. The best current bandwidth is around 25 GB/s, far below my 1 million GB/s (see: http://en.wikipedia.org/wiki/List_of_device_bandwidths ).
1 square nm cells may be impossible, since that's too close to atomic size.
And a (mono-, multi-, or vector-, see below) processor performing a complex/parallel operation on 1 KB of data every single cycle is well beyond current processor technology.

To #7: multiprocessor architectures don't solve the physical-limits problem. A 1000 GHz processor writing 1 KB each cycle is the same as 64 processors at 125 GHz, each reading/writing 128 bytes per cycle. I'm safe, because we have real trouble reaching 10 GHz in high-end servers (at such frequencies, even very small perturbations are difficult to deal with), and I'm quite sure we won't approach 100 GHz this century.
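
A quick check of that equivalence (purely illustrative; the 1000 GHz and 64 x 125 GHz configurations are the commenter's hypotheticals):

single_core = 1000e9 * 1024       # 1000 GHz x 1 KB per cycle
many_cores = 64 * 125e9 * 128     # 64 cores x 125 GHz x 128 B per cycle
print(single_core == many_cores)  # True -- same aggregate ~1e15 B/s either way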

Moreover, with multiprocessors you have even more problems, because they do not scale well. How do you manage 64 processors wanting to access the same memory zone at the same time? You'll have to include quite a bit of locking machinery, and it will be a performance bottleneck.

"Instead, think about the kinds of problems that would require an address space larger than 16 exabytes of storage" : you want to solve EXPSPACE problems ? http://en.wikipedia.org/wiki/EXPSPACE Think about the maximum information speed (speed of light), and about the amount of time you would need to solve such problems.

Another argument: if you have lots of RAM, you'll need a lot of permanent storage (tape, hard disks, flash) too, because of paging. And any kind of permanent memory I heard about is so sloooooooooooooooow. But you don't want to lose the precious 2^64 bytes of data you just got after a year of computation because of a power failure, do you?

And last, but not least: the current trend in solving big problems is to build a cluster of servers, each solving one part of the problem and having its own memory. The systems exchange only the necessary data over the network. In this case, you don't need to address more than 2^64 bytes per system, even if the total is much more than 2^64 bytes.

In conclusion: I disagree with these comments, and I am really sure that 64-bit addresses are here for a long, very long time.


Alistair

"And any kind of permanent memory I heard about is so sloooooooooooooooow."

I wouldn't be so sure about that. Both MRAM and memristors seem to be promising technologies that might solve this problem...