If you type the search string "human knowledge doubles every five years" into Google today, it will return about 1,580,000 results.
The first result is from a Bellagio newsletter reporting on a conference, where the sentence is used as if it were a given. The second is President Clinton's 1998 speech to the AAS, where he asserts it as fact. The first post in this weblog quotes from it.
The third result is why I started this weblog. Here is part of the opening from The Low Beyond, by Eliezer Yudkowsky.
It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life.
It began four million years ago, when brain volumes began climbing rapidly in the hominid line.
Fifty thousand years ago with the rise of Homo sapiens sapiens.
Ten thousand years ago with the invention of civilization.
Five hundred years ago with the invention of the printing press.
Fifty years ago with the invention of the computer.
In less than thirty years, it will end.
At some point in the near future, someone will come up with a method of increasing the maximum intelligence on the planet - either coding a true Artificial Intelligence or enhancing human intelligence. An enhanced human would be better at thinking up ways of enhancing humans; would have an "increased capacity for invention". What would this increased ability be directed at? Creating the next generation of enhanced humans.
And what would those doubly enhanced minds do? Research methods on triply enhanced humans, or build AI minds operating at computer speeds. And an AI would be able to reprogram itself, directly, to run faster - or smarter. And then our crystal ball explodes, "life as we know it" is over, and everything we know goes out the window.
Yudkowsky, like Ray Kurzweil and other advocates of what is known as the Singularity, extrapolates progress in the computing sciences from past increases in computer efficiency and networking and from falling prices for computation and memory. At some point in our lifetimes, computing will be so fast, so efficient, so networked and have access to so much memory that we will be able to create artificial intelligence (AI). We will then be able to assign this AI the task of creating an even better computing machine, as well as assisting human intelligence in solving real-world problems. Because of the increased capabilities of AI (which include indefatigability and no need for coffee breaks, as well as very high intelligence and concentration), progress will be so rapid that life as we know it will change dramatically and in a very short timeframe.
I really want this to be true. But I am afraid that a) we are exaggerating our progress on many fronts, and b) even if we can build this better box, we will need to improve our knowledge exponentially before the box is built so that we know what to put into it. And I see no evidence as yet that we'll be able to fill the box.
We'll be talking a lot about this here - and I hope to involve some of the protagonists in the discussion. But if this isn't your cup of tea, you can adjust your visits to this site accordingly.