The Singularitarians need exponential growth of human knowledge to underpin their predictions of near-term artificial intelligence.
But I submit they are not the only ones who are changing their behaviour and expectations because of this general idea floating around. The idea that human knowledge doubles every five years can become an excuse for inaction.
If people believe that human knowledge doubles every five years, they may assume that improved science and technology will arrive as a deus ex machina, a last-minute cavalry charge over the hill, and conclude that they need not change their current behaviour.
They might continue smoking, thinking that surely, by the time they're 69 (the average age of diagnosis for cancer), we will have found a silver-bullet solution for all cancers.
Their organisations don't modify damaging behaviour, thinking that we will find a way to turn pollution into ice cream and champagne, or replace depleted fisheries with genetically modified de-boned cod that jump into our nets with a smile.
I think this attitude, to the extent that it exists, is quite dangerous. If my first pass through publicly available data suggests anything (and even when I finish, the question will need proper academic treatment), it suggests that human knowledge does not double every five years.
What we do see is that specific sectors, such as nanotechnology or proteomics, grow even faster than that for short periods of time:
- When the sector is new
- When the sector is sexy
- When government or private industry sees a short term gain resulting from it
- When intermediate goals are visible and attainable
But exponential growth then reverts to a mean, returning to an average growth rate that is far slower. That rate varies with the availability of inputs: researchers, funding and, above all, findings imported from outside the given sector. This multi-disciplinary rescue has saved Moore's Law more than once, and it applies across fields.
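The contrast between the popular claim and the pattern described above can be made concrete with a little arithmetic. A minimal sketch follows; the carrying capacity and the logistic form are purely illustrative assumptions, not fitted to any data, and serve only to show how a sector that starts out doubling every five years behaves once its inputs are exhausted:

```python
import math

DOUBLING_TIME = 5.0              # years, the popular claim
r = math.log(2) / DOUBLING_TIME  # implied continuous growth rate, ~13.9%/yr

def exponential(t, k0=1.0):
    """Knowledge stock if it really doubled every five years, forever."""
    return k0 * math.exp(r * t)

def logistic(t, k0=1.0, capacity=20.0):
    """Same initial growth rate, but slowing as inputs (researchers,
    funding, cross-disciplinary findings) are used up. The capacity of
    20x is an arbitrary illustrative ceiling."""
    return capacity / (1 + (capacity / k0 - 1) * math.exp(-r * t))

for years in (5, 25, 50):
    print(years, round(exponential(years), 1), round(logistic(years), 1))
# After 5 years the two curves are nearly identical (2.0 vs ~1.9);
# after 50 years the exponential claims a 1024-fold increase while
# the saturating curve has merely approached its ceiling.
```

The early indistinguishability of the two curves is the point: a few years of genuine exponential growth in a new, well-funded sector look exactly like the start of an S-curve.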
Not every problem deserves the resources of a Manhattan Project, the prime example of how throwing money and scientists at an issue can produce results; some problems can be solved with far less. But I fear that problems like lung cancer and artificial intelligence do need resources at that scale...
...and I fear even more that a general, sloppy impression that human knowledge doubles every five years without any real effort or a need to get behind the knowledge machine and push is actually harming the cause of scientific and technological progress. It relieves us of personal responsibility. Oh, the new horde of Chinese researchers will solve it, so we don't have to.
That is pernicious.