When I was growing up in the early 1990s, I heard somewhere that there would be flying cars by the year 2000.  This made me very excited.  I couldn’t wait for my turn to ride in a flying car.  I even remember telling my next-door neighbor that there would be flying cars by the year 2000 – stating it as a well-known fact.

Around July of 1999 I started to become a little suspicious.  We were supposed to have flying cars in just 6 months and, as far as I could see, cars were still very much anchored to the ground and showing no signs of lifting off.  I have a vivid memory of the tree I was climbing in my next-door neighbor’s yard when the realization hit me: we weren’t going to have flying cars in the year 2000, or anytime soon.  Further, the person who told me about the flying cars had been wrong.  From that moment forward I learned not to believe everything I heard, but rather to think critically for myself and draw my own conclusions.

This anecdote has been particularly top of mind over the past few weeks while I’ve been reading The Singularity is Near by Ray Kurzweil.

The premise of the book has to do with the advancement of technology over time.  Kurzweil observes that over the past 150 years, computer processing power has increased at an exponential rate.  Starting with electromechanical punched-card machines around the turn of the 20th century and advancing all the way to transistors and integrated circuits today, each year we have managed to develop ever faster and more cost-effective computers.

Looking forward, and assuming the exponential pattern of growth holds and extends indefinitely, Kurzweil explains that by the early 2010s computers will have more processing power than the human brain (the book was published in 2005).  Looking even further ahead, Kurzweil posits that between the years 2020 and 2040, computers will have more processing power than all of the human brains that have ever existed, combined.  Kurzweil calls this moment the “singularity” and predicts that achieving this level of technological development will allow humans to (among other things) back up our memories onto computers, cure all diseases, and live forever.
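To make the extrapolation concrete, here’s a quick back-of-the-envelope sketch in Python.  Every number in it is an illustrative assumption on my part – the baseline throughput, the doubling period, and the brain-equivalent figure are not taken from the book – and nudging any of them shifts the projected crossover by years.

    import math

    # Back-of-the-envelope extrapolation; all constants are illustrative assumptions.
    BASELINE_YEAR = 2006
    BASELINE_OPS_PER_SEC = 3e14      # assumed throughput of a top machine in 2006
    DOUBLING_PERIOD_YEARS = 1.5      # assumed doubling period for that metric
    BRAIN_EQUIVALENT_OPS = 1e16      # assumed brain-equivalent throughput

    # Doublings needed to close the gap, then convert doublings into years.
    doublings_needed = math.log2(BRAIN_EQUIVALENT_OPS / BASELINE_OPS_PER_SEC)
    crossover_year = BASELINE_YEAR + doublings_needed * DOUBLING_PERIOD_YEARS

    print(f"{doublings_needed:.1f} doublings -> crossover around {crossover_year:.0f}")
    # With these particular numbers the crossover lands around 2014; stretching the
    # doubling period to 2.5 years pushes it past 2018.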

Kurzweil then spends about 500 pages of his book explaining in intricate detail what life might be like after the singularity when humans transcend biology and artificial intelligence permeates every part of our life.  On a personal level, it’s clear that Kurzweil relishes this post-singularity world – he frolics in it and celebrates its every detail.

To give you a quick flavor of the content of the book, in one section Kurzweil explores the issue of getting tiny robots (nanobots) – injected into the bloodstream – to pass into the brain to heal brain damage.  The chief problem is that between the bloodstream and the brain lies a very restrictive membrane called the blood-brain barrier.  Getting a nanobot to pass through this barrier is very difficult.  Kurzweil then explores three possible designs for nanobots that could penetrate this membrane.  The first is a nanobot small enough to pass right through, although the small size would significantly limit its functionality.  The second is a nanobot with a robotic arm that could reach out through the blood-brain barrier to contact the brain – which sounded very complex.  The third is a nanobot that could cut a hole in the membrane and then repair it after passing through to the other side.

Yeah – that’s the level of detail contained in this book.

I was really intrigued by the book, but one thing that Kurzweil underplays is the concept of the technology adoption curve.  Kurzweil himself is clearly someone who tries all new technology the minute it is available, but not everyone is like him.  Heck, I know people who just got their first iPhone last year.  There is always a pretty significant lag time between when a technology is first invented and when it becomes commercially successful.  This lag slows down adoption, and it’s possible it will grow as technology advances.

Overall, I think Kurzweil’s theories sound totally logical, but I think he’s drastically underestimated the amount of time it will take for the singularity to even approach.  It’s already the year 2014 and, as far as I can tell, we’re not anywhere near having nanobots that we can inject into our bloodstream to repair health problems.

We don’t even have flying cars yet.

  • Sure, Kurzweil is viewed as a nut in certain circles.

    Still, I finished The Second Machine Age last week, and it made a point I continually forget to internalize, and that Kurzweil grasps well: technology has progressed at an exponential rate since the 1960s, and there’s no clear sign of it slowing down. If the amount we can process doubles every 2-3 years – and it may be even faster, depending on what metric you look at – then in, say, 25 years you’ll be 1000x as capable as you are now (2^10, i.e. ten doublings).

    A better perspective is to consider all of the technology development you’ve seen in your entire life, and then realize that all of it will be doubled in the next 2-3 years. Or: all of the progress that took ~30 years will occur again in under 3 years (see the quick sketch at the end of this comment).

    You’re quite correct in saying that the technology adoption curve is not this fast — by and large. However, I don’t think the time to adoption will increase. Take a look at this excellent graph from Asymco:

    http://www.asymco.com/wp-content/uploads/2013/11/Adoption-Rates-of-Consumer-Technologies.png

    Make sure to read the accompanying post: http://www.asymco.com/2013/11/18/seeing-whats-next-2/

    Adoption rates are speeding up, rapidly. The PC took 20 years to hit 90% adoption; the smartphone is on track to take 8 years, and so are tablets. You can argue that this isn’t a real adoption shift, but I’m not so sure. How about this: how long did Facebook take to reach significant adoption? (Let’s say >50% of the population; it’s not at 90% yet.)

    Not to mention the “speed of business.” If a technology can help a business provide the same service at half the cost, that company will very rapidly put its competitors out of business. We haven’t seen this yet – no single technology has had that degree of impact, that quickly – but I’d expect the time between a technology’s introduction and its impact on COGS to drop rapidly. Partly due to cloud services, faster deployment, less retraining of employees, faster software development, better prototyping, cheaper marginal manufacturing, etc.

    Consider this: as soon as you have a general-purpose robot, the number of things the robot can accomplish is going to go from almost nothing to a huge volume of things, very rapidly, as people write software to control it. You might look at such a robot a year after its introduction and say it can’t do much, and what it does do it does poorly, slowly, and with errors – but a few years down the line, it’s a different game. Take 6 years to penetrate one industry, 3 years on the one after that, then 1.5, then 0.75… pretty soon you’re adding an industry a month, at the same level of capability.

    I don’t think we’ll see a general-purpose robot for some time. We’ll probably see AGI (Artificial General Intelligence) before that — at least, one of the founders of DeepMind (Google’s $600m acquisition) thinks there’s a very good chance of seeing AGI around 2025. That’ll be an interesting shock to the system.

    All of that said, the prospect of uploading one’s brain into a computer and emulating it is fraught with difficulty. I think that’ll take a lot longer.

    P.S. Moore’s law has variations. We’re no longer doubling actual speed so much as we’re doubling other metrics – performance per watt being the big one. So, given the same amount of power, we can process twice as much, but in parallel rather than sequentially. It also means we can deploy technology further away from central power grids, since the same power draw isn’t required for the same amount of compute. Or, you know, make it mobile.
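
    As promised above, a tiny sketch of the doubling arithmetic (the 2.5-year doubling period is just an assumed midpoint of the 2-3 year range; nothing here is a measured figure):

        # Steady doubling every 2.5 years (assumed midpoint of the 2-3 year range).
        DOUBLING_PERIOD_YEARS = 2.5

        def growth_multiplier(years):
            """Total capability multiplier after `years` of steady doubling."""
            return 2 ** (years / DOUBLING_PERIOD_YEARS)

        print(growth_multiplier(25))    # 2**10 = 1024: the "1000x in 25 years" figure

        # In steady exponential growth, each new doubling period adds as much as
        # everything that came before it, so ~30 years of accumulated progress
        # repeats in the next ~2.5 years.
        total_after_30_years = growth_multiplier(30)
        added_in_next_period = growth_multiplier(32.5) - total_after_30_years
        print(added_in_next_period == total_after_30_years)   # True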
