Computer intelligence surpassing humans in 30 years?

The IEEE Spectrum magazine has an entire issue devoted to evaluating the likelihood of a technological singularity within the next 30 years.

First, what is the technological singularity? Some people, such as Ray Kurzweil and Vernor Vinge, project that technology will advance to the point where computers surpass human intelligence. This will set off an acceleration of technology, because computers will redesign better and better versions of themselves. Not long after that, technological intelligence will far surpass human intelligence. It is called a singularity because it becomes impossible for our human intelligence to foresee the outcome of such a technology.
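To make that feedback loop concrete, here is a toy model of my own (nothing from Kurzweil or Vinge, and the ten-year starting design time is an arbitrary assumption): each generation of machines is twice as capable as the last and, being smarter, designs its successor in half the time. The design times form a geometric series that converges, so arbitrarily capable machines all arrive before a fixed date, which is the "singularity" in this sketch.

```python
# Toy model of recursive self-improvement (an illustration, not a prediction).
# Assumptions: capability doubles each generation, and a machine that is
# twice as capable designs its successor in half the time.

def time_to_generation(n: int, first_design_years: float = 10.0) -> float:
    """Total years until generation n exists, halving the design time each step."""
    return sum(first_design_years / 2**k for k in range(n))

for n in (1, 5, 10, 20):
    print(f"generation {n:>2}: capability ~2^{n}, arrives by year {time_to_generation(n):.2f}")

# The arrival times never exceed 2 * first_design_years (20 years here),
# so every level of capability is reached within that finite horizon.
```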

IEEE Spectrum is an interesting place to read about the singularity because the IEEE is run by engineers, the people closest to the technology that could make it happen.

The articles in Spectrum range from skepticism about the singularity to expectation of it. The magazine polls a number of technological thinkers who hold a broad range of opinions. Most think that Moore’s law (loosely, the doubling of computer power every two years) will stop holding in the next 10 to 30 years. Most think that the technological singularity will, or at least could, occur, but not within the next 30 years; estimates range from 30 years to 70 years to the distant future to never. Ironically, Gordon Moore, the author of Moore’s law, believes that the singularity will never happen.
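To put numbers on those horizons, here is a quick back-of-the-envelope calculation (mine, not Spectrum’s) of what doubling every two years would imply, if it kept holding:

```python
# Back-of-the-envelope Moore's-law arithmetic, assuming a clean doubling
# of computer power every two years (the loose form quoted above).

def moores_law_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Growth factor after `years` of doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

for horizon in (10, 30, 70):
    print(f"{horizon} years -> {moores_law_factor(horizon):,.0f}x today's computer power")
# 10 years -> 32x, 30 years -> 32,768x, 70 years -> 34,359,738,368x
```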

Personally, I found Rodney Brooks’ article the most interesting. One of his observations is that computer intelligence probably won’t surpass human intelligence because we will be upgrading our own intelligence at the same time. For example, I want a brain implant that includes a face and name database of every person I’ve ever met.
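Just for fun, the data structure behind that wish is nothing exotic. Here is an entirely hypothetical sketch (all names made up; a real implant would presumably key on some face representation rather than a string):

```python
# Entirely hypothetical sketch of the wished-for face-and-name database:
# an associative map from a face key (a stand-in string here) to a name.

class FaceNameDB:
    def __init__(self) -> None:
        self._people: dict[str, str] = {}

    def remember(self, face_key: str, name: str) -> None:
        self._people[face_key] = name

    def recall(self, face_key: str) -> str:
        return self._people.get(face_key, "Sorry, have we met?")

db = FaceNameDB()
db.remember("face:2008-conference-badge-42", "Rodney")
print(db.recall("face:2008-conference-badge-42"))  # Rodney
print(db.recall("face:unknown"))                   # Sorry, have we met?
```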

One thought on “Computer intelligence surpassing humans in 30 years?”

  1. The main problem with defining intelligence on a linear scale, which is how people frame the possibility of machines surpassing humans, is that it misses almost all of the interesting nuances. The most common measure of human intelligence is IQ, and large portions of an IQ test measure memory and timed performance, both areas in which computers already massively surpass humans.

    No, I think many people who conceptualize a possible singularity assume a priori, and incorrectly, that intelligence necessarily entails a conception of self. And, well, I doubt it does. That means computer conceptions of self would have to be intentionally developed by humans, but why would they bother?

    I’m not saying that it’s impossible; quite the contrary. It’s just that I don’t see much immediate value in computers with fledgling understandings of self, and hence I doubt that such self-awareness will progress to anything like a singularity in the manner anyone prognosticates.
