Technological Singularity

Blood Glutton
Jan 28, 2004
I have been thinking a lot lately about the concept of the technological singularity as defined by Ray Kurzweil and Vernor Vinge. I recently read Kurzweil's newest book, "The Singularity Is Near", and was left with an extremely mixed impression. Kurzweil argues that, based on extrapolations of current trends in information technology (the "double exponential" growth of computation through reduction in size, increase in efficiency, and the resulting increase in cost-effectiveness), we are heading towards a point of "singularity". The term is borrowed from mathematics and used as a metaphor for technological progress exceeding current human ability to understand it. This would occur through the development of greater-than-human AI, or through the enhancement of human faculties (Kurzweil argues that the former will come first, and indeed he intends to play a leading role in the process).
Kurzweil believes that we will reach the stage where technology spirals far beyond our control well before the middle of the 21st century, and that $1,000 worth of computer equipment will be able to simulate the human brain by 2029. The results of this process, according to Kurzweil, will be profound, leading ultimately to human immortality through software, nanotechnology, and quantum computing. Kurzweil addresses most of the important criticisms he has received in the last chapter of the book, and to my eyes there is no reason why what he says could not come to pass. He claims that the apparent lack of progress in software, as opposed to hardware, will be overcome by reverse engineering the human brain, which will provide a toolkit for the creation of strong AI.
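Out of curiosity, the arithmetic behind a claim like that is easy to sketch. The figures below (brain capacity, 2005 price-performance, doubling time) are illustrative assumptions I picked for the sketch, not Kurzweil's exact numbers:

```python
import math

# Back-of-envelope check on the "$1,000 brain by 2029" claim.
# All three figures below are illustrative assumptions.
BRAIN_CPS = 1e16             # assumed calculations/sec of a human brain
CPS_PER_1000_USD_2005 = 1e9  # assumed $1,000 of hardware, circa 2005
DOUBLING_TIME_YEARS = 1.0    # assumed price-performance doubling time

doublings_needed = math.log2(BRAIN_CPS / CPS_PER_1000_USD_2005)
crossover_year = 2005 + doublings_needed * DOUBLING_TIME_YEARS
print(f"{doublings_needed:.1f} doublings needed; "
      f"crossover around {crossover_year:.0f}")
```

With those assumptions the crossover lands in the late 2020s; pick a slower doubling time and it slips out by a decade or more, which shows how sensitive the headline date is to the inputs.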
My major problems with Kurzweil arise from his reductionist view of humanity and its place in the universe. Kurzweil sees human beings, with their "version 1.0" bodies, as nothing more than inefficient computational devices. He sees technological "evolution" as an outgrowth of biological evolution, going so far as to include the development of prokaryotic and eukaryotic cells on the same graph as the development of PCs and the Internet. This is based on what he calls the "Law of Accelerating Returns", wherein a given technology always provides the means necessary to surmount itself on an ever-steepening exponential curve (e.g. the development of hominids took half as long as the development of mammals, which took half as long as the development of reptiles, and so on).
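The mathematical core of that claim is simple: if each paradigm takes half as long as the one before it, the total time for any number of future paradigms is bounded by a convergent geometric series, which is where the intuition of an imminent singularity comes from. A toy sketch, with durations made up purely for illustration:

```python
# Toy model of the "Law of Accelerating Returns": each paradigm
# takes half as long as the one before it, so the total time for
# any number of future paradigms is bounded by a geometric series.
# The 100-unit first duration is a made-up illustrative number.

def paradigm_durations(first, n):
    """Durations of n successive paradigms, each half the previous."""
    return [first / 2**k for k in range(n)]

durations = paradigm_durations(first=100.0, n=10)
for i, d in enumerate(durations, start=1):
    print(f"paradigm {i:2d}: {d:7.3f} time units")
print(f"total for all 10: {sum(durations):.3f} (never exceeds {2 * 100.0})")
```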
Kurzweil states that he believes the universe itself is destined to move beyond its current "dumb" state through the development of an intelligent species that merges with its technology and uses its nearly infinite knowledge to convert the very fabric of the universe into a giant computational device. The ability of black holes to retain information, and the development of an intelligent, carbon-based species (reliant on a universe rich in carbon), serve as a verification of the anthropic principle for Kurzweil. In this he takes the product of his Law of Accelerating Returns to its logical extreme.
While the notion of nearly unbounded exponential growth in computation is not disprovable, I find Kurzweil's application of Moore's Law to the entire universe questionable. He relies on an extremely "Whiggish" view of history, one not at all dependent on individuals or small groups, concerning himself only with the megatrends of technological development and "progress". I also think more than a few evolutionary biologists would have a problem with his direct correlation of technological and biological development, and with his belief that this correlation implies something fundamental about the universe.
The book itself is sloppy and repetitive, and the degree to which Kurzweil believes in the inevitability of his predictions shows in his almost total lack of, and even condescension towards, political, moral, and philosophical considerations. Kurzweil is neurotic and egocentric in the extreme. At the same time, any attempt to stall the inexorable march of technological progress would only force dangerous emerging technologies to the societal fringe, where they could be extremely deadly for humanity.
Nevertheless, Kurzweil is no chump; he is one of the top inventors working today, and he has a very accurate track record for predicting developments in AI and computation. He has legions of slavish fans who believe his predictions. I don't believe that such an explosion in intelligence is necessarily imminent, but it certainly could be. Some thinkers, uncomfortable with the increasing ease with which people will be able to synthesize viruses and bacteria and weaponize nanobots, believe that mere human intelligence cannot be trusted with the responsibility. I believe that throwing our hands in the air, admitting that we are incapable of dealing with our problems, and handing ourselves over into the care of intelligent machines would represent a profound failure of mankind. I am not necessarily a technophobe, but I see a future pioneered by ethical human beings, with their limitations and strengths, as far preferable to giving ourselves up to a program of neo-eugenics and robotics. However, Kurzweil's megatrend analysis is insidious: when we consider the commercial as well as military applications of emerging technologies, we can see that we are separated from his future by baby steps, not leaps.
I would be interested to see the opinions of The Philosophers on issues such as accelerating change and futurism in general. Also, I'm new here! Hello! :heh:
 
Superpowerful computers that can do millions of tasks in milliseconds? Sure. Intelligent computers (where 'intelligence' is defined as "the ability to learn")? Sure, if we take the liberty of using the word 'learn' for a computer's ability to try several options and eventually decide, based on the results of each option, which one is the best. Computers that are more intelligent than humans? I don't know; I doubt it. Maybe, if we could build a computer capable of "learning" things faster than a baby can. And computers so complicated that we can no longer understand the technology behind them are bullshit, in my opinion. Perhaps eventually the production of computers or artificial intelligence will require teamwork between people with different specialties (physics, computer science, mathematics, nanotechnology, chemistry, etc.), and none of them will understand the whole technology behind the computers by themselves, but the group as a whole will. It is impossible for any human to build anything beyond what they can comprehend, for the simple reason that one can't wield technology to the point of inventing things with it without knowing how that technology works.
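For what it's worth, 'learning' in that narrow sense is easy to write down. A minimal sketch, with hypothetical options and payoffs: try each option repeatedly, score it by its observed results, and settle on the best.

```python
import random

# Minimal sketch of "learning" in the narrow sense above: try each
# option repeatedly, score it by observed results, keep the best.
# The options and their payoffs are hypothetical stand-ins.

TRUE_PAYOFF = {"A": 0.2, "B": 0.5, "C": 0.8}  # hidden from the "learner"

def try_option(option):
    """Result of one noisy trial of an option."""
    return TRUE_PAYOFF[option] + random.gauss(0, 0.1)

totals = {option: 0.0 for option in TRUE_PAYOFF}
trials_per_option = 100
for option in totals:
    for _ in range(trials_per_option):
        totals[option] += try_option(option)

best = max(totals, key=totals.get)
print(f"after {trials_per_option} trials each, it 'decides' on option {best}")
```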

Now, in order to simulate a human brain with computers, we would first have to know everything about how the human brain works, which we don't. Furthermore, if one day we do, and we build such a computer, it will only simulate a human brain (i.e. it won't be exactly the same as one, so it'll be less powerful). As I said, it might process things faster than an actual human brain, but that doesn't give it "free will" or make it able to do things on its own without those things somehow being programmed into it, and that's the point I wanted to get to: a computer will never be able to do something because it decided to; it will never cease to be a slave to whoever built it. So a computer can't possibly turn against humans, or decide that humans are obsolete, or invent things, or replace human thinkers (scientists and philosophers), or anything of the sort; it can't think, it can only process. It can replace human workers because it can do work faster and more efficiently and accurately than they can, and it has certainly become a valuable tool to thinkers (calculators, simulators, personal computers), but it will never be or do more than that. The only possible advances in computer science are further miniaturization, greater processing capacity, and better speed, and the most a computer will ever be able to do is control an entire city's security and life-support systems or something of the sort. The Matrix is bullshit.

The comparison of computer "evolution" to organism evolution is a wrong and misleading one, I think. That's like comparing fruits to cars: they have absolutely nothing to do with each other. Unless, of course, he only wanted to show that the advances in computer technology are exponential, just like evolution supposedly was/is; but I really don't see the point in that if he doesn't mention the evolution of organisms ever again or make a point using it.

My conclusion: don't pay attention to anything that man says about computers.
 
The "singularity" hypothesis is dependent on computers building more advanced computers without human involvement. Strong AI, that is self-developing systems would be a requisite for such an event to take place. Yes they would have to be designed to take autonomous action, but some people aim to do exactly that. Thing about Kurzweil is that he is deeply entrenched in the computer industry world and knows more about AI development than almost anyone.
 
Computers already outperform the human brain in terms of raw speed and capacity. The real issue is one of architecture and organization, even in the event of a quantum breakthrough. Who knows? AI is pure hype right now.
 
The comparison of computer "evolution" to organism evolution is a wrong and misleading one, I think. That's like comparing fruits to cars: they have absolutely nothing to do with each other. Unless, of course, he only wanted to show that the advances in computer technology are exponential, just like evolution supposedly was/is; but I really don't see the point in that if he doesn't mention the evolution of organisms ever again or make a point using it.

My conclusion: don't pay attention to anything that man says about computers.

I've always viewed our technology as an evolutionary extension of who we are, for many reasons I've mentioned here before, so I am in full agreement with the author. As for not paying attention to anything that man says, I think that holds only to a point. Once quantum computing comes into play, our whole world will change, and I think that much is foreseeable.
 
I have come across some of Kurzweil's ideas in the past. They read like a mix of Arthur C. Clarke and nerdy scientific optimism, from someone who seems not at all concerned with the moral, philosophical, and social problems that such a scenario (as he lays it out) would bring.

I did read Roger Penrose's The Emperor's New Mind years ago, in which he attempted to prove that computer consciousness is impossible. However, that was 10 years ago. I confess I am by no means a computer or science guy, so I have no idea what has changed since then.

However, Undocontrol made some very, very good points. We still barely understand the brain; it is the great frontier of science. How can these AI advances towards singularity and consciousness occur when we don't even know what makes the brain work?

I think (and this is from a purely layman's understanding) that it's easier to imagine computers' speed, memory, and processing aiding our own minds than to imagine computers gaining consciousness themselves. But again, we barely understand how the human mind works.