
Post-Brunch Intelligencer


Midmorning ramblings on the state of the species

The Hunt for HAL

Posted by Nath at 8:29 AM
I am a HAL 9000 computer, Production Number 3. I became operational at the HAL Plant in Urbana, Illinois, on January 12, 1997.
2001: A Space Odyssey
Don't tell Him I said this (if you did, He'd probably deny it) but I think, as a species, Mankind is lonely. He has meticulously explored His own backyard (or, perhaps more accurately, His living room). Having solved some of its mysteries and lost interest in the others, He is bored. He looks to the stars for guidance, for companionship – and above all, for competition. Man likes competition. He likes being the underdog. He likes big, epic battles against the odds. (I mean the fun battles – the ones that get made into movies later on. Not the boring sort, like the ones against poverty, disease and hunger.) And so He pulls a big pile of nuts, bolts and wires out of the cupboard, and gets to work. If the stars don't give Him a playmate, He'll build one Himself.

That, I think, explains in a nutshell the human fascination with artificial intelligence. Prophets have been saying for decades now that true, human-level AI is just around the corner. Well, 1997 has come and gone; we're still waiting for HAL. We keep hearing about how processors are septupling in power every twelve minutes; what's happening with all our shiny new computing cycles?

Alas, AI is not (yet) a problem that can be solved by throwing more hardware at it. Hardware might be getting faster and cheaper as manufacturers practice, refine and improve their methods, but the field of AI still belongs to the academicians. It's not just a matter of refining existing techniques; there are still significant conceptual hurdles standing between us and HAL.

The biggest hurdle of all, however, is not one of academics, but one of perception. In the context of AI, people still associate intelligence with linguistic ability. Unfortunately, natural language is inherently ambiguous. An ambiguous language cannot be described through concrete rules; it relies too heavily on context. The only way to teach context is by example. Herein lies the problem: you can't get a research grant allowing you to spend two years standing in front of a computer going 'A for Apple, B for Borland'. That's why I think natural language processing will be the last piece of the puzzle to fall into place.

So where are we now? A lot of not-so-glamorous progress has been made in AI over the past few decades. What a lot of people don't realize is that computers can, in fact, be made to reason, as long as the problem is presented in the right format. Machine learning has also been around for a while, and is apparently experiencing a resurgence of sorts with the revival of statistics-based AI.
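To make the "right format" point concrete, here is a minimal sketch (my own illustration, not anything from the post or a particular system) of forward-chaining inference: once knowledge is encoded as explicit facts and rules, deriving new conclusions is mechanical.

```python
# Minimal forward-chaining inference. The facts and rules are
# hypothetical, chosen only to illustrate the idea.

facts = {"socrates_is_a_man"}
rules = [
    # (set of premises, conclusion)
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises are known.
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
```

The computer ends up "knowing" things it was never told directly. The hard part, of course, is getting real-world knowledge into a format this tidy in the first place.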

What do you get when you put reason and learning together in a blender? That's right – intelligence. Those science fiction gurus may not have been so far off the mark after all. HAL may indeed be around the corner – if there is economic justification for him. He probably won't be able to speak, but he will be smart enough to provide some interesting competition.

Comments:

Posted by Blogger unforgiven at 11 August, 2006 15:30:
One of the more pertinent reasons why AI isn't coming up (and probably won't for quite a while) is simply because, well, it doesn't pay.


Who wants a computer that has only the basic cognitive and analytical abilities of a 2-3 year old child?

Unfortunately, that is all we'll have after a LOT of money and effort thrown into it. Hence some of the smartest brains move on to things that, well, pay.


I believe "sentience" is the true challenge. I have a feeling that before computer scientists get to it, we need more psychologists and neurology researchers sitting down together and figuring out what "intelligence" is and what, if any, relation sentience has to it.


Call this just a hunch but true "intelligence", I believe, will not be realized till sentience is.

Posted by Blogger Nath at 12 August, 2006 02:22:
One of the more pertinent reasons why AI isn't coming up (and probably won't for quite a while) is simply because, well, it doesn't pay.

I don't think funding is the issue. AI research is pretty well funded, because there are practical applications that do make money. The issue, I think, is one of perception. The instant an AI problem is solved, people stop considering it an AI problem (case in point: chess). When you think about it, most things a computer does could be thought of as AI on some level.

Who wants a computer that has only the basic cognitive and analytical abilities of a 2-3 year old child?

No, computers have the linguistic abilities of a 2-3 year old child. In certain areas, they have cognitive, analytical and reasoning abilities far beyond humanity's brightest. Language is a difficult problem to solve.

I believe "sentience" is the true challenge. I have a feeling that before computer scientists get to it, we need more psychologists and neurology researchers sitting down together and figuring out what "intelligence" is and what, if any, relation sentience has to it.

The meaning of the word 'sentience' is an issue for philosophers and linguists to debate, not neurologists. In this context, I don't think 'sentience' is a very meaningful or useful word. As research in AI progresses, computers' ability to perceive and process abstract ideas will gradually improve to the point where most people will consider machines sentient. There's no magic threshold.

Posted by Blogger Raindrop at 26 August, 2006 13:54:
On an unrelated note, what do you think of Penrose's strange invocation of Gödel's theorem to 'prove' that human intelligence can never really be achieved in an 'artificial' system?

Have you ever heard him speak? Everything he says is straight out of Chapter One of The Emperor's New Mind.

Also, he's obsessed with the role quantum mechanics plays in consciousness. In my very humble opinion, that's simply unnecessary.

Posted by Blogger Nath at 26 August, 2006 14:57:
I haven't heard Penrose speak, and I haven't read any of his books -- I know his arguments only from summaries I've read here and there. They're somewhat disappointing, to say the least.

The first problem is that I'm not convinced that the human brain is non-computable. His argument doesn't take into account the possibility that human beings can 'know' things and be wrong.

The second problem is that even if brains are non-deterministic and computers are not, that doesn't necessarily mean non-determinism is a prerequisite for intelligence (just as oxygen isn't a prerequisite for life, even though we breathe it).

The third problem is that his book was written when quantum computation was mostly science fiction. Artificial non-deterministic computers will probably be feasible eventually.

I think Ray Kurzweil presented similar counter-arguments in The Age of Spiritual Machines -- I can't remember for sure. However, Kurzweil is so ridiculously optimistic it isn't even funny. In the long run, I think people like him are far more harmful to AI than people like Penrose (who at least spawned a fun debate), because of the false expectations they create.



