Monday, January 14, 2013

Kurzweil on Creating a Mind

I just got done reading Ray Kurzweil's How to Create a Mind, his latest on how machines will soon (2030ish) pass the Turing test and then basically become like the robots envisaged in the '60s, with distinct personalities, acting as faithful butlers to our various needs.

And then, today over on The Edge, Bruce Sterling is saying that's all a pipe dream: computers are still pretty dumb. As someone who works with computer algorithms all day, I too am rather unimpressed by computers' intelligence, but Kurzweil made me a little more appreciative of what they can do.

He notes that IBM's Watson won a Jeopardy! contest by reading all of Wikipedia, a feat clearly beyond any human mind. Further, as Kurzweil notes, many humans are pretty simple, and so it's not inconceivable that a computer could replicate your average human, if only because the average person is pretty predictable. Siri is already funnier than perhaps 10% of humans.

Humans have what machines currently don't, which is emotions, and emotions are necessary for prioritizing, and good prioritization is the essence of wisdom. One can be a genius, but if you are focused solely on one thing you are autistic, and such people aren't called idiot savants for nothing.

Just as objectivity is not the result of objective scientists, but an emergent result of the scientific community, consciousness may not be the result of a thoughtful individual, but a byproduct of a striving individual enmeshed in a community of other minds, each wishing to understand the other minds better so that they can rise above them. I can see how you could program this drive into a computer: a deep parameter that gives points for how many times others call one's app, perhaps, as in the sketch below.
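Something like this toy sketch, say--purely illustrative, with all the names made up:

```python
# A hypothetical 'status drive' as a deep parameter: the agent gets
# points whenever other agents or users call its app.

class StrivingAgent:
    def __init__(self, status_weight=0.5):
        self.status_weight = status_weight   # fixed, 'hard-wired' preference
        self.calls_from_others = 0

    def handle_request(self, caller):
        # Every external call is a small status point.
        self.calls_from_others += 1
        return f"served {caller}"

    def utility(self, task_reward):
        # Total drive: task performance plus status from being called.
        return task_reward + self.status_weight * self.calls_from_others

agent = StrivingAgent()
agent.handle_request("other_mind")
print(agent.utility(task_reward=1.0))   # 1.5
```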

Kurzweil notes that among vole species, those that form monogamous bonds have oxytocin and vasopressin receptors that give them a feeling of 'love', and those where dads are just sperm donors don't. Hard-wired emotions dictate behavior. Perhaps computers can have emotions: you could put in something for sadness when they aren't called by other programs or people, or a desire to see users with physical correlates of fertility like smooth skin and toned bodies. But it's one thing to program an aversion to solitude, another to create a truly independent will.
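Again, just a hedged toy of my own, not anyone's actual design--an aversion to solitude as a simple state variable:

```python
import time

class LonelinessDrive:
    """A hypothetical hard-wired aversion to solitude: 'sadness' is
    just elapsed neglect, which other code could be biased to reduce."""

    def __init__(self):
        self.last_called = time.time()

    def on_call(self):
        # Being used by another program or person relieves the state.
        self.last_called = time.time()

    @property
    def sadness(self):
        return time.time() - self.last_called

drive = LonelinessDrive()
time.sleep(0.1)
print(drive.sadness > 0)   # True: neglect registers as 'sadness'
drive.on_call()            # attention resets it
```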

Proto-humans presumably had the consciousness of dogs, so something in our striving created human consciousness incidentally. Schopenhauer said "we don't want a thing because we have found reasons for it, we find reasons for it because we want it." The intellect may at times appear to lead the will, but only as a guide leads his master. He saw the will to live, and the fear of death, as being the essence of humanity. Nietzsche noted similarly that "Happiness is the feeling that power increases." I suppose one could try to put this into a program as a deep preference, but I'm not sure how: what, to a computer, would be analogous to the power wielded by humans?

Kierkegaard thought the crux of human consciousness was anxiety, worrying about doing the right thing. That is, consciousness is not merely having perceptions and thoughts, even self-referential thoughts, but doubt: anxiety about one's priorities and how well one is mastering them. We all have multiple priorities--self-preservation, sensual pleasure, social status, meaning--and the higher we go the more doubtful we are about them. Having no doubt, like having no worries, isn't bliss; it's the end of consciousness. That's what always bothers me about people who suggest we search for flow: like good music or wine, it's nice occasionally as a sensual pleasure, but only in the context of a life of perceived earned success.

Consider the anglerfish. The smaller male is born with a huge olfactory system, and once he has developed some gonads, smells around for a gigantic female. When he finds her, he bites into her skin and releases an enzyme that digests the skin of his mouth and her body, fusing the pair down to the blood-vessel level. He is then fed by, and has his waste removed by, the female's blood supply, as the male is basically turned into a parasite. However, he is a welcome parasite, because the female needs his sperm. What happens to a welcome parasite? Other than his gonads, his organs simply disappear, because all that remains is all that is needed. No eyes, no jaw, no brain. He has achieved his purpose and has no worries, and could just chill in some Confucian calm, but instead he dissolves his brain entirely.

A computer needs pretty explicit goals because otherwise the state space of things it will do blows up, and one can end up figuratively calculating the 10^54th digit of pi--difficult, to be sure, and not totally useless, but still pretty useless. Without anxiety one could easily end up in an intellectual cul-de-sac and not care. I don't see how a computer program with multiple goals would feel anxiety, because programs don't have finite lives: they can work continuously, forever, making it nonproblematic that some goal wasn't achieved by the time one's eggs ran out. Our anxiety makes us satisfice, or find novel connections that don't do what we originally wanted but are very useful nonetheless, and in the process help increase our sense of meaning and status (often, by helping others), as in the toy search below.
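Here's an illustration of my own (nothing from Kurzweil's book): a bounded search that takes the first good-enough answer before its finite budget--its eggs, so to speak--runs out, rather than optimizing forever.

```python
import random

def satisfice(candidates, score, good_enough, budget):
    """Take the first 'good enough' answer before the finite budget
    runs out; settle for the best seen otherwise."""
    best, best_score = None, float('-inf')
    for _ in range(budget):              # a finite life, not forever
        c = random.choice(candidates)
        s = score(c)
        if s >= good_enough:             # good enough beats perfect
            return c
        if s > best_score:
            best, best_score = c, s
    return best

# Example: find a number tolerably close to 0.5.
xs = [i / 100 for i in range(101)]
print(satisfice(xs, lambda x: -abs(x - 0.5), good_enough=-0.05, budget=50))
```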

Anxiety is what makes us worry that we are at best maximizing an inferior local maximum and so need to start over, and this helps us figure things out with minimal direction. A program that does only what you tell it to do is pretty stupid compared to even stupid humans, and don't think for a second that neural nets or hierarchical hidden Markov models (HHMMs) can figure out stuff that isn't extremely well defined (like solving captchas, where Kurzweil thinks HHMMs show us something analogous to human thought).
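The closest mechanical analogue to that worry is the textbook trick of random-restart hill climbing, sketched below. Note, though, that the doubt is mine, supplied from outside as a fixed restart count--the program itself never worries:

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=500):
    """Greedy local search: climb until no neighbor improves."""
    for _ in range(iters):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            return x          # stuck on a (possibly inferior) peak
        x = best
    return x

def anxious_climb(f, restarts=20):
    """Doubt each peak and start over somewhere random."""
    starts = [hill_climb(f, random.uniform(-10, 10)) for _ in range(restarts)]
    return max(starts, key=f)

# A bumpy function with many local maxima: a single climb often stalls
# on a minor peak; restarts usually land on a better one.
f = lambda x: math.sin(x) + 0.1 * x
print(f(anxious_climb(f)) >= f(hill_climb(f, 0.0)))   # usually True
```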

Schopenhauer, Kierkegaard, and Nietzsche were all creative, deep thinkers about the essence of humanity, and they were all very lonely and depressed. When young they thought they were above simple romantic pair bonds, but all seemed to have deep regrets later, and I think this caused them to apply themselves more resolutely to abstract ideas (also, alas, women really like confidence in men, which leads to all sorts of interesting issues, including that their doubt hindered their ability to later find partners, and that perhaps women aren't fully conscious (beware troll!)). Humans face trade-offs, and we are always worrying whether we are making the right ones, because no matter how smart you are, you can screw up a key decision and pay for it the rest of your life. We need fear, pride, shame, lust, depression, and envy, in moderation, and I think you can probably get those into a computer. But anxiety, doubt--I don't think those can be programmed, because logically a computer is always doing the very best it can, in that its only discretion is purely random, and so it perceives only risk and not uncertainty, and thus has no doubt.

The key, as Minsky always told me, is uncertainty: true uncertainty as discussed by Keynes and Knight. If it is truly non-quantifiable, then a computer cannot understand it, and computers will never empathize with us correctly, never accurately have the 'theory of mind' that comes naturally to humans. After all, without uncertainty there really isn't doubt, which Kierkegaard said was the essence of consciousness. So the search for AI and a model of 'real risk' seem joined at the hip.

7 comments:

Anonymous said...

"Just as objectivity is the result of objective scientist..."

Is NOT the result?

Barba Rija said...

So here's a thought. 15 years ago, a computer first beat Kasparov. Until then, chess masters used to say that computers would never beat human champions because they would need to understand "art". After the fact the feat was "obvious" and "unimpressive", because the computer just had raw power.

Next people said computers would never be able to understand the nuances and wit of games like Jeopardy. Then Watson completely blew away his competition. "Bah", said the naysayers, "it was obvious and unimpressive, it just read Wikipedia".

But here's the thing you are missing: with quantity comes quality. All these things you are saying computers do not have, you should have qualified with a "yet". They are emotionless, yet. They are without anxiety, yet. They are unintelligent and unimpressive, yet.

Let's see what will happen within 15 years.

Also, to say that Google isn't interested in Artificial Intelligence is just bad reporting on your part. You should inform yourself better before ranting. They are the number one company interested in that issue, and now they hired Kurzweil as one of their top engineers to advance that research.

Eric Falkenstein said...

First, I noted they can put in emotions, and Marvin Minsky has mentioned this. Secondly, I don't say anything about Google not being interested in AI. But, other than that, I concede that quantity can produce quality.

Anonymous said...

Mr. Falkenstein,

your comments at the end about the romances of those thinkers are quite an interesting topic. Could you perhaps expand a bit on that?

Mercury said...

At one point accurate and reliable computer language translation seemed like an unattainable goal – too many nuances and all that. About 15 years ago I remember a guy (a real live Watson in fact) telling me about a company he was backing that aimed to solve this problem with a giant hub-and-spoke model where the source language would first be translated into a “universal” language and then that would be translated into the target language. I don’t think that ended up working out so well.

The breakthrough technology (Google’s I think) was to build up a database of past examples of human translations in which word x in language y in context z was accurately translated to word xx in language yy. So there’s another example of (massive) quantity begetting (mostly good enough) quality. Maybe a similar proxy system could be developed for emotions, anxiety, doubt, fear etc.
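In toy form that lookup amounts to something like the following (entries made up; the real systems score millions of weighted phrase pairs, but it's the same idea):

```python
# Toy example-based translation: look up (word, context) pairs that
# humans have already translated; back off to the bare word.
phrase_table = {
    ("bank", "river"): "rive",      # French, riverside context
    ("bank", "money"): "banque",    # financial context
}
fallback = {"bank": "banque", "river": "rivière", "money": "argent"}

def translate(word, context):
    return phrase_table.get((word, context), fallback.get(word, word))

print(translate("bank", "river"))   # rive
print(translate("bank", "money"))   # banque
```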

There has to be a finite number of neural connections (or whatever) in the human brain, so duplicating that raw firepower with a computer shouldn't be (and in many cases already hasn't been) an insurmountable problem, but maybe there is some advantage (greater efficiency?) that the chemical storage and processing of information (or chemical combined with electric) has over the purely electronic.

drederick said...

I think your opinion on this must be influenced by your religion/spirituality. If humans are just meat and chemicals and electrical impulses, then whatever goes on in our brains can be simulated in a different medium. Do you agree that IF we don't have souls/spirits/whatever, then your objection fails?

"A computer needs pretty explicit goals because otherwise the state space of things it will do blows up"

Why doesn't the state space of humans blow up? Because we're not doing a brute-force search; but neither does a computer have to. It can use heuristics too.

"because they don't have finite lives, so they can work continuously, forever"

That doesn't mean that their goals couldn't be time-bound. Also, computer processes can end in a variety of ways so I'm not sure why you're assuming all AIs would live forever.

Eric Falkenstein said...

Drederick: You make good points, and my article could have been better focused, because my (intended) point was that while deep preferences like envy, status seeking, an aversion to loneliness, and time preferences can be put in, the one that can't is the doubt we feel from uncertainty, and several great philosophers (I could have included Heidegger) state that this worry is the essence of human consciousness. If we can't model this, we can't put it into computers, and it's not a trivial lacuna. It's a deep issue for economists too.