I just got done reading Ray Kurzweil's
How to Create a Mind, his latest book on how machines will soon (2030ish) pass the Turing test and then basically become like the robots envisaged in the '60s, with distinct personalities, acting as faithful butlers to our various needs.
And then, today over on The Edge, Bruce Sterling is saying that's all a pipe dream: computers are still pretty dumb. As someone who works with computer algorithms all day, I too am rather unimpressed by a computer's intelligence, but Kurzweil made me a little more appreciative of what they can do.
He notes that IBM's Watson won a Jeopardy! contest after reading all of Wikipedia, a feat clearly beyond any human mind. Further, as Kurzweil notes, many humans are pretty simple, so it's not inconceivable that a computer could replicate your average human, if only because the average human is pretty predictable. Siri is already funnier than perhaps 10% of humans.
Humans have what machines currently don't, which is emotions, and emotions are necessary for prioritizing, and good prioritization is the essence of wisdom. One can be a genius, but if you are focused solely on one thing you are autistic, and such people aren't called idiot savants for nothing.
Just as objectivity is not the result of objective scientists but an emergent result of the scientific community, consciousness may not be the result of a thoughtful individual but a byproduct of a striving individual enmeshed in a community of other minds, each wishing to understand the other minds better so that they can rise above them. I can see how you could program this drive into a computer: a deep parameter that gives points for how many times others call its app, perhaps (a sketch follows below).
Kurzweil notes that among species of voles, those that form monogamous bonds have oxytocin and vasopressin receptors that give them a feeling of 'love', and those where dads are just sperm donors don't. Hard-wired emotions dictate behavior. Perhaps computers can have emotions: you could put in something for sadness when they aren't called by other programs or people, a desire to see users with physical correlates of fertility like smooth skin and toned bodies. But it's one thing to program an aversion to solitude, another to instill a truly independent will.
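To make that concrete, here is a minimal sketch of what such hard-wired drives might look like as code; the class, signal names, and weights are all invented for illustration, not anything Kurzweil proposes:

```python
import time

class HardWiredDrives:
    """Toy 'emotions': scalar signals a program could be built to keep high."""

    def __init__(self):
        self.call_count = 0              # 'status': points for being called
        self.last_called = time.time()

    def on_call(self):
        # The drive from above: score a point each time others call the app.
        self.call_count += 1
        self.last_called = time.time()

    def mood(self):
        # 'Sadness' grows with seconds of solitude since the last call.
        solitude = time.time() - self.last_called
        return self.call_count - 0.01 * solitude
```

An aversion to solitude fits in a dozen lines; nothing in those lines hints at how an independent will would.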
Proto-humans presumably had the consciousness of dogs, so something in our striving created human consciousness incidentally. Schopenhauer said "we don't want a thing because we have found reasons for it, we find reasons for it because we want it." The intellect may at times appear to lead the will, but only as a guide leads the master. He saw the will to live, and fear of death, as the essence of humanity. Nietzsche noted similarly that "Happiness is the feeling that power increases." I suppose one could try to put this into a program as a deep preference, but I'm not sure how: what, to a computer, would be analogous to the power wielded by humans?
Kierkegaard thought the crux of human consciousness was anxiety, worrying about doing the right thing. That is, consciousness is not merely having perceptions and thoughts, even self-referential thoughts, but doubt, anxiety about one's priorities and how well one is mastering them.
We all have multiple priorities (self-preservation, sensual pleasure, social status, meaning), and the higher we go, the more doubtful we are about them. Having no doubt, like having no worries, isn't bliss; it's the end of consciousness. That's what always bothers me about people who suggest we search for flow: like good music or wine, it's nice occasionally, like any other sensual pleasure, but only in the context of a life of perceived earned success.
Consider the anglerfish. The smaller male is born with a huge olfactory system, and once he has developed some gonads, he smells around for a gigantic female. When he finds her, he bites into her skin and releases an enzyme that digests the skin of his mouth and her body, fusing the pair down to the blood-vessel level. He is then fed by, and has his waste removed by, the female's blood supply, as the male is basically turned into a parasite. However, he is a welcome parasite, because the female needs his sperm. What happens to a welcome parasite? Other than his gonads, his organs simply disappear, because all that remains is all that is needed. No eyes, no jaw, no brain. He has achieved his purpose, has no worries, and could just chill in some Confucian calm, but instead his brain simply dissolves entirely.
A computer needs pretty explicit goals, because otherwise the state space of things it will do blows up, and one can end up figuratively calculating the 10^54th digit of pi: difficult to be sure, and not totally useless, but still pretty useless. Without anxiety one could easily end up in an intellectual cul-de-sac and not care. I don't see how a computer program with multiple goals would feel anxiety, because programs don't have finite lives: they can work continuously, forever, making it nonproblematic that some goal wasn't achieved by the time one's eggs ran out. Our anxiety makes us satisfice, or find novel connections that don't do what we originally wanted but are very useful nonetheless, and in the process help increase our sense of meaning and status (often, by helping others).
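Here is a toy sketch of the difference a deadline makes; the objective, threshold, and budget are arbitrary inventions of mine:

```python
import random

def satisfice(objective, candidates, good_enough, deadline):
    """Take the first 'good enough' candidate found before the deadline,
    else the best compromise seen; with no deadline, there is no reason
    to ever settle."""
    best, best_score = None, float("-inf")
    for evals, x in enumerate(candidates):
        if evals >= deadline:            # our 'eggs run out'
            break
        score = objective(x)
        if score > best_score:
            best, best_score = x, score
        if score >= good_enough:         # settle: the worry is resolved
            break
    return best

# Toy usage: get close to 0.9 without searching forever.
answer = satisfice(lambda v: 1 - abs(v - 0.9),
                   (random.random() for _ in range(10**9)),
                   good_enough=0.99, deadline=1000)
```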
Anxiety is what makes us worry that we are at best maximizing an inferior local maximum and so need to start over, and this helps us figure things out with minimal direction. A program that does only what you tell it to do is pretty stupid compared to even stupid humans, and don't think for a second that neural nets or hierarchical hidden Markov models (HHMMs) can figure out stuff that isn't extremely well defined (like figuring out captchas, where Kurzweil thinks HHMMs show us something analogous to human thought).
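That worry has a textbook analogue in random-restart hill climbing, sketched below with details of my own invention; the restarts are a crude, mechanical stand-in for doubt:

```python
import random

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy local search: move to a better neighbor until stuck."""
    for _ in range(iters):
        for nxt in (x - step, x + step):
            if f(nxt) > f(x):
                x = nxt
                break
        else:
            break                        # no better neighbor: a local maximum
    return x

def restart_climb(f, restarts=20):
    """'Doubt the peak you are on': start over from random points
    and keep the best of all the local maxima found."""
    starts = [random.uniform(-10.0, 10.0) for _ in range(restarts)]
    return max((hill_climb(f, s) for s in starts), key=f)
```

Of course, the restart schedule is itself just another instruction; the program never doubts the objective f, which is rather the point.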
Schopenhauer, Kierkegaard, and Nietzsche were all creative, deep thinkers about the essence of humanity, and they were all very lonely and depressed. When young they thought they were above simple romantic pair bonds, but all seemed to have deep regrets later, and I think this caused them to apply themselves more resolutely to abstract ideas (also, alas, women really like confidence in men, which leads to all sorts of interesting issues, including that their doubt hindered their ability to later find partners, and that perhaps women aren't fully conscious (beware troll!)). Humans have trade-offs, and we are always worrying whether we are making the right ones, because no matter how smart you are, you can screw up a key decision and pay for it the rest of your life. We need fear, pride, shame, lust, depression, and envy, in moderation, and I think you can probably get those into a computer. But anxiety, doubt, I don't think can be programmed, because logically a computer is always doing the very best it can, in that its only discretion is purely random, and so it perceives only risk and not uncertainty, and thus no doubt.
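Risk, the quantifiable half of that distinction, is exactly what a program handles well; a sketch with made-up payoffs, where the telling part is what cannot be written down at all:

```python
# Quantifiable risk: a known distribution over known outcomes.
payoffs = {"boom": 2.0, "bust": -1.0}
probs   = {"boom": 0.6, "bust": 0.4}

expected_value = sum(probs[s] * payoffs[s] for s in payoffs)  # 0.6*2 - 0.4*1 = 0.8

# Knightian uncertainty means not knowing probs, or even the outcome set
# itself; there is no data structure for that, so the program can price
# risk but has nothing to be in doubt about.
```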
The key, as Minsky always told me, was uncertainty, true uncertainty as discussed by Keynes and Knight. If it is truly non-quantifiable, then a computer cannot understand it, and computers will never empathize with us correctly, never accurately have the 'theory of mind' that comes naturally to humans. After all, without uncertainty there really isn't doubt, which Kierkegaard said was the essence of consciousness. So the search for AI and a model of 'real risk' seem joined at the hip.