Queequeg wrote: You are arguing that because "experience" cannot be measured, it's beyond the scope science can take account of.
Which is precisely the subject of the well-known essay 'Facing Up to the Problem of Consciousness'.
David Chalmers wrote:The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.
The NY Times OP that Dan linked to - coincidentally, published the very day that I started working at an AI company as a tech writer! - makes a related point, which is that computers don't understand meaning.
Today’s A.I. systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning. The mathematician and philosopher Gian-Carlo Rota famously asked, “I wonder whether or when A.I. will ever crash the barrier of meaning.” To me, this is still the most important question.
Meaning exists in a context and is embedded in the intentional stance. Hubert Dreyfus (author of What Computers Can't Do) argued that there are fundamental aspects of these processes that are forever beyond computer (or any) science, because they refer to factors which are unconscious or subliminal - things we know without knowing how we know them (a.k.a. 'intuitions').
The story is well-told by now how the cocksure dreams of AI researchers crashed during the subsequent years — crashed above all against the solid rock of common sense. Computers could outstrip any philosopher or mathematician in marching mechanically through a programmed set of logical maneuvers, but this was only because philosophers and mathematicians — and the smallest child — were too smart for their intelligence to be invested in such maneuvers. The same goes for a dog. “It is much easier,” observed AI pioneer Terry Winograd, “to write a program to carry out abstruse formal operations than to capture the common sense of a dog.”
A dog knows, through its own sort of common sense, that it cannot leap over a house in order to reach its master. It presumably knows this as the directly given meaning of houses and leaps — a meaning it experiences all the way down into its muscles and bones. As for you and me, we know, perhaps without ever having thought about it, that a person cannot be in two places at once. We know (to extract a few examples from the literature of cognitive science) that there is no football stadium on the train to Seattle, that giraffes do not wear hats and underwear, and that a book can aid us in propping up a slide projector but a sirloin steak probably isn’t appropriate.
Logic, DNA and Poetry, Steve Talbott.
And all of this is exactly what philosophical or scientific materialism is obliged to deny. The one thing it can't admit is that there is an ontological distinction between minds and computers. That is what leads to Daniel Dennett's attitude, which is (according to one of his recent critics) 'so preposterous as to verge on the deranged'. Of course, most people don't think through the issue as thoroughly as academic philosophers, but large numbers accept that computers and intelligence are basically interchangeable, without realising the underlying problem. It's a sovereign delusion of our age.