Anders Nielsen (joshu@diku.dk)
Tue, 14 Oct 1997 16:45:29 +0100
[...]
> Actually I believe that it is the human ability to make value
> judgements that separates us from computers. Humans can make value
> judgements on the spot (whadye think of this post so far?), but computers
> can't. Consequently the way to suss one out is to present it with some
> new phenomena and ask it for its opinion.
>
[...]
>
> Lars suggested that you present the computer with a piece of poetry and
> ask it for its opinion.
>
> > Lars Marius Garshol wrote:
> >
> >
> > >
> > > JUDGE: What does this make you think:
> > > Snowfall,
> > > unspeakable, infinite
> > > loneliness.
> > >
> > > How do you list an answer to that?
> >
>
> I then agreed that the computer would be unable to come up with a
> convincing answer. I said:
>
> > All it takes to spot the
> > difference between a human and a computer is to ask a few questions that
> > require judgement, ask them about likes and dislikes, opinions and
> > experiences.
>
> And the key point was supposed to be:
>
> > Unless the
> > computer has its own sense of value it cannot make value judgements
> > about new phenomena. You could always catch it out by presenting it
> > with an original idea and asking it for its opinion.
> >
>
> In summary my objective with that post was to demonstrate that it would
> not be possible to build a computer that would pass the Turing Test. I
> believe that a computer would not pass the test because it would not be
> able to make value judgements.
...
>
> Sorry for repeating some of my earlier posts, but I feel I have made a
> strong case and proved an interesting point. I have shown that a computer
> cannot pass the Turing Test. Perhaps my argument is wrong, but I can't
> find anything in Bodvar's comments that refutes it.
Well... in that case I will give it a go:
You seem to base your argument that computers can't be programmed to pass
the Turing test (and therefore can't be programmed to think) on the claim
that computers can't make value judgments, that they have no feelings. I
would grant that if that were the case your argument would be correct, but
nowhere do you present a case for why computers can't be programmed to
make value judgments; you just assume that it is so.
How do you know that humans have feelings (can make value judgments)?
Well, you know that you have feelings, and you can see that other people
act as if they had feelings; you can talk to other people, and they'll
respond to what you're saying as if they had feelings themselves. But you
can't be sure: you don't have first-hand knowledge that other people have
feelings.
So the objective is to write a computer program that can act as if it had
feelings in a convincing manner (but that has been clear all along).
Values and feelings are closely related to direct body-stimuli (in fact
I'd say they are all direct derivatives of body-stimuli), so an ordinary
computer would have some trouble competing in this area. But you could
equip a computer with a video camera to stand in for sight, and similar
devices for other body functions. Make a button such that when you press
it (very tamagotchi-esque), the computer has a sensation similar to what
we experience when we're having sex (and set up a social pattern that says
it's only proper if other people/computers press the button; never push it
yourself... it'll rot your spine!). These devices would substitute for
body-stimuli.
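To make the idea concrete, here is a toy sketch of how such substituted stimuli could ground value judgments. Everything here (the `ValueSystem` class, the feature sets, the valence scores) is my own hypothetical illustration, not a real design: the machine records how pleasant or unpleasant past stimuli felt, then voices a like/dislike opinion about a new stimulus by analogy with the features it already knows.

```python
# Hypothetical sketch: "body stimuli" feed an internal valence store,
# which the machine consults to make a value judgment about something new.

class ValueSystem:
    def __init__(self):
        # feature -> (running average of felt valence, number of samples)
        self.valence = {}

    def experience(self, features, felt):
        """Record how pleasant/unpleasant (felt, in -1..1) a stimulus was."""
        for f in features:
            old, n = self.valence.get(f, (0.0, 0))
            self.valence[f] = ((old * n + felt) / (n + 1), n + 1)

    def judge(self, features):
        """Opinion on a possibly new stimulus: average valence of known features."""
        known = [self.valence[f][0] for f in features if f in self.valence]
        if not known:
            return "I have no feelings about that yet."
        score = sum(known) / len(known)
        return "I like it." if score > 0 else "I don't care for it."

vs = ValueSystem()
vs.experience({"snow", "cold"}, felt=-0.8)   # an unpleasant walk home
vs.experience({"snow", "quiet"}, felt=0.5)   # a pleasant snowy evening
print(vs.judge({"snow", "loneliness"}))      # opinion on a new combination
```

The point of the sketch is only that "its own sense of value" need not be magic; it can be a state that accumulates from (substituted) body-stimuli and generalizes to phenomena never seen before.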
Then make a program that can learn the language and social patterns and so
forth from what it "hears" and "sees", and given enough learning time, I
can't see any problems with this machine becoming an AI.
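As a minimal sketch of that "learn from what it hears" idea (again entirely hypothetical; the `Learner` class, `overhear`, and `respond` are names I made up): instead of hand-coded rules, the program records which replies tend to follow which words in overheard conversation, then reuses those associations on new input.

```python
from collections import defaultdict, Counter

# Hypothetical sketch of a program that learns responses from observed
# dialogue rather than having them hard-coded.

class Learner:
    def __init__(self):
        self.assoc = defaultdict(Counter)  # word -> replies seen after it

    def overhear(self, utterance, reply):
        """Learn from a conversation the machine witnesses."""
        for word in utterance.lower().split():
            self.assoc[word][reply] += 1

    def respond(self, utterance):
        """Answer new input using whatever associations have been learned."""
        candidates = Counter()
        for word in utterance.lower().split():
            candidates.update(self.assoc[word])
        if not candidates:
            return "Tell me more."
        return candidates.most_common(1)[0][0]

bot = Learner()
bot.overhear("I picked flowers today", "They smell lovely")
bot.overhear("my mom called", "Mothers are kind")
print(bot.respond("look at these flowers"))  # reuses the learned reply
```

This is obviously nowhere near an AI; the point is only that the program's behavior comes from its experience, not from an enumerated list of rules written by the programmer.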
The problem, I think, is that you all think of computer programs as if
they could only be:

if (sentence includes "flower") {
    respond "I like flowers"
} else if (sentence includes "mom") {
    respond "mom was nice"
} else if (sentence includes "not") {
    respond "why are you so negative?"
} // etc...
With this scheme I'll grant that, no matter how large and complex you make
it, you'll never get anything that is proper AI (but fortunately I don't
think you'd ever get anything that could fool a (properly instructed and
intelligent enough) human in a Turing test either).
All this is still a matter of faith (I believe AI is possible, others
don't), as no one has given any constructive recipe for an AI program. But
on the other hand, I don't think it's fair to say that you've proven that
AI is impossible.
PS:
Regarding the Japanese computer-idols: can you ever be sure that there
isn't a human answering the questions for them? Like a human typing the
answer, with a program that merely generates the graphics to make it
appear that the program is answering? I would imagine that's how it was
done, but I haven't seen much of them (Japanese computer-idols aren't
really the big thing here in Denmark).