From: Patrick van den Berg (cirandar@yahoo.com)
Date: Tue Oct 29 2002 - 21:15:41 GMT
Hi Sam,
Sorry for the length, but I hope you'll bear with me.
> One side-effect of my 'eudaimonic' campaign is relevant to Turing
> tests and
> associated features. If the fourth level is intellect (ie
> pre-eminently
> logical reasoning) then there is no reason why we should not be able
> to make
> Turing machines to independently function on that level, so humanity
> can be
> seen as a midwife to the fourth level.
Ah, okay. I had to look up what 'eudaimonic' means, but you're simply
saying that human behavior can be simulated by Turing machines, and that
we just might be able to create Turing machines that can evolve to the
'4th level'. When I'm waiting for the bus or something, I sometimes
imagine I meet somebody who is actually a really, really fast computer,
connected to a huge, huge database. (The actual computer/database might
not be in the body I see, but could be spread over 10 large buildings:
it then communicates with the body I meet through radio signals.) It is
indeed possible that this computer might really fool me. Suppose the
computer has a record of 10**60 human behaviors, and a similar number of
records of the human replies to those behaviors. Then no matter HOW
original, creative or crazy I act, the computer will most likely have a
near-perfect equivalent of my behavior in its database, and can simply
reply in a human-like fashion (thus passing the Turing test).
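(Just to make the picture concrete, here is a toy sketch of my own of
the kind of lookup-table machine I mean. The tiny 'database' and the
matching trick are invented for illustration; the real thing would need
something like 10**60 records, but the principle is the same: find the
closest recorded behavior and replay the recorded human reply.)

# Toy version of the lookup-table machine described above. The
# "database" here is hypothetical and absurdly small; it only shows the
# principle: match the incoming behavior to the closest stored behavior
# and replay the stored human reply.

import difflib

# Hypothetical database: observed human utterance -> recorded human reply.
BEHAVIOUR_DB = {
    "hello, nice weather today": "Yes, lovely! Waiting for the bus too?",
    "could a machine ever be conscious": "Sometimes I wonder, but it gives me a headache.",
    "what is 3 + 4": "Seven, last time I checked.",
}

def reply(utterance: str) -> str:
    """Return the stored reply whose recorded utterance is most similar."""
    best = max(
        BEHAVIOUR_DB,
        key=lambda recorded: difflib.SequenceMatcher(
            None, utterance.lower(), recorded
        ).ratio(),
    )
    return BEHAVIOUR_DB[best]

if __name__ == "__main__":
    print(reply("Hello! Nice weather we're having."))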
But is there really no way for me to unmask the person in front of me
as actually being a computer?
According to the Goedel incompleteness theorem which Penrose uses, there
are some mathematical statements whose truth or untruth can't be decided
by an algorithm in a finite number of steps. (I admit I FORGOT whether
the undecidable statements are the UNTRUE ones or the TRUE ones. It's
been a while since I dived into the mathematics - I hope you'll forgive
me on this now, and not attack me 'ad hominem'! ;-) So all I have to do
is give him a problem that can't be decided in finite steps by means of
an algorithm, but of which I, as a mathematician (ahem), DO know the
truth or untruth. But then, still, the computer might have a record of
all existing mathematicians too, so even if it can't decide the truth or
untruth of the problem itself, it only has to look up the right
mathematician's behavior in its database and simply borrow his (correct)
answer.
So even by firing 'noncomputable' problems at him, I STILL might not
uncover him as being a computer. Thus, this Super-Turing machine could
pass any practically conceivable Turing test - 'practically', that is.
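(To make the 'never stops in finite steps' idea concrete, here is a toy
sketch of my own - not something from Penrose - of a computation whose
halting nobody knows how to decide in advance. I use the Goldbach
conjecture purely as an example; it isn't itself a Goedel sentence, it
just has the right shape: a search that either stops with a
counterexample or runs forever.)

def is_prime(k: int) -> bool:
    """Trial-division primality test, good enough for this toy example."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def search_goldbach_counterexample() -> int:
    """Return the first even number > 2 that is NOT a sum of two primes.

    Nobody knows whether this function ever returns, and no known
    algorithm decides its halting in advance - that is the flavour of
    'can't be decided in finite steps' used above.
    """
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1)):
            return n
        n += 2

# print(search_goldbach_counterexample())  # may never return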
That's the whole point. I can know 100% for sure that, inside a system,
some statements are true and others untrue. Of course we do err
sometimes, but on many occasions we do not. I can't give you a
noncomputable problem just like that, but that's not necessary. We all
know the basics of number theory, and can perform +, -, *, / operations
on natural numbers. 3, 4 and 7 are symbols, but we know what they mean.
Given that we know what the symbols mean, we know that the equation
'3 + 4 = 7' is true. 100%. Agreed? We can have the same certainty about
some noncomputable problems.
Now to Dennett.
He writes (http://ase.tufts.edu/cogstud/papers/penrose.htm):
"Chess is a finite game (since there are rules for terminating
go-nowhere games as draws), so in principle there is an algorithm for
either checkmate or a draw, one that follows the brute force procedure
of tracing out the immense but finite decision tree for all possible
games. This is surely not a practical algorithm, since the tree's
branches outnumber the atoms in the universe. Probably there is no
practical algorithm for checkmate. And yet programs--algorithms--that
achieve checkmate with very impressive reliability in very short periods
of time are abundant. The best of them will achieve checkmate almost
always against almost any opponent, and the "almost" is sinking fast.
You could safely bet your life, for instance, that the best of these
programs would always beat me. But still there is no logical guarantee
that the program will achieve checkmate, for it is not an algorithm for
checkmate, but only an algorithm for playing legal chess--one of the
many varieties of legal chess that does well in the most demanding
environments. The following argument, then, is simply fallacious:
(1) X is superbly capable of achieving checkmate.
(2) There is no (practical) algorithm guaranteed to achieve checkmate.
therefore
(3) X does not owe its power to achieve checkmate to an algorithm.
So even if mathematicians are superb recognizers of mathematical truth,
and even if there is no algorithm, practical or otherwise, for
recognizing mathematical truth, it does not follow that the power of
mathematicians to recognize mathematical truth is not entirely
explicable in terms of their brains executing an algorithm. Not an
algorithm for intuiting mathematical truth--we can suppose that Penrose
has proved that there could be no such thing. What would the algorithm
be for, then? Most plausibly it would be an algorithm--one of very
many--for trying to stay alive, an algorithm that, by an extraordinarily
convoluted and indirect generation of byproducts, "happened" to be a
superb (but not foolproof) recognizer of friends, enemies, food,
shelter, harbingers of spring, good arguments--and mathematical truths!"
Pat again: In his three-line version of Penrose's argument, Dennett uses
words like 'superbly', 'practical' and 'power' - words that Penrose
doesn't use. Penrose argues that it is A MATTER OF PRINCIPLE that we can
decide the truth or falsehood of certain problems where a Turing machine
sometimes can't, because it will never halt in a finite number of steps.
I don't know about noncomputability in chess; I think chess is perfectly
computable, just like Tic-Tac-Toe is, as we all learn when we grow up.
Chess is just more complex, but there's no essential difference in
computability, I believe.
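(To show what I mean by 'perfectly computable, just like Tic-Tac-Toe',
here is a little brute-force sketch of my own that walks the complete
Tic-Tac-Toe game tree and reports the value of the game under best
play. Chess is the same in principle - a finite tree, hence computable -
only the tree is astronomically bigger, which is exactly Dennett's
'practical' point.)

# Brute-force solve Tic-Tac-Toe by exhausting the full game tree (minimax).

from functools import lru_cache

LINES = [(0,1,2), (3,4,5), (6,7,8),
         (0,3,6), (1,4,7), (2,5,8),
         (0,4,8), (2,4,6)]

def winner(board: str):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board: str, player: str) -> int:
    """+1 if X forces a win, -1 if O does, 0 if best play is a draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    moves = [
        value(board[:i] + player + board[i+1:], "O" if player == "X" else "X")
        for i, cell in enumerate(board) if cell == "."
    ]
    return max(moves) if player == "X" else min(moves)

if __name__ == "__main__":
    # Prints 0: perfect play from the empty board is a draw, as we all learn.
    print(value("." * 9, "X"))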
Of course humans, including mathematicians, do make mistakes sometimes.
But I can say with 100% certainty that '3+4=7', with the meaning we all
attach to these symbols.
An "algorithm--one of very many--for trying to stay alive, an algorithm
that, by an extraordinarily convoluted and indirect generation of
byproducts, "happened" to be a superb (but not foolproof) recognizer of
friends, enemies, food, shelter, harbingers of spring, good
arguments--and mathematical truths" is indeed not inconceivable, whether
you follow Dennett's natural selectionlike receipy, or my Super-Turing
machine above. But Dennett says it himself: 'not foolproof'. When you
doubt the statement '3+4=7', and think you are liable to
'foolprovability' here, you are free to agree with Dennett. Otherwise,
Penrose still has a strong case against Strong AI.
Thanks for your time,
Greetings, Patrick.