LaMDA, Turing, and Natural Intelligence

“Any sufficiently advanced technology is indistinguishable from magic.” — Arthur C. Clarke

Brad Detchevery
3 min readJun 16, 2022

Recently many people have been commenting on Blake Lemoine’s series of articles about Google’s ‘sentient’ AI chatbot. One thing I believe cannot be doubted: IF the chat excerpts are genuine (and not edited or fabricated outright), they are eerie, if not downright creepy, in suggesting the possibility of a ‘ghost in the machine’.

Many of Lemoine’s critics suggest that it is clearly impossible for a sophisticated large language model (LLM) to achieve actual sentience; it is simply simulating responses based on the prompts given to it and the knowledge it acquired through training. Others argue that the ‘Turing Test’ simply isn’t a good enough indicator of true Artificial General Intelligence, pointing to earlier programs such as ELIZA and Eugene Goostman that fooled human judges. The problem, they say, is that as humans we naturally tend to project human characteristics onto the ‘things’ we talk to. This tendency makes us easy to fool: quick-witted programmers can write code that is not intelligent but is simply an illusion intended to ‘trick’ us.

I suggest that the true intent of Turing’s ‘test’ was simply to point out that if a chat such as the one Lemoine posted is real, and if the machine can carry on conversations ‘ad infinitum’ with correct, logical responses the same way a human would, then there is no discernible difference between the intelligence of the machine and the intelligence of any other human we encounter on planet Earth. Therefore, mathematically, the longer the machine carries on coherent conversation (the closer it gets to ad infinitum), the closer it comes to achieving “artificial intelligence”.

The critics are correct: the test cannot prove intelligence, and it cannot prove sentience. But the problem, as Descartes noted with ‘Cogito, ergo sum’, is that we can’t really prove anyone (or anything) is intelligent or sentient, save perhaps ourselves. Every human being we interact with today could in fact be a sophisticated pattern-matching robot that we are simply anthropomorphizing into a thinking, sentient being like ourselves. It is therefore irrelevant whether or not the machine truly ‘experiences’ a state of mind like the one you believe yourself to experience, because you can’t even be sure your fellow humans do.

The problem, of course, is that all computer programs are ‘visible’: humans can look at them and dissect them in functional detail. We have not [yet] fully figured out how to do this with the human brain. A programmer can read the lines of code, explain to the world “Eureka, here is what it does!”, and therefore conclude it is NOT intelligent. But what if we could someday do the same with the human brain? What if we discovered not a model but an actual understanding of the functional process of ‘thinking’, and what if it turned out to be algorithmic in nature? What if our thinking processes are a complicated series of neuro-chemical receptors that match a set of inputs to a predicted set of outputs and generate an appropriate response? What if a side effect of the interaction of these chemicals and hormones is a “feeling” of self that is nothing more than an illusion, driven by a sense of “desire / need” pre-programmed as a sort of “prime directive”, and we call this “magical thing” a soul?

A child learns its first language from its environment. It learns by observing interactions with other humans. It has a need to communicate (food, water, cold, change my diaper please): first as crying, later as specific ‘tones’, later as simple words, and eventually as full sentences (“Dad? Can I have the keys to the car?”). It could be that this process is not essentially any different from how a language model is trained on a wide variety of inputs and ‘guesses’ the appropriate outputs. Our tendency is to think, “But this cannot possibly be true! I am a rational, free-thinking person! I make my own decisions! I have my own thoughts!” But try various drugs (whether legal or not), and you will find it does not take much alteration of the chemicals in your brain before you no longer feel “in control”, before you realize just how much of your ‘feeling of self’ is nothing more than an illusion.
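To make the analogy concrete, here is a deliberately tiny sketch of the idea of learning to “guess” the next word from observed examples. This toy bigram counter is an illustration only, not how LaMDA actually works; real models use neural networks trained on vastly more data, but the underlying objective of predicting plausible continuations from past input is similar in spirit:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny
# made-up corpus, then predict the most frequent successor.
corpus = "the child wants food the child wants water the child sleeps".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("child"))  # "wants" — seen twice, vs. "sleeps" once
```

Nothing in this table of counts “understands” children or water; it only reflects regularities in its training text. The open question is whether our own fluent responses are, at bottom, a far richer version of the same trick.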

Something to think about.


Brad Detchevery

Brad is a self-proclaimed ‘geek’…and proud of it. From computer programming and consulting to writing and public speaking, Brad shares his ‘geek wisdom’ with us.