Monday, April 5, 2010

Fox's Method


While looking up various references in order to understand what biosemiotics is about, a thought occurred to me. I've read various attempts to explain what the consciousness problem is about (esp. Daniel Dennett) and wonder whether a technique I used in my own work might help. Fox's Method consists of inventing an explanation for a phenomenon, an explanation which may have no basis in fact. The issue is not whether the solution contains any truth, but whether it is psychologically satisfying. This tells us something about the properties the real solution must have in order to be accepted.

For example, an AM radio is highly complex, consisting of various interacting circuits. We know that an explanation of how they function together is complex (in Howard's sense), but it is possible to make people understand the component functions of the various elements of the circuit and how they come together to produce sound from radio waves. At the end of an explanation of this complex system a person is likely to say "I understand." This is our goal. If the radio fell from the sky into the laps of a primitive people with no conception of diodes, waves, modulation, etc., how could their shaman explain the voices? Like my high school physics teacher when confronted with a question beyond his ken, he might simply say, "God made it that way." Some might be satisfied with that.

"Consciousness" is difficult to define and difficult to explain. Daniel Dennett, for one, has written an entire book on the subject, "Consciousness Explained." His book has been dismissed by unsatisfied critics as simply begging the question. Suppose we try to put together a fictionalized explanation of consciousness and see if it can be satisfying. According to Fox's Method, it doesn't matter how erroneous the explanation is, as long as it's satisfying. Let's make what I'll call a "false start." With some help from Descartes, we'll say that each of us has an ethereal "soul" that can communicate with the brain via a radio transmitter situated on the moon. The receivers are in special cells located in the brain, while other cells simultaneously sense the condition of the body and its environment and transmit the data to the moon. The soul's claim to fame is its ability to integrate data, apply logical rules and use stored information about previous survival experience to make and transmit decision cues for "phantom" images, audio and other reprocessed sensations to the host on earth. (In the world of undersea robotics this is nearly the case, using a system called telepresence in which there is a very realistic feedback loop between the operator and the remote operational device.) The trouble with this solution is that the story has simply displaced the consciousness problem to a remote location without telling us much about it. On the other hand, this dualist kind of mind-body separation is helpful at a psychological level. People who would be upset at the notion that consciousness is an illusion and that the brain is just a complex computer might be quite comfortable with the soul or a neural net computer being located on the moon, sending directions to the body on earth. I don't know why this is so.

In our story, we might escape the use of a mysterious ethereal material or of a remote computer by postulating a human on the moon remotely operating the human on Earth. This human is operated by a human located on Mars, who is operated by a human on Venus, ad infinitum, or for however many planets there are in the universe. (Sounds like the homunculus problem to me.) In the end a guy called God operates the whole thing, and we have only one consciousness and one free will to deal with.

26 comments:

Ira Glickstein said...

Great topic THANKS Joel!

A way to explain a radio is that there is a little man inside it reading a script, playing instruments, etc. (Indeed, Ronald Reagan was not at the game when he broadcast Chicago Cubs baseball. He was in the WHO studio in Des Moines, Iowa, and recreated the game by reading a teletype feed and playing sound effects. One time when the teletype jammed, he had the batter foul off ball after ball until the teletype resumed!)

The image Joel used is of a homunculus drawn in 1695, when some thought the father's sperm contained a tiny person who was inserted into the mother's uterus. I guess the mother also provided a homunculus and the two tiny people merged to create the child? Perhaps the two homunculi had sex and the sperm from the male homunculus contained a very tiny person who ... OOPS, sounds like an infinite regress.

That is the problem with homunculi whenever they turn up in explanations. For example, the way vision works is that there is a little person in your head who watches the images on the inside of your eyes, and, like the captain of an airplane, makes decisions as to what to do, using levers that work your arms, legs, and voice. Great explanation until you ask what is inside that little person's head and learn it is a tinier person who watches the images inside ... OOPS, infinite regress!

The reason there must be a God is that the Earth and life on it demands a Creator. But, who Created God? Must have been a Meta-God. Who Created the Meta-God, well, a Super-Meta-God, and so on, all the way up. (Believers answer that objection by saying God always existed. But it saves a step to say the Universe and Laws of Nature always existed and Natural processes led to the formation of the Sun and Earth and the Origin of Life.)

The prevalence of homunculus stories, though, as Joel notes, is testimony to how well they satisfy people who lack scientific educations.

As for Daniel Dennett, while I was a PhD student in 1995 a group of us attended a lecture he gave at Cornell. He was a very entertaining and excellent speaker, but I did not buy his message. In particular, his ideas about "the intentional stance" fell flat for me.

Intentionality is a philosophical term for goal-directed behavior by some agent. For example, when a human chess player makes a move, he has intentionality in that he intends to take his opponent's piece and set up a condition that will lead to victory. But when a computer programmed to play chess makes exactly the same move in the same situation, it is merely executing the steps of its program, and it (the computer) has no idea it is playing chess. Only the programmer and the human opponent have any understanding, and therefore intentionality, in this matter.

Dennett says no, we should take the stance that any agent that exhibits intelligent actions has intentionality. He thus dismisses Searle's Chinese Room thought experiment. While the books of instructions in the room and the hapless human who knows no Chinese and is simply executing the instructions have no intentionality, he maintains that the overall System (books plus human) that is providing answers to written Chinese queries does have intentionality.

His Consciousness Explained is more properly consciousness avoided. The only thing I remember about his talk that I attended in 1995 was that he described an experiment in which a series of tiny shocks were applied along a person's arm and the person interpreted them as being ordered in time up his arm when, in actuality, some of the shocks were given out of order. Somehow this illustrated his point about consciousness being an illusion.

Ira Glickstein

joel said...

Howard said: I can’t speak for all of them, but long before the word “biosemiotics” was coined, I joined a multitude of philosophers and scientists who were most curious about (1) the origin of life, and (2) the origin of thought. How did evolvable organisms arise from ordinary matter, and how did organisms start to think?

Joel requests: It would go a long way toward understanding what biosemiotics is about if Howard could use Fox's Method to create a fiction which would achieve the above goals. What would be a satisfying explanation even if not factually true?

Ira Glickstein said...

I can't speak for Howard, but here is a non-Creator "Fox's Method" story:

Everyone knows a TV image is merely an arrangement of colored dots. OK, now imagine a very large box of colored balls, shaken randomly. Over a very long time, every possible arrangement of the balls will occur. A patient observer will see a bunch of red balls together, a bunch of green balls together, black balls surrounding a bunch of blue balls, etc. At some point, the observer will see what looks like a TV image of a flower or a horse or a person. Indeed, over sufficient time, every possible TV image will appear to the (very) patient observer.
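The waiting-time intuition behind the shaken box can be sketched in a few lines. The colors and the target arrangement below are arbitrary stand-ins, invented purely for illustration; the point is that even a tiny "box" takes thousands of shakes, and the count explodes combinatorially with size.

```python
import random

def shuffles_until(target, rng):
    """Shake the box (shuffle) until the chosen arrangement turns up."""
    balls = sorted(target)          # same multiset of colors, any starting order
    count = 0
    while balls != target:
        rng.shuffle(balls)
        count += 1
    return count

rng = random.Random(42)
target = list("RRGGBBWW")           # 8 balls, 4 colors: 8!/(2!**4) = 2520 arrangements
n = shuffles_until(target, rng)
print(n)                            # typically on the order of a few thousand shuffles
```

Scaling the 8 balls up to the atoms of even one bacterium makes the expected wait absurdly longer than trillions of years, which is exactly why the story needs such extravagant amounts of time and matter.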

Now replace that box of colored balls with a soup of atomic elements, distributed on trillions of planets in billions of solar systems and millions of galaxies. Over trillions of years, random mixing of these elements, on some planet, will result in the formation of an algae or a plant or an animal, including a human.

The human who emerges will be fully formed, with a thinking brain, like Adam in the Creation story. On some planet two humans will happen to come into being, one male and one female, let us call them "Adam and Eve". They will observe the plants and animals and give them names. Their offspring will populate that planet, let us call it "Earth".

Over many generations, various explanations will be given for the origin of life on Earth, including Creation by an external God and scientific explanations such as those put forward by "a multitude of philosophers and scientists who were most curious about (1) the origin of life, and (2) the origin of thought. How did evolvable organisms arise from ordinary matter, and how did organisms start to think?"

Ira Glickstein

joel said...

I think the majority of people would not find your tale satisfying. It's missing a ratchet. Like your typing monkeys, there isn't any mechanism to say "Hey, that's good. Hold that." Whether we have mixes of colored balls or molecules, your tale lacks a way to keep the good stuff from just flying apart into still another configuration. For example, we know that if you let amorphous sulfur sit around for a few weeks, you end up with crystalline sulfur, a more ordered form. (Contrary to what the general public thinks, the Second Law doesn't say that order goes to disorder.) Let's put Dr. Pangloss into the picture with "This is the best of all possible worlds." We'll define best as the most probable. Each random rearrangement moves us a little closer to the most probable arrangement of molecules and energy. But we are only a station along the way. There is more to come.
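Joel's "ratchet" can be sketched as a toy search: random variation plus a rule that never lets go of a gain. The target string, alphabet, and mutation scheme below are arbitrary illustrations, not a model of chemistry; the ratchet finds a target in thousands of steps that blind shuffling over 27**13 configurations would essentially never reach.

```python
import random

TARGET = "ORDERED STATE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    """Count positions that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def ratchet_search(rng):
    """Mutate one position at a time; keep the change only if nothing is lost."""
    current = [rng.choice(ALPHABET) for _ in TARGET]
    steps = 0
    while score(current) < len(TARGET):
        i = rng.randrange(len(TARGET))
        candidate = current[:]
        candidate[i] = rng.choice(ALPHABET)
        if score(candidate) >= score(current):   # the ratchet: "Hold that."
            current = candidate
        steps += 1
    return steps

print(ratchet_search(random.Random(1)))   # typically a few thousand steps
```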

Howard Pattee said...

Joel, isn't your description of Fox's Method the same as what we call a mythical explanation?

I agree that an "explanation" is any story where you go away happy. The issue is: What kind of story makes you happy?

Some people are happy with myths. Physicists exclude myths with Ockham's razor or Einstein's rule: "A theory should be as simple as possible, but no simpler." Some physicists are happy with string theory; some think string theory is a myth. (See Lee Smolin's The Trouble with Physics.)

Ira Glickstein said...

I agree Joel that my totally random mixing story is not as satisfying as the Creation by God story or the scientific Random Origin of RNA World followed by Evolution and Natural Selection story.

In the God story, a Sentient Being Creates the Heavens and the Earth and everything on it, and, after each step, He says "It is Good." In the science story, Natural Selection says "Hey, that's good. Hold that" whenever Evolution gets something right.

Had I included a stabilization-of-"Good" mechanism in my Fox's Method story, it would not have been original, but simply a minor variation of the God or science stories. While scientific in the sense that it is possible, my story avoids both Evolution and Natural Selection in a way that also avoids God. Quite an accomplishment, if I do say so myself!

What I like about my story is that, like the box of colored balls, the "picture" varies from random to ordered and back again in relatively few steps. After an extremely long period of random patterns, fully-formed plants, animals, and humans appear, like a "flash in the pan", on Earth, continue on for a relatively small number of generations, and then fade out into more random patterns.

Humans have been around for about 100,000 years. We have been thinking metaphorically for 10,000. According to Stephen Hawking, we will probably do ourselves in within 1,000 years. Compared to trillions of years, on trillions of planets, that is more like a "flash in the pan" than the long-term stability that is implied by the God and science stories.

Ira Glickstein

joel said...

Ira, now that you've elaborated with that phrase "man appears like a flash in the pan," I understand and like your story. Howard, it differs from a myth in that it is a total fabrication intended to be just a step along the way. Can't you help out with a myth (if you like) about biosemiotics?

joel said...

Howard said: Joel, isn't your description of Fox's Method the same as what we call a mythical explanation?

Joel: I've thought it over and come to the conclusion that "Fox's Method" should be changed to "Fox's Scientific Myth Making." :^)

Howard Pattee said...

Joel, before any explanation, mythical or scientific, will make sense you need to know some history of the problem. Biosemiotics is just the study of symbols. It covers many evolved levels. I find the brain level is too complex, so I focus on the origin of symbols. It’s an ancient question. The Bible begs the question. John says, “In the beginning was the word.” No problem there! Laotzu says, “Words come out of the womb of matter.” That myth or metaphor sounds more like a problem.

Everyone agrees that coded genetic symbols are necessary for evolution by self-replication, variation, and natural selection (the origin of Darwinian evolution). I began doing Miller-Urey type experiments (abiogenic synthesis), which is still a big area of research, but this just gets you complicated chemistry, a long way from a genetic code.

Von Neumann in the 1950s was also trying to understand the origin of evolution, by which he only meant the potential of growing more and more complex structures without any definable limit. He was interested only in the logic of evolvable self-replication of automata (computers), not the physics.

He started with an analogy to Turing’s Universal Computer (UTM), which can compute any function, no matter how complicated, provided the function can be described. Turing showed that you can formally define a UTM as one single machine (think of it as hardware) that will execute any function that is fed to it as a description (think of software) of the function. (This essentially defines a computable function.) Then von Neumann asked: What does it take to have such a universal machine replicate?

Now the analogy breaks down. The problem is: How do you replicate hardware? (Note that software cannot change the hardware of a computer.) To replicate you must construct new hardware, so von Neumann simply postulated a Universal Constructor (UC) that when fed a description of itself would pick up the parts from a parts reservoir and assemble a copy of itself.

The evolutionary condition then requires that if the description of the UC mutates, the UC must still be able to replicate the mutant. Call it UC2. Then, to have unlimited complexity, UC2 must replicate any UCn, where n is the nth mutant and n is unlimited. That is a formal description of what Darwinian evolution does. This was the start of Artificial Life, or the simulation of life in computers, which is also a big field.
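Von Neumann's logic can be caricatured in a few lines. The essential trick is that the description is used twice: once interpreted, to build the offspring's body, and once copied blindly, so the offspring can replicate in turn, mutant or not. All the names and the representation of "parts" here are invented for illustration; this is only the logic, not the physics.

```python
import random

def construct(description):
    """The 'universal constructor': interpret a description into a body."""
    body = tuple(description)                # stand-in for assembling real parts
    return {"body": body, "description": list(description)}

def replicate(machine, rng, mutate=False):
    desc = list(machine["description"])      # blind, uninterpreted copy
    if mutate:                               # a mutant description: UCn -> UCn+1
        i = rng.randrange(len(desc))
        desc[i] = rng.choice("ABCD")
    return construct(desc)                   # the UC builds mutants just as readily

rng = random.Random(0)
parent = construct(list("ABCA"))
child = replicate(parent, rng)
mutant = replicate(parent, rng, mutate=True)
print(child["body"] == parent["body"])       # True: a faithful copy
```

Note that the mutant carries its own mutated description, so it too can be replicated without limit, which is the evolutionary condition in the text.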

Von Neumann and Alife models are only logical simulations and don’t involve real matter, energy and physical laws. That is Laotzu’s and my problem. How did symbols come out of real matter? Or as my first paper (1969) on the subject asks, “How does a molecule become a message?”

Ira Glickstein said...

Thanks Howard for the background that will help Joel understand what you and others have been up to in the study of symbols.

Meanwhile, I've come up with two more examples of Fox's Scientific Myth Making, both of which I am sure are found somewhere in the SciFi world (where I seldom dwell).

BOOTSTRAP Imagine the Earth some decades or centuries from today, when humans have not only mastered genetic engineering and space travel but also time travel. Out of a spirit of adventure or altruism, they decide to go back in time and space to various galaxies and plant the seeds of biological life on a number of planets.

Then, as a lark, they go back 4 Billion years and do the same on the then newly-formed Earth. They thus provide a complete explanation and description of the origin of life on Earth that led to the evolution of humankind and made their feat of planting the seeds of life on ancient Earth possible.

("Bootstrapping" is the idea that, if you were really strong and had good balance, you could reach down, grab your bootstraps, and lift yourself off the ground. When a computer "boots" it uses a simple string of code to start the process of copying its Operating System from storage, thus bringing itself to life.)

PANSPERMIA Biological life originated somewhere other than Earth, call it planet Alpha. The agency of its origin might have been Random Mixing/Evolution/Natural Selection, or it may have been a Creator God, or, perhaps, like the Laws of Nature, Energy/Matter, and Space/Time, Life may always have existed in the Universe.

Whatever its origin, denizens of planet Alpha decide to plant the seeds of life on Beta, Gamma, Delta ... and Earth. They do so to preserve it from the possible destruction of their home planet due to natural or human-made disaster.

See my free online novel for a variation of Panspermia.

Ira Glickstein

joel said...

Howard said:
Now the analogy breaks down. The problem is: How do you replicate hardware? (Note that software cannot change the hardware of a computer.) To replicate you must construct new hardware, so von Neumann simply postulated a Universal Constructor (UC) that when fed a description of itself would pick up the parts from a parts reservoir and assemble a copy of itself.

Joel responds:
Whenever I contended that robots can act humanly there was always someone who said that they can't because they can't reproduce. I contend that this isn't so, if robots have human rights and are not held as slaves. Suppose a robot was designed so that it had a compartment with a set of digitized plans for its construction. Suppose robots had to be paid for the work they do. Suppose that after accumulating sufficient funds, a robot could go to a machine shop (say, Utero Inc.), present the plans and money and obtain a copy of itself. Not only could a robot replicate, but if originally programmed with the goal of perpetuating itself with as many offspring as possible, it would produce a bunch of sub-goals (like make as much money as possible) that we all would recognize as human-like.

joel said...

Fox's Chinese Kitchen

I think that Searle's Chinese Room (mentioned by Ira) contains an important red herring. The presence of the ignorant human (who knows no Chinese) serves to allow him to say that a computer that can do Chinese symbol response has no real knowledge of Chinese. Others contend that the combination of the human plus the reference book has some knowledge of Chinese even though neither of them individually may recognize that fact. I offer you Fox's Chinese Kitchen. I'll keep the ignorant human, but this time people come to the window and place their orders for food. The human has a cookbook describing exactly how to prepare each dish. The cook follows the instructions and produces the dish, but admits that he has no knowledge of Chinese cooking despite the fact that clients believe there is an expert in the kitchen. Does the cook plus the recipe book have a knowledge of Chinese cooking? After many months the cook no longer needs to refer to the book. He has memorized the entire book. Does the cook now have some knowledge of Chinese cooking? After a still longer time, the cook begins to see certain patterns and rules in the preparation. If a required ingredient is missing, he can substitute another. Does the cook now have an even more sophisticated knowledge of Chinese cooking? If we turn the cook into a robot which can memorize, pattern match and develop rules, aren't we back to the same problem we had before Searle's Chinese Room? We can make the situation even more obscure if the robot is the one I described as having legal rights. If our Mandarin-cooking robot gets fired for lack of knowledge of Szechuan cuisine, will its desire for money drive it to go to the library and find a book on the subject?

Ira Glickstein said...

It took Howard from 1989 till 1994 to gently steer me to the correct view of AI and the Chinese Room (or Joel's Kitchen). I still lapse into the AI mass delusion every once in a while.

The English-only man memorizing the books is a well-known variation. Here is one I came up with:

An English man has memorized the Chinese Room and a Chinese woman has memorized an equivalent English Room. Either responds well to written messages in the foreign language.

They are in Chinatown in New York City around noontime and the Chinese woman writes a Chinese note: "Let's have lunch." The English man, using his memorized Chinese Room, writes an excellent Chinese note: "Great idea, I'd love some dim sum and shrimp lo mein." As the woman is about to escort him into a nice Chinese restaurant, he hands her an English note: "Let's have lunch." She replies with an English note: "Great, I'd love a hot dog with lots of chili!" Of course she is horrified when he pulls her into a Nedicks fast food place.


The problem, which also applies to Joel's Chinese Kitchen thought experiment, is that memorizing the Chinese Room (or the English Room) does not ground a person (or robot) who has not learned the language in the normal way. The opinions they express are not theirs at all, but rather the programmer's!

Your English man who has never tasted Chinese food but who has memorized the (computer-program-like) process of taking written Chinese orders and cooking them up is not grounded in the Chinese language or Chinese food.

You say he will learn to improvise if certain ingredients are not available. Well, he may well do so, but I would argue that is only because, as a human, he has a sense of smell and taste and savor and crunch.

If, for example, he substitutes soy sauce when he is out of salt, and if this substitution has not been programmed into the book of instructions, he does so because, perhaps by accident when a spill occurred, he has tasted soy sauce and noticed it is salty and because, as an eater of English food, he knows salt is also salty.

On the other hand, assuming the ingredients are labeled only in Chinese and he has never tasted them, even by accident, he may substitute sugar for unavailable salt, because both are white and granular, again using his human vision and knowledge of color, size, and shape.

Ok, say you allowed the English-only man in the Chinese Kitchen to taste and eat the Chinese food. Say you allowed him to hear his customers speaking in Chinese and not isolate him in a sound-proof room where written orders for Chinese food come through a slot. Say you allowed him Chinese TV along with English TV. Say you released him into the streets of Beijing.

Would he eventually learn spoken and written Chinese? Of course he would, but so would you. Perhaps he would have an advantage in that he already knows how to recognize and write Chinese characters.

The key is to get grounded in the real world. That is how we really learn things. You can be a PhD in physics and know everything about gyroscopic forces, but you will never learn to ride a bicycle unless you actually get on a bike and become grounded in the dynamics of balance.

Would a robot programmed with Chinese Room (or Kitchen) software and manual capabilities be grounded in the Chinese language or food? Not at all!

Perhaps, if it had sound and taste and smell sensors and software subroutines to interpret them, as well as learning routines, perhaps, just perhaps, it could get grounded but it would take a lifetime.

Meanwhile, that English man released from the Chinese kitchen is wandering around Beijing. A woman hands him a Chinese note: "Would you join me for lunch?" "Yes, I'd love some dim sum with hot shrimp!" he writes back in perfect Chinese as he makes a bee-line for the McDonalds!

Ira Glickstein

joel said...

Ira, given five years, maybe I can change your mind back the other way. :^) However, you know what they say about converts. But first we have to be clear about what we're debating. The following statement from Wikipedia, under "Chinese room," seems to me to suffice: Searle identified a philosophical position he calls "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."
The footnote is also useful in defining the debate:
"This version is from Searle (1999), and is also quoted in Dennett 1991, p. 435. Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." (Searle 1980, p. 1). Strong AI is defined similarly by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."

First of all, I would contend that the Chinese Room only demonstrates that the Turing Test fails to tell us anything. One can be fooled into believing responses are from a thinking, feeling, conscious or knowledgeable being. Fine. So what? I never liked the Turing Test.
The point I'm trying to make in Fox's Chinese Kitchen is that a room containing an ignorant human can do exactly what a machine can do. It can start with a set of instructions, it can learn by experience, and it can integrate what it learns into a new set of procedures, at which point it is knowledgeable.

You seem to ignore the possibility of feedback. If the customer refuses to accept a meal prepared with the substitution selected by our robot there's no reason why it can't note that and avoid the bad substitution. If your multinational dating couple have a miscommunication about what constitutes a good dinner, there's no reason why they will not use feedback from the other person to make corrections to their knowledge bases.
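The feedback loop described here amounts to nothing more than a substitution table that gets pruned whenever a customer rejects the result. All the names and ingredients below are hypothetical, chosen only to make the mechanism concrete.

```python
# Candidate stand-ins the kitchen will try when an ingredient runs out.
substitutions = {"salt": ["sugar", "soy sauce"]}

def cook_with(missing, accepted):
    """Try substitutes in order; drop any the customer rejects."""
    for candidate in list(substitutions.get(missing, [])):
        if accepted(candidate):
            return candidate
        substitutions[missing].remove(candidate)   # note it and avoid it next time
    return None

# The customer rejects sugar-for-salt; the kitchen learns and never tries it again.
choice = cook_with("salt", accepted=lambda c: c == "soy sauce")
print(choice)                 # soy sauce
print(substitutions["salt"])  # ['soy sauce']
```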

I contend that the problem lies in our inability to define thinking, consciousness, etc., in humans. As soon as we have a solid definition, we can implement it into a robot by the process of evolutionary learning. The most fundamental is to define "pleasure." In my opinion, all the rest flows from that. I won't go any further until I'm sure we're in agreement about the statement of the problem.

Howard Pattee said...

This discussion is déjà vu all over again. Turing Tests and Chinese Rooms are no longer of interest in the robotics field. It is now engineering with “situated” robots in real environments. My PhD student, Richard Laing, became NASA Team Leader in designing a Lunar Robot Factory.

Also see Andy Clark’s papers on robots in real environments. He has pretty well covered all the past arguments. Joel is right. The philosophical arguments are mostly over ambiguous definitions of key words, thinking, consciousness, etc.

The relevant reference literature is enormous!

Howard Pattee said...

The Figure 1 in the Moon Robot ref. seems to be missing. I don’t know what it was, but LOOK HERE for a Figure from a more detailed paper.

joel said...

Here's a scientific myth inspired by a video about sardines that I saw last night on TV. It's not far from a theory I recently saw on the internet concerning the ability of neurons to resonate in chorus like cicada locusts.
Once upon a time Zeus was bored. He went to Hephaestus and commanded him to build a plaything. "It must have a will of its own," he said. Hephaestus replied, "That's peachy, but what do you mean?" "I want it to do what it wants to do, not what I want it to do." Hephaestus scratched his head. "You mean that you want it, of all the things you have created, to be disobedient? The sea and winds obey your whim, but you want this thing to disobey your commands!" Zeus said he thought that might be amusing, so Hephaestus set to work.
He surveyed the entire world and settled on the lowly sardine for his model. The sardines form giant swarms that move as one, scurrying from here to there, impervious to the predators who prowl the Aegean Sea. They seem to act as a single intelligence, but each is independent. There is no leader, but those in the interior of the swarm are quick to follow their neighbors. Consensus on the direction of flight seems to be instantaneous yet unpredictable. After observing long and hard, Hephaestus constructed the human brain with all its unpredictability and named it for its apparent free will.
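The leaderless consensus of the sardine swarm is easy to caricature in code: each fish repeatedly turns toward the average heading of its fellows, plus a little noise, and alignment emerges with no fish in charge. The parameters are arbitrary, and a real swarm would average only over nearby neighbors rather than the whole school.

```python
import math
import random

def step(headings, rng, noise=0.05):
    """Each fish turns toward the swarm's mean heading, plus a small wobble."""
    sx = sum(math.cos(h) for h in headings)
    sy = sum(math.sin(h) for h in headings)
    mean = math.atan2(sy, sx)                 # consensus computed by the crowd itself
    return [mean + rng.uniform(-noise, noise) for _ in headings]

def spread(headings):
    """1 - |mean resultant vector|: 0 when perfectly aligned, near 1 when random."""
    n = len(headings)
    rx = sum(math.cos(h) for h in headings) / n
    ry = sum(math.sin(h) for h in headings) / n
    return 1 - math.hypot(rx, ry)

rng = random.Random(7)
headings = [rng.uniform(-math.pi, math.pi) for _ in range(50)]
before = spread(headings)
for _ in range(20):
    headings = step(headings, rng)
print(spread(headings) < before)              # True: the swarm has aligned
```

The "unpredictable" part of the myth is the noise term: the common direction the school settles on depends on the random initial headings, not on any command from outside.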

Ira Glickstein said...

Joel writes: "First of all, I would contend that the Chinese Room only demonstrates that the Turing Test fails to tell us anything. One can be fooled into believing responses are from a thinking, feeling, conscious or knowledgeable being. Fine. So what? I never liked the Turing Test."

Great - You are on your way to rejecting the AI mass delusion from which I am still recovering.

Joel quotes Searle and others who distinguish "strong" AI from "weak" AI. I have no trouble with "weak" AI. I have not (yet) encountered "strong" AI, but hope to at some time in the future. Indeed I have published concepts that might achieve "strong" AI, but I agree with Searle it won't be with a traditional programmed computer. (By the way, Searle specifically accepts that the animal brain is an electro-chemical machine and that, since such a machine is capable of "strong" intelligence and intentionality and consciousness, etc., then we cannot exclude the possibility a copper and silicon machine could have the same "strong" properties.)

"WEAK" AI: I fully accept that Joel's Chinese Kitchen could learn to substitute ingredients and even improve the initial recipes by trial and error variations and customer feedback, given properly programmed learning algorithms. Computers today do things that, when done by humans, are rightly considered to be evidence of high levels of intelligence and education. My career was devoted to development of computer systems that often did these types of tasks better and faster than humans.

"Weak" AI systems are grounded via the intelligence and intentionality of their human designers, programmers, and users. This is borrowed intelligence and, while useful, is empty of any real understanding. (Like a student who learns by rote and recites answers but cannot explain them, or a worker who doggedly follows formal procedures written by others and cannot handle exceptions to them.)

"STRONG" AI: This will not be achieved by starting with a traditional programmed computer and allowing it to learn by feedback, even if it is "situated" in a robot with sensors that interact with the environment. Howard mentions such projects by one of his students and others, and they are impressive and well designed, and I want to see my tax money support them. However, they are merely "weak" AI taken to the next level and will never achieve "strong" AI.

Unlike Dennett, I believe consciousness is not an illusion. It is real. In biological life, it is a property that starts with biological cells that have evolved over billions of years. Each cell is conscious of its environment. It exchanges chemical and electrical signals with nearby cells and the outside environment. It senses and absorbs nutrition, and rejects toxins. It experiences hunger and satisfaction at the lowest level of its reality. It is the descendant of a long line of biological cells going back 3.5 billion years that have learned these things the hard way.

Until we figure out how to make copper and silicon components that are as conscious and connected to their environments as biological cells, and connect them to make systems, we will not achieve "strong" AI. We'll have to evolve it from the bottom up, not the top down.

Ira Glickstein

Howard Pattee said...

Ask a more general question: How do we decide how similar in behavior two objects can be if they are not identical in their structure? A classical analogy in physics is the damped pendulum and the electrical resonant LC circuit with resistance. The equations can be symbolically exactly the same, so in one sense their behavior is exactly the same.
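The classical analogy Howard mentions can be spelled out explicitly. Treating the small-angle damped pendulum as a linear mechanical oscillator, the two systems obey the same second-order ODE (this is a standard textbook correspondence, added here only as an illustration):

```latex
% Damped mechanical oscillator (mass m, damping c, stiffness k):
m\ddot{x} + c\dot{x} + kx = 0
% Series RLC circuit (inductance L, resistance R, capacitance C, charge q):
L\ddot{q} + R\dot{q} + \tfrac{1}{C}\,q = 0
% Under the correspondence m \leftrightarrow L,\; c \leftrightarrow R,\;
% k \leftrightarrow 1/C, the two equations --- and hence the two
% "behaviors" x(t) and q(t) --- are symbolically identical.
```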

What we mean here by “behavior” is only one specific type of behavior under specific isolated boundary conditions. This identical behavior is the abstract displacement from equilibrium of a mass or a voltage. But mass and voltage in general behave entirely differently when subjected to different boundary conditions, forces, or measurements.

The same is the case for any two objects. A computer and a brain could in principle be structured to behave exactly the same way in a given chess game. However, they would behave entirely differently if subjected to a different game or a different environment. The same is true even for two brains. Ira’s brain and my brain may behave exactly the same way for solving a math problem, but entirely differently for solving a political problem.

Next question: If two structures are exactly the same in every structural detail and have been subjected to exactly the same historical environmental forces would their behavior necessarily be exactly the same? Ira would say, Yes, because he has faith in determinism. I would say, No, because I have faith that nothing is exact.

Ira Glickstein said...

Howard has hit the nub of the issue. Identical behavior does not necessarily imply identical structure. The same equations may describe the behavior of two very different objects, and two outwardly identical objects may be described internally by different equations.

For example, imagine three outwardly identical digital clocks. The first is stand-alone, the second is updated by a radio signal from an external time standard at midnight, and the third is continually updated by a radio signal from the external time standard. Can you distinguish them without cracking them open?

Yes, given time you can. On the first day of observation, for the first few hours, they all give the exact same time. However, during the course of the day, they begin to diverge a bit. At midnight #2 jumps a second or two and again agrees with #3. The second day #1 diverges more from the others. It would take a few days, but careful observation would reveal the differences.
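Ira's three-clock thought experiment can be sketched in a toy simulation. The drift rates and the day-and-a-half observation window below are invented for illustration; the point is only that the three synchronization schemes produce distinguishable error patterns over time.

```python
# Toy model of the three outwardly identical clocks (illustrative only).
def simulate(seconds, drift1, drift2):
    """Return (clock1, clock2, clock3) readings after `seconds` true seconds.

    clock1: free-running, gains drift1 extra seconds per second
    clock2: gains drift2 extra seconds per second, snapped to true
            time at each midnight
    clock3: continuously disciplined by the standard; always exact
    """
    c1 = c2 = 0.0
    for t in range(1, seconds + 1):
        c1 += 1 + drift1
        c2 += 1 + drift2
        if t % 86400 == 0:        # midnight: clock2 resyncs to the standard
            c2 = float(t)
    return c1, c2, float(seconds)

# A day and a half of observation with a 20 ppm drift rate (invented):
c1, c2, c3 = simulate(129600, drift1=2e-5, drift2=2e-5)
# clock1's error keeps accumulating (~2.6 s here), while clock2 carries
# only the drift since the last midnight sync (~0.9 s here).
print(round(c1 - c3, 1))
print(round(c2 - c3, 1))
```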

Assume a situated programmed computer robot. It has visual, aural, touch, taste, and balance sensors equivalent to that of a normal human. On the surface, it is indistinguishable from a human being. Assume it plays tennis and chess, argues about sports and politics, composes passable poetry, etc.

Assume further it carries a blueprint that documents its mechanical design and software. It can reproduce itself (or any other similarly documented robot) by assembling parts from a depot. Assume that reproduction includes (sexual-like) crossover between the documentation of two such robots and some probability of mutation, and that the resultant robot carries a copy of the mutated documentation, so the robots can evolve by preferential selection of those that fit the environment best.

I believe careful observation could distinguish humans from these situated robots, even many generations from now, when evolution and selection have had their optimizing effect. (Reading the above, which I think is the best chance we have of creating truly intelligent robots, I have momentarily lapsed into my old AI mass delusion - Howard SAVE ME!)

Ira Glickstein

PS: On Howard's last point, assume the Universe is both discrete and finite. Would that not undermine Howard's "faith that nothing is exact"? (Absent hardware failure, and assuming time synchrony of inputs, two identical computers will run identical software and provide bit-for-bit identical outputs. That is the basis of the four synchronized computers in the Space Shuttle.)
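The bit-for-bit claim in the PS can be illustrated with a toy deterministic "program." Everything here is invented for the sketch (a seeded pseudo-random generator stands in for the machine's internal state, and a hash stands in for its output); it does not model the Shuttle's actual voting architecture, only the idea that identical software plus identical inputs yields identical output bits.

```python
# Sketch: two idealized, failure-free "computers" running the same
# software on the same inputs produce bit-for-bit identical outputs.
import hashlib
import random

def run_machine(seed, inputs):
    """Deterministic 'computer': same seed + same inputs -> same output bits."""
    rng = random.Random(seed)          # internal state fully fixed by the seed
    state = hashlib.sha256()
    for x in inputs:
        state.update(f"{x}:{rng.random()}".encode())
    return state.hexdigest()

inputs = [3, 1, 4, 1, 5, 9]
out_a = run_machine(42, inputs)   # "computer A"
out_b = run_machine(42, inputs)   # "computer B", identical hardware/software
print(out_a == out_b)             # True: bit-for-bit identical
```

Howard's reply below is precisely that the premise "failure-free" is doing all the work: real hardware only approximates this determinism statistically.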

joel said...

Ira said:
Unlike Dennett, I believe consciousness is not an illusion. It is real. In biological life, it is a property that starts with biological cells that have evolved over billions of years. Each cell is conscious of its environment.
Joel responds: When I read Ira's remark above I thought he had completely gone around the bend to extend the word consciousness to individual cells. I thought a bit more about it and recalled something vague about "microtubules." I did a search and voila! (God bless the internet.) Here for your astonishment and pleasure is an excerpt from http://www.basic.northwestern.edu/g-buehler/nerves.htm

Title: Are microtubules the 'nerves' of the cell?


A most important required step towards the concept of an 'intelligent' cell is to identify the specific structures and mechanisms which mediated between the light detection at the cell center on one hand and the extension of specific pseudopodia at the peripheral cellular cortex on the other. The mediator mechanism could not be explained by diffusible, chemical signals. Such signals would travel into every possible direction and, thus, would not be able to specify a particular direction for the extension of a pseudopodium. Therefore, the signals had to be confined to individual tracks that connected the cell center with specific locations of the cell periphery. The most promising candidate for this function seemed to be the microtubules. The microtubules radiate away from the center of the centrosome. Originating at this center they lead unbranchingly to the cellular cortex which contains the autonomously motile microplast domains. The situation is very reminiscent of nerves connecting the brain (centrosome) to a set of muscles (microplasts). The image shows some fuzzy spots in the center which are grazing sections of the microtubule organizing centers near the centrioles which we consider the eyes of the cell.
Another line of arguments to support microtubules as good candidates for cellular 'nerves' comes from experiments that interfere with microtubules: If anti-microtubular drugs are given to the cell it can still move all parts of its body, but the remarkable coordination of the typical shape changes is lost. This led to the following question. Are any signals, indeed, propagated along the microtubules to the cell cortex in response to pulsating near-infrared light? If so, how can they be detected?

Howard Pattee said...

Ira, you say, “Absent hardware failure, and assuming time synchrony of inputs, two identical computers will run identical software and provide bit-for-bit identical outputs.”

Now, because we have been here before, I know what you are thinking when you write these words. You are thinking "classical (Laplacean) determinism," and given that assumption I would agree with your conclusion. But I don't agree with your assumption. Your words "Absent hardware failure" beg the question.

In present quantum theory “classical determinism” is impossible. Even in computers determinism is only justified as a very good statistical approximation. In fact the only reason computer hardware is coded in bit strings instead of Java or English is that bit strings have the lowest failure statistics. (Turing and von Neumann explained why.)

Approximate determinism is infinitely distant from determinism. For example, any event with a probability of 1 (one), that is, a deterministic event, can be expressed as an infinite string of nines: 0.999999999 . . .
However, any finite string of nines, even if it stretched from here to Florida (about a billion nines), is, I think you would agree, not equal to 1; therefore the event is not deterministic, even though for all practical purposes you would be willing to bet your life on it.
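Howard's billion-nines point can be checked in exact rational arithmetic. This is a throwaway sketch (the helper name and the sample lengths are invented); it just confirms that a finite string of n nines equals exactly 1 - 10^-n, which is strictly less than 1 for every finite n.

```python
# A *finite* string of nines, however long, is strictly less than 1,
# though the gap shrinks by a factor of ten per digit.
from fractions import Fraction

def nines(n):
    """Exact value of 0.99...9 with n nines, i.e. 1 - 10**-n."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 10, 100):
    v = nines(n)
    print(n, v < 1, float(1 - v))   # the gap is exactly 10**-n
```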

Ira Glickstein said...

Joel, thanks for the info on microtubules in cells. They seem to imply a primitive "nervous system" and centralized "brain" in individual cells. I did not know about them, and I think this strengthens my argument that consciousness starts in the most basic building blocks of the electro-chemical machines we call biological life.

Now, how can we create something like cells in a machine made of copper and silicon? I do not know if we can or not, but I cannot exclude the possibility.

Howard, you seem to have ignored my point about the Universe being both finite and discrete.

DISCRETE: We know that energy comes in quanta which implies that matter is also quantized and not infinitely divisible. If space/time is also quantized and not infinitely divisible, then the Universe is discrete.

FINITE: Adding the assumption of finitude to a discrete Universe, it would contain a finite number of energy/matter quanta as well as a finite number of space/time cells.

Given these assumptions, a finite number of bits would describe the current state of the Universe.

Yes, I am assuming the Universe is a finite state machine. I believe Einstein and Spinoza would likely go along with this argument. Given a finite and discrete Universe, (which I know is hard to swallow), then probability is nothing more than a convenient shortcut method of keeping track of stuff and making pretty good predictions when we don't know the exact conditions.
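Ira's finite-state-machine picture can be sketched with a toy deterministic automaton. The 16-bit state space and the transition rule below are entirely invented (nothing here depends on physics); the sketch shows the two properties his argument leans on: a deterministic rule replays the same history from the same initial state, and a finite state space forces the trajectory to eventually cycle.

```python
# A toy "Universe as finite state machine": 16-bit state, arbitrary
# deterministic transition rule.
def step(state):
    """One deterministic transition on a 16-bit state (invented rule)."""
    return ((state * 5 + 1) ^ (state >> 3)) & 0xFFFF

def history(initial, steps):
    """Replay `steps` transitions from `initial`; fully determined by both."""
    states = [initial]
    for _ in range(steps):
        states.append(step(states[-1]))
    return states

print(history(0x00A5, 5) == history(0x00A5, 5))  # True: same start, same history
# A finite state space forces eventual recurrence:
seen, s = set(), 0x00A5
while s not in seen:
    seen.add(s)
    s = step(s)
print(len(seen) <= 2**16)  # True: the trajectory must revisit a state
```

Howard's reply below is that determinism of the abstract automaton begs the question: any physical implementation of `step` obeys probabilistic laws.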

Ira Glickstein

Howard Pattee said...

Your assumption that a finite state automaton is deterministic simply begs the question. Any implementation of a finite automaton must obey nature’s probabilistic laws. The laws may give state transition statistics very close to 1, but as I illustrated, determinism requires a transition with probability of exactly 1.

Whether the universe is continuous or discrete does not affect the experimental evidence that the laws of nature are probabilistic. Finiteness and discreteness could refer only to the physical states of the universe (the initial conditions). The concept of determinism refers only to how states change to the next state, the state transitions (the physical laws).

According to the laws of quantum mechanics it is impossible (the uncertainty principle) to measure the state (e.g., position and momentum) with deterministic precision even if it is a discrete state. But most relevant is the fact that all you can predict about the next state from quantum laws is a probability.

I have agreed with you before that it is very possible that current scientific theories (i.e., theories that can be experimentally tested) turn out to be incomplete (Einstein’s wishful thinking). Until that happens, any concept of determinism remains only a figment of the imagination, a myth that simply imagines that the current theories are fundamentally wrong. Why would a conservative mind promote such myths ;)

joel said...

Howard, thanks for the link to the works of Andy Clark. He seems to be an excellent writer with some very innovative ideas. I'm especially fond of his description of the integration of the mole cricket with its environmental Klipsch horn.

Ira Glickstein said...

Howard asks (regarding my continuing faith in classical determinism despite the apparent evidence from modern quantum physics): "Why would a conservative mind promote such myths ;)"

I'm just CONSERVING the classical, time-tested, causal views of Spinoza from the 1600s and Einstein from the 1900s. You embrace the self-described "weird" views of upstart whipper-snapper physicists :^)

If the science of "uncertainty" prevails for another 100 years, I'll consider changing my views.


Ira Glickstein