Monday, June 23, 2008


Ira suggested that I try to summarize the field of BIOSEMIOTICS ― the study of how symbol systems control living organisms and societies. I’ll try to do this in a series of short posts of less than 750 words. Then you can ask questions if I am unclear, make comments or disagree with what I have said. Hopefully, we can clear up the problems, and go on to the next post.


A symbol system, like the genetic code, a natural language, mathematics, or an artificial computer language, requires a set of symbols and rules that reside in a memory. Memory-stored symbols are the fundamental and essential requirement for life: they are required for self-replication and all open-ended evolution. Memory is also necessary for any learning and thinking process in nervous systems, and it is a requirement for universal computation.

Memory and symbols can be physically implemented in endless ways, in molecules like DNA, in texts like this page, in photographs, in digital magnetic, electric and optical patterns in computers, and in neural patterns in the brain. But these particular types of memory are not what make memory of fundamental importance. So, what are the properties of memory that are essential in evolution, learning, and computation?

Two essential properties of memory are PERMANENCE and CHANGEABILITY. These properties sound incompatible, but they are complementary. In a previous post on the C- and L-minds, I compared permanence to the CONSERVATIVE aspect of memory, and I compared changeability to the LIBERAL aspect of memory. Clearly, success in adaptive evolution, learning, and social systems requires the proper balance of conservative permanence and liberal change. That is why I disagree with any liberal or conservative who claims an ideological superiority.

A good memory must also be quickly accessible, and its symbols must have the ability to effect or control a specific change. A gene must be capable of controlling protein synthesis. A brain must be capable of controlling muscles. A computer memory must be capable of changing the state of the hardware. In all of these symbol systems, genes, brains and computers, the memories have also evolved the property of self-reference. That is, genes can control their own expression, brains can think about their own thoughts (e.g., consciousness), and computer programs can address themselves. This turns out to be a mixed blessing. On the one hand, it allows organisms, brains and computers to inspect internal predictive models of the world. On the other hand, self-reference can lead to contradiction, infinite regress, and undecidable questions like whether we have free will.
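
Self-reference and its infinite regress are easy to demonstrate in code. Here is a minimal Python sketch (the structure and names are invented for illustration): a memory that contains a reference to itself can be inspected, but cannot be exhaustively traversed:

```python
# A self-referential memory: a structure that contains itself.
memory = {"name": "model of the world"}
memory["self"] = memory            # the memory now refers to itself

# Inspection works: the model can examine its own entry...
assert memory["self"]["self"] is memory

# ...but a naive exhaustive traversal would regress forever, so
# Python's printer cuts the cycle short and shows it as {...}.
print(memory)   # {'name': 'model of the world', 'self': {...}}
```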

While it is clear that evolution, learning, and computation could not occur unless memory has some degree of permanence and some degree of change, the nature and results of the changes are different in all three cases. In evolution, memory change is called mutation or variation, and changes are largely random. Natural selection determines the ultimate results. In nervous systems, memory change is called learning. Learning is more complex and includes instruction, experience, reorganizing existing memory (thought or reasoning), random or directed search, and often cultural selection. In computation, memory change is often called recursion or rewriting. A memory-stored program usually determines change, but programs can simulate random change and model evolution, learning, and thinking.
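
As a toy illustration of that balance (not a model of any real organism; every name and number here is made up), a Python sketch in which accurate copying supplies the permanence, random mutation supplies the change, and selection does the rest:

```python
import random

random.seed(1)
TARGET = "ADAPTED"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(genome):
    # how many symbols match the (arbitrary) environment's demands
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.2):
    # changeability: each symbol may be randomly rewritten
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

genome = "XXXXXXX"
for generation in range(200):
    # permanence: the parent is kept alongside its variant offspring
    offspring = [mutate(genome) for _ in range(50)] + [genome]
    genome = max(offspring, key=fitness)    # natural selection

print(genome)   # after many generations, close to (or at) TARGET
```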

Here is the classical problem of symbolic memory. The peculiar fact is that the physics of memory ― that is, the laws governing the material structures of memory symbols ― has no necessary relation to the function or meaning of the symbols. Symbol vehicles obey physical laws, but analysis of these diverse physical structures does not tell us what is important, namely the function or meaning of the symbols. Neither does analysis of these physical embodiments of memory tell us how the behaviors of memories differ in evolving organisms, brains and computers. Physical laws alone cannot predict or usefully describe the course of evolution, learning, thinking, or computation. Briefly, the problem is that symbols are arbitrarily related to their meaning or referent. The meaning or function of symbols is determined by a code or an interpreter. Symbols do not exist alone, but are a part of a language.
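
A short Python sketch of this arbitrariness (both code tables are invented): the same symbol string, read under different codes, controls entirely different outcomes, so its meaning lives in the interpreter, not in the physics of the symbols:

```python
# One symbol string, two interpreters: meaning is determined by
# the code, not by the symbols' material structure.
message = "ACD"

code_for_colors = {"A": "red", "C": "green", "D": "blue"}
code_for_moves  = {"A": "advance", "C": "climb", "D": "descend"}

print([code_for_colors[s] for s in message])  # ['red', 'green', 'blue']
print([code_for_moves[s] for s in message])   # ['advance', 'climb', 'descend']
```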

This fact has been a problem since the beginning of philosophy. It is
the root of the classical body-mind problem. Today in physics it is the basis of the measurement problem ― how the irreversible process interpreted as a measurement can arise from state-determined reversible laws. Some physicists also see this as an energy-information dichotomy. In biology this is the crux of the origin of life problem, how did this symbolic control of matter begin? How did molecules become messages? I call this the symbol-matter problem.


Ira Glickstein said...

Thanks Howard for starting a new series of Topics on Biosemiotics aimed at a GENERAL AUDIENCE.

Your first installment, on What is Memory is at exactly the right level and I hope Blog members who may not be familiar with this Topic area will invest the time to understand it.

Everyone, PLEASE, PLEASE post your questions and comments and take advantage of Howard's recognized expertise.

Most of us take for granted signs and symbols and so on without giving them much thought, or even recognizing the difference between a sign and a symbol. All we know about memory is that, as we age, we lose our minds!

SIGNS: When an animal is angry or fearful, it will bare its teeth and growl and raise its fur to make itself look more threatening. That is called a sign because the visual and audible effects are directly related to the message. Nearly all animal communication consists of signs as does much of human communication.

When I use an "emoticon" like :^) or :-( that is a sign because it looks like a person smiling or frowning.

SYMBOLS: On the other hand, humans (and some other animals) have the ability to communicate with symbols, such as words. The words "LOVE" and "SHOVE" have three letters in common but they convey totally different messages. "LOVE" stands for a warm emotion and feeling that is common to each of us, but the letters "L", "O", "V" and "E" are ordinary letters of the alphabet that are not particularly loving and are not directly related to the message we receive when they are put together to form the word "LOVE".

Similarly, in mathematics, 0 (zero) is the symbol for nothing at all, and ∞ (infinity) is the symbol for more than everything. They each use about the same amount of ink and take up the same amount of space but have totally opposite, and absolutely abstract, meanings.

SYMBOL SYSTEMS - The key message from Howard is that symbol systems are necessary not just for high-level human communication but also for things like the genetic code.

One of Howard's key contributions to the academic discipline now known as biosemiotics is the concept of semantic closure now known as semiotic closure. That is the idea that symbols are generated, stored in memory, and then interpreted later. This principle is the key to understanding how all biological systems reproduce themselves and, in the process, evolve to forms better suited to the changing environment. It is also the key to mathematics, science, and computation, among other things!

SIMPLE EXAMPLE - Let us start simple and assume you have a dining room table you are fond of. You would like to hire a carpenter to reproduce one for each of your children. One way to do this would be to measure each part of the table in detail, reproduce that part, and put the parts together. That sounds great, but would it work over many generations?

The answer is it would not work well at all! Why not? Well, say your table was damaged in use (a broken leg, a bad stain, a burn, etc.) Perhaps it was repaired (with glue and screws, paint, etc.) Reproduction by careful examination would copy each of these faults and repairs. Over several generations, your family would have poorer and poorer tables. (Like repeatedly xeroxing a document.)

The right way to reproduce that table would be to get a diagram ("blueprint") with construction instructions and build the new tables from that information. Of course, you would have to reproduce that diagram and give a copy, along with the reproduced tables, to each of your children so they could reproduce it and the table for their children.

That diagram is an example of what Howard means by "memory". The diagram is a symbolic 2D representation of a 3D object. The inked lines and words have no direct relationship to the table. However, any competent carpenter can interpret that diagram and reproduce a new, undamaged table, just like the one you had when it was brand new!
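
The table example can be sketched in a few lines of Python (the model and its numbers are purely illustrative): copying the artifact copies and accumulates its damage, while building from the symbolic blueprint starts fresh in every generation:

```python
import random
random.seed(0)

BLUEPRINT = ("top", "leg", "leg", "leg", "leg")   # symbolic description

def build(blueprint):
    # a competent carpenter: interprets symbols into fresh parts
    return list(blueprint)

def copy_artifact(table):
    # measuring the object itself: faithfully copies wear and damage
    copy = list(table)
    if random.random() < 0.5:                     # damage happens in use
        copy[random.randrange(len(copy))] += " (damaged)"
    return copy

by_artifact = build(BLUEPRINT)
by_blueprint = build(BLUEPRINT)
for generation in range(10):
    by_artifact = copy_artifact(by_artifact)      # copies inherit every fault
    by_blueprint = build(BLUEPRINT)               # each copy is made from symbols

print(by_artifact)    # faults pile up over the generations
print(by_blueprint)   # ['top', 'leg', 'leg', 'leg', 'leg']
```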

BIOLOGICAL REPRODUCTION - Amazingly, symbol systems were invented way, way, way before humans existed on Earth! Around 3.5 BILLION years ago, with the advent of primitive DNA-based life, that solution was adopted by ALL biological organisms.

The memory consists of sequences of patterns of atoms that make up a long-chain molecule called DNA. Like the lines and words on the diagram for the table, they have no direct relationship to the life form they code for. Yet, when that DNA is copied and contained in cells inserted into a competent womb, those cells reproduce and develop into a brand new baby. The baby gets brand new eyes and legs and internal organs even if its mother and father have damaged or worn out organs.
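
Here is a sketch of that decoding step in Python, using a few real entries from the standard genetic code (the function itself is, of course, a cartoon of the cell's ribosomal machinery):

```python
# A few real entries from the genetic code: arbitrary base triplets
# mapped to amino acids by the cell's decoding machinery.
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "AAA": "Lys",
    "TGG": "Trp", "GGC": "Gly", "TAA": "STOP",
}

def translate(dna):
    protein = []
    for i in range(0, len(dna), 3):       # read three bases at a time
        amino = CODON_TABLE[dna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTAAATAA"))   # ['Met', 'Phe', 'Lys']
```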

The baby also gets a copy of the DNA code in each and every cell of its body. That is the memory it will use to reproduce babies of its own.

Even more amazing is that with sexual reproduction (instructions coming partly from father and partly from mother), and with some random mutations in the copying process, the resultant baby will be somewhat different from both. Some offspring will happen to be better adapted to the environment and will survive and reproduce more effectively than others. That allows a species to adapt to new conditions and threats and opportunities and also to evolve into other species. What a great symbol system!

ANALOGY TO L- and C-MINDS - Howard ingeniously related the critical properties of memory PERMANENCE ("C-mind") and CHANGEABILITY ("L-mind") to our past discussions. BOTH ARE NECESSARY. (However, I love the analogy because, quite clearly, accurate copying of DNA is more critical, at least in the short run, than the random mutations necessary for biological adaptation. Score one for the C-minds! :^)

Ira Glickstein

Howard Pattee said...


I use the word “language” in its most general sense of a finite (small) set of physically arbitrary symbols and rules that can form an endless number of expressions that can be interpreted to control or represent events. The origin of natural language is as difficult to discover as the origin of life because spoken language leaves no trace. There are about 6000 natural languages on Earth, and while many are related, there are also differences that suggest many independent inventions of language. Linguistics is a controversial subject, but many linguists claim that most natural languages are more or less equal in their expressive power.

What does “expressive power” mean? How do we compare the expressive power of natural language with genetic expressions, mathematical expressions and computer languages? Genes and Java can’t express metaphors and emotions. On the other hand, natural language cannot synthesize proteins. The concept of “expressive power” of a language makes sense only in a defined universe of reference.

For example, the so-called Universal Turing Machine is actually a single machine that is universal only because of the expressive power of its formal language, which can describe all possible Turing machines (and therefore all computable functions). The genetic language is expressively powerful in the universe of nucleic acids and proteins because it can describe all possible nucleic acids and proteins. More precisely, genes can control the synthesis of copies of themselves as well as of all possible proteins. But notice, the chemical synthesis is not done by the genetic description itself but by the protein enzymes it describes. So we have the primeval chicken-egg problem ― enzyme synthesis requires the gene’s description, but the gene’s description is executed by enzymes.
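
The idea that a universal machine is just fixed machinery executing a symbolic description can be sketched in Python (this toy Turing machine and its encoding are invented for illustration; the description is inert data until the interpreter runs it):

```python
# The "machinery": a fixed routine that executes any machine description.
def run(description, tape, state="start", head=0, steps=1000):
    tape = dict(enumerate(tape))
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, " ")               # blank cells read as " "
        write, move, state = description[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip()

# The "description": a unary successor machine, itself just data.
successor = {
    ("start", "1"): ("1", "R", "start"),   # skip over the existing 1s
    ("start", " "): ("1", "R", "halt"),    # write one more 1, then halt
}

print(run(successor, "111"))   # '1111'
```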

This is the origin of life problem (or the semiotic closure problem), and no one has solved it. At the much higher level of brains, the classical dualists like Descartes, Leibniz and Spinoza saw this as the mind-matter problem. Spinoza in the Ethics states the problem: “Body cannot determine mind to think, neither can mind determine body to motion or rest, or any state different from these, if such there be.” Today, few scientists are dualists, and some neuroscientists think they have solved the mind problem with their brain models. On the other hand, physicists know there is an irreducible complementarity between information-based symbolic events and energy-based lawful events. This is called the measurement problem. Nobody agrees on answers to these problems, so I will go back to what we know about language.

According to my broad definition of language ― a small set of physically arbitrary symbols and rules that can form an endless number of functional expressions ― the genetic system is a language, as is computer language and human language. What I think is more interesting is why natural language is so unlike genetic and computer languages. We know in detail the syntax and functions of genetic language and computer language. Their rules and functions are simple, explicit, and literal.

By contrast, human language is very complex. Its many figures of speech violate its own syntax. Its meanings are ambiguous, highly context-dependent, and often metaphorical. Most arguments are largely the result of misinterpretations caused by word ambiguity, and libraries are full of inconclusive attempts to verbally disambiguate concepts like truth, justice, virtue, pornography, equal rights, and C- and L-minds.

So, what is human language good for? One answer comes from asking what language is most commonly used for. For as long as we have any evidence, and probably long before that, humans have used language to tell stories. These stories are mostly about humans, not just about their behavior, but about their instincts, emotions, and especially their imagination ― fear, heroism, evil, love, hate, jealousy, kings, gods, and wishful thinking.

Very old stories we often call myths. Stories that catch on in a group and propagate for many generations (memes) are often perceived as true and become sacred texts like the Torah, the Gospels, and the Koran. Modern stories we call fiction. Fiction that has lasted long enough we call the Classics, like the Aeneid and Shakespeare. Recent stories that are popular are called best-sellers, like Harry Potter. Who knows what Ira’s story “2052 - The Hawking Plan” will become? Conclusion: Human language is good for telling stories.


Howard Pattee said...

Ira likes my analogy that C-minds preserve stories, while L-minds like to vary them. He says accurate copying of DNA is more critical, at least in the short run, than the random mutations necessary for biological adaptation. Score one for the C-minds! :^)

To even the score, I would say that depends entirely on how adaptive the story is for the short run; and anyway, evolution is for the long run. Also, the origin of the replication story was probably a random “frozen accident.” (e.g., Manfred Eigen’s hypercycles, Stuart Kauffman’s random nets)

Ira Glickstein said...

I expected your second installment Biosemiotics "Topic" on "Language" to be posted as a new Topic ("New Post"), rather than as a Comment to your first installment.

That is OK, but if you wish to re-post it as a "New Post" that would also be OK. (I try to be flexible :^)

I think you have described what the general meaning of "language" is in as simple a way as possible, given the complexity of the subject.

We speak (in our natural language, English) about computer languages and the language of genetics because we see deep similarities. Yet, at the same time, there are gaping differences. Natural languages can capture and express metaphors, and genetic languages can code for and generate protein sequences, but not vice-versa.

Computer languages are new and appear quite limited. All they can do is control digital computers. But, some would say, an advanced computer connected to sensors and actuators could be programmed to listen, see, speak, and write using metaphors and even create new ones. A computer-controlled genetic engineering lab could even synthesize proteins, including new ones that don't exist in natural biological evolution!

I do take issue with you on one philosophical point. You say, in part: "...the classical dualists like Descartes, Leibniz and Spinoza saw this as the mind-matter problem. Spinoza in the Ethics states the problem: 'Body cannot determine mind to think, neither can mind determine body to motion or rest,...'"

I disagree with your classification of Spinoza as a "classical dualist". He rejected the classical concept that mind ("thought") is a different substance from body ("extension") and taught that these were merely two of the many aspects of the Universal substance. As one summary puts it:

"Spinoza contended that 'Deus sive Natura' ('God or Nature') was a being of infinitely many attributes, of which extension and thought were two. His account of the nature of reality, then, seems to treat the physical and mental worlds as one and the same. The universal substance consists of both body and mind, there being no difference between these aspects. This formulation is a historically significant solution to the mind-body problem known as neutral monism. The consequences of Spinoza's system also envisage a God that does not rule over the universe by providence, but a God which itself is the deterministic system of which everything in nature is a part."

Finally, I agree with Howard that natural languages are for telling stories. Folk tales come and go but the oldest and most enduring become myths and perhaps even memes and, generations later, the most enduring become sacred texts.

Howard asks: "...Who knows what Ira’s story '2052-The Hawking Plan' will become?"

Well, it has come and will probably go with little notice. In my dreams it is picked up by some august critic and becomes a best-seller and later a classic. Failing that, as time goes by it is bound to be discovered in the ancient relics of the Internet and be recognized as the prophetic message of the ages. (Yeah, right :^)

Ira Glickstein

Howard Pattee said...


I don’t want to get into the mind-body or symbol-matter problem, even though I have said a lot about it from a physicist’s point of view.

I want to start by emphasizing the close interrelation between genes, natural language, and formal (artificial) languages, or (pace Joel) genes, memes, and temes. I’ll start it as a new topic. I’m using as one source a new survey of evolutionary linguistics by Christine Kenneally, "The First Word" (Penguin, 2007).

Of course I agree that Spinoza was not a Cartesian or Leibnizian dualist. However, he did have a problem with the mind-body relation about which there is endless dispute as to what he was thinking. See e.g.,


Deardra MacDonald said...

Thank you Ira for suggesting this important topic, and thank you Howard for writing this article and future articles on Biosemiotics. The first time I read an article on Biosemiotics I was impressed, because it made total sense in helping me to understand the process of message/symbol-system exchange as an indispensable characteristic of all terrestrial life forms.

When I started reading articles on Biosemiotics, it was very helpful to me to read about the cuttlefish's biosemiotics. It made me realize how important the study of Biosemiotics is, because it opened an entirely new understanding of how to view human societies! Ahh!

Yes, I am beginning to understand the concept that symbol systems are necessary and are a part of the genetic code.

With respect as always, Deardra

Ira Glickstein said...

Thanks Howard for the link to Spinoza on Mind and Body.

The linked item exposes an apparent contradiction between two statements by Spinoza:

(1) "the mind and the body are one and the same thing, which is conceived now under the attribute of thought, now under the attribute of extension."

(2) "the body cannot determine the mind to thinking, nor can the mind determine the body to motion or rest, or to anything else."

The linked item suggests some approaches to reconcile the statements and I have thought of a different way to make sense of them.

Say you and I are observing an electron and I watch it pass through a single slit while you watch the same electron later pass through a double slit. To me, the electron acts like a particle while to you it acts like a wave. As we are observing one and the same electron, we could say the two aspects of the electron, the wave-like aspect and the particle-like aspect are two (of the possibly many) aspects of "one and the same thing."

At the same time, we could agree that the particle-like aspect "does not determine the" electron to wave-like action nor the reverse.

A simpler example would be you and me viewing a soda can from different directions. You observe a flat, silvery disk and I see a curved, multi-colored rectangle. Yet, the soda can we are looking at "is one and the same thing". The disk does not directly determine the rectangle nor the reverse.

If we pass the soda can past a grocery checkout, the scanner will "see" neither the disk nor the rectangle but only the bar code.

Why do you and I (and the bar code scanner) make such different "measurements" of what we know is one thing?

Well, of course, the scanner was designed to see only one aspect of packages, namely bar codes. You and I were both "designed" by evolution to see in two dimensions. We made different measurements of the electron and the can because we happened to be looking at different aspects of these things.

I think Spinoza would say the Universal substance has many, many aspects, but humans have been "designed" (by evolution we would now say) to be capable of conceiving only two aspects, namely thought (mind) and extension (body). Mind and body appear quite different to us, and mind as mind cannot be physical, nor can body as body be mental.

Ira Glickstein

joel said...

In discussing memory, Howard offered the notion that memory needs to have permanence and changeability. These are definitely two important properties of memory, but there are other properties we need to be aware of in order to understand biological and non-biological systems. For example, one notion dating back to Freud (at least) is that all inputs to the brain are recorded. In such a view, memory needs to be divided into aware and non-aware categories. To bring memories out of storage and into awareness requires accessibility. Accessibility demands some kind of rational filing system and a process for searching. We experience important consequences of these facts. For example, we all know the letters of the alphabet and can recite them rapidly. Few of us can recite the alphabet backward or can count down from one hundred as fast as we can count up. When we "lose our train of thought," we don't work backward. We go to a landmark starting point and work our way forward. Some computers are designed with First-In, First-Out (FIFO) memory stacks while others employ Last-In, First-Out (LIFO) stacks. The way we experience memory access in our own brains tells us something of its organization even if we don't understand the biology. From experience we know, for example, that bad habits are hard to break. We know perceptual memory is difficult but not impossible to modify. We know that searches for a piece of memory fitting an unanswered question, of which we have knowledge but not awareness, take place on a continual basis. Hence, there are many dimensions by which we can locate the properties of memory. In machine terms we might say that the recording of information must be writable, rewritable, and accessible.
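
The FIFO/LIFO contrast is easy to show with Python's standard-library deque (the event names are made up):

```python
from collections import deque

events = ["wake", "coffee", "commute", "work"]

fifo = deque(events)   # queue: first in, first out
lifo = list(events)    # stack: last in, first out

replay_forward = [fifo.popleft() for _ in events]   # oldest memory first
replay_backward = [lifo.pop() for _ in events]      # newest memory first

print(replay_forward)   # ['wake', 'coffee', 'commute', 'work']
print(replay_backward)  # ['work', 'commute', 'coffee', 'wake']
```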
With respect -Joel

Howard Pattee said...

I think Ira’s analogy of mind and body being orthogonal projections of one “substance-space” is a good solution. Spinoza certainly knew about Euclid’s axiomatic method because that’s the way he presented his theorems. If he knew about Descartes’ analytic geometry he might have used the orthogonal projection concept.


Howard Pattee said...

Joel’s important point about accessibility as an essential requirement of memory raises another difference between genetic and computer memories and brain memory that I will say more about.

Roughly speaking, memory storage in genes and computers is local and is accessible by an unambiguous procedure. Brains have a distributed memory that is usually addressed by context-dependent association. Formal math appears to be an exception.

Of course, nobody understands very much about how brain memory works. In spite of this ignorance, since about 1985 computer models have tried to imitate distributed memory, following the idea of J. J. Hopfield. (See Wiki “Hopfield net” and links.)
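
For the curious, here is a minimal pure-Python sketch in the spirit of a Hopfield net (one stored pattern, Hebbian weights; a cartoon, not a brain model). The memory is recalled by content, starting from a corrupted cue, rather than by address:

```python
# A toy content-addressable memory: one stored pattern of +1/-1 units.
PATTERN = [1, 1, -1, -1, 1, -1, 1, 1]
N = len(PATTERN)

# Hebbian weights: w[i][j] = p_i * p_j, with no self-connections.
W = [[PATTERN[i] * PATTERN[j] if i != j else 0 for j in range(N)]
     for i in range(N)]

def recall(cue, sweeps=5):
    state = list(cue)
    for _ in range(sweeps):
        for i in range(N):                          # asynchronous updates
            total = sum(W[i][j] * state[j] for j in range(N))
            state[i] = 1 if total >= 0 else -1
    return state

noisy = list(PATTERN)
noisy[0] = -noisy[0]            # corrupt two units of the cue
noisy[5] = -noisy[5]
print(recall(noisy) == PATTERN)   # True: the stored pattern is restored
```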


Ira Glickstein said...

Joel has added consideration of the essential property of accessibility of memory to Howard's permanence and changeability.


We know much of what is stored in our brain is at the subconscious level. How often have you strained to solve a puzzle or remember something you know you know and come up empty - only to have the correct answer pop into your head later when you are thinking about something else! That indicates there are vast quantities of information stored in our brains that have limited accessibility.

Joel's post got me wondering if there is an equivalent limited-accessibility aspect to either genetic or computer memory. I was surprised at the thoughts that popped up!


We know 80-90% of human DNA is so-called "junk DNA" - portions of our DNA that are not used when we reproduce. (Biologists say the genetic instructions that make up the junk DNA are not "expressed".)

Why do evolution and natural selection, which usually punish inefficiency, waste all that storage capacity on junk DNA and carefully copy all of it and pass it on to our children?

I think the junk DNA is a repository of genes that used to be adaptive and might be useful in the future if the environment changes.

As an example, take pigs. Prior to being domesticated, pigs had thick bristles and big tusks which were necessary for survival in the wild. Farmers captured and selectively bred them over many generations, favoring pigs with fewer bristles and smaller tusks.

However, the genes for bristles and tusks were not eliminated from the pig's DNA. They were simply "switched off" by mutations in the genes that control which genes are expressed.

When domesticated pigs escape, they rather quickly revert to a feral state. In the wild environment, those pigs with bristles and tusks have a survival and reproduction advantage. Within a few generations, the whole herd has bristles and tusks!
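
The switched-off-genes story can be caricatured in a few lines of Python (the gene names and flags are purely illustrative; real gene regulation is vastly more complex):

```python
# Hypothetical sketch: structural genes stay in the genome; only a
# regulatory switch decides whether they are expressed.
genome = {
    "bristles": {"present": True, "expressed": True},
    "tusks":    {"present": True, "expressed": True},
}

def domesticate(genome):
    # selective breeding favors regulatory mutations that switch
    # the wild-type genes off -- but the genes themselves remain
    for gene in genome.values():
        gene["expressed"] = False

def go_feral(genome):
    # back in the wild, selection favors switching them on again
    for gene in genome.values():
        if gene["present"]:
            gene["expressed"] = True

domesticate(genome)
print(genome["tusks"])   # {'present': True, 'expressed': False}
go_feral(genome)
print(genome["tusks"])   # {'present': True, 'expressed': True}
```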


When you delete a file in your PC the data in the file is not erased from memory! All that happens is that the storage locations for that file are marked as available. However, until you store another file on your PC, the data from the old "deleted" file remains in memory.

How can this limited-availability computer memory be utilized to our advantage? Well, some programs, like MS Word, remember the most recent "X" changes and allow you to "undo" them. Deleted files are stored in the "Recycle Bin" and may be restored. At bootup, some PCs allow you to revert the whole PC memory to some previous time or date.

Back in the 1970's when we had one of the first Apple II home computers, we accidentally deleted an important name and address file. I studied up on the "file allocation table" and was able to go back in and resurrect it.

When the police seize a computer, they can often recover files the suspect had deleted. When you delete an email it may still be available somewhere in storage as some politicians and businessmen have discovered to their horror.

Therefore, if you really, absolutely MUST delete something from your PC, make sure you use a special program that overwrites the data with random "0" and "1" codes.
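
The delete-versus-overwrite distinction can be sketched with a toy "disk" in Python (this models no real file system; it just marks blocks free versus actually scrubbing them):

```python
import random
random.seed(7)

# A toy disk: "deleting" only marks blocks free; the bytes survive
# until something overwrites them.
disk = list(b"DEAR DIARY: MY SECRET")
free_blocks = set()

def delete(start, length):
    free_blocks.update(range(start, start + length))   # mark free, keep data

def secure_delete(start, length):
    for i in range(start, start + length):             # overwrite, then free
        disk[i] = random.randrange(256)
    free_blocks.update(range(start, start + length))

delete(12, 9)                       # "MY SECRET" is 'deleted'...
print(bytes(disk[12:21]))           # b'MY SECRET' -- still recoverable!

secure_delete(12, 9)
print(bytes(disk[12:21]) != b"MY SECRET")   # True: actually gone
```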

Also be aware of the Wayback Machine. If you have posted anything on a website since the 1990s, even if you delete it, it is likely it can be accessed on the Wayback Machine.

Ira Glickstein

joel said...

One of the beauties of the concept of evolution is its use as a tool for understanding the nature of various aspects of human nature. We can ask ourselves how memory developed as a survival tool.

Accessing isolated memories is of some use. For example, knowing that eating a certain plant made you sick is definitely of value. Visual pattern matching is essential to us, but note that memory of scents is even more primitive in evolution. A scent is a single, if complex, factor and requires no pattern matching or templating, because it has only one dimension. In other words, scent gives a perfect match in memory. Vision requires an assessment of multiple dimensions, since scenes containing the same elements present themselves in a variety of forms. Identifying a plant by sight is vastly more complex than recognizing it by smell. It's interesting that our ability to smell has diminished in proportion to our ability to transmit a visual description to others. Our ability to describe a smell or taste in words is virtually nil, while our ability to describe what we see is enormous. Thus we describe the taste of virtually everything from alligator to snake as being "something like chicken." That hardly provides others with an ability to discriminate one stew from another. Yet, taste and smell can evoke powerful recollections of places and situations in our past.

With respect -Joel

joel said...

Sorry, but I left something out of the above. Here's an indication of the importance of pattern matching in the evolution of memory in even the most primitive creatures. The reference to "center of gravity" is a single measure of the location of dark and light areas.

The memory template in Drosophila pattern vision at the flight simulator.
Ernst R, Heisenberg M.

Lehrstuhl für Genetik, Biozentrum, Würzburg, Germany.

Pattern recognition is studied in flight orientation of fixed flying Drosophila melanogaster controlling the horizontal rotations of an arena. Earlier experiments had suggested a simple mechanism of pattern recognition in which a memory template and the actual image are retinotopically matched. In contrast, we now show that Drosophila extracts at least two and probably four pattern parameters: size, vertical position of the center of gravity and, presumably horizontal/vertical extent as well as vertical separatedness of pattern elements. Moreover, the fly treats isolated pattern elements as a compound figure. Retinal transfer is possible between training and test if the centers of gravity of the compound figures are retained.

Jeff said...

I have a couple of questions:
1. In Howard's definition of language he said that the symbols within a language are arbitrarily related to what they represent. Does this necessarily have to be the case? Hieroglyphics and Kanji come to mind as examples of symbols that bore direct resemblance* to what they represented (although, interestingly, they later evolved into abstract arbitrary symbols).

2. Secondly, I don't really understand Howard's parallel between symbol/represented and mind/body problem. Could you please explain.

Thanks for the really interesting blog!

*Berkeley's objection to the proposition that ideas of the material universe resemble the actual material universe comes to mind. In one sense any resemblance between a picture of something and the thing that it is representing is pretty tenuous at best.

Ira Glickstein said...

To Jeff: Welcome to our Blog and I'm sorry your comment dated 18 July was not published until today when I saw and approved it. (Please send me an email with some bio info and your full name and I'll invite you to become an Author and then your Comments will appear immediately with no need for Moderation on my part! As an Author you will have the right to start new Topics as well, and I hope you do.)

You have made some technical points that I would prefer Howard to answer in more detail. My response is that an important idea of Howard's is, as you say, that "the symbols within a language are arbitrarily related to what they represent." [Emphasis added].

That is the distinction between what are known as "signs" and "symbols".

SIGNS - Some hieroglyphics and Chinese ideographs are clearly and directly related to what they represent. A hieroglyph of a bird would be an example. The traditional Japanese and Chinese character for bird is "鳥", which doesn't look like a bird to me, but is a highly stylized representation of what, presumably, used to look like a bird! (As you know, the Japanese adopted some 1,700-odd Chinese ideographs, the kanji, verbatim, but the remainder of their written language consists of more or less alphabetic characters.)

A modern sign would be a smiley face :^) or an upraised hand to indicate "stop".

SYMBOLS - These are arbitrarily related to the object or concept they represent, such as the word "bird" or the equivalent German "Vogel" or Russian "птица" which do not look or sound like a bird at all.

Howard (and I, he was chairman of my PhD committee :^) believe it is critically important for language to evolve into arbitrary representation for it to become open-ended and exhibit the true power of language. (You say "interestingly, [Hieroglyphs and Kanji] later evolved into abstract arbitrary symbols," which proves Howard's point!)

DNA is powerful because, instead of consisting of blueprint-like diagrams directly representing the design details of eyes and ears and hearts and so on, it is a long-chain molecule that consists of sequences of the bases A (adenine), C (cytosine), G (guanine) and T (thymine). Triplets of bases (codons), in turn, code for amino acids, and long chains of amino acids form proteins. Amazingly, in the animal reproductive system, these proteins self-organize into the organs of animal bodies! It is as if a list of the nuts and bolts and metal and plastic parts could self-organize into a car or a PC or whatever!

Now, THAT is what I call a powerful language!
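Just to make the arbitrariness of that mapping concrete, here is a minimal Python sketch of triplet-to-amino-acid translation. Only a handful of codons from the standard genetic code are included, and the input sequence is made up for illustration:

```python
# A tiny subset of the standard genetic code: DNA base triplets
# (codons) mapped to one-letter amino acid symbols. The mapping is
# arbitrary in exactly the sense discussed above -- "ATG" no more
# resembles methionine than the word "bird" resembles a bird.
CODON_TABLE = {
    "ATG": "M",                          # methionine (start)
    "GCT": "A", "GCC": "A",              # alanine
    "AAA": "K", "AAG": "K",              # lysine
    "TGG": "W",                          # tryptophan
    "TAA": "*", "TAG": "*", "TGA": "*",  # stop codons
}

def translate(dna):
    """Read the sequence three bases at a time, emitting amino acids
    until a stop codon is reached."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGGCTAAATGGTAA"))  # -> "MAKW"
```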

Ira Glickstein

PS: I am not that familiar with Berkeley, but his "objection to the proposition that ideas of the material universe resemble the actual material universe" rings true to me. Spinoza believed that there is only one Universal substance, although it appears to us mere mortals as two kinds of substance: "extension" (body, material) or "thought" (mind, spirit). In actuality, the Universal substance has an infinite number of additional aspects we cannot apprehend. Thus, even the most advanced human concept of the material universe is a mere shadow of a shadow of its actuality.

Howard Pattee said...

I agree with Ira’s answer to Jeff’s first question. Jeff also asks why I compare the symbol-matter problem with the mind-body problem.

There is a short answer and long answers. The short answer is that both the concepts of symbol and mind require a categorical separation of, for example, a subject from its object, a knower from what is known, the map from the territory, or reality from a representation of reality. (This short answer will not satisfy philosophers.)

I call this necessary separation the "epistemic cut", following the physicists Pauli and von Neumann, who called it simply a "cut" that is necessary for physical measurement. (I'll post the references if anyone wants them.) The cut is epistemic because it defines knowledge as separate from what knowledge is about.

For the long answers with discussions, Google “epistemic cut”. (I’m afraid these discussions won’t satisfy philosophers either!)