Friday, March 14, 2014

The MAGICAL Number Seven (plus or minus two)

Why do we “chunk” things in groups of about seven – seven days of the week, seven seas, seven sins, etc.? The presentation I gave to the Philosophy Club in The Villages, FL, on 14 March 2014 provides the theoretical answer. You may download a PowerPoint Show that should run on any Windows computer here: https://sites.google.com/site/iraclass/my-forms/PhiloMAGICALsevenMar2014.ppsx?attredirects=0&d=1

This is an easy-to-understand version of a more technical presentation I made to the Science-Technology Club in February, see: http://tvpclub.blogspot.com/2014/02/optimal-span-amazing-intersection-of.html


MILLER - PERSECUTED BY THE NUMBER SEVEN!



Way back in 1956 a classic scientific paper appeared in the Psychological Review with the intriguing title: The Magical Number Seven, Plus or Minus Two – Some Limits on Our Capacity for Processing Information. That paper was extremely important and influential and is still available online. George A. Miller begins with a strange plea:
My problem is that I have been persecuted by an integer … The persistence with which this number plagues me is far more than a random accident …
He presents the results of twenty experiments where human subjects were tested to determine what he calls our "Span of Absolute Judgment", that is, how many levels of a given stimulus we can reliably distinguish. Most of the results are in the range of five to nine, but some are as low as three or as high as fifteen. For example, our ears can distinguish five or six tones of pitch or about five levels of loudness. Our eyes can distinguish about nine different positions of a pointer in an interval. Using a vibrator placed on a person's chest, he or she can distinguish about four to seven different levels of intensity, location, or duration, etc. Across Miller's twenty experiments with one-dimensional stimuli, the average Span of Absolute Judgment is 6.4.

Miller also presents data for what he calls our "Span of Immediate Memory", that is, how many randomly presented items we can reliably remember. For example, we can remember about nine binary items, such as a series of "1" and "0", or about eight digits, or about six letters of the alphabet, or about five mono-syllabic words randomly selected out of a set of 1000.
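As a quick illustrative check (not code from Miller's paper; the alphabet sizes below are assumptions chosen for illustration), we can compute the information content of each memory span as items × log2(alphabet size). If memory held a fixed number of bits, the span in items would shrink rapidly as items became richer; instead it stays roughly constant at five to nine items, which is what led Miller to conclude that memory holds "chunks" rather than bits:

```python
import math

# Span of Immediate Memory examples quoted above: (items, alphabet size).
# Alphabet sizes are illustrative assumptions: 2 binary symbols,
# 10 digits, 26 letters, 1000 monosyllabic words.
spans = {
    "binary items":       (9, 2),
    "decimal digits":     (8, 10),
    "letters":            (6, 26),
    "monosyllabic words": (5, 1000),
}

for name, (items, alphabet) in spans.items():
    bits = items * math.log2(alphabet)
    print(f"{name:18s}: {items} items = {bits:4.1f} bits")

# The spans stay roughly constant in ITEMS (5 to 9) even though the
# bit totals range from ~9 to ~50 bits.
```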

At the end of his paper Miller rambles:
...And finally, what about the magical number seven? What about the seven wonders of the world, the seven seas, the seven deadly sins, the seven daughters of Atlas in the Pleiades, the seven ages of man, the seven notes of the musical scale, and the seven days of the week? What about the seven-point rating scale, the seven categories for absolute judgment, the seven objects in the span of attention, and the seven digits in the span of immediate memory?

For the present, I prefer to withhold judgment.

Perhaps there is something deep and profound behind all these sevens, something just calling out for us to discover it.
But I suspect that it is only a pernicious, Pythagorean coincidence. [my bold]
Well, it turns out that there IS something DEEP and PROFOUND behind "all these sevens" and I (Ira Glickstein) HAVE DISCOVERED IT. And, my insight applies not only to the span of human senses and memory, but also to the span of written language, management span of control, and even to the way the genetic "language of life" in RNA and DNA is organized. Furthermore, my discovery is not simply supported by empirical evidence from many different domains; it has been mathematically derived from the basic Information Theory equation published in 1948 by Claude Shannon and from the adaptation of "Shannon Entropy" to the Intricacy of a biograph by Smith and Morowitz in 1982.


SIMPLICITY VS COMPLEXITY VS INTRICACY 


Albert Einstein wisely advises us to:
Make things as simple as possible, but no simpler!
Good advice, but how to follow it? Well, Edward Teller suggests:
Threads of simplicity … are not easily discovered in music or in science. Indeed, they usually can be discerned only with effort and training. Yet the underlying simplicity exists and once found makes new and more powerful relationships possible.


How to find those "threads of simplicity"? Well, we need to understand the difference between COMPLEXITY and INTRICACY which, in normal usage, are sometimes used interchangeably. However, there is an important distinction between them according to Smith and Morowitz (1982).

COMPLEXITY - Something is said to be complex if it has a lot of different parts, interacting in different ways. To completely describe a complex system you would have to completely describe each of the different types of parts and then describe the different ways they interact. Therefore, a measure of complexity is the length of the description one person competent in that domain of knowledge would need to explain the system to another.

A great example of UNNECESSARY COMPLEXITY is found in those "Rube Goldberg Inventions" where a relatively simple task is complicated by combining all sorts of different effects into a chain of ridiculous interactions.

INTRICACY - Something is said to be intricate if it has a lot of parts, but they may all be the same or very similar and they may interact in simple ways. To completely describe an intricate system you would only have to describe one or two or a few different parts and then describe the simple ways they interact.

A great example of useful INTRICACY is a window screen that is intricate but not at all complex. It consists of equally-spaced vertical and horizontal wires criss-crossing in a regular pattern in a frame, where the spaces are small enough to exclude bugs down to some size. All you need to know is the material and diameter of the wires, the spacing between them, and the size of the window frame. Similarly, a field of grass is intricate but not complex.

If you think about it for a moment, it is clear that, given limited resources, they should be deployed in ways that minimize complexity to the extent possible, and maximize intricacy! That is why nearly all natural and artificial structures are configured as hierarchies and share a common "Optimal Span".


 A SIMPLE FORMULA FOR MAXIMIZING INTRICACY AND REDUCING COMPLEXITY

What is Optimal Span?

With so many different types of hierarchical structures, each with its own purpose and use, you might think there is no common property they share other than their hierarchical nature. You might expect a particular Span of Control that is best for Management Structures in Corporations and a significantly different Span of Containment that is best in Written Language.

If you expected the Optimal Span to be significantly different for each case, you would be wrong!
According to System Science research and Information Theory, there is a single equation that may be used to determine the most beneficial Span. That optimum value maximizes the effectiveness of the resources. A Management Structure should have the Span of Control that makes the best use of the number of employees available. A Written Language Structure should have the Span of Containment that makes the best use of the number of characters (or bits in the case of the Internet) available, and so on.

The simple equation for Optimal Span derived by [ Glickstein, 1996 ] is:

So = 1 + De
(Where D is the degree of the nodes and e is the Natural Number 2.7182818...)

In the examples above, where the hierarchical structure may be described as a single-dimensional folded string where each node has two closest neighbors, the degree of the nodes is, D = 2, so the equation reduces to:

So = 1 + De = 1 + 2 x 2.7182818 = 6.4366

“Take home message”: OPTIMAL SPAN, So = ~ 6.4
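As a minimal sketch (illustrative code, not from the dissertation), the computation is a one-liner:

```python
import math

def optimal_span(degree):
    """Optimal Span So = 1 + D*e [Glickstein, 1996]."""
    return 1 + degree * math.e

# A hierarchy folded from a one-dimensional string has node degree D = 2:
print(round(optimal_span(2), 4))  # 6.4366, i.e. ~6.4
```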

Also see Quantifying Brooks' Mythical Man-Month (Knol), [Glickstein, 2003] and [http://repository.tudelft.nl/assets/uuid:843020de-2248-468a-bf19-15b4447b5bce/dep_meijer_20061114.pdf] for the applicability of Optimal Span to Management Structures.
Ira Glickstein

Wednesday, February 19, 2014

Alzheimer’s A to Z Presentation to Philosophy Club 2/14/14 (Lee Conrad)


A person’s life is composed of memories – and that is exactly what Alzheimer’s disease (AD) takes away from you! What makes this so difficult for caregivers: “It’s the living loss of a loved one – still with you; but, slowly drifting away”.

I presented this topic to The Villages Philosophy Club on 14 February 2014 and my Powerpoint slides are available:
https://sites.google.com/site/iraclass/my-forms/Philo%20ALZLHEIMERS%20V4.5.pps?attredirects=0&d=1

The brains of people with AD contain “Plaques” of Amyloid Beta, a protein, and tangles of another protein called tau. By the time a person manifests objectively measurable cognitive impairment, Amyloid-β (Aβ) is strongly positive in the regions of the brain known as the “DEFAULT NETWORK”, which are particularly vulnerable to Aβ deposition.

Aβ can fold into a misshapen form that causes other normal Aβ molecules to assume the wrong shape and clump together. Proteins may later break off from the aggregate and seed the beginnings of the same process elsewhere. This biochemical domino effect involves a mechanism called “CORRUPTIVE PROTEIN TEMPLATING”.

As this biochemical domino effect spreads through the brain, the inexorable progression of Aβ deposits engulfs most areas of the cerebral cortex (the brain's outer layer), progressing to the midbrain and finally reaching the lower brainstem and cerebellum in the organ’s deepest reaches.

There are no exact transition points that define when a patient has progressed from the pre-clinical phase to the mild cognitive impairment phase and then to the dementia phase.

Diagnosing dementia involves many factors which include:
o A complete physical
o A complete neurologic examination
o MRI & CT scans of the brain
o Blood & Urine tests
o Thorough discussion of the patient's medical history & symptoms

All of the above should be done with a qualified neurologist and neuropsychologist.

From my personal experience, observing my spouse, I could see signs of cognitive problems that needed to be addressed:
o Problems using the microwave
o Problems using TV remotes
o Problems w/ electronics that have to be programmed
o Difficulty organizing M – F pillbox for AM/PM medications
o Difficulty with driving directions through roundabouts

I obtained from a friend connected with the American Academy of Neurology suggestions for possible neurologists at various institutions, along with current publications (April 2013) relevant to Mild Cognitive Impairment (MCI), Alzheimer’s dementia, genetic risk factors for late onset AD, recently identified genes associated with early onset AD, radioactive biomarkers for PET scans of the brain, and biochemical biomarkers present in cerebrospinal fluid (CSF). I also obtained excellent advice from a former classmate, an MD (specialty Internal Medicine) with experience diagnosing this disease. His recommendation: “Seek a well-established neurologist in the field and obtain a blood test for the ApoE gene.”

After a 3 to 5 hour initial exam with a neurologist associated with the Brain Institute and the neurology department of Shands Hospital, U. FL, Gainesville, along with psychologists on the staff, the preliminary diagnosis was possible MCI, with orders for a full day of neuropsychological testing, an MRI of the brain, and follow-up in 2 months.

With the preliminary diagnosis of MCI, the review article (April 2013) on MCI was very important. The slides on MCI are self-explanatory regarding its clinical characterization and the significant interest in understanding the transition from MCI to AD.

After 8 hours of extensive neuropsychological testing at Shands, the final diagnosis was changed from MCI to early stages of AD, verified by the appearance of hippocampal atrophy; the hippocampus is one of the first regions of the brain to suffer damage in AD patients. Blood test results came back: Genotype *E2/*E4 (*E4 is a strong genetic risk factor for Alzheimer’s).

Recommended treatment: begin memory medication (Aricept) ASAP, returning in 4 months for subsequent evaluation of the effectiveness of the medication.

The blood test results for Genotype APO*E2 / APO*E4 are explained in the AAN Review Article Continuum on New Genes and Insights from Old Genes, which explains the significance of APO*E4 as a genetic risk factor for AD, increasing the risk 3- to 4-fold.

Subsequent slides explain three other genes strongly associated with “Early Onset” Alzheimer’s (prior to age 65) and experiments with APP-Transgenic mice, genetically engineered (with the APP gene) to produce the precursor protein from which the human Aβ fragment is generated; these mice spontaneously develop Aβ brain deposits at a relatively consistent age.

Future experiments with APP-Transgenic mice may yield important clues for the early detection and subsequent prevention of dementia.


Lee Conrad

Monday, February 10, 2014

Optimal Span - AMAZING Intersection of Hierarchy, Information, and Complexity Theories

I presented "Optimal Span - The AMAZING Intersection of Hierarchy Theory, Information Theory, and Complexity Theory" to The Villages Science and Technology Club today.

You may download my PowerPoint Show that should run on any Windows PC here:
https://sites.google.com/site/iraclass/my-forms/SciTechOptimalSpan10Feb2014.pps?attredirects=0&d=1
I began the presentation with Kurt Vonnegut's great poem that tells us about Tigers, Birds, and Humans and what they are compelled, by their Nature, to do. Of course: "MAN got to sit and wonder 'why, why, why?'" and then, after some study and contemplation, "MAN got to tell himself he understand!"

This Topic and my PowerPoint Show are based on my PhD dissertation: "Hierarchy Theory - Some Common Properties of Competitively-Selected Systems", System Science Department, Binghamton University, NY, 1996. If you wish to pursue further research in this area please contact me at ira@techie.com. A few copies of my dissertation are available.


The material that follows contains more detail than the PowerPoint Show.


Most complex structures are compositional or control hierarchies. An example of a compositional hierarchy is written language. A word is composed of characters. A simple sentence is composed of words. A paragraph is composed of simple sentences, and so on. An example of a control hierarchy is a management structure, where a manager controls a number of foremen or team leaders, and they, in turn, control a number of workers.


Optimal Span Hypothesis:

Optimal Span is about the same, between five and nine, for virtually all complex structures that have been competitively selected.




That includes the products of Natural Selection (Darwinian evolution) and the products of Artificial Selection (Human inventions that competed for acceptance by human society).
The hypothesis is supported by empirical data from varied domains and a derivation from Shannon’s Information Theory and Smith and Morowitz’s concept of intricacy.

What is a Hierarchy?

Hierarchy (from Greek: ἱερός — hieros, ‘sacred’, and ἄρχω — arkho, ‘rule’) originally denoted the “holy rule” ranking of nine orders of angels, from God to the Seraphim and Cherubim and so on down to the Archangels and plain old Angels at the lowest level. Kind of like the organization of God’s Corporation!

The seminal book on this topic is Hierarchy Theory: The Challenge of Complex Systems [Pattee, 1973]. This book includes a chapter by Nobel laureate Herbert A. Simon on “The Organization of Complex Systems”. Other chapters: James Bonner “Hierarchical Control Programs in Biological Development”; Howard H. Pattee “The Physical Basis and Origin of Hierarchical Control” and “Postscript: Unsolved Problems and Potential Applications of Hierarchy Theories”; Richard Levins “The Limits of Complexity”; and Clifford Grobstein “Hierarchical Order and Neogenesis”.
A more recent book, Complexity – The Emerging Science at the Edge of Order and Chaos, observes that the “hierarchical, building-block structure of things is as commonplace as air” [Waldrop, 1992]. Indeed, a bit of contemplation will reveal that nearly all complex structures are hierarchies.
There are two kinds of hierarchy. A few well-known examples will set the stage for more detailed examination of modern Hierarchy Theory:

Examples


1 -Management Structure (Control Hierarchy)

Workers at the lowest level are controlled by Team Leaders (or Foremen), teams are controlled by First-Level Managers who report to Second-Level managers and so on up to the Top Dog Executive. At each level, the Management Span of Control is the number of subordinates controlled by each superior.

2 -Software Package (Control Hierarchy)

Main Line computer program controls Units (or Modules, etc.) and the Units control Procedures that control Subroutines that control Lines of Code. At each level, the Span of Control is the number of lower-level software entities controlled by a higher-level entity.

3 – Written Language (Containment Hierarchy)

Characters at the lowest level are contained in Words. Words are contained in Simple Sentences. Simple Sentences in Paragraphs, and so on up to Sections, Chapters and the Entire Document. At each level, the Span of Containment is the number of smaller entities contained by each larger.

4 – “Chinese boxes” (Containment Hierarchy)

A Large Box contains a number of Smaller Boxes which each contain Still Smaller Boxes down to the Smallest Box. At each level, the Span of Containment is the number of smaller entities contained by each larger.

Traversing a Hierarchy


Note that Examples 1 and 3 above were explained starting at the bottom of the hierarchy and traversing up to the top while Examples 2 and 4 were explained by starting at the top and traversing to the bottom.
Simple hierarchies of this type are called “tree structures” because you can traverse them entirely from the top or the bottom and cover all nodes and links between nodes.

"Folding” a “String”

A tree structure hierarchy can also be thought of as a one-dimensional “string” that is “folded” (or parsed) to create the tree structure. What does “folding” mean in this context?

As an amusing example, please imagine the Chief Executive of a Company at the head of a parade of all his or her employees. Behind the Chief Exec would be Senior Manager #1 followed by his or her First-Level Manager #1. Behind First-Level Manager #1 would be his or her employees. Behind the employees would be the First-level Manager #2 with his or her employees. After all the First-levels and their employees, Senior Manager #2 would join the parade with his or her First-Levels and their employees, and so on. If you took the long parade and called it a “string”, you could “fold” it at each group of employees, then again at each group of First-Level Managers, and again at the group of Senior Managers, and get the familiar management tree structure!

The above “parade” was described with the Chief Exec at the head of it, but you could just as well turn it around and have the lowest-level employees lead and the Chief Exec at the rear. When military hierarchies go to war, the lowest-level soldiers are usually at the front and the highest-level Generals well behind.

A more practical example is the text you are reading right now! It was transmitted over the Internet as a string of “bits” – “1” and “0” symbols. Each group of eight bits denotes a particular character. Some of the characters are the familiar numbers and upper and lower-case letters of our alphabet and others are special characters, such as the space that demarcates a word (and is counted as a character that belongs to the word), punctuation characters such as a period or comma or question mark, and special control characters that denote things like new paragraph and so on.

You could say the string of 1's and 0's is folded every eight bits to form a Character. The string is folded again at each Space Character to form Words. Each group of Words is folded yet again at each comma or period symbol that denotes a Simple Sentence. Each group of Simple Sentences is again folded to make Paragraphs, and so on.

You could lay out a written document as a tree structure, similar to a Management hierarchy. The Characters would be at the bottom, the Words at the next level up, the Simple Sentences next, the Paragraphs next, and so on up to the whole Section, Chapter, and Book.
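Here is an illustrative sketch of that folding (the 8-bit ASCII encoding and the space and period delimiters are the simplifications described above, not a full model of real text):

```python
# Fold a transmitted bit string up through the hierarchy:
# bits -> Characters -> Words -> Simple Sentences.
message = "The span is optimal. It maximizes intricacy."
bits = "".join(f"{ord(c):08b}" for c in message)  # the "parade" of 1's and 0's

# Fold every eight bits into a Character:
chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
text = "".join(chars)

# Fold at each period into Simple Sentences, then at each space into Words:
sentences = [s.strip() for s in text.split(".") if s.strip()]
tree = [s.split() for s in sentences]
print(tree)
# [['The', 'span', 'is', 'optimal'], ['It', 'maximizes', 'intricacy']]
```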

What is Optimal Span?

With all these different types of hierarchical structures, each with its own purpose and use, you might think there is no common property they share other than their hierarchical nature. You might expect a particular Span of Control that is best for Management Structures in Corporations and a significantly different Span of Containment that is best in Written Language.

If you expected the Optimal Span to be significantly different for each case, you would be wrong!
According to System Science research and Information Theory, there is a single equation that may be used to determine the most beneficial Span. That optimum value maximizes the effectiveness of the resources. A Management Structure should have the Span of Control that makes the best use of the number of employees available. A Written Language Structure should have the Span of Containment that makes the best use of the number of characters (or bits in the case of the Internet) available, and so on.

The simple equation for Optimal Span derived by [ Glickstein, 1996 ] is:

So = 1 + De
(Where D is the degree of the nodes and e is the Natural Number 2.7182818...)

In the examples above, where the hierarchical structure may be described as a single-dimensional folded string where each node has two closest neighbors, the degree of the nodes is, D = 2, so the equation reduces to:

So = 1 + De = 1 + 2 x 2.7182818 = 6.4366

“Take home message”: OPTIMAL SPAN, So = ~ 6.4

Also see Quantifying Brooks' Mythical Man-Month (Knol), [Glickstein, 2003] and [Meijer, 2006] for the applicability of Optimal Span to Management Structures.

[Added 4 April 2013: The Meijer, 2006 link no longer works. His .pdf document is available at http://repository.tudelft.nl/assets/uuid:843020de-2248-468a-bf19-15b4447b5bce/dep_meijer_20061114.pdf ]

Examples of Competitively-Selected Optimal Span

Management Span of Control

Management experts have long recommended that Management Span of Control be in the range of five or six for employees whose work requires considerable interaction. Depending upon the level of interaction, experts recommend up to nine employees per department. This recommendation comes from experience with organizations with different Spans of Control. The most successful tend to have Spans in the recommended range, five to nine, an example of competitive-selection.

When the lowest level consists of service-type employees, whose interaction with each other is less complex, there may be a dozen or two or more in a department, but there will usually be one or more foremen or team leaders to reduce the effective Management Span of Control to the range five to nine. Corporate hierarchies usually have about the same range of first-level departments reporting to the next level up, and so on.

Say you had a budget for 49 employees and had to organize them to make most effective use of your human resources. Which of the following seems most reasonable?

(A) you have ONE manager and 48 workers, which is a BROAD hierarchy. Management experts would say a Management Span of Control of 48 is way too much for anyone to handle!

(B) you have a third-level chief executive, three executive-level managers, each with three department managers, totaling THIRTEEN managers in a three-level management hierarchy and only 36 workers, which is a TALL hierarchy with an average Management Span of Control of only 3.3. Management experts would say this is way too inefficient with too many managers!

(C) you have a second-level manager and six department managers, totaling SEVEN managers and 42 workers in a MODERATE hierarchy with an average Management Span of Control of about 6.5. Management experts would say this is about right for most organizations where the workers have to interact with each other. Optimal Span theory supports this common-sense belief!
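A small helper makes the comparison concrete (illustrative code; it averages the Span of Control across the management levels of each option, which reproduces the figures quoted above):

```python
import math

# Per-level Spans of Control for each way of organizing 49 employees:
options = {
    "A: BROAD, 1 manager":     [48],       # one manager, 48 workers
    "B: TALL, 13 managers":    [3, 3, 4],  # chief -> 3 execs -> 9 dept mgrs -> 36 workers
    "C: MODERATE, 7 managers": [6, 7],     # chief -> 6 dept mgrs -> 42 workers
}

OPTIMAL = 1 + 2 * math.e  # ~6.4 for a one-dimensional hierarchy

for name, spans in options.items():
    avg = sum(spans) / len(spans)
    print(f"{name}: average span {avg:.1f} (optimal ~{OPTIMAL:.1f})")
# A: 48.0 -- far too broad; B: 3.3 -- too tall; C: 6.5 -- near optimal.
```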

Human Span of Absolute Judgement

Evolution and Natural Selection have produced the human brain and nervous system and our senses of vision, hearing, and taste. It turns out that these senses are generally limited to five to nine gradations that can be reliably distinguished. It is also the case that we can remember about five to nine chunks of information at any one time. This is another example of competitive-selection, where, over the eons of evolutionary development, biological organisms competed and those that best fit the environment were selected to survive and reproduce.



George A Miller wrote a classic paper titled The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information [ Miller, 1956 ]. He showed that human senses of sight, hearing, and taste were generally limited to five to nine gradations that could be reliably distinguished. Miller’s paper begins as follows:



"My problem is that I have been persecuted by an integer [7 +/- 2]. For seven years this number has followed me around, has intruded in my most private data, and has assaulted me from the pages of our most public journals. This number assumes a variety of disguises, being sometimes a little larger and sometimes a little smaller than usual, but never changing so much as to be unrecognizable. The persistence with which this number plagues me is far more than a random accident. There is, to quote a famous senator, a design behind it, some pattern governing its appearances. Either there really is something unusual about the number or else I am suffering from delusions of persecution.Miller’s paper is well worth reading and is available on the Internet at this link [Miller, 1956]"

Glickstein’s Theory of Optimal Span

Miller’s number also pursued me (Ira Glickstein) until I caught it and showed, as part of my PhD research [Glickstein, 1996], that, based on empirical data from varied domains, the optimal span for virtually all hierarchical structures falls into Miller’s range, five to nine. Using Shannon’s information theory, I also showed that maximum intricacy is obtained when the Span for single-dimensional structures is So = 1 + 2e = 6.4 (where e is the natural number, 2.7182818...). My “magical number” is not the integer 7, but 6.4, a more precise rendition of Miller’s number!


Hierarchy and Complexity

Howard H. Pattee, one of the early researchers in hierarchy theory, posed a serious challenge:

Is it possible to have a simple theory of very complex, evolving systems? Can we hope to find common, essential properties of hierarchical organizations that we can usefully apply to the design and management of our growing biological, social, and technological organizations? [Pattee, 1973]
Pattee was the Chairman of my PhD Committee and I took the challenge very seriously!


The hypothesis at the heart of my PhD dissertation is that the optimal span is about the same for virtually all complex structures that have been competitively selected. That includes the products of Natural Selection (Darwinian evolution) and the products of Artificial Selection (Human inventions that competed for acceptance by human society).


Weak Statement of Hypothesis

In the “weak” statement of the hypothesis, it is scientifically plausible to believe that diverse structures tend to have spans in the range of five to nine, based on empirical data from six domains plus a computer simulation.

The domains are:

Human Cognition: Span of Absolute Judgement (one, two and three dimensions), Span of Immediate Memory, Categorical hierarchies and the fine structure of the brain. These all conform to the hypothesis.

Written Language: Pictographic, Logographic, Logo-Syllabic, Semi-alphabetic, and Alphabetic writing. Hierarchically-folded linear structures in written languages, including English, Chinese, and Japanese writing. These all conform to the hypothesis.

Organization and Management of Human Groups: Management span of control in business and industrial organizations, military, and church hierarchies. These all conform to the hypothesis.

Animal and Plant Organization and Structure: Primates, schooling fish, eusocial insects (bees, ants), plants. These all conform to the hypothesis.

Structure and Organization of Cells and Genes: Prokaryotic and eukaryotic cells, gene regulation hierarchies. These all conform to the hypothesis.

RNA and DNA: Structure of nucleic acids. These all conform to the hypothesis.

Computer Simulations: Hierarchical generation of initial conditions for Conway’s Game of Life (two-dimensional). These all conform to the hypothesis.

Strong Statement of Hypothesis

Shannon’s information theory and the concept of intricacy of a graphical representation of a structure [Smith and Morowitz, 1982] can be used to derive a formula for the optimal span of a hierarchical graph.


This work extended the single-dimensional span concepts of management theory and Miller’s “seven plus or minus two” concepts to a general equation for any number of dimensions. I derived an equation that yields Optimal Span for a structure with one-, two-, three- or any number of dimensions!

The equation for Span (optimal) is:

So = 1 + De

(Where D is the degree of the nodes and e is the Natural Number 2.7182818...)


NOTE: For a one-dimensional structure, such as a management hierarchy or the span of absolute judgement for a single-dimensional visual, taste, or sound stimulus, the degree of the nodes is D = 2. This is because each node is a link in a one-dimensional chain or string and so each node has two closest neighbors.

For a two-dimensional structure, such as a 2D visual or the pitch and intensity of a sound or a mixture of salt and sugar, D = 4. Each node is a link in a 2D mesh and so each node has four closest neighbors.

For a 3D structure, D = 6 because each node is a link in a 3D egg crate and has six closest neighbors.

Some of the examples in Miller’s paper were 2D and 3D and his published data agreed with the results of the formula. The computer simulation was 2D and also conformed well to the hypothesis.

In normal usage, complexity and intricacy are sometimes used interchangeably. However, there is an important distinction between them according to [Smith and Morowitz, 1982].


COMPLEXITY - Something is said to be complex if it has a lot of different parts, interacting in different ways. To completely describe a complex system you would have to completely describe each of the different types of parts and then describe the different ways they interact. Therefore, a measure of complexity is the length of the description one person competent in that domain of knowledge would need to explain the system to another.


INTRICACY - Something is said to be intricate if it has a lot of parts, but they may all be the same or very similar and they may interact in simple ways. To completely describe an intricate system you would only have to describe one or two or a few different parts and then describe the simple ways they interact. For example, a window screen is intricate but not at all complex. It consists of equally-spaced vertical and horizontal wires criss-crossing in a regular pattern in a frame, where the spaces are small enough to exclude bugs down to some size. All you need to know is the material and diameter of the wires, the spacing between them, and the size of the window frame. Similarly, a field of grass is intricate but not complex.


If you think about it for a moment, it is clear that, given limited resources, they should be deployed in ways that minimize complexity to the extent possible, and maximize intricacy!


Using the [Smith and Morowitz, 1982] concept of intricacy, it is possible to compute the theoretical efficiency and effectiveness of a hierarchical structure. If it has the Optimal Span, it is 100% efficient, meaning that it attains 100% of the theoretical intricacy given the resources used. If not, the percentage of efficiency can be computed. For example, a one-dimensional tree structure hierarchy is 100% efficient (maximum theoretical intricacy) with a Span of 6.4. For a Span of five, it is 94% efficient (94% of maximum theoretical intricacy). It is also 94% efficient with a Span of nine. For a Span of four or twelve, it is 80% efficient.
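The shape of that efficiency curve can be sketched in code. The functional form below, intricacy-per-resource proportional to ln((S - 1)/D)/(S - 1), is an assumption: it peaks at So = 1 + De and reproduces the 94% figures for Spans of five and nine, but it is a stand-in, not necessarily the dissertation's exact normalization:

```python
import math

def efficiency(span, degree=2):
    """Relative intricacy-per-resource, normalized to 1.0 at the
    optimal span So = 1 + degree * e. The functional form
    ln((S-1)/D) / (S-1) is an assumed stand-in that peaks at So."""
    f = lambda s: math.log((s - 1) / degree) / (s - 1)
    return f(span) / f(1 + degree * math.e)

for s in (5, 1 + 2 * math.e, 9):
    print(f"Span {s:.1f}: {efficiency(s):.0%} efficient")
# Span 5.0: 94%, Span 6.4: 100%, Span 9.0: 94%
```

Note the flat top: anywhere in Miller's five-to-nine range costs little efficiency, which is consistent with why competitively-selected structures cluster there. (For Spans of four and twelve this assumed form gives somewhat different values than the 80% quoted above, so read it as a sketch of the curve's shape rather than the dissertation's exact numbers.)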

Ira Glickstein

Monday, January 27, 2014

Old Geezers Should Communicate with Young Folks

[from billlifka, posted with his prior permission]
A few days ago, I heard an excellent talk by a local political guru of the Conservative persuasion. He was offering ideas on how ordinary folks of his political persuasion might help the Republican Party do better in the coming election. The bottom line was that he recommended the old geezers in the audience start communicating with young folks between the ages of fifteen and thirty.

His principal argument was that, beyond the age of thirty, people tend not to change much from what they have become and what they believe in at that point in their lives. Also, he argued that people in the target age range were open to considering new ideas, perhaps even eager to try new things. He cautioned that these strange young folks should be approached cautiously, even deviously, because if they knew an exchange of political ideas was the goal, they’d be turned off immediately.

I suspect he was right in all respects, at least for the majority of young adults. I’ve been following his recommended practice for a number of years, at least the basic idea. One of the great joys of my life is being able to learn new things that change my life even as I approach the end of life. I hope it will be the same for you.

As to a devious approach, it’s not like me not to say (or write) just what’s on my mind. Most folks who read what I write would judge it to be political, yet I argue that it may seem so to them but I am really writing about good governance, good goals and good process. If it turns out that the same people in office keep governing poorly, have troublesome objectives and use questionable means to achieve those objectives, it’s to be expected they will come off poorly in my writings.

Long before now you should have recognized that I prefer to focus on national issues I believe to be of greatest importance to the nation’s well-being, the elements of those issues, what history (should have) taught us about optional ways of addressing those issues and, usually, suggesting what I believe to be the best options.

As I do this, some people stand out as bad guys and, of late, these have all belonged to the left wing of the Democratic Party. I’m sorry about that because I really want our leaders to be outstanding leaders no matter which political party they call home. If they are not, then I am going to be against that party until they start running competent men and women for office who focus on the critical issues with honesty, coherence and perseverance.

I’ve never wanted anyone to vote for my preferred candidates just because they were my choice. What I want is for the overwhelming majority of voters to know the truth about main issues and how opposing candidates stand on those issues. I’ll never get what I want because the majority of voters don’t take time to become knowledgeable and the majority of politicians do their best to mislead the voters.

I pointed out precarious national and international conditions and the critical importance of competent leaders being elected. I emphasized that the vote of the young adults would determine the outcome. I was correct. 2014 is another critical election year. By now you should know how much your future has been adversely affected by the outcomes of elections for which you’ve been eligible voters.

For the record, I'm a registered Republican but, under that umbrella, a Conservative with a few Libertarian tendencies. I was a “cradle Democrat” and remained so until the current age of my oldest grandchild. I was an ardent fan of the Chicago Cubs who did manage to capture two pennants before they forgot how to play the game.

These days, rooting for the Republicans seems like rooting for the Cubs. Both teams have talented players, know the principles of the sport, play like gentlemen but seem not to understand the objective is to beat the opponents. It may be the absence of a good manager and an ossified front office. Maybe the players should mutiny.

Currently, 31% of the voting population considers itself to be Democratic, 25% considers itself Republican and 42% considers itself Independent. The Republican % is the lowest it’s been in modern times. It’s no wonder that Democratic political tactics are designed to solidify its base and to infuriate Republicans.

Rather than fighting the “enemy”, prominent Republicans yell at each other while elbowing members of their own team into less desirable vantage points from which to launch a campaign for the big enchilada. Tea partiers are similar to Republicans in many ways except they are mouthier and more fed up, but still mostly nice people. They have no effective central control, which is another thing they have in common with Republicans.

Independents are growing in number, mostly cutting into the Republican fan base. The not-at-all independent media blames this on Republican ineptitude but a goodly part comes from people being tired of much political talk and little effective political results. While the Democrats are more guilty of that, their fan base expects them to be obnoxious and ineffective at governing; it’s part of their appeal.

It’s bad enough for Republicans to lose those Independents who used to show up for their bigger games but now many Republican fans avoid the big ones and hardly any buy season tickets. As for the younger fans, forget it; they enjoy the raucous Democratic play and can’t understand the headier Republican approach to the game.

Republicans needn’t play Democratic ball to win. It’s not necessary to throw at the batter’s head or slide into second with spikes high. However, barreling over a catcher blocking home plate is admirable; the runner has a right to the baseline. Forget longing for a star hitter with 50 HR’s a season and 500 K’s. They need single hitters with high batting averages, fielders who hit the cut-off man, pitchers who credit the fielding for their low ERA’s and runners who follow the coaching signs. It’s a team game, dummies, not a boxing match to determine a single champion!

This might be amusing if not for the serious matter of America’s heading for a cliff without a capable government. There’s a World Series each year, but an America only once in 8,000 years of history. Mike Huckabee is a good guy and was a viable candidate in 2008. He made the same main point as this essay, but with less colorful imagery. Unfortunately, he played to the stands, allowing partisan “sportswriters” a chance for another “war on women” tirade. His lesser point could have been made by citing the Justice Dept’s war on the Little Sisters of the Poor; he would have been playing a winning game. There are Republicans who can do that.

billlifka

Thursday, January 9, 2014

Global Warming - REAL, but NOT a Big DEAL

[UPDATE: I reposted this at WUWT, the most viewed Climate website in the world, and have over 10,000 page views and 230 comments.]
We've reached a turning point where it is hard for any Global Warming Alarmist to claim (with a straight face) that the world as we know it is about to end in a decade or two or three unless we stop burning fossil fuels. Anyone deluded or foolish enough to make such a claim would be laughed at by many audiences.

GLOBAL WARMING IS REAL

Yes, the world has warmed 1°F to 1.5°F (0.6°C to 0.8°C) since 1880 when relatively good thermometers became available. Yes, part of that warming is due to human activities, mainly burning unprecedented quantities of fossil fuels that continue to drive an increase in carbon dioxide (CO2) levels. The Atmospheric "Greenhouse" Effect is a scientific fact!

BUT GLOBAL WARMING IS NOT A BIG DEAL

As the animated graphic clearly indicates, the theoretical climate models used by the Intergovernmental Panel on Climate Change (IPCC) are handcuffed to inordinately high estimates of climate sensitivity (how much temperatures are expected to rise given a doubling of CO2). Since the advent of good satellite-based global temperature data in 1979, observed temperatures have risen at a fraction of the IPCC predicted rate even as CO2 continues to rise. Relax, there is not and never has been any near-term "tipping point". The actual Earth Climate System is far less sensitive to CO2 than claimed by the IPCC climate theory, as represented by their computer models. Global Warming since 1880 is mainly due to Natural Cycles and Processes not under human control, the same Natural Cycles and Processes that were responsible for the many Ice Age cycles that repeatedly occurred about every 100,000 years or so.

MY JANUARY 2014 PRESENTATION 

By a stroke of good fortune, last week I was scheduled to present "Visualizing the Atmospheric 'Greenhouse' Effect - Global warming is real, but how much is due to human activities and how big is the risk?" to the Philosophy Club in the Central Florida retirement community where I live. This is a great time for Global Warming Skeptics to put the Alarmists in their place.

Everyone in the highly interactive and supportive audience was aware of newspaper and TV reports of the drama of those ill-fated Global Warming "Research" activists whose Russian ship, the Akademik Shokalskiy, got stuck in the summer ice of the Antarctic. (Fortunately, those people are safe, having been rescued by a helicopter from a Chinese icebreaker.) In addition to the Antarctic adventure gone wrong, in the week leading up to and following my talk, the media was overrun by stories of the "polar vortex" literally freezing large parts of the US and even causing Florida temperatures to drop below 30°F.

Of course, we realize that the cold wave is only anecdotal evidence and "weather is not climate". However, photos and videos of researchers stuck in the Antarctic summer ice, as well as scenes of American life frozen in place for days on end, when combined with clear and irrefutable evidence of a slowdown in warming since 1979 and no statistically significant warming since 1996 (as depicted in the graphic above), have considerable emotional impact.

My animated PowerPoint Show, which should run on any Windows PC, is available for download here. (NOTE: I knew that many members of The Philosophy Club audience, while highly intelligent and informed, are not particularly scientifically astute. Therefore, I kept to the basics and invited questions as I proceeded. Since most of them think in Fahrenheit, I was careful to give temperatures in that system. By contrast, my 2011 talk to the more scientifically astute members of our local Science and Technology Club, "Skeptic Strategy for Talking about Global Warming", was more technical. Both presentations make use of animated PowerPoint charts and you are free to download and use them as you wish.)

My presentation is based on my five-part series for the most viewed climate website in the world, "Watts Up With That" (WUWT) where I am a Guest Contributor. The series is entitled "Visualizing the 'Greenhouse Effect'" - 1 - A Physical Analogy, 2 - Atmospheric Windows, 3 - Emission Spectra, 4 - Molecules and Photons, and 5 - Light and Heat.  The series, which ran in 2011, generated tens of thousands of page views at WUWT, along with thousands of comments.

I wrote the series because WUWT is a "skeptic" website and attracts some viewers who reject the basic physics of the Atmospheric "Greenhouse" Effect. (The owner of WUWT, Anthony Watts, like me, accepts the basic physics and the fact that some of the warming of the past century is indeed due to human activities, such as unprecedented burning of fossil fuels that have raised CO2 levels. However, we are skeptical about how much the Earth Surface has actually warmed, and how big a risk is posed by moderate increases in CO2 and temperature.)

HOW A REAL GREENHOUSE WORKS

I explained how a real physical Greenhouse works and how that is both similar to and different from the Atmospheric "Greenhouse" Effect. The Greenhouse descriptions I learned in high school, as well as those available on the Internet, consider only the RADIATIVE effect. The glass roof of the Greenhouse allows visible light to pass through freely, heating the soil, plants, and air, but is opaque to the resultant infrared radiation, which is partly re-radiated back down into the Greenhouse, warming it further. That part is true, but far from the whole story. The MAIN reason a Greenhouse stays warm is that it is airtight to restrict CONVECTION and it is insulated to restrict CONDUCTION. In fact, it is possible to construct a successful Greenhouse using a roof made from materials that allow both visible and infrared to pass freely, but it is impossible to make a working Greenhouse that is not both airtight and insulated.

HOW THE ATMOSPHERIC "GREENHOUSE" EFFECT WORKS

All warm objects emit radiation at wavelengths dependent upon the temperature of the object. The Sun, at around 10,000 °F, emits "short-wavelength" radiation, centered around 1/2 micron (one millionth of a meter). The soil, plants, and air in the Greenhouse, at around 60 to 100 °F, emit "long-wavelength" infrared radiation, centered around 10 microns (with most of the energy between 4 and 25 microns).
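Those peak wavelengths follow from Wien's displacement law, peak wavelength (in microns) ≈ 2898 / T (in Kelvin). This quick check is an editorial addition, not part of the original post:

```python
# Wien's displacement law: peak emission wavelength in microns = 2898 / T(K).
def peak_microns(temp_f):
    kelvin = (temp_f - 32) / 1.8 + 273.15
    return 2898 / kelvin

print(f"Sun, ~10,000 F: {peak_microns(10_000):.2f} microns")  # ~0.50 (shortwave)
print(f"Earth, ~80 F:   {peak_microns(80):.1f} microns")      # ~9.7  (longwave)
```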

The Atmospheric "Greenhouse" Effect works because:
  1. Short-wavelength radiation from the Sun passes freely through the gases that make up  the Atmosphere,
  2. About a third of this Sunlight is reflected back by white clouds, dust, and light-colored objects on the Surface, and that energy is lost to Space,
  3. The remaining two-thirds of  the Sunlight energy is absorbed by the Sea and Land Surface and causes it to warm,
  4. The warm Surface cools by emitting long-wavelength radiation at the Bottom of the Atmosphere, and this radiation passes towards the Top of the Atmosphere, where it is ultimately lost to Space,
  5. On the way to the Top of the Atmosphere, much of this radiation is absorbed by so-called "Greenhouse" gases (mostly water vapor and carbon dioxide) which causes the Atmosphere to warm,
  6. The warmed Atmosphere emits infrared radiation in all directions, some into Space where it is lost, and some back towards the Surface where it is once again absorbed and further warms the Surface.
  7. In addition to the RADIATIVE effects noted in points 1 through 6, the Surface is cooled by CONVECTION and CONDUCTION (thunderstorms, winds, rain, etc.)
THANK GOODNESS FOR THE ATMOSPHERIC "GREENHOUSE" EFFECT

If not for the warming effect of "Greenhouse" gases, the Surface of the Earth would average about -1 °F, which would prevent life as we know it. This effect is responsible for about 60 degrees F of warming.
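That -1 °F figure can be reproduced with the standard zero-dimensional energy balance, setting absorbed sunlight equal to emitted longwave radiation (a sketch; the solar constant and albedo are the usual textbook values, not numbers from the post):

```python
# No-greenhouse Earth: S * (1 - albedo) / 4 = sigma * T^4, solve for T.
SOLAR_CONSTANT = 1361.0  # W/m^2 at the top of the atmosphere
ALBEDO = 0.30            # ~a third of sunlight reflected, per point 2 above
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

t_k = ((SOLAR_CONSTANT * (1 - ALBEDO) / 4) / SIGMA) ** 0.25
t_f = t_k * 1.8 - 459.67
print(f"{t_f:.0f} F")    # about -1 F; the observed average is ~59 F,
                         # so the "greenhouse" effect supplies ~60 F.
```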

According to the Intergovernmental Panel on Climate Change (IPCC), the Earth Surface has warmed about 1.5 °F since good thermometer data became available around 1880. Some skeptics (including me) believe the actual warming is closer to 1 °F, and that government agencies have adjusted the thermometer record to exaggerate the warming by 30% or more.

However, it doesn't really matter whether the actual warming is 1 °F or 1.5 °F because we are arguing about only 0.5 °F, which is less than 1% of the warming due to the Atmospheric "Greenhouse" Effect.

HOW SENSITIVE IS THE CLIMATE TO HUMAN ACTIVITIES?

The IPCC claims that the majority of the warming since 1880 is due to human activities. It is true that we are burning unprecedented amounts of fossil fuel (coal, oil, gas), and that we are making land use changes that may reduce the albedo (reflectiveness) of the Surface. Most of the increase in Atmospheric CO2 (a rise of over 40%, from about 280 to nearly 400 parts per million by volume) is due to human activities.

The IPCC claims that Climate Sensitivity (the average increase in Surface temperatures due to a doubling of CO2) is between 3 °F and 8 °F.  Some skeptics (including me) believe they are off by at least a factor of two, and possibly a factor of three, and that Climate Sensitivity is closer to 1 °F to 3 °F.
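Because the temperature response to CO2 is roughly logarithmic in concentration, the CO2-only warming expected to date is Sensitivity × log2(C/C0). This back-of-the-envelope arithmetic is an editorial addition; it ignores ocean lag and all non-CO2 forcings:

```python
import math

C0, C = 280.0, 400.0           # ppm: pre-industrial and ~2014 CO2 levels
doublings = math.log2(C / C0)  # ~0.51 of one doubling so far

for label, sens in [("IPCC low", 3.0), ("IPCC high", 8.0),
                    ("skeptic low", 1.0), ("skeptic high", 3.0)]:
    print(f"{label:12s} ({sens} F/doubling): {sens * doublings:.1f} F expected")
# IPCC range: ~1.5 to 4.1 F expected to date; skeptic range: ~0.5 to 1.5 F,
# which brackets the observed 1 to 1.5 F before any natural-cycle attribution.
```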

As evidence for our conclusions, we point to the fact that virtually ALL of the IPCC climate models have consistently over-estimated future temperature predictions as compared to the actual temperature record. Indeed, for the past 17 years as CO2 levels continue their rapid climb, temperatures have leveled off, which is proof that Natural Cycles, not under human control or influence, have cancelled out warming due to CO2 increases. Thus, Natural Cycles must have a larger effect than CO2.

VISUALIZING THE ATMOSPHERIC "GREENHOUSE" EFFECT

As I noted above, I wrote the "Visualizing" series for WUWT (1 - A Physical Analogy, 2 - Atmospheric Windows, 3 - Emission Spectra, 4 - Molecules and Photons, and 5 - Light and Heat) because some WUWT viewers are "Disbelievers" who have had an "equal and opposite" reaction to the "end of the world" excesses of the Global Warming "Alarmists".  By failing to understand and accept the basic science of the Atmospheric "Greenhouse" Effect, they have, IMHO, "thrown the baby out with the bathwater".

1 - A Physical Analogy

Albert Einstein was a great theoretical physicist, with all the requisite mathematical tools. However, he rejected purely mathematical abstraction and resorted to physical analogy for his most basic insights. For example, he imagined a man in a closed elevator, transported to space far from any external mass and then subjected to steady acceleration. That man could not tell the difference between gravity on Earth and acceleration in space; thus, concluded Einstein, gravity and acceleration are equivalent, which is the cornerstone of his theory of relativity. Einstein never fully bought into the mainstream interpretation of quantum mechanics, which he and others have called quantum weirdness and spooky action at a distance. He had trouble accepting a theory that did not comport with anything he considered a reasonable physical analogy!

So, if you have trouble accepting the atmospheric “greenhouse” effect because of the lack of a good physical analogy, you are in fine company.

Well, getting back to the Atmospheric "Greenhouse" Effect, a "disbelieving" commenter on WUWT suggested we think of the Sunlight as truckloads of energy going from the Sun to the Earth Surface, and the infrared radiation from the Surface as equal truckloads going the other way. How, he asked, could these equal and opposite truckloads do anything but cancel each other out as far as the amount of energy on the Surface of the Earth? In reply, I posted a comment with an analogy of truckloads of orange juice, representing short-wave radiation from Sun to Earth, and truckloads of blueberry juice, representing long-wave radiation from the Earth to the Atmosphere and back out to Space.

That thought experiment triggered my creativity. I imagined the Sun as a ball-pitching machine, throwing Yellow balls towards the "Earth" Surface (representing short-wave radiation) and Purple balls (representing long-wave radiation) bouncing back towards Space and interacting with the Atmosphere. The graphic below is one of my depictions of the physical analogy. Follow this link for more graphics and detail.


I imagined the Earth as a well-damped scale. The Yellow balls would bounce off the Surface and turn into Purple balls (representing long-wave radiation as the Earth absorbed the short-wave radiation and then emitted an equal quantity of long-wave radiation). The scale would read "1" unit.

If there was no Atmosphere, or if the Atmosphere contained no "Greenhouse" gases to obstruct the flight of the Purple balls, they would fly out towards Space.

I then imagined the Atmosphere as an obstacle that absorbed the Purple balls, split them in two, and emitted half of the smaller balls to Space and the other half back towards the Earth. The balls going towards Earth would be absorbed, further heating the Earth, and the warmed Earth would emit them back towards the Atmosphere. The process would be repeated with the balls being absorbed by "Greenhouse" gases in the Atmosphere, and then emitted with half going out to Space, and half back to the Earth. The sum of 1 + 1/2 + 1/4 + 1/8 + 1/16 ... = 2 (approximately), so the scale reads "2" units.
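In code, the ball-counting analogy reduces to a geometric series (a sketch of the analogy itself, not a climate model):

```python
# Each round trip the Atmosphere splits what the Surface emits,
# returning half; the scale reading is 1 + 1/2 + 1/4 + ...
reading, incoming = 0.0, 1.0
for bounce in range(50):
    reading += incoming   # Surface absorbs this installment
    incoming /= 2         # Atmosphere sends half back down
print(round(reading, 6))  # 2.0 -- "greenhouse" gases double the reading
```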

Thus, in my simplified analogy, the "Greenhouse" gases in the "Atmosphere" cause the scale reading to double. So, the Atmospheric "Greenhouse" Effect causes the Earth Surface to be warmer than it would be absent the "Greenhouse" gases. I think Einstein would be pleased! Read more detail at WUWT, including the 340 responses (comments received and my brilliant replies!).

2 - Atmospheric Windows

A real greenhouse has windows. So does the Atmospheric “greenhouse effect”. They are similar in that they allow Sunlight in and restrict the outward flow of thermal energy. However, they differ in the mechanism. A real greenhouse primarily restricts heat escape by preventing convection while the “greenhouse effect” heats the Earth because “greenhouse gases” (GHG) absorb outgoing radiative energy and re-emit some of it back towards Earth.
There are two main “windows” in the Atmospheric “greenhouse effect”. The first, the Visible Light Window, on the left side of the graphic, allows visible and near-visible light from the Sun to pass through with small losses, and the second, the Longwave Window, on the right, allows the central portion of the longwave radiation band from the Earth to pass through with small losses, while absorbing and re-emitting the left and right portions.
Sunlight Energy In = Thermal Energy Out
The graphic is an animated depiction of the Atmospheric “greenhouse effect” process.

On the left side:
(1) Sunlight streams through the Atmosphere towards the surface of the Earth.
(2) A portion of the Sunlight is reflected by clouds and other high-albedo surfaces and heads back through the Atmosphere towards Space. The remainder is absorbed by the Surface of the Earth, warming it.
(3) The reflected portion is lost to Space.
On the right side:
(1) The warmed Earth emits longwave radiation towards the Atmosphere. According to the first graphic, above, this consists of thermal energy in all bands ~7μ, ~10μ, and ~15μ.
(2) The ~10μ portion passes through the Atmosphere with little loss. The ~7μ portion gets absorbed, primarily by H2O, and the ~15μ portion gets absorbed, primarily by CO2 and H2O. The absorbed radiation heats the H2O and CO2 molecules and, at their higher energy states, they collide with the other molecules that make up the air, mostly nitrogen (N2), oxygen (O2), ozone (O3), and argon (Ar), heating them by something like conduction. The molecules in the heated air emit radiation in random directions at all bands (~7μ, ~10μ, and ~15μ). The ~10μ photons pass, nearly unimpeded, in whatever direction they happen to be emitted, some going towards Space and some towards Earth. The ~7μ and ~15μ photons go off in all directions until they run into an H2O or CO2 molecule and repeat the absorption and re-emittance process, or until they emerge from the Atmosphere or hit the surface of the Earth.
(3) The ~10μ photons that got a free pass from the Earth through the Atmosphere emerge and their energy is lost to Space. The ~10μ photons generated by the heating of the air emerge from the top of the Atmosphere and their energy is lost to Space, or they impact the surface of the Earth and are re-absorbed. The ~7μ and ~15μ photons generated by the heating of the air also emerge from the top or bottom of the Atmosphere, but there are fewer of them because they keep getting absorbed and re-emitted, each time with some energy transferred to the central ~10μ portion of the longwave band.
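The absorb-and-re-emit wandering described in items (2) and (3) can be sketched as a tiny Monte Carlo (a toy model: one absorbing band, a fixed absorption chance per layer, and purely vertical re-emission; none of these parameters come from the post):

```python
import random

def escapes_to_space(p_absorb=0.8, layers=10):
    """Follow one photon, emitted upward from the surface, through a
    stack of atmospheric layers. In an absorbing band it may be captured
    and re-emitted in a random vertical direction. Returns True if it
    exits the top, False if it comes back down to the surface."""
    position, direction = 0, +1
    while 0 <= position < layers:
        if random.random() < p_absorb:
            direction = random.choice((+1, -1))  # re-emitted randomly
        position += direction
    return position >= layers

random.seed(1)
trials = 100_000
escaped = sum(escapes_to_space() for _ in range(trials))
print(f"{escaped / trials:.0%} escape to Space; the rest return to the surface")
```

With toy parameters like these, most in-band photons end up back at the surface rather than in Space, which is the mechanism behind the downwelling radiation discussed in the next part.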
Read more detail at WUWT, including the 489 responses (comments received and my brilliant replies!)...

3 - Emission Spectra

The Atmospheric “greenhouse effect” has been analogized to a blanket that insulates the Sun-warmed Earth and slows the rate of heat transmission, thus increasing mean temperatures above what they would be absent “greenhouse gases” (GHGs). Perhaps a better analogy would be an electric blanket that, in addition to its insulating properties, also emits thermal radiation both down and up. The graphic below, based upon actual measurements of long-wave radiation as measured by a satellite LOOKING DOWN from the Top of the Atmosphere as well as from the Surface LOOKING UP from the Bottom of the Atmosphere, depicts the situation.
Description of graphic (from bottom to top):
Earth Surface: Warmed by shortwave (~1/2μ) radiation from the Sun, the surface emits upward radiation in the ~7μ, ~10μ, and ~15μ regions of the longwave band. This radiation approximates a smooth “blackbody” curve that peaks at the wavelength corresponding to the surface temperature.
Bottom of the Atmosphere: On its way out to Space, the radiation encounters the Atmosphere, in particular the GHGs, which absorb and re-emit radiation in the ~7μ and ~15μ regions in all directions. Most of the ~10μ radiation is allowed to pass through.
The lower violet/purple curve (adapted from figure 8.1 in Petty and based on measurements from the Tropical Pacific looking UP) indicates how the bottom of the Atmosphere re-emits selected portions back down towards the surface of the Earth. The dashed line represents a “blackbody” curve characteristic of 300ºK (equivalent to 27ºC or 80ºF). Note how the ~7μ and ~15μ regions approximate that curve, while much of the ~10μ region is not re-emitted downward.
“Greenhouse Gases”: The reason for the shape of the downwelling radiation curve is clear when we look at the absorption spectra for the most important GHGs: H2O, H2O, H2O, … H2O, and CO2. (I’ve included multiple H2O’s because water vapor, particularly in the tropical latitudes, is many times more prevalent than carbon dioxide.)
Note that H2O absorbs at up to 100% in the ~7μ region. H2O also absorbs strongly in the ~15μ region, particularly above 20μ, where it reaches 100%. CO2 absorbs at up to 100% in the ~15μ region.
Neither H2O nor CO2 absorb strongly in the ~10μ region.
Since gases tend to re-emit most strongly in the same wavelength regions where they absorb, the ~7μ and ~15μ regions are well-represented in the downwelling radiation, while the ~10μ region is weaker.
Top of the Atmosphere: The upper violet/purple curve (adapted from figure 6.6 in Petty and based on satellite measurements from the Tropical Pacific looking DOWN) indicates how the top of the Atmosphere passes certain portions of radiation from the surface of the Earth out to Space and re-emits selected portions up towards Space. The dashed line represents a “blackbody” curve characteristic of 300ºK. Note that much of the ~10μ region approximates a 295ºK curve while the ~7μ region approximates a cooler 260ºK curve. The ~15μ region is more complicated. Part of it, from about 17μ and up approximates a 260ºK or 270ºK curve, but the region from about 14μ to 17μ has had quite a big bite taken out of it. Note how this bite corresponds roughly with the CO2 absorption spectrum.
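For readers who want to reproduce the dashed “blackbody” curves in the graphic, their shapes follow directly from the Planck law. Here is a minimal sketch of my own; the wavelengths and temperatures are those discussed above:

```python
import math

H = 6.626e-34    # Planck constant (J*s)
C = 2.998e8      # speed of light (m/s)
KB = 1.381e-23   # Boltzmann constant (J/K)

def planck(wavelength_m, temp_k):
    """Planck spectral radiance of a blackbody (W / m^2 / sr / m)."""
    a = 2 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1)

# Radiance in the three bands for the two curve temperatures discussed above
for microns in (7, 10, 15):
    for temp in (260, 300):
        radiance = planck(microns * 1e-6, temp)
        print(f"~{microns}u at {temp} K: {radiance:.3e}")

# Wien's displacement law: where the 300 K "blackbody" curve peaks
print(f"300 K curve peaks at {2.898e-3 / 300 * 1e6:.1f} microns")
```

Wien's displacement law puts the 300ºK peak near 9.7μ, squarely inside the ~10μ window, which is why so much surface radiation can escape directly to Space.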
See more graphics and detail at WUWT, including the 476 responses (comments received and my brilliant replies!)...

4 - Molecules and Photons

In this part, we consider the interaction between air molecules, including Nitrogen (N2), Oxygen (O2), Water Vapor (H2O), and Carbon Dioxide (CO2), and Photons of various wavelengths. This may help us visualize how energy, in the form of Photons radiated by the Sun and the Surface of the Earth, is absorbed and re-emitted by Atmospheric molecules.

The animated graphic has eight frames, as indicated by the counter in the lower right corner. Molecules are symbolized by letter pairs or triplets and Photons by ovals and arrows. The view is of a small portion of the cloud-free Atmosphere.
  1. During the daytime, Solar energy enters the Atmosphere in the form of Photons at wavelengths from about 0.1μ (micron – millionth of a meter) to 4μ, which is called “shortwave” radiation and is represented as ~1/2μ and symbolized as orange ovals. Most of this energy gets a free pass through the cloud-free Atmosphere. It continues down to the Surface of the Earth where some is reflected back by light areas (not shown in the animation) and where most is absorbed and warms the Surface.
  2. Since Earth’s temperature is well above absolute zero, both day and night, the Surface radiates Photons in all directions with the energy distributed approximately according to a “blackbody” at a given temperature. This energy is in the form of Photons at wavelengths from about 4μ to 50μ, which is called “longwave” radiation and is represented as ~7μ, ~10μ, and ~15μ and symbolized as violet, light blue, and purple ovals, respectively. The primary “greenhouse” gases (GHG) are Water Vapor (H2O) and Carbon Dioxide (CO2). The ~7μ Photon is absorbed by an H2O molecule because Water Vapor has an absorption peak in that region, the ~10μ Photon gets a free pass because neither H2O nor CO2 absorb strongly in that region, and one of the 15μ Photons gets absorbed by an H2O molecule while the other gets absorbed by a CO2 molecule because these gases have absorption peaks in that region.
  3. The absorbed Photons raise the energy level of their respective molecules (symbolized by red outlines).
  4. The energized molecules re-emit the Photons in random directions, some upwards, some downwards, and some sideways. Some of the re-emitted Photons make their way out to Space and their energy is lost there, others back down to the Surface where their energy is absorbed, further heating the Earth, and others travel through the Atmosphere for a random distance until they encounter another GHG molecule.
  5. This frame and the next two illustrate another way Photons are emitted, namely due to collisions between energized GHG molecules and other air molecules. As in frame (2) the Surface radiates Photons in all directions and various wavelengths.
  6. The Photons cause the GHG molecules to become energized; they speed up and collide with other gas molecules, energizing them. NOTE: In a gas, the molecules are in constant motion, moving in random directions at different speeds, colliding and bouncing off one another, etc. Indeed, the “temperature” of a gas is proportional to the average kinetic energy of the molecules. In this animation, the gas molecules are fixed in position because it would be too confusing if they were all shown moving, and because the speed of the Photons is so much greater than the speed of the molecules that the molecules hardly move in the time indicated. (A back-of-envelope check of this speed ratio appears after this list.)
  7. The energized air molecules emit radiation at various wavelengths and in random directions, some upwards, some downwards, and some sideways. Some of the re-emitted Photons make their way out to Space and their energy is lost there, others back down to the Surface where their energy is absorbed, further heating the Earth, and others travel through the Atmosphere for a random distance until they encounter another GHG molecule.
  8. Having emitted the energy, the molecules cool down.
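As a back-of-envelope check on the NOTE in frame 6, kinetic theory gives the root-mean-square molecular speed as v_rms = sqrt(3kT/m). A minimal sketch comparing an N2 molecule near the surface with a Photon:

```python
import math

KB = 1.381e-23    # Boltzmann constant (J/K)
AMU = 1.661e-27   # atomic mass unit (kg)
C = 2.998e8       # speed of light (m/s)

def v_rms(mass_amu, temp_k):
    """Root-mean-square molecular speed from kinetic theory, sqrt(3kT/m)."""
    return math.sqrt(3 * KB * temp_k / (mass_amu * AMU))

n2_speed = v_rms(28, 288)   # N2 molecule near the 288 K surface
print(f"N2 rms speed: {n2_speed:.0f} m/s")                    # ~507 m/s
print(f"Photon / molecule speed ratio: {C / n2_speed:,.0f}")  # ~600,000
```

At roughly 507 m/s versus 300,000 km/s, a Photon crosses the frame about 600,000 times faster than a molecule drifts, so freezing the molecules in the animation loses nothing essential.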
Read more detail at WUWT, including the 743 responses (comments received and my brilliant replies!)...

5 - Light and Heat

Solar “light” radiation in = Earth “heat” radiation to Space out! That’s old news to those of us who understand that all energy is fungible (may be converted to different forms of energy) and that energy/mass is conserved (can be neither created nor destroyed).
My Visualizing series [Physical Analogy, Atmospheric Windows, Emission Spectra, and Molecules/Photons] has garnered almost 2000 comments, mostly positive. I’ve learned a lot from WUWT readers who know more than I do. However, some commenters seem to have been taken in by scientific-sounding objections to the basic science behind the Atmospheric “Greenhouse Effect”. Their objections seemed to add more heat than light to the discussion. This posting is designed to get back to basics and perhaps transform our heated arguments into more enlightened understanding :^)

Solar "light" energy in is equal to Earth "heat" energy out.
Read more detail at WUWT, including the 958 responses (comments received and my brilliant replies!)...

ANSWERING THREE OBJECTIONS TO BASIC ATMOSPHERIC “GREENHOUSE EFFECT” SCIENCE

First of all, let me be clear where I am coming from. I’m a Lukewarmer-Skeptic who accepts that H2O, CO2, and other so-called “greenhouse gases” in the Atmosphere do cause the mean temperature of the Earth Surface and Atmosphere to be higher than they would be if everything else (Solar radiation, Earth System Albedo, …) were the same but the Atmosphere were pure nitrogen.

The main scientific question for me is: how much does the increase in human-caused CO2 and human-caused albedo reduction raise the mean temperature above what it would be with natural cycles and processes alone? My answer is “not much”, because perhaps 0.1ºC to 0.2ºC of the supposed 0.8ºC increase since 1880 is due to human activities. The rest is due to natural cycles and processes over which we humans have no control. The main public policy question for me is: how much should we (society) do about it? Again, my answer is “not much”, because the effect is small and a limited increase in temperatures and CO2 may turn out to have a net benefit.

So, my motivation for this Visualizing series is not to add to the Alarmist “the sky is falling” panic, but rather to help my fellow Skeptics avoid the natural temptation to fall into an “equal and opposite” falsehood. Some on my side, whom I call “Disbelievers”, fall into it when they fail to acknowledge the basic facts of the role of H2O, CO2, and other gases in helping to keep temperatures in a livable range.

Objection #1: Visual and near-visual radiation is merely “light” which lacks the “quality” or “oomph” to impart warmth to objects upon which it happens to fall.

Answer #1: A NASA webpage targeted at children is sometimes cited because it says the near-IR beam from a TV remote control is not warm to the touch. Of course, that is not because it is near-visual radiation, but rather because it is very low power. All energy is fungible, and can be changed from one form to another. Thus, the 240 Watts/m^2 of visible and near-visible Solar energy that reaches and is absorbed by the Earth System has the effect of warming the Earth System exactly as much as an equal number of Watts/m^2 of “thermal” mid- and far-IR radiation.
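The “low power, not low quality” point is easy to check with E = hc/λ: if anything, a near-IR photon carries more energy than a “thermal” far-IR photon; what warms an object is the total power delivered, not the wavelength. A minimal sketch (the 0.94μ remote-control wavelength is my assumption, typical of an IR LED, and is not taken from NASA's page):

```python
H = 6.626e-34  # Planck constant (J*s)
C = 2.998e8    # speed of light (m/s)

def photon_energy(wavelength_microns):
    """Energy of a single photon, E = h*c/lambda, in Joules."""
    return H * C / (wavelength_microns * 1e-6)

near_ir = photon_energy(0.94)   # assumed TV-remote wavelength (typical IR LED)
far_ir = photon_energy(15)      # "thermal" far-IR band
print(f"0.94u photon: {near_ir:.2e} J")
print(f"  15u photon: {far_ir:.2e} J")
print(f"energy ratio: {near_ir / far_ir:.0f}x")  # the near-IR photon carries MORE energy
```

Each near-IR photon carries about 16 times the energy of a ~15μ photon; the remote feels cool simply because it emits a few milliwatts in total, not because its wavelength lacks “oomph”.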

Objection #2: The Atmosphere, which is cooler than the Earth Surface, cannot warm the Earth Surface.

Answer #2: The Second Law of Thermodynamics is often cited as the source of this falsehood. The correct interpretation is that the Second Law refers to the net flow of heat, which can only be from the warmer to the cooler object. The back-radiation from the Atmosphere to the Earth Surface has been measured (see lower panel in the above illustration). All matter above absolute zero emits radiation and, once emitted, that radiation does not know if it is travelling from a warmer to a cooler surface or vice-versa. Once it arrives it will either be reflected or absorbed, according to its wavelength and the characteristics of the material it happens to impact.
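The “net” qualifier can be made quantitative with the Stefan-Boltzmann law: both surfaces radiate, and only the difference flows from warm to cool. A minimal sketch (the 288 K surface and 260 K effective atmosphere are my illustrative round numbers):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant (W / m^2 / K^4)

def emitted(temp_k):
    """Blackbody emission per the Stefan-Boltzmann law, sigma * T^4."""
    return SIGMA * temp_k ** 4

surface, atmosphere = 288, 260   # illustrative round-number temperatures (K)
up = emitted(surface)            # surface radiating upward
down = emitted(atmosphere)       # atmosphere back-radiating downward
print(f"surface up:        {up:.0f} W/m^2")    # ~390
print(f"back-radiation:    {down:.0f} W/m^2")  # ~259
print(f"net, warm to cool: {up - down:.0f} W/m^2")  # positive, as the Second Law requires
```

The back-radiation reduces the surface's net loss from ~390 to ~131 W/m^2, yet the net flow remains from warmer to cooler, so the Second Law is perfectly satisfied.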

Objection #3: The Atmospheric “Greenhouse Effect” is fictional. A glass greenhouse works mainly by preventing or reducing convection and the Atmosphere does not work that way at all.

Answer #3: I always try to put “scare quotes” around the word “greenhouse” unless referring to the glass variety because the term is misleading. Yes, a glass greenhouse works by restricting convection, and the fact that glass passes shortwave radiation and not longwave makes only a minor contribution.

Thus, I agree it is unfortunate that the established term for the Atmospheric warming effect is a bit of a misnomer. However, we are stuck with it. But, enough of semantics. Notice that the Earth System mean temperature I had to use to provide 240 Watts/m^2 of radiation to Space, balancing the input absorbed by the Earth System from the Sun, was 255 K. However, the actual mean temperature at the Surface is closer to 288 K. How to explain the extra 33 K (33ºC or about 59ºF)? The only rational explanation is the back-radiation from the Atmosphere to the Surface.
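That 255 K figure is not pulled out of a hat; it follows from inverting the Stefan-Boltzmann law for the 240 Watts/m^2 the Earth System must radiate to Space. A minimal check, using only the numbers already given above:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W / m^2 / K^4)

absorbed = 240.0  # W/m^2 radiated to Space to balance absorbed sunlight (from the text)
t_effective = (absorbed / SIGMA) ** 0.25
print(f"effective radiating temperature: {t_effective:.0f} K")      # ~255 K

t_surface = 288.0  # observed mean surface temperature (from the text)
print(f'"greenhouse" increment: {t_surface - t_effective:.0f} K')   # ~33 K
```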

Ira Glickstein

Sunday, December 29, 2013

Flatland, Particle-Wave Duality and Super-Luminal Effects

The animated graphic above shows our 3-D Space plus Time view of the physical world and contrasts it with the very different view of "Flatlanders" who are restricted to 2-D Space plus Time.

This posting explores the possibility that insights from consideration of Flatland may be extended to higher dimensionality and shed light on Particle-Wave Duality and Super-Luminal (faster than light) effects such as those that may be associated with the EPR (Einstein-Podolsky-Rosen) experiments.

This is the second of a series. The first, Flatland, Dimensionality, and QM Hidden Variables, utilized "Flatland" analogies to explore what Feynman called "quantum weirdness" and Einstein called "spooky action at a distance". In particular, could it be that what we perceive as conflicts between "particles" and "waves" are due to the limits of our perception to 3-D Space plus Time? If we imagine 4-D Space or higher dimensionality, could that help us better understand the "weirdness" and "spooky" nature of Quantum Mechanics (QM)? Could we resolve questions about the Nature of the Universe such as deterministic vs probabilistic, discrete vs continuous, brain vs mind, and so on?

ANIMATED GRAPHIC

(a) and (b) in the graphic visualize "Particle-Wave Duality" based on the famous Double-Slit Experiment. In that experiment, sub-atomic objects, such as electrons or photons, act like either particles or waves, depending upon whether or not they are made to pass through a SINGLE or DOUBLE slit. If there is a SINGLE slit, they act like particles. If there is a DOUBLE slit, they act like waves.

(c) in the graphic visualizes "Super-luminal" (faster than the speed of light) effects such as those that may be associated with the Einstein-Podolsky-Rosen experiment. In the EPR experiments, the actions of an experimenter "A" appear to instantly affect the results measured by distant experimenter "B".

In the animation, we start with a 3-D object that is a bent red plastic tube:

(a) The bent tube is dropped into Flatland and lands on a Flatland shape that has a SINGLE-width slit in it. As the slit is too narrow for the bent tube to pass thru in that Horizontal orientation, the bent tube bounces and rotates 90° to a Vertical position and penetrates Flatland, leaving only a portion of a nearly vertical part of the bent tube in Flatland. The Flatlanders explore that part (the tiny yellow circle) and call it a "PARTICLE".
(b) The bent tube is dropped into Flatland and lands on a Flatland shape that has a DOUBLE-width slit in it. As the slit is wide enough for the bent tube to pass thru in that Horizontal orientation, the bent tube lands there and lies flat in a Horizontal position. The Flatlanders explore that part (the yellow wave) and call it a "WAVE".

Note that a 90° rotation in 3-D Space changes a "PARTICLE-like" object into a "WAVE-like" object for Flatlanders, and vice-versa.

Recall that in the first posting of this series Flatland, Dimensionality, and QM Hidden Variables a 90° rotation in 3-D Space changed the appearance of a Cola can from a CIRCLE to a RECTANGLE as viewed by Flatlanders.

A 90° rotation in a higher-dimensional world appears to change the basic form of an object in a lower-dimensional world. A PARTICLE appears to be a WAVE, a CIRCLE appears to be a RECTANGLE, and vice-versa.

Although not illustrated in the graphic, it turns out that a 180° rotation in a higher-dimensional world changes an object into its MIRROR-image as viewed in a lower-dimensional world.
(c) A human hand penetrates Flatland and is viewed by Flatlanders as five disconnected small yellow circles (or "PARTICLES"). A Flatland triangle-shape happens to touch the pinky of the human hand and the hand reacts by thrusting out its thumb, which happens to push a Flatland square-shape.
The Flatlanders are amazed that touching one PARTICLE (the pinky) causes a totally disconnected and distant PARTICLE (the thumb) to react "instantly" with no apparent means of communication. If we assume the highest speed of communication within Flatland is slower than the 3-D speed of light, to the Flatlanders this reaction seems to be SUPER-luminal (faster than light).


APPLYING FLATLANDER INSIGHT TO QUANTUM MECHANICS

In my recent Dialog with Howard Pattee, we speculated on whether the Universe is actually probabilistic, which is the mainstream scientific view, or deterministic, which is definitely the minority view. I speculated that extending the Flatland scenario beyond 2-D and 3-D Space to 4-D and higher dimensionality might support an alternative QM interpretation such as that of David Bohm.


In a future posting, I plan to create an animated graphic that visualizes a 4-D Space world in which what we perceive as a PARTICLE-WAVE duality is actually a single object (Bohm's "pilot wave") as perceived by denizens of 4-D world.


Ira Glickstein 



Wednesday, November 27, 2013

Flatland, Dimensionality, and QM Hidden Variables


The animated graphic above shows our 3-D Space plus Time view of the physical world and contrasts it with the very different view of "Flatlanders" who are restricted to 2-D Space plus Time. This posting explores the possibility that insights from consideration of Flatland may be extended to higher dimensionality and shed light on what Feynman called "quantum weirdness" and Einstein called "spooky action at a distance".

In particular, could it be that what we perceive as conflicts between "particles" and "waves" are due to the limits of our perception to 3-D Space plus Time? If we imagine 4-D Space or higher dimensionality, could that help us better understand the "weirdness" and "spooky" nature of Quantum Mechanics (QM)? Could we resolve questions about the Nature of the Universe such as deterministic vs probabilistic, discrete vs continuous, brain vs mind, and so on?

ANIMATED GRAPHIC

Things we recognize as the same appear different to Flatlanders: 3-D Space residents recognize a can of cola as being the exact same object (a cylindrical solid) regardless of whether it is upright or on its side. However, when a 3-D can of cola intrudes upon the 2-D Space of Flatlanders, they see it as several different kinds of figures depending upon its orientation.

At the left edge of the graphic, the can is upright, and, to the Flatlanders, it appears as a CIRCLE of CONSTANT DIAMETER. When an identical can of cola is on its side, the Flatlanders first see a LINE as the lower part of the can intrudes upon their 2-D Space. Then, as the can is lowered, the LINE transforms into a NARROW RECTANGLE. As the can is lowered further, the RECTANGLE WIDENS. So, what is it? A CIRCLE? A LINE? A VARIABLE WIDTH RECTANGLE?

To us 3-D Space persons, the can is one, and only one, 3-D object, a CYLINDRICAL SOLID. To the Flatlanders, it is several different 1-D and 2-D objects.

Things we recognize as different appear the same to Flatlanders: Continuing to view the animated graphic, we see that a ball appears to us to be a 3-D SPHERE. To the Flatlanders, it is first a 0-D POINT, then a 2-D CIRCLE OF VARIABLE DIAMETER.

Furthermore, when the 3-D SPHERE intrudes such that the diameter of the Flatlander's 2-D CIRCLE is the same as the diameter of the upright can, they cannot distinguish between the can and the ball!

Things we recognize as a single object appear as multiple objects to Flatlanders: Continuing to view the graphic, when a moving 3-D hand intrudes into the 2-D Space, the Flatlanders see a wide variety of 0-D, 1-D, and 2-D objects. At first, when only three fingertips intrude, they see three POINTS. Then, as the fingers penetrate further, they see four small CIRCLES plus a POINT representing the thumb. Further penetration of the hand, beyond the wrist, yields an OVAL as viewed by Flatlanders.
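The can-of-cola cross-sections are easy to verify numerically: sample points on a cylinder, keep only those lying in the Flatlanders' plane, and look at the bounding box of what remains. A minimal sketch of my own (radius, height, and slice tolerance are arbitrary choices):

```python
import numpy as np

# Sample the lateral surface of a cylinder: radius 1, height 4 (arbitrary units)
theta = np.linspace(0, 2 * np.pi, 400)
height = np.linspace(-2, 2, 400)
T, Z = np.meshgrid(theta, height)
can = np.stack([np.cos(T).ravel(), np.sin(T).ravel(), Z.ravel()], axis=1)

def flatland_view(points, tol=0.02):
    """Keep only the points lying in the Flatlanders' plane z = 0."""
    return points[np.abs(points[:, 2]) < tol][:, :2]

def lay_on_side(points):
    """Rotate 90 degrees about the x-axis: (x, y, z) -> (x, -z, y)."""
    x, y, z = points.T
    return np.stack([x, -z, y], axis=1)

for name, shape in (("upright", flatland_view(can)),
                    ("on its side", flatland_view(lay_on_side(can)))):
    width = shape[:, 0].max() - shape[:, 0].min()
    depth = shape[:, 1].max() - shape[:, 1].min()
    print(f"{name}: bounding box {width:.1f} x {depth:.1f}")
```

The same 3-D object yields a 2 x 2 figure (the CIRCLE) when upright and a 2 x 4 figure (the RECTANGLE) after a 90° rotation, just as the Flatlanders report.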

APPLYING FLATLANDER INSIGHT TO QUANTUM MECHANICS

In my recent Dialog with Howard Pattee, we speculated on whether the Universe is actually probabilistic, which is the mainstream scientific view, or deterministic, which is definitely the minority view. I speculated that extending the Flatland scenario beyond 2-D and 3-D Space to 4-D and higher dimensionality might support an alternative QM interpretation such as that of David Bohm. This posting, which advocates what may be termed "Superdeterminism", is my attempt to support this alternate view.

Classical Physics vs Quantum Physics

Classical physicists accepted the view that the Universe is deterministic, and this was the view of Spinoza, Einstein, and Bohm (and it is also what I, Ira, would like to believe :^).

Quantum physicists generally accept the "Copenhagen Interpretation" of QM which is that the Universe is probabilistic. In our discussion, Howard supported the view that the Universe is probabilistic. While I accept the general consensus that the probabilistic interpretation of QM has stood the test of time and correctly predicted and explained the results of all experiments conducted to date, I nevertheless take the other view.

The Double Slit Experiment - Particles vs Waves

The Double Slit Experiment demonstrates that photons (or electrons) behave like particles when only one slit is open, but like waves when there are two slits. This raises the question: Is matter in general, or sub-atomic matter in particular, really waves or really particles, or something else?

Perhaps residents of a 4-D Space world would see matter as a single type of object and understand why we 3-D Space world residents sometimes see a wave and sometimes a particle? (The can of cola in the graphic above, which we in the 3-D Space world recognize as a single object no matter its orientation, is observed by Flatlanders either as a circle with a continuous edge -or- as a line or rectangle with discrete edges.)

The EPR Paradox - Locality vs Realism

Einstein believed that the Universe exhibited both "Locality" (the influence of a distant event cannot be transmitted faster than the speed of light) and "Realism" (the value of a measurement exists before the measurement is made). Experiments conducted in the 1980's appear to prove him wrong and indicate that we must choose between "Locality" and "Realism" - we cannot have both!

I believe Einstein, if he had to choose, would pick "Realism" over "Locality", meaning that a distant event could exert an influence faster than the speed of light. However, perhaps residents of a 4-D Space world would see that the 3-D Space world was curled up within the 4-D Space world such that objects that appear distant in 3-D Space are actually much closer in 4-D Space. In the graphic above, the fingers of the hand appear to Flatlanders as unconnected points or circles, but we, in the 3-D Space world see that they are all parts of a single object.

A cornerstone of the mainstream scientific interpretation of QM is Heisenberg's uncertainty principle (1927), which is that it is impossible to exactly measure both the position and momentum of a sub-atomic particle.

In 1935, Einstein, Podolsky, and Rosen published a paper that proposed a thought experiment ("EPR") designed to show that Heisenberg Uncertainty was not correct and that QM, as understood and interpreted at the time, was not complete. The EPR idea was to have an experimenter produce two electrons (or photons) that were "entangled" such that they would fly apart at the same velocity in opposite directions and with opposite momentum.

An experimenter at location A would measure the exact time of arrival (and thus velocity) of particle A and a second experimenter at location B (the exact distance in the opposite direction) would measure the momentum of particle B. Since the particles have the same velocity and opposite momenta, this experiment would yield the exact position and momentum of the particles.

According to Wikipedia:
In his groundbreaking 1964 paper, "On the Einstein Podolsky Rosen paradox", physicist John Stewart Bell presented an analogy (based on spin measurements on pairs of entangled electrons) to EPR's hypothetical paradox. Using their reasoning, he said, a choice of measurement setting here should not affect the outcome of a measurement there (and vice versa). After providing a mathematical formulation of locality and realism based on this, he showed specific cases where this would be inconsistent with the predictions of QM.
In 1982, Alain Aspect performed an experiment that did not turn out well for Einstein's expectations. Aspect (and others) experimentally showed that QM was correct and that Einstein's expectations for both "locality" and "realism" could not be supported. In short, you either had to choose "locality" or "realism", but not both!

Einstein had passed away by the time the EPR experiments overturned his expectations. I believe, given the choice, he would insist upon "realism" and abandon "locality". In other words, he would accept that the action of an experimenter "Alice" at point A could instantaneously affect the results obtained by "Bob" at distant point B! (Please note that the EPR experiments did NOT show that INFORMATION could be transmitted from point A to point B faster than the speed of light, only that the actions of the distant experimenter could influence the results obtained locally.)

"Quantum Non-Locality":  The mainstream view of QM is founded on what Feynman called "quantum weirdness" and Einstein termed "spooky action at a distance". The technical term is "quantum non-locality" which means that microscopic measurements may reveal that sub-atomic "particles" that happen to be far apart in Space may never-the-less be "entangled" such that measurement of the state of one "particle" superluminally (faster than the speed of  light) affects the state of the other, no matter how far away it might be!

According to the mainstream view, the action of an experimenter "Alice" at point A could instantaneously affect the results obtained by "Bob" at distant point B! Despite the apparent "weirdness", Feynman accepts the mainstream view. However, Einstein clung to what is now termed "local realism", which is the view that the Universe has both "Locality" and "Realism".

"Locality" means that an object is DIRECTLY influenced ONLY by its immediate surroundings. Thus, the influence of a distant event will be delayed by a length of Time that is at least the distance multiplied by the speed of light. "Locality" is NOT a property of the mainstream interpretation of QM.

"Realism" means that all objects have a VALUE for any possible measurement and that this value EXISTS PRIOR to the measurement. According to the Schrödinger's cat thought experiment, the Copenhagen Interpretation of QM seems to entail that a cat in a sealed box may be both dead and alive until an experimenter opens the box and looks into it! According to that view, the "collapse of the wave function" requires the intervention of a CONSCIOUSNESS. In other words, the Moon may not exist if no one is currently observing it. "Realism" is NOT a property of the mainstream interpretation of QM.

Superdeterminism

According to Wikipedia:
John Bell discussed "Superdeterminism" in a BBC interview.  
There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will. Suppose the world is super-deterministic, with not just inanimate nature running on behind-the-scenes clockwork, but with our behavior, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined, including the "decision" by the experimenter to carry out one set of measurements rather than another, the difficulty disappears.  
There is no need for a faster than light signal to tell particle A what measurement has been carried out on particle B, because the universe, including particle A, already "knows" what that measurement, and its outcome, will be.  
Although he [Bell] acknowledged the loophole, he also argued that it was implausible. Even if the measurements performed are chosen by deterministic random number generators, the choices can be assumed to be "effectively free for the purpose at hand," because the machine's choice is altered by a large number of very small effects. It is unlikely for the hidden variable to be sensitive to all of the same small influences that the random number generator was.
Superdeterminism has also been criticized because of perceived implications regarding the validity of science itself. For example, Anton Zeilinger has commented: "[W]e always implicitly assume the freedom of the experimentalist... This fundamental assumption is essential to doing science. If this were not true, then, I suggest, it would make no sense at all to ask nature questions in an experiment, since then nature could determine what our questions are, and that could guide our questions such that we arrive at a false picture of nature."

As I read the above objections to Superdeterminism (or, as I usually call it, "Absolute or Strict Causality"), it seems to me that Bell and Zeilinger are wrong to assume that the experimenter is "effectively free" or that "we always implicitly assume the freedom of the experimentalist". As Bell acknowledges in the quote above, it is well known that what we commonly call a "random" number generator running in a digital computer is actually bit-for-bit DETERMINISTIC. That is, if we repeatedly start the "random" number generator with a given key number (the seed), the computer will repeat the exact same sequence of supposedly "random" numbers AND that sequence will pass statistical tests of "randomness"!
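This determinism is trivial to demonstrate. A minimal sketch using Python's standard library generator (a Mersenne Twister): restart it with the same key number and it reproduces the identical "random" sequence.

```python
import random

def sequence(key, n=5):
    """Restart the "random" number generator from the same key number."""
    rng = random.Random(key)
    return [rng.random() for _ in range(n)]

run1 = sequence(42)
run2 = sequence(42)
print(run1 == run2)   # True: bit-for-bit identical, hence fully DETERMINISTIC
print(run1)           # ...yet the values look statistically random
```

Every run with key 42 prints the same five numbers, yet the sequence looks statistically random, exactly the situation described above.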

We all agree that a digital computer is a discrete, finite, deterministic machine. According to my view, so is the Universe. Yes, the Universe is much, much, much more complex, but it, and all biological organisms within the Universe, including humans, are machines! 


[UPDATE 29 Dec 2013] I have posted a follow-up to this Topic, with a new animated graphic, that extends the Flatland 2-D Space vs our 3-D Space dichotomy into higher dimensionality to further explore implications for our understanding of QM, see Flatland, Particle-Wave Duality and Super-Luminal Effects.

Ira Glickstein