What the hammer? What the chain?
In what furnace was thy brain?
What the anvil? What dread grasp
Dare its deadly terrors clasp?

William Blake, “The Tyger”

Of all animals, man has the largest brain in proportion to his size.
Aristotle, The Parts of Animals

BIOLOGICAL evolution has been accompanied by increasing complexity. The most complex organisms on Earth today contain substantially more stored information, both genetic and extragenetic, than the most complex organisms of, say, two hundred million years ago - which is only 5 percent of the history of life on the planet, five days ago on the Cosmic Calendar.


The simplest organisms on Earth today have just as much evolutionary history behind them as the most complex, and it may well be that the internal biochemistry of contemporary bacteria is more efficient than the internal biochemistry of the bacteria of three billion years ago. But the amount of genetic information in bacteria today is probably not vastly greater than that in their ancient bacterial ancestors. It is important to distinguish between the amount of information and the quality of that information.

The various biological forms are called taxa (singular, taxon). The largest taxonomic divisions distinguish between plants and animals, or between those organisms with poorly developed nuclei in their cells (such as bacteria and blue-green algae) and those with very clearly demarcated and elaborately architectured nuclei (such as protozoa or people). All organisms on the planet Earth, however, whether they have well-defined nuclei or not, have chromosomes, which contain the genetic material passed on from generation to generation. In all organisms the hereditary molecules are nucleic acids.


With a few unimportant exceptions, the hereditary nucleic acid is always the molecule called DNA (deoxyribonucleic acid). Much finer divisions among various sorts of plants and animals, down to species, subspecies and races, can also be described as separate taxa.

A species is a group that can produce fertile offspring by crosses within but not outside itself. The mating of different breeds of dogs yields puppies which, when grown, will be reproductively competent dogs. But crosses between species - even species as similar as donkeys and horses - produce infertile offspring (in this case, mules). Donkeys and horses are therefore categorized as separate species. Viable but infertile matings of more widely separated species - for example, lions and tigers - sometimes occur, and if, rarely, the offspring are fertile, this indicates only that the definition of species is a little fuzzy.


All human beings are members of the same species, Homo sapiens, which means, in optimistic Latin, “Man, the wise.” Our probable ancestors, Homo erectus and Homo habilis - now extinct - are classified as of the same genus (Homo) but of different species, although no one (at least lately) has attempted the appropriate experiments to see if crosses of them with us would produce fertile offspring.

In earlier times it was widely held that offspring could be produced by crosses between extremely different organisms. The Minotaur whom Theseus slew was said to be the result of a mating between a bull and a woman. And the Roman historian Pliny suggested that the ostrich, then newly discovered, was the result of a cross between a giraffe and a gnat. (It would, I suppose, have to be a female giraffe and a male gnat.) In practice there must be many such crosses which have not been attempted because of a certain understandable lack of motivation.

The chart that appears on page 26 will be referred to repeatedly in this chapter. The solid curve on it shows the times of earliest emergence of various major taxa. Many more taxa exist, of course, than are shown by the few points in the figure. But the curve is representative of the much denser array of points that would be necessary to characterize the tens of millions of separate taxa which have emerged during the history of life on our planet. The major taxa, which have evolved most recently, are by and large the most complicated.

Some notion of the complexity of an organism can be obtained merely by considering its behavior - that is, the number of different functions it is called upon to perform in its lifetime. But complexity can also be judged by the minimum information content in the organism’s genetic material.

A typical human chromosome has one very long DNA molecule wound into coils, so that the space it occupies is very much smaller than it would be if it were unraveled.

This DNA molecule is composed of smaller building blocks, a little like the rungs and sides of a rope ladder. These blocks are called nucleotides and come in four varieties. The language of life, our hereditary information, is determined by the sequence of the four different sorts of nucleotides. We might say that the language of heredity is written in an alphabet of only four letters.

But the book of life is very rich; a typical chromosomal DNA molecule in a human being is composed of about five billion pairs of nucleotides. The genetic instructions of all the other taxa on Earth are written in the same language, with the same code book. Indeed, this shared genetic language is one line of evidence that all the organisms on Earth are descended from a single ancestor, a single instance of the origin of life some four billion years ago.

The information content of any message is usually described in units called bits, which is short for “binary digits.” The simplest arithmetical scheme uses not ten digits (as we do because of the evolutionary accident that we have ten fingers) but only two, 0 and 1. Thus any sufficiently crisp question can be answered by a single binary digit - 0 or 1, yes or no. If the genetic code were written in a language of two letters rather than four letters, the number of bits in a DNA molecule would equal twice the number of nucleotide pairs.


But since there are four different kinds of nucleotides, the number of bits of information in DNA is four times the number of nucleotide pairs. Thus if a single chromosome has five billion (5 × 10⁹) nucleotide pairs, it contains twenty billion (2 × 10¹⁰) bits of information. [A symbol such as 10⁹ merely indicates a one followed by a certain number of zeroes - in this case, nine of them.]
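The arithmetic of the paragraph above can be checked in a few lines. (This is simply the book's counting convention - tallying both nucleotides of a complementary pair, which an information theorist might call double-counting - expressed as code.)

```python
import math

# Four varieties of nucleotide -> log2(4) = 2 bits per nucleotide.
bits_per_nucleotide = math.log2(4)

# The text tallies both nucleotides of a pair: 4 bits per pair.
bits_per_pair = 2 * bits_per_nucleotide

nucleotide_pairs = 5e9  # about five billion pairs in one chromosomal DNA molecule
total_bits = bits_per_pair * nucleotide_pairs

print(f"{total_bits:.0e} bits")  # twenty billion bits
```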

How much information is twenty billion bits? What would be its equivalent, if it were written down in an ordinary printed book in a modern human language? Alphabetical human languages characteristically have twenty to forty letters plus one or two dozen numerals and punctuation marks; thus sixty-four alternative characters should suffice for most such languages.


Since 2⁶ equals 64 (2 × 2 × 2 × 2 × 2 × 2), it should take no more than six bits to specify a given character. We can think of this being done by a sort of game of “Twenty Questions,” in which each answer to a yes/no question corresponds to the investment of a single bit. Suppose the character in question is the letter J. We might specify it by the following procedure:

FIRST QUESTION: Is it a letter (0) or some other character (1)?
ANSWER: A letter (0)

SECOND QUESTION: Is it in the first half (0) or the second half of the alphabet (1)?

ANSWER: In the first half (0)

THIRD QUESTION: Of the thirteen letters in the first half of the alphabet, is it in the first seven (0) or the second six (1)?
ANSWER: In the second six (1)

FOURTH QUESTION: In the second six (H, I, J, K, L, M), is it in the first half (0) or the second half (1)?
ANSWER: In the first half (0)

FIFTH QUESTION: Of these letters H, I, J, is it H (0) or is it one of I and J (1)?
ANSWER: It is one of I and J (1)

SIXTH QUESTION: Is it I (0) or J (1)?
ANSWER: It is J (1).

Specifying the letter J is therefore equivalent to the binary message 001011. But it required not twenty questions but six, and it is in this sense that only six bits are required to specify a given letter. Therefore twenty billion bits are the equivalent of about three billion letters (2 × 10¹⁰/6 ≈ 3 × 10⁹).
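The question game is just a binary search: each yes/no answer halves the set of remaining candidates. A minimal sketch in Python (it splits each remaining set exactly in half, so the questions come out in a slightly different order than in the game above and J happens to need one bit fewer, but no letter ever needs more than six bits in total):

```python
from math import ceil

LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode(ch):
    """Pin down ch among the 26 letters by successive halving; one bit per answer."""
    candidates = list(LETTERS)
    bits = "0"  # first question: it is a letter (0), not some other character
    while len(candidates) > 1:
        half = ceil(len(candidates) / 2)
        if ch in candidates[:half]:
            bits, candidates = bits + "0", candidates[:half]
        else:
            bits, candidates = bits + "1", candidates[half:]
    return bits

def decode(bits):
    """Replay the answers to recover the letter."""
    candidates = list(LETTERS)
    for b in bits[1:]:  # skip the letter-or-other bit
        half = ceil(len(candidates) / 2)
        candidates = candidates[:half] if b == "0" else candidates[half:]
    return candidates[0]
```

Run over the whole alphabet, every letter round-trips through encode and decode, and the longest code is six bits - which is the sense in which twenty billion bits correspond to about three billion characters.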


If there are approximately six letters in an average word, the information content of a human chromosome corresponds to about five hundred million words (3 × 10⁹/6 = 5 × 10⁸). If there are about three hundred words on an ordinary page of printed type, this corresponds to about two million pages (5 × 10⁸/(3 × 10²) ≈ 2 × 10⁶). If a typical book contains five hundred such pages, the information content of a single human chromosome corresponds to some four thousand volumes (2 × 10⁶/(5 × 10²) = 4 × 10³).
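Carried through in one pass, with the same round numbers the text uses, the chain of conversions looks like this (exact division gives about 3,700 volumes; the text's step-by-step rounding gives the quoted “some four thousand”):

```python
bits = 2e10             # information content of one human chromosome
letters = bits / 6      # six bits per character
words = letters / 6     # about six letters per average word
pages = words / 300     # about three hundred words per printed page
volumes = pages / 500   # about five hundred pages per book

print(round(volumes))   # roughly four thousand volumes
```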


It is clear, then, that the sequence of rungs on our DNA ladders represents an enormous library of information. It is equally clear that so rich a library is required to specify as exquisitely constructed and intricately functioning an object as a human being. Simple organisms have less complexity and less to do, and therefore require a smaller amount of genetic information. The Viking landers that put down on Mars in 1976 each had preprogrammed instructions in their computers amounting to a few million bits. Thus Viking had slightly more “genetic information” than a bacterium, but significantly less than an alga.

The chart on page 26 also shows the minimum amount of genetic information in the DNA of various taxa. The amount shown for mammals is less than for human beings, because most mammals have less genetic information than human beings do. Within certain taxa - for example, the amphibians - the amount of genetic information varies wildly from species to species, and it is thought that much of this DNA may be redundant or functionless. This is the reason that the chart displays the minimum amount of DNA for a given taxon.

We see from the chart that there was a striking improvement in the information content of organisms on Earth some three billion years ago, and a slow increase in the amount of genetic information thereafter. We also see that if more than some tens of billions (several times 10¹⁰) of bits of information are necessary for human survival, extragenetic systems will have to provide them: the rate of development of genetic systems is so slow that no source of such additional biological information can be sought in the DNA.

The raw materials of evolution are mutations, inheritable changes in the particular nucleotide sequences that make up the hereditary instructions in the DNA molecule. Mutations are caused by radioactivity in the environment, by cosmic rays from space, or, as often happens, randomly - by spontaneous rearrangements of the nucleotides which statistically must occur every now and then. Chemical bonds spontaneously break. Mutations are also to some extent controlled by the organism itself. Organisms have the ability to repair certain classes of structural damage done to their DNA.


There are, for example, molecules which patrol the DNA for damage; when a particularly egregious alteration in the DNA is discovered, it is snipped out by a kind of molecular scissors, and the DNA put right. But such repair is not and must not be perfectly efficient: mutations are required for evolution. A mutation in a DNA molecule within a chromosome of a skin cell in my index finger has no influence on heredity. Fingers are not involved, at least directly, in the propagation of the species. What counts are mutations in the gametes, the eggs and sperm cells, which are the agents of sexual reproduction.

Accidentally useful mutations provide the working material for biological evolution - as, for example, a mutation for melanin in certain moths, which changes their color from white to black. Such moths commonly rest on English birch trees, where their white coloration provides protective camouflage. Under these conditions, the melanin mutation is not an advantage - the dark moths are starkly visible and are eaten by birds; the mutation is selected against. But when the Industrial Revolution began to cover the birch bark with soot, the situation was reversed, and only moths with the melanin mutation survived.


Then the mutation is selected for, and, in time, almost all the moths are dark, passing this inheritable change on to future generations. There are still occasional reverse mutations eliminating the melanin adaptation, which would be useful for the moths were English industrial pollution to be controlled. Note that in all this interaction between mutation and natural selection, no moth is making a conscious effort to adapt to a changed environment. The process is random and statistical.

Large organisms such as human beings average about one mutation per ten gametes - that is, there is a 10 percent chance that any given sperm or egg cell produced will have a new and inheritable change in the genetic instructions that determine the makeup of the next generation. These mutations occur at random and are almost uniformly harmful - it is rare that a precision machine is improved by a random change in the instructions for making it.
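A step the text leaves implicit: each offspring is produced from two gametes, so the ten percent figure compounds. Assuming (a simplification not stated in the text) that the two gametes mutate independently, roughly one offspring in five carries at least one brand-new mutation:

```python
p_gamete = 0.10  # the text's figure: one mutation per ten gametes

# Chance that a zygote formed from two independent gametes carries
# at least one new mutation = 1 minus the chance that neither does.
p_offspring = 1 - (1 - p_gamete) ** 2

print(f"{p_offspring:.0%}")  # about one in five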

Most of these mutations are also recessive - they do not manifest themselves immediately. Nevertheless, there is already such a high mutation rate that, as several biologists have suggested, a larger complement of genetic DNA would bring about unacceptably high mutation rates: too much would go wrong too often if we had more genes.* If this is true, there must be a practical upper limit to the amount of genetic information that the DNA of larger organisms can accommodate. Thus large and complex organisms, by the mere fact of their existence, have to have substantial resources of extragenetic information. That information is contained, in all higher animals except Man, almost exclusively in the brain.


* To some extent the mutation rate is itself controlled by natural selection, as in our example of a “molecular scissors.”
But there is likely to be an irreducible minimum mutation rate (1) in order to produce enough genetic experiments for natural selection to operate on, and (2) as an equilibrium between mutations produced, say, by cosmic rays and the most efficient possible cellular repair mechanisms.

What is the information content of the brain? Let us consider two opposite and extreme poles of opinion on brain function. In one view, the brain, or at least its outer layers, the cerebral cortex, is equipotent: any part of it may substitute for any other part, and there is no localization of function. In the other view, the brain is completely hard-wired: specific cognitive functions are localized in particular places in the brain.


Computer design suggests that the truth lies somewhere between these two extremes. On the one hand, any nonmystical view of brain function must connect physiology with anatomy; particular brain functions must be tied to particular neural patterns or other brain architecture. On the other hand, to assure accuracy and protect against accident we would expect natural selection to have evolved substantial redundancy in brain function. This is also to be expected from the evolutionary path that it is most likely the brain followed.

The redundancy of memory storage was clearly demonstrated by Karl Lashley, a Harvard psycho-neurologist, who surgically removed (extirpated) significant fractions of the cerebral cortex of rats without noticeably affecting their recollection of previously learned maze-running behavior. From such experiments it is clear that the same memory must be localized in many different places in the brain, and we now know that some memories are funneled between the left and right cerebral hemispheres by a conduit called the corpus callosum.

Lashley also reported no apparent change in the general behavior of a rat when significant fractions - say, 10 percent - of its brain were removed. But no one asked the rat its opinion. To investigate this question properly would require a detailed study of rat social, foraging, and predator-evasion behavior.

There are many conceivable behavioral changes resulting from such extirpations that might not be immediately obvious to the casual scientist but that might be of considerable significance to the rat - such as the amount of post-extirpation interest an attractive rat of the opposite sex now elicits, or the degree of disinterest now evinced by the presence of a stalking cat.*


(* Incidentally, as a test of the influence of animated cartoons on American life, try rereading this paragraph with the word “rat” replaced everywhere by “mouse,” and see if your sympathy for the surgically invaded and misunderstood beast suddenly increases.)

It is sometimes argued that cuts or lesions in significant parts of the cerebral cortex in humans - as by bilateral prefrontal lobotomy or by an accident - have little effect on behavior. But some sorts of human behavior are not very apparent from the outside, or even from the inside. There are human perceptions and activities that may occur only rarely, such as creativity. The association of ideas involved in acts - even small ones - of creative genius seems to imply substantial investments of brain resources. These creative acts indeed characterize our entire civilization and mankind as a species. Yet in many people they occur only rarely, and their absence may be missed by neither the brain-damaged subject nor the inquiring physician.

While substantial redundancy in brain function is inevitable, the strong equipotent hypothesis is almost certainly wrong, and most contemporary neurophysiologists have rejected it. On the other hand, a weaker equipotent hypothesis - holding, for example, that memory is a function of the cerebral cortex as a whole - is not so readily dismissible, although it is testable, as we shall see.

There is a popular contention that half or more of the brain is unused. From an evolutionary point of view this would be quite extraordinary: why should it have evolved if it had no function? But actually the statement is made on very little evidence. Again, it is deduced from the finding that many lesions of the brain, generally of the cerebral cortex, have no apparent effect on behavior. This view does not take into account,

(1) the possibility of redundant function; and

(2) the fact that some human behavior is subtle.

For example, lesions in the right hemisphere of the cerebral cortex may lead to impairments in thought and action, but in the nonverbal realm, which is, by definition, difficult for the patient or the physician to describe.

There is also considerable evidence for localization of brain function. Specific brain sites below the cerebral cortex have been found to be concerned with appetite, balance, thermal regulation, the circulation of the blood, precision movements and breathing. A classic study on higher brain function is the work of the Canadian neurosurgeon, Wilder Penfield, on the electrical stimulation of various parts of the cerebral cortex, generally in attempts to relieve symptoms of a disease such as psychomotor epilepsy. Patients reported a snatch of memory, a smell from the past, a sound or color trace - all elicited by a small electrical current at a particular site in the brain.

In a typical case, a patient might hear an orchestral composition in full detail when current flowed through Penfield’s electrode to the patient’s cortex, exposed after a craniotomy. If Penfield indicated to the patient - who typically is fully conscious during such procedures - that he was stimulating the cortex when he was not, invariably the patient would report no memory trace at that moment. But when, without notice, a current would flow through the electrode into the cortex, a memory trace would begin or continue.


A patient might report a feeling tone, or a sense of familiarity, or a full retrieval of an experience from many years before, playing back in his mind simultaneously with, but in no conflict with, his awareness of being in an operating room conversing with a physician. While some patients described these flashbacks as “little dreams,” they contained none of the characteristic symbolism of dream material. These experiences have been reported almost exclusively by epileptics, and it is possible, although it has by no means been demonstrated, that non-epileptics are, under similar circumstances, subject to comparable perceptual reminiscences.

In one case of electrical stimulation of the occipital lobe, which is concerned with vision, the patient reported seeing a fluttering butterfly of such compelling reality that he stretched out his hand from the operating table to catch it. In an identical experiment performed on an ape, the animal peered intently, as if at an object before him, made a swift catching motion with his right hand, and then examined, in apparent bewilderment, his empty fist.

Painless electrical stimulation of at least some human cerebral cortices elicits cascades of memories of particular events. But removal of the brain tissue in contact with the electrode does not erase the memory. It is difficult to resist the conclusion that at least in humans memories are stored somewhere in the cerebral cortex, waiting for the brain to retrieve them by electrical impulses - which, of course, are ordinarily generated within the brain itself.

If memory is a function of the cerebral cortex as a whole - a kind of dynamic reverberation or electrical standing wave pattern of the constituent parts, rather than stored statically in separate brain components - this would explain the survival of memory after significant brain damage. The evidence, however, points in the other direction: In experiments performed by the American neurophysiologist Ralph Gerard at the University of Michigan, hamsters were taught to run a simple maze and then chilled almost to the freezing point in a refrigerator, a kind of induced hibernation.


The temperatures were so low that all detectable electrical activity in the animals’ brains ceased. If the dynamic view of memory were true, the experiment should have wiped out all memory of successful maze-running. Instead, after thawing, the hamsters remembered. Memory seems to be localized in specific sites in the brain, and the survival of memories after massive brain lesions must be the result of redundant storage of static memory traces in various locales.

Penfield, extending the findings of previous researchers, also uncovered a remarkable localization of function in the motor cortex. Certain parts of the outer layers of our brain are responsible for sending signals to or receiving signals from specific parts of the body. A version of Penfield’s maps of the sensory and motor cortices appears on pages 36 and 37. It reflects in an engaging way the relative importance of various parts of our body.


The enormous amount of brain area committed to the fingers - particularly the thumb - and to the mouth and the organs of speech corresponds precisely to what, through human physiology and human behavior, has set us apart from most of the other animals. Our learning and our culture would never have developed without speech; our technology and our monuments would never have evolved without hands. In a way, the map of the motor cortex is an accurate portrait of our humanity.

But the evidence for localization of function is now much stronger even than this. In an elegant set of experiments, David Hubel of Harvard Medical School discovered the existence of networks of particular brain cells that respond selectively to lines perceived by the eye in different orientations. There are cells for horizontal, and cells for vertical, and cells for diagonal, each of which is stimulated only if lines of the appropriate orientation are perceived. At least some beginnings of abstract thought have thereby been traced to the cells of the brain.

The existence of specific brain areas dealing with particular cognitive, sensory or motor functions implies that there need not be any perfect correlation between brain mass and intelligence; some parts of the brain are clearly more important than others. Among the most massive human brains on record are those of Oliver Cromwell, Ivan Turgenev and Lord Byron, all of whom were smart but no Albert Einsteins. Einstein’s brain, on the other hand, was not remarkably large. Anatole France, who was brighter than many, had a brain half the size of Byron’s.


The human baby is born with an exceptionally high ratio of brain mass to body mass (about 12 percent); and the brain, particularly the cerebral cortex, continues to grow rapidly in the first three years of life - the period of most rapid learning. By age six, the mass of the brain is 90 percent of its adult value. The average mass of the brain of contemporary men is about 1,375 grams, almost three pounds. Since the density of the brain, like that of all body tissues, is about that of water (one gram per cubic centimeter), the volume of such a brain is 1,375 cubic centimeters, a little under a liter and a half. (One cubic centimeter is about the volume of an adult human navel.)

But the brain of a contemporary woman is about 150 cubic centimeters smaller. When cultural and child-rearing biases are taken into account, there is no clear evidence of overall differences in intelligence between the sexes. Therefore, brain mass differences of 150 grams in humans must be unimportant. Comparable differences in brain mass exist among adults of different human races (Orientals, on the average, have slightly larger brains than whites); since no differences in intelligence under similarly controlled conditions have been demonstrated there, the same conclusion follows. And the gap between the sizes of the brains of Lord Byron (2,200 grams) and Anatole France (1,100 grams) suggests that, in this range, differences of many hundreds of grams may be functionally unimportant.

On the other hand, adult human microcephalics, who are born with tiny brains, have vast losses in cognitive abilities; their typical brain masses are between 450 and 900 grams. A normal newborn child has a typical brain mass of 350 grams; a one-year-old, about 500 grams. It is clear that, as we consider smaller and smaller brain masses, there comes a point where the brain mass is so tiny that its function is severely impaired, compared to normal adult human brain function.

Moreover, there is a statistical correlation between brain mass or size and intelligence in human beings. The relationship is not one-to-one, as the Byron-France comparison clearly shows. We cannot tell a person’s intelligence in any given case by measuring his or her brain size. However, as the American evolutionary biologist Leigh Van Valen of the University of Chicago has shown, the available data suggest a fairly good correlation, on the average, between brain size and intelligence. Does this mean that brain size in some sense causes intelligence?


Might it not be, for example, that malnutrition, particularly in utero and in infancy, leads to both small brain size and low intelligence, without the one causing the other? Van Valen points out that the correlation between brain size and intelligence is much better than the correlation between intelligence and stature or adult body weight, which are known to be influenced by malnutrition, and there is no doubt that malnutrition can lower intelligence. Thus beyond such effects, there appears to be an extent to which larger absolute brain size tends to produce higher intelligence.

In exploring new intellectual territory, physicists have found it useful to make order-of-magnitude estimates. These are rough calculations that block out the problem and serve as guides for future studies. They do not pretend to be highly accurate. In the question of the connection between brain size and intelligence, it is clearly far beyond present scientific abilities to perform a census of the function of every cubic centimeter of the brain. But might there not be some rough and approximate way in which to connect brain mass with intelligence?

The difference in brain mass between the sexes is of interest in precisely this context, because women are systematically smaller in size and have a lower body mass than men. With less body to control, might not a smaller brain mass be adequate? This suggests that a better measure of intelligence than the absolute value of the mass of a brain is the ratio of the mass of the brain to the total mass of the organism.

The chart on page 39 shows the brain masses and body masses of various animals. There is a remarkable separation of fish and reptiles from birds and mammals. For a given body mass or weight, mammals have consistently higher brain mass. The brains of mammals are ten to one hundred times more massive than the brains of contemporary reptiles of comparable size.


The discrepancy between mammals and dinosaurs is even more striking. These are stunningly large and completely systematic differences. Since we are mammals, we probably have some prejudices about the relative intelligence of mammals and reptiles; but I think the evidence is quite compelling that mammals are indeed systematically much more intelligent than reptiles.


(Also shown is an intriguing exception: a small ostrich-like theropod class of dinosaurs from the late Cretaceous Period, whose ratio of brain to body mass places them just within the region of the diagram otherwise restricted to large birds and the less intelligent mammals. It would be interesting to know much more about these creatures, which have been studied by Dale Russell, chief of the Paleontology Division of the National Museums of Canada.)


We also see from the chart on page 39 that the primates, a taxon that includes man, are separated, but less systematically, from the rest of the mammals; primate brains are on the average more massive by a factor of about two to twenty than those of nonprimate mammals of the same body mass.

When we look more closely at this chart, isolating a number of particular animals, we see the results on page 40. Of all the organisms shown, the beast with the largest brain mass for its body weight is a creature called Homo sapiens. Next in such a ranking are the dolphins.* Again I do not think it is chauvinistic to conclude from evidence on their behavior that humans and dolphins are at least among the most intelligent organisms on Earth.

* By the criterion of brain mass to body mass, sharks are the smartest of the fishes, which is consistent with their ecological niche - predators have to be brighter than plankton browsers. Both in their increasing ratio of brain to body mass and in the development of coordinating centers in the three principal components of their brains, sharks have evolved in a manner curiously parallel to the evolution of higher vertebrates on the land.

The importance of this ratio of brain to body mass had been realized even by Aristotle. Its principal modern exponent has been Harry Jerison, a neuro-psychiatrist at the University of California at Los Angeles. Jerison points out that some exceptions exist to our correlation - e.g., the European pygmy shrew has a brain mass of 100 milligrams in a 4.7 gram body, which gives it a mass ratio in the human range. But we cannot expect the correlation of mass ratio with intelligence to apply to the smallest animals, because the simplest “housekeeping” functions of the brain must require some minimum brain mass.

The brain mass of a mature sperm whale, a close relative of the dolphin, is almost 9,000 grams, six and a half times that of the average man. It is unusual in total brain mass, not (compare with the figure below) in ratio of brain to body weight. Yet the largest dinosaurs had brain weight about 1 percent that of the sperm whale. What does the whale do with so massive a brain? Are there thoughts, insights, arts, sciences and legends of the sperm whale?
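Putting the chapter's own figures side by side shows why the ratio, and not the absolute mass, is the telling index. The brain masses below are from the text; the human and sperm-whale body masses are rough outside assumptions (about 70 kilograms and about 35 tonnes respectively), supplied only to make the comparison concrete:

```python
# (brain mass in grams, body mass in grams)
animals = {
    "human":       (1375, 70_000),       # body mass assumed: ~70 kg adult
    "pygmy shrew": (0.1, 4.7),           # both figures from the text
    "sperm whale": (9000, 35_000_000),   # body mass assumed: ~35 tonnes
}

for name, (brain, body) in animals.items():
    # The whale's brain dwarfs ours in absolute mass, yet its ratio is tiny;
    # the shrew's ratio lands in the human range, as noted above.
    print(f"{name:12s} brain/body = {brain / body:.2%}")
```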

The criterion of brain mass to body mass, which involves no considerations of behavior, appears to provide a very useful index of the relative intelligence of quite different animals. It is what a physicist might describe as an acceptable first approximation.


(Note for future reference that the Australopithecines, who were either ancestral to man or at least close collateral relatives, also had a large brain mass for their body weight; this has been determined by making casts of fossil braincases.)


I wonder if the unaccountable general appeal of babies and other small mammals - with relatively large heads compared to adults of the same species - derives from our unconscious awareness of the importance of brain to body mass ratios.

The data so far in this discussion suggest that the evolution of mammals from reptiles over two hundred million years ago was accompanied by a major increase in relative brain size and intelligence; and that the evolution of human beings from nonhuman primates a few million years ago was accompanied by an even more striking development of the brain.

The human brain (apart from the cerebellum, which does not seem to be involved in cognitive functions) contains about ten billion switching elements called neurons. (The cerebellum, which lies beneath the cerebral cortex, toward the back of the head, contains roughly another ten billion neurons.) The electrical currents generated by and through the neurons or nerve cells were the means by which the Italian anatomist Luigi Galvani discovered electricity. Galvani had found that electrical impulses could be conducted to the legs of frogs, which dutifully twitched; and the idea became popular that animal motion (“animation”) was in its deepest sense caused by electricity.


This is at best a partial truth; electrical impulses transmitted along nerve fibers do, through neurochemical intermediaries, initiate such movements as the articulation of limbs, but the impulses are generated in the brain. Nevertheless, the modern science of electricity and the electrical and electronic industries all trace their origins to eighteenth - century experiments on the electrical stimulation of twitches in frogs.

Only a few decades after Galvani, a group of literary English persons, immobilized in the Alps by inclement weather, set themselves a competition to write a fictional work of consummate horror. One of them, Mary Wollstonecraft Shelley, penned the now famous tale of Dr. Frankenstein’s monster, who is brought to life by the application of massive electrical currents. Electrical devices have been a mainstay of gothic novels and horror films ever since. The essential idea is Galvani’s and is fallacious, but the concept has insinuated itself into many Western languages - as, for example, when I am galvanized into writing this book.

Most neurobiologists believe that the neurons are the active elements in brain function, although there is evidence that some specific memories and other cognitive functions may be contained in particular molecules in the brain, such as RNA or small proteins. For every neuron in the brain there are roughly ten glial cells (from the Greek word for glue), which provide the scaffolding for the neuronal architecture. An average neuron in a human brain has between 1,000 and 10,000 synapses or links with adjacent neurons.


(Many spinal-cord neurons seem to have about 10,000 synapses, and the so-called Purkinje cells of the cerebellum may have still more. The number of links for neurons in the cortex is probably less than 10,000.)


If each synapse responds by a single yes-or-no answer to an elementary question, as is true of the switching elements in electronic computers, the maximum number of yes/no answers or bits of information that the brain could contain is about 10¹⁰ × 10³ = 10¹³, or 10 trillion bits (or 100 trillion = 10¹⁴ bits if we had used 10⁴ synapses per neuron).


Some of these synapses must contain the same information as is contained in other synapses; some must be concerned with motor and other noncognitive functions; and some may be merely blank, a buffer waiting for the new day’s information to flutter through.

If each human brain had only one synapse - corresponding to a monumental stupidity - we would be capable of only two mental states. If we had two synapses, then 2² = 4 states; three synapses, then 2³ = 8 states; and, in general, for N synapses, 2ᴺ states. But the human brain is characterized by some 10¹³ synapses. Thus the number of different states of a human brain is 2 raised to this power - i.e., multiplied by itself ten trillion times. This is an unimaginably large number, far greater, for example, than the total number of elementary particles (electrons and protons) in the entire universe, which is much less than 2 raised to the power 10³.
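The arithmetic of the last two paragraphs can be replayed in a few lines of Python (an illustrative sketch, not from the text; since 2 raised to the power 10¹³ is unprintably large, the state count is reported as a power of ten):

```python
import math

neurons = 10**10               # switching elements in the cerebral cortex
synapses_per_neuron = 10**3    # low end of the 1,000-10,000 range
bits = neurons * synapses_per_neuron   # one yes/no bit per synapse

# 2**bits cannot be printed in full, so report its base-10 logarithm:
# log10(2**bits) = bits * log10(2), roughly 3 x 10^12 decimal digits.
log10_states = bits * math.log10(2)
print(f"bits: {bits:.0e}, brain states: ~10^{log10_states:.0f}")
```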


It is because of this immense number of functionally different configurations of the human brain that no two humans, even identical twins raised together, can ever be really very much alike. These enormous numbers may also explain something of the unpredictability of human behavior and those moments when we surprise even ourselves by what we do. Indeed, in the face of these numbers, the wonder is that there are any regularities at all in human behavior.


The answer must be that all possible brain states are by no means occupied; there must be an enormous number of mental configurations that have never been entered or even glimpsed by any human being in the history of mankind. From this perspective, each human being is truly rare and different and the sanctity of individual human lives is a plausible ethical consequence.

In recent years it has become clear that there are electrical microcircuits in the brain. In these microcircuits the constituent neurons are capable of a much wider range of responses than the simple “yes” or “no” of the switching elements in electronic computers. The microcircuits are very small (typical dimensions are about 1/10,000 of a centimeter) and thus able to process data very rapidly. They respond to about 1/100th of the voltage necessary to stimulate ordinary neurons, and are therefore capable of much finer and subtler responses.


Such microcircuits seem to increase in abundance in a manner consistent with our usual notions about the complexity of an animal, reaching their greatest proliferation in both absolute and relative terms in human beings. They also develop late in human embryology. The existence of such microcircuits suggests that intelligence may be the result not only of high brain-to-body-mass ratios but also of an abundance of specialized switching elements in the brain. Microcircuits make the number of possible brain states even greater than we calculated above, and so enhance still further the astonishing uniqueness of the individual human brain.

We can approach the question of the information content of the human brain in a quite different way -  introspectively. Try to imagine some visual memory, say from your childhood. Look at it very closely in your mind’s eye. Imagine it is composed of a set of fine dots like a newspaper wire-photo. Each dot has a certain color and brightness. You must now ask how many bits of information are necessary to characterize the color and brightness of each dot; how many dots make up the recalled picture; and how long it takes to recall all the details of the picture in the eye of the mind.


In such introspection you focus on a very small part of the picture at any one time; your field of view is quite limited. When you put in all these numbers, you come out with a rate of information processing by the brain, in bits per second. When I do such a calculation, I come out with a peak processing rate of about 5,000 bits per second.*


* Horizon to horizon comprises an angle of 180 degrees in a flat place. The moon is 0.5 degrees in diameter. I know I can see detail on it, perhaps twelve picture elements across. Thus my eye can resolve about 0.5/12 = 0.04 degrees. Anything smaller than this is too small for me to see. The instantaneous field of view in my mind’s eye, as well as in my real eye, seems to be something like 2 degrees on a side. Thus the little square picture I can see at any given moment contains about (2/0.04)² = 2,500 picture elements, corresponding to the wirephoto dots.

Most commonly such visual recollections concentrate on the edges of forms and sharp changes from bright to dark, and not on the configuration of areas of largely neutral brightness. The frog, for example, sees with a very strong bias towards brightness gradients. However, there is considerable evidence that detailed memory of interiors and not just edges of forms is reasonably common. Perhaps the most striking case is an experiment with humans on stereo reconstruction of a three - dimensional image, using a pattern recalled for one eye and a pattern being viewed for the other. The fusion of images in this anaglyph requires a memory of 10,000 picture elements.

To characterize all possible shades of gray and colors of such dots requires about 20 bits per picture element. Thus a description of my little picture requires 2,500 × 20, or about 50,000 bits. But the act of scanning the picture takes about 10 seconds, and thus my sensory data processing rate is probably not much larger than 50,000/10 = 5,000 bits per second.
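Putting the footnote’s numbers together with this paragraph’s gives the 5,000-bit-per-second figure directly (an illustrative sketch using the author’s rounded values):

```python
resolution_deg = 0.04        # ~0.5 deg moon / 12 picture elements, rounded
field_of_view_deg = 2.0      # side of the mind's-eye square

elements = (field_of_view_deg / resolution_deg) ** 2   # ~2,500 dots
bits_per_element = 20        # shades of gray and color per dot
scan_time_s = 10             # seconds to scan the recalled picture

picture_bits = elements * bits_per_element   # ~50,000 bits per picture
rate = picture_bits / scan_time_s            # ~5,000 bits per second
print(round(elements), round(picture_bits), round(rate))
```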


For comparison, the Viking lander cameras, which also have a 0.04 degree resolution, have only six bits per picture element to characterize brightness, and can transmit these directly to Earth by radio at 500 bits per second. The neurons of the brain generate about 25 watts of power, barely enough to turn on a small incandescent light. The Viking lander transmits radio messages and performs all its other functions with a total power of about 50 watts.

But I am not recollecting visual images all my waking hours, nor am I continuously subjecting people and objects to intense and careful scrutiny. I am doing that perhaps a few percent of the time. My other information channels - auditory, tactile, olfactory and gustatory - are involved with much lower transfer rates. I conclude that the average rate of data processing by my brain is about 5,000/50 = 100 bits per second. Over sixty years, that corresponds to 2 × 10¹¹ or 200 billion total bits committed to visual and other memory if I have perfect recall. This is less than, but not unreasonably less than, the number of synapses or neural connections (since the brain has more to do than just remember) and suggests that neurons are indeed the main switching elements in brain function.
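The lifetime total follows from the same back-of-the-envelope arithmetic (an illustrative sketch; the 1/50 duty cycle is the text’s implied ratio of 100 to 5,000 bits per second):

```python
peak_rate = 5_000              # bits per second, from the visual estimate
average_rate = peak_rate / 50  # ~100 bits/s averaged over waking life

seconds_per_year = 3.15e7      # about 3.15 x 10^7 seconds in a year
years = 60
lifetime_bits = average_rate * years * seconds_per_year
print(f"{lifetime_bits:.1e} bits over sixty years")  # ~2 x 10^11 bits
```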

A remarkable series of experiments on brain changes during learning has been performed by the American psychologist Mark Rosenzweig and his colleagues at the University of California at Berkeley. They maintained two different populations of laboratory rats - one in a dull, repetitive, impoverished environment; the other in a variegated, lively, enriched environment. The latter group displayed a striking increase in the mass and thickness of the cerebral cortex, as well as accompanying changes in brain chemistry.


These increases occurred in mature as well as in young animals. Such experiments demonstrate that physiological changes accompany intellectual experience and show how plasticity can be controlled anatomically. Since a more massive cerebral cortex may make future learning easier, the importance of enriched environments in childhood is clearly drawn.

This would mean that new learning corresponds to the generation of new synapses or the activation of moribund old ones, and some preliminary evidence consistent with this view has been obtained by the American neuroanatomist William Greenough of the University of Illinois and his coworkers. They have found that after several weeks of learning new tasks in laboratory contexts, rats develop the kind of new neural branches in their cortices that form synapses. Other rats, handled similarly but given no comparable education, exhibit no such neuroanatomical novelties.


The construction of new synapses requires the synthesis of protein and RNA molecules. There is a great deal of evidence showing that these molecules are produced in the brain during learning, and some scientists have suggested that the learning is contained within brain proteins or RNA. But it seems more likely that the new information is contained in the neurons, which are in turn constructed of proteins and RNA.

How densely packed is the information stored in the brain? A typical information density during the operation of a modern computer is about a million bits per cubic centimeter. This is the total information content of the computer, divided by its volume. The human brain contains, as we have said, about 10¹³ bits in a little more than 10³ cubic centimeters, for an information content of 10¹³/10³ = 10¹⁰, about ten billion bits per cubic centimeter; the brain is therefore ten thousand times more densely packed with information than is a computer, although the computer is much larger.


Put another way, a modern computer able to process the information in the human brain would have to be about ten thousand times larger in volume than the human brain. On the other hand, modern electronic computers are capable of processing information at a rate of 10¹⁶ to 10¹⁷ bits per second, compared to a peak rate ten billion times slower in the brain. The brain must be extraordinarily cleverly packaged and “wired,” with such a small total information content and so low a processing rate, to be able to do so many significant tasks so much better than the best computer.
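The density comparison reduces to two divisions (an illustrative sketch; the brain volume is taken as 10³ cubic centimeters, the round figure used in the text):

```python
brain_bits = 10**13
brain_volume_cm3 = 10**3      # "a little more than 10^3 cubic centimeters"
brain_density = brain_bits // brain_volume_cm3   # 10^10 bits per cm^3

computer_density = 10**6      # bits per cm^3 for the computers described
print(brain_density // computer_density)         # ten-thousandfold advantage
```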

The number of neurons in an animal brain does not double as the brain volume itself doubles. It increases more slowly. A human brain with a volume of about 1,375 cubic centimeters contains, as we have said, apart from the cerebellum about ten billion neurons and some ten trillion bits. In a laboratory at the National Institute of Mental Health near Bethesda, Maryland, I recently held in my hand a rabbit brain. It had a volume of perhaps thirty cubic centimeters, the size of an average radish, corresponding to a few hundred million neurons and some hundred billion bits - which controlled, among other things, the munching of lettuce, the twitchings of noses, and the sexual dalliances of grownup rabbits.

Since animal taxa such as mammals, reptiles or amphibians contain members with very different brain sizes, we cannot give a reliable estimate of the number of neurons in the brain of a typical representative of each taxon. But we can estimate average values, as I have done in the chart on page 26. The rough estimates there show that a human being has about a hundred times more bits of information in his brain than a rabbit does. I do not know that it means very much to say that a human being is a hundred times smarter than a rabbit, but I am not certain that it is a ridiculous contention. (It does not, of course, follow that a hundred rabbits are as smart as one human being.)
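The hundredfold comparison is just the ratio of the two information contents given earlier (illustrative, using the round figures from the text):

```python
human_bits = 10**13    # "some ten trillion bits"
rabbit_bits = 10**11   # "some hundred billion bits"
print(human_bits // rabbit_bits)   # the human/rabbit ratio, 100
```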

We are now in a position to compare the gradual increase through evolutionary time of both the amount of information contained in the genetic material and the amount of information contained in the brains of organisms. The two curves cross (p. 26) at a time corresponding to a few hundred million years ago and at an information content corresponding to a few billion bits.


Somewhere in the steaming jungles of the Carboniferous Period there emerged an organism that for the first time in the history of the world had more information in its brains than in its genes. It was an early reptile which, were we to come upon it in these sophisticated times, we would probably not describe as exceptionally intelligent. But its brain was a symbolic turning point in the history of life. The two subsequent bursts of brain evolution, accompanying the emergence of mammals and the advent of manlike primates, were still more important advances in the evolution of intelligence.


Much of the history of life since the Carboniferous Period can be described as the gradual (and certainly incomplete) dominance of brains over genes.

