Sunday, July 4, 2010

Harry Harlow and the Science of Affection

Love at Goon Park: Harry Harlow and the Science of Affection by Deborah Blum. Perseus,

The News & Observer

March 9, 2003

Monkey Love

By Phillip Manning

Infants need cuddling, comforting, and a warm body to nestle against. Most modern parents know this from countless books and magazine articles emphasizing that parents should embrace their infants and bond with them. What most modern parents don't know, however, is that this advice is diametrically opposed to the counsel doctors were giving mom and dad less than a century ago. In his wildly popular book "The Care and Feeding of Children," published in 15 editions between 1894 and 1935, Dr. Luther Holt warned parents about the "vicious practice" of rocking a child in a cradle or picking her up when she cried. He also opposed hugging older children because too much affection would soften their moral fiber.

How could child-rearing practices change so dramatically? In her well-researched and eminently readable book, "Love at Goon Park," Pulitzer Prize-winning journalist Deborah Blum answers that question by telling the fascinating story of Harry Harlow, the psychologist whose research rewrote the rules of child care. Harry Harlow did not discover what children need by watching his own. In fact, Harlow was a hard-drinking, possibly alcoholic, workaholic who ignored his two sons so completely that his first wife divorced him. After his boys were grown, he reacquainted himself with them, but his younger son said that although they got along, "we were never father and son." Harlow's insights about child rearing were not based on studying children at all; they came out of his research with monkeys.

Harlow began studying monkeys because of a misunderstanding. After getting a Ph.D. from Stanford in 1930, he landed a teaching job in the psychology department at the University of Wisconsin in Madison. In those days behavioral research was done with rats, and Harlow planned to continue the work with them that he had started as a graduate student. Unfortunately, the rat lab at Wisconsin had been torn down before he arrived. There were no plans to replace it. "He was stranded," Blum writes. "He was an experimental psychologist with no way to conduct experiments; an animal psychologist without rats [was like] an astronomer without a telescope." Harlow tried working with cats and frogs - he "flashed lights. He rang bells. He applied mild shocks to the frogs' legs." He concluded that they were easier to catch than to teach. Harlow began to watch the animals at the local zoo. He soon decided that monkeys were the ticket.

Harlow and his students threw together a primate lab in a deserted box factory nicknamed Goon Park (because the address 600 N. Park looked like GOON Park to some imaginative students) and began to study monkeys. He followed the child-care practices of the day meticulously, tucking the baby monkeys away in clean, neat nurseries just hours after their birth. The little monkeys thrived physically, but according to Blum they "seemed dumbfounded by loneliness. They would sit and rock, stare into space, suck their thumbs." When the monkeys were brought together, they didn't know what to do. A normal monkey's life is built around interaction with a larger society. The monkeys Harlow raised simply sat alone and stared at the floor.

Harlow and his students knew something was wrong. Was it the formula they fed them, the light cycle, the antibiotics? They found a reference describing how baby monkeys cling desperately to soft blankets. This led them to run a now-famous experiment. They made a mother; in fact, they made two. One was a terry-cloth-covered doll, known as "cloth mother"; the other was a wire mom with a bottle attached so the babies could feed. The little monkeys didn't hesitate. They grabbed cloth mother, cuddling, stroking, sleeping next to it. They visited the wire mother to feed, but otherwise they ignored it. The message was clear, Blum writes: "food is sustenance but a good hug is life itself."

The result of this experiment is unsurprising today, but in the 1950s, it sent shock waves through the psychology community. Behaviorism dominated psychology, and Harvard psychologist B.F. Skinner dominated behaviorism. He held that behavior was shaped by reward and punishment. Thus, monkeys should prefer whoever or whatever gave them food. Harlow's findings stood behaviorism on its ear. The monkeys preferred - in fact, seemed to adore - cloth mother. Harry Harlow had begun to explore the science of love.

His research turned darker. To test the depths of love, he carried out a series of experiments that would be considered cruel by today's animal-rights advocates. He began scaring the monkeys with noisy toys. The frightened infants would fly to cloth mother, clutch it tightly, and hold on for dear life. He removed cloth mother from the cage and watched the babies screech and cry in despair. He put cloth mother in a different room from the baby monkeys, separated by a window covered by a panel that the monkeys could raise. The researchers watched for hours as the babies doggedly raised the panel over and over again just to see the cloth mother. Clearly, infants needed a mother for security, even if that
mother was just a lifeless bundle of cloth.

Harlow's next experiments reflected a deepening gloom in his own life. By now, he was one of the country's best-known psychologists, but he was drinking more, working harder, traveling constantly. And his second wife was dying of breast cancer. Harlow decided to see what would happen to monkeys in a loveless world. He isolated baby monkeys completely for 30 days in enclosed cages; they saw nothing but the hands that fed them. When taken out, they were "enormously disturbed." Two of them refused to eat and starved to death. Those that survived were totally dysfunctional.

Toward the end of those experiments, Harlow was becoming dysfunctional himself. His drinking combined with Parkinson's disease to finally flatten him. Harry Harlow died in 1981 at age 76. But his legacy lives on in modern parenting methods, especially in their emphasis on maternal bonding. Another legacy is the booming animal-rights movement, which published a 95-page document on the evils of maternal-deprivation research after Harlow's death. These outgrowths of Harlow's research seem self-evident today, but as Blum writes, "The answers we call obvious today seem so, in real measure, because Harry Harlow conducted those studies."

The Secret of Life

DNA: The Secret of Life by James D. Watson (with Andrew Berry). Alfred A. Knopf,

The News & Observer

May 5, 2003

The Genes Scene

By Phillip Manning

Half a century ago, James D. Watson and Francis Crick unraveled the structure of a molecule that Crick said contained "the secret of life." That molecule was deoxyribonucleic acid, or DNA, and determining its structure promised to be one of those rare events in science: a discovery that has far-reaching practical consequences (such as the invention of the transistor) and that tells us about ourselves and our world (such as Darwin's theory of evolution). Has the DNA revolution lived up to its billing?

That is the question Watson addresses in "DNA: The Secret of Life," and he is perfectly positioned to answer it. Watson, along with Crick and Maurice Wilkins, won a Nobel Prize in 1962 for discovering the double helix, and he has been a prominent figure in DNA-related science for 50 years. In this richly illustrated book, he describes the tremendous progress achieved by molecular biologists during that time.

He starts with the discovery itself, the structure of DNA, the Holy Grail of 1950s biology, which enabled scientists to understand how genetic information is stored and replicated. This understanding has profoundly transformed our world: genetically modified foods are on our tables; DNA "fingerprinting" is the gold standard for identifying criminals and exonerating the innocent; and the genes causing many inherited diseases - such as Huntington's disease and Tay-Sachs - have been identified. In case after case, Watson explains the science that made these advances possible and enlivens his message with tales about the scientists who participated in the revolution.

Take, for example, the chapter on the early days of recombinant DNA - the process that allows scientists to isolate and copy genes. Watson explains clearly the science behind the process, which he calls "cutting, pasting, and copying." Restriction enzymes cut a strand of DNA, isolating a desired gene. Ligase then glues the ends of the gene together, forming a circular strand of DNA called a plasmid. The plasmid is inserted into a bacterium, which then goes about its usual business of reproducing itself and the plasmid. Thus, a single DNA molecule can produce enormous quantities of a gene. And since genes make proteins, the workhorses of all cells, scientists could potentially clone any amount of any protein they desired.
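
For the technically curious, the "cutting" step can be made concrete with a toy sketch. This is my own illustration, not anything from the book: the sequence is invented, and EcoRI is chosen simply as a well-known restriction enzyme that recognizes GAATTC and cuts after the G.

```python
# Toy illustration of restriction-enzyme "cutting": split a DNA string
# at every EcoRI recognition site. Purely a sketch; real digestion acts
# on double-stranded DNA and leaves "sticky ends" on both strands.
RECOGNITION_SITE = "GAATTC"
CUT_OFFSET = 1  # EcoRI cuts between the G and the first A

def digest(sequence):
    """Return the fragments produced by cutting at every EcoRI site."""
    fragments = []
    remaining = sequence
    while True:
        site = remaining.find(RECOGNITION_SITE)
        if site == -1:
            fragments.append(remaining)
            return fragments
        cut_point = site + CUT_OFFSET
        fragments.append(remaining[:cut_point])
        remaining = remaining[cut_point:]

# A made-up stretch of DNA containing two EcoRI sites.
print(digest("ATTCGGAATTCTTAGCCGAATTCAAT"))
# -> ['ATTCGG', 'AATTCTTAGCCG', 'AATTCAAT']
```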

That Herb Boyer and Stanley Cohen worked out this cloning technique is a matter of record. But who besides Watson would know that they thrashed out the details in a Waikiki deli? Or that Boyer was so enamored with DNA that he named his Siamese cats Crick and Watson? Later, Boyer and another partner plunked down $500 and started the first biotech company. They named it Genentech, and its first product was human insulin. This development was a godsend to 8 million diabetics in the United States who previously had to control their disease with pig or cow insulin, which can cause allergic reactions. This is exciting stuff: a beneficial and practical use of technology that came directly out of scientists' newfound understanding of the double helix.

Another consequence of the DNA revolution was the sequencing of the human genome. The genome provides us with a portrait of our species; all that we are derives from the order of the 3.1 billion base pairs in the DNA that resides in almost every cell in our bodies. DNA analysis can also identify our closest relatives. Mary-Claire King and Allan Wilson compared the human genome with that of the chimpanzee and showed that the DNA sequences between the two species differ by a mere 1 percent. DNA sequencing can also provide historical insights. Another analysis led molecular biologists to conclude that the human lineage separated from the great apes about 5 million years ago, overturning paleontologists' long-held belief that the split occurred 25 million
years ago.
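
As a rough illustration of what a figure like "a mere 1 percent" means in practice, here is a toy sketch of my own, with invented sequences: count the positions at which two aligned DNA sequences differ and report the percentage. Real genome comparisons must also handle alignment, insertions, and deletions, which this ignores.

```python
# Toy percent-difference calculation between two aligned DNA sequences.
def percent_difference(seq_a, seq_b):
    """Percent of aligned positions at which the two sequences differ."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    mismatches = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return 100.0 * mismatches / len(seq_a)

# Invented 30-base stretches differing at a single position.
human = "ATGGTGCACCTGACTCCTGAGGAGAAGTCT"
chimp = "ATGGTGCACCTGACTCCTGAGGAGAAATCT"
print(f"{percent_difference(human, chimp):.1f}% difference")  # 3.3% difference
```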

But molecular biology sheds light on questions that go back much further than a few million years. In fact, DNA technology has raised questions about - and possibly found answers to - the origin of life itself. Most DNA is found in the nucleus of a cell. Soon after Watson and Crick determined the structure of DNA, scientists found that although DNA provided the template that governed the life of the cell, its cousin ribonucleic acid (RNA) had the crucial role of ferrying DNA's instructions out into the body of the cell where proteins are assembled. But why do we need RNA? Why doesn't the cell simply make proteins in the nucleus? Francis Crick believes he has the answer: life (or at least genetic replication) started with RNA. DNA, which is a more stable molecule and better for long-term storage of genetic information, came later. This neatly answers the question of why modern cells depend on RNA for vital functions: it was there first, and natural selection put it to good use.

Watson's book, however, is more than a superb history of 50 years of DNA. Watson is the rare combination of a good writer and a good scientist. His first book, "The Double Helix," was a lively first-hand account of the discovery of the structure of DNA. It was a best seller, despite the objections of feminists, who thought his portrayal of Maurice Wilkins's coworker Rosalind Franklin was unflattering and unfair. The tone of this book is more subdued, perhaps because Watson is older now and was assisted by a coauthor, Andrew Berry. But neither age nor coauthor can tame Watson completely, and flashes of the brash, outspoken 25-year-old shine through.

Consider his views on genetically modified (GM) foods, which are made from crops that carry a gene inserted in the plant's DNA. One famous example is Bt corn, in which scientists have introduced a gene that produces a toxin that kills insects, eliminating the need for pesticides. Although the taste of Bt corn is indistinguishable from that of ordinary corn, the crop scared consumers, especially in Europe, where it was labeled "Frankenfood." Even Prince Charles weighed in on the issue, pronouncing that "this kind of genetic modification takes mankind into realms that belong to God."

But Watson will have none of this princely nonsense. "It is nothing less than an absurdity," he writes, "to deprive ourselves of the benefits of GM foods by demonizing them; it is nothing less than a crime to be governed by the irrational suppositions of Prince Charles and others." Indeed, the greatest benefit of the DNA revolution may not be a material one. Like all great scientific advances, it is helping us beat back the shadows of superstition with knowledge.


War and the Fate of Industrial Societies

The Party’s Over: Oil, War and the Fate of Industrial Societies by Richard Heinberg. New Society,

The News & Observer

August 10, 2003

Bleak View of Our Energy Future

By Phillip Manning

If a pessimist sees the glass as half empty, then Richard Heinberg sees it as bone dry and dirty to boot. Among other catastrophes, he predicts “that the global industrial system will probably collapse … within the next few decades.” He foresees “a century of impending famine, disease, economic collapse, despotism, and resource wars.” Furthermore, the human population of the planet will have to drop to 2 billion, which “poses a serious problem, since there are currently over six billion of us.”

What’s precipitating all this gloom? Heinberg believes that world oil production will peak soon, causing nations to scramble madly for diminishing amounts of the precious “black gold” that fuels industrial civilization. In his new book “The Party’s Over,” Heinberg, an author and educator from California, offers equal measures of hard science and apocalyptic gloom. Though his speculations about the future seem exaggerated, there is little doubt that significant changes, long unaddressed, are coming.

Heinberg’s timetable for the world oil-production peak is based on the work of several respected geologists, beginning with M. King Hubbert. In 1956, Hubbert used a curve-fitting technique to predict that the flow of U.S. oil would begin to decline between 1966 and 1972. Such predictions had been made before and proved false. But Hubbert turned out to be right; American oil production started to drop in 1970.
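
Readers who wonder what “a curve-fitting technique” looks like in practice may find a sketch helpful. The following is my own illustration, not Hubbert’s or Heinberg’s calculation: it fits a Hubbert-style curve (the derivative of a logistic depletion curve) to invented annual production figures and reads off the implied peak year.

```python
# Minimal sketch of Hubbert-style curve fitting on made-up data.
import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, q_total, steepness, t_peak):
    """Annual production: the derivative of a logistic depletion curve."""
    x = np.exp(-steepness * (t - t_peak))
    return q_total * steepness * x / (1.0 + x) ** 2

# Hypothetical production history (billions of barrels per year).
rng = np.random.default_rng(0)
years = np.arange(1900, 1971)
observed = hubbert(years, 200.0, 0.07, 1975.0) + rng.normal(0.0, 0.05, years.size)

# Fit the curve to the "observed" data and report the implied peak year.
params, _ = curve_fit(hubbert, years, observed, p0=(150.0, 0.05, 1980.0))
print(f"Estimated ultimate recovery: {params[0]:.0f} billion barrels")
print(f"Predicted peak year: {params[2]:.0f}")
```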

Since then, geologists have refined Hubbert’s technique and applied it to world oil production. Their conclusions are amazingly consistent. Colin Campbell, an Oxford-trained geologist with many years of oil-exploration experience, writes that “the decline will begin before 2010.” Kenneth Deffeyes of Princeton University predicts it will happen in 2003 or 2004. “Close to 2010,” predicts another geologist quoted by Heinberg. Yet another says that world oil production will peak in 2006.

Of course, some experts disagree. Heinberg presents their arguments — and then demolishes them. Chief among the Pollyannas is Bjorn Lomborg, the author of “The Skeptical Environmentalist.” Lomborg claims oil reserves are growing, that technological advances are allowing us to extract more oil from existing wells, and that substitutes for oil will be found before the wells run dry. Heinberg easily refutes two of his arguments. Since 1960, he writes, new oil discoveries have declined, and although technology allows us to extract more oil from a well than ever before, we are nearing the point where it will take more energy to get the last dregs of oil than the pumped-out crude provides.

Heinberg then attacks Lomborg’s conclusions about substitutes. One by one, he reviews the alternatives: natural gas (difficult to ship, and production may peak soon); coal (abundant but polluting, with a low net-energy yield); nuclear power (expensive and unsafe); energy conservation (crucial but not a panacea); wind power, solar power, geothermal wells, and other potential sources of energy. All meet the same dismal fate: they simply can’t replace oil. This analysis leads Heinberg to some depressing conclusions.

“Over the long term,” he writes, “the prospect of maintaining the coherence of large nation states like the US … appear dim.” As the supply of oil declines, one possibility, according to Heinberg, is that the world powers will cooperate with one another to share the diminishing supplies of energy more equitably. Each nation would encourage its citizens to conserve energy and voluntarily reduce family size. But a more likely possibility is that a few “rogue states” would attempt to grab an increasingly large share of the dwindling energy resources. These are nations “that tend to disregard international laws and treaties at will. Foremost among these are the US and to a lesser degree China.” The result: “If all-out competition is pursued … the result could be the destruction of not just industrial civilization but of humanity and most of the biosphere.”

Heinberg’s view is a bleak one. However, readers should consider two points before running to the gun shop to buy AK-47s to protect themselves in the wars for oil that Heinberg envisions. First, oil production probably will peak in the next decade. But that doesn’t mean that the world is out of oil; it simply means that year-to-year production will decline or hold steady. At some point, though, demand will exceed supply, and prices will rise. Second, Heinberg’s gloomy speculation about what happens then overlooks an important point: while it may be true that no other single source of energy can replace oil, together the alternatives might be able to make up the shortfall.

Solar power, for example, which now costs more than cheap oil, would become more attractive for home heating and electric power generation as oil prices increase. Natural gas, while currently not as transportable as oil, could run our cars. Nuclear power could become more feasible with stringent regulations and a secure repository for storing wastes. Conservation, while no panacea, could reduce demand for energy and moderate the economic impact of higher prices. Each alternative source of energy could replace some of the energy lost because of a diminished oil supply. Furthermore, higher prices for oil would accelerate development of improved technology, making the alternatives more attractive.

Philosophically, Heinberg’s view of the future is a pessimistic one, but history has shown that we humans are a resilient species when faced with serious problems. We have bounced back from plagues, famines, and Ice Ages. We have survived mutual assured destruction, political unrest, and world wars. If we can handle those things, my guess is we can muddle our way through a peak in world oil production. On the other hand, it never hurts to prepare. So, I agree with Heinberg on the need for more conservation and more research. The spur for thinking ahead, though, need not be turgid predictions of disaster. A more carefully reasoned appeal would work just as well.

What Makes Us Human

Nature via Nurture: Genes, Experience, & What Makes Us Human by Matt Ridley. HarperCollins,

The News & Observer


December 14, 2003

Nature or Nurture or Both?


By Phillip Manning

What makes us the way we are? Why does Sally get all A’s while little Susie is lucky to get a C? Why are some people depressed while others see only the sunny side? Why are so many of us deathly frightened of snakes and spiders? Were we born that way or are we products of our environment? Or both?

This is the essence of the nature vs. nurture debate, a 300-year-old argument over questions that go to the heart of human existence. Science has made strides these past three centuries in exploring this question. However, we are still a long way from a final answer, an unsettling fact for modern societies that expect science to unravel the world’s
mysteries. In response to these demands, scientists offer data,
hypotheses, and informed opinion before their findings are mature enough to support solid conclusions. There is nothing wrong with this; it’s how science advances. Hypotheses are tested against new data. Winning hypotheses become theories; winning theories become laws.

It is in this provisional spirit that British science writer Matt Ridley, author of the best seller “Genome,” offers a new way of looking at the nature vs. nurture debate. In “Nature via Nurture,” Ridley asserts that the debate is framed incorrectly because it pits the two against one another. Nature and nurture, he claims, are not independent but symbiotic. “Genes,” he writes, “are not puppet masters pulling the strings of your behavior but puppets at the mercy of your behavior.” His book aims to substantiate that hypothesis, a goal that is only partially realized because the evidence Ridley marshals to support it is slim and occasionally inapposite.

First, a caveat. We do know that some human attributes are determined entirely by genetics (nature) and some entirely by circumstance (nurture). If you inherit a specific mutant gene, you will get Huntington’s disease no matter how well you look after yourself, no matter what medicines you take. However, no one is genetically programmed to learn a certain language, say, Russian rather than Japanese. All humans have the knack for syntax, but you learn your native tongue entirely through nurture, by listening to people use that language. Ridley is less interested in these certainties than in the broad middle ground where genes and the environment interact in complicated feedback mechanisms that play a major role in determining who and what we are.

“Genes,” he writes, “are designed to take their cues from nurture.” This is undeniably true. Consider phobias. Most of us quickly learn to fear snakes and spiders, both of which were threats to our Stone Age ancestors. Experience with creepy crawlies over millions of years has wired our brains in ways that make it easy for us to learn to avoid them. And genes created the wiring. This is eminently sensible; natural selection via snake bites would weed out people who didn’t quickly learn to fear snakes. But given that the appearance of every attribute is governed by the rules of natural selection, one could argue that all genes are ultimately attributable to environment.

More important than asking how we got our genes is asking how our behavior causes them to be expressed or suppressed. Ridley gets at this issue with an example involving IQ scores. Studies of adopted siblings indicate that the genetic component of IQ scores rises from 20 percent in infancy (when nurture is critically important) to as much as 80 percent for people beyond middle age. Ridley believes that the change comes not because of innate differences in intelligence but because genes steer some people toward intellectual pursuits and others toward, say, athletics. “Genes,” he writes, “are the agents of nurture.”

This conclusion is speculative. Nobody has ever identified genes that incline one toward intellectual pursuits. But sometimes speculation is all science can provide, and Ridley’s interpretation seems sound. Environment clearly plays a part in determining IQ scores as evidenced by its dominant role in infants. Thus, it is reasonable to assume that the environment would continue to affect the IQ scores of adults. This indicates that people who score high on IQ tests do so because they choose an intellectual environment that produces good scores. And since the scores of adults largely depend on a genetic component, it is possible that their genes cause them to prefer that environment. If so, genes would indeed be the agents of nurture.
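
For readers curious where percentages like these come from, here is a back-of-the-envelope sketch using Falconer’s classic twin-study estimator, in which heritability is roughly twice the gap between identical-twin and fraternal-twin correlations. (This is a different design from the adoption studies Ridley cites, and the correlations below are invented for illustration.)

```python
# Toy heritability estimate via Falconer's formula: h^2 = 2 * (r_mz - r_dz).
def falconer_heritability(r_identical, r_fraternal):
    """Estimate the share of trait variance attributable to genes."""
    return 2.0 * (r_identical - r_fraternal)

# Invented IQ-score correlations at two ages, chosen to echo the
# roughly 20 percent and 80 percent figures discussed above.
print(round(falconer_heritability(r_identical=0.45, r_fraternal=0.35), 2))  # 0.2
print(round(falconer_heritability(r_identical=0.85, r_fraternal=0.45), 2))  # 0.8
```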

Ridley peppers his book with other scientific studies in an attempt to show how genes interact with environment to govern human behavior. Some of these are not persuasive. For instance, the discussion of how genes affect personality doesn’t do much to advance his principal argument. A combination of two genes partially accounts for the incidence of depression among adults. But this example only shows how genes can affect human behavior, and it has only a tenuous connection with the concept of nature via nurture.

Dead ends like this one arise because the idea Ridley is pursuing is new, and the science to prove his points is not fully developed. In fact, science cannot account for the behavior of most people most of the time. Genes, the environment, and the interaction of the two are certainly involved, but in most cases, the data is too skimpy to tell us how. Ridley’s writing style — witty, breezy, anecdotal, and entertaining — exacerbates the confusion. This style worked beautifully for his previous book “Genome,” in which Ridley played the well-informed voyeur cruising through the chromosomes searching for interesting genes. However, this book is more ambitious, an attempt to synthesize a new way of looking at how we humans operate. And though his style makes “Nature via Nurture” an easy read, it often obscures the science that supports his central thesis.

Nonetheless, Ridley may be on to something important, something that will help us understand why we are the way we are. He has thrown out a broad hypothesis that should stimulate scientists, writers, and readers to think long and critically about an important issue.

A Scientist Presents Evidence for Belief

The Language of God: A Scientist Presents Evidence for Belief by Francis S. Collins.

The News & Observer

August 27, 2006

Discovering God

By Phillip Manning

Francis Crick, Richard Dawkins, and Charles Darwin are all famous biologists. They all became atheists, whose beliefs, or lack thereof, were molded by their profession. Now comes another famous biologist publicly professing his views on religion. Francis Collins made his name leading the government’s effort to map the human genome. He too was an atheist. But in “The Language of God” he tells how he forswore atheism to become a devout evangelical Christian.


Collins is a missionary. He offers the story of his religious conversion hoping the reader will see the light. If successful, Collins would do more than save souls. He would bridge what may be the central divide in contemporary thought: the chasm between the scientific method, which relies on reproducible observations, and religious belief, whose foundation is faith.


Many scientists agree with the late Stephen Jay Gould — the Harvard paleontologist, essayist, and Jewish agnostic. Gould famously posited that science and religion are two separate nonoverlapping domains, which should be respected but segregated. Collins finds Gould’s separate domains “unsatisfying” and the militant atheism of other scientists revolting. “May it never be so!” Collins exclaims in reaction to the concept of a heartless, Godless universe.


Collins begins his heartfelt tale on the hardscrabble farm where he grew up in Virginia. His parents were freethinking Yale graduates doing a “sixties” thing in the 1940s. They thought their children should learn music and sent them to a local church to sing in the choir. “They made it clear,” he writes, “that it would be a great way to learn music, but that the theology should not be taken too seriously.” Collins drifted into agnosticism. He didn’t know if God existed, and he didn’t much care. That began to change when he entered medical school at UNC-Chapel Hill, where he encountered patients “whose faith provided them with a strong reassurance of ultimate peace.” He began to read arguments for Christianity by C.S. Lewis, the Oxford scholar and Christian intellectual. Slowly, he came to believe in God.


What convinced him was Lewis’s concept of a “Moral Law,” aka “the law of right behavior.” Collins never actually defines the Moral Law, but a major component of it is altruistic behavior, the voice “calling us to help others even if nothing is received in return.” Collins concludes that the Moral Law — which he believes to be contrary to all natural instincts — must come from God.


But what kind of God? Collins rejects the deistic view, which casts God as a remote entity who set the universe in motion then wandered off to do something else. That was not the kind of God Collins wanted. He wanted “a theist God, who desires some kind of relationship with those special creatures called human beings.” Later, while hiking in the Cascade Mountains, he “knelt in the dewy grass as the sun rose and surrendered to Jesus Christ.”


Collins’s spiritual journey from agnostic to committed Christian was over. But this sincere, emotional story takes up little space. Most of the book is devoted to telling us why it makes sense to be a Christian. As his subtitle implies, Collins wants to present evidence that supports his beliefs. He does this by examining a set of hypothetical questions posed by hypothetical unbelievers. But these questions are largely straw men, erected so Collins can knock them down.


“What About All the Harm Done in the Name Of Religion?” is typical of them. Collins admits that terrible acts have been committed in the name of religion — the Crusades, the Inquisition, Islamic jihads, and so on. These are, he argues, the products of fallible human beings, not religion itself. “Would you judge Mozart’s ‘The Magic Flute’ on the basis of a poorly rehearsed performance by fifth-graders?” Collins asks rhetorically. He then points out that atheist regimes in the Soviet Union and China were as brutal as any religious-leaning government.


Collins is at his best and worst when he tackles Christian views on science, especially evolution. He bluntly rejects the position of young Earth creationists who believe the Biblical story that the Earth and all its species were created in six 24-hour days less than 10,000 years ago. About 45 percent of Americans hold these beliefs. “If these [young Earth] claims were actually true,” Collins writes, “it would lead to a complete and irreversible collapse of the sciences. … Young Earth Creationism has reached a point of intellectual bankruptcy, both in its science and in its theology.”


Intelligent Design, a movement based primarily on the perceived failure of evolution to explain life’s exuberant complexity, gets similar dismissive treatment. After a detailed analysis, Collins states that “Intelligent Design remains a fringe activity with little credibility within the mainstream scientific community.”


Not only does Collins side with science in its battles with fundamentalism, he also uses its methods to defend his own faith. And that’s where the difficulties begin. As noted earlier, his religion starts with the Moral Law and its corollary, altruism. Collins believes this law is God’s gift to mankind, the thing that separates us from the other animals. Collins might be on to something if scientists were unable to square altruism and evolution, if they could find no examples of creatures other than humans that exhibit this behavior.


However, many biologists contend that altruistic behavior is a product of evolution, a positive in mate selection, among other things. That is, females select nice guys because they are likely to make good fathers. Collins attempts to trash this argument by pointing out that newly dominant male monkeys sometimes practice infanticide to clear the way for their own offspring. This is clearly not altruistic behavior. But humans have also practiced infanticide at times. In any case, occasional infanticide among monkeys does not preclude altruistic behavior. In fact, many cases of altruism among primates have been documented. In his recent book “Our Inner Ape,” the respected primatologist Frans de Waal observes, “It’s not unusual for apes to care for an injured companion, slowing down if another lags behind, cleaning another’s wounds.”


I have no doubt that there is a Moral Law, but Collins is unconvincing in attributing its existence to God. Ultimately, Collins winds up, like so many other deeply sincere proselytizers, trying to prove what can’t be proven. The most he can offer is “that a belief in God is intensely plausible.”
But plausible ideas are only starting points in science. Their validity must be established by rigorous testing. Collins may be as sure of his faith as he is of the map of the human genome, but the evidence he provides to support his beliefs does not meet scientific standards. He may have leapt across the chasm between science and religion, but his book does not show the rest of us the way.


There is room for God in the minds of many people, but there is no rational apologia for Him. “Faith is believing what you know ain’t so,” wrote Mark Twain more than a century ago. Then, as now, some believe, some don’t. Fortunately, science and religion can coexist peaceably as long as we recall Gould’s admonition to treat both domains with respect.

Health and Survival in a Bacterial World

Good Germs, Bad Germs: Health and Survival in a Bacterial World by Jessica Snyder Sachs.
The News & Observer

November 18, 2007

More food for thought

By Phillip Manning

Only 10 percent of the trillions of cells that make up your body are yours. The rest are bacteria, tiny single-celled microbes that dwell in and on almost every part of you. Most of these bacteria are beneficial, synthesizing vitamins and helping digest your food. As much as 30 percent of the calories you get from some foods come from the actions of bacteria in your gut.
But some bacteria are deadly, as recent headlines about deaths from MRSA (methicillin-resistant Staphylococcus aureus) attest. Fatalities caused by this bacterium are increasing. In 2005, MRSA claimed the lives of 19,000 people in the United States, more than died of AIDS. Furthermore, other bacteria-related ailments — hay fever and irritable bowel disease, for instance — are also on the rise. In her comprehensively documented and well-crafted book, Jessica Snyder Sachs explains what’s behind this bacterial onslaught. The two most likely sources of the increase in bacterial infections seem, in many ways, the most unlikely: improved public sanitation and the widespread use of antibiotics.
The war on germs (the layman’s word for infectious bacteria and other microbes) began in earnest in the middle of the nineteenth century. One of the leaders was Florence Nightingale, who championed the “cleanliness is next to godliness” approach to public health. Nightingale and others had a profound impact on sanitary conditions in Europe and America. The improved sanitation they advocated largely stopped the cycle of waterborne epidemics that began with the crowding of civilization. This revolution in public health nearly doubled average life spans in the United States, from 38 years in 1850 to 66 in 1950.
But the cleanliness revolution had a downside. “Throughout the developed world,” Sachs writes, “allergies, asthma, and other types of inflammatory disorders have gone from virtually unknown to commonplace in modern times.” The reason for this increase was unknown until a Scottish epidemiologist began studying the health and habits of thousands of Britons. “Over the past century,” he concluded, “declining family size ... and higher standards of personal cleanliness have reduced the opportunity for cross infection in young families.” The upsurge in allergies and asthma, he said, was due to the decrease in the infections of childhood, which left the adult immune system poorly trained and prone to overreact. Conversely, people who suffered a lot of runny noses as children were less likely to have them when they grew up.
As the revolution in public sanitation was ending in the 1950s, a new and equally important revolution was beginning: the widespread adoption of antibiotics. The first antibiotics were sulfa drugs. Then came penicillin. Others soon followed. These drugs were spectacularly successful in fighting bacterial diseases such as strep throat, scarlet fever, and staph infections. But by 1955, new strains of bacteria had appeared that resisted treatment. Particularly noxious was a strain of Staphylococcus aureus, the so-called superbug behind today’s outbreak of MRSA. The 1950s strain, Sachs writes, “shrugged off not only penicillin but every antibiotic on the pharmacist’s shelf.”
Scientists countered by developing methicillin, which worked well until the mid-1980s, when methicillin-resistant strains developed. Hospitals then began treating staph-infected patients with vancomycin. To the surprise of no one, a vancomycin-resistant staph infection was reported in 2002.
To combat resistant strains, scientists developed ever more deadly drugs that indiscriminately attack the body’s bacteria. These “big guns” were especially useful for doctors in critical-care situations when there was no time to run tests to determine which germ was causing the illness. Heavily promoted by big pharma, broad-spectrum antibiotics quickly became physicians’ first line of defense against infections.
“But all this convenience had a dark side,” Sachs writes. “The scattergun attack of a broad-spectrum antibiotic razes not only the disease-causing organism that is its intended target but also the body’s ... protective and otherwise beneficial microflora.” When powerful antibiotics eliminate beneficial bacteria, it allows less friendly strains to flourish. One study showed that the longer a hospital patient took antibiotics, the greater their risk of acquiring a new infection.
What can be done to solve the problems created by improved public sanitation and overuse of broad-spectrum antibiotics? A return to the good old days of dirty water and no antibiotics is clearly out of the question. Thanks in large part to clean water and antibiotics, a person born in the United States can now expect to live to the ripe old age of 78. No, going backward is not the answer. But the problems of allergies and antibiotic resistance are real, and Sachs trots out some possible solutions.
Some approaches, such as a “dirt vaccine” that aims to stimulate the immune system, have produced mixed results at best. But others, like probiotic techniques (in which a friendly strain of bacteria is used to inhibit the growth of less friendly strains), have promise and some demonstrated successes.
Sachs aims her strongest, and most practical, recommendations at the twin problems caused by overexposure to broad-spectrum antibiotics: drug resistance and the inhibition of beneficial bacteria. Overuse of these antibiotics wipes out all but a few bacteria. The survivors have some form of resistance, and with all the other bacteria gone, the resistant strains flourish. Reducing antibiotic usage kills fewer good bacteria, increasing the competition that holds the resistant bugs in check. Such a reduction is possible, Sachs contends. “Unnecessary prescriptions still account for around one-third of all the antibiotics we take.” We also stay on antibiotics too long. Recent data indicate that courses shorter than the recommended length are just as effective for many infections.
Sachs also advocates using less disruptive drugs. When presented with an infected patient, many doctors reach for the biggest gun in their arsenal, usually a broad-spectrum antibiotic. This “shoot first, aim later” approach is easier for doctors, but hard on the body’s beneficial bacteria. Sachs encourages doctors to use more targeted drugs, antibiotics that just go after the bad guys.
The commonsense remedies offered by Sachs are not new. The problem is getting them adopted. As one doctor put it, “it’s just easier to prescribe the broad-spectrum and not have to worry about follow-up.” However, Sachs believes that a gentler approach is gaining acceptance in which the treatment of bacterial disease is “less a war on an invisible enemy than a restoration of balance.” After all, she points out, this is “and always will be, a bacterial world.”