Monday, July 5, 2010

Environment and Nature

Environment
From the Indonesian Wikipedia, the free encyclopedia

The environment is the combination of physical conditions, including natural resources such as land, water, solar energy, minerals, and the flora and fauna that live on land and in the ocean, together with institutions, the human creations that include decisions about how to use that physical setting.

The environment consists of abiotic and biotic components. The abiotic components are everything non-living, such as land, air, water, climate, humidity, light, and sound; the biotic components are all living things, such as plants, animals, humans, and micro-organisms (viruses and bacteria).

The science that studies the environment is environmental science, or ecology. Environmental science is a branch of biology.
The concept of environment in Indonesia

In Indonesia, the environment is often referred to as "lingkungan hidup" (the living environment). Act No. 23 of 1997 on Environmental Management, for example, defines the environment as the spatial unity of all things, forces, conditions, and living creatures, including humans and their behavior, that affect the continuity of life and the welfare of humans and other living creatures.
Institutions

Institutionally, the agency that regulates environmental matters in Indonesia is the Ministry of Environment (formerly the Ministry of State for Population and Environment); in the regions and provinces it is Bapedal. The corresponding agency in the United States is the EPA (Environmental Protection Agency).

Philosophia Naturalis

Natural philosophy (from the Latin philosophia naturalis) is the term applied to the study of nature and the physical universe that was dominant before the development of modern science. Natural philosophy is regarded as the precursor of the natural sciences, such as physics.

Historically, science developed out of philosophy, or more specifically out of natural philosophy. At the older universities, the long-established Chairs of Natural Philosophy are now held largely by physics professors. The modern notions of science and scientist date only from the 19th century (Webster's Ninth New Collegiate Dictionary gives 1834 as the origin of the word "scientist"). Before then, the word "science" simply meant knowledge, and the word "scientist" had not yet been coined. Isaac Newton's scientific treatise of 1687 is accordingly known as the Philosophiæ Naturalis Principia Mathematica.

Earth science (also called geoscience) is a term for the collection of scientific branches that study the Earth. These branches use a combination of physics, geography, mathematics, chemistry, and biology to build a quantitative understanding of the Earth's layers.

In conducting their studies, scientists in this field use the scientific method: formulating hypotheses from observations and gathered data about natural phenomena, and then continually testing those hypotheses. In earth science, data play a very important role in testing and forming hypotheses.

Nature, in the broadest sense, is all matter and energy. Nature is the subject of scientific study. In scale, "nature" includes everything from the subatomic to the entire universe: all things animal, plant, and mineral; all natural resources and events (hurricanes, tornadoes, earthquakes); and also the behavior of living animals and the processes associated with inanimate objects.

Natural Sunrise

The Sun, also called Surya (from "Surya", the sun god of Hindu belief), is the nearest star to Earth, at an average distance of 149.68 million kilometers (93,026,724 miles). The Sun and the eight planets so far known and discovered by humans form the Solar System. The Sun is categorized as a small star.

[Illustration: the structure of the Sun]

The Sun is a ball of incandescent gas, and it is not perfectly spherical. It has an equator and poles because of its rotational motion: its equatorial diameter is 864,000 miles, while the polar diameter is 43 miles shorter. The Sun is the largest member of the Solar System; 98% of the Solar System's mass is concentrated in the Sun.

Besides being the center around which the planets circulate, the Sun is also the central source of energy in its neighborhood. The Sun consists of a core and three outer layers: the photosphere, the chromosphere, and the corona. To keep shining, the Sun, a body of hot gas, converts hydrogen into helium through nuclear fusion at a rate of about 600 million tonnes per second, losing about four million tonnes of mass in the process every second.
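A rough check ties these figures together through mass-energy equivalence (the mass-loss rate is the rounded figure above; the solar luminosity of about 3.8 x 10^26 watts and the Earth-Sun distance are standard values):

$$ L = \frac{\Delta m}{\Delta t}\,c^{2} \approx \left(4\times 10^{9}\ \mathrm{kg/s}\right)\left(3\times 10^{8}\ \mathrm{m/s}\right)^{2} \approx 3.6\times 10^{26}\ \mathrm{W} $$

Spreading that output over a sphere with the radius of Earth's orbit gives

$$ \frac{L}{4\pi d^{2}} \approx \frac{3.8\times 10^{26}\ \mathrm{W}}{4\pi \left(1.496\times 10^{11}\ \mathrm{m}\right)^{2}} \approx 1.35\times 10^{3}\ \mathrm{W/m^{2}}, $$

which is close to the solar constant of about 1370 watts per square meter quoted in the next paragraph.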

The Sun is believed to have formed 4.6 billion years ago. Its average density is 1.41 times that of water. The total solar energy reaching Earth, known as the solar constant, equals about 1370 watts per square meter at any given time. The Sun, the center of the Solar System, is a second-generation star: its material formed from the explosions of first-generation stars, in line with scientists' belief that the universe was created in the Big Bang about 14,000 million years ago.

Natural Knowledge


Knowledge is information or understanding that is known or recognized by someone. Knowledge includes, but is not limited to, descriptions, hypotheses, concepts, theories, principles, and procedures that are, to some (Bayesian) probability, true or useful.

In another sense, knowledge is the various phenomena that humans encounter and acquire through observation and reason. Knowledge arises when a person uses his or her mind to recognize an object or event that has never been seen or felt before. For example, when someone tastes a dish that is new to them, they gain knowledge of its appearance, taste, and aroma.

Knowledge that rests on observation and sensory experience is known as empirical, or a posteriori, knowledge. It is obtained by making observations, both empirically and rationally. Empirical knowledge can be developed into descriptive knowledge if one can describe and illustrate all the characteristics, traits, and features of the empirical object. Empirical knowledge can also be gained through repeated personal experience; for example, someone who is often chosen to lead organizations will, by doing so, gain knowledge of organizational management.

In addition to empirical knowledge, some knowledge is obtained through reason alone, a position later known as rationalism. Rationalism emphasizes a priori knowledge and places no emphasis on experience. Mathematical knowledge is an example: the result 1 + 1 = 2 is obtained not through experience or empirical observation but through logical reasoning.

Knowledge of health and illness is a person's experience of being healthy and of being ill, which leads that person to act to overcome illness and to preserve, or even improve, his or her health. Pain causes a person to act, passively or actively, in stages.

One's knowledge is influenced by several factors, including:

* Education

Education "is a process of changing attitudes and ethical behavior as well as a person or group of mature people through teaching and training efforts, then obviously we can kerucutkan an educational vision that is educating people.

* Media

The mass media are media specifically designed to reach a very wide audience. Examples of mass media are television, radio, newspapers, and magazines.

* Exposure to information

Information, according to the Oxford English Dictionary, is "that of which one is apprised or told: intelligence, news". Another dictionary states that information is anything that can be known, while others emphasize information as the transfer of knowledge. The term also has a technical sense: one legislative definition of information technology describes it as the techniques for collecting, preparing, storing, manipulating, publishing, analyzing, and disseminating information for a specific purpose, where the information itself comprises data, text, images, sounds, codes, computer programs, and databases. The definitions differ so widely because information in essence is intangible, whereas the information we meet in everyday life is obtained from data and observations of the world around us and is passed on through communication.


An idea is a term used both popularly and in philosophy, with the general sense of "mental image" or "understanding". Plato in particular was an exponent of thinking about ideas in this way.

Ideas give rise to concepts, which are the basis for all kinds of knowledge, in both science and philosophy.

Nowadays many people regard an idea as intellectual property, like a copyright or a patent.


In biology, a plant is an organism belonging to the kingdom (Regnum) Plantae. The kingdom includes all the familiar organisms known as trees, shrubs, herbs, grasses, ferns, and mosses, as well as a number of green algae. Excluding the green algae, it comprises around 350,000 species of organisms; of that total, 258,650 species are flowering plants and 18,000 are mosses. Almost all members of the plant kingdom are autotrophs, deriving their energy directly from sunlight through photosynthesis. Because green is so dominant a color among members of this kingdom, another name for it is Viridiplantae ("green plants"). Yet another name is Metaphyta.

Natural Observation


Observation is an activity undertaken by an intelligent creature with the intention of perceiving, and then understanding, a phenomenon, drawing on knowledge and ideas already known.

The biological sciences and astronomy have their historical basis in observations made by amateurs.

The person performing an observation is referred to as the observer.

The word phenomenon (from the Greek phainomenon, "what is visible") can mean:

1. a sign or occurrence, such as a natural phenomenon
2. things that are perceived by the senses
3. things mystical or occult
4. facts, realities, events

The adjectival derivative, phenomenal, means "something extraordinary".

Some phenomena that have been encountered:

1. UFO
2. Tunguska
3. Smoketrail

Natural history


Natural history is the scientific study of plants and animals, leaning more toward observational than experimental methods, and published more often in magazines than in academic journals; since the scientific revolution the term has come to sound archaic within the scientific community. A person who is expert in natural history is called a naturalist.

The roots of natural history can be traced back to Aristotle and the other classical philosophers who analyzed the diversity of the natural world. While Europe was in the Middle Ages, natural history was carried forward in the Islamic world by scientists such as al-Jahizh, ad-Dinawari, and Ibn al-Baithar. In the Renaissance, the cataloguing of organisms began to pave the way for taxonomy, culminating in the system developed by Carolus Linnaeus of Sweden.

Sunday, July 4, 2010

Harry Harlow and the Science of Affection

Love at Goon Park: Harry Harlow and the Science of Affection by Deborah Blum. Perseus,

The News & Observer

March 9, 2003

Monkey Love

By Phillip Manning

Infants need cuddling, comforting, and a warm body to nestle against. Most modern parents know this from countless books and magazine articles emphasizing that parents should embrace their infants and bond with them. What most modern parents don't know, however, is that this advice is diametrically opposed to the counsel doctors were giving mom and dad less than a century ago. In his wildly popular book "The Care and Feeding of Children," published in 15 editions between 1894 and 1935, Dr. Luther Holt warned parents about the "vicious practice" of rocking a child in a cradle or picking her up when she cried. He also opposed hugging older children because too much affection would soften their moral fiber.

How could child-rearing practices change so dramatically? In her well-researched and eminently readable book, "Love at Goon Park," Pulitzer Prize-winning journalist Deborah Blum answers that question by telling the fascinating story of Harry Harlow, the psychologist whose research rewrote the rules of child care. Harry Harlow did not discover what children need by watching his own. In fact, Harlow was a hard-drinking, possibly alcoholic, workaholic who ignored his two sons so completely that it led his first wife to divorce him. After his boys were grown, he reacquainted himself with them, but his younger son said that although they got along, "we were never father and son." In fact, Harlow's insights about child rearing were not based on studying children at all but came out of his research with monkeys.

Harlow began studying monkeys because of a misunderstanding. After getting a Ph.D. from Stanford in 1930, he landed a teaching job in the psychology department at the University of Wisconsin in Madison. In those days behavioral research was done with rats, and Harlow planned to continue the work with them that he had started as a graduate student. Unfortunately, the rat lab at Wisconsin had been torn down before he arrived. There were no plans to replace it. "He was stranded," Blum writes. "He was an experimental psychologist with no way to conduct experiments; an animal psychologist without rats [was like] an astronomer without a telescope." Harlow tried working with cats and frogs - he "flashed lights. He rang bells. He applied mild shocks to the frogs' legs." He concluded that they are easier to catch than to teach. Harlow began to watch the animals at the local zoo. He soon decided that monkeys were the ticket.

Harlow and his students threw together a primate lab in a deserted box factory nicknamed Goon Park (because the address 600 N. Park looked like GOON Park to some imaginative students) and began to study monkeys. He followed the child-care practices of the day meticulously, tucking the baby monkeys away in clean, neat nurseries just hours after their birth. The little monkeys thrived physically, but according to Blum they "seemed dumbfounded by loneliness. They would sit and rock, stare into space, suck their thumbs." When the monkeys were brought together, they didn't know what to do. A normal monkey's life is built around interaction with a larger society. The monkeys Harlow raised simply sat alone and stared at the floor.

Harlow and his students knew something was wrong. Was it the formula they fed them, the light cycle, the antibiotics? They found a reference about how baby monkeys cling desperately to soft blankets. This led them to run a now famous experiment. They made a mother; in fact, they made two. One was a terry cloth covered doll, known as "cloth mother"; the other was a wire mom with a bottle attached so the babies could feed. The little monkeys didn't hesitate. They grabbed cloth mother, cuddling, stroking, sleeping next to it. They visited the wire mother to feed, but otherwise they ignored it. The message was clear, writes Blum: "food is sustenance but a good hug is life itself."

The result of this experiment is unsurprising today, but in the 1950s, it sent shock waves through the psychology community. Behaviorism dominated psychology, and Harvard psychologist B.F. Skinner dominated behaviorism. He held that behavior was shaped by reward and punishment. Thus, monkeys should prefer whoever or whatever gave them food. Harlow's findings stood behaviorism on its ear. The monkeys preferred - in fact, seemed to adore - cloth mother. Harry Harlow had begun to explore the science of love.

His research turned darker. To test the depths of love, he carried out a series of experiments that would be considered cruel by today's animal-rights advocates. He began scaring the monkeys with noisy toys. The frightened infants would fly to cloth mother, clutch it tightly, and hold on for dear life. He removed cloth mother from the cage and watched the babies screech and cry in despair. He put cloth mother in a different room from the baby monkeys, separated by a window covered by a panel that the monkeys could raise. The researchers watched for hours as the babies doggedly raised the panel over and over again just to see the cloth mom. Clearly, infants needed a mother for security, even if that mother was just a lifeless bundle of cloth.

His next experiments reflected a deepening gloom in Harlow's own life. By now, he was one of the country's best known psychologists, but he was drinking more, working harder, traveling constantly. And his second wife was dying of breast cancer. Harlow decided to see what would happen to monkeys in a loveless world. He isolated baby monkeys completely for 30 days in enclosed cages; they saw nothing but the hands that fed them. When taken out, they were "enormously disturbed." Two of them refused to eat and starved to death. Those that survived were totally dysfunctional.

Toward the end of those experiments, Harlow was becoming dysfunctional himself. His drinking combined with Parkinson's disease to finally flatten him. Harry Harlow died in 1981 at age 76. But his legacy lives in modern parenting methods, especially on the need for maternal bonding. Another legacy is the booming animal-rights movement, which published a 95-page document on the evils of maternal-deprivation research after Harlow's death. These outgrowths of Harlow's research seem self-evident today, but as Blum writes, "The answers we call obvious today seem so, in real measure, because Harry Harlow conducted those studies."

The Secret of Life

DNA: The Secret of Life by James D. Watson (with Andrew Berry). Alfred A. Knopf,

The News & Observer

May 5, 2003

The Genes Scene

By Phillip Manning

Half a century ago, James D. Watson and Francis Crick unraveled the structure of a molecule that Crick said contained "the secret of life." That molecule was deoxyribonucleic acid or DNA, and determining its structure promised to be one of those rare events in science: a discovery that has far-reaching practical consequences (such as the invention of the transistor) and that tells us about ourselves and our world (such as Darwin's theory of evolution). Has the DNA revolution lived up to its billing?

That is the question Watson addresses in "DNA: The Secret of Life," and he is perfectly positioned to answer it. Watson, along with Crick and Maurice Wilkins, won a Nobel Prize in 1962 for discovering the double helix, and he has been a prominent figure in DNA-related science for 50 years. In this richly illustrated book, he describes the tremendous progress achieved by molecular biologists during that time.

He starts with the discovery itself, the structure of DNA, the Holy Grail of 1950s biology, which enabled scientists to understand how genetic information is stored and replicated. This understanding has profoundly transformed our world: genetically modified foods are on our tables; DNA "fingerprinting" is the gold standard for identifying criminals and exonerating the innocent; and the genes causing many inherited diseases - such as Huntington and Tay-Sachs - have been identified. In case after case, Watson explains the science that made these advances possible and enlivens his message with tales about the scientists who participated in the revolution.

Take, for example, the chapter on the early days of recombinant DNA - the process that allows scientists to isolate and copy genes. Watson explains clearly the science behind the process, which he calls "cutting, pasting, and copying." Restriction enzymes cut a strand of DNA, isolating a desired gene. Ligase then glues the ends of the gene together forming a circular strand of DNA called a plasmid. The plasmid is inserted into a bacterium, which then goes about its usual business of reproducing itself and the plasmid. Thus, a single DNA molecule can produce enormous quantities of a gene. And since genes make proteins, the workhorses of all cells, scientists could potentially clone any amount of any protein they desired.
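To make the "cutting, pasting, and copying" concrete, here is a toy sketch in Python. The gene and plasmid strings are invented for illustration (only the EcoRI recognition sequence is real), and the model ignores sticky ends and the chemistry of ligation:

# Toy model of recombinant DNA: cut with a restriction enzyme,
# paste into a plasmid with ligase, copy inside a bacterium.
# All sequences except the EcoRI site are invented.
ECORI_SITE = "GAATTC"  # EcoRI's actual recognition sequence

def cut(dna, site=ECORI_SITE):
    """Restriction enzyme: split the strand at every occurrence of
    the site (simplified here to a clean cut, no sticky ends)."""
    return dna.split(site)

def ligate_into(plasmid, gene, site=ECORI_SITE):
    """Ligase: open the plasmid at its single site and splice in the
    gene, regenerating the site at both junctions."""
    left, right = plasmid.split(site)
    return left + site + gene + site + right

def copies(generations):
    """The plasmid is duplicated along with the bacterium at every
    cell division, so the count doubles each generation."""
    return 2 ** generations

chromosome = "AAAACCC" + ECORI_SITE + "ATGGTGCACCTG" + ECORI_SITE + "GGGTTTT"
gene = cut(chromosome)[1]          # the fragment between the two sites
recombinant = ligate_into("TTGACA" + ECORI_SITE + "TATAAT", gene)
print(gene)         # ATGGTGCACCTG
print(recombinant)  # TTGACAGAATTCATGGTGCACCTGGAATTCTATAAT
print(copies(30))   # 1073741824

The last line is the arithmetic behind Watson's point: because the plasmid is copied at every division, a single molecule yields roughly a billion copies after 30 generations.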

That Herb Boyer and Stanley Cohen worked out this cloning technique is a matter of record. But who besides Watson would know that they thrashed out the details in a Waikiki deli? Or that Boyer was so enamored with DNA that he named his Siamese cats Crick and Watson? Later Boyer and another partner plunked down $500 and started the first biotech company. They named it Genentech, and its first product was human insulin. This development was a godsend to 8 million diabetics in the United States who previously had to control their disease with pig or cow insulin, which can cause allergic reactions. This is exciting stuff: a beneficial and practical use of technology that came directly out of scientists' newfound understanding of the double helix.

Another consequence of the DNA revolution was the sequencing of the human genome. The genome provides us with a portrait of our species; all that we are derives from the order of the 3.1 billion base pairs in the DNA that resides in almost every cell in our bodies. DNA analysis can also identify our closest relatives. Mary-Claire King and Allan Wilson compared the human genome with that of the chimpanzee and showed that the DNA sequences between the two species differ by a mere 1 percent. DNA sequencing can also provide historical insights. Another analysis led molecular biologists to conclude that the human lineage separated from the great apes about 5 million years ago, overturning paleontologists' long-held belief that the split occurred 25 million years ago.

But molecular biology sheds light on questions that go back much further than a few million years. In fact, DNA technology has raised questions about - and possibly found answers to - the origin of life itself. Most DNA is found in the nucleus of a cell. Soon after Watson and Crick determined the structure of DNA, scientists found that although DNA provided the template that governed the life of the cell, its cousin ribonucleic acid (RNA) had the crucial role of ferrying DNA's instructions out into the body of the cell where proteins are assembled. But why do we need RNA? Why doesn't the cell simply make proteins in the nucleus? Francis Crick believes he has the answer: life (or at least genetic replication) started with RNA. DNA, which is a more stable molecule and better for long-term storage of genetic information, came later. This neatly answers the question of why modern cells depend on RNA for vital functions: it was there first, and natural selection put it to good use.

Watson's book, however, is more than a superb history of 50 years of DNA. Watson is the rare combination of a good writer and good scientist. His first book, "The Double Helix," was a lively, first-hand account of the discovery of the structure of DNA. It was a best seller, despite the objections of feminists, who thought his portrayal of Maurice Wilkins's coworker Rosalind Franklin was unflattering and unfair. The tone of this book is more subdued, perhaps because Watson is older now and was assisted by a coauthor, Andrew Berry. But neither age nor coauthor can tame Watson completely, and flashes of the brash, outspoken 25-year-old shine through.

Consider his views on genetically modified (GM) foods, which are made from crops that carry a gene inserted in the plant's DNA. One famous example is Bt corn, in which scientists have introduced a gene that produces a toxin that kills insects, eliminating the need for pesticides. Although the taste of Bt corn is indistinguishable from ordinary corn, it scared consumers, especially in Europe where it was labeled "Frankenfood." Even Prince Charles weighed in on the issue, pronouncing that "this kind of genetic modification takes mankind into realms that belong to God."

But Watson will have none of this princely nonsense. "It is nothing less than an absurdity," he writes, "to deprive ourselves of the benefits of GM foods by demonizing them; it is nothing less than a crime to be governed by the irrational suppositions of Prince Charles and others." Indeed, the greatest benefit of the DNA revolution may not be its material benefits. Like all great scientific advances, it is helping us beat back the shadows of superstition with knowledge.


War and the Fate of Industrial Societies

The Party’s Over: Oil, War and the Fate of Industrial Societies by Richard Heinberg. New Society,

The News & Observer

August 10, 2003

Bleak View of Our Energy Future

By Phillip Manning

If a pessimist sees the glass as half empty, then Richard Heinberg sees it as bone dry and dirty to boot. Among other catastrophes, he predicts “that the global industrial system will probably collapse … within the next few decades.” He foresees “a century of impending famine, disease, economic collapse, despotism, and resource wars.” Furthermore, the human population of the planet will have to drop to 2 billion, which “poses a serious problem, since there are currently over six billion of us.”

What’s precipitating all this gloom? Heinberg believes that world oil production will peak soon, causing nations to scramble madly for diminishing amounts of the precious “black gold” that fuels industrial civilization. In his new book “The Party’s Over,” Heinberg, an author and educator from California, offers equal measures of hard science and apocalyptic gloom. Though his speculations about the future seem exaggerated, there is little doubt that significant changes, long unaddressed, are coming.

Heinberg’s timetable for the world oil-production peak is based on the work of several respected geologists, beginning with M. King Hubbert. In 1956, Hubbert used a curve-fitting technique to predict that the flow of U.S. oil would begin to decline between 1966 and 1972. Such predictions had been made before and proved false. But Hubbert turned out to be right; American oil production started to drop in 1970.

Since then, geologists have refined Hubbert’s technique and applied it to world oil production. Their conclusions are amazingly consistent. Colin Campbell, an Oxford-trained geologist with many years of oil-exploration experience, writes that “the decline will begin before 2010.” Kenneth Deffeyes of Princeton University predicts it will happen in 2003 or 2004. “Close to 2010” predicts another geologist quoted by Heinberg. Yet another says the world oil production will peak in 2006.
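Hubbert's technique can be sketched numerically: production is modeled as the bell-shaped derivative of a logistic curve, the curve is fitted to the historical production series, and the fitted peak year is the prediction. A minimal sketch, with invented placeholder figures rather than real production data:

# Hubbert-style peak prediction: fit a logistic-derivative curve
#   P(t) = 2*p_max / (1 + cosh(k*(t - t_peak)))
# to yearly production; t_peak is the predicted peak year.
import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, p_max, k, t_peak):
    # Bell-shaped curve: p_max is the peak rate, t_peak the peak year.
    return 2.0 * p_max / (1.0 + np.cosh(k * (t - t_peak)))

# Placeholder data (billions of barrels per year) -- not real figures.
years = np.array([1950, 1960, 1970, 1980, 1990, 2000], dtype=float)
prod = np.array([3.8, 7.7, 16.7, 21.7, 23.7, 27.2])

(p_max, k, t_peak), _ = curve_fit(hubbert, years, prod,
                                  p0=(30.0, 0.05, 2005.0))
print(f"fitted peak year: {t_peak:.0f}, peak rate: {p_max:.1f}")

With a real U.S. or world production series in place of the placeholders, the fitted t_peak is the kind of estimate Hubbert and his successors reported; the prediction is only as good as the assumption that production follows a single symmetric curve.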

Of course, some experts disagree. Heinberg presents their arguments — and then demolishes them. Chief among the Pollyannas is Bjorn Lomborg, the author of “The Skeptical Environmentalist.” Lomborg claims oil reserves are growing, that technological advances are allowing us to extract more oil from existing wells, and that substitutes for oil will be found before the wells run dry. Heinberg easily refutes two of his arguments. Since 1960, he writes, new oil discoveries have declined, and although technology allows us to extract more oil from a well than ever before, we are nearing the point where it will take more energy to get the last dregs of oil than the pumped-out crude provides.

Heinberg then attacks Lomborg’s conclusions about substitutes. One by one, he reviews the alternatives: natural gas (difficult to ship and production may peak soon); coal (abundant but polluting and gives a low net-energy yield); nuclear power (expensive and unsafe); energy conservation (crucial but not a panacea); wind power, solar power, geothermal wells, and other potential sources of energy all meet with the same dismal fate — they simply can’t replace oil. This analysis leads Heinberg to some depressing conclusions.

“Over the long term,” he writes, “the prospect of maintaining the coherence of large nation states like the US … appear dim.” As the supply of oil declines, one possibility, according to Heinberg, is that the world powers cooperate with one another to share more equitably the diminishing supplies of energy. Each nation would encourage its citizens to conserve energy and voluntarily reduce family size. But a more likely possibility is that a few “rogue states” would attempt to grab an increasingly large share of the dwindling energy resources. These are nations “that tend to disregard international laws and treaties at will. Foremost among these are the US and to a lesser degree China.” The result: “If all-out competition is pursued … the result could be the destruction of not just industrial civilization but of humanity and most of the biosphere.”

Heinberg’s view is a bleak one. However, readers should consider two points before running to the gun shop to buy AK-47s to protect themselves in the wars for oil that Heinberg envisions. First, oil production probably will peak in the next decade. But that doesn’t mean that the world is out of oil; it simply means that year-to-year production will decline or hold steady. At some point, though, demand will exceed supply, and prices will rise. But Heinberg’s gloomy speculations of what happens then overlook an important point: while it may be true that no other single source of energy can replace oil, together they might be able to make up the shortfall.

Solar power, for example, which now costs more than cheap oil, would become more attractive for home heating and electric power generation as oil prices increase. Natural gas, while currently not as transportable as oil, could run our cars. Nuclear power could become more feasible with stringent regulations and a secure repository for storing wastes. Conservation, while no panacea, could reduce demand for energy and moderate the economic impact of higher prices. Each alternative source of energy could replace some of the energy lost because of a diminished oil supply. Furthermore, higher prices for oil would accelerate development of improved technology, making the alternatives more attractive.

Philosophically, Heinberg’s view of the future is a pessimistic one, but history has shown that we humans are a resilient species when faced with serious problems. We have bounced back from plagues, famines, and Ice Ages. We have survived mutual assured destruction, political unrest, and world wars. If we can handle those things, my guess is we can muddle our way through a peak in world oil production. On the other hand, it never hurts to prepare. So, I agree with Heinberg on the need for more conservation and more research. The spur for thinking ahead, though, need not be turgid predictions of disaster. A more carefully reasoned appeal would work just as well.

What Makes Us Human

Nature via Nurture: Genes, Experience, & What Makes Us Human by Matt Ridley. HarperCollins,

The News & Observer


December 14, 2003

Nature or Nurture or Both?


By Phillip Manning

What makes us the way we are? Why does Sally get all A’s while little Susie is lucky to get a C? Why are some people depressed while others see only the sunny side? Why are so many of us deathly frightened of snakes and spiders? Were we born that way or are we products of our environment? Or both?

This is the essence of the nature vs. nurture debate, a 300-year-old argument over questions that go to the heart of human existence. Science has made strides these past three centuries in exploring this question. However, we are still a long way from a final answer, an unsettling fact for modern societies that expect science to unravel the world’s mysteries. In response to these demands, scientists offer data, hypotheses, and informed opinion before their findings are mature enough to support solid conclusions. There is nothing wrong with this; it’s how science advances. Hypotheses are tested against new data. Winning hypotheses become theories; winning theories become laws.

It is in this provisional spirit that British science writer Matt Ridley, author of the best seller “Genome,” offers a new way of looking at the nature vs. nurture debate. In “Nature via Nurture,” Ridley asserts that the debate is framed incorrectly because it pits the two against one another. Nature and nurture, he claims, are not independent but symbiotic. “Genes,” he writes, “are not puppet masters pulling the strings of your behavior but puppets at the mercy of your behavior.” His book aims to substantiate that hypothesis, a goal that is only partially realized because the evidence Ridley marshals to support it is slim and occasionally inapposite.

First, a caveat. We do know that some human attributes are determined entirely by genetics (nature) and some entirely by circumstance (nurture). If you inherit a specific mutant gene, you will get Huntington’s disease no matter how well you look after yourself, no matter what medicines you take. However, no one is genetically programmed to learn a certain language, say, Russian rather than Japanese. All humans have the knack for syntax, but you learn your native tongue entirely through nurture, by listening to people use that language. Ridley is less interested in these certainties than in the broad middle ground where genes and the environment interact in complicated feedback mechanisms that play a major role in determining who and what we are.

“Genes,” he writes, “are designed to take their cues from nurture.” This is undeniably true. Consider phobias. Most of us quickly learn to fear snakes and spiders, both of which were threats to our Stone Age ancestors. Experience with creepy crawlies over millions of years has wired our brains in ways that make it easy for us to learn to avoid them. And genes created the wiring. This is eminently sensible; natural selection via snake bites would weed out people who didn’t quickly learn to fear snakes. But given that the appearance of every attribute is governed by the rules of natural selection, one could argue that all genes are ultimately attributable to environment.

More important than asking how we got our genes is how our behavior causes them to be expressed or suppressed. Ridley gets at this issue with an example involving IQ scores. Studies of adopted siblings indicate that the genetic component of IQ scores rises from 20 percent in infancy (when nurture is critically important) to as much as 80 percent for people beyond middle age. Ridley believes that the change comes not because of innate differences in intelligence but because genes steer some people toward intellectual pursuits and others toward, say, athletics. “Genes,” he writes, “are the agents of nurture.”
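The review does not show where such percentages come from; the standard logic compares trait correlations across pairs of known genetic relatedness. In the classic twin-study version, Falconer's formula estimates heritability from the correlations of identical (MZ) and fraternal (DZ) twins:

$$ h^{2} = 2\,(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}) $$

With illustrative correlations of, say, 0.85 for identical and 0.45 for fraternal adult twins, h^2 = 2(0.85 - 0.45) = 0.8, the roughly 80 percent genetic component cited for older ages; adoption studies like those Ridley describes apply the same comparison logic to different pairs of relatives.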

This conclusion is speculative. Nobody has ever identified genes that incline one toward intellectual pursuits. But sometimes speculation is all science can provide, and Ridley’s interpretation seems sound. Environment clearly plays a part in determining IQ scores as evidenced by its dominant role in infants. Thus, it is reasonable to assume that the environment would continue to affect the IQ scores of adults. This indicates that people who score high on IQ tests do so because they choose an intellectual environment that produces good scores. And since the scores of adults largely depend on a genetic component, it is possible that their genes cause them to prefer that environment. If so, genes would indeed be the agents of nurture.

Ridley peppers his book with other scientific studies in an attempt to show how genes interact with environment to govern human behavior. Some of these are not persuasive. For instance, the discussion of how genes affect personality doesn’t do much to advance his principal argument. A combination of two genes partially accounts for the incidence of depression among adults. But this example only shows how genes can affect human behavior, and it has only a tenuous connection with the concept of nature via nurture.

Dead ends like this one arise because the idea Ridley is pursuing is new, and the science to prove his points is not fully developed. In fact, science cannot account for the behavior of most people most of the time. Genes, the environment, and the interaction of the two are certainly involved, but in most cases, the data is too skimpy to tell us how. Ridley’s writing style — witty, breezy, anecdotal, and entertaining — exacerbates the confusion. This style worked beautifully for his previous book “Genome,” in which Ridley played the well-informed voyeur cruising through the chromosomes searching for interesting genes. However, this book is more ambitious, an attempt to synthesize a new way of looking at how we humans operate. And though his style makes “Nature via Nurture” an easy read, it often obscures the science that supports his central thesis.

Nonetheless, Ridley may be on to something important, something that will help us understand why we are the way we are. He has thrown out a broad hypothesis that should stimulate scientists, writers, and readers to think long and critically about an important issue.

A Scientist Presents Evidence for Belief

The Language of God: A Scientist Presents Evidence for Belief by Francis S. Collins.

The News & Observer

August 27, 2006

Discovering God

By PHILLIP MANNING

Francis Crick, Richard Dawkins, and Charles Darwin are all famous biologists. They all became atheists, whose beliefs, or lack thereof, were molded by their profession. Now comes another famous biologist publicly professing his views on religion. Francis Collins made his name leading the government’s effort to map the human genome. He too was an atheist. But in “The Language of God” he tells how he forswore atheism to become a devout evangelical Christian.


Collins is a missionary. He offers the story of his religious conversion hoping the reader will see the light. If successful, Collins would do more than save souls. He would bridge what may be the central divide in contemporary thought: the chasm between the scientific method, which relies on reproducible observations, and religious belief, whose foundation is faith.


Many scientists agree with the late Stephen Jay Gould — the Harvard paleontologist, essayist, and Jewish agnostic. Gould famously posited that science and religion are two separate nonoverlapping domains, which should be respected but segregated. Collins finds Gould’s separate domains “unsatisfying” and the militant atheism of other scientists revolting. “May it never be so!” Collins exclaims in reaction to the concept of a heartless, Godless universe.


Collins begins his heartfelt tale on the hardscrabble farm where he grew up in Virginia. His parents were freethinking Yale graduates doing a “sixties” thing in the 1940s. They thought their children should learn music and sent them to a local church to sing in the choir. “They made it clear,” he writes, “that it would be a great way to learn music, but that the theology should not be taken too seriously.” Collins drifted into agnosticism. He didn’t know if God existed, and he didn’t much care. That began to change when he entered medical school at UNC-Chapel Hill, where he encountered patients “whose faith provided them with a strong reassurance of ultimate peace.” He began to read arguments for Christianity by C.S. Lewis, the Oxford scholar and Christian intellectual. Slowly, he came to believe in God.


What convinced him was Lewis’s concept of a “Moral Law,” aka “the law of right behavior.” Collins never actually defines the Moral Law, but a major component of it is altruistic behavior, the voice “calling us to help others even if nothing is received in return.” Collins concludes that the Moral Law — which he believes to be contrary to all natural instincts — must come from God.


But what kind of God? Collins rejects the deistic view, which casts God as a remote entity who set the universe in motion then wandered off to do something else. That was not the kind of God Collins wanted. He wanted “a theist God, who desires some kind of relationship with those special creatures called human beings.” Later, while hiking in the Cascade Mountains, he “knelt in the dewy grass as the sun rose and surrendered to Jesus Christ.”


Collins’s spiritual journey from agnostic to committed Christian was over. But this sincere, emotional story takes up little space. Most of the book is devoted to telling us why it makes sense to be a Christian. As his subtitle implies, Collins wants to present evidence that supports his beliefs. He does this by examining a set of hypothetical questions posed by a set of hypothetical unbelievers. But these questions are largely straw men, erected so Collins can knock them down.


“What About All the Harm Done in the Name Of Religion?” is typical of them. Collins admits that terrible acts have been committed in the name of religion — the Crusades, the Inquisition, Islamic jihads, and so on. These are, he argues, the products of fallible human beings, not religion itself. “Would you judge Mozart’s ‘The Magic Flute’ on the basis of a poorly rehearsed performance by fifth-graders?” Collins asks rhetorically. He then points out that atheist regimes in the Soviet Union and China were as brutal as any religious-leaning governments.


Collins is at his best and worst when he tackles Christian views on science, especially evolution. He bluntly rejects the position of young Earth creationists who believe the Biblical story that the Earth and all its species were created in six 24-hour days less than 10,000 years ago. About 45 percent of Americans hold these beliefs. “If these [young Earth] claims were actually true,” Collins writes, “it would lead to a complete and irreversible collapse of the sciences. … Young Earth Creationism has reached a point of intellectual bankruptcy, both in its science and in its theology.”


Intelligent Design, a movement based primarily on the perceived failure of evolution to explain life’s exuberant complexity, gets similar dismissive treatment. After a detailed analysis, Collins states that “Intelligent Design remains a fringe activity with little credibility within the mainstream scientific community.”


Not only does Collins side with science in its battles with fundamentalism, he uses its methods to defend his own faith. And that’s where the difficulties begin. As noted earlier, his religion starts with the Moral Law and its corollary, altruism. Collins believes this law is God’s gift to mankind, the thing that separates us from the other animals. Collins might be on to something if scientists were unable to square altruism and evolution, if they could find no examples of creatures other than humans that exhibit this behavior.


However, many biologists contend that altruistic behavior is a product of evolution, a positive in mate selection, among other things. That is, females select nice guys because they are likely to make good fathers. Collins attempts to trash this argument by pointing out that newly dominant male monkeys sometimes practice infanticide to clear the way for their own offspring. This is clearly not altruistic behavior. But humans have also practiced infanticide at times. In any case, occasional infanticide among monkeys does not preclude altruistic behavior. In fact, many cases of altruism among primates have been documented. In a recent book, “Our Inner Ape,” the respected primatologist Frans de Waal observes, “It’s not unusual for apes to care for an injured companion, slowing down if another lags behind, cleaning another’s wounds.”


I have no doubt that there is a Moral Law, but Collins is unconvincing in attributing its existence to God. Ultimately, Collins winds up, like so many other deeply sincere proselytizers, trying to prove what can’t be proven. The most he can offer is “that a belief in God is intensely plausible.” But plausible ideas are only starting points in science. Their validity must be established by rigorous testing. Collins may be as sure of his faith as he is of the map of the human genome, but the evidence he provides to support his beliefs does not meet scientific standards. He may have leapt across the chasm between science and religion, but his book does not show the rest of us the way.


There is room for God in the minds of many people, but there is no rational apologia for Him. “Faith is believing what you know ain’t so,” wrote Mark Twain more than a century ago. Then, as now, some believe, some don’t. Fortunately, science and religion can coexist peaceably as long as we recall Gould’s admonition to treat both domains with respect.

Health and Survival in a Bacterial World

Good Germs, Bad Germs: Health and Survival in a Bacterial World by Jessica Snyder Sachs.
The News & Observer

November 18, 2007

More food for thought

BY PHILLIP MANNING

Only 10 percent of the trillions of cells that make up your body are yours. The rest are bacteria, tiny single-celled microbes that dwell in and on almost every part of you. Most of these bacteria are beneficial, synthesizing vitamins and helping digest your food. As much as 30 percent of the calories you get from some foods comes from the actions of bacteria in your gut.

But some bacteria are deadly, as recent headlines about deaths from MRSA (methicillin-resistant Staphylococcus aureus) attest. Fatalities caused by this bacterium are increasing. In 2005, MRSA claimed the lives of 19,000 people in the United States, more than died of AIDS. Furthermore, other ailments caused by bacterial infections — hay fever and irritable bowel diseases, for instance — are also on the rise. In her comprehensively documented and well-crafted book, Jessica Snyder Sachs explains what’s behind this bacterial onslaught. The two most likely sources of the increase in bacterial infections seem, in many ways, the most unlikely: improved public sanitation and the widespread use of antibiotics.

The war on germs (the layman’s word for infectious bacteria and other microbes) began in earnest in the middle of the nineteenth century. One of the leaders was Florence Nightingale, who championed the “cleanliness is next to godliness” approach to public health. Nightingale and others had a profound impact on sanitary conditions in Europe and America. The improved sanitation they advocated largely stopped the cycle of waterborne epidemics that began with the crowding of civilization. This revolution in public health nearly doubled average life spans in the United States, from 38 years in 1850 to 66 in 1950.

But the cleanliness revolution had a downside. “Throughout the developed world,” Sachs writes, “allergies, asthma, and other types of inflammatory disorders have gone from virtually unknown to commonplace in modern times.” The reason for this increase was unknown until a Scottish epidemiologist began studying the health and habits of thousands of Britons. “Over the past century,” he concluded, “declining family size ... and higher standards of personal cleanliness have reduced the opportunity for cross infection in young families.” The upsurge in allergies and asthma, he said, was due to the decrease in the infections of childhood, which produced a weakened immune system in adults. Conversely, people who suffered a lot of runny noses as children were less likely to have them when they grew up.

As the revolution in public sanitation was ending in the 1950s, a new and equally important revolution was beginning: the widespread adoption of antibiotics. The first antibiotics were sulfa drugs. Then came penicillin. Others soon followed. These drugs were spectacularly successful in fighting bacterial diseases such as strep throat, scarlet fever and staph infections. But by 1955, new strains of bacteria had appeared that resisted treatment. Particularly noxious was a strain of Staphylococcus aureus, the so-called superbug behind today’s outbreak of MRSA. The 1950s strain, Sachs writes, “shrugged off not only penicillin but every antibiotic on the pharmacist’s shelf.”

Scientists countered by developing methicillin, which worked well until the mid-1980s when methicillin-resistant strains developed. Hospitals then began treating staph-infected patients with vancomycin. To the surprise of no one, a vancomycin-resistant staph infection was reported in 2002.

To combat resistant strains, scientists developed ever more deadly drugs that indiscriminately attack the body’s bacteria. These “big guns” were especially useful for doctors in critical-care situations when there was no time to run tests to determine which germ was causing the illness. Heavily promoted by big pharma, broad-spectrum antibiotics quickly became physicians’ first line of defense against infections.

“But all this convenience had a dark side,” Sachs writes. “The scattergun attack of a broad-spectrum antibiotic razes not only the disease-causing organism that is its intended target but also the body’s ... protective and otherwise beneficial microflora.” When powerful antibiotics eliminate beneficial bacteria, it allows less friendly strains to flourish. One study showed that the longer a hospital patient took antibiotics, the greater their risk of acquiring a new infection.

What can be done to solve the problems created by improved public sanitation and overuse of broad-spectrum antibiotics? A return to the good old days of dirty water and no antibiotics is clearly out of the question. Thanks in large part to clean water and antibiotics, a person born in the United States can now expect to live to the ripe old age of 78. No, going backward is not the answer. But the problems of allergies and antibiotic resistance are real, and Sachs trots out some possible solutions.

Some approaches, such as a “dirt vaccine” that aims to stimulate the immune system, have produced mixed results at best. But others, like probiotic techniques (in which a friendly strain of bacteria is used to inhibit the growth of less friendly strains), have promise and some demonstrated successes.

Sachs aims her strongest, and most practical, recommendations at the twin problems caused by overexposure to broad-spectrum antibiotics: drug resistance and the inhibition of beneficial bacteria. Overuse of these antibiotics wipes out all but a few bacteria. The survivors have some form of resistance, and with all the other bacteria gone, the resistant strains flourish. Reducing antibiotic usage kills fewer good bacteria, increasing the competition that holds the resistant bugs in check. Such a reduction is possible, Sachs contends. “Unnecessary prescriptions still account for around one-third of all the antibiotics we take.” We also stay on antibiotics too long. Recent data indicate that courses shorter than the recommended length are just as effective for many infections.

Sachs also advocates using less disruptive drugs. When presented with an infected patient, many doctors reach for the biggest gun in their arsenal, usually a broad-spectrum antibiotic. This “shoot first, aim later” approach is easier for doctors, but hard on the body’s beneficial bacteria. Sachs encourages doctors to use more targeted drugs, antibiotics that just go after the bad guys.

The commonsense remedies offered by Sachs are not new. The problem is getting them adopted. As one doctor put it, “it’s just easier to prescribe the broad-spectrum and not have to worry about follow-up.” However, Sachs believes that a gentler approach is gaining acceptance in which the treatment of bacterial disease is “less a war on an invisible enemy than a restoration of balance.” After all, she points out, this is “and always will be, a bacterial world.”

Saturday, July 3, 2010

NATURAL BLAST ART

Extraordinary Natural Beauty


Indonesia has once again proven its quality as the emerald of the equator. Mount Bromo, located in Probolinggo, East Java, has made it into the top three of the world's best mountains for climbers. That worldwide recognition cannot be denied, for the beauty of Bromo is indeed an extraordinary gift of the Creator.

Mount Bromo even surpasses some other world-famous mountains, such as Mount Fuji in Japan, Mount Sinai in Egypt, and Jebel Toubkal in Morocco.

According to data published on the site, which is headquartered in Russia, the three greatest mountains in the world worth climbing are Mount Elbrus in Russia, Mount Olympus in Greece, and Bromo. The seven other mountains in the site's top 10 of the world's best mountains to climb are Jebel Toubkal in Morocco, the Matterhorn in Switzerland, South Africa's Table Mountain, Ben Nevis in Scotland, Mount Sinai in Egypt, Mount Fuji in Japan, and Half Dome in the United States.

Mount Elbrus, in Russia, is part of the Caucasus range in Europe. It is the highest mountain in Europe, with a height of 5,642 meters. The second most beautiful is Mount Olympus, in Greece, known as the abode of the gods; it is the highest mountain in Greece, reaching an altitude of 2,918 meters.

FOR EARTH


A "Save the Planet" sympathy action: students of the Bandung nature school (Sekolah Alam Bandung, SAB) held the action along the Cikapundung River on Thursday (04/22/2010), picking up the trash thrown along the river as they went. The action began with tree planting in the wild and at the school's site, and ended the day with an Earth campaign, distributing leaflets calling for the earth to be kept safe from destruction, global warming, and environmental pollution.
By: Eko Kurniawan

Nature School


Character Building

Human nature consists of two dimensions, the physical and the spiritual, and both should be touched by the learning process over a human life. If education does not give the two dimensions balanced portions, and in particular neglects the spiritual dimension, the result is a "moral disaster": nothing is left that can be called honesty, caring, responsibility, or mutual respect.

Character building (CB) is a field of study that meets the spiritual needs of every human being. Very few schools, however, offer CB as a field of study in its own right; what character material exists is folded into the material on belief in religious studies.

Agent of Change

Schools must become agents of change. That spirit seems to have faded. Indeed, the idea has taken hold among the public that an excellent school is one whose students are already good and clever, while a "failing" school is one full of slow and naughty students, the cast-off children.

Favorite or high-achieving schools tend not to accept troubled students. They prefer to soak in the "comfort zone" of "the best intake". When the author introduced a new admissions system at one school, with no entrance test and admission limited only by the number of seats available, the unconvinced principal asked, "What happens when we get stupid and naughty students?"

The author replied, "Isn't it precisely for children labeled stupid and bad, and not only the smart ones, that a school should be built? It must be an agent of change!" By applying Multiple Intelligences Research to every student in every year, it turned out there were no stupid students. Each student has his or her own tendencies of intelligence and varying learning styles, and these should be valued.

The Best Process

A consequence of being an agent of change is that the learning process in the school must be the best: learning that enters long-term memory, so that students will not forget it for life. In reality, however, it often happens that once the teacher finishes the lesson, the knowledge that was taught goes missing as well.

The learning process must carry the power of positive emotion: from beginning to end, it should genuinely touch the students' feelings.

The Best Teacher

Consequences of "the best process is the best teacher. This time the question of teacher quality. Some surveys show the quality of teachers in Indonesia are still not quite good.

A good teacher acts as a facilitator. This concept has long been echoed and practiced, but only while it is new and "warm"; after that, things return to the old paradigm, in which the teacher dominates about 80 percent of the lesson. The share of the lesson given to the students' learning process should be greater than the share given to the teacher's teaching process.

A good teacher also acts as a catalyst, continually trying to ignite each student's ability, including his or her talents, especially for students who are "slow" to receive and understand information, and not only for those on the "smart" side.

Good teachers always try to match their teaching style to their students' learning styles. When teaching style and learning style fit, it turns out that no lesson is actually hard, and every student is capable of receiving the information the teacher presents.

Applied Learning

From primary school onwards, learning content should be applicable in everyday life. Learning materials should not be "detached", unrelated to daily life; at a minimum, learners should understand the benefit of what they are studying.

What often happens instead is that many students do not understand what the material their teachers teach is for. In a seminar attended by nearly 700 teachers, from kindergarten through high school, the author asked questions about the topic of "factor trees"; almost all the teachers could answer every question, but when asked what a factor tree is for, most of them did not know.

School Management

From a special school-management training course held in 2007 and attended by hundreds of operators and owners of private schools from across Indonesia, one could deduce how little they understood good school management. In fact, school management is high-level human-resource management, very complex, and it requires professionals to run it.

The author often likens school management to a white dove with two wings, flying toward the noble purpose of its life. The first wing is the context system, that is, the education provider; the second wing is the content system, namely the principal and the teachers.

How could the dove fly to its destination if either of its wings were broken or the two could not work together? But when the two wings beat together, what beautiful harmony. God willing, such a school will become "the best school", carrying all of its students to a destination that makes its graduates people of real benefit in their lives.

By: Munif Chatib