CCNet 35/2001, 6 March 2001

"How would the US government respond if astronomers announced an
asteroid or a comet was about to slam into the earth, perhaps
leveling buildings or destroying all civilization? The answer, it turns
out, is that nobody really knows. A new report by three specialists in
asteroid research calls present planning about how to respond to such a
threat "haphazard and unbalanced," and points out that the person most
likely to sound the alarm has conceded "he has no idea who in the US
government would be receptive to serious information" about an impending
impact. [...] So if a Tunguska-size object were seen heading toward Earth
and might be aiming toward a population center, who would the
astronomers call? "It's a very serious question," said Brian Marsden,
director of the Cambridge-based clearinghouse where all such
astronomical discoveries are first reported. "I have asked on occasion, but
nobody has told me."
--David L. Chandler, Boston Globe, 3 March 2001

"You're coughing, you're sneezing, you think you've just got the
flu, but you could be a victim of sunspots."
--Margaret Munro, National Post, 2 March 2001

"Murphy's Law - the assumption that if anything can go wrong, it
will - is about to be subjected to its biggest and most conclusive test
when up to 150,000 schoolchildren attempt to discover scientifically
whether toast really does tend to fall butter-side down. Equipped
with a piece of buttered toast, a paper plate and plenty of newspaper to
protect the floor, each pupil will be expected to let their
breakfast slide off the plate 20 times and to record how many times it
lands with the buttered side towards the carpet."
--Jenny Booth, The Sunday Telegraph, 4 March 2001

    Boston Globe, 3 March 2001

    National Post, 2 March 2001

    Albuquerque Journal, 3 March 2001

    John Wagoner <>


    Andrew Yee <>

    SpaceDaily, 5 March 2001

    Science Magazine Online

    Brian G. Marsden, Harvard-Smithsonian Center for Astrophysics

     Gibor Basri, Univ. of California at Berkeley

     Michael F. A'Hearn, University of Maryland

     Clark Whelton <>

     Andrew Glikson <>

     Iain Gilmour <>

COMET C/2000 WM1 (LINEAR)
     Dan Green <>

     The Sunday Telegraph, 4 March 2001


From Boston Globe, 3 March 2001

Specialists see need for warning procedure

By David L. Chandler, Globe Staff, 3/3/2001

How would the US government respond if astronomers announced an asteroid or
a comet was about to slam into the earth, perhaps leveling buildings or
destroying all civilization?

The answer, it turns out, is that nobody really knows.

A new report by three specialists in asteroid research calls present
planning about how to respond to such a threat "haphazard and unbalanced,"
and points out that the person most likely to sound the alarm has conceded
"he has no idea who in the US government would be receptive to serious
information" about an impending impact.

Scientists have made great progress in the last two decades in understanding
the magnitude of the potential threat, and how to search for celestial time
bombs that might threaten us, but there has been little attention to how
society could or should respond. And yet, as the report explains, such
impacts represent "a very real, if low probability, threat that could
conceivably doom everyone we know and everything we care about."

The report was prepared by asteroid specialists Clark Chapman and Daniel
Durda of the Southwest Research Institute in Colorado and Robert Gold of
Johns Hopkins University in Baltimore.

Some smaller impacts, the report says, are not that rare. Objects the size
of an office building may strike the Earth about once per century, with
results that are localized but nevertheless more devastating than most other
natural disasters. The most recent such impact was on June 30, 1908 over a
remote area of the Tunguska river basin in Siberia. Even though the object
exploded five miles high and never hit the ground, trees were flattened and
charred over an area of 800 square miles - a swath greater than the New York
metropolitan area.

So if a Tunguska-size object were seen heading toward Earth and might be
aiming toward a population center, who would the astronomers call? "It's a
very serious question," said Brian Marsden, director of the Cambridge-based
clearinghouse where all such astronomical discoveries are first reported. "I
have asked on occasion, but nobody has told me."

Even if he reached someone receptive - perhaps someone at the White House,
NASA, the Department of Defense or the Federal Emergency Management Agency -
would they know what to do next? At present, that seems unlikely.

For example, the report said, "the most likely international disaster that
would result from an impact is an unprecedentedly large tsunami," or tidal
wave, "yet those entities and individuals responsible for warning, or
heeding warnings, about tsunamis are generally unaware of impact-induced
tsunamis."

And, Marsden said, if a large object were hurtling toward the planet, poorly
informed officials might make the wrong decisions: "They might do something
foolhardy like trying to blow it up, not knowing what they're dealing with,
and make even more of a mess."

Because the larger, most devastating objects are actually easier to find,
"in a sense, the smaller objects are more dangerous," Marsden said. They are
much more frequent, and much more likely to sneak up on us with little or no
warning.

In some ways, the public may be more aware of the dangers than many
officials. Increasingly, unusual events are interpreted as being impacts of
celestial objects even when they are not. This week, there was a flurry of
news reports in England about a woman nearly being struck by a meteorite,
which left a charred, three-foot-deep hole in the ground. Scientists were
preparing to rush to the scene when local officials realized that the impact
had actually come from below: An underground powerline had short-circuited
and blasted the hole upward to the surface.

Such responses illustrate another aspect of the impact hazard that has
received too little attention, Chapman said. "A wire breaks in England, and
people go nuts ... There are effects beyond just the sheer destruction of an
impact itself," he said. "Issues like public panic, or a misinterpretation
by Pakistan, say, of an impact as an act of war."

Richard Binzel, a Massachusetts Institute of Technology astronomer and
asteroid specialist who has devised a Richter-like numerical scale for
describing the risk from any new asteroid that might be on a collision
course, said yesterday he fears that the first object discovered on a
collision course will most likely be a small one, "so small that there is no
rational expectation of anything more than a spectacular display. But how do
we know that for sure, and communicate that for sure, so the public response
is rational?"

This story ran on page 3 of the Boston Globe on 3/3/2001.
Copyright 2001 Globe Newspaper Company.


From National Post, 2 March 2001

Flu epidemics coincide with solar eruptions, B.C. study says

You're coughing, you're sneezing, you think you've just got the flu, but you
could be a victim of sunspots

Margaret Munro
National Post

VANCOUVER - Influenza epidemics are more likely to sweep the globe when the
sun develops spots and sends its excess energy barrelling toward Earth,
according to Canadian researchers.

"Epidemics are four times as likely during solar maxima," says Ken Tapping,
a solar physicist with the National Research Council, pointing to the
striking correlation between flu pandemics and the peaks of the 11-year
sunspot cycle, also known as the solar maximum.

He and two colleagues have compared flu and solar records dating back to
1729 and found a statistically significant connection. There were flu
epidemics, some of them killing millions of people, in 1729, 1830, 1918,
1957, 1968 and 1977, years when solar activity and flares bombarded the
Earth with extra radiation and cosmic rays.

There were also close correlations in other years, but they were not as
pronounced. The probability of the matches happening by chance is less than
2%, the researchers say in a report to appear in the Canadian Journal of
Infectious Diseases.
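
A claim like "less than 2% by chance" is the kind of number a simple Monte
Carlo test can check. The Python sketch below is purely illustrative, not the
authors' method: the solar-maximum years are approximate, and the choice of a
+/-2-year matching window is an assumption.

```python
import random

# Pandemic years cited in the article.
pandemics = [1729, 1830, 1918, 1957, 1968, 1977]

# Approximate historical solar-maximum years (illustrative values only).
solar_maxima = [1727, 1738, 1750, 1761, 1770, 1778, 1788, 1805, 1816,
                1830, 1837, 1848, 1860, 1870, 1884, 1894, 1907, 1917,
                1928, 1937, 1947, 1957, 1968, 1979, 1989, 2000]

def hits(years, maxima, tol=2):
    """Count years that fall within `tol` years of some solar maximum."""
    return sum(any(abs(y - m) <= tol for m in maxima) for y in years)

observed = hits(pandemics, solar_maxima)

# Null hypothesis: six epidemic years scattered at random over 1729-2000.
random.seed(1)
trials = 20000
extreme = sum(
    hits(random.sample(range(1729, 2001), len(pandemics)), solar_maxima)
    >= observed
    for _ in range(trials)
)
p_value = extreme / trials
print(f"observed matches: {observed}, Monte Carlo p ~ {p_value:.3f}")
```

With these toy inputs all six pandemic years land near a maximum, and the
chance probability comes out well under a few percent, the same order as the
figure the researchers quote.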

The research was done by Mr. Tapping, epidemiologist Dr. Rick Mathias of the
University of British Columbia and B.C. physician Dr. Dave Surkan.

They are at a loss to explain the connection, and Mr. Tapping warns it could
be "completely spurious." But he said the numbers are so compelling they
decided to publish the results so others can explore the finding.

"Even though things like this sound a bit strange at the start, when you
look around you find lots and lots of evidence that the sun is playing games
in our environment," Mr. Tapping said.

These games get much more intense during the solar maximum, when the sun --
and the sunspots that develop on its surface -- release pent-up energy with
the force of millions of hydrogen bombs and send clouds of radiation and
searing hot gas hurtling toward Earth.

Such storms can warp our planet's outer atmosphere, set off spectacular
aurora borealis displays, induce powerful magnetic currents in ground-based
power lines and damage communications satellites.

The sun is also brighter at the peak of the sunspot cycle, and the amount of
ultraviolet radiation hitting Earth increases, Mr. Tapping says, noting that
the solar cycle is evident in tree rings and sea sediments. Tree and
plankton growth is enhanced at the height of the solar cycle. There have
also been suggestions that fish are more plentiful in the sea and crops grow
better during that time.

Mr. Tapping, who heads the solar monitoring program at the NRC observatory
near Penticton, and his colleagues offer no explanation for the connection
between sun and flu in their research paper. Mr. Tapping was reluctant to
speculate on how flu viruses might be affected, or mutate, as a result of
increased solar activity. "We just don't know," he said.

Copyright 2001, National Post


From Albuquerque Journal, 3 March 2001

By John Fleck
Journal Staff Writer

Astronomers are watching a New Mexico-discovered "Christmas comet" that
should be visible with binoculars, and possibly with the naked eye, in early
December.

Dubbed "WM1," the comet still is extremely faint, visible only with powerful

The comet was discovered at a small observatory outside Socorro that
recently moved into first place on the all-time list of the world's most
prolific asteroid and comet hunters.

The Lincoln Near-Earth Asteroid Research Program's scientists hunt asteroids
on an orbit that might pose a risk of colliding with Earth someday.

Finding comets - orbiting balls of dust and ice - is just a happy side
effect, said Grant Stokes, the project's director.

"We just kind of plow through everything that moves in the sky," Stokes

Stokes' group found the object last November, but the faint moving dot on
their telescope images was initially mistaken for an orbiting rocky asteroid.

It was not until December that scientists realized what it was.
"It wasn't recognized as a comet immediately," said Brian Marsden, head of
the International Astronomical Union's Minor Planet Center, the organization
that catalogues comets and asteroids.

Residents of the northern hemisphere should get their best view of the comet
in early December, Marsden said.

Comet brightness is notoriously unpredictable, and Marsden was cautious in
his forecast for WM1.

Many scientists still remember the derision heaped upon their field because
of failed predictions about the "comet of the century," Kohoutek, a 1973
apparition that fizzled.

In recent years, comet watchers have been spoiled by Hyakutake in 1996 and
Hale-Bopp, which peaked in 1997 with a spectacular display, but WM1 will not
be as bright.

Some scientists' calculations have suggested WM1 could be bright enough to
be easily seen with the naked eye, but Marsden's data puts it right on the
edge of visibility without a telescope or binoculars.

"You'd be hard-pressed seeing it with the naked eye," he said in a telephone
interview this week from his Boston office.

Marsden cautioned that it will not approach the easy visibility of Comet
Hale-Bopp. But it has caused a stir among comet-watchers as the brightest
comet they are likely to see this year.

The LINEAR observatory is run by Lincoln Laboratory at the Massachusetts
Institute of Technology for NASA and the U.S. Air Force.

The work began as an Air Force project to develop telescope technology to
track satellites. But the scientists realized in the mid-1990s that the same
technology could be used to find asteroids, which show up in telescopes as
tiny moving points of light.

In the years since, LINEAR's combination of powerful telescope cameras and
computer software to hunt for those moving points of light has created a
revolution in the hunt for asteroids.

LINEAR recently crossed a major milestone, passing the venerable Palomar
Observatory in California as the most prolific discoverer of asteroids and
comets in history.

From 1949 to 1995, Palomar discovered 3,014. LINEAR has discovered 3,571,
according to records kept by Marsden's Minor Planet Center.

It took Palomar almost half a century, but LINEAR has only been searching
since 1996.

Sixty of those discoveries have been comets, Stokes said, more than any
other comet hunter except NASA's orbiting Solar and Heliospheric
Observatory (SOHO) satellite.

The scientists do it using their two telescopes to scan large areas of sky
each night, with multiple images taken over time. A computer then searches
the images for things that move, flagging them as possible asteroids or
comets.
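
The search logic is conceptually simple: anything present in one exposure but
displaced in a later exposure of the same field is a candidate mover. The toy
sketch below illustrates the idea only; it is not LINEAR's actual software,
and the coordinates and tolerance are invented.

```python
# Candidate moving-object detection between two exposures of one sky field.
# A detection in the later frame with no stationary counterpart in the
# earlier frame is flagged as a possible asteroid or comet.

def find_movers(frame1, frame2, tol=0.5):
    """Return detections in frame2 lacking a match (within tol) in frame1."""
    def matched(p, points):
        return any(abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol
                   for q in points)
    return [p for p in frame2 if not matched(p, frame1)]

# Same field imaged twice about an hour apart: three fixed stars plus one
# object that has drifted between exposures (made-up coordinates).
frame1 = [(10.0, 20.0), (15.2, 33.1), (40.7, 8.9), (25.0, 25.0)]
frame2 = [(10.0, 20.0), (15.2, 33.1), (40.7, 8.9), (27.4, 24.1)]

print(find_movers(frame1, frame2))  # -> [(27.4, 24.1)]
```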

NASA is funding the group's work in pursuit of a catalogue of all asteroids
that might be on orbits that would threaten Earth.

None has been found, but researchers estimate that less than half of the
potentially threatening asteroids that are out there have been discovered so
far.

Copyright 2001 Albuquerque Journal


From John Wagoner <>


A faint comet is discovered many months before its closest approach to the
Sun. Orbital calculations show that the "dirty snowball" might reach
naked-eye brightness many months in the future. The inevitable result: some
astronomers who should know better tell some reporters
who don't that a Really Big Show is in the offing. Another cycle of hype and
disappointment begins.

This time the comet in question is C/2000 WM1, better known as Comet LINEAR,
for the Lincoln Laboratory Near-Earth Asteroid Research program, which first
swept it up last November 16th. (If the name sounds familiar, it should. The
LINEAR project is finding comets and asteroids at a dizzying pace, and last
July an earlier Comet LINEAR brightened into a nice binocular sight.) At
present this new Comet LINEAR glows only feebly, some 20,000 times fainter
than the dimmest stars visible without optical aid. According to the orbital
track computed by the International Astronomical Union's Central Bureau for
Astronomical Telegrams, by year's end the comet might indeed attain
naked-eye visibility.
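
Such forecasts generally come from the standard comet total-magnitude law,
m = H + 5 log10(delta) + 2.5 n log10(r), where r and delta are the comet's
distances from the Sun and Earth in astronomical units, H is an absolute
magnitude fitted to observations, and n describes how fast brightness grows
as the comet nears the Sun (n = 4 is a common default). The parameter values
below are hypothetical, not the published elements for this comet.

```python
import math

def comet_magnitude(H, n, r, delta):
    """Standard comet brightness law: m = H + 5*log10(delta) + 2.5*n*log10(r)."""
    return H + 5 * math.log10(delta) + 2.5 * n * math.log10(r)

# Hypothetical H and n; r = Sun distance (AU), delta = Earth distance (AU).
for r, delta in [(2.0, 1.5), (1.2, 0.5), (0.6, 1.0)]:
    m = comet_magnitude(7.0, 4, r, delta)
    print(f"r={r} AU, delta={delta} AU -> m = {m:.1f}")
```

Small changes in H or n shift the answer by whole magnitudes near perihelion,
which is why predictions made while a comet is still a dim, distant wisp are
so uncertain.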

But it's a little early to start calling this the Christmas Comet of 2001.
When LINEAR is at its best, it will be too close to the Sun to see well in a
dark sky and too far south to be accessible from the Northern Hemisphere.
Nevertheless, it could be a decent binocular target in the evening sky for
midnorthern observers in late November, and an even better sight for
southern skygazers a few weeks later. Or maybe not. As Sky & Telescope
contributing editor David Levy likes to say, comets are like cats -- they
have tails and do what they want. It is notoriously difficult to predict a
comet's performance months in advance while it's still a dim wisp in the
distant reaches of the solar system.

So, if someone asks you whether we'll have a bright comet for Christmas,
answer with the truth: "At this point, who knows?"

Copyright Sky & Telescope


From, 3 March 2001

by Vanessa Thomas

On February 18, a pair of planetary scientists from the California Institute
of Technology aimed the 10-meter Keck II telescope in Hawaii at an asteroid
known as 87 Sylvia. The resulting images revealed a spot of light hovering
near the 90-mile-wide (145-kilometer-wide) asteroid. Over the next four
nights, other astronomers using the Keck also found the blip and watched as
it circled around Sylvia.

Astronomers are starting to realize that many of our solar system's rocky
wanderers may harbor undiscovered satellites. In 1993, the Galileo
spacecraft discovered the first asteroid moon (later named "Dactyl")
orbiting 243 Ida. Since 1999, several other binary asteroid systems have
been uncovered by radar and visual observations, and many more asteroids are
suspected of hosting satellites. Jean-Luc Margot and Michael Brown are on a
hunt to find these stealthy asteroid companions.

"Our observations are part of an ongoing program to find asteroid binaries
with both optical and radar techniques," Margot explained. "Those systems
provide very valuable information about the composition and internal
structure of asteroids, and also about the collisional history in the main
belt of asteroids and in the inner solar system."

Brown and Margot used the Keck's adaptive optics system to reduce our
atmosphere's blurring effects and differentiate the tiny satellite's light
from Sylvia's. At the time of the discovery, Sylvia and its moon were 2.79
astronomical units (about 260 million miles or 418 million kilometers) away.
Considering the moon's observed brightness, the satellite is likely only a
few miles wide. Temporarily designated S/2001 (87) 1, the moon lies about
745 miles (1,200 kilometers) away from Sylvia and completes an orbit once
every four days.
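
The reported orbit is enough for a back-of-the-envelope mass for Sylvia via
Kepler's third law, M = 4 pi^2 a^3 / (G T^2). This is a rough sketch that
neglects the moon's mass and rounds the orbital figures; the discoverers'
own estimate may differ.

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2

a = 1.2e6        # semi-major axis: ~1,200 km in metres
T = 4 * 86400.0  # orbital period: ~4 days in seconds

# Kepler's third law solved for the central mass.
M = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"Sylvia mass ~ {M:.1e} kg")  # ~8.6e18 kg
```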

Copyright 1996-2001 Kalmbach Publishing Co. 


From Andrew Yee <>

Washington University in St. Louis

Tony Fitzpatrick, (314) 935-5257, tony_fitzpatrick@ai

EMBARGOED FOR RELEASE: February 28, 2001, 2 p.m. EST

Scientists find evidence for wet, slushy Ganymede, Jupiter's largest moon

Planetary scientists studying Jupiter's icy moon Ganymede have combined
stereo images from the Galileo mission with Voyager images from the 1970s
and found provocative features on the moon.

They have mapped long swathes of bright flat terrain that they think is
evidence of water or slush that emerged one billion years or so ago. This
bright terrain, long since frozen over, lies uniformly in troughs about 1
kilometer (a little over a half mile) beneath Ganymede's older, darker,
cratered terrain.

Ganymede, the largest moon in the solar system, is an icy satellite of
Jupiter and is larger than the planet Mercury. The roles that volcanism and
various forms of tectonics have played in molding the complex topography of
Ganymede have been hotly debated over the years. But the newly created
images, taking advantage of the quantity of the Voyager images and the
higher resolution of the Galileo ones, point to volcanism as the main
impetus behind the troughs.

"This is a new kind of stereo topographical information over hundreds of
kilometers across Ganymede," said William B. McKinnon, Ph.D., professor of
earth and planetary sciences at Washington University in St. Louis and
co-author of the study published in Nature on March 1st, 2001.

"What we think we're seeing is evidence of an eruption of water on the
surface of Ganymede," said McKinnon. "We see these long, smooth troughs that
step down up to a full kilometer. They're really very much like rift valleys
on the Earth and they're repaved with something pretty smooth. The material
in the troughs is more like terrestrial lava in terms of its fluidity than
relatively stiff glacial ice. We can see this material is banked up against
the edges of the walls of the trough and appears to have been pretty fluid,
much more so than solid, albeit warm, ice. These features directly support
the idea that they were created by volcanism."

The researchers used stereo imaging -- a method in which three-dimensional
objects are reconstructed by combining two or more images of the same subject
taken from slightly different angles -- to recover the physical topography of
Ganymede's terrains. Maps were then generated from the resulting stereo
reconstructions.

McKinnon says the images provide fundamental new information about what
really happened long ago on Jupiter's large satellite and also illuminate
an essential mystery about the way the body reworks its older, darker
cratered terrain.

One trough extends an estimated 900 kilometers, roughly 600 miles, the
approximate distance between St. Louis and New Orleans. "The long trough is
probably a billion years old, but it's actually one of the younger volcanic
features," McKinnon says. "It's the last gasp of the process that made the
bright terrain."

According to McKinnon, the geological explanation for such long lanes of
flatness is that they occurred by the extending and opening up of Ganymede's
crust. And then that portion of the crust became flooded with some sort of
lava. The high-resolution Galileo images show that material that flooded the
lanes is "no less liquid than a slush," McKinnon says. "But it is not
glacial ice, which would have big moraines and big round edges like a
flowing glacier does."

Moreover, the images reveal depressions that resemble volcanic calderas
along the edges of the bright terrains. On Earth, calderas are large,
more-or-less circular craters usually caused by the collapse of underground
lava reservoirs.

"The caldera-like features make a pretty good circumstantial case for
volcanism causing this topography," says McKinnon. "We think these
particularly bright terrains were formed by volcanism, which means that most
or all the other bright terrains started out this way, and became fractured,
or grooved, over time through tectonic forces."

The earliest proposal about Ganymede is that there was water on the Jovian
moon billions of years ago. An alternate theory proposed that the bright
features were glacier ice erupted from Ganymede's mantle. A third theory
proposes that Ganymede's rifts were caused by a process similar to seafloor
spreading seen on Earth. While crustal spreading could conceivably operate
on Ganymede, it cannot account for the smooth swaths McKinnon studied.

"In the places we have looked at, the two edges of the trough simply cannot
be matched up." The Galileo Mission will orbit around Jupiter and fly by
some of its moons for another two years before coming to an end. It has
gathered valuable images of the outer solar system and enhanced researchers'
understanding of Jupiter and its many moons. While it is not the first
mission to explore Jupiter -- there were four before it -- a number of
"firsts" have been documented.

Among them: it is the first atmospheric probe to enter Jupiter's atmosphere;
it is the first mission to discover a satellite of an asteroid (Ida's
satellite Dactyl); it is the first spacecraft to go into orbit around
Jupiter; it provided the first multispectral study of the Moon; it is the
first mission to make a close flyby of an asteroid (Gaspra); it provided the
first direct observations of a comet impacting a planet (Shoemaker-Levy 9)
and of active extraterrestrial volcanoes (Io); and it provided the first
strong evidence for an extraterrestrial ocean (Europa).


From SpaceDaily, 5 March 2001

Is Eros An Ancient Planetesimal Leftover From Sol's Birth?

Laurel - March 5, 2001

When NASA's Near Earth Asteroid Rendezvous (NEAR) spacecraft left for
asteroid 433 Eros five years ago, scientists weren't certain what they would
find when the probe arrived. Was Eros a 30-km fragment from a planet that
broke apart billions of years ago? Or perhaps a jumble of space boulders
barely held together by gravity? Was Eros young or old, tough or fragile ...
no one knew for sure.

But now, after a year in orbit and a daring landing on the asteroid itself,
NEAR Shoemaker has beamed back data that could confirm what many scientists
have lately come to believe: Asteroid Eros is not a piece of some long-dead
planet or a loose collection of space debris. Instead, it's a relic from the
dawn of our solar system, one of the original building blocks of planets
that astronomers call "planetesimals."

As NEAR Shoemaker was heading for its historic landing on Feb. 12, 2001,
team members hoped the spacecraft --which was designed to orbit, not land--
would simply survive. When it did survive, they set their sights a little
higher. From its perch on the surface of the asteroid, NEAR's gamma-ray
spectrometer can detect key chemical signatures of a planetesimal -- data
that scientists are anxious to retrieve.

"The gamma-ray instrument is more sensitive on the ground than it was in
orbit," says Goddard's Jack Trombka, team leader for the GRS. "And the
longer we can accumulate data the better." NASA recently gave the go-ahead
for NEAR's mission to continue through Feb. 28th, tacking four days onto an
extension granted just after the spacecraft landed.

To do its work the GRS relies partly on cosmic rays, high-energy particles
accelerated by distant supernova explosions. When cosmic rays hit Eros, they
make the asteroid glow, although it's not a glow you can see with your eyes;
the asteroid shines with gamma-rays.

"Cosmic rays shatter atomic nuclei in the asteroid's soil," explains
Trombka. Neutrons that fly away from the cosmic ray impact sites hit other
atoms in turn. "These secondary neutrons can excite atomic nuclei (by
inelastic scattering) without breaking them apart." Such excited atoms emit
gamma-rays that the GRS can decipher to reveal which elements are present.

"We can detect cosmic-ray excited oxygen, iron and silicon, along with the
naturally radioactive elements potassium, thorium and uranium," says
Trombka. Measuring the abundances of these substances is an important test
of the planetesimal hypothesis.
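
In practice the decoding step amounts to matching peaks in the gamma-ray
spectrum against known line energies. The sketch below uses standard line
energies for the elements named above, but the "observed" peaks and the
matching tolerance are invented for illustration.

```python
# Known gamma-ray line energies in MeV (standard values).
LINES = {
    6.129: "O (cosmic-ray excitation)",
    1.779: "Si (cosmic-ray excitation)",
    0.847: "Fe (cosmic-ray excitation)",
    1.461: "K-40 (natural radioactivity)",
    2.615: "Tl-208, thorium decay chain (natural radioactivity)",
}

def identify(peaks_mev, tol=0.01):
    """Match observed spectral peaks to the known line list."""
    return [(p, label) for p in peaks_mev
            for e, label in LINES.items() if abs(p - e) <= tol]

# Invented peak positions standing in for a measured spectrum.
for energy, label in identify([1.780, 1.460, 6.128]):
    print(f"{energy:.3f} MeV -> {label}")
```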

Planetesimals came to be when the solar system was just a swirling
interstellar cloud, slowly collapsing to form the Sun and planets. Dust
grains condensed within that primeval gas.

The grains were small, but by hitting and sticking together they formed
pebble-sized objects that fell into the plane of the rotating nebula. The
pebbles accumulated into boulders, which in turn became larger bodies, 1 to
100 km wide. These were planetesimals -- the fundamental building blocks of
the planets.

For reasons unknown Eros was never captured by a growing protoplanet. It
remained a planetesimal even as other worlds in the solar system grew and
evolved.


Fully-developed planets like Earth are chemically segregated -- that is,
they have heavier elements near their cores and lighter ones at the surface.
Planetary scientists call this "differentiation." If Eros were a chip from a
planet that broke apart, perhaps in the asteroid belt, it would exhibit
chemical signatures corresponding to some layer from a differentiated world.

For example, Eros might be iron-rich if it came from the core of such a
planet or silicon-rich if it came from the crust.

Instead, "orbital data from the x-ray spectrometer (a low-energy cousin of
the GRS) showed Eros is very much like a type of undifferentiated meteorite
we find on Earth called ordinary chondrites," says Andrew Cheng, the NEAR
project scientist at Johns Hopkins University Applied Physics Laboratory
(APL), which manages the mission for NASA.

Eros seems to harbor a mixture of elements that you would only find in a
solar system body unaltered by melting (an unavoidable step in the process
of forming rocky planets). But, says Cheng, there is a possible discrepancy.

"The abundance of the element sulfur on Eros is less than we would expect
from an ordinary chondrite. However, the x-ray spectra tell us only about
the uppermost hundred microns of the surface, and we do not know if the
sulfur depletion occurs only in a thin surface layer or throughout the bulk
of the asteroid."

The GRS can go deeper, as much as 10 cm below the surface. Although the
instrument can't detect sulfur, it is sensitive to gamma-ray emissions from
other elements such as radioactive potassium that are indicators of melting.
Like sulfur, potassium is a volatile element -- it easily evaporates when a
rock is heated. Finding plenty of potassium would strengthen the conclusion
that Eros is an unmelted and primitive body.

On the other hand, a widespread dearth of "volatiles" would hint that Eros
isn't so primitive after all.

It might sound like an ivory-tower question, but knowing the makeup of this
asteroid -- both its internal structure and its chemical composition-- has a
practical application.

The solar system is littered with space rocks more or less like Eros, and
many come uncomfortably close to Earth. One day we may need to blow one
apart (or deflect one without blowing it apart) to avoid an unpleasant
collision.

Near-Earth asteroids are also potential mining resources as humans expand
into space. In either case, knowing more about them is a good idea!

"Our first four data sets are here and they look great," says Jack Trombka.
"John Goldsten, the lead engineer for the gamma-ray spectrometer at the
Johns Hopkins Applied Physics Laboratory, has done a fabulous job making the
instrument work on the surface, which is a different environment than orbit.

"We're just hoping to get as much data as we can before the mission ends."

Copyright 2001, SpaceDaily


A dEbate from Science Magazine Online

Cool objects found in young star clusters in Orion and Perseus, such as
those reported by M. R. Zapatero Osorio and colleagues in their research
article (6 Oct., p. 103), have been described variously as "planetary mass
objects," "isolated giant planets," "free-floating planets," and
"superplanets" (1). The word "planet" has been invoked because the masses of
these objects are apparently only about 5 to 10 times that of Jupiter.
However, even if those masses are confirmed, we maintain that such bodies
are better thought of as low-mass brown dwarfs, as they are not in orbit
around stars.

Brown dwarfs are "failed stars" with masses below 7.2% of the sun's mass
(0.072 solar mass). Unable to develop the central pressure and temperature
required to sustain hydrogen fusion, they decline in luminosity precipitously
in less than 100 million years (2), whereas stars maintain a near-constant
luminosity for billions of years. At even lower masses, below 0.013 solar
mass (about 13 Jupiter masses), even deuterium fails to fuse, and thus, by
analogy, 0.013 solar mass has been proposed as the boundary between brown
dwarfs and planets (3). However, deuterium burning is a relatively minor
phenomenon, producing at most a few-million-year slowdown in the cooling of
low-mass sources just after birth, and it seems disproportionate to draw such
a major demarcation solely on this basis.
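
The thresholds in the letter amount to a simple classification by mass. A
minimal encoding follows (the Jupiter-to-solar mass conversion is a standard
value; the label for the lowest bin follows the authors' proposal):

```python
HYDROGEN_LIMIT = 0.072   # solar masses; below this, no sustained H fusion
DEUTERIUM_LIMIT = 0.013  # solar masses (~13 Jupiter masses); no D fusion below

def classify(mass_msun):
    """Classify a free-floating body by the mass limits quoted in the letter."""
    if mass_msun >= HYDROGEN_LIMIT:
        return "star"
    if mass_msun >= DEUTERIUM_LIMIT:
        return "brown dwarf"
    # the contested regime: "planet" in the press, per this letter:
    return "low-mass brown dwarf (proposed name: grey dwarf)"

M_JUP = 9.55e-4  # one Jupiter mass in solar masses
print(classify(7 * M_JUP))  # a ~7 Jupiter-mass Orion/Perseus object
```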

Yet, if one must discriminate, use of the term "planets" for the bodies
below 0.013 solar mass is an inappropriate (albeit notably media-friendly)
choice.
Science does not take place in a cultural vacuum, and the word "planet" has
a 3000-year history. Common usage today implies a low-mass object that is
born and orbits around a more massive stellar object, whereas the Orion and
Perseus objects are isolated and likely to have formed by direct collapse
and fragmentation of a molecular cloud core, just like stars and brown
dwarfs. Calling them "planets" implies swarms of Jupiters stripped from
their parent stars, a scenario currently unsupported by any evidence, and
one that has created confusion in the press (4).

Our preference is to call these bodies low-mass brown dwarfs, but if a new
name is deemed necessary based on characteristic mass alone, then we suggest
"grey dwarf." This term preserves the neutral terminology introduced by
Tarter with "brown dwarf" (5) and provides a link to higher mass
free-floating objects without suggesting implausible relationships to our
familiar solar system gas giants.

Mark McCaughrean1*
Neill Reid2
Chris Tinney3
Davy Kirkpatrick4
Lynne Hillenbrand5
Adam Burgasser6
John Gizis4
Suzanne Hawley7

1Astrophysikalisches Institut Potsdam, An der Sternwarte 16, 14482 Potsdam,
Germany.
2Department of Physics and Astronomy, University of Pennsylvania,
Philadelphia, PA 19104-6396, USA.
3Anglo-Australian Observatory, Post Office Box 296, Epping, New South Wales
1710, Australia.
4Infrared Processing and Analysis Center 
5Astronomy Department
6Division of Geological and Planetary Sciences, California Institute of
Technology, Pasadena, CA 91125, USA.
7Department of Astronomy, University of Washington, Seattle, WA 98195, USA

*To whom correspondence should be addressed.
E-mail: m[no]j[spam]

References and Notes

1. P. W. Lucas and P. F. Roche, Mon. Not. R. Astron. Soc. 314, 858 (2000);
J. Najita, G. P. Tiede, J. S. Carr, Astrophys. J. 541, 977 (2000); M. R.
Zapatero Osorio et al., Science 290, 103 (2000).
2. J. Liebert, Astron. Soc. Pacific Conf. Ser. 212, 7 (2000); C. G. Tinney,
Nature 397, 37 (1999).
3. A. Burrows et al., Astrophys. J. 491, 856 (1997).
4. See, for example, the articles at;;
5. J. C. Tarter, Astrophysics of Brown Dwarfs (Cambridge Univ. Press,
Cambridge, 1986), p. 121.


Published dEbate responses at

What is a planet?
Dr Jack Lissauer   (22 February 2001)

Dual classification for Pluto and other bodies
Dr Brian G. Marsden   (22 February 2001)

How should we define "planet"?
Prof. Gibor Basri   (22 February 2001)

This is not an Important Question
Dr Dave Stevenson   (22 February 2001)

This is an important question.
Dr. Ben R. Oppenheimer   (23 February 2001)

What is not a planet
Chris Tinney   (27 February 2001)

Classification finds patterns
Michael F. A'Hearn   (1 March 2001)

Dr Brian G. Marsden,
Associate Director for Planetary Sciences
Harvard-Smithsonian Center for Astrophysics

E-mail Dr Brian G. Marsden:

Though seemingly evoking a very basic astronomical concept, the ancient word
"planet" may have been doomed as soon as it ceased to apply to the seven
traditional sky wanderers and acknowledged instead the place of the earth in
the Copernican revolution and Galileo's recognition of scarcely smaller
objects themselves in orbit about Jupiter. Whether one is talking about a
"terrestrial" or a "giant" planet in our solar system, or a "free-floating"
planet in a young star cluster, it has rarely been scientifically useful to
use the word without at least some qualification.

Nowhere has this been more evident than in connection with the smaller
sun-orbiting bodies to which one could not obviously instead apply the word
"comet". After its initial classification as the "eighth" planet, Ceres was
catalogued as the first "minor" planet, following the recognition of other
small bodies in the cisjovian belt. Several of the denizens of the
transneptunian belt found in recent years have also now been added to this
catalogue, including very recently, as No. 20000, an object the size of
Ceres. Popular affection for Pluto as the ninth "major", or "principal",
planet (given that Neptune understandably replaced Ceres as the eighth) has
kept it out of the small-bodies catalogue, even though it has a diameter
little more than twice that of No. 20000, and the listing contains a number
of companion "plutinos" with orbital periods precisely half as long again as
that of Neptune. Many astronomers think it only a matter of time before
another, more distant, Pluto-sized transneptunian is found and included in
the compilation. It is a pity that Pluto would not be listed first.

Some thinking would have it that a gravitationally collapsed, and hence
approximately spherical body is a "planet" (if it isn't a star or a brown
dwarf), with no further qualification needed. Such a definition would
include Ceres, Pluto, the moon and a number of other small bodies. But small
members of the solar system are very obviously classifiable by the
relationship of their orbital characteristics to those of larger bodies, and
the sheer numbers make organized tabulations of small bodies necessary.
Consideration of both the physical and the dynamical situation therefore
suggests that an appropriate compromise might be to describe some bodies as
having "dual status". Indeed, a few objects have already been classified as
both planetary and cometary.

Prof. Gibor Basri,
Dept. of Astronomy
Univ. of California at Berkeley

E-mail Prof. Gibor Basri:

The current controversies over the astrophysical definition of "planet"
arise from the conflict between three different arenas in which the
discussion might take place. I'll call them (i) characteristics (the
physical properties of the object); (ii) circumstance (the circumstances and
environment in which the object is found); and (iii) cosmogony (the way in
which the object was formed). Essentially all the controversies come from a
disagreement on how much weight to give arguments in each of these arenas
(and lesser disagreements about where to draw the line within each of them).
For example, the debate over "free-floating planets" is over the use of
characteristics vs. cosmogony and circumstance, and the debate over Pluto is
between characteristics and circumstance.

The primary characteristic on which the border between planets and brown
dwarfs may be drawn is mass. There are two obvious effects of mass one might
use to make the distinction - source of luminosity and source of pressure
support. The former suggests making the dividing line at about a mass of 13
jupiters (where deuterium fusion becomes possible); this has been preferred.
It reflects the cultural impression that "stars shine by their own light,
but planets don't". We might call objects that are ever capable of fusion
"fusors". The circumstance important in defining planets is whether the
object is in orbit (and what sort of orbit, around what). Is being in orbit
(now, or at least during formation) an essential property of planets?
Definitions in the popular culture generally insist on this. The "Pluto
question" asks whether one can call the object a planet if there are a lot
of similar objects in similar orbits (Ceres lost on this count). A final
(defunct) criterion required the orbits to be fairly circular. This was
actually in part a discussion in the third arena: cosmogony (since circular
orbits were thought to imply formation in a protoplanetary disk). We now
understand that formation in disks is generic to fusors and non-fusors. The
extrasolar planetary systems that have been discovered show that we know too
little about planetary formation to resolve the many questions that have
arisen. It is natural, therefore, that any definitions of "planet" that rely
predominantly on cosmogony produce currently unresolvable debates, and more
confusion than enlightenment.

From an astrophysical point of view, it is cleanest to take a purely
mass-based definition. I don't agree that the presence or absence of fusion
is a "minor phenomenon" qualitatively; it is not remarkable that at the
boundary between classes it becomes so quantitatively. Since, however, the
word "planet" is used broadly in our culture, it is reasonable to take
account of its cultural meanings. I propose a definition that captures the
essential astrophysical and cultural imperatives: "A planet is a non-fusor
born in orbit around a fusor". This makes it sensible to call single
non-fusors something like "grey dwarfs". The system recently found with a
10-jupiter-mass object at 0.3 AU from a solar-type star, and a
25-jupiter-mass object at 3 AU, is a stellar/brown dwarf binary system
(albeit with an intriguing cosmogony) in which the star has a close giant
planetary companion. My thesis is developed in much greater detail
elsewhere.
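Basri's proposed definition can be captured in a few lines (an illustrative sketch; the 0.013 solar-mass "fusor" threshold is the deuterium-burning limit from the surrounding discussion, and the function names are ours):

```python
# A toy encoding of Basri's proposal: "A planet is a non-fusor born in
# orbit around a fusor." Masses are in solar masses.
FUSOR_MASS_LIMIT = 0.013  # ~13 Jupiter masses: capable of (at least D) fusion

def is_fusor(mass_msun: float) -> bool:
    """An object ever capable of fusion, per the deuterium-burning limit."""
    return mass_msun >= FUSOR_MASS_LIMIT

def is_planet(mass_msun: float, born_orbiting_fusor: bool) -> bool:
    """Basri: a planet is a non-fusor born in orbit around a fusor."""
    return (not is_fusor(mass_msun)) and born_orbiting_fusor

# A 10-Jupiter-mass object (~0.0095 Msun) orbiting a solar-type star:
print(is_planet(0.0095, born_orbiting_fusor=True))   # True: a planet
# The same mass free-floating is not a planet (a "grey dwarf" instead):
print(is_planet(0.0095, born_orbiting_fusor=False))  # False
# A 25-Jupiter-mass companion (~0.024 Msun) is a fusor, hence a brown dwarf:
print(is_planet(0.024, born_orbiting_fusor=True))    # False
```

The two example masses mirror the system Basri cites, in which the same rule yields one planetary and one brown-dwarf companion.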

Michael F. A'Hearn
University of Maryland

E-mail Michael F. A'Hearn:

Why do we, as scientists, care how Pluto (or anything else) is classified?
This question must be answered before we can intelligently deal with how to
present the result of a classification to the public. Scientists put things
into groups, the members of which share common properties, in order to find
patterns that will enable us to better understand how the bodies work or how
they became what they are. If we are interested in origins, then it is clear
with our present understanding (which might change in the future) that
free-floating bodies of mass comparable to Jupiter are not in the same
category as Jupiter itself. Similarly, it is clear that Pluto is not a
planet like Jupiter but rather a planet like the numerous Plutinos in the
3:2 libration with Neptune. Thus Pluto should be classified as the largest
of the Plutinos.

On the other hand, if we want to know how gravitationally dominated bodies
with a lot of ice work, then Pluto is in the class of frammises, together
with Europa, Titan, and Triton.
Copyright 2001 by The American Association for the Advancement of Science.



From Clark Whelton <>

Gunn, Joel D. (editor and contributor) (2000) The Years Without Summer:
Tracing A.D. 536 and Its Aftermath.  BAR International Series 872,
Archaeopress, Oxford.  ISBN 1 84171 074 1.  Price:  GBP32 or US$47.

WHENCE A.D. 536?

THE YEARS WITHOUT SUMMER presents the thoughts of 15 archaeologists and
historians on the cultural consequences of a worldwide A.D. 536 atmospheric
event. According to historical reports, the earth in A.D. 536 was shrouded
in a cloud of dust that darkened the sun for months. That year and the 300
subsequent years witnessed declining sea levels and reduced tree-ring
growth, in other words "global cooling". This long-forgotten episode has
recently emerged from the lost corridors of time, thanks to atmospheric
scientists and dendroclimatologists who found extreme anomalies in new
climate records. Following these leads has produced a plethora of surprising
findings on the cultural side of the equation.


Searching for the cause of the A.D. 536 cooling is left to others; THE YEARS
WITHOUT SUMMER is concerned with the cultural fallout. The episode may have
begun with some sort of cataclysm late in A.D. 535 or early A.D. 536.
Famines, wars, and cultural changes of unusual magnitude followed. The
sudden appearance of such a powerful, worldwide history-maker has led some
scholars to think that A.D. 536 marks the true end of the Roman Empire, not
to mention the rise of Arthurian legend. Two authors (Jones, Young) explain
why. Similar repercussions and revolutions are evident in China (Houston)
and Africa (Schmidt). The Maya Early-Late Classic "Hiatus" also falls at
this time. Two chapters (Robichaux, Chase and Chase) examine this question. One
author (Tanner) treats the question of 50-year sea level changes during and
after A.D. 536. Half of the contributions in the book are from the Atlantic
Coast of the southeastern United States, where subregional cultural changes
vary from good, to bad, to unaffected. Important patterns also emerged, such
as southward movements along both the European (Saxon) and North American
(Algonkian) coasts.


For historians, the rediscovery of the A.D. 536 event requires some
adjustments of long-held hypotheses and offers some significant explanations
for elusive conditions, such as the terminal Roman Empire. For
archaeologists, without the advantages of historical comparison, the A.D.
536 event raises much more challenging issues. How are archaeologists to
cope with the fact of sudden changes that substantially rework cultural
phenomena? THE YEARS WITHOUT SUMMER suggests developing an event-oriented
archaeology. It will include the use of tree ring dating and modified
understandings of radiocarbon chronologies. Interpretation may be
facilitated through the established theoretical bases of punctuated
equilibrium models and complex systems theory.


The cause and consequences of the A.D. 536 event are being debated, and you
are invited to participate. I have organized a web site that discusses this
problem and offers books on the various ideas of possible causes, including
a meteor/comet encounter and a volcanic eruption. THE YEARS WITHOUT SUMMER
volume is also available through this web site.

If you have comments on your research during this time period, or upon any
of the related sources, please post them on the web site.


Joel D. Gunn, Ph.D.
Department of Anthropology
University of North Carolina



From Andrew Glikson <>

Dear Benny,

In an article cited in CCNet (02-03-01) G. Jeffrey Taylor states "Nor has
anyone found other indicators of impact, such as shocked quartz." In a
following comment "Doubts emerge about P/T impact hypothesis" you wrote: "I
think Iain Gilmour is right to ask for more convincing evidence - in
particular evidence of shocked materials - before he is prepared to accept
the rather weak P/T impact hypothesis."

However, evidence for impact/s concomitant with the P-T boundary existed
prior to the reporting of fullerenes with extraterrestrial He and Ar ratios
by Becker et al. 2001 (Science, vol. 291,p. 1530-1533), a point completely
missed by the media:

1. Araguinha, Brazil, is a proven 42-km-diameter impact structure,
discovered in 1928 and dated at 247 +/- 5.5 Ma, i.e., P-T boundary age
(Grieve's International crater listing; Deutsch et al., 1992,
Tectonophysics, 212, 205-218). 

2. Bona-fide shocked quartz fragments containing planar deformation features
(PDF) have been confirmed in P-T boundary sediments in Antarctica and New
South Wales by Retallack et al., 1998 (Geology, 26:979-982). 

Becker et al.'s (2001) evidence may or may not provide corroborating
evidence for these previously confirmed observations, as fullerenes can form
terrestrially, while their contained extraterrestrial He3/He4 and Ar ratios
may be contributed by micrometeorites and/or impact fallout. The real
question is whether the magnitude of the impact/s was sufficient to trigger
the Norilsk volcanism and the P-T boundary mass extinction.
Clearly, the size of Araguinha (42 km) is insufficient in this regard.
Considering that 2/3 of impacts can be expected to fall in the oceans, and
that comets contain only low levels of siderophile and PGE elements (thus
accounting for the low Ir along the P-T boundary), a large oceanic cometary
impact cannot be ruled out in the present state of knowledge.


Andrew Glikson
Australian National University


From Iain Gilmour <>


I have some concerns with Andrew Glikson's evidence (below). However, much
of this debate really does highlight the need for detailed, rigorous research.

Araguinha Impact structure, Brazil

At 42 km in diameter, this is not an enormous structure; if a crater of this
size could cause an extinction on the scale of the P-T, then we really
should be worried! However, its age is also a problem. More recent
Ar-40/Ar-39 ages by Hammerschmidt and von Engelhardt (1995) Meteoritics, 30,
227-233, on two size fractions of an impact melt rock gave plateau ages of
245.5 +/- 3.5 Ma and 243.3 +/- 3.0 Ma,
respectively. This is some time removed from the P-T boundary at 251.4 +/-
0.3 Ma; see Bowring et al. (1998) Science, 280, 1039-1045 for an up-to-date
discussion of the age of the P-T boundary.

A major problem is that, unlike the K-T, we have no direct stratigraphic
correlation between this crater (or any other) and the P-T transition. Given
that it has been suggested that this transition may have been very short
(e.g. Rampino et al. (2000) Geology 28, 643-646; Bowring et al. (1998)
Science, 280, 1039-1045), even a few hundred thousand years error on the
crater age is a problem, let alone 3.5 million years on an age 5 Ma before
the P-T boundary.

The elegance of the K-T impact-extinction hypothesis proposed by Alvarez,
Smit and others is the extremely tight correlation between the age of the
Chicxulub impact, the age of its ejecta (as dated in impact glasses), and
the very tight and global stratigraphic correlation of that ejecta with the
environmental and paleontologic changes observed in the geological record.
Anyone who has ever visited the Italian or Western Interior US K-T
successions cannot fail to be impressed by this. This is definitely not
the situation at the P-T.

Shocked Quartz in P-T(?) sediments

Two points spring to mind about the Retallack et al. study. Firstly, the
authors of that paper were themselves less than convinced by their own
evidence.

The breccias at the three locations examined by Retallack et al. yielded
some shocked quartz, but it is an order of magnitude less abundant (0.2
vol%) and smaller (only as much as 176 micrometers in diameter) than shocked
quartz at some Cretaceous-Tertiary boundary sites. They observed faint Ir
anomalies but they were an order of magnitude less than iridium anomalies at
some Cretaceous-Tertiary boundary sites. The authors themselves concluded
"The idea that impact caused the extinctions thus remains to be
demonstrated."

Secondly: Concerns on the successions studied were raised by Isbell and
Askin (1999) Geology, 27, 859. In a comment on Retallack et al. they stated
"we are concerned about the placement of the boundary at Mount Crean in
southern Victoria Land, Antarctica. It is our contention that the P-T
transition is not present at Mount Crean; we believe that Retallack et al.'s
boundary requires further discussion and clarification. Although we do not
dispute the possibility that Antarctic strata may record the P-T transition,
we believe that unequivocal evidence has yet to be collected and/or
presented."

In a reply to this comment, Retallack acknowledged that one of the
successions studied has not yielded age-diagnostic fossils, and argued that
the boundary could be constrained by other evidence, such as paleosols,
while conceding that a variety of hypotheses about ages and disconformities
are possible. All this highlights the difficulty facing geologists in
identifying stratigraphic boundaries around the world and the need for
thorough field investigations of these successions.


Dr Iain Gilmour
Senior Lecturer in Geochemistry
Planetary and Space Sciences Research Institute
The Open University
Milton Keynes MK7 6AA
United Kingdom

+44 190 865 5140 (direct)
+44 190 865 2883 (secretary)
+44 190 865 5910 (fax)

(15) COMET C/2000 WM1 (LINEAR)

From Dan Green <>

I strongly agree with Jonathan Shanklin's concern (expressed in the Feb. 28
installment of CCNet) about telling the public that comet C/2000 WM1 might
become a naked-eye object later this year (Ron Baalke, Feb. 27 installment).
The overwhelming evidence is that comets that are "dynamically new" (that
is, making their first apparent passage through the inner solar system)
increase and decrease in brightness via a much shallower gradient than do
comets that have made multiple passages (though there are always occasional
exceptions).

Comet C/1973 E1 (Kohoutek; old-style 1973f = 1973 XII) followed an
inverse-third-power law (that is, 7.5 log r, or n = 3, where 2.5n = 7.5)
quite closely through its first passage through the inner solar system, but
our knowledge of light curves was not as complete then, and a possibly
spurious brightness surge shortly after discovery (a problem inherent via
imprecise photographic magnitudes) may have encouraged astronomers to assume
an inverse-fourth-power (10 log r) rise in brightness, which did not pan out
as the comet approached perihelion.  Comet C/1973 E1 did not "fizzle out" or
fade, contrary to some beliefs, but simply brightened and faded merrily at
around n = 3.
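The 7.5 log r and 10 log r gradients quoted here come from the standard comet total-magnitude law, m = H0 + 5 log10(Delta) + 2.5 n log10(r), with r the heliocentric and Delta the geocentric distance in AU. A quick sketch (the absolute magnitude H0 = 5 is purely a hypothetical value for illustration):

```python
import math

def comet_magnitude(h0: float, r_au: float, delta_au: float, n: float) -> float:
    """Standard comet total-magnitude law: m = H0 + 5 log Delta + 2.5 n log r."""
    return h0 + 5.0 * math.log10(delta_au) + 2.5 * n * math.log10(r_au)

# Hypothetical absolute magnitude, chosen only for illustration.
h0 = 5.0
# At r = 5 AU (roughly C/2000 WM1's current distance), taking Delta ~ r:
m_n3 = comet_magnitude(h0, 5.0, 5.0, n=3)  # shallow law typical of dynamically new comets
m_n4 = comet_magnitude(h0, 5.0, 5.0, n=4)  # steeper inverse-fourth-power law
print(round(m_n3, 1), round(m_n4, 1))  # 13.7 15.5
```

For a fixed H0, the steeper n = 4 law makes a comet fainter far from the Sun and brighter near perihelion, so fitting a steep law to sparse early photometry is exactly how over-optimistic brightness forecasts arise.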

We now know better than to excite the public in a situation like this.
Currently, comet C/2000 WM1 is around total CCD magnitude 17 +/- 1. It has
only a very small coma (some 12 arcsec, by most measures), so does not
appear to be very active at its current heliocentric distance of 5 AU. It
appears to be fainter than C/1973 E1 was at the same distance, indicating
that it may be a smaller (or less active) comet. Thus, Mark Kidger is
correct (March 1 installment of CCNet) in advising that C/2000 WM1 is likely
to remain quite a bit fainter than C/1973 E1 became.

If comet C/2000 WM1 (LINEAR) attains naked-eye brightness at all, it will
probably be faintly naked-eye (meaning that only experienced observers will
be able to detect it without optical aid). If the comet does something
unexpected and brightens considerably beyond that point, it will be a
pleasant surprise and the public can be quickly notified at that stage. So
while it is always nice to have little stories about binocular comets
published by the news media (to let the public know that such objects are
there), I do hope that stories of a naked-eye Christmas comet (for C/2000
WM1) will not appear in the coming several months, unless something credible
changes in what we now know about this comet.

Dan Green 

Daniel W. E. Green                   [ ]
Harvard-Smithsonian Center for Astrophysics
Cambridge, MA  02138, U.S.A.


From The Sunday Telegraph, 4 March 2001

By Jenny Booth

MURPHY'S Law - the assumption that if anything can go wrong, it will - is
about to be subjected to its biggest and most conclusive test when up to
150,000 schoolchildren attempt to discover scientifically whether toast
really does tend to fall butter-side down.

Equipped with a piece of buttered toast, a paper plate and plenty of
newspaper to protect the floor, each pupil will be expected to let their
breakfast slide off the plate 20 times and to record how many times it lands
with the buttered side towards the carpet.

The children will then try to prove whether or not the butter itself
destabilises the toast, by repeating the experiment with an unbuttered slice
of toast. One side of the bread will have the letter "B" written in marker
pen. They must submit their results via the internet to a central computer.

The Tumbling Toast Test will be launched in schools on Friday by Maths Year
2000, a Government-sponsored campaign to make maths more interesting.
Perhaps unsurprisingly, it has also been sponsored by the makers of Lurpak
butter.

Leading the experiments will be (CCNet list-member) Robert Matthews, the
science correspondent of The Sunday Telegraph. In 1996 Mr Matthews wrote a
paper published in the European Journal of Physics entitled "Tumbling Toast,
Murphy's Law and the Fundamental Constants". In the article Mr Matthews
takes issue with most scientists, who say that by the laws of probability it
is equally likely that toast will land butter-side up as butter-side down.
He advances a theory based on the fact that toast rotates in mid-air.

In the time it typically takes to fall from a plate held at about waist
height down to the floor, the toast can turn over only once - making it
considerably more likely to land butter-side down, Mr Matthews suggests.

To test his theory, secondary school pupils will be asked to carry out a
third experiment, this time climbing on to a chair or ladder and allowing
the toast to fall from a height of 2.5 metres. If Mr Matthews's theory is
right, the extra distance that the toast has to fall should give it more
time to rotate, producing a more random result. In 1991 a BBC television
programme about Murphy's Law claimed to have proved that butter-up landings
were just as common as butter-down, through an experiment in which buttered
toast was tossed into the air 300 times.
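Matthews's timing argument rests on simple free-fall arithmetic, which can be sketched as follows (the 0.9 m waist height is our assumed figure; only the 2.5 m test height comes from the article):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(height_m: float) -> float:
    """Time for toast to free-fall a given height: t = sqrt(2h / g)."""
    return math.sqrt(2.0 * height_m / G)

# Waist height (~0.9 m, an assumption) versus the article's 2.5 m test:
t_waist = fall_time(0.9)
t_high = fall_time(2.5)
print(round(t_waist, 2), round(t_high, 2))  # 0.43 0.71
# In ~0.43 s the slowly rotating toast has time for only about half a
# turn, per Matthews's argument, so it lands butter-side down; the longer
# ~0.71 s fall allows further rotation, which should randomize the result.
```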

Mr Matthews said: "The trouble with their experiment, of course, is that
toast doesn't often get tossed up in the air like a coin - well, not in most
homes I know, anyway. Toast usually ends up on the floor after falling from
plates or tables. I've done some sums that take this into account, and they
suggest there is a genuine Murphy's Law effect going on that really does
tend to make toast land butter-side down more often. And it's nothing to do
with the presence of butter or jam whatsoever."

Organisers hope that enough children will take part in the test to beat the
record for participation in an internet experiment, held by Tesco Schoolnet.
Last year, 132,010 pupils took part in Schoolnet's project to create an
electronic Domesday Book for the 21st century.

Murphy's Law was originally advanced in the 1940s by Edward A. Murphy, a
research and development officer at Wright-Patterson Air Force base in
Dayton, Ohio. He developed the theory after finding that every one of the
strain gauges he was using in an experiment on deceleration had been
incorrectly wired.

Copyright, The Sunday Telegraph
The CCNet is a scholarly electronic network. To subscribe/unsubscribe,
please contact the moderator Benny J Peiser <>.
Information circulated on this network is for scholarly and educational
use only. The attached information may not be copied or reproduced for
any other purposes without prior permission of the copyright holders.
The fully indexed archive of the CCNet, from February 1997 on, can be
found at
DISCLAIMER: The opinions, beliefs and viewpoints expressed in the
articles and texts and in other CCNet contributions do not necessarily
reflect the opinions, beliefs and viewpoints of the moderator of this
network.



"The tropical Pacific Ocean may be able to open a "vent" in its
heat-trapping cirrus cloud cover and release enough energy into space
to significantly diminish the projected climate warming caused by a
buildup of greenhouse gases in the atmosphere. If confirmed by further
research, this newly discovered effect -- which is not seen in current
climate prediction models -- could significantly reduce estimates of
future climate warming."
--Goddard Space Flight Center & American Meteorological
Society, 28 February 2001

"What's wrong with the computer forecasts? At the heart of the
climate scenarios is the calculation of the response of the climate
system to energy input from increases in minor greenhouse gases. The most
sophisticated computer program would have to track 5 million climate
parameters and their interactions, a feat ideally requiring 10^19 degrees of
freedom. The computer to carry out such a calculation does not yet exist.
More importantly, the physics of many climate interactions and
measured values of many parameters are poor. Furthermore, it is certain
that not all the causes of natural climate change, e.g., El
Niño-Southern Oscillation, or changes of the sun, are understood."
-- S. Baliunas and W. Soon, Harvard-Smithsonian Center for Astrophysics

"Press reports coming out of Siberia are referring to this winter as
the harshest in decades, and some hint at it being the coldest
winter in the history of the observed record there. During a cold wave
that swept across the region in mid-January, there were reports of
temperatures as low as -94F in the Kemerovo region, an area located about
2000 miles east of Moscow. [...] no one could argue that record cold
temperatures would be expected under a climate dominated by an enhanced
greenhouse effect. Temperatures this low only go to prove that
natural variability is still largely responsible for today's weather." 

    Andrew Yee <>

    Environmental News Network, 1 March 2001

    CO2 Science, 1 March 2001

    John L Daly, 27 February 2001

    CO2 Science, 1 March 2001

    CO2 Science, 1 March 2001

    World Climate Report, 5 March 2001

    S. Baliunas and W. Soon, Harvard-Smithsonian Center for Astrophysics

    Greening Earth Society, 5 March 2001

     World Climate Report, 5 March 2001

     Honolulu Star-Bulletin, 3 March 2001


From Andrew Yee <>

Lynn Chandler
Goddard Space Flight Center, Greenbelt, Md.          February 28, 2001
(Phone: 301/614-5562)

Stephanie Kenitzer
American Meteorological Society
(Phone: 425/432-2192)

Release No: 01-18


The tropical Pacific Ocean may be able to open a "vent" in its heat-trapping
cirrus cloud cover and release enough energy into space to significantly
diminish the projected climate warming caused by a buildup of greenhouse
gases in the atmosphere.

If confirmed by further research, this newly discovered effect -- which is
not seen in current climate prediction models -- could significantly reduce
estimates of future climate warming. Scientists from NASA's Goddard Space
Flight Center in Greenbelt, Md., and the Massachusetts
Institute of Technology present their findings in the March 2001 issue of
the Bulletin of the American Meteorological Society.

"High clouds over the western tropical Pacific Ocean seem to systematically
decrease when sea surface temperatures are higher," says Arthur Y. Hou of
Goddard's Data Assimilation Office. Hou and co-authors Ming-Dah Chou of
Goddard's Climate and Radiation Branch and Richard S. Lindzen of MIT
analyzed satellite observations over the vast ocean region, which stretches
from Australia and Japan nearly to the Hawaiian Islands.

The researchers compare this inverse relationship to the eye's iris, which
opens and closes to counter changes in light intensity. The "adaptive
infrared iris" of cirrus clouds opens and closes to permit the release of
infrared energy, thus resisting warmer tropical sea surface temperatures,
which occur naturally and are predicted to increase as the result of climate
change.

The study compares detailed daily observations of cloud cover from Japan's
GMS-5 Geostationary Meteorological Satellite with sea surface temperature
data from the U. S. National Weather Service's National Centers for
Environmental Prediction over a 20-month period (January 1998 to August
1999). The researchers found that cumulus cloud towers produced fewer cirrus
clouds when they moved over warmer ocean regions. For each degree Celsius
rise in ocean surface temperature, the ratio of cirrus cloud area to cumulus
cloud area over the ocean dropped 17-27 percent. The observed range of
surface temperatures beneath the clouds varied by 6.3 degrees Fahrenheit
(3.5 degrees C).
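As a quick numerical illustration of the quoted sensitivity (the baseline ratio of 1.0 and the one-degree warming are hypothetical values; the 17-27 percent per-degree range is the figure reported above):

```python
def cirrus_cumulus_ratio(baseline_ratio: float, delta_t_c: float,
                         fraction_drop_per_deg: float) -> float:
    """Scale the cirrus/cumulus area ratio down linearly with a rise in
    sea surface temperature, using the per-degree fractional drop."""
    return baseline_ratio * (1.0 - fraction_drop_per_deg * delta_t_c)

# A hypothetical baseline ratio of 1.0 after 1 degree C of warming, at the
# low (17%) and high (27%) ends of the reported sensitivity:
low = cirrus_cumulus_ratio(1.0, 1.0, 0.17)
high = cirrus_cumulus_ratio(1.0, 1.0, 0.27)
print(round(low, 2), round(high, 2))  # 0.83 0.73
```

The linear scaling is only a local approximation; extrapolating it across the full 3.5-degree observed range would clearly overstate the effect.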

The authors propose that higher ocean surface temperatures directly cause
the decline in cirrus clouds by changing the dynamics of cloud formation and
rainfall. Cirrus clouds -- high-altitude clouds of ice crystals -- typically
form as a byproduct of the life cycle of cumulus towers created by rising
updrafts of heated, moist air. As these cumulus convective clouds grow
taller, cloud water droplets collide and combine into raindrops and fall out
of the cloud or continue to rise until they freeze into ice crystals and
form cirrus clouds.

"With warmer sea surface temperatures beneath the cloud, the coalescence
process that produces precipitation becomes more efficient," explains
Lindzen. "More of the cloud droplets form raindrops and fewer are left in
the cloud to form ice crystals. As a result, the area of cirrus cloud is
reduced."

Clouds play a critical and complicated role in regulating the temperature of
the Earth. Thick, bright, watery clouds like cumulus shield the atmosphere
from incoming solar radiation by reflecting much of it back into space.
Thin, icy cirrus clouds are poor sunshields but very efficient insulators
that trap energy rising from the Earth's warmed surface. A decrease in
cirrus cloud area would have a cooling effect by allowing more heat energy,
or infrared radiation, to leave the planet.

If this "iris effect" is found to be a general process active in tropical
oceans around the world, the Earth may be much less sensitive to the warming
effects of such influences as rising greenhouse gas concentrations in the
atmosphere. The researchers estimate that this effect could cut by
two-thirds the projected increase in global temperatures initiated by a
doubling of carbon dioxide in the atmosphere.

The American Meteorological Society is the nation's leading professional
society for scientists in the atmospheric, oceanic, and related sciences.

MODERATOR'S NOTE: Interesting, isn't it? Or why do you think the joint press
release by NASA and the American Meteorological Society has been completely
ignored by environmental journalists?


From Environmental News Network, 1 March 2001

Leading Climate Scientist Disputes Global Warming Theories - Work of United
Nation's Climate Panel Criticized as Misrepresenting Work of Scientists

From Competitive Enterprise Institute
Thursday, March 01, 2001

WASHINGTON, D.C. - In a briefing before congressional staff, members of the
press and scientists, Professor Richard S. Lindzen of the Massachusetts
Institute of Technology assailed the politically driven work of the United
Nations' Intergovernmental Panel on Climate Change (IPCC), specifically its
pattern of misrepresenting the work of its contributing scientists to fit a
preconceived agenda.

"The whole notion of a scientific consensus has been contrived to disguise
the genuine disagreement among scientists on a number of different issues.
Major media outlets announced, incorrectly, as early as 1988 that the issue
of global warming was scientifically settled, and the IPCC has been spending
over a decade trying desperately to make their reports conform to this
belief," said Lindzen. "To think that hundreds of scientists could be in
full agreement in dozens of separate disciplines is ridiculous. The aura of
certainty with which the IPCC's conclusions are being reported is clearly
more a matter of politics than science."

Advocates of the theory of catastrophic global warming have managed to
manipulate results of climate science for years now, using a variety of
strategies to mislead the public and the media. Strategies that the IPCC has
used include issuing a summary that misrepresents the contents of the full
report, using language that means different things to scientists and laymen,
and exploiting public ignorance - and the embarrassment about this ignorance
- over scientific matters.

The briefing, titled "The Search for Scientific Consensus or The IPCC and
the One-Handed Scientists," was hosted by the Cooler Heads Coalition, a
subgroup of the National Consumers Coalition that focuses on global warming
issues. Members of the Cooler Heads Coalition include the Competitive
Enterprise Institute, Consumer Alert, the 60 Plus Association, and Frontiers
of Freedom. For more information, find the Cooler Heads Coalition online at

Richard S. Lindzen is the Alfred P. Sloan Professor of Meteorology at the
Massachusetts Institute of Technology. He is a lead co-author of Chapter 7
of the IPCC's forthcoming Third Assessment Report. He is also a consultant
to the Global Modeling and Simulation Group at NASA's Goddard Space Flight
Center and a Distinguished Visiting Scientist at California Institute of
Technology's Jet Propulsion Laboratory. One of the world's foremost
atmospheric scientists, Dr. Lindzen received his AB, SM, and PhD degrees
from Harvard University.

CEI, a non-profit, non-partisan public policy group founded in 1984, is
dedicated to the principles of free enterprise and limited government. For
more information, please contact Richard Morrison, associate director of
media relations, at or 202-331-1010, ext. 266.

For more information, contact:
Richard Morrison
Competitive Enterprise Institute
Web site:


From CO2 Science, 1 March 2001

Dramatic Changes in Climate Model Predictions of Sea Level Rise Due to
CO2-Induced Global Warming
Wild, M. and Ohmura, A.  2000.  Change in mass balance of polar ice sheets
and sea level from high-resolution GCM simulations of greenhouse warming.
Annals of Glaciology 30: 197-203.

What was done
The authors studied the mass balance of the Greenland and Antarctic ice
sheets using two general circulation models (GCMs) developed at the Max
Planck Institute for Meteorology, Hamburg, Germany: the older ECHAM3 GCM and
its new and improved replacement, the ECHAM4 GCM. Mass balance calculations
were made by each model for both present-day and doubled atmospheric CO2
concentrations.

What was learned
Under the doubled atmospheric CO2 scenario, the mass balance of the
Greenland ice sheet was projected to be negative in both models, indicative
of a net reduction in the size of the ice sheet. However, the newer ECHAM4
mass balance results for Greenland were "significantly smaller" (-63 mm per
year for the ECHAM4 model vs. -229 mm per year for the ECHAM3 model). The
two models were in close agreement in their mass balance projections for the
Antarctic ice sheet, however, where the ECHAM4 and ECHAM3 models projected
net increases in ice sheet growth of +22 and +23 mm per year, respectively.
Furthermore, at the time of doubled CO2, the authors state that the ECHAM3
model projects a sea level rise "close to zero" (0.2 mm per year), while the
ECHAM4 model projects a sea level fall of 0.6 mm per year.

What it means
With the introduction of the new ECHAM4 GCM, another climate alarmist
prediction melts away.  Whereas the older ECHAM3 GCM projected a sea level
rise from polar ice sheet wastage under a doubling of atmospheric CO2, the
newer ECHAM4 model projects a sea level decline. This is particularly good
news for people living in low-lying coastal regions and on oceanic islands,
who have been incessantly bombarded over the past few decades with climate
alarmist-inspired stories of widespread flooding as a consequence of
CO2-global warming causing sea levels to rise.
Copyright 2001.  Center for the Study of Carbon Dioxide and Global Change


From John L Daly, 27 February 2001

Another scare story, this time about loss of snow from the top of Mount
Kilimanjaro in Tanzania, a 5,900 metre mountain sitting almost on the
Equator. It is the mountain's height which allows snow and ice to accumulate
on its summit in spite of being in the tropics.

"The famous snows of Kilimanjaro are rapidly receding, scientists reported
this past weekend ..." "According to Lonnie Thompson, a professor of
geological sciences, at least one-third of the massive ice field atop
Tanzania's Mount Kilimanjaro in Africa has melted over the past dozen
years." (Environment News Network, 22 Feb).

Here are three views of Mount Kilimanjaro taken from the same angle, 21
years apart (sources are linked in the original): Kilimanjaro in 1976; a bit
more snow around in 1983; and 21 years later.

In addition to the transient snow which comes and goes near the crater
summit of this old volcano, there is also a large ice field with glaciers
reaching down the slopes. These glaciers have been receding during the 20th century.

Also during the 20th century, the sun has been getting hotter, hotter than
at any time since solar observations began around 1600 AD, a particularly
significant factor given Kilimanjaro's location near the Equator. Glaciers
respond very slowly to such changes, but Thompson is in no doubt that
'climate change' (UN code for human-induced warming) is responsible, even
though the warmer sun would be a more than sufficient explanation.

So the question is - are the Kilimanjaro glaciers getting their warmth from
the sun, or from the CO2 greenhouse effect? The sun is a primary energy
source, the greenhouse effect being secondary. But Thompson and his UN
colleagues are quick to blame the secondary source without any evidence to
support such attribution.

We know the sun has warmed in the 20th century. That is an indisputable
fact. But has the atmosphere been warming in the vicinity of Kilimanjaro,
and most particularly has the atmosphere been warming at that altitude?
Thompson's claim would suggest an atmospheric warming was at work.
Fortunately we have a means to determine atmospheric temperature at that
location and at that altitude - those very inconvenient satellites again.

The above graph is a satellite-measured temperature trace from January 1979
to January 2001, for 3.75°S 36.25°E, the same location as Kilimanjaro itself.
More importantly, the satellites record temperatures in the free atmosphere
between 1,000 and 8,000 metres altitude, Kilimanjaro being at 5,900 metres,
right within the measured altitude range. Not only has there been no overall
warming, but the coldest month in the entire series is actually the latest one.

Clearly, if one third of the glaciers have melted off during the last dozen
years as Thompson says, it has certainly not been caused by atmospheric
warming. That leaves only the sun, the obvious candidate anyway, and only
ideological commitment to the UN-IPCC policies could blame man for what is
obviously a natural process.


From CO2 Science, 1 March 2001

The Thinning of West Antarctica's Pine Island Glacier: How Serious Is It?

Shepherd, A., Wingham, D.J., Mansley, J.A.D. and Corr, H.F.J.  2001.  Inland
thinning of Pine Island Glacier, West Antarctica.  Science 291: 862-864.

In highlighting Shepherd et al.'s article in This Week in Science, Science
magazine's Supervisory Senior Editor Phillip D. Szuromi begins his brief
"Thinning Ice" summary (p. 785) by stating that "the West Antarctic Ice
Sheet contains enough water to raise sea level by 5 meters if it were to
melt completely."  This being the case, and in view of the fact that most
Antarctic ice discharge is provided by ice streams, he says that
"glaciologists (and the rest of us) would like to know sooner rather than
later if ice loss by streaming is accelerating." Sounds like a pretty
serious problem, alright.  Guess we better review the article and see what
the answer is.

What was done
The authors used satellite altimetry and interferometry to determine the
rate of change of ice thickness of the entire Pine Island Glacier drainage
basin between 1992 and 1999.

What was learned
It was determined that the grounded glacier thinned by up to 1.6 meters per
year between 1992 and 1999.

What it means
The authors note, first of all, that "the thinning cannot be explained by
short-term variability in accumulation and must result from glacier
dynamics." And since glacier dynamics typically respond to phenomena
operating on time scales of hundreds to thousands of years, this observation
would argue against 20th century warming being a primary cause of the
thinning. Shepherd et al. additionally say they could "detect no change in
the rate of ice thinning across the glacier over a 7-year period," which
also suggests that a long-term phenomenon of considerable inertia must be at
work in this particular situation.

But what if the rate of glacier thinning, which sounds pretty dramatic,
continues unabated? The authors state that "if the trunk continues to lose
mass at the present rate it will be entirely afloat within 600 years." And
if that happens? They say they "estimate the net contribution to eustatic
sea level to be 6 mm."

So let's see. That means that each century of the foreseeable future, we can
expect global sea level to rise by approximately one millimeter - that's one
one-thousandth of a meter, or about the thickness of a common paper clip.
Yes, it's a catastrophe alright ... a catastrophe of ridiculous hype.
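The paper-clip comparison is easy to verify; a minimal sketch, using only the two figures quoted from Shepherd et al. (6 mm of eustatic sea level, ~600 years until the trunk is afloat):

```python
# Back-of-envelope check of the Pine Island Glacier numbers quoted above.
total_rise_mm = 6.0   # net eustatic sea level contribution (Shepherd et al.)
years = 600.0         # time until the trunk is entirely afloat at present rate

rate_per_year = total_rise_mm / years      # mm per year
rate_per_century = rate_per_year * 100.0   # mm per century

print(f"{rate_per_year:.2f} mm/yr, {rate_per_century:.0f} mm/century")
# -> 0.01 mm/yr, 1 mm/century: roughly the thickness of a paper clip
```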
Copyright 2001.  Center for the Study of Carbon Dioxide and Global Change


From CO2 Science, 1 March 2001

Overpeck, J. and Webb, R.  2000.  Nonglacial rapid climate events: Past and
future.  Proceedings of the National Academy of Sciences USA 97: 1335-1338.

What was done
The authors give a brief overview of what we know about climate variability
during the past several thousand years of relative warmth that has
characterized the current interglacial, i.e., the Holocene.

What was learned
In the category of ENSO/Pacific Variability, Overpeck and Webb cite evidence
indicating that shifts in ENSO frequency occur at annual, interannual and
multidecadal intervals, providing "evidence that ENSO may change in ways
that we do not yet understand."  In fact, they note that data from corals
seem to suggest that "interannual ENSO variability, as we now know it, was
substantially reduced, or perhaps even absent," during the middle of the Holocene.

In the category of African-Asian Monsoon Variability, they cite evidence
indicating that large abrupt changes in monsoon moisture availability have
occurred multiple times throughout the past several thousand years, although
"a lack of research prevents precise reconstruction, explanation, or
modeling of these changes."  And in the category of North American Drought
Variability, the authors note that "droughts of the 20th century were
relatively minor compared with those in the past," which, they say, "opens
up the possibility that future droughts may be much greater as well."

What it means
Clearly, as this article well indicates, all sorts of climatic parameters
have experienced variations over the course of the Holocene that have far
exceeded variations that have occurred over the period of the Industrial
Revolution.  Equally clearly, these significant climatic perturbations,
which preceded the development of modern civilization, could not have been
caused by human activities.  Hence, it is only to be expected that we will
experience similar erratic climatic behavior in the future, which will also
not be caused by man.  Yet, as soon as the climate undergoes such changes,
the climate alarmists will point to them as proof of their predictions, when
in reality they are no proof at all.

This is the great danger we face: a faulty political-based rush to judgment
about the merits of the CO2 greenhouse effect theory based on some future
extreme climatic event that is not in any way, shape or form related to
anthropogenic CO2 emissions.  And in view of the hold that the climate
alarmist crowd already has on the minds (or is it the aspirations?) of
prominent politicians worldwide, the wrong decision would almost certainly
be reached. Perhaps we better pray for good weather for more than the usual reasons.
Copyright 2001.  Center for the Study of Carbon Dioxide and Global Change


From World Climate Report, 5 March 2001

The U.N. Intergovernmental Panel on Climate Change has made much of Mann,
Bradley, and Hughes' reconstruction of global temperatures for the past
1,000 years, prominently featuring it in its new "Policymakers' Summary."
Starting around 1900, temperatures start soaring so abruptly that the figure
resembles a "hockey stick" (see Figure 1). This visually stunning
presentation of historical temperatures shows current readings far exceeding
anything seen at least since the Battle of Hastings. Naturally, the stick's
blade has been linked by some to the burning of fossil fuels.

But paleoclimate reconstructions, more so than just about any other climate
data, are subject to interpretation. All proxy sources-corals, tree rings,
ice and sediment cores, and so on-have inherent assumptions and varying
degrees of accuracy. The Mann group properly assigns error bars to their
reconstruction; for example, they are much less confident in their
reconstructions from 1,000 years ago than they are in their 20th-century estimates.

Still, the IPCC's overreliance on Mann's hockey stick has been attacked in
some circles because it lacks several well-known climate events. From 1,000
A.D. to about 1900, the Mann record actually shows a fairly steady 900-year
global cooling. The hockey stick's very straightness-the lack of a strong
Medieval Warm Period (from about 800-1200 A.D.) followed by a Little Ice
Age-is exactly where the problem lies. Mann and colleagues have responded
that those events were not, in fact, global, but rather regional temperature
changes, charging that thinking of them as global reflects our putting too
much faith in records from places from which we have better historical
documentation (western Europe, for instance).

Figure 1. Mann's famous "hockey stick" graph showing reconstructed
temperatures for the past 1,000 years. The IPCC's scenario of the 5.8°C
warming by 2100 is off the charts. Literally. That's it up there near the
right-hand corner of this page.

So how accurate is the hockey stick? In a recent article in Science,
well-known geoscientist Wallace Broecker chimed in with an alternate
viewpoint. Broecker makes the case that climate changes similar to the
hockey stick's blade have taken place about every 1,500 years throughout the
present interglacial (the current warm period between major glacial
advances; this one's known as the Holocene); that there was a similar
climate shift in the transition from the Medieval Warm Period to the Little
Ice Age; and that the Medieval Warm Period was global and not regional.

The problem in reconstructing Holocene temperatures is that the long-term
fluctuations are on the order of 1°C, so proxies must at least be accurate
to within 0.5°C to be valuable over millennial time scales. According to
Broecker, that requirement limits the options for proxy temperature data to
two sources: mountain snowline elevations and boreholes (tree rings, corals,
and so on don't make the cut).

Using radiocarbon dates from wood and peat samples in the Swiss Alps as well
as other sediment samples, several authors have reconstructed Holocene
temperature peaks and found that they recur on the order of every 1,500
years. What's more, borehole temperature reconstructions from Greenland
correspond to Swiss Alp radiocarbon reconstructions and match historical
documentation of the Vikings' colonization and ultimate abandonment of their
Greenland settlements. Other circumstantial evidence is available from
elsewhere around the globe.

In other words, part of the recent temperature increase (the hockey stick's
blade) is actually part of some longer-term warming/cooling cycle.

While this notion of a 1,500-year global temperature cycle may be a
reasonable hypothesis, it's meaningless without some explanation as to why
we should expect it to occur. Aside from day-night and seasonal cycles, all
other periodic climatic events, including El Niños, are simply not
predictable. Broecker's explanation is linked to the thermohaline
circulation and is similar to Latif's hypothesis, which is related to the
saltiness of north Atlantic surface waters (see WCR, Vol. 6, No. 11, for
more details).

But let's face reality. No one really knows what global temperatures were
1,000 years ago. We can barely agree on what they are now, when people are
taking thousands of observations across the planet on a daily basis. Quite
honestly, it's unlikely that we will ever know what the global temperature
was in the 11th century to any degree of useful scientific accuracy.

Beyond academic circles, scientific accuracy seems to be irrelevant anyway.
The usefulness of scientific data for swinging public opinion would appear
to be all that matters. Take the much-ballyhooed "storyline" from the IPCC
"Policymakers' Summary" projecting temperatures rising almost 6°C by the
year 2100. We would add that point to the hockey stick graph, but it's off
the scale. In fact, it nearly plots off this page. Nope, the plot point at
the top of this column wasn't an error: That's where the point would
actually plot on the graph! By 2100, the "hockey stick" will be a machete!
(Figure 1)

Pessimists will look at that projection, see a temperature that's far beyond
anything we've observed in the history of civilized society, and spend the
rest of their lives fretting about the planet we're wrecking through selfish
capitalism and the mess we're leaving for their grandchildren. But we doubt
many pessimists are reading our little report.

Optimists looking at the 2100 temperature projection see something that is,
by its very nature, preposterous, and continue to enjoy life safe in the
knowledge that, in the 30-year history of the environmentalist movement, not
a single projection of regional environmental disaster has ever come true.
Not one. Ever.

It's likely that efforts to improve reconstructions of global temperatures
from many data sources will continue over the next 10 years or more, and
scientists will continue to argue over the merits of various theories to
describe past temperature variations. This debate is healthy, it's welcome,
and it's part of scientific advancement. We can only hope that the misuse of
science for political purposes loses momentum with Chicken Little's dying breath.


Broecker, W.S., 2001. Was the Medieval Warm Period global? Science, 291,

Latif, M., 2000. Tropical Pacific/Atlantic ocean interactions at
multi-decadal time scales, Geophysical Research Letters, 28, 539-542.

Mann, M., et al., 1999. Northern hemisphere temperature during the past
millennium: Inferences, uncertainties and limitations. Geophysical Research
Letters, 26, 759-762.

IPCC, 2001. Summary for policymakers, Third Assessment Report.


From George C Marshall Institute

S. Baliunas and W. Soon, Harvard-Smithsonian Center for Astrophysics, 60
Garden Street, Cambridge, MA 02138.

No evidence can be found for catastrophic global warming from the recent
rise in the air's carbon dioxide content as a result of human activities.
The elevated carbon dioxide concentration in the air has, however, had a
positive impact on plant growth.


The earth is warmer than it would be in the absence of the greenhouse gases
in the atmosphere. Most of the greenhouse effect is natural and caused
predominantly by water vapor and water droplets in clouds, followed in
diminishing order of importance by carbon dioxide, methane and other minor
gases in the air. Since the Industrial Revolution, carbon dioxide
concentration has been increasing in the air owing to human actions like
coal combustion and deforestation,[1] with a rapid rise in the last several
decades. The increase in the air's carbon dioxide would suggest a rising
global temperature, all other things being equal. However, it is difficult
to calculate the response of the climate system to the small amount of
energy added by the presence of extra carbon dioxide in the air. The reason
is that climate is a complex, dynamical and non-linear system, with positive
and negative feedbacks, and knowledge of the causes and responses of climate
change is presently insufficient to give an accurate response.



What's wrong with the computer forecasts? At the heart of the climate
scenarios is the calculation of the response of the climate system to energy
input from increases in minor greenhouse gases. The most sophisticated
computer program would have to track 5 million climate parameters and their
interactions, a feat ideally requiring 10^19 degrees of freedom.[21] The
computer to carry out such a calculation does not yet exist. More
importantly, the physics of many climate interactions and measured values of
many parameters are poor. Furthermore, it is certain that not all the causes
of natural climate change, e.g., El Niño-Southern Oscillation, or changes of
the sun, are understood.

The poor simulation outcomes, as judged by the comparison with climate
observations, highlight the fact that major physical processes are
incorrectly modeled or completely neglected. The simulations calculate the
effects of a 2% perturbation in the energy budget of the climate system (+4
Watts per square meter for a doubling of the carbon dioxide concentration in
the atmosphere), in the face of uncertainties of 10% in the energy budget
(compared to a total energy of ~242 Watts per square meter of incident
sunlight at the top of the troposphere).[22] It does not seem possible to
compute accurately the response of the climate to an added warming expected
from doubling carbon dioxide when the unknowns in the climate physics are
more than an order of magnitude larger. Moreover, the simulations have
positive feedbacks that are perhaps unjustified (e.g., upper tropospheric
water vapor) and so yield an artificial warming.[23]
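The comparison in that paragraph can be checked with the two figures it quotes (the ~242 W/m² budget and the 4 W/m² doubled-CO2 forcing); everything else below is simple arithmetic:

```python
# Sanity check of the energy-budget comparison quoted above.
budget_w_m2 = 242.0   # ~incident sunlight at the top of the troposphere
forcing_w_m2 = 4.0    # forcing from a doubling of atmospheric CO2

perturbation = forcing_w_m2 / budget_w_m2   # fraction of the total budget
uncertainty_w_m2 = 0.10 * budget_w_m2       # the stated 10% uncertainty

print(f"perturbation: {perturbation:.1%}")           # roughly the 2% quoted
print(f"uncertainty: {uncertainty_w_m2:.1f} W/m^2")  # dwarfs the 4 W/m^2 signal
```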

The warming 100 years from now in the absence of any other effects except
that of doubling the carbon dioxide content in the air can be estimated by
scaling the observed temperature response to the presence of increased
atmospheric carbon dioxide concentration in the last several decades. The
warming from doubling the air's carbon dioxide content should be less than
0.5°C, an amount within the bounds of observed, natural climate change. A
small, gradual warming should be not only tolerable but also beneficial, if
the record of human history, climate change and the environment is any
guide. [24]

It has become common to see impact studies giving catastrophic consequences
of global warming based on the flawed computer scenarios. For example, it is
incorrectly believed that diseases like malaria will spread to the populated
countries of the high Northern latitudes as a result of warmer temperatures
there. But malaria is endemic to those regions, and was common, especially
during the colder temperatures of the Little Ice Age.[25] More importantly,
the spread of diseases like malaria in economically advanced nations is
increasingly controlled by modern medicine and technology.

Is carbon dioxide a pollutant? No, it is essential to life on earth. Based
on extensive evidence from agricultural research on enhanced carbon dioxide
environments both in the field and in labs, carbon dioxide increases should
cause many plants to grow more vigorously and quickly.[26] The reason is
that most plants evolved under and so are better adapted to
higher-than-present atmospheric carbon dioxide concentrations. In
experiments doubling the air's carbon dioxide content, the productivity of
most herbaceous plants rises 30-50%, while the growth of woody plants rises
even more. The impacts of enhanced plant growth and related soil changes may
even provide a strong quenching effect of warming from carbon dioxide. The
vegetation feedbacks as a result of carbon dioxide fertilization have yet to
be correctly incorporated in the climate simulations.[27]

Partly as a result of elevated carbon dioxide in the air and more efficient
agricultural practices, the U.S. has experienced in recent decades enhanced
growth in vegetation. The acceleration of plant growth is of a magnitude
that the U.S., despite its energy use and resultant prosperity, may not be a
net emitter of carbon.[28]

There is no doubt about the improvement of the human condition through the
unfettered access to energy. Energy use may also produce local unwanted
pollutants as a byproduct. Those sources of true environmental pollution may
be tolerated or mitigated, based on rational considerations of the risks of
pollutants and benefits of energy use. But in the case of recent fears of
anthropogenic carbon dioxide, science indicates at most a little warming and
certainly better plant growth owing to the projected future increase of
carbon dioxide content in the air. An optimal warming and enhanced plant
growth should be of great benefit to mankind and the environment. [...]


From Greening Earth Society, 5 March 2001

Most scientists work on the notion that effects are created by causes, and
that understanding the cause allows one to predict the effect.

When applied to global warming, that premise seems pretty straightforward.
We burn fossil fuels, we burn forests, we grow trees, and therefore we
slightly change the composition of the atmosphere. In doing so we alter the
rate at which energy escapes to space. We then throw this net energy
difference into a huge computer model and it predicts how much the
horizontal and vertical distribution of the earth's temperature will change.
We next throw those results into a weather forecasting model and out pops an
altered frequency of hot days, cold days, tornadoes, and cat-scratch fever.
And of course, it is that kind of output that affects our lives.

Like the currently popular commercial says, "It's the output, Bucko!" But the
output is a function of the input-in this case, the input is a chain that
begins with what the United Nations Intergovernmental Panel on Climate
Change (IPCC) deems pernicious economic activity and the output is today's
weather. In between are calculations on the physics of atmospheric changes,
assumptions about the way climate works, and a magical translator from
climate to weather. One weak link in this formidably complex chain of
causation and the whole thing goes kerblooey.

Climate scientist James Hansen is always fond of saying that something like
the Kyoto Protocol will pass once the person he calls "the man in the
street" wakes up and smells the climate. So, here's a pedestrian example of
what we're talking about:

Suppose we are really confident of how burnt fossil fuels and
anthropogenerated dust distribute themselves between the atmosphere and the
biosphere (don't worry, we aren't). That would mean that we could explain,
say, 90 percent (0.90) of the variation between activities such as farming
and automobile driving and the concentrations of dust and carbon dioxide in
the air.

Then we input these factors into a climate model-those vehicles that both
the IPCC and the recent U.S. National Assessment (USNA) use to get from
human activity to climate disasters. We can test to see how well these
models predict the behavior of climate in recent decades. Over the United
States, the USNA models had an explanatory power of 0.00. Actually, they
achieved a scientific milestone, performing worse than a table of random
numbers. Think about that for a moment: That "negative knowledge" is akin to
people doing worse than chance on a multiple-choice test. (It has been
documented that some student athletes actually achieved that goal in a basic
atmospheric science course a few years ago. Perhaps they'll be asked to work
on the next National Assessment, due in 2004.)

In this example, our net understanding of how we go from economic activity
to the climate of the last few decades is 0.90 times 0.00, or 0.0000. As
they say, it's nada, ziparooney, not so hot.
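The multiplication in that example is the whole argument: as the article frames it, explanatory powers along a causal chain multiply, so one dead link zeroes everything downstream. A toy sketch (the 0.90 is the article's hypothetical figure; the 0.00 is the USNA skill score it cites):

```python
# Chained explanatory power: the R^2 of each link multiplies down the chain.
emissions_to_concentrations = 0.90  # hypothetical: emissions -> CO2/dust levels
concentrations_to_climate = 0.00    # USNA models' skill over the U.S., per the text

net_understanding = emissions_to_concentrations * concentrations_to_climate
print(net_understanding)  # -> 0.0: one dead link zeroes the chain
```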

The fact is that the IPCC's global climate models perform better than USNA's
regional ones. Most climate models come with an implicit
skull-and-crossbones statement concerning their reliability: "Warning: The
Scientist General has determined that use of global models for regional
assessments may be worse than hazarding a guess."

So given the uncertainty about how well the models work, even under ideal
circumstances, scientists must understand inputs very precisely if they are
to have any faith at all in the model output.

And that's where the IPCC really showed its scientific chutzpah: On page 14
of the "Policymakers' Summary" of their new "scientific assessment,"
approved by a team of U.S. administrators minutes before Bill Clinton left
office, they show the "Level of Scientific Understanding" (LOSU) for the
various factors that can change temperature through the atmosphere (Figure
1).

Figure 1. Relative warming and "Level of Scientific Understanding" (LOSU)
for the various climate change factors described in the new IPCC report.
Twelve different factors are involved. The LOSU for eight of the 12 (67
percent) is given as "very low." For 75 percent, it's "low to very low." For
one (8 percent) it is "high." On a scale from "very low" to "very high,"
ranked one to five, the average confidence is 1.7, or slightly below "low."
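That 1.7 average can be reproduced from the tally just given. The eight "very low," the one additional "low" (making 75 percent low-or-below), and the one "high" come from the text; placing the remaining two factors at "medium" is an assumption made here to complete the count:

```python
# Reproducing the "average confidence" of 1.7 from the LOSU tally above.
# Scale: very low=1, low=2, medium=3, high=4, very high=5.
# 8 at "very low", 1 at "low" (9 of 12 = 75% low-or-below), 1 at "high";
# the remaining 2 are assumed here to sit at "medium".
losu_scores = [1] * 8 + [2] + [3, 3] + [4]

average = sum(losu_scores) / len(losu_scores)
print(round(average, 1))  # -> 1.7, slightly below "low"
```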

"That isn't fair," responds the Washington climate cabal, which has become
malignantly bipartisan of late. "After all, we're highly confident of some
large effects and nonconfident of some smaller ones."

That really isn't true. The potential range of nonconfident cooling given by
sulfate aerosols is almost as large as the confident range of warming from
carbon dioxide and other greenhouse gases. That has to be correct, because
it is the only way to fudge around the fact that the earth's tropospheric
temperature has warmed only about 10 percent as much as greenhouse-only
models forecast, averaged over the last quarter-century.

In service of the protesting cabal, we have added up the ranges of radiative
forcing by combining the "high, best-estimate, and low" estimates for the
different classes of LOSU (which range from "high" to "very low"). In two
cases, we had to supply an average value, because the IPCC was too
circumspect to do so.

Like the ad says, Bucko, it's the output, which in this case is the
right-hand grouping in Figure 2. The range of temperature change ultimately
caused by today's atmospheric alterations varies from a warming of 3.2°C to
a cooling of 1.8°C.

Figure 2. The range of temperature change based on the IPCC's high,
best-estimate, and low values for the elements contained within each level
of scientific uncertainty. Those elements comprise the input factors
responsible for changing atmospheric energy balance and ultimately daily
weather. Notice that the temperature range totaled across all levels of
scientific uncertainty varies from a warming of 3.2°C to a cooling of 1.8°C.
That is 10 times the observed change in the last 100 years and exceeds the
difference between an ice-age and warm-age earth.

The bad news is that this range for the expected net temperature change for
today's atmosphere is 5.0C, or larger than the mean difference between the
depths of the ice ages and the warmest interglacial, and ten times the
observed change as all these atmospheric alterations took place. The good
news for IPCC and the USNA is that the range is so large that no forecast
can rightly be called wrong!
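
The arithmetic behind those figures takes two lines to check. The 0.5C value used for the observed century-scale change is inferred from the "ten times the observed change" claim in the text, not stated there directly.

```python
# Endpoints of the IPCC-derived temperature-change range quoted above.
warming_high = 3.2   # degrees C of warming
cooling_low = -1.8   # degrees C of cooling (negative)

range_c = warming_high - cooling_low
print(range_c)        # 5.0 degrees C, the full span of the "forecast"

# Observed change over the last century, inferred from "ten times the
# observed change" in the text (an assumption, not a quoted figure).
observed_c = 0.5
print(range_c / observed_c)  # 10.0
```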

We now call your attention to Table 1, which is lifted from page 9 of the
same "Policymakers' Summary." The right-hand column gives the confidence in
these weather changes in the next century. How on God's getting-greener
earth do we get from an average LOSU of "low-to-very-low" for the various
factors that cause climate change, to an average of "very likely" for each
of the associated weather changes?

Table 1. Estimates of confidence in observed and projected changes in
extreme weather and climate events, reproduced from the IPCC's Summary for
Policymakers, of the Third Assessment Report, page 9. The right-hand column
shows the confidence level for projected changes during the 21st century.
Notice how confident the IPCC is in their output, despite a very high range
of uncertainty as to the inputs (Figure 1).

Logically, you simply can't get there from here. You cannot negotiate a
chain of causation with any confidence when one of the links is so weak.
Instead, the IPCC assumes the projected warming (despite our averaging a
"low" scientific understanding of its causes), and then attempts to relate
it to future weather.

Last issue, we noted a recent Nature article that invalidated much of Figure
1 a mere three weeks after it was approved in Shanghai. "Assuming" a large
warming, in spite of "very low" confidence as to its cause, will be equally
hazardous to IPCC's ultimate credibility.

Oh, yes: did we mention there are no climate models that can really relate
most local weather to global climate changes? As we said, it's the output,
Bucko! Stay tuned.


From World Climate Report, 5 March 2001

Press reports coming out of Siberia are referring to this winter as the
harshest in decades, and some hint at it being the coldest winter in the
history of the observed record there.

During a cold wave that swept across the region in mid-January, there were
reports of temperatures as low as -94F in the Kemerovo region, an area
located about 2,000 miles east of Moscow.

Needless to say, the suffering there is widespread (even more so than
usual), with a far greater than normal number of frostbite cases being
reported at the local hospital.

The "official" all-time record for the lowest temperature ever recorded in
the Northern Hemisphere is -90F, set at Verkhoyansk, Siberia, on Feb. 5 and
7, 1892. That record was later tied at Oymyakon, Siberia, on Feb. 6, 1933.
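
For readers who think in Celsius, the Fahrenheit readings quoted above convert via the standard formula; the conversion itself is ours, not the article's.

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# The readings discussed in this piece: the Siberian report, the
# Northern Hemisphere record, and the global record at Vostok.
for f in (-94, -90, -129):
    print(f"{f}F = {f_to_c(f):.1f}C")
# -94F = -70.0C, -90F = -67.8C, -129F = -89.4C
```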

The exact status of the -94F recorded this year is unclear. Was it observed
in such a manner that it will become "official"? That is, were proper
standards observed for instrument type, location, and exposure? It could
take some time before such questions are settled. Nevertheless, it was
very, very, very cold.

If the -94 reading does hold up, that's big news. Really big.

What we'd have here is one of the most remarkable records possible. Not the
coldest night of the year. Not the coldest temperature ever recorded in New
York City, the state of Maine, or even North America. We're talking the
lowest temperature ever recorded in the entire Northern Hemisphere. That's
one heckuva record. That's only one step away from the ultimate record: the
absolute lowest global temperature ever recorded (-129F at Vostok,
Antarctica, on July 21, 1983).

So if -94F is it, even after more than 100 years of doing our best to raise
the quality of life in central Siberia by burning fossil fuels and enhancing
the earth's greenhouse effect, we have been unable to prevent all-time
record low temperatures from occurring.

In fact, without throwing any anthropogenic warming into the mix, some would
claim we might be seeing temperatures up there dropping to around -100F. Of
course, when you're that cold, who's counting?

But seriously, no one could argue that record cold temperatures would be
expected under a climate dominated by an enhanced greenhouse effect.
Temperatures this low only go to prove that natural variability is still
largely responsible for today's weather. 


From Honolulu Star-Bulletin, 3 March 2001

Public concerns table Kona global-warming experiment

By Rod Thompson

KEAHOLE POINT, North Kona -- Faced with public opposition, the Natural
Energy Laboratory of Hawaii Authority has voted to exclude an experiment
with carbon dioxide from its Kona waters.

The laboratory's board of directors earlier gave preliminary approval to an
experiment on carbon dioxide "sequestration," meaning locking carbon dioxide
in ocean water. The experiment would help see if the method could be used to
slow carbon dioxide buildup in the atmosphere, thought to cause global
warming.

Authority Executive Director Jeff Smith said the experiment was "obviously
well thought out."

But the board voted against it because of concerns about its scientific
merits, legal ramifications, a change in scope, public opposition, and
opposition by the Keahole Point Tenants Association, he said.

Gerard Nihous, the scientist in charge of the experiment, conceded, "We have
faced a mountain of public opposition." He does not believe the opposition
was based on good science.

For example, opponents quoted the Union of Concerned Scientists, which said
ocean sequestration is untested and that it must be carefully studied, he
said. That is exactly why the research group wants tests, Nihous said. And
scientists want careful tests because that is the only way to get accurate
results, he said.

The Natural Energy Laboratory of Hawaii Authority exclusion applies only to
a patch of ocean at Keahole Point about two miles wide and 2.6 miles out to
sea. The experiment might still be done in the general area, perhaps outside
the state's 3-mile-wide territorial waters, Nihous said.

A "planning ballet" under way since 1997 would make it difficult to transfer
the $5 million project elsewhere in the world, he said. Environmental
studies of the Kona waters would have to be started from scratch elsewhere,
he said.

Keahole was chosen because the plan was to pipe liquid carbon dioxide from
shore to an undersea depth of about 3,000 feet. The plan now is to pipe
carbon dioxide from a ship while two other ships assist.

The experiment will pump small amounts of the liquefied gas for two hours at
a time, eventually increasing to 7.6 metric tons per two-hour run.

Scientists expect a plume of droplets will rise and dissolve in the water,
never getting higher than 2,000 feet below the surface.

The droplets will turn the water in the 50-foot-wide plume from its normal
slight alkalinity to a slight acidity, "not highly acidic," Nihous said. The
effects may last six to 12 hours, he said.

The experiment is to find out exactly how big the plume is, how acidic and
how long it lasts, he said.

Nihous hopes the tests can still be done on schedule in the fall of this
year.

Copyright 2001 Honolulu Star-Bulletin

The CCNet is a scholarly electronic network. To subscribe/unsubscribe,
please contact the moderator Benny J Peiser <>.
Information circulated on this network is for scholarly and educational use
only. The attached information may not be copied or reproduced for
any other purposes without prior permission of the copyright holders. The
fully indexed archive of the CCNet, from February 1997 on, can be found at

DISCLAIMER: The opinions, beliefs and viewpoints expressed in the articles
and texts and in other CCNet contributions do not necessarily reflect the
opinions, beliefs and viewpoints of the moderator of this network.
