A Question of Trust

Andrea Stierle
Plant Pathology, MSU-Bozeman
Chemistry, MT Tech.-UM

It seems paradoxical that scientific research, in many ways one of the most questioning and skeptical of human activities, should be dependent on personal trust. But the fact is that without trust, the research enterprise could not function.--Arnold S. Relman, Editor, New England Journal of Medicine, 1983

Do scientists trust each other?--William Macgregor, English Professor

We were right in the middle of a most pleasant dinner conversation concerning my recent trip to an international science congress in Nova Scotia. Somewhere between the Cornish game hen and the rhubarb pie, however, a dear friend posed a deceptively simple question: do scientists trust each other? Not one to answer even a deceptively simple question with a simple answer, I thought a moment before responding. I was struck by the sheer serendipity of his timing: on the endless flights to and from Nova Scotia, I had just finished reading two books dealing with this very issue, Cantor's Dilemma and Science: Good, Bad, and Bogus. We entered into a spirited discussion of the essential nature of scientific research and scientific researchers, and of how science compares to the humanities in this regard. Although a few of our observations are perhaps better left unsaid, and certainly better left unpublished, I would like to take a few moments to explore and develop a few insights on the concept of professional trust in the scientific community.

Scientific research follows a well-established pathway that has evolved over the years to provide a reliable framework for presenting new theories, hypotheses, and observations. An initial observation or a particular problem inspires a working hypothesis or theory that attempts to explain the observed phenomena. A series of experiments must then be devised to test and challenge the hypothesis. If repeated experimentation continues to support the scientific claim, then ever more stringent experiments must be designed to challenge it. If the scientist, acting as his own devil's advocate, believes that all evidence supports his initial hypothesis, then he can submit his findings in the form of a manuscript to a peer-reviewed journal. The peer review process is a critical step in maintaining objectivity and empiricism in science. A well-written research paper clearly states the proposed theory, clearly outlines and details the experimental protocols, and clearly presents the data for evaluation and review. If the review panel finds fundamental flaws in the original hypothesis, the experimental methods, the data acquisition or manipulation, or the conclusions drawn from the experimental results, then the paper will not be published until these problems are addressed. This can be a lengthy and arduous procedure, but at its best the review process yields valid publications. A truly bad experiment or an essentially flaky hypothesis usually does not survive peer review. Of course, not all reviewers devote the requisite time and attention this process demands, and bad papers are the result. But it is the nature of scientific research to invite both discussion and replication, and it is the essential nature of scientists to question and challenge the technical merit of their colleagues' work. A trained scientist should be able to repeat any experiment reported in the literature. Bad science certainly slips through the editorial cracks, and if trivial it will simply sit quietly between the pages of some journal, unread and unnoted. If the paper is noteworthy, however, the scientific public will usually recognize its flaws and begin a series of critical letters to the editor or to the authors.

Do scientists usually consider "bad science" to be fraudulent, the work of "bad scientists" who intentionally deceived the public? A recent example of controversial science might provide some insight into this question. In 1989, two electrochemists startled the scientific community and lay public alike with their announcement that they had achieved "cold fusion" in a simple laboratory experiment. In their excitement, and under pressure from their respective universities, Stanley Pons of the University of Utah and Martin Fleischmann of the University of Southampton, England, described their experiment in a press release, not in a detailed, peer-reviewed journal publication. Low-temperature nuclear fusion is a phenomenon of incredible importance. Scientists immediately repeated the experiments described by Pons and Fleischmann, with mixed results. Using the methods detailed in their somewhat sketchy initial report, fortified with additional details provided through personal communication, a few scientists saw the very phenomena reported by Pons and Fleischmann, although they did not consider them evidence of a low-temperature nuclear reaction. Many more scientists could not replicate the results at all. The majority of scientists seem to believe that whatever happened in the initial experiment, it was not "cold fusion." As a result of this controversy, the infamous Utahns have been accused of bad scientific method, of rushing forward with a claim that had not survived the peer review process and, given their nebulous results, most likely would not have. They have been accused of hubris, of premature declaration, of poor judgment. But I have never caught even a subtle hint that Pons and Fleischmann are believed to be guilty of intentional deception. Is it a basic belief in the integrity of our peers, professional courtesy, or simply the knowledge that we all live in the same glass house that prevents this assertion? Most likely, it is a combination of all three. But despite the general lack of acclaim for the results of Pons and Fleischmann, their work spawned a series of international symposia dealing with low-temperature nuclear fusion. Even "bad science" can be inspirational.

Of course, some "bad science" is truly fraudulent science, and fraud can masquerade as valid science, often for many years. Martin Gardner, a magician and self-proclaimed debunker of scientific fraud, has compiled a fascinating series of his essays in Science: Good, Bad, and Bogus (Avon Books, 1981). Gardner divides his attention between the scientist as the perpetrator and as the perceiver of deception. In Great Fakes of Science, he lists famous cases in which scientific duplicity runs the gamut from gentle data massage to outright fraud. Perhaps the most famous of these cases is that of Piltdown Man. A skull found by Charles Dawson in a gravel pit in Sussex, England, was reportedly the oldest fossil remnant of ancient man, the missing link between ape and human. Forty years after its discovery it was indeed shown to be a "link" between man and ape: Dawson had simply combined the cranium of a human skull with a modern ape jaw, stained the pieces to achieve the desired patina, and offered the composite to the world. The intention to deceive is incontrovertible.

As a scientist, however, I find Gardner's description of the scientist as the would-be detector of fraud even more enlightening. In a series of essays devoted to debunking Uri Geller as a true psychic and relegating him to the rank of ordinary magician, Gardner repeatedly notes that any time a "psychic" (or a magician pretending to be a psychic) tries to prove his validity, he requests a roomful of scientists. Magicians, for whom deception is a way of life, know that scientists are easy to fool, and it is not hard to understand why. Our laboratory equipment is exactly what it seems to be; there are no mirrors, no trick flasks to fool the unwary. Our lab assistants are not trained to surreptitiously switch chemicals on us in the middle of an experiment. Scientists tend to be rational beings working in a rational world, but magic is an irrational world designed to confuse and mislead. According to Gardner, Geller's staunchest supporters are two laser physicists, Harold Puthoff and Russell Targ. They believe absolutely that Geller can bend spoons and read minds, even though they have never seen a spoon in the process of contorting itself, and even though several magicians have explained the tricks Geller employs. They refuse to believe that Geller would cheat. It is beyond their frame of reference as scientists.

Anyone with any interest in understanding the scientific mindset should read Cantor's Dilemma (Penguin Books, 1989), written by Carl Djerassi, an eminent scientist and winner of the National Medal of Science for his synthesis of the first oral contraceptive. Djerassi creates a fascinating scenario concerning the discovery of a major breakthrough in cancer research by Professor Isidore Cantor. Cantor's theory explaining the onset and metastasis of cancer is so clever and innovative that it must be correct; it simply remains to devise an experiment to prove it. Cantor designs a simple but elegant experiment that will establish the validity of his theory beyond a shadow of a doubt. A gifted research assistant performs the experiment brilliantly; Cantor's theory is proven and earns the two colleagues a shared Nobel Prize. Unfortunately, another scientist cannot replicate the seminal experiment, which precipitates Cantor's essential dilemma: does he accuse his assistant of "fudging" the results, exposing himself to the unpleasant fallout associated with flawed results? The twists and turns are endlessly absorbing, and the book is a real ethics primer for any research investigator. This beautifully written little book delineates the real culprits in "bad science": ego, enthusiasm, and self-delusion. Scientists are spurred by the desire to discover, to explain, to create. And maybe, just maybe, to be remembered as brilliant discoverers or creators.

So, do scientists trust each other? For the most part, the answer is yes. We constantly challenge each other's discoveries, but we are eager to use them as building blocks in our own work. We question and doubt explanations, but we usually respect the authenticity of the data supporting even the most spurious theories. We trust each other in part because we assume all scientists are seekers of knowledge, and in part because our methods reduce the possibility of fraud. Early in our training we are drilled in proper protocol and in the need to replicate, replicate, then replicate once more. I have been involved in scientific research for fourteen years, first as a lowly research assistant, then as an even lowlier graduate student. I am now a research scientist who, in collaboration with my scientist husband Don, has achieved some small success with a remarkable little fungus that has proven itself capable of doing something a fungus really should not be doing. Validating our initial discovery required two full years of painstaking effort and peer evaluation on several levels. Despite the strength of our data, there are probably scientists who do not believe in our fungus. They most likely believe we have made some critical mistake in our investigation, but probably none would accuse us of fraud. So we will continue to accrue data and to build on our results, and we will continue to invite peer evaluation. Because that is the essential nature of scientists and of scientific research.

