For my final essay in this Roundtable, I want to shift momentarily from the distortion of science in the public sphere to the distortion of science within the profession itself.
Today the world of science is drowning in information. When I speak to my former colleagues (I left my tenured professorship at the University of New Hampshire in 1994 to become a filmmaker), I hear the same refrain: "There's just too much information." When you run too much of anything through a processing system, the inevitable result is reduced assimilation efficiency. It happens when you feed animals too much food (they can't digest all of it), and it happens when you overwhelm a profession with too much information (it can't process it all).
Too much information causes decreased accuracy in transmission. I've seen this in the limited experience of my own scientific career. Recently, I poked my head back into the literature of my former field, marine biology, and to my surprise saw that much of the work I published in prominent journals in the 1980s has not been cited for more than two decades.
I wrote about this to a dozen of my old colleagues (some of whom are now members of the National Academy of Sciences), and their basic response was, "Get used to it." Following a suggestion from one of them, I looked at the instructions to authors for a journal I used to subscribe to, Trends in Ecology and Evolution, or TREE. The instructions tell authors this, verbatim: "Concentrate on the seminal references of the past 2-4 years (most references should be no more than five years old)."
How does that work if one is engaged in actual science? What happens if the seminal references occurred 25 years ago? You're supposed to ignore them and focus on recent work, even if the recent work is trivial? What sort of message does that send to authors? This one: It's not about searching for the truth; it's about searching for the current. My colleagues assure me this practice reflects a broader ethic in scientific publishing.
On another note related to scientific accuracy: last fall, articles in The Atlantic and The New Yorker commented on the dangerous amount of noise in current scientific literature caused by "false positives," which are, roughly speaking, studies that tell a good story but aren't actually true. I spoke at a recent epidemiology conference where the attendees confirmed this trend. They contended that it's relatively easy to get funding to demonstrate that Chemical X causes cancer. But if someone has already published that finding, and you suspect it to be incorrect, it's almost impossible to get funding to set the record straight. The general standard is to seek the next new and exciting thing, rather than attend to the boring job of maintaining complete accuracy in the literature.
So to bring the argument of this Roundtable back to Rick Perry and the political distortion of science: The battle supposedly focuses on the accurate propagation of science in the public arena, but I'm increasingly concerned about the accurate propagation of science within the profession itself.
How can scientists not care about accuracy within science -- accepting the routine dismissal of "old" literature and the regular proliferation of false positives -- yet simultaneously express shock at the inaccurate propagation of scientific information outside the profession? Yes, the political distortion of science by candidates for public office is frustrating and deserves concern, but there is a serious risk of a glass-houses situation. If the science world doesn't give more thought to maintaining standards within the profession, first and foremost, it will have little ability to combat the distortion of science in the wider culture.