“Publicity…would certainly follow,” fretted the editor of one top journal. “A possible general panic,” predicted a researcher. Both were explaining why a study linking childhood leukemia to fluorescent lights should not be published. That fear trumped the conclusion of other reviewers–scientists who evaluate whether a manuscript should be published in a journal–who called the paper “intriguing” and an “extraordinary piece of deductive reasoning.” The paper was rejected.

This is how science works? Despite its objective face, science is as shot through with ideology as any political campaign; and now that dirty secret is coming out. The party line is that papers submitted to journals are rejected only for reasons of substance–the methodology is suspect, the data don’t support the conclusions, the journal has better papers to use. But lately scientists have been privately fuming over rejections they blame on censorship. And this summer, the issue exploded in public. Dr. Thomas Chalmers of the Harvard School of Public Health charged that a paper he co-authored, which concluded that chlorine in drinking water raises the risk of bladder and rectal cancers, had been rejected by three journals partly because reviewers “were uneasy about informing people about this problem.” (Chlorination kills microbes that cause typhoid and other diseases.) Before The American Journal of Public Health accepted the paper, Chalmers says, his data had been “suppressed. Papers are rejected all the time based on the biases of reviewers.” The bias he sees is the conviction that the wares of technology, from pesticides to radiation, pose little risk.

‘Vitriolic reviews’: Some scientists and journal editors angrily deny that ideology colors decisions on whether to print a study. “Editors like to publish innovative work, not suppress it,” says Dr. Drummond Rennie, deputy editor of the Journal of the American Medical Association. “On the subject of peer review, people can easily get dreadfully paranoid.” But others acknowledge the problem. “There are many examples of bias on the part of my reviewers,” says Mervyn Susser, an epidemiologist at Columbia University and editor of the journal that published the chlorination paper. “We had a recent experience in which vitriolic reviews revealed very powerful preconceptions that low doses [of radiation] can’t possibly cause cancer. They felt if you get such a result you should throw it out the window [and not tell the public about it].”

That mind-set runs through the peer-review documents obtained by Newsweek. One assessment called the chlorination study “conducted carefully and rigorously” but feared that “the casual reader [might get] the impression that…[chlorination] is a potential problem with respect to cancer risk.” That, of course, was exactly the point. The paper linking fluorescent lights to childhood leukemia met similar resistance. The New England Journal of Medicine reviewer called it “an intriguing idea that can be readily tested,” but NEJM rejected it “because it does not warrant the publicity.” The Lancet feared a “general panic in which nurseries are plunged into semi-darkness.” (The paper was finally accepted by Cancer Causes and Control.) “There was clearly a discrepancy between the reviewers’ favorable comments and the reluctance to publish,” says Samuel Ben-Sasson of Hebrew University in Jerusalem, the paper’s lead author.

To be sure, science is not routinely censored. Several researchers who work in areas that stir controversy–lead’s effects on intelligence, toxicity of chemicals–say they have never had a paper rejected for political reasons. And it is “reasonable,” as Columbia’s Susser argues, to be more careful with papers on health issues than on, say, a new species of nematode. “You don’t want to press the panic button unless the work is very strong,” he says. What many scientists object to is what they perceive as a double standard that welcomes studies that conclude all is well but erects barriers to those that raise alarms. One leading cancer journal, for instance, recently published an industry study concluding that the fluoride added to drinking water does not increase the risk of cancer in lab animals. That same journal rejected a government study, by researchers at the National Institute of Environmental Health Sciences, that reported an increase in rare bone cancers among male rats fed fluoride. The journal explained that it does not publish lab-animal studies anymore. “No one wants to touch this,” says toxicologist James Huff of NIEHS about the persistent evidence that fluoride poses some hazard.

Tiny risk: Bias doesn’t end with publication, Harvard’s Chalmers says. In the year of the spinmeister, science gets spun, too. The New York Times called the cancer risk from chlorination “tiny,” even though the 38 percent and 21 percent elevated risks for bladder and rectal cancers, respectively, are 380,000 and 210,000 times higher than the level the government defines as a “negligible” risk. The National Cancer Institute began its press release on the study, “Chlorinated drinking water offers immense health benefits.”

Chalmers hasn’t made many friends at science journals by opening this debate, but some researchers applaud him. “He’s made statements about something that is very, very disturbing,” says toxicologist Ellen Silbergeld of the University of Maryland. “[Suppression of studies] is particularly vicious when they concern public-health issues.” But the risk that censorship poses to public health may be the least of it. If science loses its reputation for probity, its conclusions will carry no more weight than any interest group’s.