Fluoride Action Network

Medical Journals Exercise Clout in News Coverage

Source: Los Angeles Times | February 20th, 2000 | by David Shaw

Every week, the most prestigious medical and scientific journals in the world send advance information on their newest research findings to general interest news organizations in cities large and small. Most of the journals also send along press releases, touting the most important of their findings and hoping to see the names of their publications on the evening TV news, on Page 1 of major newspapers, and in cover stories in the national newsmagazines.

They get their wish with startling regularity.

While political writers provide firsthand coverage of campaigns, speeches, debates, votes and elections, and sportswriters provide firsthand coverage of games, races and matches, medical writers get the vast majority of their stories about medical breakthroughs, real or exaggerated, secondhand–not from visiting research laboratories or by interviewing scientists or attending scientific meetings, but from reading such publications as the New England Journal of Medicine, the Journal of the American Medical Assn., Nature, Science, Lancet and the British Medical Journal.

“Most medical reporters are slaves to the journals,” says Tom Siegfried, science editor of the Dallas Morning News. “The journal system is destructive of good reporting.”

Mainstream medical coverage is sometimes little more than “a kind of translation service” for the journals, says Daniel S. Greenberg, publisher of the Washington-based Science and Government Report newsletter.

For the most part, the major medical journals set the journalistic agenda in mainstream press coverage of medical news. They have what amounts to “a stranglehold over information about biomedical research,” in the words of Natalie Angier, a medical writer for the New York Times.

That often means that mainstream news organizations–and, more important, the public–are potential victims of the growing pursuit of publicity, individual and institutional acclaim, research grants and commercial profiteering that often drive the announcement of medical breakthroughs in the journals.

There are more than 25,000 science and medical journals, but only a few have major impact. Many of those are written for doctors, not for the public. The New England Journal, for example, is published by the Massachusetts Medical Society for its members, just as the Journal of the American Medical Assn. is published for members of the AMA. AMA members get a copy of the journal as part of their membership, and members make up about 85% of the publication’s 350,000 circulation in this country. (Subscriptions to the New England Journal are not part of membership; they are paid for separately. The journal’s total circulation is 183,000.)

Journals as Cash Cows

The primary function of the medical journals, their editors say, is to tell doctors about advances in scientific research and new remedies for various medical conditions.

These days, major medical journals are not just printed conversations among doctors, with mainstream journalists as welcome eavesdroppers. The journals are market-movers. Stock prices soar for companies whose drugs receive a favorable medical journal story–especially when that story is followed by mainstream media coverage.

Thus, the journals have “increasingly become cash cows” for their owners, as Dr. Lawrence K. Altman, longtime medical writer for the New York Times, wrote last summer. When the New England Journal last disclosed internal financial data, almost 20 years ago, annual profits were less than $400,000; now, Altman says, they’re about $20 million.

“The imperative to sustain and build those profits is changing how the journals do business–and how the public learns about medicine,” he says. “Few would say that articles in leading journals are distorted or unreliable, but the quest for profits raises several disturbing issues: the journals’ increasing appetite for publicity; how a burst of publicity from an article can inflate the importance of a new finding; [and] the drug industry’s influence on the journals through advertising revenues.”

Angier says the journals are now “so hungry for publicity that they really trump up their papers to an astonishing degree. The abstracts they send to journalists are really over-hyped.”

Mainstream reporters have not always relied so heavily on the medical journals. They used to get more of their information by attending medical meetings, where they could listen firsthand to scientific presentations, question scientists about them, make valuable contacts and pick up information that might later be useful in stories with a broad perspective. At many scientific meetings–and at the more casual mealtime conversations before, between and after the meetings–researchers often speak more openly about their work than they do in formal interviews or articles. But many meetings produce no immediate stories, and over the past five or 10 years–in a time of tighter budgets and smaller staffs–most editors and television news directors increasingly see meetings as a luxury they can no longer afford.

“You get tons more information going to scientific meetings than you could ever get from reading all the journals that come across your desk,” says Rick Weiss, who writes about science and medicine for the Washington Post, “but you might only get one or two breaking news stories out of a four-day scientific meeting, and an editor might ask, ‘Why did I spend $733 for a plane ticket to Toronto for [that]?’ . . . and indeed, they do ask that question, and more and more often they say, ‘Why don’t you just do this by phone?’ To my mind, it’s a shortsighted, money-saving strategy that in the long run is going to hurt the quality of science journalism.”

Vague Attribution

Reporters’ dependence on the journals is not always as evident as it might be. Many will begin their stories by writing about the basic findings in a study, then go on to the implications of the findings and quote various experts and not mention the journal as the source of the information until several paragraphs–or more–into the story.

“Journal attribution is often buried because a lot of reporters don’t want their editors to know they spend all their time reading the journals,” says Robert Lee Hotz, a science writer at the Los Angeles Times.

But reading the journals is “part of our job,” Hotz says, and medical reporters for the most respected news organizations don’t just regurgitate or rewrite the journals. They interview authorities in the field to help them evaluate the validity, significance and newsworthiness of the newly published studies before doing their stories. Still, the starting point for–and basic thrust of–most mainstream stories about new treatments and new drugs comes from the journals, not from an individual reporter’s curiosity, enterprise or imagination.

This is especially true of smaller publications and local television stations, where staffs are small, and time, space and experience are limited. But it also applies to big-city news organizations.

There is nothing intrinsically wrong with covering what the journals publish, of course, just as there is nothing inherently good about individual enterprise reporting. The journals generally publish solid science from reputable researchers on subjects that are often of interest and value to the public. But good, experienced medical reporters pursuing stories independently can often bring greater breadth, context and immediacy to material for their own lay readers, and critics say an over-reliance on the journals prevents many from doing this.

“I’ve heard other medical reporters . . . say that they have their whole month blocked out,” Altman says. “Every day of the week, they know certain journals come and they just cover the journals and write it that way.”

Because the journals send out their advance information–scientific abstracts, press releases, early copies of the next issue–not to be used until the specific date of publication, they contribute to herd journalism, with everyone jumping on the same story on the same day.

The New York Times is widely regarded as the national leader in medical and science coverage, with a staff of more than 20 reporters and editors for a science section that averages about 12 pages a week. The paper was the first to make a strong commitment to original and comprehensive reporting in this area and it’s been so influential for so long that to many other medical writers, its stories are “like hearing the voice of God thundering down from the hills,” as Hotz puts it.

Even at the New York Times, though, “a reasonably high proportion” of the paper’s stories on medical breakthroughs come from the journals, says Cornelia Dean, the paper’s science editor. “We don’t like to report medical breakthroughs on our own, so usually we take the coward’s way out and wait until a . . . journal reports it.”

At the Los Angeles Times, Joel Greenberg, the science editor, says he meets with his staff of medical and science writers every Monday, “with all the tip sheets from the journals in hand, to decide what we’ll write.” As a result, says Thomas Maugh, who writes most of the paper’s stories on medical breakthroughs, “95% of my stories come from the journals. We’re spoon-fed. They manipulate us. But it benefits us as well as them, so I don’t see the harm.”

Key Exceptions

Journalists at other news organizations agree. Sort of.

“We kind of take our marching orders from them,” says Philip Elmer-Dewitt, the assistant managing editor in charge of science, medical and technology coverage at Time magazine. “I talk a lot about finding ways to do stuff before it gets in the journals or stuff that isn’t in the journals, but it’s so rarely successful.”

There are exceptions, of course. Medical reporters at Time, the New York Times and the Los Angeles Times–as well as Newsweek, the Washington Post, the Chicago Tribune and a few other major news organizations–often provide good enterprise reporting. Some publications divide responsibilities for daily, journal-based “breakthrough stories” and longer-term projects among different reporters. At the Los Angeles Times, for example, while Maugh largely concentrates on the former, Terence Monmaney does mostly what he calls “issue-oriented stories on the ethical and human side of medicine.”

“I don’t like breakthrough stories because they’re usually too good to be true,” Monmaney says. “They focus on the simple-minded benefits of medical research and they reinforce some of the more unfortunate perceptions of the way science is done–that it’s kind of instantaneous discoveries, coming out of the blue. There’s a rhetoric they rely on that, years later, on sober reflection, turns out to have been excessively optimistic.”

Like Monmaney, some reporters at top publications don’t write about purported breakthroughs precisely so they can avoid both an over-reliance on the journals and the breathless tone that almost invariably creeps into such stories.

“By and large, I do stories that are not about any . . . single research laboratory and the work of any particular group,” says Laurie Garrett of Newsday in New York, who studied immunology in graduate school and has been writing about science and medicine for 20 years. “I tend to look at entire subject areas and then survey the field and all the people in the field who are doing the key research. I try to be ahead of the journals. I kind of see [them] . . . as my competition, not as the basis for my reporting.”

Garrett says she’s heard many very experienced journalists say they read the introduction and conclusion to journal articles and don’t even look at the actual data in the study until they have decided to do the story. That, she says, is an abandonment of journalistic responsibility.

“I go straight to the data tables first,” she says. “If the data tables are not pretty convincing, I’m not wasting my time with [the researchers’] . . . yammering at the conclusion or their hypothesizing at the opening. If I find the data tables sway me . . . then I really scrutinize who was in this study, what was the context of the study, who was the control group, was this a well-designed study. Then, your last thing is what do they make of their data.”

But many medical writers and editors seem reluctant to evaluate medical studies on their own. There are literally hundreds of individual specialties and sub-specialties in medicine, and even the best-trained, most experienced medical writer can’t be an expert in all or even very many of them.

“We don’t feel competent to make a judgment,” says the New York Times’ Dean.

No wonder. Just as political reporters and court reporters and sportswriters often disagree on the meaning of a given event, so medical reporters don’t always agree on what a medical study means, even after they read about it in a journal. Last October, for example, the Journal of the American Medical Assn. published a special issue devoted to obesity. The issue included one study of injections of the hormone leptin as a weight loss treatment. The Los Angeles Times described the study as “a ray of hope.” But the New York Times said leptin “had little effect except at the highest dose tested” and pointed out that most of the people taking the highest dose “dropped out of the study [before it was complete] because they found it too unpleasant to inject themselves daily with large volumes of the substance”–information not included in the Los Angeles Times story. The Washington Post story was less favorable to leptin than the Los Angeles Times but more favorable than the New York Times. The Wall Street Journal and USA Today didn’t even mention leptin in their reports on obesity and the Journal studies.

Ingelfinger Rule

In many, if not most cases, lay journalists say they have no choice but to rely on journals, in part because scientists often refuse to speak directly to them, at least until after their studies are published in the journals.

One reason is the reluctance of many scientists to be seen by their colleagues as courting public attention. Another is a concern that the media will inevitably overstate what they say. But most scientists who refuse to talk to reporters make that decision not out of fear of either hype or peer disapproval. They do so because of “the Ingelfinger rule.”

The Ingelfinger rule takes its name from Franz Ingelfinger, who created it in 1969 while he was editor of the New England Journal of Medicine. Simply put, the rule–now followed by most other top medical and scientific journals as well–said that a journal would not publish a study if it had previously been “offered to any other book, journal or newspaper.” Thus, if an enterprising reporter calls a scientist with questions about research that has not yet been published in one of the major journals and the scientist talks to him at length and the story is published in, say, Newsweek or the Los Angeles Times, any journal he subsequently submits it to will most likely reject it. Most academic institutions require their faculty members to publish papers periodically; the journals help satisfy that requirement while simultaneously according researchers peer approval and ego gratification. Unwilling to risk losing all that, most refuse to talk to reporters in any detail until after their studies are published in a journal.

Scientists are “scared to death–they’ve been so intimidated by the journals,” Garrett says. “I have had cases where scientists [I’ve interviewed] have been in a total panic and begged me to pull the article. . . . ‘Oh please, Science magazine will get me in trouble’ or ‘Please don’t do it. Nature will boycott my story.’”

Ingelfinger was quite clear about his intent in imposing his rule. The New England Journal, founded in 1812, wasn’t, even 30 years ago, the hugely successful enterprise it is now, and when he found out that publications being sent free to doctors had covered a study before he’d published it, he decided that undercut the commercial value of those studies to his subscription-based journal. So he imposed his competitive ban. He did it “largely for self-interested purposes,” acknowledges Dr. Marcia Angell, now editor in chief of the New England Journal.

When Ingelfinger was criticized for stifling the dissemination of important medical news, he revised his rule and said he was not trying to prevent scientists from talking to mainstream publications; he was most concerned, he said, about stories appearing in other medical publications–especially if those stories had already been accepted by (but had not yet appeared in) his journal. Coverage of medical developments “in a lay news medium never come near qualifying as prior publication in my mind,” he wrote in a journal editorial.

Dr. Arnold Relman, who succeeded Ingelfinger as journal editor in 1977, applied the Ingelfinger rule more broadly than did its originator. He said its purpose was not so much to protect the New England Journal from competition as to protect the public from misinformation.

Errors in mainstream news accounts of medical breakthroughs often occur, Relman says, when reporters base their stories on “press releases or interviews or institutional announcements or presentations at a scientific meeting, where–often–quite preliminary and unverified work is presented.”

Scientists sometimes “talk too soon and too expansively” about their work, Relman says, and “as time passes and other people look into the same claims, the results are changed or disappear altogether. . . . The only reliable way of getting new scientific information is through reputable, peer-reviewed sources”–i.e., the journals. When articles are submitted to the journals, editors ask experts in the specific fields under study to review them before they are considered for publication. That’s called “peer review.”

“We all know that if something makes it into one of the top-notch journals, that means it’s good work and it has a measure of credibility because it’s been peer-reviewed,” says Susan Okie, medical writer for the Washington Post and a medical doctor.

Peer Review

The New England Journal, the most prestigious of the journals–and the one most often quoted by mainstream news media, even though it does not routinely send out press releases–says it publishes less than 10% of the approximately 2,500 pieces of original research it receives each year; Angell says 90% of the papers she does publish undergo significant revision as a result of peer review.

But such critics as Altman of the New York Times say peer review is overstated and overrated, and lay journalists should not, therefore, impute to the journals a kind of scientific infallibility. To Altman, what the journals call peer review is “just another name for editing. Peer review doesn’t mean other scientists repeat the original clinical trial; they just look at what the researchers have submitted, and the researchers don’t submit their original data any more than I submit my original notes when I turn in a story. Peer review cannot weed out false claims when it has no access to original data. Any contention that peer review is a purely, or even largely, scientific process is nonsense.”

Moreover, as Altman wrote in an article for the journal Ethics and Policy in Scientific Publication, because there are no published criteria for the selection of peer reviewers, “it is widely believed that [they] . . . are often chosen for their political connections and friendship, rather than for their skills as reviewers.”

Journal editors strenuously deny this.

Dr. Robert Steinbrook, one of five deputy editors at the New England Journal of Medicine and before that, a medical writer for the Los Angeles Times, says those undertaking peer review examine the actual science of a given work, “how it was designed and how the studied population was chosen and whether the parameters were valid.”

Angell acknowledges that peer review has its flaws but says, “It’s still way better than whatever [process] is second-best. I can’t imagine running a scientific journal without it. We review the reviewers and if a reviewer is way out of line . . . we don’t necessarily rely on that person’s review.” At least two outside experts and one statistical consultant review each work, she says. Despite charges that peer review is often done by research assistants, rather than by the research experts themselves, she insists this is not so at the New England Journal. “Our papers are being reviewed by the people to whom we send them, not shunted off” to postdoctoral students, she says.

Dr. Richard Glass, co-editor of the Journal of the American Medical Assn., says peer review is “the only way to make sure printed conclusions don’t go too far.”

Lack of Criticism

The mainstream media often do go too far, though, and one reason for that, some medical writers say, is that the journals don’t publish enough negative studies–studies that refute a hypothesis or that show a particular drug or treatment has no effect. That distorts the public perception of the scientific process, and leaves many medical reporters writing about an overwhelming preponderance of positive studies–purported breakthroughs.

Negative studies are important. A scientist may spend a decade or two studying a particular condition or treatment just so he can ultimately stand at a particular scientific crossroads and drive a stake in the ground with a sign that says, “Don’t go this way.” That can save other scientists–and society at large–countless years and untold sums of money that might otherwise be wasted following promising but ultimately false trails.

Editors at the New England Journal say they understand this problem and are trying to publish more negative studies. But Glass says negative studies are “less likely to be submitted for publication” than studies showing a positive effect. Negative studies are often completed, then not submitted, he says, “because the research was supported by commercial interests, and maybe they’re less interested in having those results published.”

In other words, if a pharmaceutical company finances research into a particular drug, and the drug is found to have no effect on the condition it’s intended to cure, the pharmaceutical company isn’t eager to have those findings published. Because those who finance medical research often require confidentiality pledges from the researchers they sponsor, that decision is the company’s to make.

This is just one element in what many medical writers see as the biggest challenge they face today, and one of the biggest dangers confronting the public: Increasingly, medical research is funded not by the federal government or by presumably disinterested academic institutions, but by drug companies and others in private industry, all of whom have a vested interest in the outcome of the research.

In 1998, researchers from the University of Toronto found that 96% of the researchers who had written favorably about calcium channel-blocking drugs for hypertension were funded by the firms that sold those drugs. Only 37% of the researchers who criticized the drugs were similarly financed.

“Any reviewer of a scientific paper cannot look over the shoulder of the scientist while he’s doing the work,” says Sheldon Krimsky, adjunct professor in the Department of Family Medicine and Community Health at the Tufts Medical School and a longtime critic of conflicts of interest in medical research. “There’s a certain amount of trust that goes into it. Involved in this trust is a commitment to disinterestedness. But financial interest mitigates to some degree the commitment to disinterestedness.”

Conflicts of Interest

How can journal editors, mainstream reporters, doctors and the public know if a particular breakthrough is genuine or if it’s being exaggerated by those who stand to profit from it? Because doctors, in particular, count on the journals for unbiased information, findings that may be influenced by the profit motives of the research sponsors could adversely affect the lives of their patients.

The top journals have instituted several conflict of interest regulations to try to prevent this–either by prohibiting or by requiring disclosure of any potential conflicts. But such regulations are often difficult to enforce; not everyone agrees on what constitutes a conflict and not everyone is forthcoming in disclosing potential conflicts. Sometimes it’s the researcher who’s not forthcoming, sometimes it’s the journal. Either way, the doctors and lay reporters who rely on the journals–and the patients and readers who rely on the doctors and mainstream news reports–may be misled.

Two years ago, for example, a scientist who wrote that zinc lozenges might be an effective treatment for the common cold later acknowledged that before his study was published, he had invested in the company that made the lozenges; after publication, the company’s stock shot up and he sold some shares for a profit of about $145,000. The researcher said he’d reported his stock purchase to editors at the Annals of Internal Medicine, the journal that published his study, but the journal did not disclose his investment to its readers.

Ethics Issues

The New England Journal of Medicine is the self-proclaimed pacesetter on ethical issues in medical publishing. Last October, however, Monmaney of the Los Angeles Times disclosed that the journal had “apparently violated its own ethics policy numerous times in the last three years, publishing articles by researchers with drug company ties and not disclosing the potential conflicts of interest.” Monmaney said that he had identified eight review articles on drug therapy published by the journal since 1997 and written by “researchers with undisclosed links to drug companies that marketed treatments evaluated in the articles.”

In one case, for example, Monmaney found that the author of a review of breast cancer treatments had “received consulting fees, research funds and speaking fees from multiple companies that make drugs assessed in his article.”

Angell said the authors of these articles had informed the journal of their connections but that the journal had “failed to apply its policies properly.” She has since launched her own investigation of the situation and found that “Monmaney was correct as far as he went but we turned up additional instances of failures to apply the conflict of interest policy properly.” The results of the investigation will be published later this month. Angell also said she was taking steps to “bring our practice into conformity with our policy.”

One step she is not likely to take is to ban publication of any research sponsored by private industry. If she did that, her publication would be mighty slim. Medical research has become so expensive that only profit-motivated private industry can afford to sponsor most of it.

“No academic institution or government consortium, in my opinion, has the wherewithal to do it the way it needs to be done,” says Dr. Dennis Slamon, director of clinical research at UCLA’s Jonsson Cancer Center and developer of the breast cancer drug Herceptin. Typical five-year research grants from sources other than private industry “maybe have a budget of $1 million, $1.2 million,” Slamon says. “The Herceptin project, if we’d done a clinical consortium [of nonprofit funding sources], maybe we’d have had a budget of $3 million or $4 million.” But Slamon says Genentech, the San Francisco-based biotechnology firm, spent “in excess of $150 million to $170 million” to develop Herceptin. Typically, it takes even more money–$500 million–and up to 15 years for one drug to move from the laboratory to the drugstore. Overall, drug companies will spend an estimated $24 billion this year on research and development.

“Industry has to be involved,” Slamon says. But medical journals and lay journalists alike “have to be critical and skeptical” and ask questions that will help them determine if the resultant research is legitimate.

“There’s no substitute for responsible reporting.”

Nor is there any substitute for journalistic initiative–for enterprise reporting–rather than an over-reliance on official medical journals, regardless of whether the journals are the ultimate repository of medical knowledge or the ultimate vehicle of the medical-industrial complex . . . or both.