Bad News #1: Peer Review Panic: When Science Gets It Wrong


“It’s peer-reviewed, so it must be true.”
That phrase is media catnip. From front-page coverage of buzzy new studies to political debates over masks, medicine, and climate change, the term “peer-reviewed” is often treated as the final word — a stamp of absolute reliability. But here’s the bad news: peer review is messy, inconsistent, and increasingly unable to catch fraud, bias, or bad science.
This week, we delve into the explosion of retractions, replication failures, and editorial scandals that have exposed deep cracks in the foundation of academic publishing.
While known to the science community, the failures of peer review rarely excite the attention of general news reporters, who prefer to publicize exciting "breakthroughs" rather than clean up embarrassing mistakes.
But first some fun.
The Bad News Quiz
5 quick T/F questions to test your news literacy.
- (Easy) – The peer review process is designed to catch scientific fraud before publication.
- (Medium) – The number of scientific paper retractions has increased in the past decade.
- (Medium) – Preprint servers like arXiv and bioRxiv bypass peer review entirely.
- (Hard) – The 2020 Lancet paper on hydroxychloroquine was retracted after editors verified that the underlying dataset could not be accessed or validated.
- (Very Hard) – In a landmark 2015 study, more than 70% of top psychology papers replicated successfully.
This Week’s False Narrative
The Alzheimer’s Con: How a Doctored Study Fooled Science for 18 Years
It was hailed as a breakthrough. It shaped nearly two decades of Alzheimer’s research. It pulled in billions in funding. And it was a fraud.
In March 2006, Nature published a paper that promised to unlock the mystery of Alzheimer’s. It came from a prestigious lab, claimed a shocking discovery, and landed with the kind of media splash most scientists only dream of. The authors said they had isolated a toxic protein fragment, amyloid beta*56 (Aβ*56), that could trigger memory loss in healthy animals. In other words, they may have found Alzheimer’s smoking gun.
What followed was a funding gold rush. Billions of dollars poured into amyloid-targeting drug trials. Labs across the world tried to replicate, build on, and outdo the results. The authors — neuroscientist Sylvain Lesné and his mentor Karen Ashe at the University of Minnesota — became stars in their field. The amyloid theory was back in the spotlight, bolstered by a single, elegant experiment.
But there was a problem. The experiment wasn’t real.
And no one noticed — not the peer reviewers, not the journal editors, not the scientific community. Not for 16 years.
The Protein That Changed Everything
Lesné and Ashe’s paper did more than report a result. It reshaped the field. Amyloid beta had long been linked to Alzheimer’s, but proving a direct cause of memory impairment had been elusive. The study claimed to do just that: inject Aβ*56 into rats, and their cognition fell apart.
The reaction was electric. The study became one of the most-cited Alzheimer’s papers ever published, and by 2022, it was the fourth-most-cited lab study in the field. Pharmaceutical companies leaned harder into amyloid research. Entire NIH grant cycles followed its lead. And yet, behind the scenes, many researchers struggled to replicate the results.
Still, the paper remained unchallenged. Its prestige — the Nature imprimatur, Ashe’s reputation, the eye-catching results — protected it. And in the background, the citation count kept climbing.
The Quiet Investigator
Then came Matthew Schrag.
In 2022, the Vanderbilt neuroscientist was investigating suspicious Alzheimer’s claims in a separate case when he stumbled across the 2006 paper. He wasn’t looking for trouble. But something about the images didn’t sit right.
Western blots — a common method for identifying proteins — appeared too perfect. Too clean. Patterns repeated. Bands looked… copied. Schrag started pulling at the thread. What he found shocked him.
Using digital forensic tools, he discovered that multiple images had likely been manipulated — copy-pasted, stretched, spliced. He suspected data fabrication, possibly to force a clean narrative about the toxicity of Aβ*56. But this wasn’t an honest mistake. It looked deliberate.
He took his findings to Science magazine. And that’s when the dam began to crack.
The House of Cards
Charles Piller, a veteran investigative reporter, picked up the story. With help from image analysis experts — including Dr. Elisabeth Bik, a scientific misconduct sleuth revered for her hawk-eyed reviews — they began a deep audit of Lesné’s work.
The findings were staggering. Over 20 papers from the same lab showed signs of manipulated data. The 2006 study was just the crown jewel.
Still, science moves slowly. Despite mounting evidence, Nature didn’t retract the paper immediately. Nor did the University of Minnesota. Internal reviews were launched. Committees were formed. Lesné denied wrongdoing.
Meanwhile, the paper remained live. Researchers were still citing it. Some were still designing trials based on its conclusions.
Finally, in June 2024 — eighteen years after publication — Nature retracted the paper. The retraction notice cited “evidence of image manipulation… including splicing, duplication, and erasure.” Every co-author except Lesné agreed to pull it. The fraud was undeniable. But the damage was done.
A Retraction Crisis
The Nature retraction, while belated, is hardly unique.
Retractions of peer-reviewed papers are at an all-time high — and not because scientists are becoming more honest. Many papers are pulled only after massive public backlash or the work of whistleblowers like Schrag.
- The now-infamous Lancet study on hydroxychloroquine was based on a data set from a shadowy company (Surgisphere) that refused to share its sources. It was retracted less than two weeks after publication — but only after influencing global COVID policy.
- In 2023, Science and Nature retracted multiple high-profile neuroscience studies due to manipulated images and fake data.
- Psychology and nutrition science are in the middle of a “replication crisis” — where large numbers of “peer-reviewed” findings simply can’t be reproduced by independent researchers.
The Review Process Is Flawed
- Peer reviewers are unpaid, overworked, and often unqualified in specific subfields. One study found that many reviewers don’t even catch obvious statistical errors.
- Journals are incentivized to publish splashy results, not careful null findings. That’s how weak studies with big claims slip through — especially in hot-button fields like climate, medicine, or economics.
- “Peer review rings” — fake identities used to review and approve one’s own papers — have been found across dozens of journals. In some cases, thousands of papers were published with no real oversight.
The Media Is Part of the Problem
News outlets regularly use “peer-reviewed” as a lazy substitute for credibility — even when:
- The study was retracted (and they don’t update the story),
- The authors had conflicts of interest,
- Or the findings contradict existing evidence or guidelines.
A 2022 analysis found that one-third of retracted COVID papers were still being cited positively in news coverage after they were retracted.
Peer Review’s Blind Spot
Lesné's lies evaded detection for so long in large part because no one expected them.
The truth is, peer review isn’t built to catch fraud. Reviewers aren’t trained to spot doctored images. Journals — even elite ones like Nature — rarely run forensic checks. The system runs on trust, reputation, and good intentions.
In 2006, no one imagined that a rising postdoc in a major lab would fake an entire dataset. The reviewers evaluated the logic, not the pixels. And Ashe, by her own admission, didn’t closely inspect the images either.
It took an outsider — with no stake in the original paper, and no institutional bias to protect — to blow the whistle.
The Fallout
The belated retraction sent shockwaves through the scientific community once again, this time of dismay. Lesné’s scientific career unraveled: by early 2025, he had resigned from his tenured faculty position at the University of Minnesota under the cloud of the scandal.
The case also cast a shadow over the amyloid-centric approach to Alzheimer’s.
As Dr. Bik noted, the fraudulent study “led many other studies in the wrong direction,” creating “false hope among patients and their families, and…frustrations and missed opportunities” for other researchers who chased a dead-end result. Indeed, countless lab hours and significant funding were wasted trying to build on a result now known to be bogus.
Some experimental Alzheimer’s treatments, designed to target amyloid per the hypothesis, even caused adverse effects in volunteers, all in pursuit of a dubious lead.
Media coverage of the debunking was extensive in scientific outlets, yet the story initially flew under the radar of most mainstream press.
For those following closely, the affair has been a sobering illustration of how even elite journals and famous scientists are not immune to “one of the most awful, egregious and serious scientific frauds” in recent memory.
It has fueled public skepticism about research: If a highly cited Nature paper can be fraudulent, how many other findings might be built on sand?
Within the research community, opinions diverged on the broader impact.
Some experts argued that while this misconduct was alarming, it “did not derail most of Alzheimer’s research,” since other lines of evidence kept the amyloid theory alive. In fact, even as the Aβ*56 paper fell, new amyloid-targeting drugs (e.g. lecanemab) have shown modest benefits in patients, suggesting the overall hypothesis retained some validity.
Others, however, stress that the scandal eroded trust and highlighted a need to diversify research approaches. The National Institutes of Health (NIH) and journals may now face pressure to tighten oversight, for example by expanding data auditing, encouraging replication studies, and protecting whistleblowers who call out fraud.
Ultimately, this case study has become a cautionary tale about the limits of peer review. It illustrates how critical flaws – even outright data fabrication – can slip through the cracks of a system supposed to safeguard scientific quality.
The downstream consequences were severe: years of misdirected research, squandered funding, and dashed hopes for breakthroughs.
Yet the silver lining is that the exposure of this failure is spurring conversations about reform. As Dr. Bik warned, the Lesné scandal is likely just “the tip of the iceberg.”
It underscores the urgent need for stronger checks in scientific publishing to prevent such debacles – ensuring that bad science doesn’t make it into print, and that truly important findings aren’t built on fraud.
So Is Peer Review Useless?
No — but it’s not proof of truth. Peer review is an imperfect form of quality control. It works best when:
- It’s open and transparent (e.g. open reviews with signed reviewer names),
- Authors share their data for replication,
- Journals support post-publication critique and correction.
New models like Registered Reports (where a study’s methods are reviewed before results are known), and tools like PubPeer and Retraction Watch, are helping. But the public — and the press — need to understand that science is a process, not a list of truths.
Peer review isn’t dead — but it’s overdue for radical surgery. Here’s what we think can help restore trust in the scientific process without turning it into dogma:
1. Open Peer Review
Let the public see who reviewed what, and why. Transparency keeps reviewers honest and exposes conflicts of interest. Journals should publish the full review history — including rejections and revisions — alongside the paper.
2. Independent Image and Data Auditing
Before publication, journals should run automated checks for figure tampering (splicing, cloning, erasing) and ensure raw data is uploaded and publicly accessible. If you can’t reproduce the chart, you shouldn’t believe the claim.
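To make the idea of automated tampering checks concrete, here is a minimal, hypothetical sketch of the simplest such screen: flagging pixel-identical regions that appear in two supposedly independent figures (the “cloning” pattern Schrag spotted in the Western blots). The function names and tile-hashing approach are our illustration, not any journal’s actual pipeline; real forensic tools are far more sophisticated and can survive rotation, rescaling, and compression.

```python
import hashlib
import numpy as np

def tile_hashes(image: np.ndarray, tile: int = 16) -> set:
    """Hash every non-overlapping tile of a grayscale image array."""
    hashes = set()
    h, w = image.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            block = image[y:y + tile, x:x + tile]
            # Skip near-uniform background tiles to reduce false positives.
            if block.std() < 1.0:
                continue
            hashes.add(hashlib.sha256(block.tobytes()).hexdigest())
    return hashes

def duplicated_tiles(img_a: np.ndarray, img_b: np.ndarray, tile: int = 16) -> int:
    """Count textured tiles that are pixel-identical across the two images."""
    return len(tile_hashes(img_a, tile) & tile_hashes(img_b, tile))
```

In practice, two genuinely independent experimental images should share zero identical textured tiles; any overlap is a red flag worth a human forensic look. Screens like this are cheap enough to run on every submission, which is exactly the point.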
3. Reward Replication Studies
The current system rewards novelty over truth. That needs to flip. Funders and journals must incentivize high-quality replications, not just clickbait “breakthroughs.” Think slow science over splashy preprints.
4. Watchdogs With Teeth
Platforms like PubPeer, Retraction Watch, and Factland need real visibility — and protection. Whistleblowers, fraud-hunters, and post-publication reviewers should be treated as essential infrastructure, not trolls.
5. Media Must Stop Outsourcing Credibility
Journalists: “peer-reviewed” is not a magic word. Learn to read methods, spot junk, and follow up when studies get debunked. If you’re quoting research, quote the uncertainty too.
6. Fact Markets & Citizen Juries
Systems like Factland can crowdsource accountability. Imagine if scientists, readers, and experts could stake their reputation or tokens on whether a study holds up 6 months later — and a jury decides based on evidence. That’s adversarial truth-seeking in action.
Bottom Line
Science is still our best tool for understanding reality. But peer review is not divine revelation — it’s a filter. And like all filters, it needs cleaning.
Want better science? Demand radical transparency, real incentives for replication, and a system where truth isn’t decided behind closed doors.
Factland is here to help build it.
This story is part of our ongoing series at Factland.org — where claims face evidence, and facts get their day in court. Subscribe to Bad News for weekly investigations into media malpractice, scientific failure, and the narratives that shape our world.