Attention Is Not All You Need

A manifesto for evidence-based science

You've read the papers. You know the numbers.

More than half of preclinical findings don't replicate. Psychology's "replication crisis" found that only 39% of high-profile studies held up. Amgen scientists could reproduce just 6 of 53 landmark cancer papers.

And yet the original papers? Still cited. Still taught. Still shaping research directions.

Because science doesn't run on evidence anymore. It runs on attention.

The System We Inherited

You didn't create this system. But you live inside it.

Tenure committees count citations. Grant panels check h-indices. Journal prestige determines career trajectory.

The incentives are clear: publish novel findings in high-impact journals. Get cited. Build reputation.

What's not incentivized?

  • Replication studies ("not novel enough")
  • Null results ("not interesting")
  • Methodological critiques ("too negative")
  • Long-term validation ("too slow")

The result: a literature optimized for attention capture, not knowledge accumulation.

What Citations Actually Measure

Be honest: when you cite a paper, are you vouching for its validity?

Or are you citing because it's:

  • The canonical reference everyone uses
  • From a well-known lab
  • Required by reviewers
  • The first result in your literature search
  • What your PI told you to cite

Citations track what the field talks about. Not what the field has verified.

The correlation between citation count and replication success is essentially zero.

The Hidden Cost

You've probably experienced this:

Six months into a project, you discover the foundational paper doesn't replicate. The effect size was inflated. The methods were underspecified. The "n=12" buried in supplementary materials explains everything.

Now multiply that by every lab, every postdoc, every grant cycle.

An estimated $28 billion is wasted annually on irreproducible preclinical research in the US alone (Freedman et al., 2015).

That's not just money. That's careers. Graduate students who spent years on dead ends. Promising researchers who left science because their "failures" were actually the literature's failures.

The system isn't just inefficient. It's burning out the people trying to do real science.

What You Actually Need

Before you build on a finding, you need to know:

  • Has anyone tried to replicate this? What happened?
  • What's the total sample size across all studies?
  • Are there contradictory findings I should know about?
  • How robust is the methodology?
  • Has this lab had retractions in this area?

This information exists. Scattered across preprints, failed replications, PubPeer comments, conference whispers, and that one senior professor who "knows the real story."

But there's no system that surfaces it when you need it.

The Missing Infrastructure

Science has infrastructure for publication (journals), discovery (PubMed), and reputation (citations).

What it doesn't have is infrastructure for verification.

We need a system that:

  • Extracts specific claims from papers, not just metadata
  • Links claims to replication attempts and contradictions
  • Tracks evidence strength over time
  • Flags methodological concerns and lab history
  • Updates continuously as new evidence emerges

Not a layer on top of citations. An alternative to citations.
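
To make the shape of that system concrete, here is a minimal sketch of the record it might track. Every name here (Claim, EvidenceEvent, the fields) is an illustrative assumption, not an existing tool or API:

    from dataclasses import dataclass, field

    @dataclass
    class EvidenceEvent:
        # One piece of evidence about a claim: a replication attempt,
        # a contradiction, or a methodological flag.
        kind: str          # "replication" | "contradiction" | "flag"
        supports: bool     # did this study's result support the claim?
        sample_size: int   # n in this study (0 for a pure flag)
        note: str = ""     # e.g. "dose-response not reported"

    @dataclass
    class Claim:
        # A specific, testable claim extracted from a paper --
        # the unit the system tracks, rather than the paper itself.
        text: str
        source_doi: str
        events: list[EvidenceEvent] = field(default_factory=list)

        def add_evidence(self, event: EvidenceEvent) -> None:
            # The record updates continuously as new evidence emerges.
            self.events.append(event)

        def total_n(self) -> int:
            return sum(e.sample_size for e in self.events)

The design choice that matters: evidence attaches to a specific claim, not to a paper, and the record is append-only, so a claim's standing can keep changing years after publication.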

A Different Scorecard

Imagine if, before committing to a research direction, you could see:

CLAIM

"Compound X inhibits tumor growth via pathway Y"

  • Replications: 3
  • Contradictions: 1
  • Total n: 847
  • Evidence Score: 78

Not "this paper has 500 citations."

But "this claim has been tested 4 times, with 3 supporting and 1 contradicting, across 847 subjects, with a methodological flag about dose-response."

That's the information you need to make a decision. That's what the literature should have given you all along.
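
Purely for illustration, here is one toy way a single score could fall out of those numbers. The formula and its constants are assumptions made up for this sketch; a real scoring model would be far richer, and this one does not reproduce the 78 above:

    import math

    def evidence_score(supporting: int, contradicting: int, total_n: int) -> int:
        # Toy 0-100 score: agreement among replication attempts,
        # discounted when little total data exists.
        attempts = supporting + contradicting
        if attempts == 0:
            return 0  # an untested claim starts unscored, not trusted
        agreement = supporting / attempts
        # Diminishing-returns weight for sample size:
        # n=300 gives ~0.63, n=900 gives ~0.95.
        volume = 1 - math.exp(-total_n / 300)
        return round(100 * agreement * volume)

    # 3 supporting, 1 contradicting, 847 subjects -> 71
    print(evidence_score(3, 1, 847))

The idea the sketch captures: agreement among attempts sets the ceiling, and the total volume of evidence determines how much of that ceiling a claim earns.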

What Changes

If evidence strength becomes visible, the incentives shift.

Replication studies become valuable—they update the evidence graph. Null results matter—they provide signal about what doesn't work. Methodological rigor pays off—it affects the credibility score.

Labs with a track record of reproducible findings become more visible. Flashy-but-fragile results get flagged before they waste a generation of PhDs.

Science starts rewarding what it was supposed to reward all along: findings that hold up.

Why Now

AI is about to make this exponentially worse.

Every major AI model is trained on scientific literature. They learn from papers. They absorb the patterns. They internalize what gets cited, what gets mentioned, what sounds authoritative.

And they have no idea what actually replicates.

To an LLM, a highly-cited paper that failed replication looks identical to a highly-cited paper that's been validated dozens of times. Both get weighted the same. Both shape the model's understanding of reality.

AI isn't fixing the attention-as-truth problem. It's encoding it into neural weights and amplifying it at scale.

When a researcher asks an AI assistant for background on a drug target, the model reaches for whatever was mentioned most often in its training data. Not whatever was most rigorously validated.

When an AI generates a literature review, it pattern-matches on citation patterns—the same flawed proxy we've been using for decades, now automated and accelerated.

We are building the most powerful knowledge tools in human history on a foundation that doesn't distinguish between popular and true.

The old paradigm treated citations as evidence of correctness. AI models inherited this assumption wholesale. And now they're deploying it at a scale no human system ever could.

Every day this continues, the problem compounds. Bad science gets embedded deeper. Irreproducible findings get amplified further. The gap between what AI "knows" and what's actually true grows wider.

This is why scientific due diligence can't wait. Not because it would be nice to have. Because without it, we're automating the propagation of scientific error at unprecedented scale.

The Arc of Progress

Thomas Kuhn observed that science doesn't advance in a straight line. It lurches through paradigm shifts—periods where the old framework cracks under the weight of anomalies it can't explain.

We are in one of those moments now.

The paradigm that citations equal credibility, that prestigious journals equal truth, that the scientific record is self-correcting—this paradigm is cracking. The anomalies are too large to ignore.

Every major leap in human progress has required better tools for separating truth from belief.

The printing press democratized access to knowledge. The scientific method systematized how we generate it. Peer review added a gate for quality.

Each was revolutionary. Each is now insufficient.

The next leap requires infrastructure that doesn't just publish claims or count attention—but systematically tracks what the evidence actually supports.

This is bigger than fixing science. It's about preserving the epistemic foundations that human progress depends on.

The Work Ahead

This isn't about blaming researchers. You're responding rationally to broken incentives.

And this isn't about replacing peer review or journals. It's about adding the layer that's been missing: continuous, systematic tracking of what the evidence actually supports.

The technology to do this at scale finally exists. The question is whether we build it.

Citation counts helped us navigate an ocean of papers.

But attention was only ever a proxy for what we actually needed.

Attention is not all you need.

We're building the evidence layer.

If you're tired of building on sand, join us.

Request Early Access