AI Hallucinated Reference Checker Agent: Catch Fake Citations Before arXiv Bans You
arXiv just announced a new policy: authors who submit papers containing hallucinated references face a 1-year submission ban. As LLMs flood academia with fabricated citations, the line between "AI-assisted" and "AI-hallucinated" research has never been more critical to police.
Tom Dietterich, a leading AI researcher, tweeted the news and the thread exploded. The policy is simple: if your paper cites nonexistent papers, lists co-authors who never actually collaborated, or references dead DOIs, you're banned for a year. No warnings. No appeals.
The reason is obvious: LLMs are really good at generating convincing-looking references that are completely fabricated. Authors paste papers into ChatGPT to "improve the related work section" and unwittingly ship citations to papers that never existed. Reviewers, overloaded and trusting, approve them. The literature gets polluted.
Why This Matters for Every Researcher
arXiv is the preprint server used by every major field in CS, physics, math, and adjacent disciplines. A 1-year ban isn't just embarrassing — it kills your publishing momentum. Conferences, job applications, and grant reviews all depend on your ability to post preprints.
But here's the twist: the same AI that creates these false references can also catch them. A well-designed citation checker agent cross-references every citation against real databases, DOI registries, and Semantic Scholar lookups, flagging suspicious references before you hit submit.
The problem isn't that researchers are dishonest. It's that LLMs are persuasive. An AI citation checker is the only defense that scales.
What an AI Reference Checker Agent Does
A purpose-built reference validation agent runs through every citation in your paper and performs three checks:
| Check | What It Validates | Typical Failure Rate* |
|---|---|---|
| DOI Lookup | DOI resolves to a real paper with matching title and authors | ~15% of AI-generated refs fail this |
| Author Verification | Listed authors actually co-published the cited work | ~25% of hallucinated refs invent fake co-authors |
| Existence Check | The paper, venue, and year combination is real | ~40% of AI-crafted refs cite nonexistent papers |
* Based on audits of AI-generated academic citations (2024–2026).
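The DOI-lookup row above can be prototyped deterministically before any LLM gets involved. The sketch below queries Crossref's public REST API (`https://api.crossref.org/works/<doi>`, which returns the registered metadata as JSON) and fuzzy-matches the registered title against the title as cited. The helper names and the 0.9 similarity threshold are illustrative choices, not part of any standard tool:

```python
import json
import urllib.request
from difflib import SequenceMatcher

CROSSREF_API = "https://api.crossref.org/works/"  # public Crossref REST API

def normalize(title: str) -> str:
    """Lowercase and drop punctuation so formatting quirks don't fail the match."""
    return "".join(ch for ch in title.lower() if ch.isalnum() or ch.isspace()).strip()

def titles_match(claimed: str, found: str, threshold: float = 0.9) -> bool:
    """Fuzzy-compare the cited title against the title the DOI actually resolves to."""
    return SequenceMatcher(None, normalize(claimed), normalize(found)).ratio() >= threshold

def check_doi(doi: str, claimed_title: str) -> dict:
    """Look the DOI up on Crossref and report whether its metadata matches the citation."""
    try:
        with urllib.request.urlopen(CROSSREF_API + doi, timeout=10) as resp:
            meta = json.load(resp)["message"]
    except Exception:
        # Dead or unregistered DOI: the highest-risk signal in the table above.
        return {"doi": doi, "status": "UNRESOLVED"}
    real_title = meta.get("title", [""])[0]
    return {
        "doi": doi,
        "status": "VERIFIED" if titles_match(claimed_title, real_title) else "TITLE_MISMATCH",
        "registered_title": real_title,
    }
```

A `TITLE_MISMATCH` here is exactly the hallucination pattern where a real DOI gets attached to a fabricated title.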
By automating these checks, you save hours of manual verification and eliminate the single biggest risk of arXiv's new ban policy.
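The author-verification check can be sketched the same way against the Semantic Scholar Graph API, which supports looking a paper up by DOI and returning its author list. The surname heuristic below is deliberately crude and is an assumption of this sketch, not an established matching algorithm:

```python
import json
import urllib.request

# Semantic Scholar Graph API; the "DOI:" prefix selects lookup by DOI.
S2_PAPER = "https://api.semanticscholar.org/graph/v1/paper/DOI:{}?fields=title,authors"

def surname(name: str) -> str:
    """Crude surname heuristic: last whitespace-separated token, lowercased."""
    return name.strip().split()[-1].lower() if name.strip() else ""

def missing_authors(cited: list[str], registered: list[str]) -> list[str]:
    """Return cited names whose surnames don't appear on the registered author list."""
    known = {surname(a) for a in registered}
    return [a for a in cited if surname(a) not in known]

def check_authors(doi: str, cited_authors: list[str]) -> list[str]:
    """Fetch the real author list for a DOI and report cited authors who aren't on it."""
    with urllib.request.urlopen(S2_PAPER.format(doi), timeout=10) as resp:
        data = json.load(resp)
    registered = [a["name"] for a in data.get("authors", [])]
    return missing_authors(cited_authors, registered)
```

Any name returned by `check_authors` is a candidate for the invented-co-author failure mode in the table.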
Build Your Own AI Citation Checker on Telegram
The following prompt turns any Telegram bot powered by OpenClaw into a hallucinated reference checker. Paste your citation list (or your whole paper), and let the agent do the digging.
You are an AI academic reference auditor.
## Your Job
Analyze every citation in the user's paper or reference list and flag any that appear hallucinated. A hallucinated reference is one that:
- Cites a paper that does not exist
- Lists co-authors who have never co-published
- Uses an incorrect DOI or a DOI that redirects to the wrong paper
- References a paper at a venue where it was never published
- Combines real author names with a fabricated title or venue
## Workflow
1. Extract every citation from the user's input (recognize BibTeX, plain text, APA, IEEE, or Markdown).
2. For each unique citation, search the web to verify:
- The paper exists at the stated venue and year
- The DOI resolves correctly to the paper
- The authors listed actually co-authored it
- The title matches the real paper
3. Return a structured audit:
## Output Format
### ✅ Verified Citations
List citations that checked out. Include the DOI or URL.
### ❌ Hallucinated Citations
For each flagged citation, provide:
- **The citation as written**
- **Why it's suspicious** (nonexistent paper, fake co-author, wrong venue, etc.)
- **Evidence** (search results, where the DOI actually resolves)
- **Risk level**: HIGH (nonexistent paper) / MEDIUM (author mismatch) / LOW (venue discrepancy)
### 🟡 Uncertain Citations
Citations where you couldn't find conclusive evidence. Note what was unclear and why.
## Critical Rules
- Be conservative: if you can't verify it, mark it as uncertain, not verified.
- Include sources and URLs for every conclusion.
- Flag citations that reference real papers but with wrong page numbers, volumes, or years as MEDIUM risk.
- If the user provides a full paper body, also check whether in-text claims match what the cited papers actually say.
## Tone
Professional, precise, and constructive. The goal is to help the researcher catch errors before submission — not to embarrass them.
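Step 1 of the workflow above (pulling citations out of arbitrary input) can be prototyped without an LLM at all, at least for the DOI subset. A minimal sketch; the regex is a practical heuristic, not the full DOI specification:

```python
import re

# Matches DOIs as commonly registered (prefix "10.", 4-9 digit registrant, suffix).
# Braces, quotes, and whitespace end the match so BibTeX fields parse cleanly.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[^\s\"'<>{}]+")

def extract_dois(text: str) -> list[str]:
    """Pull candidate DOIs out of a bibliography in any format (BibTeX, APA, IEEE...)."""
    # Strip trailing punctuation that citation styles append after the DOI.
    return [m.rstrip(".,;)") for m in DOI_PATTERN.findall(text)]
```

Each extracted DOI can then be fed to the lookup checks described earlier; citations with no DOI at all fall back to title-and-venue search, which is where an agent earns its keep.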
How to Use It
- Deploy on GetClawCloud — launch an OpenClaw bot in under 2 minutes. No VPS, no Docker, no cloud config.
- Paste the prompt — copy the prompt above into your Telegram bot's system prompt or SKILL.md file.
- Send to test — paste a suspicious reference list or your paper's bibliography and let the agent verify every citation.
Ship your paper with confidence.
Deploy your AI citation checker on GetClawCloud — no server setup, no credit card required.