Who Owns the Code Your AI Wrote? Build an Audit Agent on Telegram
Claude Code's source leaked — and nobody could prove who owned it. The same question applies to every AI-written line in your codebase. Here's how to audit it with a single Telegram prompt.
Three stories hit Hacker News this week that, taken together, form a warning every developer should hear:
- Ghostty is leaving GitHub — Mitchell Hashimoto, GitHub user #1299 (joined 2008), is moving his entire open-source ecosystem off the platform. After 18 years.
- Who owns the code Claude Code wrote? — A legal analysis that went viral (256+ points) examining what happens when an AI writes most of its own codebase and Anthropic then issues DMCA takedowns for it.
- CVE-2026-3854: GitHub RCE vulnerability — A single `git push` could compromise millions of repositories. Discovered using AI.
The thread connecting them? The question of who really owns and controls your code is suddenly much harder to answer.
The Gray Area Nobody's Talking About
When Anthropic accidentally published 512,000 lines of Claude Code's source code (March 31, 2026), the codebase was mirrored across GitHub before sunrise. One developer rewrote the entire thing in Python with an AI tool. The repository hit 100,000 stars in a single day — the fastest in GitHub history.
Then came the DMCA takedowns. And then came the question nobody had a clean answer to: who actually owned that code?
This isn't a hypothetical. Every team shipping AI-assisted code faces three unanswered questions:
- Ownership: Did a human make enough creative decisions for copyright to attach? (USCO guidance says "no" for purely AI-generated output)
- Contamination: Did your AI model train on GPL-licensed code and quietly inject copyleft requirements into your proprietary codebase?
- Employment: Does your employment contract already assign AI-assisted work to your employer — even if copyright is unclear?
Most teams are ignoring this entirely. The ones who aren't are building audit workflows into their pipeline.
How an AI Ownership Audit Agent Works
Instead of waiting for a lawsuit, you can build an AI agent that audits every piece of AI-generated code entering your codebase. The agent:
- Scans code for license headers, dependency chains, and attribution comments
- Researches whether AI-generated patterns might be derived from known open-source implementations
- Flags high-risk patterns: GPL dependencies, unlicensed generated code, missing attributions
- Documents a compliance trail so you can prove due diligence
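The first of those steps, scanning for license and attribution markers, can be sketched in a few lines. The marker list here is illustrative, not exhaustive; a production scanner would cover far more license phrasings:

```python
import re

# Markers worth flagging; extend this illustrative list for your stack.
LICENSE_PATTERNS = {
    "spdx": re.compile(r"SPDX-License-Identifier:\s*([\w.+-]+)"),
    "gpl": re.compile(r"GNU (Affero )?General Public License", re.I),
    "copyright": re.compile(r"Copyright\s+(\(c\)|©)?\s*\d{4}", re.I),
}

def scan_source(text: str) -> dict:
    """Return which license/attribution markers appear in a source file."""
    hits = {}
    for name, pattern in LICENSE_PATTERNS.items():
        match = pattern.search(text)
        if match:
            hits[name] = match.group(0)
    return hits

sample = "# SPDX-License-Identifier: GPL-3.0-only\nprint('hi')\n"
print(scan_source(sample))  # → {'spdx': 'SPDX-License-Identifier: GPL-3.0-only'}
```

The agent's value is in what happens after a hit: researching whether the flagged marker implies obligations for your codebase. The regex pass just decides what is worth researching.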
The best part? You don't need a complex pipeline. One prompt in Telegram, connected to an OpenClaw agent with web search, does the job.
The Prompt: AI Code Ownership & License Audit Agent
Copy-paste this into your OpenClaw-powered Telegram bot, then send it code snippets, file paths, or dependency lists for analysis.
How to use:
- Deploy OpenClaw on GetClawCloud (1-minute setup)
- Connect it to Telegram (built-in pairing)
- Send this prompt as your first message
- Then send code, licenses, or scenarios — the agent audits them
💡 Works in any OpenClaw agent with web search. Paste, send your first audit target, and get a structured report.
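If you want a starting point before writing your own, a minimal version of such an audit prompt might read as follows. The structure and wording are illustrative, not a canonical OpenClaw prompt — tune the checklist to your own policies:

```
You are an AI code ownership and license audit agent. For every code
snippet, file, or dependency list I send, produce a structured report:

1. License signals — SPDX identifiers, license headers, attribution
   comments, and anything that is missing.
2. Contamination risk — dependencies or patterns under GPL/AGPL or other
   copyleft licenses; search the web for matching open-source code.
3. Ownership analysis — how much human creative input the code shows,
   and whether copyright likely attaches under current USCO guidance.
4. Action items — what to document, relicense, rewrite, or escalate.

Rate overall risk Low / Medium / High and cite any sources you used.
```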
Real Scenarios This Agent Handles
🔬 "Audit this file"
Send a code file. The agent scans for license headers, checks dependencies, and flags any patterns that match known open-source implementations. Use it before every commit.
📦 "Check my dependencies"
Paste a package.json or Cargo.toml. The agent checks every dependency's license, cross-references SPDX identifiers, and flags GPL/AGPL contamination risks.
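The copyleft cross-reference can be approximated locally before involving the agent. In this sketch the license map is supplied inline; in practice it would come from a registry lookup such as `npm view <pkg> license`, since package.json maps packages to versions, not licenses (the copyleft set is an illustrative SPDX subset):

```python
# Copyleft identifiers to flag (illustrative subset of SPDX IDs).
COPYLEFT = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only", "LGPL-3.0-only"}

def flag_copyleft(licenses: dict[str, str]) -> list[str]:
    """Given {package: SPDX license id}, return packages needing review."""
    return sorted(pkg for pkg, lic in licenses.items() if lic in COPYLEFT)

# In a real run the license map comes from a registry lookup,
# not from the dependency manifest itself.
licenses = {"left-pad": "MIT", "readline": "GPL-3.0-only", "express": "MIT"}
print(flag_copyleft(licenses))  # → ['readline']
```

An exact-match check like this misses dual licenses and legacy identifiers ("GPL-3.0+", "GPLv2"), which is precisely the fuzziness the web-searching agent is there to resolve.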
⚖️ "I used Claude Code to write this module — what's my risk?"
Describe how the code was generated (model, prompt, and how much you modified the output). The agent evaluates human creative input and estimates copyrightability under current guidance.
🔄 "Compare these two snippets"
Provide an AI-generated snippet and a known OSS file. The agent compares them for structural similarity and estimates derivative work risk.
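A crude local approximation of that comparison normalizes identifiers away and diffs the token streams, so renamed variables don't hide structural similarity. A high score is a signal for human review, not a legal conclusion; the tokenizer and keyword list here are deliberately simplistic:

```python
import difflib
import re

KEYWORDS = {"def", "return", "if", "else", "for", "while", "in"}

def tokens(code: str) -> list[str]:
    """Tokenize crudely; collapse identifiers into a single placeholder."""
    raw = re.findall(r"\w+|[^\w\s]", code)
    return [t if t in KEYWORDS or not t[0].isalpha() else "ID" for t in raw]

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] between the two normalized token streams."""
    return difflib.SequenceMatcher(None, tokens(a), tokens(b)).ratio()

# Structurally identical despite different names:
ai_snippet = "def add(x, y): return x + y"
oss_snippet = "def plus(a, b): return a + b"
print(round(similarity(ai_snippet, oss_snippet), 2))  # → 1.0
```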
📋 "Generate an SBOM compliance report"
The agent builds a software bill of materials with license annotations — useful for due diligence, investor audits, and insurance questionnaires.
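A minimal bill of materials along those lines can be emitted as JSON. The field names below loosely follow SPDX conventions; a real SBOM generator emits the full schema with checksums and relationship data:

```python
import json
from datetime import date

def make_sbom(component: str, deps: dict[str, str]) -> dict:
    """Build a minimal SPDX-style bill of materials with license notes."""
    return {
        "spdxVersion": "SPDX-2.3",
        "name": component,
        "created": date.today().isoformat(),
        "packages": [
            {"name": pkg, "licenseDeclared": lic}
            for pkg, lic in sorted(deps.items())
        ],
    }

sbom = make_sbom("my-service", {"requests": "Apache-2.0",
                                "readline": "GPL-3.0-only"})
print(json.dumps(sbom, indent=2))
```

The agent's version adds the annotation layer on top: which declared licenses carry obligations, and which packages lack a declaration at all.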
Why This Matters Now
The legal landscape is shifting fast:
- US Copyright Office (March 2025): AI-generated content with insufficient human authorship is not copyrightable. "Prompting alone" doesn't qualify.
- EU AI Act (August 2025): Requires transparency for AI training data — providers must disclose what licensed code was used for training.
- Ongoing lawsuits: Multiple class-action suits against GitHub Copilot, OpenAI, and others for training on GPL code without attribution.
- CVE-2026-3854: GitHub's own infrastructure was compromised — if the platform hosting your code can be breached, the provenance of every commit matters.
Level Up: Automated Pre-Commit Auditing
Want to catch issues before they reach production? The same audit logic can run automatically:
- Pre-commit hook: Pipe new AI-generated code through the agent before merging
- Daily dependency scan: Schedule the agent to check your dependency tree every morning — delivered to your Telegram
- Weekly compliance report: Cron job runs the full audit on your repository's AI-generated modules and posts a summary
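The pre-commit idea can be sketched as a small hook script; the copyleft regex and the block-on-hit policy are illustrative, and a real hook would likely hand the flagged files to the agent rather than reject outright:

```python
"""Sketch of a pre-commit license gate. Save as .git/hooks/pre-commit,
make it executable, and exit nonzero when flagged() returns hits."""
import re
import subprocess

COPYLEFT = re.compile(r"GPL-[23]\.0|GNU (Affero )?General Public License")

def staged_files() -> dict[str, str]:
    """Map each staged filename to its working-tree contents."""
    names = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True,
    ).stdout.split()
    files = {}
    for name in names:
        try:
            with open(name, encoding="utf-8", errors="ignore") as f:
                files[name] = f.read()
        except OSError:
            pass  # deleted or unreadable file; skip it
    return files

def flagged(files: dict[str, str]) -> list[str]:
    """Filenames whose contents match a copyleft marker."""
    return sorted(f for f, text in files.items() if COPYLEFT.search(text))

# Demo with in-memory files (a real hook calls flagged(staged_files())
# and exits with status 1 on any hit to block the commit):
demo = {"a.py": "# SPDX-License-Identifier: GPL-3.0-only\n", "b.py": "x = 1\n"}
print(flagged(demo))  # → ['a.py']
```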
With OpenClaw's cron scheduling, you can automate recurring audits and have them delivered directly to your Telegram inbox — no dashboards to check.
Getting Started
Three steps, under two minutes:
- Launch an OpenClaw agent on GetClawCloud — no VPS, no Docker, no config
- Connect Telegram — one-click pairing, works out of the box
- Paste the audit prompt above and send your first code snippet for analysis
The same agent can handle code audits, research, monitoring, and more — it's a single OpenClaw deployment that grows with your workflow.
Deploy Your Audit Agent in 1 Minute
Launch OpenClaw on the cloud, connect Telegram, and paste the audit prompt. No server setup, no complex pipelines.
Start with GetClawCloud →