AI Agent Skills Composer: Discover & Stack AI Coding Agent Enhancements
Addy Osmani's "Agent Skills" repo just crossed 26,000 stars on GitHub — and for good reason. It's not another "AI rules" collection. It's a structured framework of 20+ skills that turn AI coding agents from "prompt-and-pray" into something resembling a senior engineer. The problem? Most developers don't know which skills apply to their work, or how to compose them into a coherent workflow. That's where this agent comes in.
Let's be real: the default behavior of any AI coding agent is to take the shortest path to "done." You ask for a feature, it writes the feature. No spec. No tests. No review. No launch checklist. It produces code, declares victory, and moves on.
Addy Osmani calls this out in his Agent Skills essay — what's missing from AI coding agents is everything that makes a senior engineer valuable. The spec that forces you to think before you write. The test that defines correctness. The review that catches assumptions. The ship checklist that prevents production incidents.
Agent Skills solves this by injecting structured workflows into the agent's context — not essays about best practices, but step-by-step processes with checkpoints and exit criteria. The repo ships 20 skills organized around six SDLC phases: Define → Plan → Build → Verify → Review → Ship.
But here's the challenge Agent Skills doesn't solve: discovery and composition. The repo is a library. You need a librarian — someone who knows your stack, your team's workflow, your project's phase, and can recommend exactly the right skills in the right order. That's the agent we're building today.
What's Trending on HN — and Why It Matters
Agent Skills landed on Hacker News in a moment when the community is ready for it:
📌 Agent Skills (26K+ stars) — Addy Osmani
Score: 129 points · Trending
A framework of 20 structured skills for Claude Code and compatible AI coding agents. Each skill is a markdown file with YAML frontmatter containing activation rules, instructions, and exit criteria. The skills enforce SDLC discipline that AI agents skip by default: spec writing, test-first development, design review, rollback planning, and production readiness checks.
Related trend: "I am worried about Bun" (422 points)
The zig→rust port of Bun surfaces deep concerns about runtime reliability, API churn, and single-point-of-failure risk in tooling. The lesson: when foundational tools change, you need structured processes, not just trust.
Related trend: Microsoft Edge passwords in memory (433 points)
A security researcher found Edge stores all passwords in plain text in memory — "by design." The lesson: AI agents that review your infrastructure decisions with a critical eye are not optional.
These stories share a thread: default behavior is not safe behavior. Agent Skills provides the scaffolding to force better defaults. The agent we're building provides the discovery layer — so you don't have to read 20 markdown files to figure out which three skills your project actually needs today.
What the Agent Skills Composer Actually Does
This agent acts as a skill librarian and composer. It doesn't replace Agent Skills — it makes them usable:
🔍 Phase 1: Audit Your Current Workflow
The agent asks about your project type (web app? CLI tool? library?), your current AI coding tool (Claude Code? Cursor? Codex?), your team size, and where your pain points are — quality issues? slow reviews? shipping bugs? no testing habits?
📚 Phase 2: Discover Relevant Skills
Based on your profile, the agent indexes the Agent Skills repo and recommends 3-5 specific skills. It explains what each does, which SDLC phase it covers, and what problem it solves for you. For example: "You're shipping without specs → start with /spec. Your reviews take too long → add /review and /ship next."
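Conceptually, the recommendation step is a mapping from pain points to skills. A sketch of that mapping (illustrative only — not the agent's actual ruleset, and skill names beyond those mentioned above are assumptions):

```yaml
# Pain-point → skill mapping (illustrative sketch)
no-specs:        [spec]          # "building the wrong thing"
slow-reviews:    [review, ship]  # reviews bottleneck shipping
production-bugs: [test, ship]    # correctness + launch checks
no-test-habit:   [test]          # start with the smallest habit
```

The agent intersects your reported pain points against a table like this, then trims the union down to 3-5 skills so the stack stays adoptable.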
🧩 Phase 3: Compose a Skill Stack
The agent helps you order skills into a daily workflow. Which run automatically? Which do you trigger manually? Which run in sequence? It generates a configuration you can paste into your AI coding tool.
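The generated stack might look something like this — a sketch only, with field names chosen for illustration rather than taken from the agent's literal output format:

```yaml
# Illustrative skill-stack config (field names are assumptions)
skills:
  - name: spec
    trigger: manual       # run at the start of each task
    phase: define
  - name: test
    trigger: auto         # fires before any implementation step
    phase: build
  - name: review
    trigger: pre-commit   # self-review checklist before each commit
    phase: review
  - name: ship
    trigger: pre-deploy   # launch readiness gate
    phase: ship
```

The point of the ordering: each skill's exit criteria become the entry condition for the next phase, so the stack reads top to bottom as your day does.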
🔄 Phase 4: Monitor Effectiveness
After a week, the agent checks back: are skills running? Which ones are helping? Which are being ignored? It adjusts recommendations based on actual usage patterns.
The Prompt
Copy this prompt and paste it into your OpenClaw-powered Telegram bot. The agent will guide you through setting up your Agent Skills stack.
💡 Works with any OpenClaw agent that has web search access. The agent indexes the Agent Skills repo in real-time, so it always recommends the latest skills.
Real-World Example: Solo Web Developer
Here's what happens when a solo developer building a Rails app runs this agent:
🎯 Your Skill Pack: Solo Rails Foundation
- /spec — Define what you're building before you write code. Solves your "building the wrong thing" problem.
- /test — Always write the failing test first. Solves your "bugs in production" problem.
- /review — Self-review checklist before PR. Solves your "nobody else reviews my code" problem.
- /ship — Launch readiness checklist. Solves your "deploying alone is scary" problem.
📋 Workflow:
- Start session: run /spec to scope the task
- Per unit of work: run /test → write the failing test → agent implements → verify the test passes
- Before commit: /review generates self-review prompts
- Before deploy: /ship validates readiness
💡 Tip: Don't install all 20 skills. Start with /test for one week. If tests improve, add /review. Add /spec and /ship only after the review habit is solid. Layering skills too fast is how they get ignored.
After a week, the user comes back and says: "I used /test and /review daily. I caught three edge cases I would have missed. But /spec felt heavy for small changes." The agent adjusts: use /spec only for features larger than 2 pomodoros, keep /test and /review for everything.
How to Use It
- Deploy OpenClaw on GetClawCloud — one click, no server config
- Paste the prompt above into your Telegram bot — the agent will ask about your setup and recommend skills
- Send to test — describe your project and pain points, and the agent returns a curated skill pack with workflow
Going Deeper: Custom Skills
Once you're comfortable with the starter pack, you can create your own skills. The format is straightforward:
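A minimal sketch of a custom skill file, following the structure described earlier (markdown with YAML frontmatter). The specific field names and the skill itself are illustrative — check the Agent Skills repo for the exact frontmatter schema:

```markdown
---
name: migration-check
description: Verify database migrations are reversible before merge
activation: manual    # illustrative: triggered via /migration-check
---

# Migration Check

1. List every migration added in this branch.
2. For each, confirm a working rollback path exists.
3. Run each migration up, then down, against a scratch database.

## Exit criteria

- Every migration applies and rolls back cleanly.
- Evidence: paste the output of the up/down run into the PR description.
```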
The key: exit criteria and evidence. Without them, a skill is just an essay. With them, it's a process you can verify. That's the whole Agent Skills philosophy in a single template.
Why This Beats Dumping 20 Skills at Once
Every developer who encounters Agent Skills has the same first reaction: "I should install all of them." That's the wrong move. Here's why:
- Cognitive load kills adoption. 20 skills = 20 things to remember = you remember none of them.
- Skills conflict in practice. /spec before /plan? /plan before /build? The order matters and the default may not fit your workflow.
- Context window tax. Each skill injects text into the agent's context. 20 skills means less room for actual code.
- Skills are phase-dependent. During maintenance, you don't need /spec. During greenfield, you don't need /ship yet.
A skill composer agent solves all of these. It treats the skills repo as a library, not a manual. It curates. It phases. It adjusts. It acts as your librarian so you can stay in flow.
Who This Actually Helps
- Solo developers — no senior engineer to review your code? Let the skill stack be your reviewer.
- Small teams (2-5) — standardize on shared skill stacks so everyone ships with the same quality bar.
- Tech leads & CTOs — define team-wide skill requirements without micromanaging individual workflows.
- Indie hackers — ship faster with fewer production incidents by enforcing review and ship checklists.
- AI tool skeptics — the structured approach fixes what makes AI coding agents unreliable: the lack of process.
Get Your Skill Stack in 2 Minutes
Deploy OpenClaw on GetClawCloud, paste the prompt, and describe your project. The agent will return a curated set of skills with a workflow that actually fits your day. No more dumping 20 README files into your context.
Start on GetClawCloud →