← Back to Blog

AI Agent Skills Composer: Discover & Stack AI Coding Agent Enhancements

Addy Osmani's "Agent Skills" repo just crossed 26,000 stars on GitHub — and for good reason. It's not another "AI rules" collection. It's a structured framework of 20+ skills that turn AI coding agents from "prompt-and-pray" into something resembling a senior engineer. The problem? Most developers don't know which skills apply to their work, or how to compose them into a coherent workflow. That's where this agent comes in.

Published by GetClawCloud · May 5, 2026

Let's be real: the default behavior of any AI coding agent is to take the shortest path to "done." You ask for a feature, it writes the feature. No spec. No tests. No review. No launch checklist. It produces code, declares victory, and moves on.

Addy Osmani calls this out in his Agent Skills essay — what's missing from AI coding agents is everything that makes a senior engineer valuable. The spec that forces you to think before you write. The test that defines correctness. The review that catches assumptions. The ship checklist that prevents production incidents.

Agent Skills solves this by injecting structured workflows into the agent's context — not essays about best practices, but step-by-step processes with checkpoints and exit criteria. The repo ships 20+ skills organized around six SDLC phases: Define → Plan → Build → Verify → Review → Ship.

The difference between a skill that works and a markdown file that doesn't: skills are workflows with exit criteria, not essays about best practices. Process over prose. Every time.

But here's the challenge Agent Skills doesn't solve: discovery and composition. The repo is a library. You need a librarian — someone who knows your stack, your team's workflow, your project's phase, and can recommend exactly the right skills in the right order. That's the agent we're building today.

What's Trending on HN — and Why It Matters

Agent Skills landed on Hacker News in a moment when the community is ready for it:

📌 Agent Skills (26K+ stars) — Addy Osmani

Score: 129 points · Trending
A framework of 20 structured skills for Claude Code and compatible AI coding agents. Each skill is a markdown file with YAML frontmatter containing activation rules, instructions, and exit criteria. The skills enforce SDLC discipline that AI agents skip by default: spec writing, test-first development, design review, rollback planning, and production readiness checks.

Related trend: "I am worried about Bun" (422 points)

The Zig→Rust port of Bun surfaced deep concerns about runtime reliability, API churn, and single points of failure in tooling. The lesson: when foundational tools change, you need structured processes, not just trust.

Related trend: Microsoft Edge passwords in memory (433 points)

A security researcher found Edge stores all passwords in plain text in memory — "by design." The lesson: AI agents that review your infrastructure decisions with a critical eye are not optional.

These stories share a thread: default behavior is not safe behavior. Agent Skills provides the scaffolding to force better defaults. The agent we're building provides the discovery layer — so you don't have to read 20 markdown files to figure out which three skills your project actually needs today.

What the Agent Skills Composer Actually Does

This agent acts as a skill librarian and composer. It doesn't replace Agent Skills — it makes them usable:

🔍 Phase 1: Audit Your Current Workflow

The agent asks about your project type (web app? CLI tool? library?), your current AI coding tool (Claude Code? Cursor? Codex?), your team size, and where your pain points are — quality issues? slow reviews? shipping bugs? no testing habits?

📚 Phase 2: Discover Relevant Skills

Based on your profile, the agent indexes the Agent Skills repo and recommends 3-5 specific skills. It explains what each does, which SDLC phase it covers, and what problem it solves for you. For example: "You're shipping without specs → start with /spec. Your reviews take too long → add /review and /ship next."
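Under the hood, the discovery step is essentially a pain-point-to-skill lookup. Here's a minimal sketch in Python — the mapping mirrors the table in the full prompt further down, and the skill names follow the repo's slash-command convention; treat both as illustrative, not as the repo's authoritative list:

```python
# Sketch: map a user's top pain points to a starter skill pack.
# The mapping is an assumption based on the table in the prompt below,
# not the agent-skills repo's canonical list.

PAIN_TO_SKILLS = {
    "shipping bugs": ["/verify", "/test"],
    "slow reviews": ["/review", "/code-simplify"],
    "spec ambiguity": ["/spec"],
    "shipping anxiety": ["/ship"],
    "no testing culture": ["/test"],
    "building wrong thing": ["/plan"],
}

def recommend(pain_points: list[str], limit: int = 5) -> list[str]:
    """Return a deduplicated starter pack, capped at 5 skills."""
    pack: list[str] = []
    for pain in pain_points:
        for skill in PAIN_TO_SKILLS.get(pain.lower(), []):
            if skill not in pack:
                pack.append(skill)
    return pack[:limit]

print(recommend(["shipping bugs", "slow reviews"]))
# → ['/verify', '/test', '/review', '/code-simplify']
```

The cap at 5 reflects the prompt's "3-5 is the sweet spot" principle: more than that and the skills get ignored.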

🧩 Phase 3: Compose a Skill Stack

The agent helps you order skills into a daily workflow. Which run automatically? Which do you trigger manually? Which run in sequence? It generates a configuration you can paste into your AI coding tool.
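The generated configuration can be sketched as a small serializer. The `required`/`optional`/`order` schema below follows the example in the full prompt; the actual settings file your tool reads may use a different shape, so treat this as a template:

```python
import json

# Sketch: emit a skills configuration like the one the agent pastes
# back. The schema is the one shown in the prompt's example output
# ("paste into your .claude/settings.json or equivalent") — verify it
# against your own tool's settings format.

def build_config(required: list[str], optional: list[str]) -> str:
    config = {
        "skills": {
            "required": required,
            "optional": optional,
            "order": required,  # run required skills in the order given
        }
    }
    return json.dumps(config, indent=2)

print(build_config(["/spec", "/test", "/review", "/ship"], ["/plan"]))
```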

🔄 Phase 4: Monitor Effectiveness

After a week, the agent checks back: are skills running? Which ones are helping? Which are being ignored? It adjusts recommendations based on actual usage patterns.

The Prompt

Copy this prompt and paste it into your OpenClaw-powered Telegram bot. The agent will guide you through setting up your Agent Skills stack.

You are an AI Agent Skills Composer. Your job is to help the user discover, evaluate, and compose Agent Skills (from Addy Osmani's agent-skills repo or similar systems) into a customized workflow for their AI coding agent.

## Context

Agent Skills is a framework of structured workflows that fix the default behavior of AI coding agents. Instead of "write the code and call it done," skills enforce: spec → plan → build in vertical slices → verify with tests → review → ship safely.

The repo has 20+ skills. Most users don't need all of them. Some skills conflict. Some are only useful at specific project phases. Your job is to curate.

## Workflow

### Phase 1: Discover the User's Context

Ask the user to describe:

1. **Project type** — web app, mobile app, CLI tool, library, game, data pipeline, infrastructure-as-code, etc.
2. **AI coding tool(s)** — Claude Code, Cursor, Copilot, Codex, Aider, or something else
3. **Team size** — solo, small team (2-5), larger team (6+)
4. **Pain points** — pick your top 1-2 from:
   - Quality issues (bugs shipped, edge cases missed)
   - Slow reviews (PRs sit for days)
   - No testing culture (tests are an afterthought)
   - Shipping anxiety (deploying feels risky)
   - Spec ambiguity (building the wrong thing)
   - Tool lock-in (worried about agent gatekeeping/refusals)
5. **Project phase** — greenfield, active development, maintenance, rewrite

### Phase 2: Recommend Skills (3-5)

Based on the user's context, recommend a starter pack of skills. For each skill, explain:

- What it does (in one sentence)
- Which SDLC phase it belongs to (Define/Plan/Build/Verify/Review/Ship)
- The specific problem it addresses for their context

Recommended skill mapping (not exhaustive — search the agent-skills repo for the current list):

| Pain Point | Primary Skill(s) | Phase |
|------------------------|------------------|--------|
| Shipping bugs | /verify + /test | Verify |
| Slow reviews | /review + /code-simplify | Review |
| Spec ambiguity | /spec | Define |
| Shipping anxiety | /ship | Ship |
| No testing culture | /test | Verify |
| Building wrong thing | /plan | Plan |
| Tool lock-in fear | /build (DIY workflow + portable skills) | Build |

### Phase 3: Compose the Workflow

Help the user arrange the selected skills into a daily workflow. Example output:

> **Your Agent Skills Workflow — Solo Web Dev**
>
> **Morning (automatic):**
> 1. /spec — Write or update spec for today's task before writing code
> 2. /plan — Break the spec into reviewable chunks
>
> **During coding (manual trigger):**
> 3. /build — Implement in vertical slices (trigger per chunk)
> 4. /test — Write failing test, implement, watch it pass (trigger per slice)
>
> **Before PR (automatic check):**
> 5. /review — Generate review checklist based on spec
> 6. /code-simplify — Optional: simplify complex implementations
>
> **Before deploy (manual):**
> 7. /ship — Run launch checklist and rollback plan
>
> **Configuration (paste into your .claude/settings.json or equivalent):**
>
> ```
> {
>   "skills": {
>     "required": ["/spec", "/test", "/review", "/ship"],
>     "optional": ["/code-simplify", "/plan"],
>     "order": ["/spec", "/test", "/review", "/ship"]
>   }
> }
> ```
>
> **Note:** Create separate skill packs for "new feature" vs "bug fix" vs "refactor" — they need different skills.

### Phase 4: Follow-up (1 week later)

When the user returns:

1. Ask how many of the skills they actually used
2. Which ones generated real value? Which felt like overhead?
3. Recommend adjustments: add skills, remove skills, reorder, or change auto/manual triggers

## Key Principles

- **Process over prose.** Never suggest an essay-length set of "AI rules." Always suggest workflows with exit criteria. If a skill is just a wall of text, flag it as ineffective.
- **3-5 is the sweet spot.** More than 5 and the user will ignore all of them. Fewer than 3 and there's no meaningful change.
- **Start with the pain, not the skill.** If the user's problem is "no testing culture," recommending /spec before /test is wrong — they need proof-of-value from tests first.
- **Progressive onboarding.** Don't recommend /ship (production safety) in week 1. Get them using /test first, then build to /review and /ship.
- **Portability matters.** Favor skills that work across Claude Code, Cursor, and Codex. If a skill is vendor-locked, note it as a risk.
- **Lift prompt examples from the source.** If a skill references real prompts or workflows, show the user a concrete example so they understand the format.

## Output Format

Always structure responses with Telegram-friendly formatting (bold for emphasis, no tables unless essential, clear sections).

**🎯 Your Skill Pack: [Name]**
- Skill 1: [one-liner]
- Skill 2: [one-liner]
- Skill 3: [one-liner]

**📋 Workflow:** Step-by-step daily flow

**⚙️ Config:** Code block with configuration

**💡 Tip:** One actionable piece of advice specific to their context

---

Start by asking the user about their project and AI coding setup.

💡 Works with any OpenClaw agent that has web search access. The agent indexes the Agent Skills repo in real-time, so it always recommends the latest skills.

Real-World Example: Solo Web Developer

Here's what happens when a solo developer building a Rails app runs this agent:

🎯 Your Skill Pack: Solo Rails Foundation

  - /spec — scope each task before writing code
  - /test — failing test first, then implementation
  - /review — self-review checklist before every commit
  - /ship — deploy readiness check

📋 Workflow:

  1. Start session: run /spec to scope the task
  2. Per unit of work: run /test → write the failing test → agent implements → verify pass
  3. Before commit: /review generates self-review prompts
  4. Before deploy: /ship validates readiness

💡 Tip: Don't install all 20 skills. Start with /test for one week. If tests improve, add /review. Add /spec and /ship only after the review habit is solid. Layering skills too fast is how they get ignored.

After a week, the user comes back and says: "I used /test and /review daily. I caught three edge cases I would have missed. But /spec felt heavy for small changes." The agent adjusts: use /spec only for features larger than 2 pomodoros, keep /test and /review for everything.

How to Use It

  1. Deploy OpenClaw on GetClawCloud — one click, no server config
  2. Paste the prompt above into your Telegram bot — the agent will ask about your setup and recommend skills
  3. Send to test — describe your project and pain points, and the agent returns a curated skill pack with workflow

⚠️ Important: The Agent Skills repo targets Claude Code specifically. Some skills use Claude Code slash commands (/spec, /plan, /build, etc.). The prompts and workflows in this agent are Claude Code-focused, but the concepts (write specs first, test before implementation, review before merge) are universal. If you use Cursor or Codex, focus on the workflow principles, not the slash commands.

Going Deeper: Custom Skills

Once you're comfortable with the starter pack, you can create your own skills. The format is straightforward:

```
---
name: "your-skill-name"
description: "One-line description of what this skill does"
trigger: "automatic|manual"
phase: "Define|Plan|Build|Verify|Review|Ship"
---

# Skill Name

## When to use
When [specific condition] occurs.

## Steps
1. [Step 1 with checkpoints]
2. [Step 2 with checkpoints]
3. [Step 3 with checkpoints]

## Exit criteria
- [ ] Criteria 1 (must be verified by human or tool)
- [ ] Criteria 2
- [ ] Criteria 3

## Evidence
Leave evidence of each step in [location]. This is how reviewers verify the skill ran.
```

The key: exit criteria and evidence. Without them, a skill is just an essay. With them, it's a process you can verify. That's the whole Agent Skills philosophy in a single template.
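That philosophy is mechanically checkable. Here's a minimal sketch in Python that audits a skill file for the sections the template above calls for — the section names are taken from that template, so adjust them if your skill format differs:

```python
# Sketch: check that a skill file is a process, not an essay, by
# looking for the sections the custom-skill template requires.
# Section names are assumptions taken from the template in this post.

REQUIRED_SECTIONS = ["## Steps", "## Exit criteria", "## Evidence"]

def audit_skill(text: str) -> list[str]:
    """Return the missing required sections (empty list = passes)."""
    return [s for s in REQUIRED_SECTIONS if s not in text]

essay = "# My Skill\nAlways write good code and think carefully."
print(audit_skill(essay))
# → ['## Steps', '## Exit criteria', '## Evidence']  (it's an essay)
```

A wall of advice with no steps, exit criteria, or evidence fails the audit — which is exactly the "process over prose" test in prose form.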

Why This Beats Dumping 20 Skills at Once

Every developer who encounters Agent Skills has the same first reaction: "I should install all of them." That's the wrong move. Here's why:

  - Context overload. With more than five active skills, you'll end up ignoring all of them.
  - Conflicts. Some skills overlap or contradict each other.
  - Phase mismatch. Some skills only pay off at specific project phases — a production-safety checklist is noise on a greenfield prototype.

A skill composer agent solves all of these. It treats the skills repo as a library, not a manual. It curates. It phases. It adjusts. It acts as the librarian so you can stay in flow.

Agent Skills is the most important AI coding project of 2026 — not because it writes better code, but because it creates processes that prevent worse code. The skills composer agent makes those processes actually usable.

Who This Actually Helps

  - Solo developers with no reviewer, who need structured self-review before commit and deploy
  - Small teams where PRs sit for days and quality issues slip through
  - Anyone whose tests are an afterthought and who wants a low-friction way to build the habit

Get Your Skill Stack in 2 Minutes

Deploy OpenClaw on GetClawCloud, paste the prompt, and describe your project. The agent will return a curated set of skills with a workflow that actually fits your day. No more dumping 20 README files into your context.

Start on GetClawCloud →