

Audit a project's Claude Code setup against official Anthropic documentation. Evaluates 9 areas: CLAUDE.md, Rules, Skills, Sub-agents, Hooks, Permissions, Settings, MCP servers, and Feature Selection. Produces a scored AUDIT-REPORT.md with findings that cite documentation sources fetched during the session. Use this skill when the user says 'audit', 'audit my setup', 'review my Claude Code config', 'check my configuration', 'how well am I using Claude Code', 'audit my CLAUDE.md', or wants to assess whether their project is making effective use of Claude Code features. Even if the user only mentions one area (e.g., 'are my hooks right?'), use this skill — it evaluates all 9 areas for a complete picture.

End-of-session handoff and start-of-session recall. Use when the user types /checkpoint or signals they're wrapping up — writes a single git commit (subject = one-sentence summary, body = full narrative) and appends the same sentence to CHECKPOINTS.md. ALSO use at the start of a fresh session when the user references prior work ("where were we", "continue", "the thing from yesterday", or any reference to a past decision, file, or investigation the current session hasn't seen) — read CHECKPOINTS.md first, then fetch full commit bodies only for sessions that match.
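The write path described above can be sketched in shell. The repo setup, the SUMMARY text, and the NARRATIVE text here are illustrative placeholders, not part of the skill's actual interface:

```shell
# Sketch of the checkpoint write path: one commit whose subject is a
# one-sentence summary and whose body is the full narrative, plus the
# same sentence appended to CHECKPOINTS.md. All values are examples.
set -e
dir=$(mktemp -d) && cd "$dir"            # throwaway repo for the demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

SUMMARY="Migrated auth middleware to session tokens"   # one-sentence summary
NARRATIVE="Replaced the old token check; see middleware notes for details."

printf '%s\n' "$SUMMARY" >> CHECKPOINTS.md             # append the same sentence
git add CHECKPOINTS.md
git commit -q -m "$SUMMARY" -m "$NARRATIVE"            # subject + body

git log -1 --pretty=%s                                 # prints the summary line
```

Passing `-m` twice gives the commit a subject and a separate body paragraph, which is what lets a later session read `git log --oneline` for the one-liners and fetch full bodies only when needed.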

Cross-document and cross-code coherency analysis — finds contradictions, conflicting values, and inconsistencies across multiple files, then researches official sources to resolve factual errors and asks the user only about preference decisions. MUST use this skill whenever the user asks to compare, cross-check, cross-reference, or verify consistency between 2+ files of any type — docs, configs, code, specs, design tokens, changelogs, READMEs. Trigger phrases include but are not limited to: 'check these are consistent', 'do these contradict', 'make sure everything aligns', 'find inconsistencies', 'cross-check', 'cross-reference', 'telling the same story', 'out of sync', 'gotten out of date', 'don't match', 'disagree', 'drifted'. Also trigger when the user names specific files and asks if they conflict, even without using the word coherency. If the user mentions multiple files and wants to know whether they agree — that's this skill. Do NOT skip this skill just because you could read the files yourself; the skill provides structured claim extraction, severity classification, factual-vs-preference triage, official-source research, and interactive resolution that ad-hoc comparison misses.

Interactive Q&A that collects the user's decisions when agents present multiple options or when multiple agents disagree. Produces a short plain-language brief the user can hand to Plan Mode, a specialized agent, or keep in context. Use this skill whenever any agent (or a board-meeting, brainstorming, or pre-plan session) comes back with 2+ reasonable options, trade-offs, or conflicting recommendations that need a human call — the user is not a developer, so options must be in non-technical plain English with the agent's recommendation flagged and any disagreement between agents surfaced. Also trigger on 'ask me', 'Q&A me', 'let me decide', 'collect my decisions', 'interactive Q&A', 'I need to decide', or any moment the current step is blocked waiting on a human decision. Do not guess the user's preference — ask.

Strategic reality check for any project. Researches whether the solution already exists, whether there are better approaches, and whether the codebase follows current best practices — using live documentation, not stale training data. Use this skill when the user says 'perspective', 'am I on the right track?', 'is there a better way?', 'has someone already built this?', 'reality check', or any request to evaluate whether the current project direction is optimal. Also use when the user questions whether they are reinventing the wheel, wonders about alternatives, or wants to validate architecture against what exists in the ecosystem.

Compresses verbose text into the shortest precise phrasing that preserves every constraint and leaves zero ambiguity. Use whenever the user wants to shorten, compress, tighten, distill, or cut down any text — CLAUDE.md sections, README files, AGENTS.md personas, CONTRIBUTING.md, commit messages, error messages, package descriptions, or any prose that says too much with too many words. Also use when writing new documentation content from scratch that should be maximally concise. Trigger phrases: 'philosophier', 'distill this', 'make this shorter', 'compress this', 'tighten this up', 'cut it down', 'too wordy', 'too verbose', 'every line should carry weight', 'say more with less'. If a user pastes a paragraph and asks for it shorter — this is the skill. If they link a file and say it's too long — this is the skill.

Stress-test and improve implementation plans before execution. Researches alternatives, challenges complexity, checks modularity and feasibility, then delivers a full revised plan. Use whenever the user has a plan — any format (PLAN.md, inline, pasted, attached) — and wants it reviewed before starting. Trigger on 'review this plan', 'challenge this plan', 'stress test', 'is this plan good', 'before we start', 'check the plan', or any request to validate or improve an implementation plan. Also use when the user seems hesitant about a plan or asks 'does this make sense'. Even if the user just says 'plan-challenger' or 'challenge it', invoke this skill.

Audits Claude's own output against the user's original request in the current session — did Claude do exactly what was asked, no more, no less? Checks requirements, style, and constraints, and flags scope creep (extras the user never asked for). Use when the user says "verify", "check your work", "did you follow my request", "audit this", "compliance check", or questions whether Claude improvised, missed something, or added unrequested features. This is specifically about holding Claude accountable to the user's brief — NOT for reviewing other people's code, checking external systems, debugging config files, running tests, or verifying things Claude didn't create.

Audits any website for SEO, AEO (Answer Engine Optimization), GEO (Generative Engine Optimization), and Structured Data. Crawls with Playwright, runs Lighthouse, checks Perplexity citations, applies research-backed rules, scores deterministically, and produces a prioritized fix list with letter grades. Use this skill whenever the user mentions a domain or URL and wants to know what's wrong with it from an SEO, content, structured data, or AI-readiness perspective — even if they don't use the word 'audit'. Triggers include: auditing a website, checking SEO issues, evaluating AI-readiness or answer-engine optimization, comparing two sites or documentation portals, reviewing structured data or JSON-LD, checking robots.txt AI crawler policy, running a website health check, assessing whether content will appear in Perplexity/ChatGPT search results, or any request for a scored website quality analysis. Also triggers on: /website-audit [domain] [categories...]