
Use when shipping code - complete workflow: merge main, run tests, multi-review with auto-fix, commit, push, create PR. Replaces manual commit-push-pr flows.

Autonomously triage and resolve a GitHub issue from analysis to PR ready for merge - handles investigation, fixes, testing, and PR workflow

Use when creating skills, writing SKILL.md files, editing skill definitions, or adding new reusable techniques to ai-coding-config

Set up or update AI coding configurations - interactive setup for Claude Code, Cursor, and other AI coding tools

Generate context handoff and copy to clipboard for new session - preserves decisions, progress, and next steps across conversations

Change or activate a personality for both Cursor and Claude Code - syncs personality across tools with alwaysApply frontmatter

Multi-agent code review with diverse perspectives - run multiple specialized reviewers in parallel for comprehensive analysis

Set up repos with linting, formatting, and CI/CD - configures ESLint, Prettier, Husky, Ruff based on detected language

Save and resume development sessions across conversations - preserves context, decisions, and progress for continuity

Initialize development environment for git worktree - creates worktree, installs deps, validates setup for productive work

Autonomous production error resolution system - analyzes, prioritizes, and fixes errors from Sentry or error logs

Scan dependencies for updates, discover new features from changelogs, implement quick wins, create issues for larger opportunities - transforms maintenance into capability expansion

Merge PR, sync local to main, clean up branch - the satisfying end to a feature workflow

Use when completing tasks autonomously, working independently, delivering PR-ready code, or needing end-to-end implementation without supervision

Use when reviewing frontend design, checking UI quality, auditing visual consistency, or verifying responsive behavior across viewports

Use when debugging errors, investigating failures, finding root causes, or tracing unexpected behavior to its source

Use when writing commit messages, creating PR descriptions, naming branches, or needing git messages that explain why changes were made

Use when choosing libraries, evaluating npm packages, deciding build vs buy, researching technology choices, or comparing library options

Use when finding, downloading, or fetching service logos, brand icons, official SVG assets, or adding integration logos

Use when reviewing logging, checking error tracking, auditing monitoring patterns, or ensuring production issues are debuggable at 3am

Use when reviewing performance, finding N+1 queries, checking algorithmic complexity, or catching efficiency problems before production

Use when writing prompts, creating agent instructions, designing system prompts, or crafting LLM-to-LLM communication

Use when reviewing for production readiness, fragile code, error handling, resilience, reliability, or catching bugs before deployment

Use when reviewing security, checking for injection flaws, auditing authentication, or finding OWASP vulnerabilities before attackers do

Use when simplifying code, reducing complexity, eliminating redundancy, or making code more readable without changing behavior

Use when recalling conversations from Limitless Pendant, finding what was discussed in meetings, searching lifelogs, or answering 'what did I say about' questions

Use when needing current web info, verifying APIs still work, checking latest versions, or avoiding outdated implementations

Use when investigating bugs, test failures, or unexpected behavior, or when needing to find the root cause before fixing

Triage and address PR comments from code review bots - analyzes feedback, prioritizes issues, fixes valid concerns, and declines incorrect suggestions

Generate or update AGENTS.md with project context for AI assistants - creates universal context for Claude Code, Cursor, and Copilot

Use when reviewing architecture, checking design patterns, auditing dependencies, or catching structural problems before they multiply

Use when writing tests, generating test coverage, creating unit/integration tests, or ensuring code is proven to work before shipping

Use when writing prompts, agent instructions, SKILL.md, commands, system prompts, Task tool prompts, prompt engineering, or LLM-to-LLM content

Use when scanning for code pattern inconsistencies - prop naming, implementation approaches, boolean conventions, import patterns, deprecated usage

Use when analyzing test coverage, reviewing test quality, finding coverage gaps, or identifying brittle tests that test the wrong things

Use when facing hard architectural decisions, weighing multiple valid approaches, needing diverse perspectives before committing, or wanting M-of-N synthesis on complex problems

Use when reviewing for logic bugs, edge cases, off-by-one errors, race conditions, or finding correctness issues before users do

Use when running tests, checking test results, getting pass/fail status, or needing terse output that preserves context

Use when reviewing mobile UX, checking responsive design, testing touch interactions, or verifying mobile layouts work on phones and tablets

Use when writing, reviewing, or refactoring React or Next.js code, optimizing React performance, fixing re-render issues, reducing bundle size, eliminating waterfalls, or improving data fetching patterns

Execute development task autonomously from description to PR-ready - handles implementation, testing, and git workflow without supervision

Generate or update llms.txt to help LLMs understand your site - standardized file for AI assistants to navigate documentation

Use when scanning for missing user feedback - loading states, success confirmation, error display, empty states, moments where users don't know what's happening

AI Product Manager - maintain a living product understanding, keeping docs/knowledge/ current as the single source of truth

Use when automating browsers, testing pages, taking screenshots, checking UI, verifying login flows, or testing responsive behavior

Load relevant coding rules for the current task - analyzes context and loads only needed rules from rules/

Use when scanning for inconsistent user experiences - tooltip behaviors, loading patterns, feedback mechanisms, interaction patterns that feel different in different places

Use when reviewing code style, checking naming conventions, auditing project patterns, or ensuring consistency with codebase conventions

Clean up a git worktree after its PR has been merged - verifies merge status and safely removes the worktree directory

Use when reviewing UX, interfaces, or user-facing features, or when needing an empathy/design perspective on code changes

Use when scanning for dead-end error paths - errors without retry, failures without recovery options, places where users get stuck when things go wrong

Scan codebase for polish issues - the "last 15%" that separates good from polished

Use when finding new AI agent skills, discovering capabilities, installing skills from GitHub, searching skill marketplaces, or expanding what Claude can do - like Neo downloading martial arts in The Matrix

Use when analyzing YouTube videos, extracting insights from tutorials, researching video content, or learning from talks and presentations

Research product intelligence on competitors and industry trends - analyzes features, positioning, and opportunities

Use when managing project learnings - view, add, search, prune, or export operational knowledge that persists across sessions

Use when reviewing error handling, finding silent failures, checking try-catch patterns, or ensuring errors surface properly with actionable feedback

Use when designing user interfaces, writing user-facing content, crafting error messages, or making experiences feel obvious and polished

Verify a fix actually works before claiming success - runs tests, checks live behavior, confirms fixes work from user perspective

Use when rough ideas need design before code, requirements are fuzzy, multiple approaches exist, or you need to explore options before implementation

Use when testing MCP servers, debugging MCP tool responses, exploring MCP capabilities, or diagnosing why an MCP tool returns unexpected data

Use when reviewing comments, checking docstrings, auditing documentation accuracy, or finding stale/misleading comments in code

Use when finding meeting transcripts, searching Fireflies recordings, getting action items from calls, or answering 'what was discussed in the meeting' questions

Use when auditing SEO, improving search rankings, implementing structured data, or optimizing for organic traffic and Core Web Vitals