# Skills vs Agents vs MCPs vs Hooks
Four different primitives that all extend an AI coding assistant. They look similar at first, behave very differently in practice, and the right choice depends on what you're trying to teach the tool.
## The one-line summary
- Skill — a procedure the host loads as content. Static text, runs in the same conversation.
- Sub-agent — a separate LLM call with its own tools and context. Dynamic delegation.
- MCP server — a long-running process that exposes tools to the host over a standard protocol. Live system access.
- Hook — a shell command the host fires automatically on lifecycle events. Enforcement, not guidance.
If you remember nothing else: skills are content, agents are processes, MCPs are servers, hooks are triggers.
## Skills: teach the host
A skill is a Markdown file with front matter. The host loads it into the current conversation either on demand (a slash command) or automatically (project scope). The model then follows the procedure in-line. Cheapest primitive to author, most portable, least powerful — a skill cannot read your inbox or block a commit.
Reach for a skill when: the work is procedural, repeats often, and would otherwise be re-explained to the model every time. See What are Claude Skills for the format and anatomy.
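As a minimal sketch of the format: a Markdown file with YAML front matter, where `name` and `description` are the common front-matter fields (exact fields depend on the host, and "changelog-entry" is a hypothetical skill, not one from this site):

```markdown
---
name: changelog-entry
description: Add a changelog entry for the current change, following team conventions.
---

# Changelog entry

1. Read CHANGELOG.md and find the "Unreleased" section.
2. Add a one-line entry under the matching category (Added / Fixed / Changed).
3. Use imperative mood and reference the issue number if one exists.
```

The body is plain procedure text; the host loads it into context and the model follows it in-line, exactly as described above.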
## Sub-agents: delegate the task
A sub-agent is an entirely separate LLM call. The host hands off a specific task with a constrained tool allow-list and a tight system prompt; the sub-agent works in its own context window and returns a single answer. Its scratch work doesn't pollute the parent conversation.
Three properties make sub-agents useful:
- Context isolation. The parent stays focused on the user's main task; the sub-agent burns its own tokens.
- Specialised tooling. A "research" agent can have web-search but no shell; a "test-runner" agent can run shell but cannot edit files.
- Different model. Use a faster model for triage, a slower one for the hard stuff.
Reach for a sub-agent when: the work is well-scoped, benefits from a clean slate, and the parent doesn't need to see the intermediate reasoning. Browse the agents hub for examples.
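The three properties above typically all live in the sub-agent's definition file. A sketch in Claude Code's style (a Markdown file with front matter; field names and the "test-runner" agent itself are illustrative, and other hosts use their own formats):

```markdown
---
name: test-runner
description: Runs the test suite and reports failures. Never edits files.
tools: Bash, Read
model: haiku
---

You are a test runner. Run the project's test suite, summarise any
failures with file and line references, and return a single report.
Do not attempt to fix anything.
```

Note how the allow-list (`tools`) and the model choice are declared per agent: the parent delegates, the sub-agent runs in its own context, and only the final report comes back.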
## MCP servers: connect the systems
MCP — the Model Context Protocol — is an open standard for exposing tools to LLM clients. An MCP server is a process you run (locally or remotely) that advertises a set of tools over a JSON-RPC interface. Compatible AI clients (Claude Code, Cursor, Gemini CLI, increasingly others) can call those tools during a conversation.
Anything that needs live access lives in an MCP: Gmail, GitHub, Slack, Notion, Linear, Postgres, your filesystem, your internal APIs. The skill / agent layers above can reference the tools an MCP exposes ("use the github.list_issues tool to find open bugs") but they don't carry the connection themselves.
Shared Context stores MCP server configurations — the launch descriptor and credentials your AI tool needs — not the running process. Install one and the entry lands in your platform's native config so the tool becomes available the next time the assistant starts. See the MCP servers hub.
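A launch descriptor is typically just a command plus environment. As a sketch, in the `.mcp.json` shape Claude Code uses (file name, top-level key, and credential placeholder vary by client):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" }
    }
  }
}
```

This is configuration only: the assistant launches the process itself on its next start, which is exactly the config/process distinction the paragraph above draws.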
## Hooks: enforce, don't suggest
A hook is a shell command the host fires automatically on lifecycle events — before a tool call, after a file edit, before the model responds, when a session starts or stops. Unlike skills (which the model can decide to ignore), hooks always run.
Common uses:
- Format the file the assistant just wrote.
- Lint after every edit; block the response if linting fails.
- Refuse tool calls that would touch sensitive paths.
- Ping a Slack channel when a long-running session starts.
- Log every command for audit.
Reach for a hook when: the rule must apply every time, not just when the model remembers. Hooks are the right place for safety constraints, audit requirements, and team-wide style enforcement. See the hooks hub.
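To make the "refuse sensitive paths" case concrete, here is a sketch of the guard logic such a hook might run. It is entirely hypothetical: the real payload shape, event name, and exit-code contract vary by host, so assume here that the host passes the target path as an argument and treats a non-zero exit as "block the tool call":

```shell
# check_path: hypothetical pre-tool-call guard for a hook.
# Returns 0 to allow the call, 2 to block it.
check_path() {
  case "$1" in
    *.env|*secrets*|*/.ssh/*)
      echo "blocked: $1 touches a sensitive path" >&2
      return 2    # non-zero exit: host refuses the tool call
      ;;
  esac
  return 0        # zero exit: host lets the call proceed
}
```

Because the host runs this on every matching event, the rule holds even when the model forgets it, which is the whole point of a hook.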
## How they compose
These primitives don't compete — they compose. A typical mature setup combines all four:
- A collection of skills teaches the host the team's procedures.
- One or two sub-agents handle long-running tasks (research, testing) without polluting the main context.
- A few MCP servers wire in the live systems people work in (issue tracker, knowledge base).
- A small set of hooks enforces the things the team can't afford to leave to the model — lint, secrets, audit.
The decision is rarely "skill OR agent" — it's "what's the smallest thing that solves this?" Start with a skill; promote to a sub-agent if the context cost gets ugly; reach for an MCP only when you need live system access; reach for a hook only when "the model should remember" isn't strong enough.
## Quick decision table
| You want to… | Primitive |
|---|---|
| Teach a procedure once and have the host follow it. | Skill |
| Run a complex task in its own context window. | Sub-agent |
| Let the host call your CRM / inbox / database. | MCP server |
| Force a check to run on every edit. | Hook |
| Bundle several of the above into a "starter kit." | Collection |
## Where the lines blur
Two real ambiguities to be aware of. First, every host renders these primitives slightly differently — Claude Code's notion of a hook is more developed than Cursor's right now, and Gemini's "extensions" sit somewhere between skills and collections. Second, an ambitious skill can simulate a sub-agent by including instructions like "before you answer, plan privately and discard." It works, but it's worse than the real thing because the privately planned tokens still count against the parent's context.
For a fuller working glossary, see the Glossary. For the Claude plug-in container that bundles many of these together, see How Claude Plugins Work.