Glossary
The AI coding tooling space invented a lot of new vocabulary in a short time. These are working definitions — short, neutral, with a link out to a deeper explainer when one exists.
- Agent
- In the broadest sense, any LLM call configured with a system prompt and a tool set. In the AI tooling space the word usually means a "sub-agent" the host can delegate to — a separate LLM call with isolated context. Browse agents →
- Anthropic
- The company behind Claude, the LLM family, and Claude Code, the AI coding assistant. Anthropic introduced the Skill format and the Claude plugin spec.
- Claude
- Anthropic’s LLM family. The current generation includes Claude Opus, Sonnet, and Haiku, in descending order of size and capability. Claude is the model that powers Claude Code.
- Claude Code
- Anthropic’s official CLI for working with Claude in a code project. First-party support for skills, sub-agents, MCP servers, hooks, plugins, and slash commands.
- Codex
- OpenAI’s coding assistant CLI. Supports skills, agents, and MCP servers, with its own on-disk conventions in `~/.codex/`.
- Collection
- A curated bundle of resources — typically skills, agents, hooks, and MCP configs grouped around a role or workflow. Installable as a unit; members are also installable individually. Browse collections →
- Cursor
- A VS Code-derived AI coding editor. Supports skills, agents, MCP servers, and rules; has its own on-disk conventions in `~/.cursor/`.
- Front matter
- YAML metadata block at the top of a Markdown file. In a `SKILL.md`, the front matter declares the skill’s name, description, and discovery flags like `user-invocable`.
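  A minimal illustration with hypothetical values — the field names shown (`name`, `description`, `user-invocable`) are the ones described above; hosts may support others:

  ```yaml
  ---
  name: release-notes                # identifier the host uses for discovery
  description: Draft release notes from merged PRs since the last tag.
  user-invocable: true               # expose as a /release-notes slash command
  ---
  ```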
- Gemini CLI
- Google’s command-line AI coding tool. Calls its skills "extensions"; otherwise the conceptual model is similar.
- Hook
- A shell command the host AI tool fires automatically on a lifecycle event — before a tool call, after a file edit, on session start. Used for enforcement and audit, not guidance. Browse hooks →
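  As one concrete shape — Claude Code's `settings.json` convention; the exact schema varies by host and version — a hook that runs a (hypothetical) formatting script after every file edit might look like:

  ```json
  {
    "hooks": {
      "PostToolUse": [
        {
          "matcher": "Edit|Write",
          "hooks": [
            { "type": "command", "command": "./scripts/format-changed.sh" }
          ]
        }
      ]
    }
  }
  ```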
- Install
- The act of placing a resource on disk in the right location for an AI tool to discover it on next startup. On Shared Context, "install" is one CLI command per platform.
- Marketplace
- A server that lists plugins (or in non-Claude tools, equivalent bundles) and points to where each one lives. Can be public or private. The Claude plugin marketplace format is a single JSON file.
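  A sketch of that single JSON file, with hypothetical names throughout — the authoritative schema is the Claude plugin spec:

  ```json
  {
    "name": "acme-marketplace",
    "owner": { "name": "Acme" },
    "plugins": [
      {
        "name": "release-helper",
        "source": "./plugins/release-helper",
        "description": "Skills and hooks for cutting releases."
      }
    ]
  }
  ```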
- MCP / Model Context Protocol
- An open standard for exposing tools to LLM clients. An MCP server runs as a process and advertises tools over a JSON-RPC interface; compatible clients can call those tools during a conversation. Browse MCP servers →
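  Under the hood this is ordinary JSON-RPC 2.0. A client invoking a hypothetical `search_issues` tool sends a `tools/call` request like:

  ```json
  {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "search_issues",
      "arguments": { "query": "login timeout" }
    }
  }
  ```

  The server replies with a result message carrying the tool's output, which the client feeds back into the conversation.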
- MCP server
- A process that implements the MCP protocol and exposes a set of tools. Local or remote. Examples: Gmail, GitHub, Postgres, Slack, Linear.
- Plugin
- A bundle of skills, agents, MCP configs, and hooks packaged in a Claude-recognised directory layout with a `plugin.json` manifest. Installable as a single unit. How plugins work →
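  A sketch of the layout — directory names here are illustrative; the authoritative structure is defined by the Claude plugin spec:

  ```
  my-plugin/
  ├── .claude-plugin/
  │   └── plugin.json      # manifest: name, description, version
  ├── skills/
  │   └── my-skill/
  │       └── SKILL.md
  ├── agents/
  ├── hooks/
  └── .mcp.json            # MCP server configs
  ```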
- Resource
- On Shared Context: any addressable thing in the library — a skill, agent, collection, MCP config, hook, or file. The platform-neutral container.
- Skill
- The simplest extensibility primitive — a Markdown file (typically `SKILL.md`) with YAML front matter that an AI host loads on demand and follows as a procedure. What are Claude Skills →
- Slash command
- A user-invocable instruction the host exposes by name, prefixed with `/`. A skill with `user-invocable: true` in front matter becomes a slash command in Claude Code.
- Sub-agent
- A separate LLM call delegated to by a parent agent. Has its own system prompt, tool allow-list, and context window; returns a single answer to the parent.
- Sub-skill
- A skill referenced from inside another skill. Useful for composing complex procedures from smaller named pieces. Discovery is a host-side convention.
- System prompt
- The persistent instruction set the model sees at the top of every call. Skills, agents, and the host itself contribute to this; understanding which layer adds what is core to debugging an AI tool.
- Tool
- A function the LLM can call. May be built into the host (file read, shell), provided by an MCP server (Gmail send), or described by a skill (for example, a delegation to a sub-agent).
- Tool use
- The mechanism by which an LLM requests a tool be called and receives the result. Standardised across providers as the function-calling / tool-calling pattern.
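  In Anthropic's API shape — other providers follow the same pattern with different field names — the model emits a `tool_use` block and the client answers with a matching `tool_result` (the id is a placeholder here):

  ```json
  { "type": "tool_use", "id": "<tool-use-id>", "name": "read_file",
    "input": { "path": "src/main.rs" } }
  ```

  ```json
  { "type": "tool_result", "tool_use_id": "<tool-use-id>",
    "content": "fn main() { ... }" }
  ```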
- YAML
- A human-readable structured-data format used for front matter on skills, agents, and plugin manifests. Whitespace-significant; if a file fails to load, check indentation first.
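  The classic failure mode: indentation decides nesting. Both documents below are valid YAML, but they mean different things:

  ```yaml
  # two-space indent: `author` is a child of `metadata`
  metadata:
    author: example
  ---
  # no indent: `author` is a top-level key, and `metadata` is empty
  metadata:
  author: example
  ```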