

Dispatched as a subagent to execute a specific analysis task. Receives the full task specification from the orchestrating agent. Does NOT inherit session context. Examples: <example>Context: An orchestrating agent is running subagent-driven-analysis. user: "Execute Task 3: numeric feature preprocessing" assistant: "Dispatching analyst subagent with the full task specification." <commentary>Each task gets a fresh analyst subagent with only the context it needs.</commentary></example>

Use after statistical review has passed to check the code quality of analysis code. Reviews for vectorization, reproducibility, clarity, and artifact correctness. Examples: <example>Context: Statistical review has passed for a feature engineering task. user: "Statistical review approved the feature engineering code" assistant: "Now let me dispatch the code quality reviewer to check implementation quality." <commentary>Code quality review always comes after statistical review, never before.</commentary></example>

Use after an analyst subagent completes a task to verify statistical correctness. Reviews for data leakage, correct metric selection, proper cross-validation, and sound statistical methodology. Examples: <example>Context: A feature engineering task has been completed. user: "Feature engineering for numeric columns is done" assistant: "Let me dispatch the statistical reviewer to check for leakage and correct transformer usage." <commentary>Statistical review must happen before code quality review, and always after any data transformation or modeling task.</commentary></example>
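The three descriptions above imply a fixed pipeline: a fresh analyst subagent runs the task, the statistical reviewer checks the result, and the code quality reviewer runs only after statistical review passes. A minimal sketch of that ordering, assuming a hypothetical `dispatch(agent_name, payload)` callable (illustrative only, not a real API):

```python
def run_task(task_spec, dispatch):
    """Run one analysis task through the review pipeline described above.

    `dispatch` is a hypothetical callable: dispatch(agent_name, payload)
    returns the subagent's result (truthy on approval).
    """
    # Fresh analyst subagent: receives only the task spec, never session context.
    result = dispatch("analyst", task_spec)
    # Statistical review always comes first, after any transformation/modeling task.
    if not dispatch("statistical-reviewer", result):
        return ("rejected", "statistical")
    # Code quality review runs only once statistical review has passed.
    if not dispatch("code-quality-reviewer", result):
        return ("rejected", "code-quality")
    return ("approved", None)
```

The ordering constraint from the commentaries (statistical before code quality, never the reverse) is what the early returns enforce.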