zpower426 (@zpower426)
27 published skills · 0 installs

27 results

Collection: datapowers

Agent: analyst
Dispatch as a subagent to execute a specific analysis task. Receives full task specification from the orchestrating agent. Does NOT inherit session context. Examples: <example>Context: An orchestrating agent is running subagent-driven-analysis. user: "Execute Task 3: numeric feature preprocessing" assistant: "Dispatching analyst subagent with the full task specification." <commentary>Each task gets a fresh analyst subagent with only the context it needs.</commentary></example>

Agent: code-quality-reviewer
Use after statistical review has passed to check code quality of analysis code. Reviews for vectorization, reproducibility, clarity, and artifact correctness. Examples: <example>Context: Statistical review has passed for a feature engineering task. user: "Statistical review approved the feature engineering code" assistant: "Now let me dispatch the code quality reviewer to check implementation quality." <commentary>Code quality review always comes after statistical review, never before.</commentary></example>

Agent: statistical-reviewer
Use after an analyst subagent completes a task to verify statistical correctness. Reviews for data leakage, correct metric selection, proper cross-validation, and sound statistical methodology. Examples: <example>Context: A feature engineering task has been completed. user: "Feature engineering for numeric columns is done" assistant: "Let me dispatch the statistical reviewer to check for leakage and correct transformer usage." <commentary>Statistical review must happen before code quality review, and always after any data transformation or modeling task.</commentary></example>

Skill: brainstorm
Start a new analysis project with structured hypothesis-first brainstorming.

Skill: Execute Plan
Execute an analysis plan using subagent-driven analysis with two-stage review.

Skill: Write Plan
Break an approved analysis design into discrete, executable tasks.

Skill: analysis-manifest
Use at the START of every analysis session and at the END of every skill stage. Maintains artifacts/analysis_manifest.json as the single source of truth for analysis state, preventing context drift across long sessions.

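The read-update-persist loop analysis-manifest describes can be sketched in a few lines. The schema below (a top-level "stages" map) is an assumption for illustration only, not the skill's actual manifest format:

```python
import json
from pathlib import Path

# Hypothetical manifest schema: {"stages": {stage_name: status}}.
# The real skill's schema may differ; this only shows the pattern of
# reading, updating, and rewriting one authoritative JSON file.
def update_manifest(path, stage, status):
    p = Path(path)
    manifest = json.loads(p.read_text()) if p.exists() else {"stages": {}}
    manifest["stages"][stage] = status            # record the stage outcome
    p.write_text(json.dumps(manifest, indent=2))  # persist immediately
    return manifest

m = update_manifest("analysis_manifest.json", "brainstorming", "complete")
```

Because every stage writes through the same file, a later session can reload the manifest instead of relying on conversation memory.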
Skill: brainstorming
Use BEFORE any analysis project, ML task, or significant data work. Explores business context, data characteristics, and analytical approach before any code or modeling.

Skill: data-exploration
Use when exploring any dataset for the first time, or when asked to perform EDA. Enforces systematic exploration before any modeling or transformation.

Skill: data-profiling
Use BEFORE dispatching any subagent that needs to understand the dataset. Generates a high-density, PII-free data profile in Markdown so subagents receive structured context instead of raw data.

Skill: data-validation
Use before any model training, feature engineering, or data transformation. Validates data schema, quality constraints, and statistical expectations.

Skill: debugging-pipelines
Use when a data pipeline, model, or analysis produces unexpected results, errors, or performance degradation. Enforces root cause investigation before any fix.

Skill: executing-plans
Use when executing a written analysis plan task by task. Manages task state, enforces two-stage review (statistical first, then code quality), and gates manifest updates behind completed reviews.

Skill: feature-engineering
Use when creating, transforming, or selecting features. Enforces leakage-free, reproducible feature pipelines.

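The leakage-free rule feature-engineering enforces comes down to one discipline: fit transformation statistics on the training split only, then apply them unchanged to the test split. A minimal pure-Python sketch (the standardizer here is an illustration, not the skill's implementation):

```python
# Fit standardization statistics on the training data only; the
# returned function applies those frozen statistics to any split.
def fit_standardizer(train):
    mean = sum(train) / len(train)
    var = sum((x - mean) ** 2 for x in train) / len(train)
    std = var ** 0.5 or 1.0          # guard against zero variance
    return lambda xs: [(x - mean) / std for x in xs]

train, test = [1.0, 2.0, 3.0, 4.0], [10.0, 20.0]
scale = fit_standardizer(train)      # statistics come from train only
train_z = scale(train)
test_z = scale(test)                 # test rows never influence mean/std
```

Fitting the scaler on the full dataset instead would let test-set information leak into the features, which is exactly what the statistical-reviewer agent checks for.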
Skill: finishing-an-analysis-branch
Use when analysis is complete, all validations pass, and you need to decide how to integrate the work. Guides delivery of analysis artifacts via commit, PR, or archive.

Skill: leakage-guard
Use whenever building features for time-series or any temporal dataset. Enforces strict temporal integrity: no future data in features, no post-event information, correct CV strategy.

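The "correct CV strategy" leakage-guard refers to can be sketched as an expanding-window split, where every training index strictly precedes every test index. This is an assumed illustration of the rule, not the skill's code:

```python
# Expanding-window time-series splits: fold k trains on all rows
# before the k-th window and tests on that window, so no future
# information leaks into the features a fold is trained on.
def time_series_splits(n_samples, n_folds):
    fold_size = n_samples // (n_folds + 1)
    for k in range(1, n_folds + 1):
        train = list(range(0, k * fold_size))
        test = list(range(k * fold_size, (k + 1) * fold_size))
        yield train, test

for train, test in time_series_splits(10, 4):
    # Temporal integrity: the newest training row is always older
    # than the oldest test row.
    assert max(train) < min(test)
```

A shuffled K-fold split on the same data would violate this invariant, which is the failure mode the skill guards against.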
Skill: model-evaluation
Use when evaluating a trained model. Enforces one-time test set evaluation, statistical significance testing, and calibration checks.

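One common calibration check of the kind model-evaluation mentions is the Brier score, the mean squared gap between predicted probabilities and binary outcomes. The data below is illustrative; the skill's actual checks may differ:

```python
# Brier score: 0.0 is perfect calibration and discrimination;
# 0.25 is what a constant 0.5 prediction scores on balanced labels.
def brier_score(probs, labels):
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)

probs = [0.9, 0.8, 0.2, 0.1]   # predicted P(y=1), illustrative values
labels = [1, 1, 0, 0]          # observed binary outcomes
score = brier_score(probs, labels)
```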
Skill: model-selection
Use when choosing which model(s) to train. Enforces baseline comparison before hyperparameter tuning, and correct metric selection for the task type.

Skill: report-writing
Use when writing analysis reports, stakeholder summaries, or presenting model results. Enforces reproducibility, honest uncertainty communication, and actionable conclusions.

Skill: requesting-statistical-review
Use after completing any major analytical task (EDA, features, modeling). Enforces rigorous auditing for target leakage, metric choice, and statistical significance.

Skill: subagent-driven-analysis
Use when executing an analysis plan with independent tasks. Dispatches fresh subagents per task with two-stage review: statistical correctness first, then code quality.

Skill: test-driven-data-science
Use before any training step to enforce three-layer data validation: Physical (schema), Logical (business rules), Statistical (distribution drift). Blocks training if any CRITICAL assertion fails.

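The three layers and the CRITICAL gate can be sketched as follows; the field names, rules, and thresholds are illustrative assumptions, not the skill's actual assertions:

```python
# Three-layer validation sketch. Each failure carries a severity;
# any CRITICAL failure blocks training.
def validate(rows):
    failures = []
    # Physical: schema — every row has the expected typed field.
    for r in rows:
        if not isinstance(r.get("age"), int):
            failures.append(("CRITICAL", "schema: age must be int"))
    ages = [r["age"] for r in rows if isinstance(r.get("age"), int)]
    # Logical: business rule — ages fall in a plausible range.
    if any(not 0 <= a <= 120 for a in ages):
        failures.append(("CRITICAL", "rule: age out of [0, 120]"))
    # Statistical: drift — mean stays near a reference expectation.
    if ages and abs(sum(ages) / len(ages) - 35) > 20:
        failures.append(("WARN", "drift: mean age far from reference 35"))
    return failures

failures = validate([{"age": 30}, {"age": 41}, {"age": -5}])
blocked = any(sev == "CRITICAL" for sev, _ in failures)  # training blocked
```

The severity split matters: a WARN (e.g. mild drift) is surfaced but does not block, while any CRITICAL schema or rule violation halts the pipeline before training starts.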
Skill: using-datapowers
Start here. Introduction to the skills library for data mining and statistical rigor.

Skill: verification-before-delivery
Use when any analysis, model, or report is claimed to be complete. Runs mandatory artifact integrity, statistical evidence, and reproducibility checks before any delivery.

Skill: writing-analysis-plans
Use after brainstorming to break an approved analytical design into executable tasks. Each task must be completable in 15-30 minutes with full specificity.

Skill: writing-data-skills
Use when creating a new datapowers skill. Defines the required structure, Statistical Pressure Test protocol, Iron Law format, and adversarial testing methodology.