
Review an OpenMetadata connector against golden standards. Runs multi-agent analysis covering architecture, code quality, type safety, testing, and performance. When a PR number is given, automatically posts the quality summary to the PR description and a detailed review as a PR comment.

Deep reliability audit for OpenMetadata connectors — runs 7 investigation prompts (metadata, errors, auth, lineage, scale, synthesis, implementation) against connector standards.

Load all OpenMetadata connector development standards into context. Use before building or reviewing connectors to ensure consistent patterns.

Use when implementing new features or fixing bugs to enforce test-driven development. Guides the RED-GREEN-REFACTOR cycle for Java (JUnit), Python (pytest), and TypeScript (Jest/Playwright) in OpenMetadata.
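As a minimal illustration of the RED-GREEN-REFACTOR cycle on the Python/pytest side (the `slugify` helper here is hypothetical, not part of the OpenMetadata codebase):

```python
# RED: write a failing test first (hypothetical slugify helper,
# not an actual OpenMetadata function).
def test_slugify_lowercases_and_joins_words():
    assert slugify("My Table Name") == "my-table-name"

# GREEN: the smallest implementation that makes the test pass.
def slugify(name: str) -> str:
    return "-".join(name.lower().split())

# REFACTOR: with the test green, clean up safely -- here, also
# stripping surrounding whitespace without changing the asserted behavior.
def slugify(name: str) -> str:
    return "-".join(name.strip().lower().split())
```

Run `pytest` after each step: the test must fail before the GREEN step and pass after it and after every refactor.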

Use to review code changes with a two-stage process: first checking spec/requirements compliance, then code quality. Works on staged changes, branches, or PRs.

Meta-skill loaded at session start. Directs Claude to check for applicable OpenMetadata skills before starting any task. Ensures structured workflows are followed.

Use when starting a non-trivial feature, refactor, or multi-file change. Forces structured design thinking before writing any code: brainstorm approaches, get approval, then create a step-by-step implementation plan.

Build and deploy a full local OpenMetadata stack with Docker to test your connector in the UI. Handles code generation, build optimization, health checks, and guided testing.

Use before claiming any task is complete. Requires running actual verification commands and showing evidence — no "should work" claims without proof.

Use after implementing any feature or fix to ensure comprehensive test coverage. Enforces 90% line coverage in openmetadata-service, integration tests for all API endpoints in openmetadata-integration-tests, and Playwright E2E tests for UI changes.

Build a new OpenMetadata connector from scratch — scaffold JSON Schema, Python boilerplate, and AI context using schema-first architecture with code generation across Python, Java, TypeScript, and auto-rendered UI forms.
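In the schema-first flow, everything starts from a JSON Schema describing the connection config, from which the Python/Java/TypeScript models and UI forms are generated. A sketch of what such a schema looks like (the `MyDbConnection` connector, its field names, and the `missing_required` helper are all hypothetical, not the real OpenMetadata schema or validator):

```python
import json

# Hypothetical connection schema for an imaginary "MyDb" connector.
# Real OpenMetadata connection schemas are richer; this only shows the shape.
MYDB_CONNECTION_SCHEMA = {
    "$id": "myDbConnection.json",
    "title": "MyDbConnection",
    "type": "object",
    "properties": {
        "hostPort": {"type": "string", "description": "Host and port, e.g. localhost:5432"},
        "username": {"type": "string"},
        "database": {"type": "string"},
    },
    "required": ["hostPort", "username"],
}

def missing_required(config: dict, schema: dict) -> list[str]:
    """Tiny stand-in for real JSON Schema validation: list absent required keys."""
    return [key for key in schema["required"] if key not in config]

config = json.loads('{"hostPort": "localhost:5432", "username": "admin"}')
print(missing_required(config, MYDB_CONNECTION_SCHEMA))  # -> []
```

Keeping the schema as the single source of truth is what lets one definition drive code generation across all three languages plus the auto-rendered UI form.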

Use when debugging a failing test, build error, or runtime issue that isn't immediately obvious. Guides a 4-phase root cause analysis instead of random fix attempts.