Automatically scan user inputs and LLM outputs for safety issues including prompt injection, PII leaks, harmful content, toxicity, and hallucinations. Use when processing untrusted text, reviewing code for security issues, or validating LLM responses.