llm-inject-scan
A tiny, fast library that scans user prompts for risky patterns before they reach your LLM model. It flags likely prompt-injection attempts so you can block, review, or route them differently—without making a model call.
purgeai-shield
PurgeAI Shield - AI Security SDK for prompt injection detection, jailbreak prevention, and PII protection