Search
breaker-ai
CLI to scan prompts for injection risks
v1.0.2 URL:
https://unpkg.com/breaker-ai@1.0.2
Open
Browse Files
breaker
ai
prompt
injection
security
breaker-ai
cli
prompt-security
prompt-injection
prompt-safety
prompt-risk
prompt-scanner
prompt-checker
prompt-validator
llm-inject-scan
A tiny, fast library that scans user prompts for risky patterns before they reach your LLM model. It flags likely prompt-injection attempts so you can block, review, or route them differently—without making a model call.
v0.1.1 URL:
https://unpkg.com/llm-inject-scan@0.1.1/dist/index.ts
Open
Browse Files
llm
prompt-injection
jailbreak-detection
guardrails
ai-guardrails
ai-security
prompt-security
prompt-leak
injection
security
openai
anthropic
gpt
llm-injection