Search
llm-inject-scan
A tiny, fast library that scans user prompts for risky patterns before they reach your LLM model. It flags likely prompt-injection attempts so you can block, review, or route them differently—without making a model call.
v0.1.1 URL:
https://unpkg.com/llm-inject-scan@0.1.1/dist/index.ts
Open
Browse Files
llm
prompt-injection
jailbreak-detection
guardrails
ai-guardrails
ai-security
prompt-security
prompt-leak
injection
security
openai
anthropic
gpt
llm-injection