llm-guard
A TypeScript library for validating and securing LLM prompts
text-moderate
A comprehensive JavaScript library for content moderation, including profanity filtering, sentiment analysis, and toxicity detection. Leveraging advanced algorithms and external APIs, TextModerate provides developers with tools to create safer and more po
toxicity-analyzer
A package to analyze text for profanity and to detect the rating of an Image using AI
safespeak
A TypeScript/JavaScript SDK to integrate with safespeak