node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Can enforce a JSON schema on the model's output at the generation level.
tokenlens
A lightweight registry of LLM model information, such as model names and context window sizes, for building AI-powered apps.
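As a rough illustration of what such a model-info registry enables, here is a minimal, self-contained sketch of looking up a model's context window and checking whether a prompt fits. The data shape, model entries, and `fitsInContext` helper are hypothetical and for illustration only; they are not tokenlens's actual API.

```typescript
// Hypothetical sketch of a model-info registry lookup.
// The types, entries, and helper below are illustrative,
// not the real tokenlens API.
type ModelInfo = { id: string; contextWindow: number };

const registry: Record<string, ModelInfo> = {
  "gpt-4o": { id: "gpt-4o", contextWindow: 128_000 },
  "claude-3-5-sonnet": { id: "claude-3-5-sonnet", contextWindow: 200_000 },
};

// Returns true when tokenCount fits within the model's context window.
function fitsInContext(modelId: string, tokenCount: number): boolean {
  const info = registry[modelId];
  if (!info) throw new Error(`Unknown model: ${modelId}`);
  return tokenCount <= info.contextWindow;
}

console.log(fitsInContext("gpt-4o", 100_000)); // true
console.log(fitsInContext("gpt-4o", 150_000)); // false
```

A real registry would keep this metadata current across providers so apps can budget tokens before sending a request.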
@lenml/tokenizers
A lightweight, dependency-free fork of transformers.js that includes only the tokenizers.
ai-ctrf
Generate AI summaries of test results using a wide range of AI providers, including OpenAI, Anthropic, Gemini, Mistral, Grok, DeepSeek, Azure, Perplexity, and OpenRouter.
@tokenlens/models
A tree-shakeable static catalog of models.dev data, split by provider, for TokenLens.