llama-node
A Node.js library for running LLaMA and RWKV large language models locally.
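A minimal usage sketch, assuming the llama.cpp backend of llama-node and a local GGML model file; the import path, load options, and createCompletion parameters follow the project's published examples for that backend, but they may differ between versions and should be checked against the current documentation.

```typescript
import { LLM } from "llama-node";
// llama.cpp backend; llama-node also ships adapters for other native bindings (e.g. RWKV).
import { LLamaCpp } from "llama-node/dist/llm/llama-cpp.js";
import path from "path";

const llama = new LLM(LLamaCpp);

const run = async () => {
  // Load a local GGML model file (the path below is a placeholder).
  await llama.load({
    modelPath: path.resolve(process.cwd(), "./models/llama-7b.ggml.bin"),
    enableLogging: false,
    nCtx: 1024,
    seed: 0,
    f16Kv: false,
    logitsAll: false,
    vocabOnly: false,
    useMlock: false,
    embedding: false,
    useMmap: true,
    nGpuLayers: 0,
  });

  // Stream generated tokens to stdout as they arrive.
  await llama.createCompletion(
    {
      prompt: "Explain the Node.js event loop in one sentence.",
      nThreads: 4,
      nTokPredict: 128,
      topK: 40,
      topP: 0.9,
      temp: 0.7,
      repeatPenalty: 1,
    },
    (response) => process.stdout.write(response.token)
  );
};

run();
```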
inference-server
Libraries and a server for building AI applications. Provides adapters to various native bindings for local inference; integrate it into your application or run it as a standalone microservice.
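To illustrate the adapter-plus-microservice idea described above, here is a conceptual sketch only: the CompletionAdapter interface, the stub backend, and the route are hypothetical and are not inference-server's actual API. It shows one backend hidden behind a common interface and exposed over HTTP.

```typescript
import { createServer } from "node:http";

// Hypothetical adapter interface: each native binding (llama.cpp, GPT4All, ...)
// would implement this so the server can treat backends interchangeably.
interface CompletionAdapter {
  complete(prompt: string, maxTokens: number): Promise<string>;
}

// Stub adapter standing in for a real native binding.
const stubAdapter: CompletionAdapter = {
  async complete(prompt, maxTokens) {
    return `(${maxTokens}-token completion for: ${prompt})`;
  },
};

// Expose the adapter as a small HTTP microservice.
const server = createServer(async (req, res) => {
  if (req.method !== "POST" || req.url !== "/v1/completions") {
    res.writeHead(404);
    res.end();
    return;
  }
  let body = "";
  for await (const chunk of req) body += chunk;
  const { prompt, maxTokens = 128 } = JSON.parse(body);
  const text = await stubAdapter.complete(prompt, maxTokens);
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ text }));
});

server.listen(3000);
```

The same adapter object could instead be called directly from application code, which is the embedded usage mode the description refers to.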
llm-complete
A command-line tool for generating text completions from local models via GPT4All.