node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp, and enforce a JSON schema on the model's output at the generation level.
batch-me-up
A utility for efficiently splitting data into batches based on the machine's available CPU resources.
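To illustrate the batching idea, here is a minimal sketch that splits an array into one batch per available CPU core; `splitIntoBatches` is a hypothetical helper written for this example, not batch-me-up's actual API:

```typescript
import os from "node:os";

// Split items into `batchCount` batches (defaulting to one per CPU core),
// distributing items round-robin so batch sizes differ by at most one.
function splitIntoBatches<T>(items: T[], batchCount: number = os.cpus().length): T[][] {
    const count = Math.max(1, Math.min(batchCount, items.length));
    const batches: T[][] = Array.from({length: count}, () => []);
    items.forEach((item, index) => {
        batches[index % count].push(item);
    });
    return batches;
}
```

Each batch can then be handed to its own worker (for example via `worker_threads`) so the work saturates the available cores without oversubscribing them.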