VectorDB.js is a simple in-memory vector database for Node.js. It's an easy way to do text similarity.
- Works 100% locally and in-memory by default
- Uses hnswlib-node for simple vector search
- Uses Embeddings.js for simple text embeddings
- Supports OpenAI, Mistral and local embeddings
- Caches embeddings
- Automatically resizes database size
- Store objects with embeddings
- MIT license
Install VectorDB.js from NPM:
npm install @themaximalist/vectordb.js
For local embeddings, install the transformers library:
npm install @xenova/transformers
For remote embeddings like OpenAI and Mistral, add an API key to your environment.
export OPENAI_API_KEY=...
export MISTRAL_API_KEY=...
To find similar strings, add a few to the database, and then search.
import VectorDB from "@themaximalist/vectordb.js"
const db = new VectorDB();
await db.add("orange");
await db.add("blue");
const result = await db.search("light orange");
// [ { input: 'orange', distance: 0.3109036684036255 } ]
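The distance in each result measures how far apart the query's embedding and the stored embedding are: smaller means more similar. As a rough illustration of the idea (VectorDB.js itself delegates nearest-neighbor search to hnswlib-node; this standalone cosine-distance helper is just a sketch, not part of the library):

```javascript
// Toy cosine distance between two vectors: 0 for identical direction,
// approaching 1 as vectors become unrelated (orthogonal).
function cosineDistance(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineDistance([1, 0], [1, 0])); // 0 (identical)
console.log(cosineDistance([1, 0], [0, 1])); // 1 (orthogonal)
```

In practice the embeddings are high-dimensional vectors produced by the embedding model, but the intuition is the same: "light orange" lands closer to "orange" than to "blue".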
By default VectorDB.js uses a local embeddings model.
To switch to another model like OpenAI, pass the service.
const db = new VectorDB({
dimensions: 1536,
embeddings: {
service: "openai"
}
});
await db.add("orange");
await db.add("blue");
await db.add("green");
await db.add("purple");
// ask for up to 4 matches back (the default is 3)
const results = await db.search("light orange", 4);
assert(results.length === 4);
assert(results[0].input === "orange");
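Results come back ordered by ascending distance, so the closest match is first. A sketch of post-processing such a results array (the distance values below are illustrative, not real model output):

```javascript
// Hypothetical results array shaped like db.search() output,
// already sorted closest-first by distance.
const results = [
  { input: "orange", distance: 0.31 },
  { input: "purple", distance: 0.65 },
  { input: "green", distance: 0.78 },
  { input: "blue", distance: 0.91 },
];

// Keep only matches within a distance threshold
const close = results.filter(r => r.distance < 0.7);
console.log(close.map(r => r.input)); // [ 'orange', 'purple' ]
```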
With Mistral Embeddings:
const db = new VectorDB({
dimensions: 1024,
embeddings: {
service: "mistral"
}
});
// ...
Being able to easily switch embedding providers means you don't get locked in!
VectorDB.js was built on top of Embeddings.js, and passes the full embeddings config option through to Embeddings.js.
VectorDB.js can store any valid JavaScript object along with the embedding.
const db = new VectorDB();
await db.add("orange", "oranges");
await db.add("blue", ["sky", "water"]);
await db.add("green", { "grass": "lawn" });
await db.add("purple", { "flowers": 214 });
const results = await db.search("light green", 1);
assert(results[0].object.grass == "lawn");
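A common pattern is to store only a record id as the attached object, then use search results to look up the full record elsewhere. The records map and ids below are hypothetical, not part of VectorDB.js:

```javascript
// Hypothetical application-side store of full records, keyed by id
const records = new Map([
  [214, { name: "purple flowers", stock: 12 }],
  [301, { name: "green lawn", stock: 4 }],
]);

// Shape of a search result when an object was stored with the embedding;
// here the stored object is just an id pointing into the records map.
const results = [{ input: "green", distance: 0.2, object: { id: 301 } }];

const record = records.get(results[0].object.id);
console.log(record.name); // "green lawn"
```

Keeping the attached object small (an id rather than the whole record) keeps the in-memory database lean while still letting you recover full data after a search.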
This makes it easy to store metadata about the embedding, like an object id, a URL, and more.
VectorDB.js works great by itself, but was built side-by-side to work with Model Deployer.
Model Deployer is an easy way to deploy your LLM and embedding models in production. You can monitor usage, rate-limit users, generate API keys with specific settings, and more.
It's especially helpful in offering options to your users: they can download and run models locally, use your API, or provide their own API key.
It works out of the box with VectorDB.js.
const db = new VectorDB({
embeddings: {
service: "modeldeployer",
model: "api-key",
}
});
await db.add("orange", "oranges");
await db.add("blue", ["sky", "water"]);
await db.add("green", { "grass": "lawn" });
await db.add("purple", { "flowers": 214 });
const results = await db.search("light green", 1);
assert(results[0].object.grass == "lawn");
Learn more about deploying embeddings with Model Deployer.
VectorDB.js is currently used in the following projects:
- AI.js — simple AI library
- Model Deployer — deploy AI models in production
- HyperType — knowledge graph toolkit
- HyperTyper — multidimensional mind mapping
MIT
Created by The Maximalist, see our open-source projects.