# Feather: SQLite for Vectors

A fast, lightweight vector database built in C++ around the HNSW (Hierarchical Navigable Small World) algorithm for approximate nearest neighbor search.

## Features
- 🚀 High Performance: Built with C++ and optimized HNSW algorithm
- 🐍 Python Integration: Native Python bindings with NumPy support
- 🦀 Rust CLI: Command-line interface for easy database operations
- 💾 Persistent Storage: Custom binary format with automatic save/load
- 🔍 Fast Search: Approximate nearest neighbor search with configurable parameters
- 📦 Multi-Language: C++, Python, and Rust APIs
## Quick Start

### Python

```python
import feather_py
import numpy as np

# Open or create a database
db = feather_py.DB.open("my_vectors.feather", dim=768)

# Add vectors
vector = np.random.random(768).astype(np.float32)
db.add(id=1, vec=vector)

# Search for similar vectors
query = np.random.random(768).astype(np.float32)
ids, distances = db.search(query, k=5)

print(f"Found {len(ids)} similar vectors")
for i, (id, dist) in enumerate(zip(ids, distances)):
    print(f"  {i + 1}. ID: {id}, Distance: {dist:.4f}")

# Save the database
db.save()
```

### C++

```cpp
#include "include/feather.h"
#include <iostream>
#include <vector>

int main() {
    // Open database
    auto db = feather::DB::open("my_vectors.feather", 768);

    // Add a vector
    std::vector<float> vec(768, 0.1f);
    db->add(1, vec);

    // Search
    std::vector<float> query(768, 0.1f);
    auto results = db->search(query, 5);
    for (auto [id, distance] : results) {
        std::cout << "ID: " << id << ", Distance: " << distance << std::endl;
    }
    return 0;
}
```

### CLI

```sh
# Create a new database
feather new my_db.feather --dim 768

# Add vectors from NumPy files
feather add my_db.feather 1 --npy vector1.npy
feather add my_db.feather 2 --npy vector2.npy

# Search for similar vectors
feather search my_db.feather --npy query.npy --k 10
```

## Requirements

- C++17 compatible compiler
- Python 3.8+ (for Python bindings)
- Rust 1.70+ (for CLI tool)
- pybind11 (for Python bindings)
## Installation

1. Clone the repository

   ```sh
   git clone <repository-url>
   cd feather
   ```

2. Build the C++ core

   ```sh
   g++ -O3 -std=c++17 -fPIC -c src/feather_core.cpp -o feather_core.o
   ar rcs libfeather.a feather_core.o
   ```

3. Build the Python bindings

   ```sh
   pip install pybind11 numpy
   python setup.py build_ext --inplace
   pip install -e .
   ```

4. Build the Rust CLI

   ```sh
   cd feather-cli
   cargo build --release
   ```
## Architecture

- `feather::DB`: Main C++ class providing vector database functionality
- HNSW Index: Hierarchical Navigable Small World algorithm for fast ANN search
- Binary Format: Custom storage format with magic number validation
- Multi-language Bindings: Python (pybind11) and Rust (FFI) interfaces
## File Format

Feather uses a custom binary format:

```
[4 bytes] Magic number: 0x46454154 ("FEAT")
[4 bytes] Version: 1
[4 bytes] Dimension
[Records] ID (8 bytes) + Vector data (dim * 4 bytes)
```
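Given that layout, the file can be written and parsed with a few lines of `struct`-based Python. This is a sketch, not the library's actual I/O code, and it assumes little-endian byte order, which the format description above does not state:

```python
import struct

FEATHER_MAGIC = 0x46454154  # "FEAT"

def write_feather(path, dim, records):
    """Write (id, vector) records in the layout above (little-endian assumed)."""
    with open(path, "wb") as f:
        f.write(struct.pack("<III", FEATHER_MAGIC, 1, dim))  # magic, version, dim
        for vec_id, vec in records:
            f.write(struct.pack("<Q", vec_id))               # 8-byte ID
            f.write(struct.pack(f"<{dim}f", *vec))           # dim * 4 bytes of float32

def read_feather(path):
    """Parse the header and records; returns (dim, [(id, vector), ...])."""
    with open(path, "rb") as f:
        magic, version, dim = struct.unpack("<III", f.read(12))
        assert magic == FEATHER_MAGIC, "bad magic number"
        records = []
        while True:
            head = f.read(8)
            if not head:  # end of file
                break
            (vec_id,) = struct.unpack("<Q", head)
            vec = list(struct.unpack(f"<{dim}f", f.read(dim * 4)))
            records.append((vec_id, vec))
        return dim, records
```

A round-trip (`write_feather` then `read_feather`) recovers the same IDs and vectors, which makes this a handy way to inspect a database file outside the library.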
## Configuration

- Index Type: HNSW with L2 distance
- Max Elements: 1,000,000 (configurable)
- Construction Parameters: M=16, ef_construction=200
- Memory Usage: ~4 bytes per dimension per vector + index overhead
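The figures above translate into a back-of-envelope capacity estimate. The sketch below treats the HNSW index overhead as roughly `2 * M` 4-byte neighbor links per vector; that overhead term is an assumption for illustration, not a measured figure:

```python
def estimate_memory_bytes(num_vectors, dim, M=16):
    # Raw vector data: 4 bytes (float32) per dimension per vector
    vector_bytes = num_vectors * dim * 4
    # Rough HNSW link overhead: ~2*M neighbor IDs (4 bytes each) per
    # vector at the base layer -- an assumption, not an exact figure
    index_bytes = num_vectors * M * 2 * 4
    return vector_bytes + index_bytes

# At the defaults above (max 1,000,000 vectors, dim=768, M=16):
total = estimate_memory_bytes(1_000_000, 768)
print(f"{total / 1024**3:.2f} GiB")  # ~2.98 GiB
```

Vector data dominates: of the ~3.2 GB total, only ~128 MB is the assumed link overhead, so dimension is the main lever for memory use.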
## API Reference

### Python

- `DB.open(path: str, dim: int = 768)`: Open or create a database
- `add(id: int, vec: np.ndarray)`: Add a vector with an ID
- `search(query: np.ndarray, k: int = 5)`: Search for the k nearest neighbors
- `save()`: Persist the database to disk
- `dim()`: Get the vector dimension
### C++

- `static std::unique_ptr<DB> open(path, dim)`: Factory method
- `void add(uint64_t id, const std::vector<float>& vec)`: Add a vector
- `auto search(const std::vector<float>& query, size_t k)`: Search for nearest vectors
- `void save()`: Save to disk
- `size_t dim() const`: Get the dimension
### CLI

- `feather new <path> --dim <dimension>`: Create a new database
- `feather add <db> <id> --npy <file>`: Add a vector from a .npy file
- `feather search <db> --npy <query> --k <count>`: Search for similar vectors
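Since the CLI consumes standard `.npy` files, the inputs for `feather add` and `feather search` can be produced with plain NumPy. A short sketch, using the same file names as the CLI quick start and `float32` to match the 4-bytes-per-dimension storage format:

```python
import numpy as np

# Write a 768-dim float32 vector to a .npy file for `feather add`
vec = np.random.random(768).astype(np.float32)
np.save("vector1.npy", vec)

# And a query vector for `feather search`
query = np.random.random(768).astype(np.float32)
np.save("query.npy", query)

# Sanity check: the file round-trips with the expected shape and dtype
loaded = np.load("vector1.npy")
assert loaded.shape == (768,) and loaded.dtype == np.float32
```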
## Examples

### Semantic Search

```python
import feather_py
import numpy as np

# Create database for sentence embeddings
db = feather_py.DB.open("sentences.feather", dim=384)

# Add document embeddings
documents = [
    "The quick brown fox jumps over the lazy dog",
    "Machine learning is a subset of artificial intelligence",
    "Vector databases enable semantic search capabilities"
]

for i, doc in enumerate(documents):
    # Assume get_embedding() returns a 384-dim float32 vector
    embedding = get_embedding(doc)
    db.add(i, embedding)

# Search for similar documents
query_embedding = get_embedding("What is machine learning?")
ids, distances = db.search(query_embedding, k=2)

for id, dist in zip(ids, distances):
    print(f"Document: {documents[id]}")
    print(f"Distance: {dist:.3f}\n")
```

### Batch Processing

```python
import feather_py
import numpy as np

db = feather_py.DB.open("large_dataset.feather", dim=512)

# Batch add vectors
batch_size = 1000
for batch_start in range(0, 100000, batch_size):
    for i in range(batch_size):
        vector_id = batch_start + i
        vector = np.random.random(512).astype(np.float32)
        db.add(vector_id, vector)

    # Periodic save
    if batch_start % 10000 == 0:
        db.save()
        print(f"Processed {batch_start + batch_size} vectors")
```

## Performance Tips

- Batch Operations: Add vectors in batches and save periodically
- Memory Management: Consider vector dimension vs. memory usage trade-offs
- Search Parameters: Adjust the `k` parameter based on your precision/recall needs
- File I/O: Use SSD storage for better performance with large databases
## Contributing

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request
## License

[Add your license information here]