Overview
Features
- Semantic memory storage: Store and retrieve text passages by semantic meaning, using embeddings rather than keyword matching (a storage sketch follows this list).
- Memorize multiple texts: Memorize several texts in a single operation for later semantic retrieval.
- PDF memorization: Memorize the contents of a PDF by reading up to 20 pages at a time, chunking them into meaningful segments, and storing the chunks (see the PDF sketch below).
- Conversational chunking: LLM-assisted splitting of long texts into short, meaningful chunks, memorized iteratively (see the chunking sketch below).
- Semantic retrieval: Retrieve the stored texts most relevant to a query, with human-readable relevance descriptions (see the retrieval sketch below).
- Local vector store (ChromaDB): Uses ChromaDB for vector storage and fast similarity search.
- Embedding generation with Ollama: Generates embeddings with Ollama to populate the vector store.
- MCP tooling and deployment: Ships with MCP configuration and a Docker Compose setup that runs ChromaDB, Ollama, and the MCP server together.
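
As a concrete illustration of the storage path, here is a minimal sketch that embeds texts with Ollama and stores them in a local ChromaDB collection. It assumes `pip install chromadb ollama`, a running Ollama daemon, and that the `nomic-embed-text` model has been pulled; the collection name and the `memorize` helper are illustrative, not the server's actual API.

```python
# Sketch: embed texts with Ollama and store them in a local ChromaDB collection.
# Model name, collection name, and helper name are illustrative assumptions.
import chromadb
import ollama

client = chromadb.PersistentClient(path="./chroma-data")
collection = client.get_or_create_collection("memory")

def memorize(texts: list[str]) -> None:
    """Embed each text with Ollama and add it to the collection."""
    embeddings = [
        ollama.embeddings(model="nomic-embed-text", prompt=t)["embedding"]
        for t in texts
    ]
    collection.add(
        ids=[f"mem-{collection.count() + i}" for i in range(len(texts))],
        embeddings=embeddings,
        documents=texts,
    )

memorize([
    "ChromaDB stores embeddings locally.",
    "Ollama can generate embeddings without cloud APIs.",
])
```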
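Retrieval is the reverse path: embed the query, run a nearest-neighbor search, and turn raw distances into readable labels. The distance thresholds below are illustrative assumptions, not the server's actual scoring.

```python
import chromadb
import ollama

collection = chromadb.PersistentClient(path="./chroma-data").get_or_create_collection("memory")

def recall(query: str, n_results: int = 3) -> list[tuple[str, str]]:
    """Return (text, relevance description) pairs for the closest stored memories."""
    query_embedding = ollama.embeddings(model="nomic-embed-text", prompt=query)["embedding"]
    results = collection.query(query_embeddings=[query_embedding], n_results=n_results)

    def describe(distance: float) -> str:
        # Illustrative thresholds; the real relevance wording is up to the server.
        if distance < 0.5:
            return "highly relevant"
        if distance < 1.0:
            return "somewhat relevant"
        return "weakly related"

    return [
        (doc, describe(dist))
        for doc, dist in zip(results["documents"][0], results["distances"][0])
    ]

for text, relevance in recall("local embedding storage"):
    print(f"{relevance}: {text}")
```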
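For PDFs, one plausible shape of the pipeline is sketched below, assuming pypdf for text extraction. The 20-page window comes from the feature description; the paragraph splitter is a simple stand-in for the server's LLM-assisted chunking, and `memorize` is the hypothetical helper from the storage sketch.

```python
from pypdf import PdfReader

def memorize_pdf(path: str, max_pages: int = 20) -> None:
    """Extract text from up to `max_pages` pages, chunk it, and store the chunks."""
    reader = PdfReader(path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages[:max_pages])
    # Stand-in chunker: split on blank lines and keep non-trivial paragraphs.
    chunks = [p.strip() for p in text.split("\n\n") if len(p.strip()) > 40]
    memorize(chunks)  # memorize() as defined in the storage sketch above
```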
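Conversational chunking delegates the splitting itself to an LLM. In the hedged sketch below, the prompt, the `llama3` model name, and the `---` delimiter convention are all assumptions about how such a splitter could work, not the server's actual prompt.

```python
import ollama

def llm_chunk(long_text: str) -> list[str]:
    """Ask a local LLM to split text into short, self-contained chunks."""
    response = ollama.chat(
        model="llama3",  # assumed local model; any chat-capable model would do
        messages=[{
            "role": "user",
            "content": (
                "Split the following text into short, self-contained chunks. "
                "Separate chunks with '---'.\n\n" + long_text
            ),
        }],
    )
    return [c.strip() for c in response["message"]["content"].split("---") if c.strip()]
```

Each returned chunk can then be passed to the `memorize` helper for iterative storage.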
Who Is This For?
- Developers: Build local semantic memory systems by storing and retrieving text passages with embeddings and vector search.
- Data scientists / ML engineers: Prototype and test retrieval workflows over PDFs and long documents in a local setup.
- Researchers: Study content-based retrieval methods in privacy-conscious local deployments, with an admin GUI for inspection.
