Overview
Features
On-premises isolated deployment
Operate fully on-premises with containers; all LLM, embedding, and reranker components run locally without external dependencies.
Custom GPT local docs integration
Query local documents from the ChatGPT app or web interface through a custom GPT; the indexer runs on your PC or in the cloud, and ChatGPT serves as the LLM.
Anthropic Claude integration
Use Anthropic Claude as the primary LLM for querying local documents, with the indexer running locally.
MCP Desktop integration
Connect MCP clients by running the MCP docker-compose configuration and adding the corresponding entry to the Anthropic Claude Desktop app.
Multi-mode deployment via docker-compose
Three docker-compose options (fully local, ChatGPT-enabled, MCP-enabled) cover the different usage scenarios; see the command sketch below.
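For illustration, each mode would be started with its own compose file. A minimal sketch, assuming an `.env` file in the project root; the compose file names used here are assumptions, so check the repository for the exact ones:

```sh
# Fully local mode: LLM, embeddings, and reranker all run in containers
docker compose -f docker-compose-ollama.yml --env-file .env up --build

# ChatGPT-enabled mode: local indexer, ChatGPT custom GPT as the LLM
docker compose -f docker-compose-chatgpt.yml --env-file .env up --build

# MCP-enabled mode: exposes the indexer to MCP clients (e.g., Claude Desktop)
docker compose -f docker-compose-mcp.yml --env-file .env up --build
```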
Local indexing of documents
Recursively indexes everything under LOCAL_FILES_PATH; supported formats are .pdf, .xls, .docx, .txt, .md, and .csv (configured via the .env file sketched below).
Configurable ML stack
Set EMBEDDING_MODEL_ID and EMBEDDING_SIZE for the Qdrant vector store, OLLAMA_MODEL for the LLM, and RERANKER_MODEL for reranking; USER_ID and PASSWORD are needed only for the ChatGPT integration.
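A single `.env` file drives both indexing and the ML stack. A sketch with placeholder values (the path and model IDs are illustrative examples, not mandated defaults; EMBEDDING_SIZE must match the chosen embedding model's output dimension):

```env
# Folder that the indexer walks recursively
LOCAL_FILES_PATH=/path/to/your/documents/
# Embedding model and its vector dimension, used by Qdrant
EMBEDDING_MODEL_ID=sentence-transformers/all-mpnet-base-v2
EMBEDDING_SIZE=768
# Model served by Ollama as the local LLM
OLLAMA_MODEL=qwen2:0.5b
# Cross-encoder used to rerank retrieved chunks
RERANKER_MODEL=BAAI/bge-reranker-base
# Required only for the ChatGPT (custom GPT) integration
USER_ID=you@example.com
PASSWORD=choose-a-password
```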
MCP installation and tooling
Install the MCP server via Smithery; Copilot- and Claude-specific MCP configurations are supported, with mcp.json guidance and one-time-password (OTP) handling. See the sketch below.
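For illustration, the Smithery route might look like the following. This is a sketch: it assumes the server is published on Smithery under the name `minima`, so verify the package name against the repository.

```sh
# Install Minima's MCP server into Claude via Smithery
# (assumes a Smithery package named "minima")
npx -y @smithery/cli install minima --client claude
```

A manual mcp.json entry for Claude Desktop or Copilot could take this shape; the `mcp-server` directory and the `uv` runner are assumptions based on common MCP setups, and the path is a placeholder:

```json
{
  "mcpServers": {
    "minima": {
      "command": "uv",
      "args": ["--directory", "/path/to/minima/mcp-server", "run", "minima"]
    }
  }
}
```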
Who Is This For?
- Developers: Set up on-prem RAG with local document indexing and MCP integration.
- IT/DevOps: Deploy containerized Minima across teams with MCP and local data access.
- End users: Query local documents via ChatGPT, Claude, or Copilot using local indexing.
- MCP developers: Integrate Minima with MCP desktop apps and Copilot workflows.
