Elasticsearch Memory

Persistent memory with hierarchical categorization, semantic search, and intelligent auto-detection. Install via PyPI.

Stars: 1
Forks: 1
Releases: 0

Overview

Elasticsearch Memory MCP is a Model Context Protocol server that provides persistent, intelligent memory by leveraging Elasticsearch for vector embeddings and hierarchical categorization. It organizes memories into five categories: identity, active_context, active_project, technical_knowledge, and archived, with automatic category detection, confidence scoring, and manual reclassification support.

The server features intelligent auto-detection with an accumulative scoring system (0.7-0.95 confidence range) and 23+ keyword patterns for context-aware categorization. A batch review system enables bulk processing of uncategorized memories with approve/reject/reclassify workflows for faster moderation, and a backward-compatible fallback loads v5 uncategorized memories with no data loss during upgrades. Optimized context loading prioritizes memories in a hierarchical order, reducing tokens by approximately 60-70% and improving relevance ranking.

Memory is persisted with vector embeddings for semantic search, session management with checkpoints, and conversation snapshots. The server exposes tools such as save_memory, load_initial_context, review_uncategorized_batch, apply_batch_categorization, search_memory, and auto_categorize_memories; it is installable from PyPI and supports Claude Desktop and Claude Code CLI integration.
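As a rough picture of the save/search lifecycle described above, here is a minimal in-process sketch. The `MemoryStore` class and its keyword matching are illustrative stand-ins only; the real server exposes `save_memory` and `search_memory` as MCP tools backed by Elasticsearch and embedding-based search.

```python
# Minimal sketch of the save/search lifecycle. MemoryStore and its
# substring matching are assumptions, not the server's actual API.

CATEGORIES = ["identity", "active_context", "active_project",
              "technical_knowledge", "archived"]

class MemoryStore:
    def __init__(self):
        self.memories = []

    def save_memory(self, text, category=None):
        # category=None models an uncategorized (v5-style) memory
        if category is not None and category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        entry = {"text": text, "category": category}
        self.memories.append(entry)
        return entry

    def search_memory(self, query):
        # Stand-in for semantic search over vector embeddings.
        return [m for m in self.memories
                if query.lower() in m["text"].lower()]

store = MemoryStore()
store.save_memory("User prefers Python for scripting", "technical_knowledge")
store.save_memory("Refactoring the ingest pipeline", "active_project")
hits = store.search_memory("python")
```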

Details

Owner
fredac100
Language
Python
License
MIT License
Updated
2025-12-07

Features

Hierarchical Memory Categorization

Five category types: identity, active_context, active_project, technical_knowledge, archived; automatic detection with confidence scoring and manual reclassification.

Intelligent Auto-Detection

Accumulative scoring system with 0.7-0.95 confidence range and 23+ keyword patterns for context-aware categorization.
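A minimal sketch of how an accumulative keyword scorer clamped to the 0.7-0.95 band might work. The patterns, the 0.05 step, and the function name below are invented for illustration; only the confidence band comes from the description (the real server ships 23+ patterns).

```python
# Illustrative accumulative scorer. Keyword patterns and the 0.05 step
# are assumptions; only the 0.7-0.95 confidence band is documented.
PATTERNS = {
    "active_project": ["working on", "implementing", "todo"],
    "technical_knowledge": ["algorithm", "library", "elasticsearch"],
    "identity": ["my name", "i prefer", "i am"],
}

def auto_categorize(text, base=0.70, step=0.05, cap=0.95):
    text = text.lower()
    best_category, best_score = None, 0.0
    for category, keywords in PATTERNS.items():
        hits = sum(1 for kw in keywords if kw in text)
        if hits == 0:
            continue
        # Each extra matching keyword accumulates, clamped to the cap.
        score = min(base + (hits - 1) * step, cap)
        if score > best_score:
            best_category, best_score = category, score
    return best_category, best_score

category, confidence = auto_categorize(
    "Currently implementing the Elasticsearch indexer, todo: add tests")
```

Two `active_project` keywords match here ("implementing", "todo"), so that category accumulates to 0.75 and outranks the single `technical_knowledge` hit at 0.70.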

Batch Review System

Review uncategorized memories in batches with approve/reject/reclassify workflows; delivers speed improvements over item-by-item processing.
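The batch workflow could look roughly like this; the decision-tuple format and the function signature are assumptions made for the sketch, though `apply_batch_categorization` is a real tool name from the server's tool list.

```python
# Hedged sketch of applying one batch of review decisions.
# decisions maps memory index -> ("approve"|"reject"|"reclassify", category)
def apply_batch_categorization(memories, decisions):
    kept = []
    for i, memory in enumerate(memories):
        verb, category = decisions.get(i, ("approve", None))
        if verb == "reject":
            continue  # dropped from the store
        if verb == "reclassify":
            memory = {**memory, "category": category}
        kept.append(memory)
    return kept

batch = [
    {"text": "deprecated build notes", "category": "archived"},
    {"text": "service uses FastAPI", "category": None},  # uncategorized
    {"text": "accidental paste", "category": None},
]
reviewed = apply_batch_categorization(batch, {
    1: ("reclassify", "technical_knowledge"),
    2: ("reject", None),
})
```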

Backward Compatible Fallback

Seamless loading of v5 uncategorized memories with no data loss during upgrades and graceful degradation.

Optimized Context Loading

Hierarchical priority loading reduces memory load from ~117 memories to ~30-40 and achieves 60-70% token reduction with smart relevance ranking.
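One way to picture the priority order is a stable sort by category rank with a fixed budget. The budget default and sort key below are assumptions; only the category hierarchy and the ~30-40 memory target come from the description.

```python
# Sketch: load highest-priority categories first, stop at a fixed budget.
PRIORITY = ["identity", "active_context", "active_project",
            "technical_knowledge", "archived"]

def load_initial_context(memories, budget=40):
    ranked = sorted(memories, key=lambda m: PRIORITY.index(m["category"]))
    return ranked[:budget]

sample = ([{"category": "archived"}] * 3
          + [{"category": "identity"}]
          + [{"category": "active_context"}] * 2)
loaded = load_initial_context(sample, budget=3)
```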

Persistent Memory

Vector embeddings for semantic search; session management with checkpoints and conversation snapshots.
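The semantic-search side can be pictured as nearest-neighbor lookup over embeddings. In the real server the vectors come from an embedding model and live in an Elasticsearch index; this toy uses hand-made 2-D vectors purely for illustration.

```python
import math

# Toy vector index: (text, embedding) pairs with hand-made 2-D vectors.
INDEX = [
    ("user prefers dark mode", [0.9, 0.1]),
    ("cluster runs Elasticsearch 8", [0.1, 0.9]),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def semantic_search(query_vector, top_k=1):
    # Rank stored memories by cosine similarity to the query vector.
    ranked = sorted(INDEX, key=lambda item: cosine(query_vector, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

best = semantic_search([0.0, 1.0])
```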

Audience

Claude Desktop: Configure Claude Desktop to connect to the MCP server via uvx or a Python module for memory access.
Claude Code CLI: Register and connect via the Claude Code CLI to access memory search, batch review, and categorization workflows.
Developers: Contribute to the MCP server or integrate it into custom applications and testing.

Tags

elasticsearch, memory, MCP, hierarchical categorization, semantic search, vector embeddings, auto-detection, batch review, backward compatibility, session management, claude integration