CF Memory MCP
A best-in-class MCP (Model Context Protocol) server for AI memory storage using Cloudflare infrastructure. This package provides AI coding agents with intelligent memory management featuring smart auto-features, intelligent search, memory collections, temporal intelligence, multi-agent collaboration, advanced analytics, and a real-time analytics dashboard with interactive visualizations and business intelligence.
🎯 Current Version: v2.12.1
📊 Real-time Analytics Dashboard
NEW: Beautiful, high-performance analytics dashboard with interactive visualizations and business intelligence
🌐 Live Dashboard: https://55a2aea1.cf-memory-dashboard-vue.pages.dev
Key Features
- 🔄 Real-time Updates - Live data streaming with Server-Sent Events (SSE)
- 📈 Interactive Charts - Quality heatmaps, learning velocity gauges, performance radar charts
- 🕸️ Network Visualization - Memory relationship graphs with clustering and filtering
- 📱 Mobile Responsive - Optimized for desktop, tablet, and mobile devices
- 🌙 Dark/Light Themes - Automatic theme switching with user preferences
- 📊 Export & Reports - JSON/CSV export for business intelligence and presentations
- ⚡ <2s Loading - Enterprise-grade performance with global CDN
- 🧪 Built-in Testing - Comprehensive performance and functionality testing suite
Business Value
- Quality Tracking - Monitor AI learning progress from 27% to 60%+ quality scores
- Performance Monitoring - Real-time system health and optimization insights
- Decision Support - Data-driven insights for strategic planning and resource allocation
- ROI Measurement - Quantifiable metrics for AI investment returns
Quick Start
# Deploy dashboard (requires Cloudflare account)
cd dashboard-vue
npm run deploy:production
# Or access the live demo
open https://55a2aea1.cf-memory-dashboard-vue.pages.dev
📖 Documentation: Dashboard README | Executive Summary
🚀 NEW: Enhanced JSON + Cloudflare Vectorize Integration (v2.12.1) - Next-Level Semantic Search:
- 🎯 Entity-Level Vectorization - Individual JSON entities get their own vectors for granular semantic search
- 🔍 Multi-Level Search Architecture - Search at memory level AND entity level simultaneously
- 🤖 Automatic Relationship Discovery - AI-powered similarity-based relationship suggestions
- 📊 85-95% Search Accuracy - Enterprise-grade semantic understanding of complex data structures
- ⚡ 50-70% Faster Discovery - Optimized performance with Cloudflare's edge infrastructure
- 🔗 Cross-Memory Entity Linking - Connect similar entities across different JSON memories
- 📈 Entity Analytics - Importance scoring and pattern analysis for JSON structures
🔥 Enhanced JSON Processing & Temporal Relationship Tracking (v2.12.0) - Graphiti-Inspired Features:
- 📊 Enhanced JSON Processing - Automatic entity extraction from structured JSON data with JSONPath tracking
- 🕒 Temporal Relationship Tracking - Relationship versioning, validity status, and evolution history
- 🔗 Relationship Evolution - Track how connections between memories change over time
- 📝 Source Type Support - Handle text, JSON, and message format data with automatic processing
- 🎯 Entity Relationship Mapping - Automatic relationship generation between JSON entities
- 📈 Relationship Analytics - Evolution summaries and temporal pattern analysis
- 🔧 New MCP Tools - update_memory_relationship, search_relationships_temporal, get_relationship_evolution
- 🗄️ Database Extensions - Enhanced schema with memory_entities table and temporal indexes
🧠 Priority 4 - Context-Aware + Temporal Intelligence (v2.11.0) - AI-Enhanced Features:
- 🎯 AI-Enhanced Contextual Suggestions - Smart suggestions using semantic search and AI-powered relevance scoring
- 🕒 Advanced Temporal Intelligence - Enhanced time-aware search with sophisticated temporal scoring algorithms
- 🔄 Context-Switching Optimization - Automatic project detection and intelligent context switching
- 📊 Temporal Pattern Analytics - Advanced pattern recognition with ML-powered predictions
- 🤖 AI-Powered Suggestion Text - Intelligent suggestion generation using Cloudflare AI (Llama 3.1 8B)
- 📈 Enhanced Temporal Relevance - Context-aware scoring with access patterns and importance weighting
- 🧠 Smart Context Detection - AI-powered context extraction from conversation history
- ⚡ Semantic Context Matching - Vector-based project context discovery with 95%+ confidence
🧠 AI/ML Intelligence Engine (v2.9.0) - Production AI Features:
- 🤖 AI-Powered Content Expansion - Real content enrichment using Llama 3.1 8B (replaces static text appending)
- 🏷️ Semantic Tag Generation - Intelligent tagging with Cloudflare AI classification models
- 📊 Real Performance Monitoring - Actual metrics from database analytics (replaces mock data)
- ⚡ Enhanced Analytics Dashboard - Database-driven performance tracking and system health
- 🎯 Production AI Models - BGE embeddings, DistilBERT sentiment, Llama classification
- 🔧 Improved Quality Scoring - AI-powered analysis with >95% prediction confidence
- 📈 Performance Tracking - Real-time operation monitoring with automatic metric collection
🚀 Cloudflare Vectorize Integration (v2.8.1) - Paid Tier Enhancement:
- 🎯 Advanced Vector Search - Cloudflare Vectorize for lightning-fast semantic search (50M queries/month)
- 📊 Vector Storage - Dedicated vector database with 10M stored dimensions/month
- 🔍 Enhanced Similarity - Superior semantic search performance vs D1-based embeddings
- 🧩 Memory Clustering - AI-powered clustering analysis using vector similarity
- 📈 Paid Tier Optimization - 33x more KV writes, 10x larger batches, 6x faster learning cycles
- ⚡ Performance Boost - 50-70% response time reduction through optimized caching
⚡ KV Optimization Engine (v2.8.0) - Performance & Reliability:
- 🎯 Intelligent Caching - Optimized cache service with conditional writes and longer TTL values
- 📊 KV Usage Monitoring - Real-time tracking to prevent daily limit breaches (1,000 writes/day)
- 🗄️ D1 Database Fallback - Analytics data stored in D1 to reduce KV write frequency
- 🔄 Batched Operations - Write queue batching to minimize KV operations
- 📈 Usage Analytics - Trends, recommendations, and optimization insights
- 🛡️ Limit Protection - Automatic prevention of KV limit exceeded errors
🧠 Memory Intelligence Engine (v2.7.0) - Autonomous Optimization:
- 🤖 Automated Learning Loops - Self-improving algorithms with A/B testing framework
- 🎯 Adaptive Thresholds - Dynamic parameter optimization based on performance data
- 🧪 Learning Experiments - Scientific approach to testing optimization strategies
- 📊 A/B Testing Framework - Rigorous experimentation with statistical analysis
- 🔄 Autonomous Optimization - System continuously improves itself without manual intervention
Previous Features (Phase 2 Enhancements):
- 🚀 Quality Auto-Improvement Engine - AI-powered memory enhancement to boost quality scores from 27% to 60%+
- 🔧 Content Expansion - Intelligent AI analysis to expand short memories with relevant context
- 🏷️ Smart Tag Enhancement - Automatic tag suggestions and improvements for better organization
- ⚖️ Importance Recalculation - Dynamic importance scoring based on content analysis and usage patterns
Previous Features (Phase 1 Enhancements):
- 📊 Memory Analytics Dashboard - Real-time statistics and performance insights
- 🔍 Advanced Search Filters - Date range, importance, size, and boolean search
- 🏥 Memory Health Monitoring - Orphan detection and quality scoring
- 📈 Performance Metrics - Response time tracking and cache efficiency analysis
- 📤 Rich Export/Import - Multiple formats including graph visualization
Total Tools Available: 50+ spanning memory management, relationships, temporal intelligence, collaboration, autonomous optimization, KV performance monitoring, and advanced vector search.
🎯 Agent Tool Selection Solutions (v2.9.1)
NEW: Comprehensive guidance for AI agents to efficiently select from 31+ available MCP tools
With 31+ powerful MCP tools available, selecting the right tool for your task can be overwhelming. Our Agent Tool Selection Solutions provide structured guidance to help AI agents quickly identify optimal tools and workflows.
📚 Documentation Suite
- Intent-Based Tool Selection Guide - Clear mappings from user intents to appropriate tools
- Common Workflow Patterns - 5 proven workflow templates for common agent tasks
- Tool Categories & Organization - 31+ tools organized into 8 logical categories
- Performance Tips & Best Practices - Optimization guidelines for maximum efficiency
🔧 Tool Categories (8 Categories, 31+ Tools)
Category | Tools | Best For |
---|---|---|
🔧 CORE | 5 tools | Daily operations, simple tasks |
📦 BATCH | 3 tools | Bulk operations (>5 items) |
🕸️ GRAPH | 6 tools | Exploring connections, relationships |
🧠 INTELLIGENCE | 6 tools | AI-powered automation, quality improvement |
🎯 CONTEXT | 6 tools | Project management, relevant suggestions |
🤝 COLLABORATION | 6 tools | Team projects, multi-agent workflows |
📊 ANALYTICS | 7 tools | System monitoring, performance analysis |
⏰ LIFECYCLE | 7 tools | Data maintenance, system optimization |
⚡ Quick Selection Guide
- Need basic operations? → CORE tools
- Working with many items? → BATCH tools
- Exploring connections? → GRAPH tools
- Want AI assistance? → INTELLIGENCE tools
- Working on projects? → CONTEXT tools
- Collaborating with others? → COLLABORATION tools
- Monitoring system? → ANALYTICS tools
- Managing data lifecycle? → LIFECYCLE tools
🔄 Common Workflow Patterns
- New Project Setup (see the example below): `create_project_context` → `project_onboarding` → `store_multiple_memories` → `build_automatic_relationships`
- Research & Discovery: `intelligent_search` → `get_related_memories` → `traverse_memory_graph` → `get_contextual_suggestions`
- Quality Improvement: `memory_health_check` → `improve_memory_quality` → `repair_and_enhance_tags` → `detect_duplicates`
- Analytics & Insights: `get_memory_stats` → `get_usage_analytics` → `analyze_temporal_relationships`
- Collaboration Setup: `register_agent` → `create_memory_space` → `grant_space_permission` → `add_memory_to_space`
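A minimal sketch of the New Project Setup workflow, using the same illustrative `callTool` helper as the Smart Tool Recommendation examples below. The tool names come from the patterns above; the argument shapes and values are assumptions for demonstration only:

```javascript
// New Project Setup workflow (argument shapes are illustrative, not documented here)
await callTool('create_project_context', {
  project_name: 'ecommerce-platform'
});

await callTool('project_onboarding', {
  project_name: 'ecommerce-platform',
  technologies: ['React', 'Cloudflare Workers'],
  goals: ['Launch MVP in Q3']
});

await callTool('store_multiple_memories', {
  memories: [
    { content: 'Checkout must support Stripe and PayPal', tags: ['requirements'] },
    { content: 'Team chose D1 for persistence', tags: ['decisions'] }
  ]
});

await callTool('build_automatic_relationships', {});
```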
🤖 Smart Tool Recommendation (NEW!)
Get intelligent tool recommendations based on your intent:
// Example: Finding information
await callTool('recommend_tools', {
user_intent: 'I want to find information about React performance optimization',
current_context: 'Working on a React project',
task_description: 'Need to improve the performance of my React application'
});
// Returns:
// - Intent: "search_data" (66% confidence)
// - Top tools: intelligent_search, store_memory, retrieve_memory
// - Workflows: Quality Improvement, Analytics & Insights
// Example: Storing project data
await callTool('recommend_tools', {
user_intent: 'I want to store multiple memories about my new project',
current_context: 'Starting a new e-commerce project',
task_description: 'Need to save project requirements, team info, and technical decisions'
});
// Returns:
// - Intent: "store_data" (95% confidence)
// - Top tools: store_memory, retrieve_memory, search_memories
// - Workflows: New Project Setup, Collaboration Setup
💡 Performance Tips
- Use batch tools for >5 operations (10x performance improvement)
- Enable `semantic: true` for AI-powered search capabilities
- Set project context for better relevance and accuracy
- Use `get_contextual_suggestions` when unsure what to do next
- Use `recommend_tools` for intelligent tool selection guidance
- Leverage AI features for automation and quality improvement
🚀 Quick Start
# Run directly with npx (no installation required)
npx cf-memory-mcp
# Or install globally
npm install -g cf-memory-mcp
cf-memory-mcp
✨ Features
Core Features
- 🌐 Completely Portable - No local setup required, connects to deployed Cloudflare Worker
- ⚡ Production Ready - Uses Cloudflare D1 database and KV storage for reliability
- 🔧 Zero Configuration - Works out of the box with any MCP client
- 🌍 Cross Platform - Supports Windows, macOS, and Linux
- 📦 NPX Compatible - Run instantly without installation
- 🔒 Secure - Built on Cloudflare's secure infrastructure
- 🚄 Fast - Global edge deployment with KV caching
🤖 Smart Auto-Features (v2.0.0)
- 🔗 Auto-Relationship Detection - Automatically suggests relationships between memories
- 🔍 Duplicate Detection - Identifies potential duplicates with merge strategies
- 🏷️ Smart Tagging - AI-powered tag suggestions based on content analysis
- ⭐ Auto-Importance Scoring - ML-based importance prediction with detailed reasoning
🧠 Intelligent Search & Collections (v2.0.0)
- 🎯 Intelligent Search - Combines semantic + keyword + graph traversal with query expansion
- 📚 Memory Collections - Organize memories with auto-include criteria and sharing
- 🚀 Project Onboarding - Automated workflows for project setup and knowledge extraction
- 🔄 Query Expansion - Automatically includes synonyms and related terms
⏰ Context-Aware & Temporal Intelligence (v2.2.0)
- 🧠 Conversation Context - Track and manage conversation-specific memory contexts
- ⏰ Temporal Relevance - Time-based memory scoring and decay management
- 🔄 Memory Evolution - Version control and evolution tracking for memories
- 📊 Temporal Analytics - Analyze how memories and relationships change over time
- 🎯 Context Activation - Smart memory activation based on conversation context
- 📈 Predictive Relevance - ML-powered predictions for memory importance over time
🤝 Multi-Agent Collaboration (v2.3.0)
- 👥 Agent Management - Register and authenticate multiple AI agents
- 🏠 Collaborative Spaces - Shared memory workspaces with permission control
- 🔐 Access Control - Fine-grained permissions (read/write/admin) for agents
- 🔄 Memory Synchronization - Real-time sync between different instances
- ⚡ Conflict Resolution - Smart merge strategies for concurrent edits
- 📊 Collaboration Analytics - Track agent interactions and collaboration patterns
🧠 Memory Intelligence Engine (v2.7.0)
- 🤖 Automated Learning Loops - Self-improving algorithms that continuously optimize system performance
- 🎯 Adaptive Thresholds - Dynamic parameter adjustment based on real-time performance data
- 🧪 Learning Experiments - Create and manage A/B tests for optimization strategies
- 📊 A/B Testing Framework - Scientific experimentation with statistical analysis and confidence scoring
- 🔄 Improvement Cycles - Autonomous optimization cycles that identify and apply performance enhancements
- 📈 Predictive Analytics - ML-powered predictions with >95% confidence targeting
- 🎛️ Threshold Management - Initialize and manage quality, relevance, importance, and relationship thresholds
- 📋 Experiment Analysis - Automated analysis of test results with optimization recommendations
📤 Advanced Export/Import (v2.3.0)
- 📋 Multi-Format Export - JSON, XML, Markdown, CSV, GraphML formats
- 🔄 Batch Operations - Asynchronous export/import job processing
- 🕸️ Graph Visualization - Export memory networks for analysis tools
- 📦 Rich Metadata - Full preservation of relationships and collaboration data
- 🔀 Conflict Handling - Smart import strategies for existing memories
📊 Phase 1 Enhancements (v2.5.0)
- 📈 Memory Analytics Dashboard - Real-time statistics, usage patterns, and performance metrics
- 🔍 Advanced Search Filters - Date range, importance score, content size, and boolean search operators
- 🏥 Memory Health Monitoring - Orphan detection, stale memory identification, and quality scoring
- 📊 Performance Insights - Response time tracking, cache efficiency, and database performance
- 🎯 Quality Analysis - Multi-factor quality scoring with improvement recommendations
Advanced Features
- 🧠 Semantic Search - AI-powered vector search using Cloudflare AI Workers
- 🕸️ Knowledge Graph - Store and traverse relationships between memories
- 📦 Batch Operations - Efficiently process multiple memories at once
- 🔍 Graph Traversal - Find paths and connections between related memories
- 🎯 Smart Filtering - Advanced search with tags, importance, and similarity
🛠️ Usage
With MCP Clients
Add to your MCP client configuration:
{
"mcpServers": {
"cf-memory": {
"command": "npx",
"args": ["cf-memory-mcp"]
}
}
}
With Augment
Add to your `augment-config.json`:
{
"mcpServers": {
"cf-memory": {
"command": "npx",
"args": ["cf-memory-mcp"]
}
}
}
With Claude Desktop
Add to your Claude Desktop MCP configuration:
{
"mcpServers": {
"cf-memory": {
"command": "npx",
"args": ["cf-memory-mcp"]
}
}
}
🔧 Available Tools
The CF Memory MCP server provides comprehensive memory management tools:
Core Memory Operations
store_memory
Store a new memory with optional metadata and tags.
Parameters:
- `content` (string, required) - The memory content
- `tags` (array, optional) - Tags for categorization
- `importance_score` (number, optional) - Importance score 0-10
- `metadata` (object, optional) - Additional metadata
search_memories
Search memories by content and tags with optional semantic search.
Parameters:
- `query` (string, optional) - Full-text or semantic search query
- `tags` (array, optional) - Filter by specific tags
- `limit` (number, optional) - Maximum results (default: 10)
- `offset` (number, optional) - Results offset (default: 0)
- `min_importance` (number, optional) - Minimum importance score
- `semantic` (boolean, optional) - Use AI-powered semantic search
- `similarity_threshold` (number, optional) - Minimum similarity for semantic search
retrieve_memory
Retrieve a specific memory by ID.
Parameters:
- `id` (string, required) - The unique memory ID
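Putting the core operations together, following the `callTool` pattern used in the recommendation examples above. The content, tags, and the returned `id` field are illustrative assumptions:

```javascript
// Store a memory, then find it again
const stored = await callTool('store_memory', {
  content: 'Use optimistic locking for cart updates',
  tags: ['architecture', 'decisions'],
  importance_score: 7,
  metadata: { project: 'ecommerce-platform' }
});

// Combined keyword + semantic search
const results = await callTool('search_memories', {
  query: 'cart concurrency strategy',
  semantic: true,
  similarity_threshold: 0.7,
  limit: 5
});

// Fetch a single memory by ID (assumes the store result exposes the new memory's id)
const memory = await callTool('retrieve_memory', { id: stored.id });
```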
Batch Operations
store_multiple_memories
Store multiple memories in a single batch operation.
Parameters:
- `memories` (array, required) - Array of memory objects to store
update_multiple_memories
Update multiple memories in a single batch operation.
Parameters:
- `updates` (array, required) - Array of memory updates with ID and data
search_and_update
Search for memories and update them in one operation.
Parameters:
- `search` (object, required) - Search criteria
- `update` (object, required) - Update data to apply
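A short sketch of the batch tools in use; the per-entry shapes for `memories` and `updates` and the criteria object are assumptions based on the parameter descriptions above:

```javascript
// Batch store (entry shape follows the store_memory parameters)
await callTool('store_multiple_memories', {
  memories: [
    { content: 'API rate limit is 100 req/min', tags: ['api'] },
    { content: 'Staging DB resets nightly', tags: ['ops'] }
  ]
});

// Batch update (per-entry shape is assumed: an id plus the fields to change)
await callTool('update_multiple_memories', {
  updates: [
    { id: 'mem_123', data: { importance_score: 8 } },
    { id: 'mem_456', data: { tags: ['api', 'deprecated'] } }
  ]
});

// Search and update in one call
await callTool('search_and_update', {
  search: { tags: ['deprecated'] },
  update: { importance_score: 2 }
});
```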
Graph & Relationship Operations
traverse_memory_graph
Traverse the memory graph from a starting point to find connected memories.
Parameters:
- `start_memory_id` (string, required) - Starting memory ID
- `relationship_types` (array, optional) - Filter by relationship types
- `max_depth` (number, optional) - Maximum traversal depth (default: 3)
- `direction` (string, optional) - Direction: 'outgoing', 'incoming', or 'both'
- `min_strength` (number, optional) - Minimum relationship strength
find_memory_path
Find a path between two memories through relationships.
Parameters:
- `start_memory_id` (string, required) - Starting memory ID
- `end_memory_id` (string, required) - Target memory ID
- `relationship_types` (array, optional) - Filter by relationship types
- `max_depth` (number, optional) - Maximum path length (default: 5)
- `min_strength` (number, optional) - Minimum relationship strength
get_related_memories
Get memories related to a specific memory with various options.
Parameters:
- `memory_id` (string, required) - Memory ID to find related memories for
- `relationship_types` (array, optional) - Filter by relationship types
- `min_strength` (number, optional) - Minimum relationship strength
- `limit` (number, optional) - Maximum results (default: 10)
- `include_indirect` (boolean, optional) - Include indirectly related memories
- `max_hops` (number, optional) - Maximum hops for indirect relationships
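The graph tools above combine naturally; a brief sketch with placeholder memory IDs and illustrative threshold values:

```javascript
// Walk outward from one memory to see what it connects to
const graph = await callTool('traverse_memory_graph', {
  start_memory_id: 'mem_123',
  max_depth: 2,
  direction: 'both',
  min_strength: 0.5
});

// Find how two specific memories are linked
const path = await callTool('find_memory_path', {
  start_memory_id: 'mem_123',
  end_memory_id: 'mem_789',
  max_depth: 5
});

// List a memory's neighbours, including indirect ones up to two hops away
const related = await callTool('get_related_memories', {
  memory_id: 'mem_123',
  include_indirect: true,
  max_hops: 2,
  limit: 10
});
```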
🤖 Smart Auto-Features (v2.0.0)
suggest_relationships
Get intelligent relationship suggestions for a memory without automatically creating them.
Parameters:
- `memory_id` (string, required) - Memory ID to suggest relationships for
Returns: Array of potential relationships with confidence scores and suggested actions.
detect_duplicates
Detect potential duplicate memories with similarity analysis and merge strategies.
Parameters:
- `memory_id` (string, optional) - Specific memory to check for duplicates
Returns: Array of potential duplicates with similarity scores and merge suggestions.
suggest_tags
Get AI-powered tag suggestions based on content analysis and existing patterns.
Parameters:
- `content` (string, required) - Content to analyze for tag suggestions
- `existing_tags` (array, optional) - Existing tags to exclude from suggestions
Returns: Suggested tags with confidence scores and reasoning.
calculate_auto_importance
Calculate automatic importance score based on multiple factors.
Parameters:
- `memory_id` (string, required) - Memory ID to calculate importance for
Returns: Importance score with detailed factor analysis and reasoning.
improve_memory_quality
Quality Auto-Improvement Engine - Enhance memory quality using AI to boost quality scores from 27% to 60%+.
Parameters:
- `memory_id` (string, optional) - Specific memory ID to improve. If not provided, improves a batch of low-quality memories
- `batch_size` (number, optional) - Number of memories to process in a batch (default: 20)
- `target_quality_threshold` (number, optional) - Target quality threshold; memories above this score are skipped (default: 60)
- `improvement_types` (array, optional) - Types of improvements to apply: content_expansion, importance_recalculation, tag_enhancement, relationship_building
- `dry_run` (boolean, optional) - If true, only analyze and suggest improvements without applying them
Returns: Detailed improvement report with before/after quality scores, applied changes, and quality statistics.
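A hedged sketch of the auto-features in non-destructive mode; memory IDs and content are placeholders, and the dry run means nothing is changed:

```javascript
// Preview suggestions without changing anything
const relationships = await callTool('suggest_relationships', { memory_id: 'mem_123' });

const tags = await callTool('suggest_tags', {
  content: 'Migrated the checkout service to Cloudflare Workers',
  existing_tags: ['infrastructure']
});

// Dry-run the quality engine on a small batch of low-quality memories
const report = await callTool('improve_memory_quality', {
  batch_size: 10,
  target_quality_threshold: 60,
  improvement_types: ['content_expansion', 'tag_enhancement'],
  dry_run: true
});
```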
🧠 Intelligent Search & Collections (v2.0.0)
intelligent_search
Advanced search combining semantic, keyword, and graph traversal with query expansion.
Parameters:
- `query` (string, required) - Natural language search query
- `auto_expand` (boolean, optional) - Automatically expand query with synonyms
- `include_related` (number, optional) - Include related memories (number of hops)
- `context_aware` (boolean, optional) - Apply context-aware ranking
- `project_context` (string, optional) - Project context for ranking
Returns: Search results with metadata about methods used and query expansion.
create_collection
Create a memory collection with optional auto-include criteria.
Parameters:
- `name` (string, required) - Collection name
- `description` (string, optional) - Collection description
- `auto_include_criteria` (object, optional) - Criteria for auto-populating the collection
- `sharing_permissions` (object, optional) - Sharing and access permissions
project_onboarding
Smart workflow for automated project onboarding with knowledge extraction.
Parameters:
- `project_name` (string, required) - Name of the project
- `project_description` (string, optional) - Project description
- `technologies` (array, optional) - Technologies used in the project
- `team_members` (array, optional) - Team members
- `goals` (array, optional) - Project goals and objectives
Returns: Complete onboarding results with key concepts, relationship map, knowledge gaps, and documentation suggestions.
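A brief illustration of intelligent search plus a collection; the query, project name, and the shape of `auto_include_criteria` are assumptions:

```javascript
// Context-aware search with query expansion and one hop of related memories
const results = await callTool('intelligent_search', {
  query: 'how do we handle payment failures?',
  auto_expand: true,
  include_related: 1,
  context_aware: true,
  project_context: 'ecommerce-platform'
});

// Collection that auto-includes memories tagged 'payments' (criteria shape is assumed)
await callTool('create_collection', {
  name: 'Payments Knowledge',
  description: 'Everything related to the payment flow',
  auto_include_criteria: { tags: ['payments'] }
});
```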
⏰ Context-Aware & Temporal Intelligence (v2.2.0)
create_conversation_context
Create a new conversation context for tracking related memories.
Parameters:
- `context_name` (string, required) - Name for the conversation context
- `description` (string, optional) - Description of the context
- `metadata` (object, optional) - Additional context metadata
activate_memory_in_context
Activate a memory within a specific conversation context.
Parameters:
- `memory_id` (string, required) - Memory ID to activate
- `context_id` (string, required) - Context ID to activate memory in
- `activation_strength` (number, optional) - Strength of activation (0-1)
get_context_memories
Get all memories associated with a conversation context.
Parameters:
- `context_id` (string, required) - Context ID to get memories for
- `include_inactive` (boolean, optional) - Include inactive memories
- `sort_by_relevance` (boolean, optional) - Sort by temporal relevance
evolve_memory
Create a new version of a memory with evolution tracking.
Parameters:
- `memory_id` (string, required) - Original memory ID
- `new_content` (string, required) - Updated content
- `evolution_type` (string, required) - Type of evolution (refinement, expansion, correction)
- `evolution_summary` (string, optional) - Summary of changes
analyze_memory_decay
Analyze temporal decay patterns for memories.
Parameters:
- `memory_id` (string, optional) - Specific memory to analyze
- `time_range_days` (number, optional) - Time range for analysis (default: 30)
- `include_predictions` (boolean, optional) - Include future decay predictions
analyze_temporal_relationships
Analyze how relationships evolve over time.
Parameters:
- `relationship_id` (string, optional) - Specific relationship to analyze
- `memory_id` (string, optional) - Memory ID to analyze relationships for
- `time_range_days` (number, optional) - Time range in days (default: 30)
- `include_predictions` (boolean, optional) - Include future predictions
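A sketch of a temporal workflow, assuming the create call returns the new context's `id`; names and content are illustrative:

```javascript
// Create a conversation context and activate a memory inside it
const ctx = await callTool('create_conversation_context', {
  context_name: 'sprint-planning-2024-06'
});

await callTool('activate_memory_in_context', {
  memory_id: 'mem_123',
  context_id: ctx.id, // assumes the create call returns the new context's id
  activation_strength: 0.9
});

// Record a refinement of the memory as a new version
await callTool('evolve_memory', {
  memory_id: 'mem_123',
  new_content: 'Cart updates now use optimistic locking with retry',
  evolution_type: 'refinement',
  evolution_summary: 'Added retry behaviour'
});

// Analyze decay over the last 30 days, including predictions
await callTool('analyze_memory_decay', { time_range_days: 30, include_predictions: true });
```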
🤝 Multi-Agent Collaboration (v2.3.0)
register_agent
Register a new agent in the system for collaboration.
Parameters:
- `name` (string, required) - Agent name
- `type` (string, required) - Agent type: 'ai_agent', 'human_user', or 'system'
- `description` (string, optional) - Agent description
- `capabilities` (array, optional) - Agent capabilities
- `metadata` (object, optional) - Agent metadata
create_memory_space
Create a collaborative memory space for multi-agent sharing.
Parameters:
- `name` (string, required) - Memory space name
- `description` (string, optional) - Space description
- `owner_agent_id` (string, required) - Agent ID who owns this space
- `space_type` (string, optional) - Type: 'private', 'collaborative', or 'public'
- `access_policy` (string, optional) - Policy: 'open', 'invite_only', or 'restricted'
grant_space_permission
Grant permission to an agent for a memory space.
Parameters:
- `space_id` (string, required) - Memory space ID
- `agent_id` (string, required) - Agent ID to grant permission to
- `permission_level` (string, required) - Level: 'read', 'write', or 'admin'
- `granted_by` (string, required) - Agent ID granting the permission
add_memory_to_space
Add a memory to a collaborative space.
Parameters:
- `memory_id` (string, required) - Memory ID to add
- `space_id` (string, required) - Space ID to add memory to
- `added_by` (string, required) - Agent ID adding the memory
- `access_level` (string, optional) - Access level for this memory
get_agent_spaces
Get all memory spaces accessible to an agent.
Parameters:
- `agent_id` (string, required) - Agent ID to get spaces for
get_space_memories
Get all memories in a space (requires permission).
Parameters:
- `space_id` (string, required) - Space ID to get memories from
- `agent_id` (string, required) - Agent ID requesting access
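The Collaboration Setup workflow end to end, assuming the registration and space-creation calls return objects with an `id` field; all names and IDs are placeholders:

```javascript
// Register an agent, create a shared space, and grant a second agent read access
const agent = await callTool('register_agent', {
  name: 'planning-bot',
  type: 'ai_agent',
  capabilities: ['summarization']
});

const space = await callTool('create_memory_space', {
  name: 'Team Alpha',
  owner_agent_id: agent.id, // assumes registration returns the new agent's id
  space_type: 'collaborative',
  access_policy: 'invite_only'
});

await callTool('grant_space_permission', {
  space_id: space.id,
  agent_id: 'agent_reviewer',
  permission_level: 'read',
  granted_by: agent.id
});

await callTool('add_memory_to_space', {
  memory_id: 'mem_123',
  space_id: space.id,
  added_by: agent.id
});
```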
🔄 Memory Synchronization (v2.3.0)
sync_memory
Synchronize a memory with another instance.
Parameters:
- `memory_id` (string, required) - Memory ID to synchronize
- `target_instance` (string, required) - Target instance identifier
- `force_sync` (boolean, optional) - Force sync even if already synced
- `conflict_resolution` (string, optional) - Strategy: 'manual', 'auto_merge', 'source_wins', 'target_wins'
resolve_sync_conflict
Resolve a synchronization conflict.
Parameters:
- `conflict_id` (string, required) - Conflict ID to resolve
- `resolution_method` (string, required) - Resolution method
- `resolved_by` (string, required) - Agent ID resolving the conflict
- `resolved_version` (object, optional) - Manually resolved version
get_sync_status
Get synchronization status for a memory.
Parameters:
- `memory_id` (string, required) - Memory ID to check sync status for
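A hedged sketch of a sync round-trip; the instance name, agent ID, and the `conflict_id` field on the status payload are assumptions for illustration:

```javascript
// Push a memory to another instance, then resolve a conflict if one is reported
await callTool('sync_memory', {
  memory_id: 'mem_123',
  target_instance: 'staging-worker',
  conflict_resolution: 'auto_merge'
});

const status = await callTool('get_sync_status', { memory_id: 'mem_123' });

if (status.conflict_id) { // field name is an assumption about the status payload
  await callTool('resolve_sync_conflict', {
    conflict_id: status.conflict_id,
    resolution_method: 'source_wins',
    resolved_by: 'agent_admin'
  });
}
```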
📤 Export/Import Operations (v2.3.0)
create_export_job
Create an export job for memories.
Parameters:
- `format` (string, required) - Export format: 'json', 'xml', 'markdown', 'csv', 'graphml'
- `memory_ids` (array, optional) - Specific memory IDs to export
- `space_ids` (array, optional) - Memory space IDs to export
- `include_relationships` (boolean, optional) - Include memory relationships
- `include_metadata` (boolean, optional) - Include full metadata
- `initiated_by` (string, required) - Agent ID initiating export
get_export_job
Get export job status and download information.
Parameters:
- `job_id` (string, required) - Export job ID
create_import_job
Create an import job for memories.
Parameters:
- `format` (string, required) - Import format
- `file_content` (string, required) - File content to import
- `target_space_id` (string, optional) - Target space to import into
- `conflict_resolution` (string, optional) - How to handle existing memories
- `initiated_by` (string, optional) - Agent ID initiating import
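A sketch of an export/import round-trip; space and agent IDs are placeholders, and the accepted `conflict_resolution` values for imports are not documented here, so the one used is an assumption:

```javascript
// Export one space as GraphML, check the job, then import JSON back in
const exportJob = await callTool('create_export_job', {
  format: 'graphml',
  space_ids: ['space_team_alpha'],
  include_relationships: true,
  initiated_by: 'agent_admin'
});

await callTool('get_export_job', { job_id: exportJob.id });

const exportedJson = '{"memories": []}'; // stand-in for content produced by a prior JSON export
await callTool('create_import_job', {
  format: 'json',
  file_content: exportedJson,
  target_space_id: 'space_team_alpha',
  conflict_resolution: 'auto_merge', // accepted values for imports are not documented here
  initiated_by: 'agent_admin'
});
```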
📊 Analytics & Monitoring (v2.3.0)
⚡ KV Optimization & Monitoring (v2.8.0)
get_kv_usage_stats
Get current KV storage usage statistics and daily limits.
Returns: Current daily usage, remaining writes, usage percentage, and warnings.
get_kv_usage_trends
Get KV usage trends over the past week.
Returns: Daily usage trends with writes, reads, deletes, and total operations.
get_cache_optimization_recommendations
Get recommendations for optimizing KV cache usage.
Returns: Personalized optimization recommendations based on usage patterns.
migrate_analytics_to_d1
Migrate existing analytics data from KV to D1 database to reduce KV writes.
Returns: Migration results with migrated keys and any errors.
flush_cache_queue
Manually flush the optimized cache write queue to KV storage.
Returns: Cache statistics including in-memory entries and queue size.
track_memory_analytics
Track a memory analytics event.
Parameters:
- `memory_id` (string, required) - Memory ID
- `agent_id` (string, required) - Agent ID performing the action
- `action_type` (string, required) - Action: 'create', 'read', 'update', 'delete', 'search', 'relate'
- `session_id` (string, optional) - Session identifier
- `context_data` (object, optional) - Context data about the action
- `performance_metrics` (object, optional) - Performance metrics
get_memory_analytics
Get memory usage analytics.
Parameters:
- `memory_id` (string, optional) - Specific memory ID
- `agent_id` (string, optional) - Specific agent ID
- `action_type` (string, optional) - Specific action type
- `start_date` (string, optional) - Start date for analytics
- `end_date` (string, optional) - End date for analytics
- `limit` (number, optional) - Maximum number of results
get_collaboration_analytics
Get collaboration event analytics.
Parameters:
- `space_id` (string, optional) - Specific space ID
- `agent_id` (string, optional) - Specific agent ID
- `event_type` (string, optional) - Specific event type
- `start_date` (string, optional) - Start date for analytics
- `end_date` (string, optional) - End date for analytics
- `limit` (number, optional) - Maximum number of results
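An illustrative analytics round-trip; the metric field names, date format, and IDs are assumptions:

```javascript
// Record a read event, then query per-agent and per-space analytics
await callTool('track_memory_analytics', {
  memory_id: 'mem_123',
  agent_id: 'agent_planner',
  action_type: 'read',
  performance_metrics: { response_time_ms: 42 } // metric field names are assumptions
});

const usage = await callTool('get_memory_analytics', {
  agent_id: 'agent_planner',
  start_date: '2024-06-01',
  end_date: '2024-06-30',
  limit: 100
});

const collaboration = await callTool('get_collaboration_analytics', {
  space_id: 'space_team_alpha',
  limit: 50
});
```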
🧠 Memory Intelligence Engine (v2.7.0)
initialize_adaptive_thresholds
Initialize adaptive thresholds for automated learning optimization.
Parameters:
- `threshold_types` (array, optional) - Types of thresholds to initialize (quality, relevance, importance, relationship_strength)
- `baseline_values` (object, optional) - Optional baseline values for thresholds
Returns: Number of thresholds initialized and their current values.
create_learning_experiment
Create a new learning experiment for A/B testing and optimization.
Parameters:
- `experiment_name` (string, required) - Name of the experiment
- `experiment_type` (string, required) - Type of experiment (quality_improvement, relationship_discovery, tag_enhancement, content_expansion)
- `hypothesis` (string, required) - Hypothesis being tested
- `success_criteria` (object, required) - Success criteria for the experiment
- `control_group_size` (number, optional) - Size of control group (default: 100)
- `test_group_size` (number, optional) - Size of test group (default: 100)
- `confidence_threshold` (number, optional) - Statistical confidence threshold (default: 0.95)
- `created_by` (string, optional) - Creator of the experiment
Returns: Experiment ID and creation confirmation.
run_ab_test
Run A/B test for a specific learning experiment.
Parameters:
- `experiment_id` (string, required) - ID of the experiment to run
- `memory_ids` (array, required) - Memory IDs to include in the test
- `test_strategy` (string, optional) - Strategy for splitting test groups (random_split, importance_based, content_length_based)
Returns: Control and test group assignments with group sizes.
analyze_experiment_results
Analyze results from a learning experiment and make threshold adjustments.
Parameters:
- `experiment_id` (string, required) - ID of the experiment to analyze
- `include_recommendations` (boolean, optional) - Include optimization recommendations (default: true)
Returns: Number of adjustments made and optimization recommendations.
run_improvement_cycle
Run a complete self-improvement cycle with automated optimizations.
Parameters:
- `cycle_type` (string, optional) - Type of improvement cycle (full, quality_focused, relationship_focused, performance_focused)
- `max_improvements` (number, optional) - Maximum number of improvements to apply (default: 5)
Returns: Number of improvements applied, performance gain percentage, and next cycle scheduling.
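A sketch of one experiment lifecycle from threshold setup through an improvement cycle; the experiment name, hypothesis, memory IDs, and the shape of `success_criteria` are assumptions:

```javascript
// One experiment lifecycle: thresholds → experiment → A/B test → analysis → improvement cycle
await callTool('initialize_adaptive_thresholds', {
  threshold_types: ['quality', 'relevance']
});

const experiment = await callTool('create_learning_experiment', {
  experiment_name: 'longer-content-expansion',
  experiment_type: 'content_expansion',
  hypothesis: 'Expanding short memories raises average quality above 60',
  success_criteria: { min_quality_gain: 10 }, // criteria shape is an assumption
  confidence_threshold: 0.95
});

await callTool('run_ab_test', {
  experiment_id: experiment.id,
  memory_ids: ['mem_1', 'mem_2', 'mem_3', 'mem_4'],
  test_strategy: 'random_split'
});

await callTool('analyze_experiment_results', {
  experiment_id: experiment.id,
  include_recommendations: true
});

await callTool('run_improvement_cycle', { cycle_type: 'quality_focused', max_improvements: 3 });
```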
🎯 Cloudflare Vectorize Integration (v2.8.1) - Paid Tier
The Vectorize integration provides lightning-fast semantic search using Cloudflare's dedicated vector database. This paid tier enhancement offers superior performance compared to D1-based embeddings with 50M queries/month and 10M stored dimensions/month.
Setup Instructions
For paid tier users, enable Vectorize with:
# Setup Vectorize index and configuration
npm run setup-vectorize
# Deploy with Vectorize enabled
npm run setup-paid-tier
This creates the `cf-memory-embeddings` Vectorize index with 768 dimensions (BGE-base-en-v1.5 compatible) and the cosine similarity metric.
Hybrid D1+Vectorize Architecture
The system uses a hybrid approach combining both databases:
- D1 Database: Stores all memory metadata, content, tags, relationships, and serves as fallback for semantic search
- Vectorize: Stores only vector embeddings for ultra-fast semantic similarity search
- Hybrid Search Flow: Vectorize finds similar vectors → D1 enriches with full memory data → ranked results returned
- Fallback Mechanism: If Vectorize fails, system automatically uses D1-based semantic search
- Data Consistency: Both databases stay synchronized when memories are created/updated/deleted
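The flow above can be sketched as Worker-side pseudocode. This is not the actual implementation; the `VECTORIZE_INDEX` and `DB` binding names, the `memories` table, and the `d1SemanticSearch` fallback helper are assumptions for illustration only:

```javascript
// Simplified hybrid search inside the Worker (binding and table names are assumptions)
async function hybridSemanticSearch(env, queryVector, limit = 10) {
  try {
    // 1. Nearest-neighbour lookup in the dedicated Vectorize index
    const { matches } = await env.VECTORIZE_INDEX.query(queryVector, { topK: limit });

    // 2. Enrich each vector match with the full memory row from D1
    const ids = matches.map((m) => m.id);
    if (ids.length === 0) return [];
    const placeholders = ids.map(() => '?').join(',');
    const { results } = await env.DB.prepare(
      `SELECT * FROM memories WHERE id IN (${placeholders})`
    ).bind(...ids).all();

    // 3. Preserve the Vectorize similarity ranking when returning rows
    const byId = new Map(results.map((row) => [row.id, row]));
    return ids.map((id) => byId.get(id)).filter(Boolean);
  } catch (err) {
    // 4. Fallback: D1-based semantic search when Vectorize is unavailable
    return d1SemanticSearch(env, queryVector, limit); // assumed fallback helper
  }
}
```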
vectorize_semantic_search
Perform advanced semantic search using Cloudflare Vectorize for superior speed and accuracy.
Parameters:
- `query` (string, required) - Search query for semantic similarity
- `limit` (number, optional) - Maximum number of results (default: 10)
- `filter` (object, optional) - Metadata filters to apply
- `return_vectors` (boolean, optional) - Include vector data in results (default: false)
Returns: Array of search results with similarity scores, metadata, and optional vector data.
vectorize_find_similar
Find memories similar to a specific memory using vector similarity.
Parameters:
- `memory_id` (string, required) - Memory ID to find similar memories for
- `limit` (number, optional) - Maximum number of results (default: 10)
- `similarity_threshold` (number, optional) - Minimum similarity score (default: 0.7)
- `exclude_self` (boolean, optional) - Exclude the source memory from results (default: true)
Returns: Array of similar memories with similarity scores and metadata.
vectorize_cluster_memories
Perform AI-powered clustering analysis using vector similarity to group related memories.
Parameters:
- `memory_ids` (array, required) - Array of memory IDs to cluster
- `cluster_count` (number, optional) - Number of clusters to create (default: 5)
Returns: Array of clusters with cluster IDs, memory IDs in each cluster, and centroid similarity scores.
vectorize_index_stats
Get statistics and information about the Vectorize index.
Returns: Index statistics including dimensions, vector count, and configuration details.
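A short illustration of the Vectorize tools chained together; the query text, filterable metadata fields, and the assumption that search results are objects with an `id` field are all illustrative:

```javascript
// Fast semantic lookup, then similarity and clustering analysis
const hits = await callTool('vectorize_semantic_search', {
  query: 'error handling strategy for payment webhooks',
  limit: 5,
  filter: { project: 'ecommerce-platform' } // filterable metadata fields are assumptions
});

const similar = await callTool('vectorize_find_similar', {
  memory_id: 'mem_123',
  similarity_threshold: 0.75
});

// Cluster the memories returned above (assumes results are objects with an id field)
const clusters = await callTool('vectorize_cluster_memories', {
  memory_ids: hits.map((hit) => hit.id),
  cluster_count: 3
});

const stats = await callTool('vectorize_index_stats', {});
```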
Paid Tier Benefits
- 50M Queries/Month: Massive query capacity for high-volume applications
- 10M Stored Dimensions/Month: Store millions of memory vectors
- 33x More KV Writes: Increased from 1,000 to 33,333 daily KV operations
- 10x Larger Batches: Process up to 500 memories per batch operation
- 6x Faster Learning: Learning cycles run every 5 minutes instead of 30 minutes
- 50-70% Performance Boost: Significantly faster response times through optimized caching
🌐 Architecture
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────────┐
│   MCP Client    │    │  cf-memory-mcp   │    │  Cloudflare Worker  │
│  (Augment,      │◄──►│  (npm package)   │◄──►│  (Production API)   │
│   Claude, etc.) │    │                  │    │                     │
└─────────────────┘    └──────────────────┘    └─────────────────────┘
                                                          │
                                                          ▼
                                               ┌─────────────────────┐
                                               │  Cloudflare D1 DB   │
                                               │  + KV Storage       │
                                               │  + Vectorize (Paid) │
                                               │  + AI Workers       │
                                               └─────────────────────┘
Hybrid D1+Vectorize Architecture
The system uses a sophisticated hybrid approach:
- D1 Database: Primary storage for all memory content, metadata, relationships, and tags
- Vectorize: High-performance vector similarity search with 50M queries/month capacity
- Hybrid Search: Vectorize finds similar vectors → D1 enriches with full memory data
- Fallback System: Automatic fallback to D1-based search if Vectorize is unavailable
- Data Sync: Both databases stay synchronized for all memory operations
📖 Detailed Architecture Documentation - Complete technical overview with diagrams, data flows, and performance characteristics.
🔧 Command Line Options
# Start the MCP server
npx cf-memory-mcp
# Show version
npx cf-memory-mcp --version
# Show help
npx cf-memory-mcp --help
# Enable debug logging
DEBUG=1 npx cf-memory-mcp
🌍 Environment Variables
- `DEBUG=1` - Enable debug logging
- `MCP_DEBUG=1` - Enable MCP-specific debug logging
📋 Requirements
- Node.js 16.0.0 or higher
- Internet connection (connects to Cloudflare Worker)
- MCP client (Augment, Claude Desktop, etc.)
🚀 Why CF Memory MCP?
Traditional Approach ❌
- Clone repository
- Set up local database
- Configure environment variables
- Manage local server process
- Handle updates manually
CF Memory MCP ✅
- Run `npx cf-memory-mcp`
- That's it! 🎉
🔒 Privacy & Security
- No local data storage - All data stored securely in Cloudflare D1
- HTTPS encryption - All communication encrypted in transit
- Edge deployment - Data replicated globally for reliability
- No API keys required - Public read/write access for simplicity
🤝 Contributing
Contributions are welcome! Please see the GitHub repository for more information.
📄 License
MIT License - see LICENSE file for details.
🔗 Links
- GitHub Repository: https://github.com/johnlam90/cf-memory-mcp
- npm Package: https://www.npmjs.com/package/cf-memory-mcp
- Issues: https://github.com/johnlam90/cf-memory-mcp/issues
- MCP Specification: https://modelcontextprotocol.io/
Made with ❤️ by John Lam