MCP Server Setup — All Editors & Clients¶
Copy-paste the config for your editor and you're done. No `nmem init` needed — the server auto-initializes on first use.
Table of Contents¶
- Requirements
- Claude Code (Plugin)
- Claude Code (Manual MCP)
- Cursor
- Windsurf (Codeium)
- VS Code
- Claude Desktop
- Cline
- Zed
- Google Antigravity
- JetBrains IDEs
- Gemini CLI
- Amazon Q Developer
- Neovim
- Warp Terminal
- Custom / Other MCP Clients
- Alternative: Python Module
- Alternative: Docker
- Environment Variables
- Resource Usage
- Available Tools
- Resources
- Agent Instructions
- Troubleshooting
Requirements¶
- Python 3.11+
- pip or uv package manager
Note: If using `uvx` (recommended for Claude Code), you don't need to install manually — `uvx` handles it automatically.
Claude Code (Plugin — Recommended)¶
The easiest way. One command installs everything:
```
/plugin marketplace add nhadaututtheky/neural-memory
/plugin install neural-memory@neural-memory-marketplace
```
This auto-configures the MCP server, skills, commands, agent, and hooks.
Done. No further setup needed.
Claude Code (CLI — Recommended for Manual Setup)¶
The official way to add MCP servers to Claude Code:
```shell
# Global (all projects):
claude mcp add --scope user neural-memory -- nmem-mcp

# Or with uvx (no pip install needed):
claude mcp add --scope user neural-memory -- uvx --from neural-memory nmem-mcp

# Project-only:
claude mcp add neural-memory -- nmem-mcp
```
Alternatively, add to your project's .mcp.json manually:
```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"]
    }
  }
}
```
Or if you installed via pip (no uvx):
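A minimal sketch for the pip case, assuming `nmem-mcp` is on your PATH after `pip install neural-memory` (the same entry point the CLI commands above invoke directly):

```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "nmem-mcp",
      "args": []
    }
  }
}
```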
Note: Do NOT add MCP servers to `~/.claude/settings.json` or `~/.claude/mcp_servers.json` — Claude Code does not read MCP config from those files. Use `claude mcp add` or `.mcp.json`.
Cursor¶
Add to ~/.cursor/mcp.json (global) or .cursor/mcp.json (project):
With uvx (no pip install needed):
```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"]
    }
  }
}
```
Restart Cursor after adding the config.
Windsurf (Codeium)¶
Add to ~/.codeium/windsurf/mcp_config.json:
With uvx:
```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"]
    }
  }
}
```
Restart Windsurf after adding.
VS Code¶
With Continue Extension¶
Add to ~/.continue/config.json under mcpServers:
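A sketch assuming Continue accepts an array of server entries (the shape suggested by the Quick Reference table at the bottom of this page); check the Continue docs for the exact key layout in your version:

```json
{
  "mcpServers": [
    {
      "name": "neural-memory",
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"]
    }
  ]
}
```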
With Copilot Chat (MCP support)¶
Add to VS Code settings.json:
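A sketch assuming the `mcp.servers` settings key shown in the Quick Reference table; VS Code's MCP settings schema may differ between releases, so verify against your version:

```json
{
  "mcp": {
    "servers": {
      "neural-memory": {
        "command": "uvx",
        "args": ["--from", "neural-memory", "nmem-mcp"]
      }
    }
  }
}
```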
VS Code Extension (GUI)¶
For a graphical experience, install the NeuralMemory VS Code Extension from the marketplace.
Claude Desktop¶
Add to claude_desktop_config.json:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
With uvx:
```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"]
    }
  }
}
```
Windows — full path (if nmem-mcp not in PATH):
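A sketch using the same path pattern as the Windows example in the "Alternative: Python Module" section; `YOU` and `Python312` are placeholders to adjust for your user name and Python version:

```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "C:\\Users\\YOU\\AppData\\Local\\Programs\\Python\\Python312\\Scripts\\nmem-mcp.exe",
      "args": []
    }
  }
}
```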
Restart Claude Desktop after adding.
Cline¶
Add to Cline MCP settings (cline_mcp_settings.json in your VS Code workspace):
With uvx:
```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"],
      "disabled": false
    }
  }
}
```
Zed¶
Add to Zed settings.json (~/.config/zed/settings.json):
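A sketch assuming the `language_models.mcp_servers` key layout listed in the Quick Reference table; Zed's settings schema changes between releases, so confirm the key names against your Zed version's documentation:

```json
{
  "language_models": {
    "mcp_servers": {
      "neural-memory": {
        "command": "uvx",
        "args": ["--from", "neural-memory", "nmem-mcp"]
      }
    }
  }
}
```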
Google Antigravity¶
Google's AI-powered editor with built-in MCP Store.
Option 1: MCP Store (GUI)¶
- Open the MCP Store via the `...` dropdown at the top of the editor's agent panel
- Browse & install servers directly
- Authenticate if prompted
Option 2: Custom Config (for NeuralMemory)¶
- Open MCP Store → click "Manage MCP Servers"
- Click "View raw config"
- Add NeuralMemory to `mcp_config.json`:
With uvx:
```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"]
    }
  }
}
```
- Save and restart the editor.
Tip: Antigravity also supports connecting to NeuralMemory's FastAPI server mode. Run `nmem serve` and connect via HTTP if you prefer server-side integration.
JetBrains IDEs (IntelliJ, PyCharm, WebStorm)¶
JetBrains IDEs support MCP via the built-in AI Assistant or the JetBrains AI plugin.
Go to Settings → Tools → AI Assistant → MCP Servers → Add, or edit the config file directly:
- Location: `.idea/mcpServers.json` (project) or global settings
With uvx:
```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"]
    }
  }
}
```
Restart the IDE after adding.
Gemini CLI¶
Add to ~/.gemini/settings.json:
With uvx:
```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"]
    }
  }
}
```
Amazon Q Developer¶
Add to ~/.aws/amazonq/mcp.json:
With uvx:
```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"]
    }
  }
}
```
Neovim¶
With mcp-hub.nvim or similar MCP plugin, add to your mcpservers.json:
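A sketch of the JSON entry, assuming the plugin reads the standard `mcpServers` shape used elsewhere in this guide (the exact file name and location are plugin-dependent):

```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"]
    }
  }
}
```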
Or configure in Lua:
Warp Terminal¶
Add to Warp's MCP config (~/.warp/mcp.json):
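A sketch assuming Warp accepts the standard `mcpServers` shape used by the other editors in this guide:

```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"]
    }
  }
}
```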
Custom / Other MCP Clients¶
NeuralMemory uses stdio transport (JSON-RPC 2.0 over stdin/stdout). Any MCP-compatible client can connect:
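A minimal stdio entry, following the same name/transport/command shape as the explicit-Python variant below and assuming `nmem-mcp` is on PATH:

```json
{
  "name": "neural-memory",
  "transport": "stdio",
  "command": "nmem-mcp",
  "args": []
}
```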
Or with explicit Python:
```json
{
  "name": "neural-memory",
  "transport": "stdio",
  "command": "python",
  "args": ["-m", "neural_memory.mcp"]
}
```
Alternative: Python Module Directly¶
If nmem-mcp is not in your PATH, use the Python module:
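The generic form, using the same module invocation shown in the Windows example below:

```json
{
  "neural-memory": {
    "command": "python",
    "args": ["-m", "neural_memory.mcp"]
  }
}
```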
macOS/Linux with specific Python:
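A sketch with an example interpreter path; `/usr/local/bin/python3.12` is a placeholder, so substitute the output of `which python3.12` (or similar) on your machine:

```json
{
  "neural-memory": {
    "command": "/usr/local/bin/python3.12",
    "args": ["-m", "neural_memory.mcp"]
  }
}
```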
Windows with full path:
```json
{
  "neural-memory": {
    "command": "C:\\Users\\YOU\\AppData\\Local\\Programs\\Python\\Python312\\python.exe",
    "args": ["-m", "neural_memory.mcp"]
  }
}
```
Alternative: Docker¶
```shell
docker run -i --rm -v neuralmemory:/root/.neuralmemory ghcr.io/nhadaututtheky/neural-memory:latest nmem-mcp
```
```json
{
  "neural-memory": {
    "command": "docker",
    "args": [
      "run", "-i", "--rm",
      "-v", "neuralmemory:/root/.neuralmemory",
      "ghcr.io/nhadaututtheky/neural-memory:latest",
      "nmem-mcp"
    ]
  }
}
```
Environment Variables¶
| Variable | Default | Description |
|---|---|---|
| `NEURALMEMORY_BRAIN` | `"default"` | Brain name to use |
| `NEURALMEMORY_DATA_DIR` | `~/.neuralmemory` | Data directory |
| `NEURAL_MEMORY_DEBUG` | `0` | Enable debug logging (`1` to enable) |
| `MEM0_API_KEY` | — | Mem0 API key (for import) |
| `COGNEE_API_KEY` | — | Cognee API key (for import) |
Example with custom brain:
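A sketch that selects a non-default brain via the `NEURALMEMORY_BRAIN` variable from the table above; the brain name `work` is a hypothetical example:

```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"],
      "env": {
        "NEURALMEMORY_BRAIN": "work"
      }
    }
  }
}
```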
Resource Usage¶
| Metric | Value |
|---|---|
| RAM (idle) | ~12-15 MB |
| RAM (active, small brain) | ~30-35 MB |
| RAM (active, large brain) | ~55-60 MB |
| CPU | Near 0% when idle |
| Disk | ~1-50 MB per brain (SQLite) |
| Startup time | < 2 seconds |
NeuralMemory is lightweight — it won't slow down your editor.
Available Tools¶
Once configured, these 38 tools are available to your AI assistant:
Core Memory¶
| Tool | Description |
|---|---|
| `nmem_remember` | Store a memory (fact, decision, insight, todo, error, etc.) |
| `nmem_recall` | Query with spreading activation (depth 0-3) |
| `nmem_context` | Inject recent context at session start |
| `nmem_todo` | Quick TODO with 30-day expiry |
| `nmem_auto` | Auto-capture memories from conversation text |
| `nmem_suggest` | Autocomplete suggestions from brain neurons |
| `nmem_edit` | Edit memory type, content, or priority by fiber ID |
| `nmem_forget` | Soft delete (set expiry) or hard delete (permanent removal) |
Brain Management¶
| Tool | Description |
|---|---|
| `nmem_stats` | Brain statistics and freshness |
| `nmem_health` | Brain health diagnostics (purity score, grade, top penalties) |
| `nmem_evolution` | Brain evolution metrics (maturation, plasticity) |
| `nmem_version` | Brain version control (snapshot, rollback, diff) |
| `nmem_transplant` | Copy memories between brains |
| `nmem_conflicts` | View and resolve memory conflicts |
| `nmem_alerts` | Brain health alerts lifecycle |
Session & Context¶
| Tool | Description |
|---|---|
| `nmem_session` | Track working session state and progress |
| `nmem_eternal` | Save project context, decisions, instructions |
| `nmem_recap` | Load saved context at session start |
Learning & Training¶
| Tool | Description |
|---|---|
| `nmem_index` | Index codebase for code-aware recall |
| `nmem_train` | Train brain from documentation files |
| `nmem_train_db` | Train brain from database schema |
| `nmem_habits` | Manage learned workflow habits |
| `nmem_review` | Spaced repetition review system |
| `nmem_pin` | Pin/unpin memories (pinned = permanent, skip decay/prune) |
Cognitive Reasoning¶
| Tool | Description |
|---|---|
| `nmem_hypothesize` | Create and manage hypotheses with Bayesian confidence tracking |
| `nmem_evidence` | Submit evidence for/against — auto-updates confidence |
| `nmem_predict` | Falsifiable predictions with deadlines, linked to hypotheses |
| `nmem_verify` | Verify predictions correct/wrong — propagates to hypotheses |
| `nmem_cognitive` | Hot index: ranked summary of active hypotheses + predictions |
| `nmem_gaps` | Knowledge gap metacognition: detect, track, resolve |
| `nmem_schema` | Schema evolution: evolve hypotheses via SUPERSEDES chain |
| `nmem_explain` | Trace shortest path between two concepts |
Utilities¶
| Tool | Description |
|---|---|
| `nmem_import` | Import from ChromaDB, Mem0, Cognee, Graphiti, LlamaIndex |
| `nmem_narrative` | Generate timeline/topic/causal narratives |
| `nmem_telegram_backup` | Send brain `.db` backup to Telegram |
Sync (Multi-Device)¶
| Tool | Description |
|---|---|
| `nmem_sync` | Trigger manual sync (push/pull/full) |
| `nmem_sync_status` | Show pending changes, devices, last sync |
| `nmem_sync_config` | Configure hub URL, auto-sync, conflict strategy |
Tool Tiers¶
By default all 38 tools are exposed on every API turn. If you want to reduce token overhead, configure a tool tier in ~/.neuralmemory/config.toml:
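A sketch of the TOML entry; the section and key names here are assumptions (not confirmed by this page), so verify with `nmem config tier --show` before relying on them:

```toml
# ~/.neuralmemory/config.toml
# NOTE: section/key names are assumed for illustration.
[mcp]
tool_tier = "standard"  # one of: full | standard | minimal
```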
Or via CLI:
```shell
nmem config tier --show      # show current tier
nmem config tier standard    # set to standard
nmem config tier full        # reset to full
```
| Tier | Tools | Est. Tokens | Savings |
|---|---|---|---|
| `full` (default) | 38 | ~3,800 | — |
| `standard` | 8 | ~1,400 | ~63% |
| `minimal` | 4 | ~700 | ~82% |
Tier contents:
- minimal — `remember`, `recall`, `context`, `recap`
- standard — minimal + `todo`, `session`, `auto`, `eternal`
- full — all 38 tools

Hidden tools remain callable — only the schema listing changes. If the AI model already knows a tool name, it can still call it even when the tool is not exposed in `tools/list`.
Resources¶
The MCP server provides resources for system prompts:
| Resource URI | Description |
|---|---|
| `neuralmemory://prompt/system` | Full system prompt for AI assistants |
| `neuralmemory://prompt/compact` | Compact version for token-limited contexts |
Get MCP Config via CLI¶
View System Prompt via CLI¶
Agent Instructions¶
Copy these instructions into your project's CLAUDE.md (for Claude Code) or .cursorrules (for Cursor) to teach your AI assistant how to use NeuralMemory proactively.
For Claude Code¶
See docs/agent-instructions/CLAUDE.md for the full template.
For Cursor¶
See docs/agent-instructions/.cursorrules for the full template.
Quick Version (any editor)¶
```markdown
## Memory System — NeuralMemory

This workspace uses NeuralMemory for persistent memory.
Use nmem_* MCP tools PROACTIVELY.

### Session Start (ALWAYS)
1. nmem_recap() — Resume context
2. nmem_context(limit=20, fresh_only=true) — Recent memories
3. nmem_session(action="get") — Current task

### Auto-Remember
- Decision made → nmem_remember(content="...", type="decision", priority=7)
- Bug fixed → nmem_remember(content="...", type="error", priority=7)
- TODO found → nmem_todo(task="...", priority=6)

### Auto-Recall
Before asking user → nmem_recall(query="<topic>", depth=1)

### Session End
nmem_auto(action="process", text="<session summary>")
nmem_session(action="set", feature="...", progress=0.8)
```
Troubleshooting¶
"nmem-mcp" not found¶
```shell
# Check if installed
pip show neural-memory

# Check if nmem-mcp is in PATH
which nmem-mcp   # macOS/Linux
where nmem-mcp   # Windows

# If not found, use the Python module instead
python -m neural_memory.mcp
```
Tools not appearing in editor¶
- Verify the MCP config file path is correct for your editor
- Restart the editor completely
- Check editor logs for MCP connection errors
- Test manually: `echo '{"jsonrpc":"2.0","method":"tools/list","id":1}' | nmem-mcp`
Python version mismatch¶
```shell
# NeuralMemory requires Python 3.11+
python --version

# If you have multiple Python versions, specify the full path
```
Windows: encoding errors¶
NeuralMemory handles Windows stdio encoding automatically. If you still see encoding issues:
```json
{
  "neural-memory": {
    "command": "python",
    "args": ["-m", "neural_memory.mcp"],
    "env": {
      "PYTHONIOENCODING": "utf-8"
    }
  }
}
```
Permission denied (macOS/Linux)¶
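A hedged sketch of one common fix, assuming the error comes from a non-executable entry-point script (other causes, such as directory permissions on the data dir, need different fixes):

```shell
# Make the nmem-mcp entry point executable, wherever it resolves on PATH
chmod +x "$(which nmem-mcp)"
```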
uvx not found¶
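`uvx` ships with the `uv` package manager; either install route below should provide it:

```shell
# Install uv (provides uvx) via pip
pip install uv

# Or use the standalone installer
curl -LsSf https://astral.sh/uv/install.sh | sh
```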
Debug mode¶
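Enable debug logging with the `NEURAL_MEMORY_DEBUG` variable from the Environment Variables table, for example via the server's `env` block:

```json
{
  "neural-memory": {
    "command": "nmem-mcp",
    "env": {
      "NEURAL_MEMORY_DEBUG": "1"
    }
  }
}
```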
Reset to fresh state¶
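A sketch assuming the default data directory (`~/.neuralmemory`, per the Environment Variables table); this permanently deletes all brains, so back up first:

```shell
# WARNING: destroys all stored memories. Back up the directory first if unsure.
rm -rf ~/.neuralmemory
```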
Quick Reference¶
| Editor | Config File | Config Format |
|---|---|---|
| Claude Code | `claude mcp add` or `.mcp.json` | `{ "mcpServers": { ... } }` |
| Cursor | `~/.cursor/mcp.json` | `{ "mcpServers": { ... } }` |
| Windsurf | `~/.codeium/windsurf/mcp_config.json` | `{ "mcpServers": { ... } }` |
| Claude Desktop | See path above | `{ "mcpServers": { ... } }` |
| VS Code (Continue) | `~/.continue/config.json` | `{ "mcpServers": [ ... ] }` |
| VS Code (Copilot) | VS Code `settings.json` | `{ "mcp": { "servers": { ... } } }` |
| Cline | `cline_mcp_settings.json` | `{ "mcpServers": { ... } }` |
| Zed | `~/.config/zed/settings.json` | `{ "language_models": { "mcp_servers": { ... } } }` |
| Antigravity | `mcp_config.json` (via MCP Store) | `{ "mcpServers": { ... } }` |
| JetBrains | `.idea/mcpServers.json` | `{ "mcpServers": { ... } }` |
| Gemini CLI | `~/.gemini/settings.json` | `{ "mcpServers": { ... } }` |
| Amazon Q | `~/.aws/amazonq/mcp.json` | `{ "mcpServers": { ... } }` |
| Neovim | `mcpservers.json` (plugin-dependent) | `{ "mcpServers": { ... } }` |
| Warp | `~/.warp/mcp.json` | `{ "mcpServers": { ... } }` |
Minimum config for any editor:
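The same `uvx` entry used throughout this guide works almost everywhere:

```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "uvx",
      "args": ["--from", "neural-memory", "nmem-mcp"]
    }
  }
}
```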
That's it. Copy, paste, restart. Done.