Spreading Activation¶
The core retrieval mechanism in NeuralMemory.
What is Spreading Activation?¶
Spreading activation is a cognitive model of how memory retrieval works in the human brain. When you think of a concept, related concepts become more "active" and accessible.
Example: Thinking of "apple" activates:

- Fruit (category)
- Red/green (colors)
- Tree (where it grows)
- Pie (food made with it)
- iPhone (brand association)
NeuralMemory implements this computationally:
```
Query: "apple"
   │
   ▼ activate
[apple] ─────► [fruit] ─────► [banana]
   │              │
   │              ▼
   └────► [red] ────► [strawberry]
```
How It Works¶
NeuralMemory supports two activation modes: Classic (distance-based decay) and Reflex (trail-based decay through fiber pathways). Reflex mode is the default in v0.6.0+.
1. Anchor Selection (Time-First)¶
The query is parsed to identify anchor neurons, with time neurons as primary anchors:
Anchors identified (priority order):
- last Tuesday (time, weight 1.0)
- Alice (entity, weight 0.8)
- suggest (action, weight 0.6)
- auth (concept, weight 0.4)
Time-first anchoring constrains the search to temporally relevant memories before expanding to entities and concepts.
2. Initial Activation¶
Anchor neurons receive weighted initial activation, seeded with the weights assigned during anchor selection.
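A minimal sketch of this step, assuming anchors arrive as `(neuron_id, weight)` pairs from the parser (the function name and data shapes are illustrative, not the actual NeuralMemory API):

```python
# Illustrative sketch of initial activation (not the actual NeuralMemory API).
def initial_activation(anchors):
    """anchors: list of (neuron_id, weight) pairs from query parsing."""
    activation = {}
    for neuron_id, weight in anchors:
        # If the same neuron is anchored twice, keep the stronger signal.
        activation[neuron_id] = max(activation.get(neuron_id, 0.0), weight)
    return activation

# Weights follow the priority order above: time > entity > action > concept.
anchors = [("last-tuesday", 1.0), ("alice", 0.8), ("suggest", 0.6), ("auth", 0.4)]
print(initial_activation(anchors))
```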
3. Activation Spread¶
Classic Mode (distance-based)¶
Activation spreads through synapses with hop-based decay:
Example with `decay_factor = 0.8`:

```
Hop 0: [Alice]              = 1.0
Hop 1: [Meeting with Alice] = 1.0  * 0.9 * 0.8 = 0.72
Hop 2: [JWT suggestion]     = 0.72 * 0.8 * 0.8 = 0.46
Hop 3: [Auth module]        = 0.46 * 0.7 * 0.8 = 0.26
```
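The arithmetic above can be sketched as a breadth-first spread. This is a simplified model, assuming a graph of weighted synapses; the real pipeline also applies the activation threshold and other pruning:

```python
from collections import deque

def classic_spread(graph, source, decay_factor=0.8, max_hops=3):
    """graph: {neuron: [(neighbor, synapse_weight), ...]}.
    Each hop multiplies by the synapse weight and by decay_factor."""
    activation = {source: 1.0}
    queue = deque([(source, 0)])
    while queue:
        node, hop = queue.popleft()
        if hop >= max_hops:
            continue
        for neighbor, weight in graph.get(node, []):
            spread = activation[node] * weight * decay_factor
            # Keep the strongest activation reaching each neuron.
            if spread > activation.get(neighbor, 0.0):
                activation[neighbor] = spread
                queue.append((neighbor, hop + 1))
    return activation

# The chain from the example: Alice → Meeting (0.9) → JWT (0.8) → Auth (0.7)
graph = {
    "Alice": [("Meeting with Alice", 0.9)],
    "Meeting with Alice": [("JWT suggestion", 0.8)],
    "JWT suggestion": [("Auth module", 0.7)],
}
print(classic_spread(graph, "Alice"))  # Meeting ≈ 0.72, JWT ≈ 0.46, Auth ≈ 0.26
```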
Reflex Mode (trail-based)¶
Activation spreads along fiber pathways with trail decay, conductivity, and time factors:
Fibers act as signal pathways - ordered sequences of neurons that have been co-activated before. Signals travel along these established trails:
```
Fiber pathway: [Tuesday] → [Alice] → [Meeting] → [JWT] → [Auth]
                   │          │          │         │        │
Activation:       1.0        0.76       0.55      0.38     0.25
                              ↑
               conductivity=0.95, time_factor=0.98
```
Key differences from classic mode:
- Conductivity: Frequently-used fibers conduct signals better (0.0-1.0)
- Time factor: Recently-conducted fibers transmit stronger signals
- Pathway order: Signals follow established neuron sequences, not arbitrary BFS
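A sketch of trail conduction under these rules. The per-step `trail_decay` constant is an assumption chosen for illustration; conductivity and time factor enter as multiplicative gains, so the exact numbers differ from the diagram above:

```python
def conduct(fiber, conductivity, time_factor, trail_decay=0.8):
    """fiber: ordered list of neuron ids forming an established pathway.
    Each step attenuates the signal by trail_decay * conductivity * time_factor.
    trail_decay is an illustrative constant, not a documented value."""
    signal = 1.0
    activation = {fiber[0]: signal}
    for neuron in fiber[1:]:
        signal *= trail_decay * conductivity * time_factor
        activation[neuron] = round(signal, 2)
    return activation

fiber = ["Tuesday", "Alice", "Meeting", "JWT", "Auth"]
print(conduct(fiber, conductivity=0.95, time_factor=0.98))
```

Because the signal follows the fiber's stored order, neurons later in the trail always receive less activation than earlier ones, unlike BFS where order depends on graph shape.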
4. Co-Activation (Hebbian Binding)¶
When multiple anchor sets activate the same neuron, it receives a co-activation boost based on the Hebbian principle: "Neurons that fire together wire together."
```
Tuesday anchor       Alice anchor        auth anchor
      │                   │                   │
      ▼                   ▼                   ▼
[Meeting notes] ◄─── [JWT suggestion] ───► [Auth module]
                          │
                 co-activated by 3/3 anchors
                 binding_strength = 1.0
```
Co-activation detection finds neurons activated by multiple independent anchor sets and boosts their scores.
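One way to sketch detection and boosting, assuming each anchor set produces its own activation map (the binding and boost formulas here are illustrative):

```python
def co_activate(anchor_maps, boost=0.5):
    """anchor_maps: {anchor: {neuron: activation}}, one map per anchor set.
    Returns {neuron: (boosted_score, binding_strength)} where
    binding_strength = anchors_reaching_it / total_anchors."""
    total = len(anchor_maps)
    sources, scores = {}, {}
    for activations in anchor_maps.values():
        for neuron, act in activations.items():
            sources[neuron] = sources.get(neuron, 0) + 1
            scores[neuron] = max(scores.get(neuron, 0.0), act)
    return {
        neuron: (score * (1.0 + boost * (sources[neuron] - 1)),  # co-activation boost
                 sources[neuron] / total)                        # binding strength
        for neuron, score in scores.items()
    }

maps = {
    "tuesday": {"meeting": 0.6, "jwt": 0.5},
    "alice":   {"jwt": 0.7},
    "auth":    {"jwt": 0.4, "auth-module": 0.8},
}
result = co_activate(maps)
print(result["jwt"])  # reached from 3/3 anchors → binding_strength = 1.0
```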
5. Subgraph Extraction¶
The highest-scoring connected region is extracted as the result, with co-activated neurons ranked highest.
Configuration¶
Brain Config¶
```python
@dataclass
class BrainConfig:
    decay_rate: float = 0.1            # How fast activation decays
    reinforcement_delta: float = 0.05  # How much to strengthen on use
    activation_threshold: float = 0.2  # Minimum activation
    max_spread_hops: int = 4           # Maximum traversal depth
    max_context_tokens: int = 1500     # Max tokens in response
```
Decay Rate¶
Controls how quickly activation fades: per hop in classic mode (distance-based), per trail step in reflex mode (trail-based).
| decay_rate | Effect |
|---|---|
| 0.05 | Slow decay, wide spread |
| 0.1 | Default, balanced |
| 0.2 | Fast decay, focused |
| 0.3 | Very focused, nearby only |
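Assuming a per-hop survival factor of `(1 - decay_rate)`, which is one plausible reading of the table, you can estimate how far a unit signal travels before falling below the activation threshold:

```python
def reach(decay_rate, threshold=0.2, max_hops=10):
    """Hops a unit signal survives, assuming per-hop factor (1 - decay_rate).
    The survival-factor interpretation is an assumption for illustration."""
    signal, hops = 1.0, 0
    while signal * (1.0 - decay_rate) >= threshold and hops < max_hops:
        signal *= 1.0 - decay_rate
        hops += 1
    return hops

for rate in (0.05, 0.1, 0.2, 0.3):
    print(rate, reach(rate))  # higher decay_rate → fewer hops survive
```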
Fiber Conductivity¶
Fibers have a conductivity value (0.0-1.0) that affects signal transmission:
- 1.0: Full conductivity (new or frequently-used fiber)
- 0.5: Moderate conductivity
- 0.0: No signal passes through
Conductivity increases by +0.02 each time a fiber is activated (capped at 1.0).
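The reinforcement rule is simple enough to state directly (a sketch, not the library's API):

```python
def reinforce(conductivity, delta=0.02):
    """Bump fiber conductivity on each activation, capped at 1.0."""
    return min(1.0, conductivity + delta)

c = 0.9
for _ in range(10):
    c = reinforce(c)
print(c)  # capped at 1.0
```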
Time Factor¶
Recently-conducted fibers transmit stronger signals:
| Age | Time Factor |
|---|---|
| Just now | ~1.0 |
| 1 day | ~0.86 |
| 3 days | ~0.57 |
| 7 days | ~0.1 |
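The exact curve is internal to the engine; a linear falloff over roughly a week is one simple function with the same general shape as the table (the `horizon` constant is an illustrative fit, not a documented value):

```python
def time_factor(age_days, horizon=7.5):
    """Approximate time factor: linear falloff to zero over about a week.
    horizon is an illustrative fit to the table, not a documented constant."""
    return max(0.0, 1.0 - age_days / horizon)

for age in (0, 1, 3, 7):
    print(age, round(time_factor(age), 2))
```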
Activation Threshold¶
Neurons below this level are ignored:
- Lower threshold = more results, potentially noisy
- Higher threshold = fewer results, more precise
Max Spread Hops¶
Limits how far activation travels:
| Hops | Reach | Use Case |
|---|---|---|
| 1 | Direct connections | Simple lookups |
| 2-3 | Local context | Most queries |
| 4+ | Extended graph | Deep analysis |
Depth Levels¶
The CLI uses depth levels to control spreading:
```shell
nmem recall "query" --depth 0   # Instant: 1 hop
nmem recall "query" --depth 1   # Context: 2-3 hops
nmem recall "query" --depth 2   # Habit: 4+ hops
nmem recall "query" --depth 3   # Deep: full traversal
```
Auto-Detection¶
Without --depth, the system auto-detects:
| Query Pattern | Detected Depth |
|---|---|
| "What is X?" | Instant (0) |
| "What happened before X?" | Context (1) |
| "Do I usually X?" | Habit (2) |
| "Why did X happen?" | Deep (3) |
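A toy version of this heuristic, with illustrative patterns rather than the shipped ones:

```python
import re

def detect_depth(query):
    """Map query phrasing to a depth level (0=instant ... 3=deep).
    Patterns are illustrative, not the shipped heuristics."""
    q = query.lower()
    if re.search(r"\bwhy\b", q):
        return 3  # causal questions need deep traversal
    if re.search(r"\busually\b|\btend to\b", q):
        return 2  # habit questions
    if re.search(r"\bbefore\b|\bafter\b|\bhappened\b", q):
        return 1  # contextual / temporal questions
    return 0      # simple lookups

print(detect_depth("Why did X happen?"))  # 3
```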
Synapse Weight Effects¶
Higher-weight synapses transfer more activation:
```
High weight (0.9): [A] ──0.9──► [B]   → B gets 0.72 activation
Low weight (0.3):  [A] ──0.3──► [B]   → B gets 0.24 activation
```
This naturally prioritizes:

- Strong causal links over weak associations
- Recent/reinforced paths over old/unused ones
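The transfer arithmetic, assuming the decay_factor of 0.8 used in the earlier example:

```python
def transfer(activation, synapse_weight, decay_factor=0.8):
    """Activation passed across one synapse: source * weight * per-hop decay."""
    return activation * synapse_weight * decay_factor

print(round(transfer(1.0, 0.9), 2))  # 0.72
print(round(transfer(1.0, 0.3), 2))  # 0.24
```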
Multi-Anchor Convergence¶
The power of spreading activation shows with multiple query terms:
```
Query: "Alice auth Tuesday"

[Tuesday] ────┐   (time anchor, weight 1.0)
              │
              ▼
[Alice] ──────┤
              ▼
        [JWT meeting]  ← HIGH SCORE (all 3 converge)
              ▲
              │
[auth] ───────┘
```
In reflex mode, co-activated neurons (activated by multiple anchor sets) receive a binding strength score proportional to how many anchors they were reached from. This replaces simple additive scoring with Hebbian-style binding.
Performance Characteristics¶
Hybrid Mode (default)¶
| Graph Size | Classic (ms) | Hybrid (ms) | Speedup | Recall |
|---|---|---|---|---|
| 100 neurons | 1.6 | 0.3 | 4.6x | 77% |
| 1K neurons | 2.2 | 0.4 | 5.9x | 55% |
| 3K neurons | 3.2 | 0.4 | 7.3x | 50% |
| 5K neurons | 2.0 | 0.4 | 4.6x | 51% |
See Benchmarks for full results. Regenerate with `python benchmarks/run_benchmarks.py`.
How Hybrid Works¶
- Reflex pass (fast): Conduct signals along fiber pathways
- Discovery pass (limited BFS): Classic activation with `max_hops // 2` to find neurons outside fibers
- Merge: Reflex results are primary; discovery results are dampened (0.6x) to keep fiber signals ranked higher
This gives the speed of reflex activation with the coverage of classic BFS.
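A sketch of the merge step, using the 0.6 dampening from the description above (map shapes and names are illustrative):

```python
def merge_hybrid(reflex, discovery, dampening=0.6):
    """Merge activation maps: reflex is primary, discovery enters at 0.6x.
    Each neuron keeps the higher of its reflex score and its dampened
    discovery score, so fiber signals stay ranked first."""
    merged = dict(reflex)
    for neuron, act in discovery.items():
        damped = act * dampening
        if damped > merged.get(neuron, 0.0):
            merged[neuron] = damped
    return merged

reflex = {"jwt": 0.7, "meeting": 0.5}
discovery = {"jwt": 0.9, "sidebar-note": 0.6}
print(merge_hybrid(reflex, discovery))  # jwt keeps 0.7; sidebar-note enters at 0.36
```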
Compared to Other Approaches¶
vs. Vector Similarity¶
| Aspect | Vector Search | Spreading Activation |
|---|---|---|
| Model | Geometric distance | Graph traversal |
| Relationships | Implicit | Explicit typed edges |
| Multi-hop | Multiple queries | Single traversal |
| Explanation | "Similar embedding" | "Connected via X" |
vs. Keyword Search¶
| Aspect | Keyword Search | Spreading Activation |
|---|---|---|
| Model | Text matching | Semantic graph |
| Synonyms | Manual expansion | Automatic via links |
| Context | None | Full graph context |
| Ranking | TF-IDF / BM25 | Activation convergence |
Reflex vs Classic Mode¶
You can choose which activation mode to use:
```python
# Hybrid reflex mode (default in v0.6.0+)
# Reflex trail activation + limited classic BFS for discovery
pipeline = ReflexPipeline(storage, config, use_reflex=True)

# Classic mode (distance-based, pre-v0.6.0 behavior)
pipeline = ReflexPipeline(storage, config, use_reflex=False)
```
| Aspect | Classic | Hybrid (default) |
|---|---|---|
| Decay model | Distance-based (decay^hops) | Trail + limited BFS |
| Time handling | Equal weight | Time-first primary anchors |
| Fiber role | Static memory cluster | Signal pathway with conductivity |
| Multi-anchor | Additive intersection | Co-activation (Hebbian binding) |
| Recency | Not considered | Time factor boosts recent fibers |
| Speed | Baseline | 4-7x faster at scale |
| Recall | 100% (by definition) | ~50-77% of classic's neurons |
Debugging Activation¶
Use --show-routing to see activation paths:
Output:
```
Query parsed: entities=[auth], intents=[decision]
Anchors: [neuron-auth-123 (w=0.8), neuron-decision-456 (w=0.4)]
Mode: reflex
Fibers: 3 containing anchor neurons
Trail activation:
  fiber-001 (conductivity=0.95, time_factor=0.92):
    neuron-auth-123 → neuron-jwt-789 (activation: 0.70)
    neuron-jwt-789 → neuron-meeting-012 (activation: 0.48)
Co-activations: neuron-jwt-789 (binding_strength=1.0, sources=2)
Result: "We decided to use JWT for authentication"
```