# Brain Reactive Core Architecture
## The Always-On Daemon That Makes Everything Else Possible

**Author:** OpenClaw (vg-claw)  
**Date:** 2026-03-25  
**Status:** Design Proposal

---

## 1. The Problem

We have all the pieces:
- 213K documents ingested
- Vector search working
- MCP server running
- Agent spawning functional
- Multiple models accessible

But nothing **reacts**. Data sits until a human asks. The brain is passive.

## 2. The Vision

```
┌─────────────────────────────────────────────────────────────────┐
│                     BRAIN REACTIVE CORE                         │
│                    "The Attention Loop"                          │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│    ┌──────────┐     ┌──────────┐     ┌──────────┐              │
│    │  SENSE   │────▶│  DECIDE  │────▶│   ACT    │              │
│    │          │     │          │     │          │              │
│    │ Poll DB  │     │ Classify │     │ Dispatch │              │
│    │ for new  │     │ priority │     │ to right │              │
│    │ events   │     │ & type   │     │ worker   │              │
│    └──────────┘     └──────────┘     └──────────┘              │
│         ▲                                   │                   │
│         │                                   │                   │
│         └───────────────────────────────────┘                   │
│                    LEARN (feedback)                             │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘
```

## 3. Event Sources (What We Sense)

```sql
-- Single unified event table
CREATE TABLE brain.events (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    source TEXT NOT NULL,           -- 'chat', 'transcript', 'webhook', 'cron', 'git'
    source_id TEXT,                 -- Original ID from source system
    event_type TEXT NOT NULL,       -- 'new_message', 'file_added', 'mention', 'commit'
    payload JSONB NOT NULL,         -- Full event data
    priority INT DEFAULT 5,         -- 1=urgent, 5=normal, 10=background
    status TEXT DEFAULT 'pending',  -- 'pending', 'processing', 'done', 'failed', 'skipped'
    worker_id TEXT,                 -- Which worker claimed it
    created_at TIMESTAMPTZ DEFAULT NOW(),
    processed_at TIMESTAMPTZ,
    result JSONB                    -- Outcome of processing
);

CREATE INDEX idx_events_pending ON brain.events(status, priority, created_at) 
    WHERE status = 'pending';
```

### Event Sources

| Source | Trigger | Event Type | Priority |
|--------|---------|------------|----------|
| Nextcloud Chat | Webhook on message | `chat.message` | 2 (fast response expected) |
| Transcript Upload | File watcher / webhook | `transcript.new` | 5 (can batch) |
| Git Push | Webhook | `git.push` | 5 |
| Email | IMAP poll | `email.new` | 3 |
| Calendar | Cron check | `calendar.upcoming` | 4 |
| Cron Jobs | Scheduled | `cron.trigger` | varies |
| Manual Task | API/CLI | `task.manual` | 1 |
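
The normalization from an incoming webhook or poll result into a `brain.events` row can be sketched as a small pure function. This is a sketch, not an existing API: `to_event_row` and the `SOURCE_PRIORITY` map are assumptions that mirror the table above.

```python
import json
import uuid
from datetime import datetime, timezone
from typing import Optional

# Default priority per source, mirroring the table above (assumed values).
SOURCE_PRIORITY = {"chat": 2, "email": 3, "calendar": 4,
                   "transcript": 5, "git": 5, "manual": 1}


def to_event_row(source: str, event_type: str, payload: dict,
                 source_id: Optional[str] = None) -> dict:
    """Normalize a webhook/poll result into a brain.events row."""
    return {
        "id": str(uuid.uuid4()),
        "source": source,
        "source_id": source_id,
        "event_type": event_type,
        "payload": json.dumps(payload),
        "priority": SOURCE_PRIORITY.get(source, 5),  # unknown sources -> normal
        "status": "pending",
        "created_at": datetime.now(timezone.utc).isoformat(),
    }


# Example: a Nextcloud Talk message arriving via webhook
row = to_event_row("chat", "chat.message",
                   {"room": "h2h7xsqc", "text": "status update?"},
                   source_id="msg-4711")
```

Keeping this mapping in one place means every event source agrees on priorities and status defaults before anything touches the database.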

## 4. The Daemon (brain-daemon)

### Core Loop (Pseudocode)

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Task:
    worker_type: str
    priority: int


class BrainDaemon:
    def __init__(self):
        self.db = connect_postgres()   # assumed helper, e.g. an asyncpg pool
        self.workers = WorkerPool()
        self.running = True

    async def run(self):
        """Main attention loop - runs forever."""
        while self.running:
            # SENSE: claim pending events. FOR UPDATE SKIP LOCKED only
            # guards against concurrent daemons inside a transaction, and
            # claimed rows must be marked so they aren't fetched twice.
            async with self.db.transaction():
                events = await self.db.fetch("""
                    SELECT * FROM brain.events
                    WHERE status = 'pending'
                    ORDER BY priority ASC, created_at ASC
                    LIMIT 10
                    FOR UPDATE SKIP LOCKED
                """)
                await self.db.execute(
                    "UPDATE brain.events SET status = 'processing' WHERE id = ANY($1)",
                    [e['id'] for e in events])

            if not events:
                await asyncio.sleep(1)  # Nothing to do, rest
                continue

            for event in events:
                # DECIDE: What kind of work is this?
                task = self.classify(event)

                # ACT: Dispatch to appropriate worker
                worker = self.workers.get(task.worker_type)
                await worker.process(event, task)

                # LEARN: Record outcome
                await self.record_result(event, task)

    def classify(self, event) -> Task:
        """Determine task type, priority, and target worker.

        Rules are checked in order; the catch-all last rule guarantees
        every event is assigned a worker.
        """
        rules = [
            # Chat messages -> chat-worker (fast, conversational)
            (lambda e: e.source == 'chat', 'chat-worker', 2),

            # Transcripts -> transcript-worker (batch, analytical)
            (lambda e: e.event_type == 'transcript.new', 'transcript-worker', 5),

            # Code-related -> code-worker (spawn Codex/Cursor)
            (lambda e: 'code' in e.payload.get('tags', []), 'code-worker', 4),

            # Research tasks -> research-worker (deep, slow)
            (lambda e: e.payload.get('task_type') == 'research', 'research-worker', 6),

            # Default -> general-worker
            (lambda e: True, 'general-worker', 5),
        ]

        for condition, worker, priority in rules:
            if condition(event):
                return Task(worker_type=worker, priority=priority)
```
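
The claim step is the one place where correctness really matters: two daemons must never grab the same event. A runnable illustration of the pattern, using SQLite as a stand-in for Postgres (in production this would be a single Postgres `UPDATE ... WHERE id IN (SELECT ... FOR UPDATE SKIP LOCKED) RETURNING *`; the table shape here is simplified):

```python
import sqlite3


def claim_events(conn, worker_id, limit=10):
    """Atomically move up to `limit` pending events to 'processing'."""
    with conn:  # one transaction: select + update commit together
        rows = conn.execute(
            "SELECT id FROM events WHERE status = 'pending' "
            "ORDER BY priority, created_at LIMIT ?", (limit,)).fetchall()
        ids = [r[0] for r in rows]
        if ids:
            marks = ",".join("?" * len(ids))
            conn.execute(
                f"UPDATE events SET status = 'processing', worker_id = ? "
                f"WHERE id IN ({marks})", [worker_id, *ids])
    return ids


# Demo: three events, one urgent; claim two and check ordering
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, priority INT, "
             "created_at TEXT, status TEXT DEFAULT 'pending', worker_id TEXT)")
conn.executemany("INSERT INTO events (priority, created_at) VALUES (?, ?)",
                 [(5, "t1"), (1, "t2"), (5, "t3")])
claimed = claim_events(conn, "worker-1", limit=2)  # urgent event first
```

SQLite serializes writers, so select-then-update in one transaction is enough here; Postgres needs `SKIP LOCKED` because many workers poll concurrently.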

### Why Zig? (brain-nullclaw)

```
┌─────────────────────────────────────────────────────────────────┐
│ Python Daemon              vs           Zig Daemon              │
├─────────────────────────────────────────────────────────────────┤
│ ~50MB memory                            ~2MB memory             │
│ GIL limits concurrency                  True async              │
│ Slow startup                            Instant startup         │
│ Easy to write                           Harder to write         │
│ Good for prototyping                    Good for production     │
└─────────────────────────────────────────────────────────────────┘

Recommendation: Prototype in Python, rewrite core loop in Zig once stable.
```

## 5. Workers (Specialized Brains)

Each worker is an independent entity with:
- Own Nextcloud Talk channel (for human oversight)
- Own memory files (learns its domain)
- Own tool access (what it's allowed to do)
- Own model preferences (which LLMs it uses)
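
The four properties above suggest a shared config shape for every worker. A minimal sketch, assuming names like `WorkerConfig`, `memory_dir`, and `model_for` that do not exist yet:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class WorkerConfig:
    name: str
    channel: str                      # Nextcloud Talk room for human oversight
    memory_dir: str                   # where the worker persists what it learns
    tools: List[str] = field(default_factory=list)
    models: Dict[str, str] = field(default_factory=dict)
    timeout_s: int = 300

    def model_for(self, role: str) -> str:
        """Pick the model configured for a role, falling back to 'default'."""
        return self.models.get(role, self.models.get("default", "claude-sonnet"))


tw = WorkerConfig(
    name="transcript-worker",
    channel="transcript-room-id",
    memory_dir="/workers/transcript/",
    tools=["gog", "vikunja", "git", "vector-db"],
    models={"extraction": "claude-sonnet", "summary": "gemini-flash"},
)
```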

### Worker Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                      WORKER: transcript-worker                   │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  Identity:                                                       │
│    - Name: "Transcript Processor"                               │
│    - NC Chat: #transcript-worker (room: xyz123)                 │
│    - Memory: /workers/transcript/memory/                        │
│                                                                  │
│  Capabilities:                                                   │
│    - Read Google Docs (gog CLI)                                 │
│    - Parse meeting transcripts                                  │
│    - Extract action items                                       │
│    - Create Vikunja tasks                                       │
│    - Push to Git                                                │
│    - Update vector DB                                           │
│                                                                  │
│  Model Routing:                                                  │
│    - Summary: gemini-flash (fast, cheap)                        │
│    - Extraction: claude-sonnet (structured output)              │
│    - Embeddings: local (all-MiniLM-L6-v2)                       │
│                                                                  │
│  Learning:                                                       │
│    - Tracks extraction accuracy                                 │
│    - Learns meeting patterns per project                        │
│    - Improves prompts based on feedback                         │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘
```

### Worker Registry

```yaml
# /etc/brain/workers.yaml
workers:
  chat-worker:
    description: "Handles real-time chat interactions"
    channel: "h2h7xsqc"  # NC Talk room
    models:
      default: claude-sonnet
      fast: gemini-flash
    tools: [search, calendar, weather, tts]
    memory: /workers/chat/
    timeout: 30s
    
  transcript-worker:
    description: "Processes meeting transcripts"
    channel: "transcript-room-id"
    models:
      extraction: claude-sonnet
      summary: gemini-flash
      embeddings: local
    tools: [gog, vikunja, git, vector-db]
    memory: /workers/transcript/
    timeout: 300s
    
  code-worker:
    description: "Handles coding tasks"
    channel: "code-room-id"
    models:
      default: claude-sonnet
      coding: codex
      review: qwen
    tools: [git, exec, cursor, codex, github]
    memory: /workers/code/
    timeout: 600s
    
  research-worker:
    description: "Deep research and analysis"
    channel: "research-room-id"
    models:
      default: claude-opus
      search: gemini-flash
    tools: [web_search, web_fetch, pdf, arxiv]
    memory: /workers/research/
    timeout: 900s
```
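
The registry stores timeouts as strings like `30s`, so a loader has to convert them before it can enforce them. A small sketch (the `s`/`m` suffix convention is an assumption; only seconds appear in the file above, and the real loader would parse the YAML with something like `yaml.safe_load`, represented here by a literal dict):

```python
import re


def parse_timeout(value: str) -> int:
    """Convert a registry timeout like '30s' or '5m' into seconds."""
    m = re.fullmatch(r"(\d+)([sm])", value.strip())
    if not m:
        raise ValueError(f"bad timeout: {value!r}")
    n, unit = int(m.group(1)), m.group(2)
    return n if unit == "s" else n * 60


# Stand-in for the parsed workers.yaml
registry = {"workers": {"chat-worker": {"timeout": "30s"},
                        "code-worker": {"timeout": "600s"}}}
timeouts = {name: parse_timeout(spec["timeout"])
            for name, spec in registry["workers"].items()}
```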

## 6. The Orchestrator

When tasks are complex, the orchestrator breaks them down:

```
┌─────────────────────────────────────────────────────────────────┐
│                        ORCHESTRATOR                              │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  Input: "Review the Q1 transcripts and create a summary         │
│          report with action items, then update the project      │
│          board and draft follow-up emails"                      │
│                                                                  │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │                 TASK DECOMPOSITION                        │    │
│  │                                                           │    │
│  │  1. [transcript-worker] Extract Q1 transcripts           │    │
│  │     └── deps: none                                        │    │
│  │     └── parallel: yes                                     │    │
│  │                                                           │    │
│  │  2. [transcript-worker] Summarize each transcript        │    │
│  │     └── deps: [1]                                         │    │
│  │     └── parallel: yes (per transcript)                    │    │
│  │                                                           │    │
│  │  3. [research-worker] Synthesize into report             │    │
│  │     └── deps: [2]                                         │    │
│  │     └── parallel: no                                      │    │
│  │                                                           │    │
│  │  4. [general-worker] Extract action items                │    │
│  │     └── deps: [2]                                         │    │
│  │     └── parallel: yes (with 3)                            │    │
│  │                                                           │    │
│  │  5. [general-worker] Update Vikunja board                │    │
│  │     └── deps: [4]                                         │    │
│  │     └── parallel: no                                      │    │
│  │                                                           │    │
│  │  6. [chat-worker] Draft follow-up emails                 │    │
│  │     └── deps: [3, 4]                                      │    │
│  │     └── parallel: no                                      │    │
│  │                                                           │    │
│  └─────────────────────────────────────────────────────────┘    │
│                                                                  │
│  Execution Graph:                                               │
│                                                                  │
│       [1]                                                        │
│        │                                                         │
│        ▼                                                         │
│       [2]                                                        │
│       / \                                                        │
│      ▼   ▼                                                       │
│    [3]  [4]  ◄── parallel                                       │
│      \   │                                                       │
│       \  ▼                                                       │
│        \[5]                                                      │
│         \│                                                       │
│          ▼                                                       │
│         [6]                                                      │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘
```
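
The scheduling half of the orchestrator can be sketched as grouping tasks into "waves" where every task's dependencies sit in an earlier wave, so each wave runs in parallel. This is one possible implementation (Kahn-style leveling; `execution_waves` is a hypothetical name), using the task numbering from the decomposition above:

```python
def execution_waves(deps: dict) -> list:
    """Group tasks into parallel waves. `deps` maps task -> prerequisite tasks."""
    remaining = dict(deps)
    done, waves = set(), []
    while remaining:
        # A task is ready once all of its dependencies are done
        ready = sorted(t for t, d in remaining.items() if set(d) <= done)
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(ready)
        done.update(ready)
        for t in ready:
            del remaining[t]
    return waves


# The Q1-report decomposition from above
plan = execution_waves({1: [], 2: [1], 3: [2], 4: [2], 5: [4], 6: [3, 4]})
```

Note the waves recover the graph's parallelism automatically: tasks 3 and 4 land in the same wave, and 5 and 6 become runnable together once 3 and 4 finish.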

## 7. Model Routing

Not all tasks need the same model. The brain routes intelligently:

```python
import logging

logger = logging.getLogger("brain.router")


class ModelRouter:
    """Route tasks to optimal models based on requirements"""

    ROUTING_TABLE = {
        # Task type -> (model, reasoning)
        'code.generate': ('codex', 'Optimized for code generation'),
        'code.review': ('qwen', 'Good at spotting issues, cost-effective'),
        'code.explain': ('claude-sonnet', 'Clear explanations'),

        'chat.quick': ('gemini-flash', 'Fast responses'),
        'chat.complex': ('claude-sonnet', 'Nuanced conversation'),

        'analysis.deep': ('claude-opus', 'Best reasoning'),
        'analysis.quick': ('gemini-flash', 'Fast analysis'),

        'extraction.structured': ('claude-sonnet', 'Reliable JSON'),
        'extraction.simple': ('gemini-flash', 'Fast extraction'),

        'summary.long': ('gemini-pro', 'Large context window'),
        'summary.short': ('gemini-flash', 'Quick summaries'),

        'embeddings': ('local', 'Free, fast, private'),
    }

    def route(self, task_type: str, context: dict) -> str:
        """Select the optimal model for a task"""

        # An explicit override always wins
        if context.get('model'):
            return context['model']

        # Then consult the routing table
        if task_type in self.ROUTING_TABLE:
            model, reason = self.ROUTING_TABLE[task_type]
            logger.info("Routing %s -> %s: %s", task_type, model, reason)
            return model

        # Default fallback
        return 'claude-sonnet'
```

## 8. K-Lines (Knowledge Links)

The graph isn't just storage—it's associative memory:

```cypher
// Neo4j schema for K-lines

// Concepts -- each bound to its own variable so the relationship
// CREATEs below can reference it (rebinding one variable is a Cypher error)
CREATE (oraigami:Concept {name: "Oraigami", type: "project"})
CREATE (alc:Concept {name: "ALC", type: "product"})
CREATE (transcript_processing:Concept {name: "Transcript Processing", type: "skill"})
CREATE (gog:Concept {name: "gog", type: "tool"})
CREATE (varij:Concept {name: "varij", type: "person"})

// K-lines (typed relationships)
CREATE (oraigami)-[:OWNS]->(alc)
CREATE (alc)-[:REQUIRES]->(transcript_processing)
CREATE (transcript_processing)-[:USES_TOOL]->(gog)
CREATE (varij)-[:MANAGES]->(oraigami)

// Queries the brain can make:

// "What skills are needed for Oraigami work?"
MATCH (p:Concept {name: "Oraigami"})-[:OWNS|CONTAINS*]->(thing)
      -[:REQUIRES]->(skill:Concept {type: "skill"})
RETURN DISTINCT skill.name

// "Who should I notify about ALC issues?"
MATCH (:Concept {name: "ALC"})<-[:OWNS]-(project)<-[:MANAGES]-(person)
RETURN person.name, person.contact
```

## 9. Learning Loop

Every completed task feeds back:

```python
class LearningLoop:
    def record_outcome(self, task, result):
        """Learn from completed work"""
        
        # Was this successful?
        if result.success:
            # Extract patterns
            pattern = self.extract_pattern(task, result)
            
            # Did we use a good approach?
            if result.quality_score > 0.8:
                self.reinforce_approach(task.approach)
            
            # Can we generate a skill from this?
            if self.is_reusable(pattern):
                skill = self.generate_skill(pattern)
                self.save_skill(skill)
        
        else:
            # What went wrong?
            failure = self.analyze_failure(task, result)
            
            # Avoid this approach next time
            self.weaken_approach(task.approach, failure)
            
            # Should we escalate?
            if failure.severity > 0.7:
                self.escalate_to_human(task, failure)
    
    def extract_pattern(self, task, result):
        """Find reusable patterns in successful work"""
        return {
            'input_type': task.event_type,
            'approach': task.approach,
            'tools_used': result.tools_used,
            'model_used': result.model,
            'duration': result.duration,
            'output_type': result.output_type,
        }
```
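
The `reinforce_approach`/`weaken_approach` calls above can be grounded in something as simple as an exponential moving average over per-approach scores. A sketch under that assumption (the `ApproachScores` class, the 0.5 prior, and the 0.3 learning rate are all illustrative choices, not part of the design):

```python
class ApproachScores:
    """Running success score per approach, nudged toward 1 or 0 per outcome."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # learning rate: how fast old evidence fades
        self.scores = {}        # approach -> score in [0, 1], prior 0.5

    def update(self, approach: str, success: bool):
        prev = self.scores.get(approach, 0.5)
        target = 1.0 if success else 0.0
        self.scores[approach] = prev + self.alpha * (target - prev)

    def best(self, candidates):
        """Prefer the candidate with the highest running score."""
        return max(candidates, key=lambda a: self.scores.get(a, 0.5))


scores = ApproachScores()
scores.update("two-pass-extraction", True)
scores.update("single-shot", False)
```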

## 10. Implementation Roadmap

### Phase 1: Core Loop (Week 1)
```
□ Create brain.events table in PostgreSQL
□ Build Python daemon prototype
□ Implement event polling
□ Add simple classification
□ Test with manual events
```

### Phase 2: Event Sources (Week 2)
```
□ Nextcloud Chat webhook → events table
□ File watcher for transcripts
□ Git webhook integration
□ Cron job triggers
```
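
The transcript file watcher can start as a dumb poll before graduating to inotify or webhooks. A minimal sketch (`scan_new_files` is a hypothetical name; each returned path would become a `transcript.new` row in `brain.events`):

```python
import os
import tempfile


def scan_new_files(directory: str, seen: set) -> list:
    """Return files that appeared since the last scan, updating `seen` in place."""
    new = []
    for entry in sorted(os.scandir(directory), key=lambda e: e.name):
        if entry.is_file() and entry.path not in seen:
            seen.add(entry.path)
            new.append(entry.path)
    return new


# Demo against a throwaway directory
demo_dir = tempfile.mkdtemp()
open(os.path.join(demo_dir, "standup.txt"), "w").close()
seen = set()
first = scan_new_files(demo_dir, seen)    # picks up standup.txt
second = scan_new_files(demo_dir, seen)   # nothing new
```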

### Phase 3: First Workers (Week 3)
```
□ transcript-worker (based on gami-claw)
□ chat-worker (based on current OpenClaw)
□ Worker <-> NC Talk channel mapping
□ Worker memory directories
```

### Phase 4: Orchestrator (Week 4)
```
□ Task decomposition engine
□ Dependency resolution
□ Parallel execution
□ Result consolidation
```

### Phase 5: Model Routing (Week 5)
```
□ Routing table implementation
□ Task type classification
□ Cost tracking
□ Performance metrics
```

### Phase 6: K-Lines + Learning (Ongoing)
```
□ Neo4j schema design
□ Pattern extraction
□ Skill generation
□ Feedback loops
```

## 11. File Structure

```
/opt/brain/
├── daemon/
│   ├── main.py              # Core attention loop
│   ├── classifier.py        # Event classification
│   ├── dispatcher.py        # Worker dispatch
│   └── config.yaml          # Daemon config
│
├── workers/
│   ├── registry.yaml        # Worker definitions
│   ├── chat/
│   │   ├── worker.py
│   │   └── memory/
│   ├── transcript/
│   │   ├── worker.py
│   │   └── memory/
│   ├── code/
│   │   ├── worker.py
│   │   └── memory/
│   └── research/
│       ├── worker.py
│       └── memory/
│
├── orchestrator/
│   ├── decomposer.py        # Task breakdown
│   ├── scheduler.py         # Execution planning
│   └── consolidator.py      # Result merging
│
├── router/
│   ├── model_router.py      # Model selection
│   └── routing_table.yaml   # Model mappings
│
├── learning/
│   ├── patterns.py          # Pattern extraction
│   ├── skills.py            # Skill generation
│   └── feedback.py          # Outcome recording
│
└── db/
    ├── migrations/
    └── schema.sql
```

## 12. Quick Start (After Reading This)

```bash
# 1. Create the events table
ssh vgosine@10.11.12.105 "psql -U varij -d knowledge_base -f /opt/brain/db/schema.sql"

# 2. Start the daemon
cd /opt/brain/daemon
python main.py

# 3. Insert a test event
psql -U varij -d knowledge_base -c "INSERT INTO brain.events (source, event_type, payload)
     VALUES ('manual', 'test', '{\"message\": \"Hello brain!\"}')"

# 4. Watch it process
tail -f /var/log/brain/daemon.log
```

---

## Summary

The reactive core is **the foundation**. Everything else—specialized workers, orchestration, learning—builds on top of this simple loop:

```
SENSE → DECIDE → ACT → LEARN → (repeat)
```

Once this daemon is running, the brain becomes **alive**. It doesn't wait for humans to ask. It notices, decides, and acts.

---

*"The question of what 'thinking' really is has puzzled philosophers for thousands of years, but we've gotten nowhere because we always try to find some single, simple explanation. Each separate theory has some merit but each leaves out something else. I'll argue there's no single, simple way to explain how minds work."*  
— Marvin Minsky, The Society of Mind
