April 16, 2026 • Version: 2026.4.12

CLI Memory Commands Crash with 'Unknown memory embedding provider: ollama'

CLI memory commands (status, backup, rebuild) crash when the ollama embedding provider is configured. The provider is registered by a plugin at runtime, but CLI code paths never load plugins, so a hardcoded provider lookup fails.

🔍 Symptoms

When executing any CLI memory command with the ollama provider configured, the command terminates immediately with a JavaScript error.

Error Output

$ openclaw memory status
Error: Unknown memory embedding provider: ollama
    at getAdapter (manager-FzeN0TEi.js:341:22)
    at createEmbeddingProvider (manager-FzeN0TEi.js:393:25)
    at MemoryIndexManager.loadProviderResult (manager-FzeN0TEi.js:2759:16)

$ openclaw memory backup
Error: Unknown memory embedding provider: ollama
    at getAdapter (manager-FzeN0TEi.js:341:22)
    at createEmbeddingProvider (manager-FzeN0TEi.js:393:25)
    at MemoryIndexManager.loadProviderResult (manager-FzeN0TEi.js:2759:16)

$ openclaw memory rebuild
Error: Unknown memory embedding provider: ollama
    at getAdapter (manager-FzeN0TEi.js:341:22)
    at createEmbeddingProvider (manager-FzeN0TEi.js:393:25)
    at MemoryIndexManager.loadProviderResult (manager-FzeN0TEi.js:2759:16)

Affected Commands

  • openclaw memory status
  • openclaw memory backup
  • openclaw memory rebuild

Working Code Path (Gateway)

The gateway runtime processes memory searches correctly because plugin initialization occurs before the memory system is initialized:

# Gateway memory_search works fine
$ openclaw gateway memory_search "test query"
[Returns results using ollama embeddings]

Configuration That Triggers the Bug

# ~/.openclaw/config.yaml
agents:
  defaults:
    memorySearch:
      provider: "ollama"
      model: "snowflake-arctic-embed2"
      remote:
        baseUrl: "http://192.168.1.100:11434"
      fallback: "none"

🧠 Root Cause

Architectural Divergence: Plugin Registration vs. CLI Code Paths

The memory embedding provider system has two registration mechanisms with incompatible initialization sequences.

Mechanism 1: Plugin-Based Registration (Gateway-Ready)

The ollama plugin registers its embedding provider at runtime through the plugin API:

// extensions/ollama/index.js
api.registerMemoryEmbeddingProvider(ollamaMemoryEmbeddingProviderAdapter);

This registration occurs during the plugin initialization phase, which runs before the memory system is fully initialized in the gateway runtime.

Mechanism 2: Hardcoded Builtin Registration (CLI-Only)

CLI memory commands invoke registerBuiltInMemoryEmbeddingProviders(), which populates a hardcoded array:

// manager-FzeN0TEi.js (compiled bundle)
const builtinMemoryEmbeddingProviderAdapters = [
  'local',
  'openai',
  'gemini',
  'voyage',
  'mistral'
];

function registerBuiltInMemoryEmbeddingProviders() {
  for (const adapter of builtinMemoryEmbeddingProviderAdapters) {
    // Register each adapter with its associated metadata
  }
}

The Failure Sequence

  1. User invokes openclaw memory status
  2. CLI initializes a MemoryIndexManager instance
  3. MemoryIndexManager.loadProviderResult() is called with the configured provider ("ollama")
  4. createEmbeddingProvider() invokes getAdapter()
  5. getAdapter() checks the internal registry for "ollama"
  6. Since CLI never loaded plugins, "ollama" is not in the registry
  7. Error is thrown: "Unknown memory embedding provider: ollama"
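The lookup in steps 4-7 can be sketched as a plain registry map. This is a hypothetical reconstruction: the names (registerAdapter, getAdapter) mirror the stack trace, not the actual compiled bundle.

```javascript
// Minimal sketch of the failing lookup, assuming a Map-backed registry.
const registry = new Map();

function registerAdapter(name, adapter) {
  registry.set(name, adapter);
}

function getAdapter(name) {
  const adapter = registry.get(name);
  if (!adapter) {
    throw new Error(`Unknown memory embedding provider: ${name}`);
  }
  return adapter;
}

// CLI path: only the hardcoded builtins are registered; plugins never load.
for (const name of ['local', 'openai', 'gemini', 'voyage', 'mistral']) {
  registerAdapter(name, { name });
}

try {
  getAdapter('ollama'); // the configured provider
} catch (err) {
  console.error(err.message); // "Unknown memory embedding provider: ollama"
}
```

In the gateway, the plugin initialization phase calls the equivalent of registerAdapter('ollama', ...) before any lookup happens, which is why the same lookup succeeds there.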

Why Gateway Works

The gateway runtime explicitly loads and initializes all enabled plugins before initializing any subsystem:

// Gateway initialization sequence (simplified)
await pluginManager.loadPlugins();           // Loads extensions/ollama/index.js
await pluginManager.initializePlugins();     // Calls api.registerMemoryEmbeddingProvider()
await memoryManager.initialize();            // Now has ollama in registry

Broader Implications

The builtinMemoryEmbeddingProviderAdapters array is missing several documented providers:

  • ollama - First-party bundled extension (this bug)
  • lmstudio - Documented as valid in runtime schema
  • bedrock - Documented as valid in runtime schema

All of these would exhibit identical crash behavior when configured as CLI memory providers.

Privacy/Security Side Effect

When using provider: "auto" as a workaround, the auto-select priority chain is:

local(10) → openai(20) → gemini(30) → voyage(40) → mistral(50)

Since ollama has no autoSelectPriority defined, it is never selected by auto-select. Users seeking local/self-hosted embeddings may inadvertently transmit memory data to OpenAI's servers.
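A minimal sketch of why that happens, assuming auto-select picks the adapter with the lowest autoSelectPriority (the priorities are from the chain above; the selection logic is an assumption):

```javascript
// Hypothetical sketch: auto-select picks the registered adapter with the
// lowest autoSelectPriority; ollama defines none, so it is never eligible.
const adapters = [
  { name: 'local', autoSelectPriority: 10 },
  { name: 'openai', autoSelectPriority: 20 },
  { name: 'gemini', autoSelectPriority: 30 },
  { name: 'voyage', autoSelectPriority: 40 },
  { name: 'mistral', autoSelectPriority: 50 },
  { name: 'ollama' }, // no autoSelectPriority -> skipped by auto-select
];

function autoSelect(candidates) {
  return candidates
    .filter((a) => Number.isFinite(a.autoSelectPriority))
    .sort((a, b) => a.autoSelectPriority - b.autoSelectPriority)[0];
}

console.log(autoSelect(adapters).name); // "local"

// If the local adapter is unavailable, the chain falls through to a remote one:
const withoutLocal = adapters.filter((a) => a.name !== 'local');
console.log(autoSelect(withoutLocal).name); // "openai"
```

The second case is the privacy trap: with the local model unavailable, auto-select silently promotes a remote provider.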

🛠️ Step-by-Step Fix

Modify the builtinMemoryEmbeddingProviderAdapters array in the manager bundle to include ollama.

Before

// manager-FzeN0TEi.js (line ~XXX)
const builtinMemoryEmbeddingProviderAdapters = [
  'local',
  'openai',
  'gemini',
  'voyage',
  'mistral'
];

After

// manager-FzeN0TEi.js (line ~XXX)
const builtinMemoryEmbeddingProviderAdapters = [
  'local',
  'openai',
  'gemini',
  'voyage',
  'mistral',
  'ollama'
];

Implementation Steps

  1. Locate the builtinMemoryEmbeddingProviderAdapters definition in the source manager file (not the compiled bundle)
  2. Add 'ollama' as the final element in the array
  3. Rebuild the application bundle
  4. Alternatively, submit a pull request to the OpenClaw repository

Alternative Fix: Load Plugins in CLI Memory Commands

If the goal is to maintain feature parity between the CLI and the gateway, load plugins in the CLI memory commands before the memory system initializes:

// In CLI memory command initialization (pseudo-code)
async function initializeMemoryCommands() {
  // Load plugins before initializing memory system
  await pluginManager.loadPlugins();
  await pluginManager.initializePlugins();
  
  // Now ollama (and other plugin providers) are registered
  registerBuiltInMemoryEmbeddingProviders();
  
  // Continue with command setup
}

Workaround: Use a Builtin Provider

Until the fix ships, change the provider to one of the builtin providers:

# ~/.openclaw/config.yaml
agents:
  defaults:
    memorySearch:
      provider: "local"  # Use local instead of ollama
      # Or use "openai" if you have an API key
      # Or use "auto" (but note privacy implications)

🧪 Verification

After applying the fix, verify by running the affected CLI commands.

Test Commands

# Test 1: Memory status command
$ openclaw memory status
✓ Connected to memory index
✓ Provider: ollama
✓ Model: snowflake-arctic-embed2
✓ Endpoint: http://192.168.1.100:11434
✓ Status: Ready

# Test 2: Memory backup command
$ openclaw memory backup --output ./memory-backup.json
✓ Backup completed successfully
✓ Records: 1,247 entries
✓ File: ./memory-backup.json

# Test 3: Memory rebuild command
$ openclaw memory rebuild --provider ollama
✓ Rebuild initiated
✓ Processing embeddings...
✓ Complete: 1,247 records re-embedded

Expected Exit Codes

All commands should exit with code 0 on success.

Integration Test: CLI vs Gateway Parity

Verify that both code paths produce identical results:

# Run same query via CLI
$ openclaw memory search "meeting notes from last week" --limit 5
[
  {
    "id": "mem_abc123",
    "content": "Quarterly planning meeting...",
    "score": 0.94
  }
]

# Run same query via gateway
$ openclaw gateway memory_search "meeting notes from last week"
[
  {
    "id": "mem_abc123",
    "content": "Quarterly planning meeting...",
    "score": 0.94
  }
]

# Results should match

Test with All Previously Affected Providers

If the other plugin-based providers (lmstudio, bedrock) are also added to the builtin list, verify them the same way:

# If lmstudio is added to builtin providers
$ openclaw memory status --provider lmstudio
✓ Provider: lmstudio
✓ Model: your-configured-model
✓ Status: Ready

⚠️ Common Pitfalls

Environment-Specific Traps

  • Apple Silicon (macOS arm64): Ollama on Apple Silicon may require Rosetta 2 for certain models. Ensure ollama serve is running before executing memory commands.
  • Docker Environments: If OpenClaw runs inside Docker, ensure the Ollama endpoint is accessible from the container network. Use --network=host or configure proper networking.
  • Tailscale/Remote Endpoints: When using Tailnet IPs, verify that the Tailscale daemon is running and the subnet is advertised.

Configuration Pitfalls

  • Endpoint URL Format: The base URL must include the port. Incorrect: http://192.168.1.100/ollama, Correct: http://192.168.1.100:11434
  • Model Name Mismatch: Ensure the model name exactly matches one available in Ollama. Run ollama list to verify.
  • Fallback Configuration: Setting fallback: "none" will cause immediate failure if Ollama is unreachable. Consider fallback: "local" for resilience.
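Putting those pitfalls together, a resilient configuration might look like this (values are illustrative, reusing the endpoint and model from the example above):

```yaml
# ~/.openclaw/config.yaml
agents:
  defaults:
    memorySearch:
      provider: "ollama"
      model: "snowflake-arctic-embed2"         # must match a model in `ollama list`
      remote:
        baseUrl: "http://192.168.1.100:11434"  # port is required
      fallback: "local"                        # degrade locally if Ollama is unreachable
```

With fallback: "local", an unreachable Ollama endpoint degrades to the local provider instead of failing outright; note that this does not work around the registration bug itself.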

Privacy Implications of "auto" Workaround

# WRONG: Using auto-select as a workaround
agents:
  defaults:
    memorySearch:
      provider: "auto"  # ⚠️ Sends data to OpenAI!

# CORRECT: Explicitly specify local provider
agents:
  defaults:
    memorySearch:
      provider: "ollama"  # ✓ Stays local

Misunderstanding: Plugin vs Builtin Distinction

Many users are unaware that providers fall into two categories:

  • Builtin Providers: Hardcoded in the manager, always available
  • Plugin Providers: Registered by plugins at runtime

This distinction is not documented, leading to confusion when CLI commands fail.

Version Compatibility

The bug affects version 2026.4.12. If upgrading from an earlier version, verify that the ollama plugin is compatible with the new manager bundle.

Related Errors

  • Unknown memory embedding provider: lmstudio — Same bug affects the lmstudio provider (also plugin-registered)
  • Unknown memory embedding provider: bedrock — Same bug affects the bedrock provider (also plugin-registered)
  • Unknown memory embedding provider: [any plugin-registered provider] — Generalization of this bug
  • Plugin initialization failed: ollama — Occurs when the ollama plugin fails to load, which may mask this error in some scenarios
  • ECONNREFUSED — Network error when the Ollama endpoint is unreachable; distinct from the provider registration error
  • Model not found: [model-name] — The Ollama server does not have the requested model; a different error path

Historical Context

  • 2026.3.x: Plugin system introduced for extensions, including ollama provider
  • 2026.4.x: CLI memory commands extracted to separate entry point that bypasses plugin initialization
  • 2026.4.12: Current version with the divergence bug (this issue)

Similar Patterns in Codebase

The same architectural pattern (builtin vs plugin registration) exists in:

  • Memory retrieval providers
  • LLM providers
  • Tool adapters

These may exhibit similar CLI-vs-gateway inconsistencies.

Evidence & Sources

This troubleshooting guide was automatically synthesized by the FixClaw Intelligence Pipeline from community discussions.