CLI Memory Commands Crash with 'Unknown memory embedding provider: ollama'
CLI memory commands (status, backup, rebuild) crash because the ollama embedding provider is registered by a plugin at runtime, but CLI code paths do not load plugins, causing a hardcoded provider lookup to fail.
Symptoms
When executing any CLI memory command with the ollama provider configured, the command terminates immediately with a JavaScript error.
Error Output
```console
$ openclaw memory status
Error: Unknown memory embedding provider: ollama
    at getAdapter (manager-FzeN0TEi.js:341:22)
    at createEmbeddingProvider (manager-FzeN0TEi.js:393:25)
    at MemoryIndexManager.loadProviderResult (manager-FzeN0TEi.js:2759:16)

$ openclaw memory backup
Error: Unknown memory embedding provider: ollama
    at getAdapter (manager-FzeN0TEi.js:341:22)
    at createEmbeddingProvider (manager-FzeN0TEi.js:393:25)
    at MemoryIndexManager.loadProviderResult (manager-FzeN0TEi.js:2759:16)

$ openclaw memory rebuild
Error: Unknown memory embedding provider: ollama
    at getAdapter (manager-FzeN0TEi.js:341:22)
    at createEmbeddingProvider (manager-FzeN0TEi.js:393:25)
    at MemoryIndexManager.loadProviderResult (manager-FzeN0TEi.js:2759:16)
```

Affected Commands

- `openclaw memory status`
- `openclaw memory backup`
- `openclaw memory rebuild`
Working Code Path (Gateway)
The gateway runtime processes memory searches correctly because plugin initialization occurs before the memory system is initialized:
```console
# Gateway memory_search works fine
$ openclaw gateway memory_search "test query"
[Returns results using ollama embeddings]
```

Configuration That Triggers the Bug
```yaml
# ~/.openclaw/config.yaml
agents:
  defaults:
    memorySearch:
      provider: "ollama"
      model: "snowflake-arctic-embed2"
      remote:
        baseUrl: "http://192.168.1.100:11434"
      fallback: "none"
```

Root Cause
Architectural Divergence: Plugin Registration vs. CLI Code Paths
The memory embedding provider system has two registration mechanisms with incompatible initialization sequences.
Mechanism 1: Plugin-Based Registration (Gateway-Ready)
The ollama plugin registers its embedding provider at runtime through the plugin API:
```js
// extensions/ollama/index.js
api.registerMemoryEmbeddingProvider(ollamaMemoryEmbeddingProviderAdapter);
```

This registration occurs during the plugin initialization phase, which runs before the memory system is fully initialized in the gateway runtime.
Mechanism 2: Hardcoded Builtin Registration (CLI-Only)
CLI memory commands invoke registerBuiltInMemoryEmbeddingProviders() which populates a hardcoded array:
```js
// manager-FzeN0TEi.js (compiled bundle)
const builtinMemoryEmbeddingProviderAdapters = [
  'local',
  'openai',
  'gemini',
  'voyage',
  'mistral'
];

function registerBuiltInMemoryEmbeddingProviders() {
  for (const adapter of builtinMemoryEmbeddingProviderAdapters) {
    // Register each adapter with its associated metadata
  }
}
```

The Failure Sequence
1. The user invokes `openclaw memory status`
2. The CLI initializes a `MemoryIndexManager` instance
3. `MemoryIndexManager.loadProviderResult()` is called with the configured provider (`"ollama"`)
4. `createEmbeddingProvider()` invokes `getAdapter()`
5. `getAdapter()` checks the internal registry for `"ollama"`
6. Since the CLI never loaded plugins, `"ollama"` is not in the registry
7. The error is thrown: `Unknown memory embedding provider: ollama`
Why Gateway Works
The gateway runtime explicitly loads and initializes all enabled plugins before initializing any subsystem:
```js
// Gateway initialization sequence (simplified)
await pluginManager.loadPlugins();        // Loads extensions/ollama/index.js
await pluginManager.initializePlugins();  // Calls api.registerMemoryEmbeddingProvider()
await memoryManager.initialize();         // Now has ollama in the registry
```

Broader Implications
The builtinMemoryEmbeddingProviderAdapters array is missing several documented providers:
- ollama - First-party bundled extension (this bug)
- lmstudio - Documented as valid in runtime schema
- bedrock - Documented as valid in runtime schema
All of these would exhibit identical crash behavior when configured as CLI memory providers.
Privacy/Security Side Effect
When using `provider: "auto"` as a workaround, the auto-select priority chain is:

local (10) → openai (20) → gemini (30) → voyage (40) → mistral (50)

Since ollama has no `autoSelectPriority` defined, it is never chosen by auto-select. Users seeking local/self-hosted embeddings may inadvertently transmit memory data to OpenAI's servers.
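The selection logic described above can be sketched as follows. The `autoSelectPriority` field name comes from the report; the function itself is an illustrative assumption, not the actual OpenClaw implementation:

```js
// Sketch of auto-select: pick the lowest-priority adapter that is available.
// Adapters without autoSelectPriority are invisible to this process.
const adapters = [
  { id: 'local', autoSelectPriority: 10 },
  { id: 'openai', autoSelectPriority: 20 },
  { id: 'gemini', autoSelectPriority: 30 },
  { id: 'voyage', autoSelectPriority: 40 },
  { id: 'mistral', autoSelectPriority: 50 },
  { id: 'ollama' }, // no autoSelectPriority: never auto-selected
];

function autoSelect(adapters, isAvailable) {
  return adapters
    .filter((a) => typeof a.autoSelectPriority === 'number')
    .sort((a, b) => a.autoSelectPriority - b.autoSelectPriority)
    .find((a) => isAvailable(a.id));
}

// If the local provider is unavailable, auto-select silently falls through
// to openai, a remote service, which is the privacy hazard described above.
const picked = autoSelect(adapters, (id) => id !== 'local');
console.log(picked.id); // 'openai'
```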
Step-by-Step Fix
Recommended Fix: Add Ollama to Builtin Providers
Modify the builtinMemoryEmbeddingProviderAdapters array in the manager bundle to include ollama.
Before
```js
// manager-FzeN0TEi.js (line ~XXX)
const builtinMemoryEmbeddingProviderAdapters = [
  'local',
  'openai',
  'gemini',
  'voyage',
  'mistral'
];
```

After

```js
// manager-FzeN0TEi.js (line ~XXX)
const builtinMemoryEmbeddingProviderAdapters = [
  'local',
  'openai',
  'gemini',
  'voyage',
  'mistral',
  'ollama'
];
```

Implementation Steps
1. Locate the `builtinMemoryEmbeddingProviderAdapters` definition in the source manager file (not the compiled bundle)
2. Add `'ollama'` as the final element of the array
3. Rebuild the application bundle
4. Alternatively, submit a pull request to the OpenClaw repository
Alternative Fix: Load Plugins in CLI Memory Commands
If the preferred solution is to maintain feature parity between CLI and gateway:
```js
// In CLI memory command initialization (pseudo-code)
async function initializeMemoryCommands() {
  // Load plugins before initializing the memory system
  await pluginManager.loadPlugins();
  await pluginManager.initializePlugins();

  // Now ollama (and other plugin providers) are registered
  registerBuiltInMemoryEmbeddingProviders();

  // Continue with command setup
}
```

Temporary Workaround (Not Recommended for Production)
Change the provider to one of the builtin providers:
```yaml
# ~/.openclaw/config.yaml
agents:
  defaults:
    memorySearch:
      provider: "local"  # Use local instead of ollama
      # Or use "openai" if you have an API key
      # Or use "auto" (but note the privacy implications)
```

Verification
After applying the fix, verify by running the affected CLI commands.
Test Commands
```console
# Test 1: Memory status command
$ openclaw memory status
✓ Connected to memory index
✓ Provider: ollama
✓ Model: snowflake-arctic-embed2
✓ Endpoint: http://192.168.1.100:11434
✓ Status: Ready

# Test 2: Memory backup command
$ openclaw memory backup --output ./memory-backup.json
✓ Backup completed successfully
✓ Records: 1,247 entries
✓ File: ./memory-backup.json

# Test 3: Memory rebuild command
$ openclaw memory rebuild --provider ollama
✓ Rebuild initiated
✓ Processing embeddings...
✓ Complete: 1,247 records re-embedded
```

Expected Exit Codes
All commands should exit with code 0 on success.
Integration Test: CLI vs Gateway Parity
Verify that both code paths produce identical results:
```console
# Run the same query via the CLI
$ openclaw memory search "meeting notes from last week" --limit 5
[
  {
    "id": "mem_abc123",
    "content": "Quarterly planning meeting...",
    "score": 0.94
  }
]

# Run the same query via the gateway
$ openclaw gateway memory_search "meeting notes from last week"
[
  {
    "id": "mem_abc123",
    "content": "Quarterly planning meeting...",
    "score": 0.94
  }
]

# Results should match
```

Test with All Previously Affected Providers
Verify that other plugin-based providers (lmstudio, bedrock) would also be fixed if added:
```console
# If lmstudio is added to builtin providers
$ openclaw memory status --provider lmstudio
✓ Provider: lmstudio
✓ Model: your-configured-model
✓ Status: Ready
```

Common Pitfalls
Environment-Specific Traps
- Apple Silicon (macOS arm64): Ollama on Apple Silicon may require Rosetta 2 for certain models. Ensure `ollama serve` is running before executing memory commands.
- Docker Environments: If OpenClaw runs inside Docker, ensure the Ollama endpoint is reachable from the container network. Use `--network=host` or configure the networking appropriately.
- Tailscale/Remote Endpoints: When using Tailnet IPs, verify that the Tailscale daemon is running and the subnet is advertised.
Configuration Pitfalls
- Endpoint URL Format: The base URL must include the port. Incorrect: `http://192.168.1.100/ollama`. Correct: `http://192.168.1.100:11434`.
- Model Name Mismatch: Ensure the model name exactly matches one available in Ollama. Run `ollama list` to verify.
- Fallback Configuration: Setting `fallback: "none"` causes an immediate failure if Ollama is unreachable. Consider `fallback: "local"` for resilience.
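The fallback semantics described in the last bullet can be sketched as follows. The function and adapter objects are illustrative assumptions; only the `"none"` and `"local"` fallback values come from the document:

```js
// Sketch of fallback behavior: with fallback "none" the first failure
// propagates immediately; with a fallback provider, the command survives
// an unreachable primary endpoint.
async function embedWithFallback(text, primary, fallbackAdapter, fallbackSetting) {
  try {
    return await primary.embed(text);
  } catch (err) {
    if (fallbackSetting === 'none') throw err; // fallback: "none" fails immediately
    return fallbackAdapter.embed(text);        // fallback: "local" keeps going
  }
}

// Mock adapters: an unreachable Ollama endpoint and a working local provider.
const unreachableOllama = {
  embed: async () => { throw new Error('ECONNREFUSED 192.168.1.100:11434'); },
};
const localAdapter = {
  embed: async (text) => ({ provider: 'local', text }),
};

embedWithFallback('hello', unreachableOllama, localAdapter, 'local')
  .then((result) => console.log(result.provider)); // 'local'
```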
Privacy Implications of “auto” Workaround
```yaml
# WRONG: Using auto-select as a workaround
agents:
  defaults:
    memorySearch:
      provider: "auto"  # Warning: may send data to OpenAI!

# CORRECT: Explicitly specify the local provider
agents:
  defaults:
    memorySearch:
      provider: "ollama"  # Stays local
```

Misunderstanding: Plugin vs Builtin Distinction
Many users are unaware that providers fall into two categories:
- Builtin Providers: Hardcoded in the manager, always available
- Plugin Providers: Registered by plugins at runtime
This distinction is not documented, leading to confusion when CLI commands fail.
Version Compatibility
The bug affects version 2026.4.12. If upgrading from an earlier version, verify that the ollama plugin is compatible with the new manager bundle.
Related Errors
Directly Related Errors
- `Unknown memory embedding provider: lmstudio` → Same bug affects the lmstudio provider (also plugin-registered)
- `Unknown memory embedding provider: bedrock` → Same bug affects the bedrock provider (also plugin-registered)
- `Unknown memory embedding provider: [any plugin-registered provider]` → Generalization of this bug
Contextually Related Errors
- `Plugin initialization failed: ollama` → Occurs when the ollama plugin fails to load, which may mask this error in some scenarios
- `ECONNREFUSED` → Network error when the Ollama endpoint is unreachable; distinct from the provider registration error
- `Model not found: [model-name]` → The Ollama server does not have the requested model; a different error path
Historical Context
- 2026.3.x: Plugin system introduced for extensions, including ollama provider
- 2026.4.x: CLI memory commands extracted to separate entry point that bypasses plugin initialization
- 2026.4.12: Current version with the divergence bug (this issue)
Similar Patterns in Codebase
The same architectural pattern (builtin vs plugin registration) exists in:
- Memory retrieval providers
- LLM providers
- Tool adapters
These may exhibit similar CLI-vs-gateway inconsistencies.