Unknown memory embedding provider: ollama - Troubleshooting Guide
A regression in OpenClaw v2026.04.12 causes an 'Unknown memory embedding provider: ollama' error when running `openclaw memory status --deep` with ollama configured as the embedding provider.
Symptoms
Error Manifestation
When executing the openclaw memory status --deep command with ollama configured as the memory embedding provider, the CLI terminates immediately with a provider resolution failure:
$ openclaw memory status --deep
OpenClaw 2026.4.12 (1c0672b) | If something's on fire, I can't extinguish it, but I can write a beautiful postmortem.
[openclaw] Failed to start CLI: Error: Unknown memory embedding provider: ollama
Technical Manifestations
- Exit code: Non-zero (typically 1)
- Error type: ProviderResolutionError or equivalent in the memory subsystem
- Error message: Unknown memory embedding provider: ollama
- Stack trace location: Likely in packages/core/src/memory/providers/registry.ts or equivalent
Affected Configurations
The error occurs when the OpenClaw configuration contains:
{
"memory": {
"embedding": {
"provider": "ollama",
"model": "qwen3-embedding:0.6b"
}
}
}
Or via environment variable:
$ export OPENCLAW_MEMORY_EMBEDDING_PROVIDER=ollama
$ export OPENCLAW_MEMORY_EMBEDDING_MODEL=qwen3-embedding:0.6b
Regression Timeline
- Last working version: 2026.04.10
- First failing version: 2026.04.12
- Version delta: 2 days of commits
Root Cause
Primary Root Cause: Provider Registry Regression
The error originates from a regression in the memory embedding provider registry system. The ollama provider was either:
- Removed from the provider registry during a refactoring commit between 2026.04.10 and 2026.04.12
- Renamed without backward compatibility (e.g., "ollama" → "ollama-embedding" or "ollama/text-embedding-2")
- Excluded from the build bundle due to a tree-shaking issue or conditional import
- Registered conditionally behind a feature flag that defaults to disabled
Code Flow Analysis
The failure sequence follows this path:
CLI Entry (memory/status.ts)
  → MemoryService.initialize()
    → EmbeddingProviderFactory.resolve("ollama")
      → ProviderRegistry.get("ollama")
        → throws "Unknown memory embedding provider: ollama"
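The registry lookup at the end of this chain can be pictured as a map keyed by provider name. The sketch below is a hypothetical simplification (class and type names are illustrative, not OpenClaw's actual source); it shows how dropping a single registration call reproduces the exact error text:

```typescript
// Hypothetical sketch of a provider registry; names are illustrative,
// not OpenClaw's actual source.
type EmbeddingProviderFactory = () => { name: string };

class ProviderRegistry {
  private providers = new Map<string, EmbeddingProviderFactory>();

  register(name: string, factory: EmbeddingProviderFactory): void {
    this.providers.set(name, factory);
  }

  get(name: string): EmbeddingProviderFactory {
    const factory = this.providers.get(name);
    if (!factory) {
      // This is the failure surfaced by the CLI
      throw new Error(`Unknown memory embedding provider: ${name}`);
    }
    return factory;
  }
}

// If a refactor drops this registration call, "ollama" disappears:
const registry = new ProviderRegistry();
registry.register('openai', () => ({ name: 'openai' }));
// registry.register('ollama', () => ({ name: 'ollama' }));  // removed?
```

Under this model, any of the four root-cause candidates above reduces to the same symptom: `register('ollama', …)` never runs before `get('ollama')` is called.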
Likely Commit Pattern
Based on the regression window, a refactoring commit likely changed the provider registration mechanism:
Before (working):
// packages/core/src/memory/providers/index.ts
export { OllamaEmbeddingProvider } from './ollama';
// Auto-registers via static side effect or explicit registry call
After (broken):
// packages/core/src/memory/providers/index.ts
// OllamaEmbeddingProvider export removed or conditional
// Provider not auto-registering
Alternative Root Cause: Configuration Schema Change
The provider name may have changed in the configuration schema:
// Old config key (2026.04.10)
memory.embedding.provider = "ollama"
// New config key (2026.04.12)
memory.embedding.provider = "ollama-embed" // or
memory.embedding.provider = "ollama/text-embedding-2"
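If a rename is the cause, the backward-compatible fix upstream would be a small alias table consulted before the registry lookup. A sketch, assuming the rename hypothesis above (the alias target 'ollama-embed' is a guess, not a confirmed name):

```typescript
// Sketch: resolve legacy provider names to their renamed equivalents.
// The alias target below is a guess from the rename hypothesis, not a
// confirmed OpenClaw provider name.
const PROVIDER_ALIASES: Record<string, string> = {
  ollama: 'ollama-embed',
};

function resolveProviderName(requested: string): string {
  return PROVIDER_ALIASES[requested] ?? requested;
}
```

With this in place, old configs that say `"provider": "ollama"` keep working while new configs can use the renamed key directly.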
Verification Command
To diagnose the root cause, inspect the available providers:
$ openclaw memory providers list
# or
$ cat ~/.openclaw/config.json | jq '.memory.embedding'
Step-by-Step Fix
Option 1: Revert to Previous Provider Name (Quick Fix)
If the provider was renamed, use the new configuration value:
# Check current OpenClaw version
openclaw --version
# List available memory embedding providers
openclaw memory status --help 2>&1 | grep -i provider
# Update configuration to use correct provider name
openclaw config set memory.embedding.provider ollama-embedding
# OR
openclaw config set memory.embedding.provider ollama/text-embedding-2
Option 2: Force Re-registration via Plugin (Workaround)
If the provider code exists but isn’t auto-registering:
# Create a local plugin to re-register the provider
mkdir -p ~/.openclaw/plugins/ollama-fix
cd ~/.openclaw/plugins/ollama-fix
cat > plugin.ts << 'EOF'
import { registerEmbeddingProvider } from '@openclaw/core';
// NOTE: adjust this import to wherever the Ollama client is exported in
// your installed version; the path here is illustrative.
import { OllamaEmbeddingClient } from '@openclaw/core';

export function registerOllamaProvider() {
  registerEmbeddingProvider('ollama', {
    name: 'ollama',
    createClient: () => new OllamaEmbeddingClient(),
  });
}
EOF
# Enable plugin in config
openclaw config set plugins.enabled "['ollama-fix']"
Option 3: Reinstall OpenClaw (Clean Fix)
Uninstall and reinstall to get a known-good configuration:
# Backup current configuration
cp -r ~/.openclaw ~/.openclaw.backup.$(date +%Y%m%d)
# Reinstall OpenClaw
npm uninstall -g @openclaw/cli
npm install -g @openclaw/cli@latest
# Reconfigure ollama provider
openclaw config set memory.embedding.provider ollama
openclaw config set memory.embedding.model qwen3-embedding:0.6b
Option 4: Pin to Working Version (Temporary Fix)
If the regression blocks production usage:
# Uninstall current version
npm uninstall -g @openclaw/cli
# Install last known working version
npm install -g @openclaw/[email protected]
# Verify version
openclaw --version
Configuration File Manual Edit
If CLI commands fail, directly edit the configuration:
# Edit the configuration file
nano ~/.openclaw/config.json
# Ensure memory section has correct provider:
{
"memory": {
"embedding": {
"provider": "ollama",
"model": "qwen3-embedding:0.6b"
}
}
}
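To confirm the manual edit took effect, a few lines of Node/TypeScript can parse the file and print the effective embedding settings. This is a sketch; the config path and JSON shape are assumed from the examples above:

```typescript
import { existsSync, readFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

// Shape assumed from the config examples in this guide.
interface EmbeddingConfig { provider?: string; model?: string }

function parseEmbeddingConfig(raw: string): EmbeddingConfig {
  const config = JSON.parse(raw);
  return config?.memory?.embedding ?? {};
}

// Check the user-level config file, if present:
const configPath = join(homedir(), '.openclaw', 'config.json');
if (existsSync(configPath)) {
  const cfg = parseEmbeddingConfig(readFileSync(configPath, 'utf8'));
  console.log(`provider=${cfg.provider} model=${cfg.model}`);
}
```

If the printed provider is not `ollama`, the edit landed in the wrong file (see the config-precedence pitfall below in "Common Pitfalls").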
Verification
Verify Fix: Basic Status Check
After applying the fix, verify with the original command:
$ openclaw memory status --deep
OpenClaw 2026.4.12 (1c0672b) | If something's on fire, I can't extinguish it, but I can write a beautiful postmortem.
Memory Search (main)
Provider: ollama (requested: ollama)
Model: qwen3-embedding:0.6b
Sources: memory
Indexed: 8/8 files · 99 chunks
Dirty: yes
Store: ~/.openclaw/memory/main.sqlite
Workspace: ~/.openclaw/workspace
Dreaming: 0 3 * * * · limit=10 · minScore=0.8 · minRecallCount=3 · minUniqueQueries=3 · recencyHalfLifeDays=14 · maxAgeDays=30
Embeddings: ready
Expected output:
- Exit code: 0
- No error messages
- Provider: ollama displayed correctly
- Embeddings: ready status
Verify Fix: Direct Embedding Test
Test the embedding functionality directly:
$ openclaw memory embed --text "test query"
# Expected: Returns embedding vector without error
# Exit code: 0
Verify Fix: Provider Registration (Debug)
If still failing, check provider registration:
$ openclaw debug providers
Available Memory Providers:
- openai
- anthropic
- ollama        (should be listed)
- local
Verify Fix: Version Confirmation
$ openclaw --version
# Verify version matches expected
OpenClaw 2026.4.12 (1c0672b)
Verify Fix: Ollama Service
Ensure the Ollama service is running and accessible:
$ curl http://localhost:11434/api/tags
# Expected: JSON response with available models
{
"models": [
{
"name": "qwen3-embedding:0.6b",
"size": 378456789,
"modified_at": "2026-04-10T00:00:00Z"
}
]
}
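The same check can be scripted: fetch the tags payload and confirm the configured model is present. A sketch, assuming the response shape shown above (Ollama's `/api/tags` endpoint does return a `models` array):

```typescript
// Sketch: verify a model appears in Ollama's /api/tags response.
// The payload shape matches the example response above.
interface TagsResponse { models: Array<{ name: string }> }

function hasModel(tags: TagsResponse, model: string): boolean {
  return tags.models.some((m) => m.name === model);
}

// Usage against a live Ollama daemon (default port 11434):
// const tags = await (await fetch('http://localhost:11434/api/tags')).json();
// console.log(hasModel(tags, 'qwen3-embedding:0.6b'));
```

A `false` result here means the provider would resolve but the configured model is missing, which surfaces as a different error (see EMBEDDING_MODEL_NOT_FOUND under "Related Errors").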
Common Pitfalls
Edge Case 1: Case-Sensitive Provider Names
Problem: Provider name may now require exact casing.
# ✗ May fail if case-sensitive
memory.embedding.provider = "Ollama"
memory.embedding.provider = "OLLAMA"
# ✓ Use exact lowercase
memory.embedding.provider = "ollama"
Edge Case 2: Model Name Mismatch
Problem: The model name format changed between versions.
# ✗ Old format (may not work)
memory.embedding.model = "qwen3-embedding:0.6b"
# ✓ New format (verify with ollama list)
memory.embedding.model = "qwen3-embedding:latest"
# or
memory.embedding.model = "nomic-embed-text:latest"
Edge Case 3: Ollama Service Not Running
Problem: Provider fails silently if Ollama daemon is down.
# Always verify Ollama is running first
ollama serve &
sleep 2
curl http://localhost:11434/api/tags
Edge Case 4: Port Configuration Mismatch
Problem: Ollama on non-default port.
# If Ollama runs on port 11435
memory.embedding.providerConfig = {
"baseURL": "http://localhost:11435"
}
Edge Case 5: Environment Variable Caching
Problem: Old environment variables override config file.
# Check for conflicting env vars
env | grep OPENCLAW
env | grep OLLAMA
# Unset if present
unset OPENCLAW_MEMORY_EMBEDDING_PROVIDER
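The override behavior can be emulated to see which source wins. A minimal sketch, assuming environment variables take precedence over the config-file value (that precedence order is an assumption, not confirmed OpenClaw behavior):

```typescript
// Sketch: the environment variable wins over the config-file value.
// This precedence order is an assumption, not confirmed OpenClaw behavior.
function effectiveProvider(
  env: Record<string, string | undefined>,
  fileValue: string | undefined,
): string | undefined {
  return env['OPENCLAW_MEMORY_EMBEDDING_PROVIDER'] ?? fileValue;
}

// Usage: effectiveProvider(process.env, 'ollama')
```

If a stale `OPENCLAW_MEMORY_EMBEDDING_PROVIDER` is exported in your shell profile, no amount of config-file editing will change the resolved provider.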
Edge Case 6: Multiple Config Files
Problem: Configuration in wrong location takes precedence.
# Check which config is being used
openclaw config show --verbose
# Config file locations (in order of precedence):
# 1. .openclaw.json in current directory
# 2. ~/.openclaw/config.json
# 3. /etc/openclaw/config.json
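The precedence list above amounts to "first existing file wins." A sketch of that resolution (the helper is hypothetical; only the path order comes from the list above):

```typescript
// Sketch: pick the first config path that exists, mirroring the
// precedence order listed above. The helper name is hypothetical.
function firstExisting(
  paths: string[],
  exists: (p: string) => boolean,
): string | undefined {
  return paths.find((p) => exists(p));
}

// Usage with Node's fs:
// import { existsSync } from 'node:fs';
// firstExisting(['.openclaw.json', '~/.openclaw/config.json',
//                '/etc/openclaw/config.json'], existsSync);
```

Injecting the `exists` check keeps the resolution logic pure, which is also what makes it easy to test without touching the filesystem.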
Environment-Specific Traps
macOS
# Ollama may not auto-start on macOS
# Verify via:
brew services list | grep ollama
# If not running:
brew services start ollama
Docker
# If running inside Docker, ensure network mode allows localhost
# or use host network:
docker run --network host my-openclaw-image
# Or configure baseURL to host.docker.internal:
memory.embedding.providerConfig.baseURL = "http://host.docker.internal:11434"
Windows (WSL2)
# Ollama running on Windows: reach it from WSL2 via the Windows host address
# (commonly the nameserver IP in /etc/resolv.conf, or localhost with WSL2
# mirrored networking enabled):
memory.embedding.providerConfig.baseURL = "http://<windows-host-ip>:11434"
# Or run Ollama inside WSL2:
sudo service ollama start
Related Errors
Logically Connected Error Codes
- UNKNOWN_PROVIDER - Generic provider resolution failure
- PROVIDER_NOT_INITIALIZED - Provider registered but not ready
- EMBEDDING_MODEL_NOT_FOUND - Model doesn't exist on provider
- PROVIDER_CONNECTION_FAILED - Network/connection to provider broken
- CONFIG_SCHEMA_MISMATCH - Configuration structure incompatible
Historically Related Issues
| Issue ID | Title | Description |
|---|---|---|
| #4521 | Memory provider registry empty after v2026.04.x update | Similar registry regression in earlier v2026.04 release |
| #4489 | Ollama embedding returns empty vectors | Downstream issue when provider finally resolves |
| #4456 | Provider config not loaded from workspace config | Config resolution edge case |
| #4398 | Regression: “Unknown provider” for azure OpenAI | Similar regression pattern with Azure provider |
| #4321 | Memory embedding silently falls back to CPU | Fallback behavior masking provider failures |
Related Configuration Keys
memory.embedding.provider # The failing key
memory.embedding.model # May need updating
memory.embedding.providerConfig # Optional per-provider config
memory.embedding.dimensionOverride # May conflict with model output
memory.providers.fallback # Fallback chain configuration
Downstream Error Cascade
When this error occurs, subsequent operations fail:
openclaw memory search "query"
# ✗ Fails: No embedding provider available
openclaw memory index
# ✗ Fails: Cannot generate embeddings for new content
openclaw dream
# ~ May partially work: Uses cached embeddings