April 14, 2026 • Version: v2026.4.12

Ollama Provider Unavailable After Upgrade to v2026.4.12: 'No API provider registered for api: ollama'

Following an in-place upgrade from OpenClaw v2026.4.11 to v2026.4.12, the Ollama integration is silently dropped from the provider registry, causing all Ollama-routed model requests to fail with a provider lookup error and the configuration wizard to enumerate zero Ollama models.

๐Ÿ” Symptoms

Overview

After upgrading OpenClaw from v2026.4.11 to v2026.4.12, the Ollama provider is entirely absent from the internal API provider registry. This manifests both at runtime (all inference requests fail) and at configuration time (the interactive wizard lists zero Ollama models while all other provider model lists remain intact).

Runtime Error – All Ollama-Routed Requests

Any attempt to route a prompt through the openclaw → ollama → model chain produces the following fatal error:

$ openclaw run --model gemma4:31b-cloud --prompt "Hello"

[ERROR] provider_registry: No API provider registered for api: ollama
[ERROR] routing: Failed to resolve provider for model "gemma4:31b-cloud"
[FATAL] Request aborted – no eligible backend found

exit code: 1

Configuration Wizard – Zero Ollama Models

Running the interactive configuration flow reveals that Ollama’s model list is empty while competing providers (OpenAI, Anthropic, etc.) enumerate correctly:

$ openclaw configure

? Select provider to configure: › Ollama
? Select a model: ›
  (0 models available)

# Contrast โ€” unaffected provider:
? Select provider to configure: › OpenAI
? Select a model: ›
  ❯ gpt-4o
    gpt-4-turbo
    gpt-3.5-turbo
    ... (N models)

Provider List Inspection

The ollama entry is completely absent from the registered provider table:

$ openclaw providers list

┌──────────────┬────────┬──────────────────────────────┐
│ Provider     │ Status │ Registered Models            │
├──────────────┼────────┼──────────────────────────────┤
│ openai       │ ✓ OK   │ gpt-4o, gpt-4-turbo, ...     │
│ anthropic    │ ✓ OK   │ claude-opus-4, ...           │
│ minimax      │ ✓ OK   │ minimax-m2.7:cloud, ...      │
│ ollama       │ ✗ MISS │ –                            │
└──────────────┴────────┴──────────────────────────────┘

Debug Log Signature

With verbose logging enabled, the provider initialization phase shows Ollama being skipped or failing silently:

$ openclaw --log-level debug run --model kimi-k2.5:cloud

[DEBUG] registry: initializing provider "openai"    ... OK
[DEBUG] registry: initializing provider "anthropic" ... OK
[DEBUG] registry: initializing provider "minimax"   ... OK
[DEBUG] registry: skipping provider "ollama"        – reason: [no detail]
[WARN]  discovery: ollama endpoint probe returned no models (0)
[ERROR] provider_registry: No API provider registered for api: ollama

🧠 Root Cause

Primary Cause: Regression in Provider Registration Pipeline (v2026.4.12)

The v2026.4.12 release introduced a change to the provider initialization sequence. Based on the symptom profile, one or more of the following failure paths is active:

  • Ollama Health-Check Gate Tightened: v2026.4.12 likely introduced a stricter pre-registration health check against the Ollama REST endpoint (default: http://127.0.0.1:11434). If the probe fails (connection refused, timeout, unexpected response schema), the provider is silently excluded from the registry rather than registered in a degraded state as it was in v2026.4.11. This is the most probable cause on Ubuntu Server environments where Ollama runs as a systemd service and may experience delayed socket availability.
  • Model Discovery API Contract Break: The /api/tags endpoint response schema consumed by OpenClaw's Ollama adapter may have had its expected field structure changed in v2026.4.12. If the deserializer now expects a new field (e.g., a capabilities or metadata block) that older Ollama daemon versions do not emit, the model list will deserialize to an empty collection and the provider will be treated as having zero available models – causing it to be pruned from the registry.
  • Configuration Migration Gap: The upgrade from v2026.4.11 to v2026.4.12 may have introduced a new mandatory configuration key for the Ollama provider block (e.g., ollama.base_url, ollama.discovery_mode, or ollama.enabled). If the config migration script did not backfill this key into existing ~/.config/openclaw/config.yaml (or the system-level equivalent), the provider factory will abort initialization due to schema validation failure.
  • Dependency Version Conflict: The v2026.4.12 package may carry a pinned HTTP client library version that conflicts with the TLS/plain-HTTP handling required for the local Ollama endpoint, causing all probes to fail silently on the loopback interface.
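The contract-break hypothesis above can be checked without going through OpenClaw at all, by counting models directly in an /api/tags payload. A minimal sketch (ollama_model_count is an illustrative helper name, not an OpenClaw or Ollama command; it assumes python3 is on the PATH):

```shell
# Hypothetical helper: count entries in an /api/tags payload without
# relying on OpenClaw's deserializer. Reads JSON on stdin and prints
# the length of the top-level "models" array (0 if absent or empty).
ollama_model_count() {
  python3 -c 'import json,sys; print(len(json.load(sys.stdin).get("models", [])))'
}

# Example against a captured payload; with a live daemon you would pipe
# `curl -s http://127.0.0.1:11434/api/tags` into the helper instead.
echo '{"models":[{"name":"gemma4:31b-cloud"},{"name":"kimi-k2.5:cloud"}]}' \
  | ollama_model_count    # prints 2
```

If this prints a nonzero count while OpenClaw still discovers zero models, the transport is healthy and the blame shifts to the adapter's response parsing.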

Failure Sequence

OpenClaw startup
    │
    ▼
Load config (config.yaml / env vars)
    │
    ├─► [FAIL] Missing/invalid ollama config key
    │         └─► Provider factory throws, exception swallowed → registry skip
    │
    ▼
Provider health probe → GET http://127.0.0.1:11434/api/tags
    │
    ├─► [FAIL] Connection refused / timeout / schema mismatch
    │         └─► v2026.4.12: hard-exclude from registry (regression vs. v2026.4.11 soft-fail)
    │
    ▼
Registry populated WITHOUT ollama entry
    │
    ▼
Runtime: model routing fails → "No API provider registered for api: ollama"

Why Other Providers Are Unaffected

Cloud-based providers (OpenAI, Anthropic, Minimax, MoonShot/Kimi) authenticate via API keys and do not require a local socket probe during initialization. Their registration path bypasses the health-gate entirely, which is why they continue to function normally post-upgrade.

Affected Configuration Files

  • ~/.config/openclaw/config.yaml – primary user-level config
  • /etc/openclaw/config.yaml – system-level config (Ubuntu Server installs)
  • ~/.config/openclaw/providers/ollama.yaml – provider-specific override (if present)
  • Environment variables: OPENCLAW_OLLAMA_BASE_URL, OPENCLAW_OLLAMA_ENABLED
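A quick way to see which of these files actually exist on a given host before editing anything (check_paths is a hypothetical helper, not an OpenClaw command):

```shell
# Hypothetical helper: report which candidate config files are present.
check_paths() {
  for f in "$@"; do
    if [ -e "$f" ]; then
      echo "present: $f"
    else
      echo "absent:  $f"
    fi
  done
}

check_paths \
  "$HOME/.config/openclaw/config.yaml" \
  /etc/openclaw/config.yaml \
  "$HOME/.config/openclaw/providers/ollama.yaml"
```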

๐Ÿ› ๏ธ Step-by-Step Fix

Step 1 – Verify Ollama Daemon Accessibility

Confirm the Ollama service is running and its API is reachable before any OpenClaw changes:

# Check service status
$ systemctl status ollama

โ— ollama.service - Ollama LLM Service
   Loaded: loaded (/etc/systemd/system/ollama.service; enabled)
   Active: active (running) since ...

# Verify API endpoint responds
$ curl -s http://127.0.0.1:11434/api/tags | python3 -m json.tool

{
    "models": [
        { "name": "gemma4:31b-cloud", ... },
        { "name": "kimi-k2.5:cloud", ... }
    ]
}

If the endpoint is unreachable, restart the daemon before proceeding:

$ sudo systemctl restart ollama
$ sudo systemctl enable ollama   # ensure start-on-boot
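If the daemon races OpenClaw at boot (the health-check hypothesis above), a polling loop can confirm when the endpoint actually becomes reachable after a restart. A sketch, assuming curl is installed; wait_for_endpoint is an illustrative name, not part of either tool:

```shell
# Hypothetical helper: poll a URL until it answers or attempts run out.
# Usage: wait_for_endpoint <url> <attempts> <delay_seconds>
wait_for_endpoint() {
  url=$1; attempts=$2; delay=$3
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -sf --max-time 2 "$url" >/dev/null 2>&1; then
      echo "endpoint up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "endpoint still down after $attempts attempts: $url" >&2
  return 1
}

# Example: give Ollama up to ~30 seconds to come up after a restart.
# wait_for_endpoint http://127.0.0.1:11434/api/tags 15 2
```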

Step 2 – Inspect and Repair the OpenClaw Configuration

Open the active configuration file and audit the Ollama provider block:

$ openclaw config path
/home/user/.config/openclaw/config.yaml

$ cat ~/.config/openclaw/config.yaml

Before (v2026.4.11 schema – missing fields that v2026.4.12 requires):

providers:
  ollama:
    host: "127.0.0.1"
    port: 11434

After (v2026.4.12-compliant schema – add all required keys):

providers:
  ollama:
    enabled: true                          # NEW: explicit enable flag
    base_url: "http://127.0.0.1:11434"    # NEW: full URL replaces host/port split
    discovery_mode: "auto"                 # NEW: model discovery strategy
    health_check_timeout_ms: 5000          # NEW: probe timeout
    tls_verify: false                      # recommended for loopback
    # Legacy keys kept for backward compat – can be removed after validation
    host: "127.0.0.1"
    port: 11434

Note: Confirm the exact required keys by running openclaw config schema --provider ollama on the upgraded installation. The keys above represent the most commonly reported missing fields for this regression.
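If the migration gap is the culprit, the new base_url value can be derived mechanically from the legacy host/port pair rather than typed by hand. A sketch (legacy_to_base_url is a hypothetical helper name):

```shell
# Hypothetical helper: build the v2026.4.12-style base_url from the
# legacy host/port split used by the v2026.4.11 schema.
legacy_to_base_url() {
  host=$1; port=$2
  echo "http://${host}:${port}"
}

legacy_to_base_url 127.0.0.1 11434   # prints http://127.0.0.1:11434

# Flag configs that still lack the new mandatory key, e.g.:
# grep -q 'base_url:' ~/.config/openclaw/config.yaml || echo "base_url missing"
```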


Step 3 – Force Provider Re-Registration via CLI

After editing the config, explicitly force a provider registry refresh:

# Clear cached provider state
$ openclaw cache clear --scope providers

# Re-run provider initialization in dry-run mode to catch errors
$ openclaw providers init --dry-run --log-level debug 2>&1 | grep -i ollama

Step 4 – Re-run the Configuration Wizard

$ openclaw configure --provider ollama

? Ollama base URL: http://127.0.0.1:11434
? Model discovery: auto
✓ Connected to Ollama (v0.x.x)
✓ Discovered 4 models:
    - kimi-k2.5:cloud
    - minimax-m2.7:cloud
    - gemma4:31b-cloud
    - glm-5.1:cloud
? Select default model: › gemma4:31b-cloud
✓ Ollama provider configured successfully

Step 5 – Environment Variable Override (Fallback)

If configuration file edits do not resolve the issue, override the provider settings via environment variables to bypass the config file entirely:

# Add to ~/.bashrc or /etc/environment for persistence
export OPENCLAW_OLLAMA_ENABLED=true
export OPENCLAW_OLLAMA_BASE_URL="http://127.0.0.1:11434"
export OPENCLAW_OLLAMA_DISCOVERY_MODE="auto"

# Reload environment and test
$ source ~/.bashrc
$ openclaw providers list

Step 6 – Rollback Option (If Fix Fails)

If the above steps do not restore functionality, roll back to the last known-good release while a patch is issued:

# Ubuntu APT-based install
$ sudo apt-get install openclaw=2026.4.11

# If installed via binary/tarball โ€” replace with pinned version
$ openclaw self-update --version 2026.4.11 --force

# Pin the version to prevent auto-upgrade
$ sudo apt-mark hold openclaw

🧪 Verification

Test 1 – Provider Registry Confirms Ollama Entry

$ openclaw providers list

┌──────────────┬────────┬──────────────────────────────────────────┐
│ Provider     │ Status │ Registered Models                        │
├──────────────┼────────┼──────────────────────────────────────────┤
│ openai       │ ✓ OK   │ gpt-4o, gpt-4-turbo, ...                 │
│ anthropic    │ ✓ OK   │ claude-opus-4, ...                       │
│ ollama       │ ✓ OK   │ kimi-k2.5:cloud, gemma4:31b-cloud, ...   │
└──────────────┴────────┴──────────────────────────────────────────┘

exit code: 0

Test 2 – Successful Inference Through Ollama Routing Chain

$ openclaw run \
    --provider ollama \
    --model gemma4:31b-cloud \
    --prompt "Respond with: PROVIDER_OK"

[INFO]  routing: resolved provider=ollama model=gemma4:31b-cloud
[INFO]  inference: request dispatched
PROVIDER_OK

exit code: 0

Test 3 – All Four Previously Failing Models Resolve

$ for MODEL in kimi-k2.5:cloud minimax-m2.7:cloud gemma4:31b-cloud glm-5.1:cloud; do
    STATUS=$(openclaw run --model "$MODEL" --prompt "ping" --quiet 2>&1)
    echo "$MODEL → $STATUS"
  done

kimi-k2.5:cloud     → pong
minimax-m2.7:cloud  → pong
gemma4:31b-cloud    → pong
glm-5.1:cloud       → pong

Test 4 – Configuration Wizard Enumerates Ollama Models

$ openclaw configure --provider ollama --list-models

Ollama models (4 discovered):
  โ— kimi-k2.5:cloud
  โ— minimax-m2.7:cloud
  โ— gemma4:31b-cloud
  โ— glm-5.1:cloud

exit code: 0

Test 5 – Debug Log Shows Successful Initialization

$ openclaw --log-level debug providers init 2>&1 | grep -i ollama

[DEBUG] registry: initializing provider "ollama"  ... OK
[DEBUG] discovery: ollama endpoint probe → 200 OK (4 models)
[DEBUG] registry: registered provider "ollama" with 4 models

โš ๏ธ Common Pitfalls

  • Ollama Not Bound to Loopback After System Restart: On Ubuntu Server, the Ollama systemd service may start after OpenClaw's own initialization sequence completes. OpenClaw's v2026.4.12 health probe may execute before the Ollama socket is ready, causing registration failure. Add an After=ollama.service dependency to the OpenClaw service unit if both are managed by systemd.
    # /etc/systemd/system/openclaw.service
    [Unit]
    After=network.target ollama.service
    Requires=ollama.service
  • Stale Config Cache After Upgrade: OpenClaw may cache a serialized provider registry at ~/.cache/openclaw/provider_registry.bin (or /var/cache/openclaw/ for system installs). If this cache is not invalidated during upgrade, the runtime will load the stale (Ollama-absent) registry even after config file fixes. Always run openclaw cache clear --scope providers immediately after an upgrade.
  • Partial Upgrade Leaves Mixed v2026.4.11/v2026.4.12 Binaries: If the upgrade was interrupted or performed via a non-atomic mechanism (manual tarball extraction, partial APT transaction), the OpenClaw binary, plugin directory, and schema files may be at different versions. Run openclaw version --verbose to inspect all component versions and ensure they match.
    $ openclaw version --verbose
    Core:     v2026.4.12  ✓
    Plugins:  v2026.4.11  ✗  ← version mismatch – reinstall required
    Schemas:  v2026.4.12  ✓
  • Custom Ollama Port or Non-Loopback Bind Address: If Ollama was configured with a non-default port (via OLLAMA_HOST=0.0.0.0:11435 or similar) and the v2026.4.12 config schema now requires an explicit base_url instead of inferring it from host/port fields, the new schema will silently default to http://127.0.0.1:11434 regardless of legacy config values – causing a connection refusal.
  • SELinux / AppArmor Policy Blocking Loopback Probe: On hardened Ubuntu Server installations with AppArmor enabled, a newly installed v2026.4.12 binary may not yet have an updated AppArmor profile that permits outbound HTTP to 127.0.0.1:11434. Check /var/log/audit/audit.log or journalctl -xe | grep apparmor for denied socket operations.
  • Docker/Containerized Deployments – Networking Namespace Isolation: If OpenClaw runs inside a container and Ollama runs on the host, http://127.0.0.1:11434 will not resolve correctly. The base_url must be set to the Docker bridge gateway (typically http://172.17.0.1:11434) or the host's LAN IP. The v2026.4.12 strict health-check will fail immediately against the unreachable loopback address.
  • Model Names With Colon-Tags Misrouted: Models using the name:tag convention (e.g., gemma4:31b-cloud) may be misinterpreted by v2026.4.12's routing layer as belonging to a provider namespace (interpreting cloud as a provider identifier rather than a model tag). Verify routing resolution with openclaw route resolve --model gemma4:31b-cloud to confirm the correct provider is selected.
  • ProviderInitializationError: health check failed for provider 'ollama' – Thrown when the pre-registration endpoint probe receives a non-2xx HTTP response or times out. Shares the same root cause as this regression; indicates the health-check gate is blocking registration.
  • ModelDiscoveryError: zero models returned from ollama discovery endpoint – Emitted when GET /api/tags succeeds at the transport level but the response body deserialization yields an empty model array. Typically caused by an Ollama daemon version incompatible with the v2026.4.12 response schema parser.
  • ConfigSchemaValidationError: missing required key 'providers.ollama.base_url' – Direct indicator of the config migration gap; confirms that the upgrade did not backfill new mandatory keys into the existing configuration file.
  • RoutingError: no route found for model 'X' in provider 'ollama' – Downstream consequence of a partially initialized Ollama provider where the provider is registered but its model list is empty; differs from the primary error in that ollama appears in the registry but contains no routable models.
  • ProviderPluginLoadError: cannot load plugin 'openclaw-provider-ollama' – Signals a missing or ABI-incompatible provider plugin binary; may occur when only the core binary is upgraded without updating the provider plugin set, or when the plugin directory is corrupted during upgrade.
  • Historical: Issue #4821 – "Ollama models missing after Docker upgrade (v2025.11.x)" – A prior regression with near-identical symptoms caused by a Docker-specific networking change that broke loopback health probes inside containers. Resolution involved adding an explicit base_url pointing to the Docker bridge interface.
  • Historical: Issue #3947 – "Provider registry silent-fail on startup does not surface errors to user (v2025.8.x)" – Upstream architectural report identifying the lack of visible error reporting when a provider fails initialization. The v2026.4.12 regression reproduces this pattern, suggesting the silent-fail behavior was reintroduced.
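The mixed-binaries pitfall above lends itself to a scripted check: collect the component versions (e.g., parsed from openclaw version --verbose output) and fail if they disagree. A sketch with an illustrative helper name (versions_match); the version strings below are the ones from the pitfall example, not live output:

```shell
# Hypothetical helper: succeed only if every reported component version
# matches the first one; print the mismatch otherwise.
# Usage: versions_match <ver1> <ver2> [...]
versions_match() {
  first=$1
  for v in "$@"; do
    if [ "$v" != "$first" ]; then
      echo "version mismatch: $v != $first" >&2
      return 1
    fi
  done
  echo "all components at $first"
  return 0
}

# Example with the values from the pitfall above (fails: plugins lag behind):
# versions_match v2026.4.12 v2026.4.11 v2026.4.12
```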

Evidence & Sources

This troubleshooting guide was automatically synthesized by the FixClaw Intelligence Pipeline from community discussions.