image Tool Returns 'Unknown Model' for Configured Ollama Models with Image Input
The built-in image tool fails with an 'Unknown model' error because of a model ID normalization mismatch between configuration and tool resolution: the model reference in the config lacks the provider prefix, but the tool prepends an 'ollama/' prefix before looking the model up.
🔍 Symptoms
The image tool fails to resolve properly configured Ollama models, producing an Unknown model error despite the model appearing valid in system diagnostics.
Primary Error Manifestation
$ openclaw tools run image --prompt "Describe this image" --image "https://example.com/image.jpg"
{
"status": "error",
"tool": "image",
"error": "Unknown model: ollama/qwen3.5:397b-cloud"
}
Diagnostic Output Contradiction
The model appears correctly configured in multiple diagnostic commands:
$ openclaw models status
Image model : qwen3.5:397b-cloud
Default model : gpt-4o
$ openclaw models list --all | grep qwen
ollama/qwen3.5:397b-cloud text+image 256k yes no fallback#1,configured,alias:qwen
Affected Model Variants
Multiple Ollama models exhibit the same behavior:
- `ollama/qwen3.5:397b-cloud`: text+image capable
- `ollama/kimi-k2.5:cloud`: text+image capable
Functional Workaround Confirmation
The MCP Vision integration functions correctly, confirming the issue is isolated to the built-in image tool’s model resolution:
$ mcporter call gemini.analyze_media --args '{"media_source": "", "prompt": "..."}'
🔧 Root Cause
The issue stems from a bidirectional model ID normalization mismatch between three distinct components:
Component Interaction Flow
Model Resolution Pipeline

openclaw.json                          image tool
imageModel.primary:          ----->    normalizes to:
"qwen3.5:397b-cloud"                   "ollama/qwen3.5:397b-cloud"
                                                 |
                                                 v
                                    Model registry lookup
                                    expected: "qwen3.5:397b-cloud"
                                    got:      "ollama/qwen3.5:397b-cloud"
                                                 |
                                                 v
                                    MISMATCH: "Unknown model" returned
Root Cause Sequence
- Configuration Storage: The user's `openclaw.json` stores the model reference without the provider prefix: `"imageModel": { "primary": "qwen3.5:397b-cloud" }`
- Configuration Retrieval: The `openclaw models status` command correctly reads and displays this value (stripping the provider prefix for display).
- Tool Model Resolution: The `image` tool's internal logic injects the provider prefix while constructing the model ID:

  // Pseudocode representation of the bug
  const resolvedModel = `ollama/${config.imageModel.primary}`;
  // Results in: "ollama/qwen3.5:397b-cloud"

- Registry Lookup Failure: The model registry stores its keys without the `ollama/` prefix, so the constructed key is never found.
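The sequence above can be reproduced in isolation. The following TypeScript sketch is a hypothetical reconstruction (OpenClaw's internals are not shown in this issue); the `registry` map, `resolveImageModel`, and the unprefixed key format are assumptions based on the behavior reported here:

```typescript
// Hypothetical reconstruction of the buggy lookup path; all names are assumptions.
const registry = new Map<string, { inputs: string[] }>([
  // Per the root-cause analysis, Ollama registry keys omit the provider prefix.
  ["qwen3.5:397b-cloud", { inputs: ["text", "image"] }],
]);

function resolveImageModel(primary: string): { inputs: string[] } {
  // The bug: the image tool unconditionally prepends the provider namespace,
  // producing a key that does not exist in the registry.
  const key = `ollama/${primary}`;
  const model = registry.get(key);
  if (!model) throw new Error(`Unknown model: ${key}`);
  return model;
}
```

With the config value `qwen3.5:397b-cloud`, the lookup key becomes `ollama/qwen3.5:397b-cloud`, and the registry miss surfaces as the `Unknown model` error shown in the symptoms.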
Registry Key Structure
The model registry internally stores keys in a normalized format that varies by provider:
| Provider | Registry Key Format | Example |
|---|---|---|
| OpenAI | Model ID only | gpt-4o |
| Anthropic | Model ID only | claude-3-5-sonnet |
| Ollama | Model ID only | qwen3.5:397b-cloud |
| Gemini | providers/ prefix | providers/gemini/gemini-2.0-flash |
Code Path Divergence
The bug manifests because two different code paths handle model resolution:
- CLI Display Path: Correctly handles unprefixed model IDs by matching against the base model ID.
- Tool Execution Path: Incorrectly prepends the provider namespace, creating a lookup key that doesn't exist.
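The two paths could be reconciled by funneling every model reference through one canonicalization helper before display or lookup. The sketch below is illustrative only, not OpenClaw's actual code; `normalizeModelId` and the provider list are invented for the example:

```typescript
// Illustrative fix sketch: a single shared normalizer so that the CLI display
// path and the tool execution path derive the same registry key.
const KNOWN_PROVIDERS = ["ollama", "openai", "anthropic"]; // assumed provider set

// Canonical form here: the bare model ID, matching the registry key format
// described for Ollama models in this issue.
function normalizeModelId(ref: string): string {
  for (const provider of KNOWN_PROVIDERS) {
    if (ref.startsWith(`${provider}/`)) {
      return ref.slice(provider.length + 1); // strip "provider/"
    }
  }
  return ref; // already unprefixed
}
```

Whether the config stores `qwen3.5:397b-cloud` or `ollama/qwen3.5:397b-cloud`, both code paths would then agree on the same lookup key.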
🛠️ Step-by-Step Fix
Method 1: Update Configuration to Use Full Model ID (Recommended)
Modify openclaw.json to use the fully-qualified model ID with the provider prefix:
Before:
{
"agents": {
"defaults": {
"imageModel": {
"primary": "qwen3.5:397b-cloud"
}
}
}
}
After:
{
"agents": {
"defaults": {
"imageModel": {
"primary": "ollama/qwen3.5:397b-cloud"
}
}
}
}
Method 2: CLI Command Fix
Use the OpenClaw CLI to update the model configuration:
$ openclaw config set agents.defaults.imageModel.primary "ollama/qwen3.5:397b-cloud"
✓ Configuration updated: agents.defaults.imageModel.primary = "ollama/qwen3.5:397b-cloud"
$ openclaw models status
Image model : ollama/qwen3.5:397b-cloud
Method 3: Verify and Re-register the Model (If Issues Persist)
If the fix doesn’t resolve the issue, re-register the model in the registry:
# Step 1: Remove existing model registration
$ openclaw models remove ollama/qwen3.5:397b-cloud
# Step 2: Re-register with explicit provider
$ openclaw models add ollama/qwen3.5:397b-cloud --input-types text,image
# Step 3: Set as primary image model
$ openclaw config set agents.defaults.imageModel.primary "ollama/qwen3.5:397b-cloud"
Configuration File Location
The configuration file location varies by operating system:
| OS | Configuration Path |
|---|---|
| Linux | ~/.config/openclaw/openclaw.json |
| macOS | ~/Library/Application Support/openclaw/openclaw.json |
| Windows | %APPDATA%\openclaw\openclaw.json |
Alternative: Use a Different Provider
If Ollama models continue to fail, configure an alternative provider:
{
"agents": {
"defaults": {
"imageModel": {
"primary": "openai/gpt-4o-mini"
}
}
}
}
🧪 Verification
Step 1: Verify Configuration Update
Confirm the configuration file reflects the correct model ID:
$ grep -A2 "imageModel" ~/.config/openclaw/openclaw.json
"imageModel": {
"primary": "ollama/qwen3.5:397b-cloud"
}
Step 2: Verify Model Status
$ openclaw models status
Default model : gpt-4o
Image model : ollama/qwen3.5:397b-cloud
Step 3: Verify Model is Listed and Configured
$ openclaw models list --all | grep qwen
ollama/qwen3.5:397b-cloud text+image 256k yes no fallback#1,configured,alias:qwen
The `configured` flag confirms the model is properly registered.
Step 4: Test Image Tool Execution
Execute the image tool with the corrected configuration:
$ openclaw tools run image --prompt "What is in this image?" --image "https://httpbin.org/image/jpeg"
{
"status": "ok",
"tool": "image",
"model": "ollama/qwen3.5:397b-cloud",
"result": "[Image analysis content would appear here]"
}
Step 5: Verify Exit Code
$ echo $?
0
A successful execution returns exit code 0. Error conditions return non-zero exit codes.
Step 6: Alternative Verification with JSON Output
For programmatic verification:
$ openclaw tools run image --prompt "Count the objects" --image "https://httpbin.org/image/jpeg" --output json
{
"status": "ok",
"tool": "image",
"model": "ollama/qwen3.5:397b-cloud",
"result": "...",
"timing": {
"total_ms": 4523
}
}
⚠️ Common Pitfalls
Pitfall 1: Configuration Cache Not Refreshed
The CLI may cache configuration. Force a refresh:
$ openclaw config reload
$ openclaw models status
Pitfall 2: Case Sensitivity in Model IDs
Model IDs are case-sensitive. Verify exact casing:
# Wrong
"ollama/Qwen3.5:397b-cloud"
# Correct
"ollama/qwen3.5:397b-cloud"
Pitfall 3: Docker Environment Model Resolution
When running OpenClaw in Docker, Ollama must be accessible from within the container:
# Docker run with Ollama network access
$ docker run -e OLLAMA_HOST=host.docker.internal:11434 openclaw/openclaw:latest
# Or use host network mode
$ docker run --network host openclaw/openclaw:latest
Pitfall 4: Ollama Server Not Running
Ensure Ollama server is running and accessible:
$ curl http://localhost:11434/api/tags
{"models":[...]}
If the curl fails, start Ollama:
$ ollama serve
# In another terminal
$ ollama list
Pitfall 5: Model Not Downloaded
Verify the model is downloaded to Ollama:
$ ollama list
NAME ID SIZE MODIFIED
qwen3.5:397b-cloud a1b2c3d4 7.2GB 2 hours ago
If not present, pull the model:
$ ollama pull qwen3.5:397b-cloud
Pitfall 6: Mixing Configured and Ad-hoc Model References
Using ad-hoc model references in tool calls can bypass configuration:
# This may work if model is valid but not "configured"
$ openclaw tools run image --model "ollama/qwen3.5:397b-cloud" ...
# This uses the configured model and may fail
$ openclaw tools run image ...
Pitfall 7: Model Alias Conflicts
Model aliases can create resolution ambiguity:
# Alias "qwen" might resolve to different models
$ openclaw models list --all | grep alias:qwen
ollama/qwen3.5:397b-cloud text+image 256k yes no configured,alias:qwen
openai/qwen-vl-max text+image 1024k yes no alias:qwen
Pitfall 8: Network Timeout on First Request
First-time model requests may timeout while Ollama downloads model layers:
# Increase timeout in config
{
"providers": {
"ollama": {
"timeout": 120000
}
}
}
🔗 Related Errors
Error Code: MODEL_NOT_FOUND
Description: Generic model lookup failure, often due to model not being registered in the system.
Distinction: The Unknown model error in this issue is a specific case in which the model IS registered, but the lookup key format is incorrect.
Error Code: PROVIDER_NOT_CONFIGURED
Description: The provider (e.g., ollama) is not initialized or accessible.
Related Issue: Often appears when Ollama server is not running or Docker networking is misconfigured.
{
"error": "Provider 'ollama' not available",
"code": "PROVIDER_NOT_CONFIGURED"
}
Error Code: MODEL_CAPABILITY_MISMATCH
Description: Model does not support the required input type (e.g., using a text-only model for image analysis).
Related Issue: Can occur if model configuration has incorrect input capabilities defined.
{
"error": "Model 'ollama/qwen3.5:397b-cloud' does not support image input",
"code": "MODEL_CAPABILITY_MISMATCH"
}
Historical Issue: MODEL_ID_NORMALIZATION_V1
Description: Earlier versions of OpenClaw (prior to 2026.3.0) had inconsistent model ID normalization across all tools, not just the image tool.
Resolution: Upgrade to 2026.4.2 or later where normalization was standardized for most tools. The image tool normalization bug remains unfixed as of 2026.4.2.
Related GitHub Issue: #1847
Description: "image tool fails with 'Unknown model' for ollama models". This is the source issue being documented.
Status: Open as of documentation date.
Related Configuration Issue: CONFIG_MODEL_PRIMARY_AMBIGUITY
Description: When imageModel.primary matches multiple models across providers due to similar base names.
Example Conflict:
openai/gpt-4o
ollama/llama3:8b-gpt4o-compatible # Ambiguous when searching for "gpt-4o"
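This kind of cross-provider ambiguity typically comes from loose matching over registry keys. The sketch below is hypothetical, not OpenClaw's resolver; `canon` and `candidatesFor` are invented to illustrate how a bare name can match models from two providers:

```typescript
// Hypothetical illustration of ambiguous bare-name resolution.
const registryKeys = [
  "openai/gpt-4o",
  "ollama/llama3:8b-gpt4o-compatible",
];

// Fold case and drop separators: a common (and ambiguity-prone) heuristic.
const canon = (s: string): string => s.toLowerCase().replace(/[^a-z0-9]/g, "");

// Collect every registry key whose normalized form contains the query.
function candidatesFor(bareName: string): string[] {
  return registryKeys.filter((key) => canon(key).includes(canon(bareName)));
}
```

Searching for `gpt-4o` normalizes to `gpt4o`, which appears in both keys, so such a resolver cannot pick one model without a fully-qualified ID.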