# Ollama LLM Timeout Not Honoring Configured `timeoutSeconds`
OpenClaw ignores user-defined timeoutSeconds configuration and falls back to a hardcoded 15-second timeout when waiting for Ollama responses, causing premature failover on slow CPU-based models.
## 🔍 Symptoms

### Primary Manifestation

The agent falls back to a reserve model despite a 500-second timeout being configured; the actual timeout fires after 15 seconds:

```
openclaw-gateway-1 | 2026-04-12T22:02:44.589+00:00 [agent] embedded run timeout: runId=slug-gen-1776031345185 sessionId=slug-generator-1776031345185 timeoutMs=15000
```

### Configuration Applied
User applied the following configuration structure (as recommended in community discussion):
```json
{
  "agents": {
    "defaults": {
      "timeoutSeconds": 500,
      "llm": {
        "idleTimeoutSeconds": 500
      }
    }
  }
}
```

### Model Warmup Timing
The Ollama model takes over 2 minutes just to warm up, which is expected for CPU-based inference:
```shell
$ time docker compose exec ollama ollama run qwen2.5:7b "warmup"
Sure! What kind of warm-up would you like?

real    2m32.193s
user    0m0.059s
sys     0m0.027s
```

### Failover Sequence
The complete failure sequence in logs:
```
ollama-1 | [GIN] 2026/04/12 - 22:02:45 | 500 | 16.171687684s | 127.0.0.1 | POST "/api/chat"
openclaw-gateway-1 | [agent] embedded run timeout: timeoutMs=15000
openclaw-gateway-1 | [agent] embedded run failover decision: decision=fallback_model reason=timeout
openclaw-gateway-1 | [diagnostic] lane task error: error="FailoverError: LLM request timed out."
```

## 🔧 Root Cause
### Configuration Path Mismatch

The configuration above uses an incorrect nested path: `agents.defaults.llm` does not exist in the OpenClaw configuration schema for timeout settings. The timeout value is silently ignored, and OpenClaw falls back to its hardcoded default of 15 seconds (15000 ms).
### Hardcoded Timeout Behavior

OpenClaw's agent runtime has a built-in default timeout that cannot be overridden through the `agents.defaults` configuration block. The relevant configuration paths are:

- `agents.defaults.timeoutSeconds` → controls the overall agent run timeout, not the LLM request timeout
- `llm.requestTimeoutSeconds` → the correct path for the LLM HTTP request timeout (not `agents.defaults.llm`)
- `llm.idleTimeoutSeconds` → controls the connection idle timeout, separate from the request timeout
### Configuration Resolution Failure

When OpenClaw parses the configuration, it validates paths against the schema. Unknown or unregistered paths are logged at debug level but do not cause startup failures. The timeout specified at `agents.defaults.timeoutSeconds` applies to the agent orchestration layer, not to the underlying LLM HTTP client.
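This silent-fallback behavior can be sketched in a few lines of Go. The `lookupInt` helper below is a toy dotted-path lookup, not OpenClaw's actual loader: whenever any path segment is missing, it quietly returns the default instead of raising an error, which is exactly how a value placed at an unregistered path disappears.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// lookupInt walks a dotted path through nested JSON maps and returns the
// fallback when any segment is missing — no error, no warning.
func lookupInt(cfg map[string]any, path string, fallback int) int {
	var cur any = cfg
	for _, key := range strings.Split(path, ".") {
		m, ok := cur.(map[string]any)
		if !ok {
			return fallback
		}
		cur, ok = m[key]
		if !ok {
			return fallback
		}
	}
	if f, ok := cur.(float64); ok { // JSON numbers decode as float64
		return int(f)
	}
	return fallback
}

func main() {
	// The user's config: timeout values nested under agents.defaults.
	raw := `{"agents":{"defaults":{"timeoutSeconds":500,"llm":{"idleTimeoutSeconds":500}}}}`
	var cfg map[string]any
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		panic(err)
	}
	// The LLM client only reads llm.requestTimeoutSeconds, so the nested
	// value under agents.defaults.llm never reaches the HTTP client.
	fmt.Println(lookupInt(cfg, "llm.requestTimeoutSeconds", 15))      // 15 (hardcoded default)
	fmt.Println(lookupInt(cfg, "agents.defaults.timeoutSeconds", 15)) // 500 (orchestrator only)
}
```

The key observation: both lookups "succeed" from the loader's point of view, which is why nothing louder than a debug log appears at startup.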
### Technical Deep Dive

The LLM provider (Ollama) uses an HTTP client with its own timeout settings. In the OpenClaw codebase, the request timeout is passed via provider options:
```go
// Pseudo-code representation of the timeout flow
llmClient := NewLLMClient(llm.Config{
    RequestTimeout: config.GetInt("llm.requestTimeoutSeconds") * 1000, // defaults to 15000 ms
})

// The agent orchestrator timeout (agents.defaults.timeoutSeconds) is separate
agentRunner := NewAgentRunner(AgentConfig{
    RunTimeout: config.GetInt("agents.defaults.timeoutSeconds") * 1000,
})
```

The Ollama provider initializes its `requestTimeout` from `llm.requestTimeoutSeconds`, not from `agents.defaults.timeoutSeconds`.
## 🛠️ Step-by-Step Fix

### Step 1: Identify the Configuration File Location
Locate the active OpenClaw configuration file:
```shell
# For Docker Compose deployments
docker compose config 2>/dev/null | grep -A5 "config:\|configFile:\|--config" || echo "Checking volumes..."

# Alternative: find the config file in the container
docker compose exec openclaw-gateway find / -name "*.yaml" -o -name "*.json" 2>/dev/null | grep -v proc
```

### Step 2: Update Configuration with Correct Paths
Replace the existing configuration with the corrected structure:
**Before (incorrect):**

```json
{
  "agents": {
    "defaults": {
      "timeoutSeconds": 500,
      "llm": {
        "idleTimeoutSeconds": 500
      }
    }
  }
}
```

**After (correct):**
```json
{
  "agents": {
    "defaults": {
      "timeoutSeconds": 500
    }
  },
  "llm": {
    "requestTimeoutSeconds": 300,
    "idleTimeoutSeconds": 300
  },
  "providers": {
    "ollama": {
      "options": {
        "requestTimeoutSeconds": 300
      }
    }
  }
}
```

### Step 3: For a Provider-Specific Override (Recommended)
Since you're using Ollama, apply the timeout directly to the provider configuration:
```json
{
  "providers": {
    "ollama": {
      "options": {
        "requestTimeoutSeconds": 300
      }
    }
  }
}
```

### Step 4: Restart Services
```shell
docker compose down
docker compose up -d
docker compose logs -f openclaw-gateway 2>&1 | head -50
```

### Step 5: Verify Configuration Loading
```shell
# Check for configuration warnings on startup
docker compose logs openclaw-gateway 2>&1 | grep -i "warn\|config\|timeout"

# Verify the config is loaded (look for validation messages)
docker compose exec openclaw-gateway cat /etc/openclaw/config.yaml 2>/dev/null ||
docker compose exec openclaw-gateway env | grep -i openclaw
```
## 🧪 Verification

### Method 1: Check Startup Logs for the Timeout Value
```shell
# Start fresh and capture startup
docker compose down
docker compose up -d
sleep 5
docker compose logs openclaw-gateway 2>&1 | grep -iE "timeout|llm|request"
```

Expected output should show the configured timeout value being loaded:

```
openclaw-gateway-1 | [init] LLM request timeout configured: 300s provider=ollama
openclaw-gateway-1 | [init] Configuration loaded successfully
```

### Method 2: Trigger a Test Request and Measure
```shell
# Send a test request to the OpenClaw gateway
curl -X POST http://localhost:3000/api/v1/agent/run \
  -H "Content-Type: application/json" \
  -d '{
    "agent": "default",
    "input": "Hello, respond with just the word OK",
    "model": "ollama/qwen2.5:7b"
  }' 2>&1 | tee /tmp/ollama_test.log &

# Monitor the actual timeout being applied
watch -n 1 'docker compose logs --since 30s openclaw-gateway 2>&1 | grep -i timeout'
```
### Method 3: Confirm Ollama Response Time Exceeds the Old Timeout

```shell
# Direct Ollama test (should take 2+ minutes for warmup)
time docker compose exec ollama ollama run qwen2.5:7b "Say hello"
```

- Expected: completes without an OpenClaw timeout error
- Old behavior: fails after 15 seconds with "LLM request timed out"
- Fixed behavior: waits for the Ollama response up to the configured timeout
### Method 4: Verify via the Diagnostics Endpoint (if available)

```shell
curl http://localhost:3000/api/v1/diagnostics 2>/dev/null | jq '.providers[] | select(.provider=="ollama") | .timeoutSeconds'
```

Expected output: `300`
## ⚠️ Common Pitfalls

### 1. Configuration Path Case Sensitivity

OpenClaw configuration keys are case-sensitive. These are not equivalent:

```
# WRONG
"RequestTimeoutSeconds": 300

# CORRECT
"requestTimeoutSeconds": 300
```

### 2. Nested Path Assumptions
Do not assume that nesting timeout settings under `agents.defaults.llm` will propagate to the LLM provider. The configuration schema uses flat namespaces:

```
# WRONG - this path does not exist
agents.defaults.llm.requestTimeoutSeconds

# CORRECT - top-level llm section
llm.requestTimeoutSeconds

# ALSO CORRECT - provider-specific options
providers.ollama.options.requestTimeoutSeconds
```

### 3. Time Unit Mismatch
Different configuration fields use different units:
- `timeoutSeconds` → seconds (integer)
- `requestTimeoutSeconds` → seconds (integer)
- `idleTimeoutSeconds` → seconds (integer)
- `timeoutMs` → milliseconds (as seen in logs)
Conversion: 300 seconds = 300,000 milliseconds
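As a sanity check, the conversion from the seconds-based config fields to the `timeoutMs` values printed in the logs is a plain multiply by 1000:

```go
package main

import "fmt"

// secondsToMs converts a seconds-based config value to the milliseconds
// shown in log lines such as "timeoutMs=15000".
func secondsToMs(s int) int { return s * 1000 }

func main() {
	fmt.Println(secondsToMs(15))  // 15000 — the hardcoded default seen in the logs
	fmt.Println(secondsToMs(300)) // 300000 — the corrected 5-minute timeout
}
```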
### 4. Docker Volume Caching
Configuration files mounted via volumes may be cached. Force a clean reload:
```shell
docker compose down --remove-orphans
docker compose rm -f openclaw-gateway
docker compose up -d
# Do NOT just restart; perform a full teardown
```

### 5. Environment Variable Override
Environment variables take precedence over config files. Check for conflicting settings:
```shell
docker compose config 2>/dev/null | grep -iE "timeout|env|OPENCLAW"
docker compose exec openclaw-gateway env | grep -iE "TIMEOUT|OLLAMA"
```

### 6. macOS-Specific: CPU Throttling
On macOS with Apple Silicon, Ollama running inside Docker cannot use the GPU, so inference runs entirely on the CPU and is extremely slow. The first response may take 5+ minutes even for simple queries. Configure a timeout that exceeds your worst-case latency.
### 7. Ollama Model Loading
The initial model load time (as seen in logs: "llama runner started in 11.87 seconds") counts against the LLM timeout. For CPU-based models, add buffer time to account for cold starts.
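A rough sizing rule for the timeout, using the warmup figure observed in this incident (the 60-second generation estimate and 25% safety margin are arbitrary illustration values, not recommendations from OpenClaw):

```go
package main

import "fmt"

// timeoutBudget returns a timeout in seconds covering a worst-case cold
// start plus expected generation time, with a percentage safety margin.
func timeoutBudget(coldStartSec, generationSec int) int {
	const marginPct = 25 // assumed 25% headroom; tune to your workload
	total := coldStartSec + generationSec
	return total + total*marginPct/100
}

func main() {
	// ~152s observed warmup (2m32s) plus an assumed 60s of generation.
	fmt.Println(timeoutBudget(152, 60)) // 265 — comfortably under the 300s configured above
}
```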
## 🔗 Related Errors

- `FailoverError: LLM request timed out` → primary error seen when the LLM provider exceeds the configured timeout
- `embedded run timeout: timeoutMs=15000` → log message indicating the 15-second hardcoded default was used
- `embedded run failover decision: fallback_model` → failover decision made due to the timeout trigger
- Issue #46049 → related timeout configuration issue with external LLM providers
- Issue #24235 → historical timeout handling inconsistency in OpenClaw
- HTTP 500 from Ollama → Ollama returning a server error after a slow response; often misinterpreted as a timeout
- `strconv.ParseInt: parsing "max"` → CPU quota parsing warning in the Ollama container (harmless, unrelated to the timeout)
## Troubleshooting Matrix
| Symptom | Root Cause | Fix |
|---|---|---|
| Timeout at 15s despite config | Wrong config path | Use `providers.{provider}.options.requestTimeoutSeconds` |
| Timeout at 30s despite 300s config | Environment variable override | Check the `OPENCLAW_LLM_TIMEOUT` env var |
| Timeout after 60s exactly | Load balancer/proxy timeout | Check nginx/Docker proxy settings |
| Ollama returns 500 after 15s | Upstream timeout + Ollama error | Increase both the Ollama and OpenClaw timeouts |