Symptom
When using LM Studio or other local LLM providers, OpenClaw’s web interface displays 0 tokens for both lifetime and past 30-day usage statistics, even though the model is actively being used for inference. The usage data remains at zero regardless of how many requests are made through the local provider.
Environment Details:
- LM Studio running on 192.168.X.X:1234
- Model: qwen/qwen3.5-35b-a3b
- OpenClaw version: 2026.3.13 (61d171a)
- Operating System: Ubuntu 22.04
Root Cause Analysis
The root cause is that OpenClaw’s token usage tracking does not properly parse or display the usage field from local provider API responses.
Investigation confirmed that LM Studio does expose token usage data in API responses:
```json
{
  "usage": {
    "prompt_tokens": X,
    "completion_tokens": Y,
    "total_tokens": Z
  }
}
```
The `usage` field is available in the API response, but OpenClaw's frontend or API layer is not reading and displaying this data for local providers. Cloud providers (OpenAI, Anthropic, etc.) correctly display token usage, indicating that the tracking mechanism exists but is not being applied to local provider responses.
Key Findings:
- LM Studio API at `http://localhost:1234/v1/chat/completions` returns valid `usage` data
- The data is accessible but not being captured by OpenClaw's statistics system
- The web UI shows “0” for all token metrics when using local providers
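The finding above can be sketched as a provider-agnostic extractor. This is a minimal illustration of the OpenAI-style `usage` shape that both cloud and local providers return; the function name and normalized field names are hypothetical, not OpenClaw's actual internals.

```javascript
// Hypothetical sketch: pull token counts out of any OpenAI-compatible
// chat-completion response, whether it came from a cloud or local provider.
function extractUsage(response) {
  const usage = response && response.usage;
  if (!usage) return null; // some responses (e.g. partial streams) omit usage

  const prompt = usage.prompt_tokens ?? 0;
  const completion = usage.completion_tokens ?? 0;
  return {
    promptTokens: prompt,
    completionTokens: completion,
    // Fall back to the sum if a provider omits total_tokens.
    totalTokens: usage.total_tokens ?? prompt + completion,
  };
}
```

Because LM Studio's responses already carry this field, an extractor like this would return non-null counts for local providers too.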
Solution
To resolve this issue, the OpenClaw development team needs to:
- Update the token usage parsing logic to handle responses from local providers:
  - Ensure the `usage` field is extracted from all provider response types, not just cloud providers
  - Add local provider response handling to the token usage tracking module
- Example code fix (conceptual):

  ```javascript
  // Before (cloud providers only):
  if (provider === 'openai' || provider === 'anthropic') {
    trackUsage(response.usage);
  }

  // After (all providers):
  if (response.usage) {
    trackUsage(response.usage);
  }
  ```
- Verify the fix by testing with LM Studio or similar local providers:
- Make several API calls through the local provider
- Check that the web UI updates to show non-zero token counts
- Confirm lifetime and 30-day statistics are populated correctly
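To make the verification concrete, here is a minimal sketch of what the tracking service behind the statistics panel could look like: it records per-request token counts with timestamps so that both lifetime and trailing-30-day totals can be derived. The class and method names are assumptions for illustration, not OpenClaw's real module.

```javascript
// Hypothetical usage accumulator: one record per tracked request.
class UsageTracker {
  constructor() {
    this.records = []; // { timestamp, totalTokens }
  }

  // Called for every provider response; ignores responses without usage.
  trackUsage(usage, now = Date.now()) {
    if (!usage) return;
    this.records.push({ timestamp: now, totalTokens: usage.total_tokens ?? 0 });
  }

  lifetimeTokens() {
    return this.records.reduce((sum, r) => sum + r.totalTokens, 0);
  }

  last30DaysTokens(now = Date.now()) {
    const cutoff = now - 30 * 24 * 60 * 60 * 1000;
    return this.records
      .filter((r) => r.timestamp >= cutoff)
      .reduce((sum, r) => sum + r.totalTokens, 0);
  }
}
```

With the provider-agnostic fix in place, local-provider requests would feed this accumulator and the UI's lifetime and 30-day figures would move off zero.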
Prevention
To prevent this issue from recurring:
- Standardize token usage tracking across all provider types (cloud and local)
- Add integration tests that verify token usage statistics are correctly displayed for each supported provider type, including:
- OpenAI
- Anthropic
- LM Studio
- Ollama
- Other local providers
- Implement automated UI validation to ensure the statistics panel displays correct values
- Document provider-specific behavior in the OpenClaw documentation for developers contributing provider integrations
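The integration-test idea above can be sketched as a provider matrix: every provider's (mocked) response must produce non-zero tracked usage under the provider-agnostic rule. The provider names and mock response shapes are illustrative assumptions, not OpenClaw's actual test suite.

```javascript
// Hypothetical mocked responses, one per supported provider type.
const mockResponses = {
  openai:    { usage: { prompt_tokens: 8, completion_tokens: 4, total_tokens: 12 } },
  anthropic: { usage: { prompt_tokens: 6, completion_tokens: 6, total_tokens: 12 } },
  lmstudio:  { usage: { prompt_tokens: 5, completion_tokens: 7, total_tokens: 12 } },
  ollama:    { usage: { prompt_tokens: 3, completion_tokens: 9, total_tokens: 12 } },
};

// Rule under test: track whenever `usage` is present, regardless of provider.
function trackedTotal(response) {
  return response.usage ? response.usage.total_tokens : 0;
}

// Any provider whose response yields zero tracked tokens is a regression.
const failures = Object.entries(mockResponses)
  .filter(([, response]) => trackedTotal(response) === 0)
  .map(([provider]) => provider);
```

A test like this would have caught the current bug, since a cloud-only check would report every local provider in `failures`.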
Additional Information
Related Components:
- OpenClaw Web UI (statistics/usage display)
- Provider API response handlers
- Token usage tracking service
Test Command for Verification:
```bash
curl -X POST http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}], "model": "qwen3.5-35b-a3b"}'
```
Verify that the response body contains a `usage` field with non-zero token counts.
Status: This is a confirmed behavior bug where the data exists but is not being displayed. A code change is required in OpenClaw to properly handle local provider responses.