<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Web-Ui on FixClaw</title>
        <link>https://fixclaw.dev/tags/web-ui/</link>
        <description>Recent content in Web-Ui on FixClaw</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en-us</language>
        <lastBuildDate>Mon, 01 Jan 0001 00:00:00 +0000</lastBuildDate><atom:link href="https://fixclaw.dev/tags/web-ui/index.xml" rel="self" type="application/rss+xml" /><item>
            <title>Agent Timeout Does Not Display Error in Web UI - UI Hangs Indefinitely</title>
            <link>https://fixclaw.dev/troubleshooting/agent-timeout-does-not-display-error-in-web-ui---ui-hangs-indefinitely/</link>
            <pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://fixclaw.dev/troubleshooting/agent-timeout-does-not-display-error-in-web-ui---ui-hangs-indefinitely/</guid>
            <description>&lt;h2 id=&#34;symptom&#34;&gt;Symptom&#xA;&lt;/h2&gt;&lt;p&gt;When an LLM request exceeds the agent timeout threshold, the following behavior is observed:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Gateway logs&lt;/strong&gt; show &lt;code&gt;ConnectionAbortedError: [WinError 10053]&lt;/code&gt; indicating the client aborted the connection&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Agent logs&lt;/strong&gt; correctly log the timeout with &lt;code&gt;decision=surface_error reason=timeout&lt;/code&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Web UI&lt;/strong&gt; continues to display a loading spinner indefinitely&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;No error message&lt;/strong&gt; is presented to the user&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;User cannot retry&lt;/strong&gt; without manually refreshing the page, losing conversation context&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;The user expects to see a timeout error message and have the ability to retry, but instead the UI becomes unresponsive.&lt;/p&gt;&#xA;&lt;h2 id=&#34;root-cause-analysis&#34;&gt;Root Cause Analysis&#xA;&lt;/h2&gt;&lt;p&gt;The investigation reveals that the root cause is a &lt;strong&gt;race condition in error event propagation&lt;/strong&gt;:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;When the agent detects an LLM timeout, it correctly identifies the timeout condition and decides to surface an error (&lt;code&gt;decision=surface_error reason=timeout&lt;/code&gt;)&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;However, the agent&amp;rsquo;s timeout mechanism terminates the connection (causing &lt;code&gt;ConnectionAbortedError&lt;/code&gt;) &lt;strong&gt;before&lt;/strong&gt; the WebSocket can send the &lt;code&gt;final&lt;/code&gt; event containing the &lt;code&gt;status: &amp;quot;timeout&amp;quot;&lt;/code&gt; information&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;The Web UI client never receives the &lt;code&gt;final&lt;/code&gt; 
 event because the connection has already been terminated, leaving it in an indeterminate state showing a perpetual loading spinner&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;The custom gateway (&lt;code&gt;ai_router.py&lt;/code&gt;) functions correctly with retry logic, but cannot compensate for the agent aborting the connection prematurely&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;The core issue is that the &lt;strong&gt;error status is logged but not communicated&lt;/strong&gt; to the UI before the connection teardown occurs.&lt;/p&gt;&#xA;&lt;h2 id=&#34;solution&#34;&gt;Solution&#xA;&lt;/h2&gt;&lt;p&gt;To resolve this issue, ensure that the &lt;code&gt;final&lt;/code&gt; event with error status is always sent to the Web UI &lt;strong&gt;before or during&lt;/strong&gt; connection teardown when a timeout occurs:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Modify the agent timeout handler&lt;/strong&gt; to ensure the &lt;code&gt;final&lt;/code&gt; event is flushed to the WebSocket before the connection is terminated&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Wrap the error propagation&lt;/strong&gt; in a try-finally block that guarantees the &lt;code&gt;final&lt;/code&gt; event is sent even when the connection is being aborted:&lt;/p&gt;&#xA;&lt;pre&gt;&lt;code&gt;try:&#xA;    ...  # timeout handling logic&#xA;finally:&#xA;    # Ensure the final event is sent before the connection abort&#xA;    await send_final_event(status=&#34;timeout&#34;, error=&#34;Agent request timed out&#34;)&#xA;    # Then safely abort the connection&#xA;&lt;/code&gt;&lt;/pre&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Add a timeout buffer&lt;/strong&gt; that allows the final event to be delivered before the connection is forcibly closed&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Verify WebSocket closure order&lt;/strong&gt; - the &lt;code&gt;final&lt;/code&gt; event should be sent and acknowledged before 
&lt;code&gt;ConnectionAbortedError&lt;/code&gt; is raised&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;h3 id=&#34;code-location&#34;&gt;Code Location&#xA;&lt;/h3&gt;&lt;p&gt;The fix should be implemented in the agent&amp;rsquo;s timeout handling logic where the connection abort occurs, ensuring proper event sequencing.&lt;/p&gt;&#xA;&lt;h2 id=&#34;prevention&#34;&gt;Prevention&#xA;&lt;/h2&gt;&lt;p&gt;To prevent this issue from recurring:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Implement event-before-abort pattern&lt;/strong&gt;: Always send status events before terminating connections in any error scenario, not just timeouts&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Add integration tests&lt;/strong&gt;: Create tests that verify the UI receives proper error events when agents timeout, specifically checking for &lt;code&gt;final&lt;/code&gt; event delivery&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;WebSocket health checks&lt;/strong&gt;: Add monitoring to detect when the UI is stuck in a loading state and automatically trigger recovery&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Timeout handler review&lt;/strong&gt;: Audit all timeout and abort code paths to ensure consistent error event propagation&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Client-side timeout handling&lt;/strong&gt;: Add a client-side timeout fallback in the Web UI that detects when no events have been received for a configured period and displays an appropriate message&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;h2 id=&#34;additional-information&#34;&gt;Additional Information&#xA;&lt;/h2&gt;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Affected deployment&lt;/strong&gt;: Docker deployment on Ubuntu-based container with Windows 11 host&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Frequency&lt;/strong&gt;: Intermittent - occurs when LLM response exceeds the 60-second agent 
timeout threshold&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Workaround&lt;/strong&gt;: Refresh the page (loses conversation context)&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Related components&lt;/strong&gt;:&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Agent timeout handler&lt;/li&gt;&#xA;&lt;li&gt;WebSocket event dispatcher&lt;/li&gt;&#xA;&lt;li&gt;Web UI state management&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;sources&#34;&gt;Sources&#xA;&lt;/h2&gt;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://github.com/openclaw/openclaw/issues/64793&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;GitHub Issue #64793&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;</description>
        </item><item>
            <title>Token Usage Statistics Show as 0 for Local Providers (LM Studio)</title>
            <link>https://fixclaw.dev/troubleshooting/token-usage-statistics-show-as-0-for-local-providers-lm-studio/</link>
            <pubDate>Sun, 15 Mar 2026 00:00:00 +0000</pubDate>
            <guid>https://fixclaw.dev/troubleshooting/token-usage-statistics-show-as-0-for-local-providers-lm-studio/</guid>
            <description>&lt;h2 id=&#34;symptom&#34;&gt;Symptom&#xA;&lt;/h2&gt;&lt;p&gt;When using LM Studio (or other local LLM providers) with OpenClaw, the web interface shows &lt;strong&gt;0 tokens&lt;/strong&gt; for both lifetime and past 30-day usage statistics. This occurs even though:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;The model is actively being used for inference&lt;/li&gt;&#xA;&lt;li&gt;Token usage data is being returned in the API responses from the local provider&lt;/li&gt;&#xA;&lt;li&gt;Curl requests to the provider confirm the &lt;code&gt;usage&lt;/code&gt; field is present in responses&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;strong&gt;Example API Response from LM Studio:&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;pre&gt;&lt;code&gt;{&#xA;  &#34;usage&#34;: {&#xA;    &#34;prompt_tokens&#34;: X,&#xA;    &#34;completion_tokens&#34;: Y,&#xA;    &#34;total_tokens&#34;: Z&#xA;  }&#xA;}&#xA;&lt;/code&gt;&lt;/pre&gt;&#xA;&lt;p&gt;The token counts are available but not being displayed in OpenClaw&amp;rsquo;s statistics dashboard.&lt;/p&gt;&#xA;&lt;h2 id=&#34;root-cause-analysis&#34;&gt;Root Cause Analysis&#xA;&lt;/h2&gt;&lt;p&gt;The root cause is that &lt;strong&gt;OpenClaw is not capturing or displaying token usage statistics for local providers&lt;/strong&gt;. While cloud providers (OpenAI, Anthropic, etc.) 
have built-in support for tracking and displaying usage metrics, local providers like LM Studio appear to lack this integration.&lt;/p&gt;&#xA;&lt;p&gt;Specifically:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;LM Studio &lt;strong&gt;does&lt;/strong&gt; expose token usage data via the standard &lt;code&gt;usage&lt;/code&gt; field in API responses&lt;/li&gt;&#xA;&lt;li&gt;The OpenClaw web interface is designed to show usage statistics from stored/aggregated data&lt;/li&gt;&#xA;&lt;li&gt;Local provider responses may not be properly parsed, stored, or retrieved for display in the UI&lt;/li&gt;&#xA;&lt;li&gt;The routing and display logic may be optimized for cloud providers and not account for local provider metadata&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;This is a &lt;strong&gt;feature gap&lt;/strong&gt; in the local provider integration, not a crash or data corruption issue.&lt;/p&gt;&#xA;&lt;h2 id=&#34;solution&#34;&gt;Solution&#xA;&lt;/h2&gt;&lt;h3 id=&#34;for-users-workaround&#34;&gt;For Users (Workaround):&#xA;&lt;/h3&gt;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Manual tracking&lt;/strong&gt;: Keep a separate log of token usage by monitoring API response payloads directly via curl or a proxy tool&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Check provider dashboard&lt;/strong&gt;: Some local providers like LM Studio may have their own usage tracking (verify in LM Studio&amp;rsquo;s interface)&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Use logging&lt;/strong&gt;: Enable detailed API logging in OpenClaw to capture raw responses and manually calculate usage&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;h3 id=&#34;for-developers-fix-required&#34;&gt;For Developers (Fix Required):&#xA;&lt;/h3&gt;&lt;p&gt;The fix involves modifying the OpenClaw codebase to:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;Parse the &lt;code&gt;usage&lt;/code&gt; field from local provider API responses&lt;/li&gt;&#xA;&lt;li&gt;Store token usage data in the same manner as cloud providers&lt;/li&gt;&#xA;&lt;li&gt;Ensure the 
usage data flows through to the web UI&amp;rsquo;s statistics components&lt;/li&gt;&#xA;&lt;li&gt;Add LM Studio and other local providers to the supported usage tracking list&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;&lt;strong&gt;Relevant code areas to investigate:&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Provider response parsing logic (likely in &lt;code&gt;src/providers/&lt;/code&gt;)&lt;/li&gt;&#xA;&lt;li&gt;Usage statistics storage and retrieval (likely in &lt;code&gt;src/database/&lt;/code&gt; or &lt;code&gt;src/storage/&lt;/code&gt;)&lt;/li&gt;&#xA;&lt;li&gt;Web UI statistics rendering (likely in &lt;code&gt;src/web/components/usage-statistics/&lt;/code&gt;)&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;prevention&#34;&gt;Prevention&#xA;&lt;/h2&gt;&lt;ul&gt;&#xA;&lt;li&gt;When using local providers, be aware that OpenClaw&amp;rsquo;s built-in usage statistics may not reflect actual token consumption&lt;/li&gt;&#xA;&lt;li&gt;Consider using provider-native monitoring tools alongside OpenClaw&lt;/li&gt;&#xA;&lt;li&gt;When evaluating token efficiency with local models, rely on direct API response analysis rather than OpenClaw&amp;rsquo;s dashboard&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;additional-information&#34;&gt;Additional Information&#xA;&lt;/h2&gt;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Affected Provider&lt;/strong&gt;: LM Studio (and likely other local LLM providers with similar API structures)&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Tested Endpoint&lt;/strong&gt;: &lt;code&gt;http://localhost:1234/v1/chat/completions&lt;/code&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Confirmed Working&lt;/strong&gt;: LM Studio does return standard OpenAI-compatible usage data&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Environment&lt;/strong&gt;: Tested with model &lt;code&gt;qwen/qwen3.5-35b-a3b&lt;/code&gt; on Ubuntu 22.04&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;This issue represents a missing feature rather than a defect—local providers expose the 
necessary data, but OpenClaw&amp;rsquo;s frontend does not consume and display it.&lt;/p&gt;&#xA;&lt;h2 id=&#34;sources&#34;&gt;Sources&#xA;&lt;/h2&gt;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://github.com/openclaw/openclaw/issues/49890&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;GitHub Issue #49890&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;</description>
        </item></channel>
</rss>
