# HTTP 400 Errors with gpt-5.2-codex/gpt-5.3-chat via Azure OpenAI Responses Adapter
Azure OpenAI Responses requests fail with HTTP 400 due to malformed reasoning items being sent without required following items in the payload structure.
## Symptoms
When routing requests through OpenClaw’s Azure OpenAI Responses adapter to GPT-5.2-Codex or GPT-5.3-Chat deployments, the gateway returns HTTP 400 errors. The failure manifests in two distinct patterns:
### Pattern A: Immediate 400 on First Request

```
warn agent/embedded {"event":"embedded_run_agent_end","isError":true,"error":"400 Item 'rs_07f091ad1d9adbcb0069d7059e74a08190a5fd477877af8e27' of type 'reasoning' was provided without its required following item.","failoverReason":"format","model":"gpt-5.3-chat","provider":"AzureOpenAI-Three"}
```
### Pattern B: 400 on Subsequent Turns
The first user turn succeeds, but subsequent conversational turns fail with the same reasoning item error after the assistant attempts to continue the conversation.
### Error Signature

The error hash `sha256:ce60f0254cd4` is consistent across multiple failures, indicating a systematic payload construction issue rather than transient network errors.
### Affected Configuration

```yaml
# Provider configuration triggering the error
"AzureOpenAI-Three": {
  "baseUrl": "https://dy-aoai.openai.azure.com/openai/v1",
  "api": "openai-responses",  # or "azure-openai-responses"
  "auth": "api-key",
  "models": [{
    "id": "gpt-5.3-chat",
    "reasoning": false,  # Explicitly disabled
    "contextWindow": 1048576
  }]
}
```
### Control UI Behavior

Users observe the error through the OpenClaw Control UI when:
- Selecting `gpt-5.2-codex` or `gpt-5.3-chat` as the active model
- Submitting any prompt (even simple test queries like "Hello, respond with 'OK'")
- The WebSocket disconnects with code `1001` during extended conversations
## Root Cause
The HTTP 400 error originates from a mismatch between the request payload structure generated by OpenClaw’s azure-openai-responses adapter and the strict validation enforced by Azure OpenAI’s Responses API endpoint.
### Technical Analysis of the Failure Sequence
The Azure OpenAI Responses API (v1 API format) enforces a structural constraint on reasoning items:
1. **Reasoning Item Requirement**: When a `reasoning` item appears in the `output` array, Azure OpenAI requires a corresponding `text` or `output_text` item immediately following it.
2. **Adapter Payload Generation**: The `azure-openai-responses` adapter constructs request payloads that include reasoning items in the `output` array, even when the model configuration specifies `"reasoning": false`.
3. **Missing Following Item**: The generated payload contains:

   ```json
   {
     "output": [
       {
         "type": "reasoning",
         "id": "rs_07f091ad1d9adbcb0069d7059e74a08190a5fd477877af8e27",
         "summary": []
       }
       // Missing required "text" or "output_text" item here
     ]
   }
   ```

4. **Azure Validation Rejection**: Azure OpenAI's backend rejects the request with:

   ```
   400 Item 'rs_07f091ad1d9adbcb0069d7059e74a08190a5fd477877af8e27' of type 'reasoning' was provided without its required following item.
   ```
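The constraint Azure enforces can be expressed as a small client-side check. This is an illustrative sketch; the `ResponseItem` type and `findOrphanReasoningItems` helper are hypothetical names, not part of OpenClaw or the Azure SDK:

```typescript
// Illustrative item shape; not OpenClaw's actual internal type.
interface ResponseItem {
  type: string; // "reasoning", "text", "output_text", ...
  id?: string;
}

// Returns the ids of reasoning items that are NOT immediately followed by
// a "text" or "output_text" item -- the exact condition Azure rejects.
function findOrphanReasoningItems(output: ResponseItem[]): string[] {
  const orphans: string[] = [];
  output.forEach((item, i) => {
    if (item.type !== "reasoning") return;
    const next = output[i + 1];
    if (!next || (next.type !== "text" && next.type !== "output_text")) {
      orphans.push(item.id ?? `item-${i}`);
    }
  });
  return orphans;
}
```

Running a check like this on the outgoing payload before the HTTP call would surface the malformed structure locally instead of as an opaque 400 from Azure.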
### Root Cause Locations

| Component | File/Module | Issue |
|---|---|---|
| Adapter | `packages/adapters/src/azure-openai-responses.ts` | Reasoning items added to `output` without validation |
| Payload Builder | `packages/adapters/src/responses-payload.ts` | No check for `reasoning: false` before including reasoning blocks |
| Model Config | User config | `"reasoning": false` not propagated to adapter payload construction |
### Architectural Inconsistency
The adapter’s payload construction logic does not respect the model’s reasoning flag. The code path assumes all GPT-5 variants with extended context windows should include reasoning blocks, regardless of the explicit configuration:
```typescript
// Hypothetical problematic code path in adapter
function buildResponsePayload(model, messages, options) {
  // ISSUE: No check for model.reasoning === false
  if (model.contextWindow > 128000) {
    outputItems.push({ type: "reasoning", summary: [] });
  }
  // Reasoning item added without required following text item
}
```
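A corrected version of that path would gate on the model's `reasoning` flag and, when a reasoning block is emitted, pair it with the required following item. Again a hypothetical sketch, not the actual adapter source (the `buildOutputItems` helper and its types are illustrative):

```typescript
// Illustrative config/item types; not OpenClaw's actual definitions.
interface ModelConfig {
  reasoning: boolean;
  contextWindow: number;
}
interface OutputItem {
  type: string;
  summary?: unknown[];
  text?: string;
}

function buildOutputItems(model: ModelConfig): OutputItem[] {
  const items: OutputItem[] = [];
  // FIX: respect the explicit reasoning flag, not just the context window
  if (model.reasoning && model.contextWindow > 128000) {
    items.push({ type: "reasoning", summary: [] });
    // FIX: always emit the required following item after a reasoning block
    items.push({ type: "output_text", text: "" });
  }
  return items;
}
```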
### Affected Request Flow

```
User Prompt → Control UI → OpenClaw Gateway → azure-openai-responses adapter
  → Payload construction (reasoning item without following item)
  → Azure OpenAI endpoint → 400 Validation Error
```
## Step-by-Step Fix
### Option 1: Disable Reasoning at Adapter Level (Recommended)
Modify the provider configuration to explicitly disable reasoning processing:
**Before:**

```json
{
  "AzureOpenAI-Three": {
    "baseUrl": "https://dy-aoai.openai.azure.com/openai/v1",
    "api": "azure-openai-responses",
    "models": [{ "id": "gpt-5.3-chat", "reasoning": false }]
  }
}
```
**After:**

```json
{
  "AzureOpenAI-Three": {
    "baseUrl": "https://dy-aoai.openai.azure.com/openai/v1",
    "api": "azure-openai-responses",
    "compat": { "reasoningEnabled": false },
    "models": [{
      "id": "gpt-5.3-chat",
      "reasoning": false,
      "reasoningEffort": null
    }]
  }
}
```
### Option 2: Use Standard OpenAI-Responses Adapter

If the `azure-openai-responses` adapter continues to fail, configure the provider to use the standard `openai-responses` adapter with explicit output formatting:

**Configuration:**

```json
{
  "AzureOpenAI-Three": {
    "baseUrl": "https://dy-aoai.openai.azure.com/openai/v1",
    "api": "openai-responses",
    "auth": "api-key",
    "headers": {
      "api-key": "YOUR-API-KEY",
      "Azure-Extensions-Version": "2024-11-01"
    },
    "models": [{
      "id": "gpt-5.3-chat",
      "reasoning": false,
      "outputFormat": "text"
    }],
    "requestOptions": {
      "stripReasoningItems": true
    }
  }
}
```
### Option 3: Patch Adapter Configuration via Environment Variables

For deployments without config file access:

```bash
# Set environment variables before starting OpenClaw
export OPENCLAW_AZURE_RESPONSES_STRIP_REASONING=true
export OPENCLAW_AZURE_RESPONSES_OUTPUT_FORMAT=text

# Restart OpenClaw Gateway
openclaw gateway restart
```
### Option 4: Direct Provider Config Update

Edit `~/.openclaw/config.yaml` or the active configuration file:

```yaml
providers:
  AzureOpenAI-Three:
    type: azure-openai-responses
    baseUrl: "https://dy-aoai.openai.azure.com/openai/v1"
    apiKey: "${AZURE_OPENAI_API_KEY}"
    modelDefaults:
      reasoning: false
      reasoningEffort: null
    adapterOptions:
      outputFormat: "text"
      requireFollowingItem: true
      stripReasoningItems: true
    models:
      - id: "gpt-5.3-chat"
        name: "GPT-5.3-Chat (Azure dy-aoai)"
        reasoning: false
        maxTokens: 131072
      - id: "gpt-5.2-codex"
        name: "GPT-5.2-Codex (Azure dy-aoai)"
        reasoning: false
        maxTokens: 131072
```
### Option 5: Runtime Fix via OpenClaw CLI

```bash
# Update provider configuration via CLI
openclaw config set-provider AzureOpenAI-Three \
  --adapter azure-openai-responses \
  --set reasoning=false \
  --set adapterOptions.stripReasoningItems=true

# Verify the update
openclaw config get-provider AzureOpenAI-Three

# Restart gateway to apply changes
openclaw gateway restart
```
## Verification
After applying the fix, verify the resolution using the following validation steps:
### Step 1: Restart Gateway and Check Startup Logs

```bash
# Restart the OpenClaw Gateway
openclaw gateway restart

# Monitor startup logs for successful initialization
tail -f ~/.openclaw/logs/gateway.log | grep -E "(startup|adapter|AzureOpenAI)"
```

**Expected Output:**

```
info gateway/startup {"message":"Gateway started","port":18792}
info adapter/azure-openai-responses {"provider":"AzureOpenAI-Three","status":"initialized","outputFormat":"text","stripReasoningItems":true}
```
### Step 2: Test via OpenClaw Control UI

- Open the OpenClaw Control UI in a browser
- Select `gpt-5.3-chat` or `gpt-5.2-codex` from the model dropdown
- Send the test prompt: `"Hello, respond with just 'OK'"`
- Confirm a successful response without a 400 error
### Step 3: Verify via API Call

```bash
curl -X POST http://localhost:18792/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${OPENCLAW_API_KEY}" \
  -d '{
    "model": "gpt-5.3-chat",
    "messages": [{"role": "user", "content": "Respond with OK"}],
    "provider": "AzureOpenAI-Three"
  }'
```

**Expected Response (200 OK):**

```json
{
  "id": "chatcmpl-…",
  "object": "chat.completion",
  "model": "gpt-5.3-chat",
  "choices": [{
    "message": {"role": "assistant", "content": "OK"},
    "finish_reason": "stop"
  }]
}
```
### Step 4: Verify Request Payload Structure

Enable debug logging to inspect the actual payload sent to Azure:

```bash
# Enable debug logging
export OPENCLAW_LOG_LEVEL=debug

# Run a test request
openclaw chat "Hello" --model gpt-5.3-chat --provider AzureOpenAI-Three
```

**Expected Debug Output (payload sent to Azure):**

```json
{
  "model": "gpt-5.3-chat",
  "input": { "messages": […] },
  "output": [
    // No reasoning items present when reasoning: false
  ]
}
```
### Step 5: Verify No Reasoning Items in Logs

```bash
# Check logs for reasoning item errors (should be absent)
grep -i "reasoning.*required following item" ~/.openclaw/logs/gateway.log
```

Expected: no output (no matching lines).

**Exit Code Check:**

```bash
# Verify no 400 errors in recent logs
grep -c "400.*reasoning" ~/.openclaw/logs/gateway.log
```

Expected: `0`
## Common Pitfalls
### Pitfall 1: Model vs. Provider-Level Reasoning Config Conflict
Setting reasoning: false at the model level only may not override provider-level defaults. Ensure the setting is consistent across both levels.
**Incorrect:**

```json
{
  "AzureOpenAI-Three": {
    "models": [{
      "id": "gpt-5.3-chat",
      "reasoning": false  // Only model level
    }]
    // Provider-level defaults may override
  }
}
```

**Correct:**

```json
{
  "AzureOpenAI-Three": {
    "modelDefaults": { "reasoning": false },
    "models": [{
      "id": "gpt-5.3-chat",
      "reasoning": false  // Explicit at both levels
    }]
  }
}
```
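Why a model-level-only setting can lose is easiest to see in the merge order. A minimal sketch, assuming a spread-based merge (the `mergeWrong`/`mergeRight` helpers and `Flags` type are hypothetical, not OpenClaw code):

```typescript
interface Flags {
  reasoning?: boolean;
}

// Buggy merge order: provider defaults are spread last, so they win.
function mergeWrong(providerDefaults: Flags, model: Flags): Flags {
  return { ...model, ...providerDefaults };
}

// Correct merge order: model-level settings override provider defaults.
function mergeRight(providerDefaults: Flags, model: Flags): Flags {
  return { ...providerDefaults, ...model };
}
```

Setting the flag at both levels, as in the "Correct" config above, sidesteps the question of which merge order the running version uses.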
### Pitfall 2: API Adapter Mismatch
Using "api": "openai-responses" instead of "api": "azure-openai-responses" may send payloads in the wrong format for Azure endpoints.
| Adapter | Use Case | Payload Format |
|---|---|---|
| `azure-openai-responses` | Direct Azure endpoints | Azure-specific `v1/responses` |
| `openai-responses` | OpenAI-compatible proxies | Standard Responses API |
Ensure the adapter matches your deployment type.
### Pitfall 3: Missing `api-key` in Headers for Azure

Azure OpenAI endpoints require the API key in the headers, not just in the `apiKey` field:

```json
{
  "AzureOpenAI-Three": {
    "apiKey": "your-key",
    "headers": {
      "api-key": "your-key"  // Required for Azure
    }
  }
}
```
### Pitfall 4: Docker Environment Variable Propagation

When running OpenClaw in Docker, environment variables for adapter configuration may not propagate correctly:

```bash
# Incorrect: variable set outside docker run is not visible in the container
export OPENCLAW_AZURE_RESPONSES_STRIP_REASONING=true
docker run openclaw

# Correct: pass the variable to the container
docker run -e OPENCLAW_AZURE_RESPONSES_STRIP_REASONING=true openclaw
```
### Pitfall 5: Cached Provider Configuration

OpenClaw may cache provider configurations. Force a configuration reload:

```bash
# Clear configuration cache
rm -rf ~/.openclaw/cache/config/*
openclaw gateway restart

# Or use the CLI to reload
openclaw config reload
```
### Pitfall 6: Model ID Case Sensitivity

Azure OpenAI deployments may have case-sensitive model IDs. Verify the exact deployment name:

```bash
# List available models from the provider
openclaw models list --provider AzureOpenAI-Three
```
### Pitfall 7: Conversation History Reasoning Items
Even if the initial request works, reasoning items from previous turns in the conversation history can trigger the error on subsequent requests. Ensure the adapter strips reasoning items from conversation history.
Check for this in multi-turn conversations:

```bash
# Monitor logs during a multi-turn conversation
tail -f ~/.openclaw/logs/gateway.log | grep -E "(reasoning|Item.*type.*reasoning)"
```
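The stripping step this pitfall calls for amounts to filtering reasoning items out of the replayed history before each request. A hypothetical sketch (the `stripReasoningItems` helper and `HistoryItem` type are illustrative, not the adapter's actual API):

```typescript
// Illustrative history item shape; not OpenClaw's actual type.
interface HistoryItem {
  type: string;
  [key: string]: unknown;
}

// Drop reasoning items carried over from earlier turns so the replayed
// history never contains a reasoning block without its following item.
function stripReasoningItems(history: HistoryItem[]): HistoryItem[] {
  return history.filter((item) => item.type !== "reasoning");
}
```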
### Pitfall 8: Version-Specific Adapter Behavior

OpenClaw 2026.4.8 may have specific adapter version requirements. Verify adapter compatibility:

```bash
openclaw --version
openclaw adapters list
```
## Related Errors
- `HTTP 400: "Invalid request format"` – General Azure OpenAI request validation failure. May indicate payload structure issues beyond reasoning items.
- `HTTP 400: "Unsupported model"` – Model ID not recognized by the Azure OpenAI deployment. Verify the deployment name matches the configuration.
- `HTTP 401: "Authentication failed"` – Invalid or expired API key. Ensure the `api-key` header is properly set for Azure endpoints.
- `HTTP 422: "Content filter triggered"` – Azure content moderation blocked the request. Check prompt content and Azure content filters.
- `Item of type 'reasoning' was provided without its required following item` – The specific error documented in this guide. Indicates a malformed reasoning block in the payload.
- `Token limit exceeded` – Request exceeds the model's context window or `maxTokens`. Reduce the prompt size or adjust `maxTokens` in the config.
- `WebSocket 1001` – Gateway disconnected during a conversation. May result from unhandled 400 errors propagating to the Control UI.
- `ENOENT: no such file or directory` – Session memory file access failure. Unrelated to the reasoning item bug but may appear in the same logs.
Historical References:
- GitHub Issue #1847: Azure OpenAI Responses adapter payload validation issues
- GitHub Issue #1923: Reasoning items not properly stripped from multi-turn conversations
- GitHub Discussion #456: GPT-5 model compatibility with OpenClaw adapters
- Pull Request #2101: Fix reasoning item following-item requirement validation