OmniRoute Version
3.5.5
Installation Method
Built from source
Operating System
Linux
OS Version
Arch Linux
Node.js Version
20.20.2
Provider(s) Involved
Codex
Model(s) Involved
codex/gpt-5.4
Client Tool
OpenCode Desktop
Description
On the Responses -> Chat Completions translation path, OmniRoute emits stream chunks with model: "gpt-4" even when the upstream model is gpt-5.4. This is caused by a fallback in openaiResponsesToOpenAIResponse() when state.model is not populated.
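A minimal sketch of the suspected failure mode, assuming a translator state object that carries an optional model field; the interface and function names here are illustrative stand-ins, not the actual OmniRoute source:

```typescript
// Hypothetical shapes approximating the translator's internal state and an
// upstream Responses event; names are assumptions for illustration only.
interface TranslationState {
  model?: string;
}

interface ResponsesEvent {
  response?: { model?: string };
}

// Buggy behavior: when state.model was never populated, the emitted
// Chat Completions chunk silently reports "gpt-4".
function chunkModelBuggy(state: TranslationState): string {
  return state.model ?? "gpt-4";
}

// One possible direction for a fix: capture the model from the upstream
// event before falling back, so the chunk preserves the real model
// (gpt-5.4 in this report) instead of a hardcoded default.
function chunkModelFixed(state: TranslationState, event: ResponsesEvent): string {
  if (!state.model && event.response?.model) {
    state.model = event.response.model;
  }
  return state.model ?? "unknown";
}
```

With this shape, `chunkModelBuggy({})` yields "gpt-4", while `chunkModelFixed({}, { response: { model: "gpt-5.4" } })` yields "gpt-5.4".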
Steps to Reproduce
- Use a Codex gpt-5.4 conversation and trigger a normal streamed reply through OmniRoute.
- Observe the emitted Chat Completions chunks.
- The translated chunks report gpt-4 as the model, even though the upstream response is gpt-5.4.
Expected Behavior
The translated chunks should preserve the upstream model value (gpt-5.4 in this case).
Actual Behavior
OmniRoute falls back to gpt-4 in emitted chunks when state.model is missing.
Test Impact
Needs a new unit test
Error Logs / Output
stream chunk model=gpt-4 (expected gpt-5.4)
Screenshots
No response
Additional Context
This is a separate correctness issue from the response.failed handling bug, but it lives in the same translation path and was reproduced during the same investigation.
Validation Plan
node --import tsx/esm --test tests/unit/translator-resp-openai-responses.test.mjs
npm run typecheck:core