[BUG] Codex stream chunks fall back to gpt-4 instead of upstream model #1094

@yart

OmniRoute Version

3.5.5

Installation Method

Built from source

Operating System

Linux

OS Version

Arch Linux

Node.js Version

20.20.2

Provider(s) Involved

Codex

Model(s) Involved

codex/gpt-5.4

Client Tool

OpenCode Desktop

Description

On the Responses -> Chat Completions translation path, OmniRoute emits stream chunks with model: "gpt-4" even when the upstream model is gpt-5.4. This is caused by a hard-coded "gpt-4" fallback in openaiResponsesToOpenAIResponse() when state.model is not populated.
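To illustrate the failure mode, here is a minimal sketch (not OmniRoute's actual code; the function and parameter names below are hypothetical) of a hard-coded fallback versus one that threads the upstream model through:

```typescript
interface StreamState {
  model?: string; // may be unpopulated when the chunk is emitted
}

// Buggy shape: a hard-coded fallback silently replaces the upstream model.
function chunkModelBuggy(state: StreamState): string {
  return state.model ?? "gpt-4";
}

// Fixed shape: prefer state.model, then the upstream response's model
// (upstreamModel is a hypothetical parameter standing in for whatever
// the translator reads off the Responses event).
function chunkModelFixed(state: StreamState, upstreamModel?: string): string {
  return state.model ?? upstreamModel ?? "unknown";
}

console.log(chunkModelBuggy({}));            // "gpt-4"  (the reported bug)
console.log(chunkModelFixed({}, "gpt-5.4")); // "gpt-5.4"
```

The point is only that the fallback should resolve against the upstream response before any static default.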

Steps to Reproduce

  1. Use a Codex gpt-5.4 conversation and trigger a normal streamed reply through OmniRoute.
  2. Observe the emitted Chat Completions chunks.
  3. The translated chunks report gpt-4 as the model, even though the upstream response is gpt-5.4.

Expected Behavior

The translated chunks should preserve the upstream model value (gpt-5.4 in this case).

Actual Behavior

OmniRoute falls back to gpt-4 in emitted chunks when state.model is missing.

Test Impact

Needs a new unit test

Error Logs / Output

stream chunk model=gpt-4 (expected gpt-5.4)

Screenshots

No response

Additional Context

This is a separate correctness issue from the response.failed handling bug, but it lives in the same translation path and was reproduced during the same investigation.

Validation Plan

  • node --import tsx/esm --test tests/unit/translator-resp-openai-responses.test.mjs
  • npm run typecheck:core
