Responses API endpoint rejects system messages in input array #52
Bug Description
The /v1/responses endpoint rejects requests that include `role: "system"` messages in the input array, returning:

```
400 - {'detail': 'System messages are not allowed'}
```
The official OpenAI Responses API accepts system messages in the input array, so ccproxy is stricter than the upstream API.
Root Cause
CodexMessage in ccproxy/plugins/codex/models.py restricts roles to only "user" and "assistant":
```python
class CodexMessage(BaseModel):
    role: Literal["user", "assistant"]  # ← "system" not allowed
    content: str
```

When a client (e.g., langchain-openai with `use_responses_api=True`) sends a system message in the input, Pydantic validation fails before the request reaches any conversion logic.
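The effect of the `Literal` annotation can be illustrated without pulling in Pydantic. The sketch below mirrors the restrictive role annotation (it is an illustration of the validation behavior, not ccproxy's actual code):

```python
from typing import Literal, get_args

# Mirrors the role annotation on CodexMessage; "system" is absent.
AllowedRole = Literal["user", "assistant"]

def validate_role(role: str) -> bool:
    # Pydantic rejects any value outside the Literal's members; this
    # membership check reproduces that behavior without the library.
    return role in get_args(AllowedRole)

print(validate_role("user"))    # True
print(validate_role("system"))  # False → the request fails validation
```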
Impact
This forces downstream projects like EvoScientist to fall back from Responses API to Chat Completions API when using ccproxy, which means:
- No `reasoning` parameter support — langchain-openai ties `reasoning` to the Responses API; using Chat Completions means reasoning must be disabled
- OAuth users get a degraded experience compared to direct API key users
For reference: EvoScientist/EvoScientist#130, EvoScientist/EvoScientist#135
Note
The Chat Completions → Responses API converter (`openai_to_openai/requests.py`) already handles system messages correctly — it extracts them from `messages` and puts them into the `instructions` parameter. But this converter is only triggered on the /v1/chat/completions path, not on /v1/responses.
Suggested Fix
Option A (minimal): Add "system" to `CodexMessage.role`, and in the /v1/responses handler, extract system messages from `input` and merge them into the `instructions` parameter before forwarding upstream — reusing the same logic already in `_build_responses_payload_from_chat_request`.
Option B (passthrough): Since the official OpenAI Responses API accepts system messages in input, simply allow them to pass through without conversion.
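Option A could be sketched roughly as follows. This is a hypothetical standalone helper operating on a request dict — the function name and merge strategy are assumptions, not ccproxy's actual implementation:

```python
def merge_system_into_instructions(payload: dict) -> dict:
    """Extract system messages from `input` and fold them into
    `instructions` (hypothetical sketch of Option A)."""
    messages = payload.get("input", [])
    system_parts = [m["content"] for m in messages if m.get("role") == "system"]
    remaining = [m for m in messages if m.get("role") != "system"]
    if system_parts:
        existing = payload.get("instructions")
        # Preserve any instructions already present, then append the
        # extracted system content, separated by blank lines.
        merged = "\n\n".join(([existing] if existing else []) + system_parts)
        payload = {**payload, "input": remaining, "instructions": merged}
    return payload

request = {"input": [
    {"role": "system", "content": "You are helpful"},
    {"role": "user", "content": "Hi"},
]}
print(merge_system_into_instructions(request))
```

With the system message folded into `instructions`, the remaining `input` contains only roles that `CodexMessage` already accepts.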
Reproduction
```python
from langchain.chat_models import init_chat_model
from langchain_core.messages import SystemMessage, HumanMessage

model = init_chat_model(
    "gpt-5.4",
    model_provider="openai",
    base_url="http://127.0.0.1:8000/codex/v1",
    api_key="ccproxy-oauth",
    use_responses_api=True,
)

model.invoke([SystemMessage(content="You are helpful"), HumanMessage(content="Hi")])
# → BadRequestError: 400 - {'detail': 'System messages are not allowed'}
```

Environment
- ccproxy-api: 0.2.7
- langchain-openai: 1.1.12
- Python: 3.13
- OS: macOS