
Warp AI: OpenAI /v1/responses 404 'rs_… not found' after interrupting High reasoning response #8481

@vitalik-fourty4

Description


Summary

When using Warp AI with an OpenAI API key (BYOK) and a “High reasoning” model, interrupting an in-progress streaming response (specifically by sending a new message before the previous one completes) can leave the conversation in a failing state.

Subsequent requests fail against the OpenAI Responses API with:

  • 404 Not Found
  • invalid_request_error
  • message: Item with id 'rs_…' not found. Items are not persisted when store is set to false … remove this item from your input.

It appears that Warp includes a previous response item id (rs_…) in the next request's input while store: false is set, so the referenced item was never persisted server-side and cannot be resolved after the interruption.
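To make the suspected failure mode concrete, here is a minimal sketch of the request shape that appears to trigger the 404. The function name and the model string are placeholders (not Warp's actual code); the rs_… id is the one from the error below, and the key point is that with store: false a bare reasoning-item reference in input is unresolvable server-side.

```python
def build_followup_input(prior_items, new_user_message):
    """Carry prior conversation items forward and append the new message.

    Hypothetical sketch: if `prior_items` still contains a reasoning item
    (id 'rs_…') left over from an interrupted response, and the request is
    sent with store=False, the server cannot look that id up and returns
    the 404 invalid_request_error described in this report.
    """
    return prior_items + [{"role": "user", "content": new_user_message}]


# Items left over after an interrupted "High reasoning" turn:
prior = [
    {"role": "user", "content": "first prompt"},
    # Stale reference to a reasoning item that was never persisted:
    {"type": "reasoning",
     "id": "rs_0ec4614a2d8aea4901695fe070c6d4819c8f3d3e5f1ffc7c00"},
]

payload = {
    "model": "o3",   # placeholder model name, for illustration only
    "store": False,  # items are NOT persisted server-side
    "input": build_followup_input(prior, "second prompt"),
}

# With store=False, any bare 'rs_…' item in `input` cannot be resolved,
# which matches the invalid_request_error shown in "Actual result".
stale = [i for i in payload["input"] if i.get("id", "").startswith("rs_")]
print(len(stale))  # → 1 stale reasoning reference remains in the input
```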

Environment

  • Warp version (current): v0.2025.12.17.17.17.stable_02
  • OS: macOS
  • Provider: OpenAI (BYOK key configured in Warp settings)
  • Warp AI model: “High reasoning” (observed across multiple “high reasoning” models)

Frequency note:

  • This occurred across earlier Warp versions as well.
  • On v0.2025.12.17.17.17.stable_02 it’s much less frequent; only seen once since updating.

Steps to reproduce

  1. Start a new Warp AI conversation.
  2. Select a “High reasoning” model.
  3. Send a prompt that yields a streaming response.
  4. Before the response completes, interrupt it by sending another message in the same conversation.

Actual result

Warp shows an error like:

{
  "message": "Item with id 'rs_0ec4614a2d8aea4901695fe070c6d4819c8f3d3e5f1ffc7c00' not found. Items are not persisted when `store` is set to false. Try again with `store` set to true, or remove this item from your input.",
  "type": "invalid_request_error",
  "param": "input",
  "code": null
}

(Previously also seen with other rs_… ids, e.g. rs_0ec4614a2d8aea4901695fdff9147c819c8b2c844d5081fb09.)

Expected result

Interrupting/canceling a response should end the stream cleanly and allow sending the next message without causing API errors.
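The error message itself suggests a fix ("remove this item from your input"). A hypothetical client-side cleanup, assuming the conversation is carried as a list of Responses API input items, could drop the unresolvable reasoning references before resending; the function name and item list below are illustrative only:

```python
def strip_stale_reasoning_items(items):
    """Drop reasoning items ('rs_…') that cannot be resolved server-side.

    Hypothetical mitigation sketch: with store=False, reasoning items from
    an interrupted response are not retrievable, so removing them (as the
    error message suggests) should let the next request succeed.
    """
    return [i for i in items if not i.get("id", "").startswith("rs_")]


items = [
    {"role": "user", "content": "first prompt"},
    {"type": "reasoning",
     "id": "rs_0ec4614a2d8aea4901695fdff9147c819c8b2c844d5081fb09"},
    {"role": "user", "content": "second prompt"},
]
cleaned = strip_stale_reasoning_items(items)
print(len(cleaned))  # → 2 (the stale reasoning item is removed)
```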

Workarounds

  • Start a new conversation/thread.
  • Change the model (switching models seems to reset the state that holds the stale rs_… reference).

Debug info

  • conversation_id (from Warp debug info): 07f4767d-1ded-4a42-ae43-f59d560693ca
