
Conversation


KirschX commented Oct 28, 2025

Background

#9780

In order to use finishReason with useChat on the frontend, users currently have to manually send it from the server—either by adding it to metadata or using createUIMessageStream.

I raised a question in the discussion tab, and it seems worthwhile to include finishReason by default so developers can access it without additional setup.

Summary

This PR adds the finishReason parameter to the onFinish callback in useChat, allowing developers to know why a stream finished (stop, length, content-filter, tool-calls, etc.).

Changes:

  • Added finishReason to ChatOnFinishCallback type
  • Updated UI message stream handling to propagate finishReason
  • Updated tests to verify finishReason is passed correctly
  • Added changeset for patch release
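
With these changes, a minimal client-side sketch of the intended usage looks roughly like this (illustrative only, not the exact example code; the message field and the component shape follow the existing useChat API, and finishReason is the field this PR adds):

```tsx
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages } = useChat({
    onFinish: ({ message, finishReason }) => {
      // finishReason reports why the stream ended: 'stop', 'length',
      // 'content-filter', 'tool-calls', ... It stays undefined when the
      // stream ends without a finish event (abort, disconnect, error).
      if (finishReason === 'length') {
        console.warn(`Message ${message.id} was cut off by the token limit.`);
      }
    },
  });

  return <pre>{JSON.stringify(messages, null, 2)}</pre>;
}
```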

Manual Verification

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • I have reviewed this pull request (self-review)

Future Work

Related Issues

closes #9780

gr2m added the ai/ui label Oct 28, 2025
gr2m (Collaborator) left a comment

Thanks for the pull request.

For manual verification, I would use examples/next-openai and update one of the examples there to handle the finish reason somehow, to make sure it works as expected for the most common cases.

```
@@ -1,3 +1,4 @@
import { FinishReason } from '../types/language-model';
```
gr2m (Collaborator):

I think this can be removed?

KirschX (Author):

@gr2m
My mistake, I've removed it.

```ts
expect(letOnFinishArgs).toMatchInlineSnapshot(`
  [
    {
      "finishReason": undefined,
```
gr2m (Collaborator):

I haven't looked in detail yet, but is there a case where finishReason would be undefined? We might need to update our fixtures in the tests

KirschX (Author):

I think so: finishReason can be undefined when there is no finish event (abort, disconnect, server error). For normal completions it should be set (usually 'stop').

I've updated the fixtures: normal paths include finishReason: 'stop'; abnormal paths keep it undefined.
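
Illustratively, the semantics look roughly like this (a self-contained vitest sketch with made-up stream events and a made-up helper, not the actual SDK fixture code):

```ts
import { describe, expect, it } from 'vitest';

type FinishReason = 'stop' | 'length' | 'content-filter' | 'tool-calls';

type StreamEvent =
  | { type: 'text'; text: string }
  | { type: 'finish'; finishReason: FinishReason };

// Stand-in for the stream handling: onFinish receives the reason from the
// finish event, or undefined when the stream ends without one.
function consumeStream(
  events: StreamEvent[],
  onFinish: (args: { finishReason?: FinishReason }) => void,
) {
  let finishReason: FinishReason | undefined;
  for (const event of events) {
    if (event.type === 'finish') {
      finishReason = event.finishReason;
    }
  }
  onFinish({ finishReason });
}

describe('finishReason in onFinish', () => {
  it('is set for a normal completion', () => {
    let received: FinishReason | undefined;
    consumeStream(
      [
        { type: 'text', text: 'Hello' },
        { type: 'finish', finishReason: 'stop' },
      ],
      args => {
        received = args.finishReason;
      },
    );
    expect(received).toBe('stop');
  });

  it('stays undefined when the stream ends without a finish event', () => {
    let received: FinishReason | undefined;
    consumeStream([{ type: 'text', text: 'Hel' }], args => {
      received = args.finishReason;
    });
    expect(received).toBeUndefined();
  });
});
```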

KirschX requested a review from gr2m on October 29, 2025 at 10:12
KirschX force-pushed the feat/use-chat-onfinish-finish-reason branch from 3b8a2ad to fe0f666 on October 30, 2025 at 17:42
KirschX (Author) commented Oct 30, 2025

> Thanks for the pull request.
>
> For manual verification, I would use examples/next-openai and update one of the examples there to handle the finish reason somehow, to make sure it works as expected for the most common cases.

I updated one of the example files in examples/next-openai (use-chat-data-ui-parts).

Sorry for the late update after requesting review.
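
Roughly, the kind of change looks like the following (a hypothetical sketch, not the actual diff to the use-chat-data-ui-parts page):

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Page() {
  // Keep the most recent finish reason in local state so it can be shown in the UI.
  const [lastFinishReason, setLastFinishReason] = useState<string | undefined>();

  const { messages } = useChat({
    onFinish: ({ finishReason }) => setLastFinishReason(finishReason),
  });

  return (
    <div>
      {lastFinishReason && <p>Last response finished with reason: {lastFinishReason}</p>}
      {messages.map(message => (
        <div key={message.id}>{message.role}</div>
      ))}
    </div>
  );
}
```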

