
Conversation


@Psanyi89 Psanyi89 commented Oct 24, 2025

Description

[#8423]

I added a new response model, N8nChatResponse, so the N8N responses can be parsed.
I had to extend convertChatMessage so it handles the different data structure correctly.

AI Code Review

  • Team members only: AI review runs automatically when PR is opened or marked ready for review
  • Team members can also trigger a review by commenting @continue-review

Checklist

  • I've read the contributing guide
  • The relevant docs, if any, have been updated or created
  • The relevant tests, if any, have been updated or created

Screen recording or screenshot

[image: screenshot]

Tests

I only tested manually; I couldn't find where the example streams for testing are stored.


Summary by cubic

Adds N8N JSON response parsing to Ollama chat streaming so the N8N AI Agent returns proper messages. Also adds support for streaming <think> blocks as separate thinking messages.

  • New Features
    • Parse responses with a "type" field and map "content" to assistant messages.
    • Stream <think> content as role "thinking" and stop on </think>, using a simple thinking state to avoid mixed outputs.
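The first bullet can be sketched in TypeScript. This is a hypothetical illustration, not the PR's actual code: the `"type"`/`"content"` field names come from the summary above, the interface name mirrors the N8nChatResponse model the PR adds, and the exact shape and helper name are assumptions.

```typescript
// Hypothetical sketch of N8N chunk parsing; the real shape lives in
// core/llm/llms/Ollama.ts and may differ.
interface N8nChatResponse {
  type: string;    // discriminator field present on N8N-style chunks
  content: string; // text to surface as an assistant message
}

interface ChatMessage {
  role: "assistant" | "thinking";
  content: string;
}

function parseN8nChunk(raw: string): ChatMessage | undefined {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return undefined; // not JSON; let the regular Ollama path handle it
  }
  const candidate = parsed as Partial<N8nChatResponse>;
  if (
    typeof candidate.type === "string" &&
    typeof candidate.content === "string"
  ) {
    // Map the N8N "content" field onto an assistant message.
    return { role: "assistant", content: candidate.content };
  }
  return undefined;
}
```

Returning `undefined` for non-matching chunks lets the existing Ollama parsing remain the fallback, so non-N8N streams are unaffected.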

@Psanyi89 Psanyi89 requested a review from a team as a code owner October 24, 2025 11:04
@Psanyi89 Psanyi89 requested review from tingwai and removed request for a team October 24, 2025 11:04
@dosubot dosubot bot added the size:M This PR changes 30-99 lines, ignoring generated files. label Oct 24, 2025
@github-actions

github-actions bot commented Oct 24, 2025

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 1 file

Prompt for AI agents (1 issue)

Understand the root cause of the following issue and fix it.


<file name="core/llm/llms/Ollama.ts">

<violation number="1" location="core/llm/llms/Ollama.ts:162">
Storing the thinking state in the static _isThinking flag makes it global across all chat streams, so one request entering a <think> block forces other concurrent streams to emit thinking messages instead of assistant replies. Track the thinking state per stream instead of on the class.</violation>
</file>
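The fix the reviewer suggests can be sketched as follows. This is an assumed illustration, not the file's actual code: the point is that the `<think>` state lives in a variable local to each stream invocation, rather than in a static class field shared by all concurrent requests.

```typescript
// Hypothetical per-stream version of the thinking-state tracking: because
// isThinking is a local variable, two concurrent streams each get their own
// copy and cannot flip each other into "thinking" mode.
function* routeThinkBlocks(
  chunks: Iterable<string>,
): Generator<{ role: "assistant" | "thinking"; content: string }> {
  let isThinking = false; // scoped to this invocation only
  for (const chunk of chunks) {
    if (chunk.includes("<think>")) {
      isThinking = true; // enter the thinking block; emit nothing for the tag
      continue;
    }
    if (chunk.includes("</think>")) {
      isThinking = false; // leave the thinking block
      continue;
    }
    yield { role: isThinking ? "thinking" : "assistant", content: chunk };
  }
}
```

A static flag would make `isThinking` process-wide state, which is exactly the cross-stream interference described in the violation.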

Since this is your first cubic review, here's how it works:

  • cubic automatically reviews your code and comments on bugs and improvements
  • Teach cubic by replying to its comments. cubic learns from your replies and gets better over time
  • Ask questions if you need clarification on any suggestion

React with 👍 or 👎 to teach cubic. Mention @cubic-dev-ai to give feedback, ask questions, or re-run the review.

@Psanyi89
Author

I have read the CLA Document and I hereby sign the CLA

@Psanyi89 Psanyi89 changed the title Missing response parsing for N8N Ai Agent bug:Missing response parsing for N8N Ai Agent Responses #8423 Oct 24, 2025
@Psanyi89 Psanyi89 changed the title bug:Missing response parsing for N8N Ai Agent Responses #8423 bug: Missing response parsing for N8N Ai Agent Responses Oct 24, 2025
@Psanyi89
Author

recheck

@Psanyi89
Author

I have read the CLA Document and I hereby sign the CLA

recheck


Labels

size:M This PR changes 30-99 lines, ignoring generated files.

Projects

Status: Todo

Development

Successfully merging this pull request may close these issues.

1 participant