Open
Description
Currently, LLM responses are added to the conversation as-is. For thinking models this means that if think-token content (e.g. a `<think>...</think>` block) is returned as part of the response, it is added verbatim. This is usually not desired, so add logic to strip these sections, especially for auto-multi-turn conversations.
Consider adding a convenience method for other use cases.
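The stripping logic could look roughly like the sketch below. The tag name `<think>` and the function name `strip_think_sections` are assumptions for illustration; reasoning models differ in how they delimit their thinking output.

```python
import re

# Assumed delimiter: a <think>...</think> block at any position in the
# response. DOTALL lets the block span multiple lines; non-greedy matching
# keeps multiple blocks separate.
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_think_sections(response: str) -> str:
    """Remove think-token sections so only the final answer is stored."""
    return THINK_BLOCK.sub("", response).strip()

cleaned = strip_think_sections("<think>reason step by step...</think>Final answer.")
print(cleaned)  # -> Final answer.
```

A convenience method like this could be exposed publicly for the other use cases mentioned above, while the multi-turn path applies it automatically before appending the response to the history.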