feat: add batch run beta component #5489
base: main
Conversation
Attached is a flow to test the component.

```python
conversation.append({"role": "user", "content": text})
# Invoke the model
response = model.invoke(conversation)
```
No need to do this. LangChain has a batch method. Also, it is better to use async methods, as they can help with performance.
https://python.langchain.com/docs/how_to/lcel_cheatsheet/#batch-a-runnable
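The pattern the review suggests can be sketched as follows. This is a minimal illustration, not the component's actual code: `StubModel` is a hypothetical stand-in for a LangChain chat model, and its `abatch` mirrors the semantics of LangChain's `Runnable.abatch` (run one inference per input, scheduled concurrently) without requiring an API key.

```python
import asyncio

class StubModel:
    """Hypothetical stand-in for a LangChain chat model."""

    async def ainvoke(self, conversation):
        # Simulate a single async model call that echoes the last user message.
        await asyncio.sleep(0)
        return f"echo: {conversation[-1]['content']}"

    async def abatch(self, conversations):
        # Process all inputs concurrently instead of calling .invoke()
        # once per loop iteration, as the review comment suggests.
        return await asyncio.gather(*(self.ainvoke(c) for c in conversations))

texts = ["hello", "world"]
conversations = [[{"role": "user", "content": t}] for t in texts]
results = asyncio.run(StubModel().abatch(conversations))
print(results)  # one generated string per input text
```

With a real LangChain model, the loop body collapses to a single `await model.abatch(conversations)` call.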
So what's the proposal? Change this component or create a "batch mode" for models?
Updated
This component takes multiple texts from a LF DataFrame object, processes each with a language model, and returns a DataFrame with the generated content per row.
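The described DataFrame-in/DataFrame-out shape can be sketched as below. This is a hedged illustration, not the component's implementation: `batch_run`, the `text` column name, and the `model_response` output column are all assumptions, and plain pandas stands in for the LF DataFrame object.

```python
import pandas as pd

def batch_run(model_fn, df, text_column="text"):
    # model_fn stands in for a model's batch call (e.g. model.batch(texts)):
    # it maps a list of input texts to a list of generated strings.
    results = model_fn(df[text_column].tolist())
    out = df.copy()
    # One generated result per input row, as the component description states.
    out["model_response"] = results
    return out

df = pd.DataFrame({"text": ["hi", "bye"]})
result = batch_run(lambda texts: [t.upper() for t in texts], df)
print(result["model_response"].tolist())  # ['HI', 'BYE']
```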