Enable tool support for ollama #164
Conversation
Just reviewing the code, it looks good!
Can you add some tests? I'd like to see tests for the following:
- `for_api`
- `get_parameters`
- `call` (using Mimic to avoid doing a live call)
- `do_process_response`
Having tests really helps the maintainability of the project since I'm not easily able to test all the different chat models.
Feel free to ask questions if you need help with any of that. Thanks!
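A minimal sketch of what a Mimic-stubbed test for `call` could look like. Everything beyond the `ChatOllamaAI` module name is an assumption for illustration (the stubbed function name, the response shape, and the `Message` helpers are hypothetical, not the PR's actual code):

```elixir
# Hypothetical sketch: stub the HTTP-level request so the test
# exercises call/3 without a live Ollama server.
defmodule LangChain.ChatModels.ChatOllamaAITest do
  use ExUnit.Case
  use Mimic

  alias LangChain.ChatModels.ChatOllamaAI

  test "call/3 processes a response without a live server" do
    # Assumed internal function name; stubbed to return a canned reply.
    ChatOllamaAI
    |> stub(:do_api_request, fn _model, _messages, _tools ->
      {:ok, %{"message" => %{"role" => "assistant", "content" => "hi"}}}
    end)

    {:ok, model} = ChatOllamaAI.new(%{model: "llama3.1"})

    assert {:ok, _messages} = ChatOllamaAI.call(model, [], [])
  end
end
```

The point of the stub is that the test pins down the request/response handling, which is exactly the part a maintainer without every chat model installed cannot exercise live.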
(force-pushed 1792a81 to daedf50)
I added several tests to the best of my knowledge (I'm new to langchain). Please have a look…
(force-pushed daedf50 to 85f00b9)
Thank you for adding tests! Sorry I was slow to respond.
I added some specific requests. The one example I gave about `assert [%{"function" => _} | _] = data.tools` applies to all the other tests as well.
> `data = ChatOllamaAI.for_api(ollama_ai, [], [fun])`
> `assert [%{"function" => _} | _] = data.tools`
One of the main goals with the test is to verify it's formatting the data as the targeted LLM expects it, not just that it's a map with a "function" key. Also, because of the test setup, you can safely make assumptions about the expected data. For example:

`assert [%{} = result_data] = data.tools`
`# now create a result_data assertion`

We want to test that the `LangChain.Function` data structure is structured how the supported Ollama server expects it to be. What does it expect things to be called? Anything different or unusual?
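A fuller assertion along those lines might look like the sketch below. Ollama uses an OpenAI-style tool layout (a `"type" => "function"` wrapper around `name`, `description`, and a JSON-schema `parameters` map), but the specific function name and schema here are invented for illustration, not taken from the PR:

```elixir
# Sketch: assert the complete serialized tool structure, not just
# the presence of a "function" key. Field layout per Ollama's
# OpenAI-style tool format; values are illustrative.
assert [%{} = result_data] = data.tools

assert result_data == %{
         "type" => "function",
         "function" => %{
           "name" => "get_weather",
           "description" => "Gets the current weather",
           "parameters" => %{
             "type" => "object",
             "properties" => %{
               "city" => %{"type" => "string"}
             },
             "required" => ["city"]
           }
         }
       }
```

An exact-equality assertion like this catches renamed or missing fields that a loose pattern match would silently accept.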
I now assert against the full structure.
Will this PR be part of the 0.3.0 release?
I'd love to get this merged in if someone wants to pick it up and help finish the changes!
Sorry, I had a lot on my plate… I can take another shot. Since then, ollama has added async tool support, which my previous changes don't cover. But maybe I'll clean up this PR first, then look at async later.
(force-pushed de67662 to 58a01df)
(force-pushed 58a01df to bfee9e3)
@brainlid Rebased and made the changes.
Thanks for the contribution and for making the changes! That was a large chunk of work.
Hey guys, I'm testing these changes in the demo project and I'm getting:
Wondering if some of the interfaces have changed.
@mustela Did you set … ? The code does not support streaming tool support (yet).
That did the trick! Thank you @alappe ❤️! Looking forward to having streaming tool support 🙏🏼
Adds basic support for tools with ollama v0.3, tested with the hairbrush example: