feat: add and expose api_params for OpenAIGenerator in LLMEvaluator based classes #7987
Conversation
Coveralls: Pull Request Test Coverage Report for Build 9821988850
@shadeMe Could you please take over the review for this PR?
Many thanks for the PR. A couple of changes:

> For the classes that build upon LLMEvaluator, I did not serialize api_params, since we are just passing it to super(). Should I do so?

Yes, you'll need to pass the new init parameter to the default_to_dict/default_from_dict functions.
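A minimal sketch of that serialization path. The stand-in `default_to_dict` below mimics Haystack's `haystack.utils.default_to_dict` so the example is self-contained; the field names are illustrative, not the PR's actual code:

```python
def default_to_dict(obj, **init_parameters):
    # Stand-in for haystack.utils.default_to_dict: records the class name
    # and the init parameters needed to rebuild the object.
    return {"type": type(obj).__name__, "init_parameters": init_parameters}


class LLMEvaluator:
    def __init__(self, instructions="", api="openai", api_params=None):
        self.instructions = instructions
        self.api = api
        # Keep the raw dict so it can be serialized verbatim.
        self.api_params = api_params or {}

    def to_dict(self):
        # api_params must be included here, otherwise from_dict() would
        # rebuild the evaluator without it.
        return default_to_dict(
            self,
            instructions=self.instructions,
            api=self.api,
            api_params=self.api_params,
        )


evaluator = LLMEvaluator(api_params={"api_base_url": "http://localhost:8000/v1"})
print(evaluator.to_dict()["init_parameters"]["api_params"])
```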
> I am unsure of how to add a test for api_base_url, since we would need to have a server in the CI. I can confirm that it works locally when I spin up my own server, but I don't know if that's possible with Haystack's CI.

You can add an integration test that reads the API URL from an env var, mark it with pytest.mark.skipif, and test it locally.
Coveralls: Pull Request Test Coverage Report for Build 9890104460
This is not going to solve 100% of what you are asking for, but it lays the foundation: adding roughly 2-3 lines of code as an elif would allow "azure" as the API (which you can make a PR for if this gets merged). api_params can handle any of the parameters defined by the Azure component as well as generation_kwargs: https://docs.haystack.deepset.ai/reference/generators-api#:~:text=Module%20azure-,AzureOpenAIGenerator,-A%20Generator%20component I didn't add the code to allow for Azure because I wanted this PR to create and test api_params first. Azure support can be added once it has been established that api_params works for OpenAI and OpenAI-compatible servers (which it does seem to, from my testing). In the init of LLMEvaluator, we would simply add the corresponding branch.
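The snippet originally attached here did not survive extraction. A hypothetical sketch of the kind of branch being described, with simple stubs standing in for Haystack's OpenAIGenerator and AzureOpenAIGenerator so the control flow is runnable:

```python
class OpenAIGenerator:
    # Stub for haystack's OpenAIGenerator.
    def __init__(self, **kwargs):
        self.kwargs = kwargs


class AzureOpenAIGenerator:
    # Stub for haystack's AzureOpenAIGenerator.
    def __init__(self, **kwargs):
        self.kwargs = kwargs


def build_generator(api: str, api_params: dict):
    # api_params is forwarded verbatim, so any parameter accepted by the
    # chosen generator (api_base_url, generation_kwargs, ...) passes through.
    if api == "openai":
        return OpenAIGenerator(**api_params)
    elif api == "azure":  # the proposed "2-3 line" addition
        return AzureOpenAIGenerator(**api_params)
    raise ValueError(f"Unsupported API: {api}")
```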
Related Issues
- AzureOpenAI in LLMEvaluator #7946
- Add support for llama.cpp llm evaluator #7718 (sort of)

Proposed Changes:
The general change allows OpenAIGenerator parameters to be set when initializing any of the LLMEvaluators. One benefit of this is api_base_url, which lets us point at a local (or remote) host serving a custom model instead of sending requests to OpenAI. This, in turn, allows for "local" evaluation.

How did you test it?
I modified some of the existing tests and added a test for api_base_url.

Notes for the reviewer
For the classes that build upon LLMEvaluator, I did not serialize api_params, since we are just passing it to super(). Should I do so?
I am unsure of how to add a test for api_base_url, since we would need to have a server in the CI. I can confirm that it works locally when I spin up my own server, but I don't know if that's possible with Haystack's CI.
The test would be something like:
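Along the lines the reviewer suggested, a sketch of such an integration test. It reads the base URL from an env var and is skipped where no local OpenAI-compatible server is configured; the env var name and the ContextRelevanceEvaluator choice are illustrative:

```python
import os

import pytest


@pytest.mark.integration
@pytest.mark.skipif(
    "EVALUATOR_API_BASE_URL" not in os.environ,
    reason="No local OpenAI-compatible server configured",
)
def test_evaluator_with_custom_api_base_url():
    # Imported lazily so collecting the test does not require haystack.
    from haystack.components.evaluators import ContextRelevanceEvaluator

    evaluator = ContextRelevanceEvaluator(
        api_params={"api_base_url": os.environ["EVALUATOR_API_BASE_URL"]}
    )
    result = evaluator.run(
        questions=["What is the capital of France?"],
        contexts=[["Paris is the capital of France."]],
    )
    assert "results" in result
```

Run locally with the server up, e.g. `EVALUATOR_API_BASE_URL=http://localhost:8000/v1 pytest -m integration`; in CI the test is skipped automatically.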
Checklist
- The PR title uses one of the prefixes fix:, feat:, build:, chore:, ci:, docs:, style:, refactor:, perf:, test:.