"""Evaluate whether the AI-generated response is bad, and if so, request an alternate expert answer.
If no expert answer is available, the query is still logged for subject-matter experts (SMEs) to answer.
@@ -65,6 +71,8 @@ def validate(
prompt (str, optional): The actual prompt passed to the LLM that generated the response, combining query, context, and system instructions into one string.
form_prompt (Callable[[str, str], str], optional): Optional function that formats the prompt from the query and context. Cannot be provided together with prompt; supply one or the other. The function should take query and context as parameters and return a formatted prompt string. If not provided, a default prompt formatter is used. To include a system prompt or any other special instructions for your LLM, incorporate them directly in your custom form_prompt() definition.
metadata (dict, optional): Additional custom metadata to associate with the query logged in the Codex Project.
+            options (ProjectValidateOptions, optional): Typed dict of advanced configuration options for the Trustworthy Language Model.
+            quality_preset (Literal["best", "high", "medium", "low", "base"], optional): The quality preset to use for the TLM or Trustworthy RAG API.
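The form_prompt parameter described above expects a callable that takes the query and the retrieved context and returns a single prompt string. A minimal sketch of such a formatter (the template wording and the system instructions here are illustrative assumptions, not the SDK's built-in default formatter):

```python
def form_prompt(query: str, context: str) -> str:
    """Combine system instructions, context, and query into one prompt string.

    Hypothetical template; adapt the wording to your own application.
    """
    return (
        "You are a helpful assistant. Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# Example usage: produce the formatted prompt for a query/context pair.
prompt = form_prompt(
    "What is the return policy?",
    "Returns are accepted within 30 days of purchase.",
)
print(prompt)
```

Since form_prompt and prompt are mutually exclusive, pass a function like this when you want the formatting applied per call, or build the string yourself and pass it as prompt instead.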