Commit
0.29.0 version bump (#1140)
* version bump

* simpler lc quickstart

* update installs and imports

* update langchain instrumentation docs

* remove groundedness ref from providers.md

* build docs fixes

* remove key cell

* fix docs build

* fix formatting for stock.md

* remove extra spaces

* undo format change

* update docstrings for hugs and base provider

* openai docstring updates

* hugs docstring update

* update context relevance hugs docstring

* more docstring updates

* remove can be changed messages from openai provider docstrings
joshreini1 authored May 17, 2024
1 parent 32de002 commit 05f7e74
Showing 9 changed files with 513 additions and 558 deletions.
2 changes: 0 additions & 2 deletions docs/trulens_eval/api/providers.md
Original file line number Diff line number Diff line change
@@ -9,8 +9,6 @@

::: trulens_eval.feedback.provider.base.LLMProvider

::: trulens_eval.feedback.groundedness

::: trulens_eval.feedback.groundtruth

::: trulens_eval.feedback.embeddings
32 changes: 1 addition & 31 deletions docs/trulens_eval/evaluation/feedback_implementations/stock.md
@@ -83,7 +83,6 @@ API Reference: [LLMProvider][trulens_eval.feedback.provider.base.LLMProvider].
filters:
- "!^_"


## Embedding-based

API Reference: [Embeddings][trulens_eval.feedback.embeddings.Embeddings].
@@ -111,35 +110,7 @@ API Reference: [Embeddings][trulens_eval.feedback.embeddings.Embeddings].
filters:
- "!^_"

## Combinators

### Groundedness

API Reference: [Groundedness][trulens_eval.feedback.groundedness.Groundedness]

::: trulens_eval.feedback.groundedness.Groundedness
options:
heading_level: 4
show_bases: false
show_root_heading: false
show_root_toc_entry: false
show_source: false
show_docstring_classes: false
show_docstring_modules: false
show_docstring_parameters: false
show_docstring_returns: false
show_docstring_description: true
show_docstring_examples: false
show_docstring_other_parameters: false
show_docstring_attributes: false
show_signature: false
separate_signature: false
summary: false
group_by_category: false
members_order: alphabetical
filters:
- "!^_"

## Combinations

### Ground Truth Agreement

@@ -167,4 +138,3 @@ API Reference: [GroundTruthAgreement][trulens_eval.feedback.groundtruth.GroundTr
members_order: alphabetical
filters:
- "!^_"

199 changes: 60 additions & 139 deletions docs/trulens_eval/tracking/instrumentation/langchain.ipynb
@@ -25,89 +25,92 @@
"source": [
"## Example Usage\n",
"\n",
"Below is a quick example of usage. First, we'll create a standard LLMChain."
"To demonstrate usage, we'll create a standard RAG defined with LCEL.\n",
"\n",
"First, this requires loading data into a vector store."
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# required imports\n",
"from langchain_openai import OpenAI\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.prompts.chat import HumanMessagePromptTemplate, ChatPromptTemplate\n",
"from trulens_eval import TruChain\n",
"import bs4\n",
"from langchain.document_loaders import WebBaseLoader\n",
"\n",
"# typical LangChain rag setup\n",
"full_prompt = HumanMessagePromptTemplate(\n",
" prompt=PromptTemplate(\n",
" template=\n",
" \"Provide a helpful response with relevant background information for the following: {prompt}\",\n",
" input_variables=[\"prompt\"],\n",
" )\n",
"loader = WebBaseLoader(\n",
" web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n",
" bs_kwargs=dict(\n",
" parse_only=bs4.SoupStrainer(\n",
" class_=(\"post-content\", \"post-title\", \"post-header\")\n",
" )\n",
" ),\n",
")\n",
"chat_prompt_template = ChatPromptTemplate.from_messages([full_prompt])\n",
"docs = loader.load()\n",
"\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"llm = OpenAI(temperature=0.9, max_tokens=128)\n",
"chain = LLMChain(llm=llm, prompt=chat_prompt_template, verbose=True)"
"text_splitter = RecursiveCharacterTextSplitter()\n",
"documents = text_splitter.split_documents(docs)\n",
"vectorstore = FAISS.from_documents(documents, embeddings)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To instrument an LLM chain, all that's required is to wrap it using TruChain."
"Then we can define the retriever chain using LCEL."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"🦑 Tru initialized with db url sqlite:///default.sqlite .\n",
"🛑 Secret keys may be written to the database. See the `database_redact_keys` option of `Tru` to prevent this.\n"
]
}
],
"outputs": [],
"source": [
"# instrument with TruChain\n",
"tru_recorder = TruChain(chain)"
"from langchain.schema import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain import hub\n",
"\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"prompt = hub.pull(\"rlm/rag-prompt\")\n",
"llm = ChatOpenAI(model_name=\"gpt-3.5-turbo\", temperature=0)\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"rag_chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
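The `rag_chain` cell above composes stages with `|`, which is the core idea of LangChain Expression Language (LCEL). As a rough, library-free sketch of that idea — toy code with made-up names, not LangChain's actual `Runnable` implementation — each stage wraps a callable and `|` chains them left to right:

```python
# Toy sketch of LCEL-style piping (illustrative only, not LangChain code):
# each Step wraps a function, and `|` composes steps left to right.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Feed this step's output into the next step.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Stand-ins for retriever | prompt | llm | StrOutputParser()
retrieve = Step(lambda q: {"context": f"docs about {q}", "question": q})
prompt = Step(lambda d: f"Answer {d['question']} using {d['context']}")
llm = Step(lambda p: p.upper())
parse = Step(lambda s: s.strip())

chain = retrieve | prompt | llm | parse
print(chain.invoke("agents"))
```

The real LCEL `Runnable` adds batching, streaming, and async variants on top of this composition pattern, but the left-to-right data flow is the same.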
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similarly, LangChain apps defined with LangChain Expression Language (LCEL) are also supported."
"To instrument an LLM chain, all that's required is to wrap it using TruChain."
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"tell me a short joke about {topic}\")\n",
"model = ChatOpenAI()\n",
"output_parser = StrOutputParser()\n",
"\n",
"chain = prompt | model | output_parser\n",
"\n",
"tru_recorder = TruChain(\n",
" chain,\n",
" app_id='Chain1_ChatApplication'\n",
")"
"from trulens_eval import TruChain\n",
"# instrument with TruChain\n",
"tru_recorder = TruChain(rag_chain)"
]
},
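Conceptually, `TruChain(rag_chain)` wraps the app so each invocation is traced. A minimal, library-free sketch of that wrap-and-record pattern — hypothetical names, not TruLens internals, which capture far richer traces — looks like this:

```python
# Toy sketch of the wrap-and-record pattern an instrumentation wrapper
# applies (illustrative only; not how TruChain is implemented).
class Recorder:
    def __init__(self, app):
        self.app = app
        self.records = []

    def invoke(self, x):
        # Run the wrapped app, then log the call before returning.
        out = self.app(x)
        self.records.append({"input": x, "output": out})
        return out

app = lambda q: f"answer to {q}"
recorder = Recorder(app)
recorder.invoke("what is task decomposition?")
print(recorder.records)
```

Because the wrapper delegates to the original app, instrumented behavior is unchanged from the caller's point of view; the records accumulate on the side.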
{
@@ -134,10 +137,10 @@
"\n",
"provider = OpenAI()\n",
"\n",
"context = TruChain.select_context(chain)\n",
"context = TruChain.select_context(rag_chain)\n",
"\n",
"f_context_relevance = (\n",
" Feedback(provider.qs_relevance)\n",
" Feedback(provider.context_relevance)\n",
" .on_input()\n",
" .on(context)\n",
" .aggregate(np.mean)\n",
@@ -160,7 +163,7 @@
"outputs": [],
"source": [
"from trulens_eval.app import App\n",
"context = App.select_context(chain)"
"context = App.select_context(rag_chain)"
]
},
{
@@ -183,7 +186,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -217,7 +220,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -246,63 +249,9 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Module langchain*\n",
" Class langchain.agents.agent.BaseMultiActionAgent\n",
" Method plan: (self, intermediate_steps: 'List[Tuple[AgentAction, str]]', callbacks: 'Callbacks' = None, **kwargs: 'Any') -> 'Union[List[AgentAction], AgentFinish]'\n",
" Method aplan: (self, intermediate_steps: 'List[Tuple[AgentAction, str]]', callbacks: 'Callbacks' = None, **kwargs: 'Any') -> 'Union[List[AgentAction], AgentFinish]'\n",
" Class langchain.agents.agent.BaseSingleActionAgent\n",
" Method plan: (self, intermediate_steps: 'List[Tuple[AgentAction, str]]', callbacks: 'Callbacks' = None, **kwargs: 'Any') -> 'Union[AgentAction, AgentFinish]'\n",
" Method aplan: (self, intermediate_steps: 'List[Tuple[AgentAction, str]]', callbacks: 'Callbacks' = None, **kwargs: 'Any') -> 'Union[AgentAction, AgentFinish]'\n",
" Class langchain.chains.base.Chain\n",
" Method __call__: (self, inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Union[List[langchain_core.callbacks.base.BaseCallbackHandler], langchain_core.callbacks.base.BaseCallbackManager, NoneType] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) -> Dict[str, Any]\n",
" Method invoke: (self, input: Dict[str, Any], config: Optional[langchain_core.runnables.config.RunnableConfig] = None, **kwargs: Any) -> Dict[str, Any]\n",
" Method ainvoke: (self, input: Dict[str, Any], config: Optional[langchain_core.runnables.config.RunnableConfig] = None, **kwargs: Any) -> Dict[str, Any]\n",
" Method run: (self, *args: Any, callbacks: Union[List[langchain_core.callbacks.base.BaseCallbackHandler], langchain_core.callbacks.base.BaseCallbackManager, NoneType] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) -> Any\n",
" Method arun: (self, *args: Any, callbacks: Union[List[langchain_core.callbacks.base.BaseCallbackHandler], langchain_core.callbacks.base.BaseCallbackManager, NoneType] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) -> Any\n",
" Method _call: (self, inputs: Dict[str, Any], run_manager: Optional[langchain_core.callbacks.manager.CallbackManagerForChainRun] = None) -> Dict[str, Any]\n",
" Method _acall: (self, inputs: Dict[str, Any], run_manager: Optional[langchain_core.callbacks.manager.AsyncCallbackManagerForChainRun] = None) -> Dict[str, Any]\n",
" Method acall: (self, inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Union[List[langchain_core.callbacks.base.BaseCallbackHandler], langchain_core.callbacks.base.BaseCallbackManager, NoneType] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) -> Dict[str, Any]\n",
" Class langchain.memory.chat_memory.BaseChatMemory\n",
" Method save_context: (self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None\n",
" Method clear: (self) -> None\n",
" Class langchain_core.chat_history.BaseChatMessageHistory\n",
" Class langchain_core.documents.base.Document\n",
" Class langchain_core.language_models.base.BaseLanguageModel\n",
" Class langchain_core.language_models.llms.BaseLLM\n",
" Class langchain_core.load.serializable.Serializable\n",
" Class langchain_core.memory.BaseMemory\n",
" Method save_context: (self, inputs: 'Dict[str, Any]', outputs: 'Dict[str, str]') -> 'None'\n",
" Method clear: (self) -> 'None'\n",
" Class langchain_core.prompts.base.BasePromptTemplate\n",
" Class langchain_core.retrievers.BaseRetriever\n",
" Method _get_relevant_documents: (self, query: 'str', *, run_manager: 'CallbackManagerForRetrieverRun') -> 'List[Document]'\n",
" Method get_relevant_documents: (self, query: 'str', *, callbacks: 'Callbacks' = None, tags: 'Optional[List[str]]' = None, metadata: 'Optional[Dict[str, Any]]' = None, run_name: 'Optional[str]' = None, **kwargs: 'Any') -> 'List[Document]'\n",
" Method aget_relevant_documents: (self, query: 'str', *, callbacks: 'Callbacks' = None, tags: 'Optional[List[str]]' = None, metadata: 'Optional[Dict[str, Any]]' = None, run_name: 'Optional[str]' = None, **kwargs: 'Any') -> 'List[Document]'\n",
" Method _aget_relevant_documents: (self, query: 'str', *, run_manager: 'AsyncCallbackManagerForRetrieverRun') -> 'List[Document]'\n",
" Class langchain_core.runnables.base.RunnableSerializable\n",
" Class langchain_core.tools.BaseTool\n",
" Method _arun: (self, *args: 'Any', **kwargs: 'Any') -> 'Any'\n",
" Method _run: (self, *args: 'Any', **kwargs: 'Any') -> 'Any'\n",
"\n",
"Module trulens_eval.*\n",
" Class trulens_eval.feedback.feedback.Feedback\n",
" Method __call__: (self, *args, **kwargs) -> 'Any'\n",
" Class trulens_eval.utils.langchain.WithFeedbackFilterDocuments\n",
" Method _get_relevant_documents: (self, query: str, *, run_manager) -> List[langchain_core.documents.base.Document]\n",
" Method get_relevant_documents: (self, query: 'str', *, callbacks: 'Callbacks' = None, tags: 'Optional[List[str]]' = None, metadata: 'Optional[Dict[str, Any]]' = None, run_name: 'Optional[str]' = None, **kwargs: 'Any') -> 'List[Document]'\n",
" Method aget_relevant_documents: (self, query: 'str', *, callbacks: 'Callbacks' = None, tags: 'Optional[List[str]]' = None, metadata: 'Optional[Dict[str, Any]]' = None, run_name: 'Optional[str]' = None, **kwargs: 'Any') -> 'List[Document]'\n",
" Method _aget_relevant_documents: (self, query: 'str', *, run_manager: 'AsyncCallbackManagerForRetrieverRun') -> 'List[Document]'\n",
"\n"
]
}
],
"outputs": [],
"source": [
"from trulens_eval.tru_chain import LangChainInstrument\n",
"LangChainInstrument().print_instrumentation()"
@@ -330,37 +279,9 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Components:\n",
"\tTruChain (Other) at 0x2b60a3660 with path __app__\n",
"\tLLMChain (Other) at 0x2b5cdb3e0 with path __app__.app\n",
"\tPromptTemplate (Custom) at 0x2b605e580 with path __app__.app.prompt\n",
"\tChatOpenAI (Custom) at 0x2b5cdb4d0 with path __app__.app.llm\n",
"\tStrOutputParser (Custom) at 0x2b60a3750 with path __app__.app.output_parser\n",
"\n",
"Methods:\n",
"Object at 0x2b5cdb3e0:\n",
"\t<function Chain.__call__ at 0x2a6c17560> with path __app__.app\n",
"\t<function Chain.invoke at 0x2a6c16de0> with path __app__.app\n",
"\t<function Chain.ainvoke at 0x2a6c16e80> with path __app__.app\n",
"\t<function Chain.run at 0x2a6c17b00> with path __app__.app\n",
"\t<function Chain.arun at 0x2a6c17d80> with path __app__.app\n",
"\t<function LLMChain._call at 0x2a6c6c2c0> with path __app__.app\n",
"\t<function LLMChain._acall at 0x2a6c6c860> with path __app__.app\n",
"\t<function Chain.acall at 0x2a6c177e0> with path __app__.app\n",
"\t<function Chain._call at 0x2a6c17380> with path __app__.app\n",
"\t<function Chain._acall at 0x2a6c17420> with path __app__.app\n",
"\t<function Runnable.invoke at 0x2a669ba60> with path __app__.app\n",
"\t<function Runnable.ainvoke at 0x2a669bb00> with path __app__.app\n"
]
}
],
"outputs": [],
"source": [
"async_tc_recorder.print_instrumented()"
]
24 changes: 10 additions & 14 deletions trulens_eval/examples/quickstart/langchain_quickstart.ipynb
@@ -28,7 +28,7 @@
"metadata": {},
"outputs": [],
"source": [
"# ! pip install trulens_eval openai langchain chromadb langchainhub bs4 tiktoken"
"# ! pip install trulens_eval openai langchain langchain-openai faiss-cpu bs4 tiktoken"
]
},
{
@@ -58,17 +58,13 @@
"# Imports main tools:\n",
"from trulens_eval import TruChain, Tru\n",
"tru = Tru()\n",
"tru.reset_database()\n",
"\n",
"# Imports from LangChain to build app\n",
"import bs4\n",
"from langchain import hub\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.document_loaders import WebBaseLoader\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema import StrOutputParser\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_core.runnables import RunnablePassthrough"
]
},
@@ -110,17 +106,17 @@
"metadata": {},
"outputs": [],
"source": [
"text_splitter = RecursiveCharacterTextSplitter(\n",
" chunk_size=1000,\n",
" chunk_overlap=200\n",
")\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"splits = text_splitter.split_documents(docs)\n",
"embeddings = OpenAIEmbeddings()\n",
"\n",
"vectorstore = Chroma.from_documents(\n",
" documents=splits,\n",
" embedding=OpenAIEmbeddings()\n",
")"
"from langchain_community.vectorstores import FAISS\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter()\n",
"documents = text_splitter.split_documents(docs)\n",
"vectorstore = FAISS.from_documents(documents, embeddings)"
]
},
{
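The quickstart now splits documents with `RecursiveCharacterTextSplitter` before indexing them in FAISS. As a rough illustration of what character splitting with overlap does — a naive stand-in, not the LangChain splitter, which also respects separators like paragraphs and sentences — consider:

```python
# Naive fixed-size character splitter with overlap, a toy stand-in for
# RecursiveCharacterTextSplitter (illustrative only).
def split_text(text, chunk_size=20, overlap=5):
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Advance by chunk_size minus overlap so adjacent chunks share text.
        start += chunk_size - overlap
    return chunks

chunks = split_text(
    "LLM-powered autonomous agents use planning, memory, and tools.",
    chunk_size=30,
    overlap=10,
)
print(len(chunks), chunks[0])
```

The overlap means the tail of each chunk reappears at the head of the next, so retrieval is less likely to lose context at chunk boundaries.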
