Commit

Context filtering guardrails (#1192)
* fix select_context to return individual items, not a collected list

* display utilities, context filtering, oh my

* Update trulens_eval/trulens_eval/tru_chain.py

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* undo bad ellipsis change

* change moved import

* update llama quickstart

* get llama working

* llama updates

* add docstring examples

* guardrails docs page

* allow tabbed content setting in mkdocs yml

* formatting

* api ref links

* api ref files for guardrails

* api ref to guardrails in mkdocs

* update in trubot

* fix typo

* add missing init

* api ref updates

* docstring updates

* docstring updates

* base context filter for custom apps

* guardrails in quickstart

* make display util more efficient

* switch openai model to std

* drop unnecessary import

* docs for base guardrails

* remove unnecessary name

* drop unnecessary args

* using sentence

* fix select_context

* langchain quickstart updates

* fix link

* langchain feedback type error handling

* llama feedback type error handling

* base feedback type error handling

* pass context as default argument to lambda function in base

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* revert ellipsis change

* fix test failure

* move docstrings, simplify

* accept directionality from feedback

* set wrapper.__dict__ too

* filter on demand

* format, remove unused imports

* format

* format

* assert llama-index installed

* assert langchain installed

* test llama guardrails optional mod

* format

* copy signature instead of dict

* add note

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
sfc-gh-jreini and ellipsis-dev[bot] authored Jun 21, 2024
1 parent 98cb4a0 commit 36bf645
Showing 21 changed files with 2,436 additions and 291 deletions.
3 changes: 3 additions & 0 deletions docs/trulens_eval/api/guardrails/index.md
@@ -0,0 +1,3 @@
# TruLens Guardrails

::: trulens_eval.guardrails.base
3 changes: 3 additions & 0 deletions docs/trulens_eval/api/guardrails/langchain.md
@@ -0,0 +1,3 @@
# Guardrails with Langchain

::: trulens_eval.guardrails.langchain
3 changes: 3 additions & 0 deletions docs/trulens_eval/api/guardrails/llama.md
@@ -0,0 +1,3 @@
# Guardrails with Llama-Index

::: trulens_eval.guardrails.llama
93 changes: 93 additions & 0 deletions docs/trulens_eval/guardrails/index.md
@@ -0,0 +1,93 @@
# Guardrails

Guardrails play a crucial role in ensuring that only high-quality output is produced by LLM apps. By setting guardrail thresholds based on feedback functions, we can directly leverage the same trusted evaluation metrics used for observability, *at inference time*.

## Typical guardrail usage

Typical guardrails *only* allow decisions based on the output, and have no impact on the intermediate steps of an LLM application.

![Standard Guardrails Flow](simple_guardrail_flow.png)

## _TruLens_ guardrails for internal steps

While guardrails are commonly used to block unsafe or inappropriate output from reaching the end user, _TruLens_ guardrails can also be leveraged to improve the internal processing of LLM apps.

In a RAG application, context filter guardrails can evaluate the *context relevance* of each retrieved chunk and pass only the relevant chunks to the LLM for generation. Doing so reduces the chance of hallucination and reduces token usage.

![Context Filtering with Guardrails](guardrail_context_filtering.png)
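
Conceptually, the filter scores each retrieved chunk with the feedback function and keeps only the chunks whose score clears the threshold. A minimal illustrative sketch of the idea (`filter_chunks`, `feedback_fn`, `chunks`, and `threshold` are placeholder names, not the actual TruLens internals):

```python
# Illustrative sketch only, not the TruLens implementation.
# `feedback_fn` scores a (query, chunk) pair between 0 and 1;
# chunks scoring at or above `threshold` are passed on to the LLM.
def filter_chunks(query: str, chunks: list, feedback_fn, threshold: float = 0.5) -> list:
    return [chunk for chunk in chunks if feedback_fn(query, chunk) >= threshold]
```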

## Using _TruLens_ guardrails

_TruLens_ context filter guardrails are easy to add to your app built with custom python, _Langchain_, or _Llama-Index_.

!!! example "Using context filter guardrails"

=== "python"

```python
from trulens_eval.guardrails.base import context_filter

from trulens_eval import Feedback, Select

feedback = (
    Feedback(provider.context_relevance)
    .on_input()
    .on(Select.RecordCalls.retrieve.rets)
)

class RAG_from_scratch:
    @context_filter(feedback, 0.5)
    def retrieve(self, query: str) -> list:
        results = vector_store.query(
            query_texts=query,
            n_results=3
        )
        return [doc for sublist in results['documents'] for doc in sublist]
    ...
```

=== "with _Langchain_"

```python
from trulens_eval.guardrails.langchain import WithFeedbackFilterDocuments

feedback = (
    Feedback(provider.context_relevance)
    .on_input()
    .on(Select.RecordCalls.retrieve.rets)
)

filtered_retriever = WithFeedbackFilterDocuments.of_retriever(
    retriever=retriever,
    feedback=feedback,
    threshold=0.5
)

rag_chain = (
    {"context": filtered_retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
```

=== "with _Llama-Index_"

```python
from trulens_eval.guardrails.llama import WithFeedbackFilterNodes

feedback = (
    Feedback(provider.context_relevance)
    .on_input()
    .on(Select.RecordCalls.retrieve.rets)
)

filtered_query_engine = WithFeedbackFilterNodes(
    query_engine,
    feedback=feedback,
    threshold=0.5
)
```
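
Assuming the wrapper exposes the same query interface as the engine it wraps, the filtered engine can then be used in place of the original; a brief usage sketch (the question text is just an illustrative placeholder):

```python
# Queries now run context filtering on retrieved nodes before response synthesis.
response = filtered_query_engine.query("What did the author do growing up?")
print(response)
```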

!!! warning

A feedback function used as a guardrail must return only a float score; it cannot also return reasons.
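
For instance, a provider method such as `context_relevance` returns a bare float and can back a guardrail, while a reasons-returning variant such as `context_relevance_with_cot_reasons` (which yields a score along with chain-of-thought reasons) should not be used here. A hedged sketch, assuming these provider methods are available:

```python
# OK as a guardrail: returns only a float score per context chunk.
guardrail_feedback = (
    Feedback(provider.context_relevance)
    .on_input()
    .on(Select.RecordCalls.retrieve.rets)
)

# Not suitable as a guardrail: also returns reasons alongside the score,
# which the context filter cannot consume.
# Feedback(provider.context_relevance_with_cot_reasons)
```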

TruLens has native Python and framework-specific tooling for implementing guardrails. Read more about the available guardrails in [native Python](../api/guardrails/index), [Langchain](../api/guardrails/langchain), and [Llama-Index](../api/guardrails/llama).
10 changes: 10 additions & 0 deletions mkdocs.yml
@@ -29,6 +29,9 @@ markdown_extensions:
# - pymdownx.mark
# - pymdownx.smartsymbols
- pymdownx.superfences
- pymdownx.tabbed:
alternate_style: true
- pymdownx.details
# - pymdownx.tasklist:
# custom_checkbox: true
#- pymdownx.tilde
@@ -162,6 +165,7 @@ theme:
- toc.follow
# - toc.integrate
- content.code.copy
- content.tabs

nav:
- 🏠 Home: index.md
@@ -227,6 +231,8 @@ nav:
- Where to Log: trulens_eval/tracking/logging/where_to_log/index.md
- ❄️ Logging in Snowflake: trulens_eval/tracking/logging/where_to_log/log_in_snowflake.md
- πŸ““ Logging Methods: trulens_eval/tracking/logging/logging.ipynb

- πŸ›‘οΈ Guardrails: trulens_eval/guardrails/index.md
- πŸ” Guides:
# PLACEHOLDER: - trulens_eval/guides/index.md
- Any LLM App: trulens_eval/guides/use_cases_any.md
@@ -260,6 +266,10 @@ nav:
- trulens_eval/api/endpoint/index.md
- OpenAI: trulens_eval/api/endpoint/openai.md
- 𝄒 Instruments: trulens_eval/api/instruments.md
- πŸ›‘οΈ Guardrails:
- trulens_eval/api/guardrails/index.md
- πŸ¦œοΈπŸ”— Langchain Guardrails: trulens_eval/api/guardrails/langchain.md
- πŸ¦™ Llama-Index Guardrails: trulens_eval/api/guardrails/llama.md
- πŸ—„ Database:
- trulens_eval/api/database/index.md
- ✨ Migration: trulens_eval/api/database/migration.md
@@ -118,7 +118,7 @@
"from trulens_eval import Select\n",
"from trulens_eval import TP\n",
"from trulens_eval import Tru\n",
"from trulens_eval.utils.langchain import WithFeedbackFilterDocuments\n",
"from trulens_eval.guardrails.langchain import WithFeedbackFilterDocuments\n",
"\n",
"pp = PrettyPrinter()\n",
"\n",