dedup (#517)
* dedup

* delete dups

* another try

* make one assets and images folder instead of two

---------

Co-authored-by: Josh Reini <[email protected]>
piotrm0 and joshreini1 authored Oct 26, 2023
1 parent 3b70a57 commit 3d3ad17
Showing 40 changed files with 28 additions and 28 deletions.
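Every change in this diff follows one pattern: references to the old `Assets/image/` folder are rewritten to point at the single consolidated `assets/images/` folder, and the duplicated binaries are deleted or renamed into it. As a rough, hypothetical sketch only (the paths, file extensions, and approach below are assumptions, not taken from this commit), a consolidation like this could be scripted as:

```python
# Hypothetical sketch: merge the old "Assets/image" folder into a single
# lowercase "assets/images" folder and rewrite references in text files.
# The paths and extensions below are assumptions, not taken from the commit.
from pathlib import Path
import shutil

OLD_DIR = Path("docs/trulens_eval/Assets/image")    # assumed old location
NEW_DIR = Path("docs/trulens_eval/assets/images")   # assumed new location
TEXT_EXTS = {".md", ".py", ".ipynb"}                 # file types whose links get rewritten


def consolidate(repo_root: Path) -> None:
    old_dir = repo_root / OLD_DIR
    new_dir = repo_root / NEW_DIR
    new_dir.mkdir(parents=True, exist_ok=True)

    # Move each file out of the old folder; drop it if a copy already exists.
    for src in old_dir.glob("*"):
        dst = new_dir / src.name
        if dst.exists():
            src.unlink()                       # duplicate: delete the extra copy
        else:
            shutil.move(str(src), str(dst))    # unique: move it to the new folder

    # Remove the now-empty old directories, if possible.
    for leftover in (old_dir, old_dir.parent):
        try:
            leftover.rmdir()
        except OSError:
            pass

    # Rewrite references such as "Assets/image/Foo.png" -> "assets/images/Foo.png".
    for path in repo_root.rglob("*"):
        if ".git" in path.parts or not path.is_file() or path.suffix not in TEXT_EXTS:
            continue
        text = path.read_text(encoding="utf-8")
        updated = text.replace("Assets/image/", "assets/images/")
        if updated != text:
            path.write_text(updated, encoding="utf-8")


if __name__ == "__main__":
    consolidate(Path("."))
```

A plain move followed by `git add -A` is typically enough for git to detect the renamed binaries (hence the "renamed without changes" entries below); `git mv` would accomplish the same thing in one step.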
2 changes: 1 addition & 1 deletion README.md
@@ -27,7 +27,7 @@ TruLens-Eval has two key value propositions:
* Anything that is tracked by the instrumentation can be evaluated!

The process for building your evaluated and tracked LLM application with TruLens is shown below 👇
-![Architecture Diagram](https://www.trulens.org/Assets/image/TruLens_Architecture.png)
+![Architecture Diagram](https://www.trulens.org/assets/images/TruLens_Architecture.png)

### Installation and setup

20 files renamed without changes.
Binary file removed docs/trulens_eval/Assets/image/Chain_Explore.png
Binary file not shown.
Binary file removed docs/trulens_eval/Assets/image/Evaluations.png
Binary file not shown.
Binary file removed docs/trulens_eval/Assets/image/Leaderboard.png
Binary file not shown.
Binary file not shown.
2 changes: 1 addition & 1 deletion docs/trulens_eval/gh_top_intro.md
@@ -27,7 +27,7 @@ TruLens-Eval has two key value propositions:
* Anything that is tracked by the instrumentation can be evaluated!

The process for building your evaluated and tracked LLM application with TruLens is shown below 👇
-![Architecture Diagram](https://www.trulens.org/Assets/image/TruLens_Architecture.png)
+![Architecture Diagram](https://www.trulens.org/assets/images/TruLens_Architecture.png)

### Installation and setup

4 changes: 2 additions & 2 deletions docs/trulens_eval/intro.md
@@ -1,6 +1,6 @@
# Welcome to TruLens-Eval!

-![TruLens](https://www.trulens.org/Assets/image/Neural_Network_Explainability.png)
+![TruLens](https://www.trulens.org/assets/images/Neural_Network_Explainability.png)

Evaluate and track your LLM experiments with TruLens. As you work on your models and prompts, TruLens-Eval supports the iterative development of a wide range of LLM applications by wrapping your application to log key metadata across the entire chain (or off chain if your project does not use chains) on your local machine.

@@ -20,7 +20,7 @@ TruLens-Eval has two key value propositions:

The process for building your evaluated and tracked LLM application with TruLens is below 👇

-![Architecture Diagram](https://www.trulens.org/Assets/image/TruLens_Architecture.png)
+![Architecture Diagram](https://www.trulens.org/assets/images/TruLens_Architecture.png)

## Installation and Setup

4 changes: 2 additions & 2 deletions trulens_eval/README.md
@@ -1,6 +1,6 @@
# Welcome to TruLens-Eval!

-![TruLens](https://www.trulens.org/Assets/image/Neural_Network_Explainability.png)
+![TruLens](https://www.trulens.org/assets/images/Neural_Network_Explainability.png)

Evaluate and track your LLM experiments with TruLens. As you work on your models and prompts, TruLens-Eval supports the iterative development of a wide range of LLM applications by wrapping your application to log key metadata across the entire chain (or off chain if your project does not use chains) on your local machine.

@@ -20,7 +20,7 @@ TruLens-Eval has two key value propositions:

The process for building your evaluated and tracked LLM application with TruLens is below 👇

-![Architecture Diagram](https://www.trulens.org/Assets/image/TruLens_Architecture.png)
+![Architecture Diagram](https://www.trulens.org/assets/images/TruLens_Architecture.png)

## Installation and Setup

@@ -302,7 +302,7 @@
"\n",
"Note: Average feedback values are returned and displayed in a range from 0 (worst) to 1 (best).\n",
"\n",
"![Chain Leaderboard](https://www.trulens.org/Assets/image/Leaderboard.png)\n",
"![Chain Leaderboard](https://www.trulens.org/assets/images/Leaderboard.png)\n",
"\n",
"To dive deeper on a particular chain, click \"Select Chain\".\n",
"\n",
@@ -312,13 +312,13 @@
"\n",
"The evaluations tab provides record-level metadata and feedback on the quality of your LLM application.\n",
"\n",
"![Evaluations](https://www.trulens.org/Assets/image/Leaderboard.png)\n",
"![Evaluations](https://www.trulens.org/assets/images/Leaderboard.png)\n",
"\n",
"### Deep dive into full chain metadata\n",
"\n",
"Click on a record to dive deep into all of the details of your chain stack and underlying LLM, captured by tru_chain_recorder.\n",
"\n",
"![Explore a Chain](https://www.trulens.org/Assets/image/Chain_Explore.png)\n",
"![Explore a Chain](https://www.trulens.org/assets/images/Chain_Explore.png)\n",
"\n",
"If you prefer the raw format, you can quickly get it using the \"Display full chain json\" or \"Display full record json\" buttons at the bottom of the page."
]
6 changes: 3 additions & 3 deletions trulens_eval/examples/quickstart/dashboard_appui.ipynb
@@ -8,11 +8,11 @@
"\n",
"This notebook describes how to run your apps from the streamlit dashboard. Following this notebook, you should be able to access your apps and interact with them within the streamlit dashboard under the **Apps** page (see screenshot below). Make sure to check the **Setting up** section below to get your app in the list of apps on that page.\n",
"\n",
"![App Runner](https://www.trulens.org/Assets/image/appui/apps.png)\n",
"![App Runner](https://www.trulens.org/assets/images/appui/apps.png)\n",
"\n",
"Clicking *New session* under any of these apps will bring up an empty transcript of the interactions between the user (you) and the app (see screenshot below). Typing a message under *Your message* on the bottom of the window, and pressing enter, will run your app with that specified message as input, produce the app output, and add both to the chat transcript under the *Records* column.\n",
"\n",
"![Blank Session](https://www.trulens.org/Assets/image/appui/blank_session.png)\n",
"![Blank Session](https://www.trulens.org/assets/images/appui/blank_session.png)\n",
"\n",
"Several other inputs are present on this page which control what about the produced transcript record to show alongside their inputs/outputs.\n",
"\n",
@@ -24,7 +24,7 @@
"\n",
"An example of a running session with several selectors is shown in the following screenshot:\n",
"\n",
"![Running Session](https://www.trulens.org/Assets/image/appui/running_session.png)\n",
"![Running Session](https://www.trulens.org/assets/images/appui/running_session.png)\n",
"\n",
"The session is preserved when navigating away from this page, letting you inspect the produced records in the **Evaluation** page, for example. To create a new session, you first need to end the existing one by pressing the \"End session\" button on top of the runner page."
]
6 changes: 3 additions & 3 deletions trulens_eval/examples/quickstart/langchain_quickstart.ipynb
@@ -265,7 +265,7 @@
"\n",
"Note: Average feedback values are returned and displayed in a range from 0 (worst) to 1 (best).\n",
"\n",
"![Chain Leaderboard](https://www.trulens.org/Assets/image/Leaderboard.png)\n",
"![Chain Leaderboard](https://www.trulens.org/assets/images/Leaderboard.png)\n",
"\n",
"To dive deeper on a particular chain, click \"Select Chain\".\n",
"\n",
@@ -275,13 +275,13 @@
"\n",
"The evaluations tab provides record-level metadata and feedback on the quality of your LLM application.\n",
"\n",
"![Evaluations](https://www.trulens.org/Assets/image/Leaderboard.png)\n",
"![Evaluations](https://www.trulens.org/assets/images/Leaderboard.png)\n",
"\n",
"### Deep dive into full chain metadata\n",
"\n",
"Click on a record to dive deep into all of the details of your chain stack and underlying LLM, captured by tru_chain_recorder.\n",
"\n",
"![Explore a Chain](https://www.trulens.org/Assets/image/Chain_Explore.png)\n",
"![Explore a Chain](https://www.trulens.org/assets/images/Chain_Explore.png)\n",
"\n",
"If you prefer the raw format, you can quickly get it using the \"Display full chain json\" or \"Display full record json\" buttons at the bottom of the page."
]
@@ -166,7 +166,7 @@
#
# Note: Average feedback values are returned and displayed in a range from 0 (worst) to 1 (best).
#
-# ![Chain Leaderboard](https://www.trulens.org/Assets/image/Leaderboard.png)
+# ![Chain Leaderboard](https://www.trulens.org/assets/images/Leaderboard.png)
#
# To dive deeper on a particular chain, click "Select Chain".
#
@@ -176,13 +176,13 @@
#
# The evaluations tab provides record-level metadata and feedback on the quality of your LLM application.
#
-# ![Evaluations](https://www.trulens.org/Assets/image/Leaderboard.png)
+# ![Evaluations](https://www.trulens.org/assets/images/Leaderboard.png)
#
# ### Deep dive into full chain metadata
#
# Click on a record to dive deep into all of the details of your chain stack and underlying LLM, captured by tru_chain_recorder.
#
-# ![Explore a Chain](https://www.trulens.org/Assets/image/Chain_Explore.png)
+# ![Explore a Chain](https://www.trulens.org/assets/images/Chain_Explore.png)
#
# If you prefer the raw format, you can quickly get it using the "Display full chain json" or "Display full record json" buttons at the bottom of the page.

@@ -166,7 +166,7 @@
#
# Note: Average feedback values are returned and displayed in a range from 0 (worst) to 1 (best).
#
-# ![Chain Leaderboard](https://www.trulens.org/Assets/image/Leaderboard.png)
+# ![Chain Leaderboard](https://www.trulens.org/assets/images/Leaderboard.png)
#
# To dive deeper on a particular chain, click "Select Chain".
#
@@ -176,13 +176,13 @@
#
# The evaluations tab provides record-level metadata and feedback on the quality of your LLM application.
#
-# ![Evaluations](https://www.trulens.org/Assets/image/Leaderboard.png)
+# ![Evaluations](https://www.trulens.org/assets/images/Leaderboard.png)
#
# ### Deep dive into full chain metadata
#
# Click on a record to dive deep into all of the details of your chain stack and underlying LLM, captured by tru_chain_recorder.
#
-# ![Explore a Chain](https://www.trulens.org/Assets/image/Chain_Explore.png)
+# ![Explore a Chain](https://www.trulens.org/assets/images/Chain_Explore.png)
#
# If you prefer the raw format, you can quickly get it using the "Display full chain json" or "Display full record json" buttons at the bottom of the page.

6 changes: 3 additions & 3 deletions trulens_eval/generated_files/all_tools.ipynb
@@ -265,7 +265,7 @@
"\n",
"Note: Average feedback values are returned and displayed in a range from 0 (worst) to 1 (best).\n",
"\n",
"![Chain Leaderboard](https://www.trulens.org/Assets/image/Leaderboard.png)\n",
"![Chain Leaderboard](https://www.trulens.org/assets/images/Leaderboard.png)\n",
"\n",
"To dive deeper on a particular chain, click \"Select Chain\".\n",
"\n",
@@ -275,13 +275,13 @@
"\n",
"The evaluations tab provides record-level metadata and feedback on the quality of your LLM application.\n",
"\n",
"![Evaluations](https://www.trulens.org/Assets/image/Leaderboard.png)\n",
"![Evaluations](https://www.trulens.org/assets/images/Leaderboard.png)\n",
"\n",
"### Deep dive into full chain metadata\n",
"\n",
"Click on a record to dive deep into all of the details of your chain stack and underlying LLM, captured by tru_chain_recorder.\n",
"\n",
"![Explore a Chain](https://www.trulens.org/Assets/image/Chain_Explore.png)\n",
"![Explore a Chain](https://www.trulens.org/assets/images/Chain_Explore.png)\n",
"\n",
"If you prefer the raw format, you can quickly get it using the \"Display full chain json\" or \"Display full record json\" buttons at the bottom of the page."
]
6 changes: 3 additions & 3 deletions trulens_eval/generated_files/all_tools.py
@@ -96,7 +96,7 @@
#
# Note: Average feedback values are returned and printed in a range from 0 (worst) to 1 (best).
#
-# ![Chain Leaderboard](https://www.trulens.org/Assets/image/Leaderboard.png)
+# ![Chain Leaderboard](https://www.trulens.org/assets/images/Leaderboard.png)
#
# To dive deeper on a particular chain, click "Select Chain".
#
@@ -106,13 +106,13 @@
#
# The evaluations tab provides record-level metadata and feedback on the quality of your LLM application.
#
-# ![Evaluations](https://www.trulens.org/Assets/image/Leaderboard.png)
+# ![Evaluations](https://www.trulens.org/assets/images/Leaderboard.png)
#
# ### Deep dive into full chain metadata
#
# Click on a record to dive deep into all of the details of your chain stack and underlying LLM, captured by tru_chain_recorder.
#
-# ![Explore a Chain](https://www.trulens.org/Assets/image/Chain_Explore.png)
+# ![Explore a Chain](https://www.trulens.org/assets/images/Chain_Explore.png)
#
# If you prefer the raw format, you can quickly get it using the "Display full chain json" or "Display full record json" buttons at the bottom of the page.

2 changes: 1 addition & 1 deletion trulens_explain/README.md
@@ -1,6 +1,6 @@
# Welcome to TruLens!

-![TruLens](https://www.trulens.org/Assets/image/Neural_Network_Explainability.png)
+![TruLens](https://www.trulens.org/assets/images/Neural_Network_Explainability.png)


TruLens is a cross-framework library for deep learning explainability. It provides a uniform abstraction layer over TensorFlow, PyTorch, and Keras, and allows input and internal explanations.