[BUG] Can't get LLamaIndex to Work with Trulens - 'property' object has no attribute 'startswith' #1688

anubhavmisra opened this issue Dec 8, 2024 · 0 comments
Labels: bug (Something isn't working)
Bug Description
Can't construct a TruLlama object around a LlamaIndex query/chat engine: the constructor fails with AttributeError: 'property' object has no attribute 'startswith'.

To Reproduce
tru_recorder = TruLlama(query_engine, app_name="LlamaIndex", app_version="base", feedbacks=[f_context_relevance, f_answer_relevance])

Expected behavior
This code should give me a tru_recorder object that I can then use to record queries.
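
For reference, this is the flow I expect to work, sketched from the TruLens quickstart examples (the TruSession setup here is illustrative; the feedback functions and query are the ones in the full code under Additional context):

from trulens.core import TruSession

session = TruSession()  # default local SQLite database

tru_recorder = TruLlama(
    query_engine,
    app_name="LlamaIndex",
    app_version="base",
    feedbacks=[f_context_relevance, f_answer_relevance],
)

# Recording a query should attach the feedback results to the record.
with tru_recorder as recording:
    query_engine.query("What project are we talking about?")

session.get_leaderboard()  # aggregated feedback scores per app version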

Relevant Logs/Tracebacks
{
"name": "AttributeError",
"message": "'property' object has no attribute 'startswith'",
"stack": "---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[16], line 1
----> 1 tru_recorder = TruLlama(query_engine, app_name="LlamaIndex", app_version="base", feedbacks=[f_context_relevance,f_answer_relevance])
2 # with tru_recorder as recording:
3 # query_engine.query("What project are we talking about?")

File ~/code/pocs/trulens-llamaindex/.venv/lib/python3.10/site-packages/trulens/apps/llamaindex/tru_llama.py:343, in TruLlama.__init__(self, app, **kwargs)
338 kwargs["root_class"] = pyschema_utils.Class.of_object(
339 app
340 ) # TODO: make class property
341 kwargs["instrument"] = LlamaInstrument(app=self)
--> 343 super().__init__(**kwargs)

File ~/code/pocs/trulens-llamaindex/.venv/lib/python3.10/site-packages/trulens/core/app.py:449, in App.__init__(self, connector, feedbacks, **kwargs)
446 self.app = app
448 if self.instrument is not None:
--> 449 self.instrument.instrument_object(
450 obj=self.app, query=select_schema.Select.Query().app
451 )
452 else:
453 pass

File ~/code/pocs/trulens-llamaindex/.venv/lib/python3.10/site-packages/trulens/core/instruments.py:1057, in Instrument.instrument_object(self, obj, query, done)
1053 if any(
1054 isinstance(attr_value, cls) for cls in self.include_classes
1055 ):
1056 inner_query = query[attr_name]
-> 1057 self.instrument_object(attr_value, inner_query, done)
1059 for base in mro:
1060 # Some top part of mro() may need instrumentation here if some
1061 # subchains call superchains, and we want to capture the
1062 # intermediate steps. On the other hand we don't want to instrument
1063 # the very base classes such as object:
1064 if not self.to_instrument_module(base.__module__):

File ~/code/pocs/trulens-llamaindex/.venv/lib/python3.10/site-packages/trulens/core/instruments.py:1057, in Instrument.instrument_object(self, obj, query, done)
1053 if any(
1054 isinstance(attr_value, cls) for cls in self.include_classes
1055 ):
1056 inner_query = query[attr_name]
-> 1057 self.instrument_object(attr_value, inner_query, done)
1059 for base in mro:
1060 # Some top part of mro() may need instrumentation here if some
1061 # subchains call superchains, and we want to capture the
1062 # intermediate steps. On the other hand we don't want to instrument
1063 # the very base classes such as object:
1064 if not self.to_instrument_module(base.__module__):

File ~/code/pocs/trulens-llamaindex/.venv/lib/python3.10/site-packages/trulens/core/instruments.py:1057, in Instrument.instrument_object(self, obj, query, done)
1053 if any(
1054 isinstance(attr_value, cls) for cls in self.include_classes
1055 ):
1056 inner_query = query[attr_name]
-> 1057 self.instrument_object(attr_value, inner_query, done)
1059 for base in mro:
1060 # Some top part of mro() may need instrumentation here if some
1061 # subchains call superchains, and we want to capture the
1062 # intermediate steps. On the other hand we don't want to instrument
1063 # the very base classes such as object:
1064 if not self.to_instrument_module(base.__module__):

File ~/code/pocs/trulens-llamaindex/.venv/lib/python3.10/site-packages/trulens/core/instruments.py:1163, in Instrument.instrument_object(self, obj, query, done)
1160 if isinstance(v, (str, bool, int, float)):
1161 pass
-> 1163 elif self.to_instrument_module(type(v).__module__):
1164 self.instrument_object(obj=v, query=query[k], done=done)
1166 elif isinstance(v, Sequence):

File ~/code/pocs/trulens-llamaindex/.venv/lib/python3.10/site-packages/trulens/core/instruments.py:509, in Instrument.to_instrument_module(self, module_name)
506 def to_instrument_module(self, module_name: str) -> bool:
507 """Determine whether a module with the given (full) name should be instrumented."""
--> 509 return any(
510 module_name.startswith(mod2) for mod2 in self.include_modules
511 )

File ~/code/pocs/trulens-llamaindex/.venv/lib/python3.10/site-packages/trulens/core/instruments.py:510, in <genexpr>(.0)
506 def to_instrument_module(self, module_name: str) -> bool:
507 """Determine whether a module with the given (full) name should be instrumented."""
509 return any(
--> 510 module_name.startswith(mod2) for mod2 in self.include_modules
511 )

AttributeError: 'property' object has no attribute 'startswith'"
}
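
The failing frame is the module filter: to_instrument_module receives a property object instead of a str, so module_name.startswith blows up. That can happen when the class of some attribute value declares __module__ as a property, because reading __module__ off the class itself (which is what type(v).__module__ at instruments.py:1163 does) returns the property object rather than the module name string. A standalone illustration of this failure mode (the class here is hypothetical, not taken from llama-index):

# Hypothetical class, for illustration only: __module__ declared as a property.
class Shim:
    __module__ = property(lambda self: "llama_index.core.shim")

print(type(Shim.__module__))  # <class 'property'> -- read off the class
print(Shim().__module__)      # 'llama_index.core.shim' -- read off an instance

# Same error as in the traceback:
Shim.__module__.startswith("llama_index")
# AttributeError: 'property' object has no attribute 'startswith'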

Environment:

  • OS: macOS 14.6.1
  • Python Version 3.10.11
  • TruLens version:
    trulens 1.2.10
    trulens-apps-langchain 1.2.10
    trulens-apps-llamaindex 1.2.10
    trulens-core 1.2.10
    trulens-dashboard 1.2.10
    trulens_eval 1.2.10
    trulens-feedback 1.2.10
    trulens-otel-semconv 1.2.10
    trulens-providers-litellm 1.2.10
  • Versions of other relevant installed libraries
    llama-index 0.12.3
    llama-index-agent-openai 0.4.0
    llama-index-cli 0.4.0
    llama-index-core 0.12.3
    llama-index-embeddings-ollama 0.4.0
    llama-index-embeddings-openai 0.3.1
    llama-index-indices-managed-llama-cloud 0.6.3
    llama-index-legacy 0.9.48.post4
    llama-index-llms-ollama 0.4.2
    llama-index-llms-openai 0.3.2
    llama-index-multi-modal-llms-openai 0.3.0
    llama-index-program-openai 0.3.1
    llama-index-question-gen-openai 0.3.0
    llama-index-readers-file 0.4.1
    llama-index-readers-llama-parse 0.4.0

Additional context
Full code:

# %% [markdown]
# Ingestion

# %%

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.ollama import OllamaEmbedding

from llama_index.llms.ollama import Ollama

documents = SimpleDirectoryReader("data").load_data()

Settings.embed_model = OllamaEmbedding(
    model_name="mxbai-embed-large",
    base_url="http://localhost:11434",
    ollama_additional_kwargs={"mirostat": 0},
)

# ollama

Settings.llm = Ollama(model="llama3.1", request_timeout=360.0)

index = VectorStoreIndex.from_documents(
    documents,
)

query_engine = index.as_query_engine()

# %% [markdown]
# Imports main tools:
#
# Imports from langchain to build app. You may need to install langchain first
# with the following:
#
#     !pip install langchain>=0.0.170

# %%

import numpy as np
from trulens.core import Feedback
from trulens.core import Select
from trulens.core import TruSession
from trulens.apps.llamaindex import TruLlama

# Initialize LiteLLM-based feedback function collection class:

import litellm
from trulens.providers.litellm import LiteLLM

litellm.set_verbose = False

provider = LiteLLM(
    model_engine="ollama/llama3.1", api_base="http://localhost:11434"
)

# Define a groundedness feedback function.

f_groundedness = (
    Feedback(
        provider.groundedness_measure_with_cot_reasons, name="Groundedness"
    )
    .on(Select.RecordCalls.retrieve.rets.collect())
    .on_output()
)

# Question/answer relevance between overall question and answer.

f_answer_relevance = (
    Feedback(provider.relevance_with_cot_reasons, name="Answer Relevance")
    .on_input()
    .on_output()
)

# Context relevance between question and each context chunk.

f_context_relevance = (
    Feedback(
        provider.context_relevance_with_cot_reasons, name="Context Relevance"
    )
    .on_input()
    .on(Select.RecordCalls.retrieve.rets[:])
    .aggregate(np.mean)  # choose a different aggregation method if you wish
)

context = TruLlama.select_context(query_engine)  # lens selecting the engine's retrieved context

# %% [markdown]
# Query

# %%

tru_recorder = TruLlama(query_engine, app_name="LlamaIndex", app_version="base", feedbacks=[f_context_relevance, f_answer_relevance])  # This is the only part that fails.
with tru_recorder as recording:
    query_engine.query("What project are we talking about?")
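
A possible stopgap until this is fixed upstream (sketched here, not verified beyond the idea) is to guard the failing check before constructing TruLlama; note this may silently leave the offending components uninstrumented:

from trulens.core import instruments

_orig_to_instrument_module = instruments.Instrument.to_instrument_module

def _safe_to_instrument_module(self, module_name):
    # Skip anything whose reported module name is not a plain string
    # (e.g. a property object, as in the traceback above).
    if not isinstance(module_name, str):
        return False
    return _orig_to_instrument_module(self, module_name)

instruments.Instrument.to_instrument_module = _safe_to_instrument_module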
