Description
When I deploy GPT-OSS-20B with Ollama on a Linux system and click the Deploy button in the UI, wren-ai-service reports the error below. I tried redeploying, but it had no effect.
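Possibly relevant context for anyone triaging: with Ollama, a "404 page not found" on embeddings usually means the request hits a path Ollama doesn't serve. Its OpenAI-compatible API lives under `/v1`, so the embedder's `url`/`api_base` in `config.yaml` needs that prefix. A minimal sketch of the URL the OpenAI-style client ends up POSTing to (the helper name, host, and port here are my own assumptions, not taken from the service's code):

```python
# Hypothetical helper: an OpenAI-style client POSTs embeddings to
# {api_base}/embeddings, so for Ollama the api_base must include /v1.
def embedding_endpoint(api_base: str) -> str:
    """Return the URL an OpenAI-compatible client would POST embeddings to."""
    return api_base.rstrip("/") + "/embeddings"

good = embedding_endpoint("http://host.docker.internal:11434/v1")
bad = embedding_endpoint("http://host.docker.internal:11434")

print(good)  # http://host.docker.internal:11434/v1/embeddings (served by Ollama)
print(bad)   # http://host.docker.internal:11434/embeddings (no such route -> 404)
```

If the api_base is correct, the other thing worth checking is whether the configured model actually exposes an embedding endpoint at all (GPT-OSS-20B is a chat model, so a separate embedding model is presumably needed).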
wren-ai-service-1 | > embedding [src.pipelines.indexing.db_schema.embedding()] encountered an error<
wren-ai-service-1 | > Node inputs:
wren-ai-service-1 | {'chunk': "<Task finished name='Task-231' coro=<AsyncGraphAda...",
wren-ai-service-1 |  'embedder': '<src.providers.embedder.litellm.AsyncDocumentEmbed...'}
wren-ai-service-1 | ********************************************************************************
wren-ai-service-1 | Traceback (most recent call last):
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1127, in aembedding
wren-ai-service-1 |     headers, response = await self.make_openai_embedding_request(
wren-ai-service-1 |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 190, in async_wrapper
wren-ai-service-1 |     result = await func(*args, **kwargs)
wren-ai-service-1 |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1080, in make_openai_embedding_request
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1073, in make_openai_embedding_request
wren-ai-service-1 |     raw_response = await openai_aclient.embeddings.with_raw_response.create(
wren-ai-service-1 |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 381, in wrapped
wren-ai-service-1 |     return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/openai/resources/embeddings.py", line 251, in create
wren-ai-service-1 |     return await self._post(
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1794, in post
wren-ai-service-1 |     return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1594, in request
wren-ai-service-1 |     raise self._make_status_error_from_response(err.response) from None
wren-ai-service-1 | openai.NotFoundError: 404 page not found
wren-ai-service-1 |
wren-ai-service-1 | During handling of the above exception, another exception occurred:
wren-ai-service-1 |
wren-ai-service-1 | Traceback (most recent call last):
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3736, in aembedding
wren-ai-service-1 |     response = await init_response # type: ignore
wren-ai-service-1 |                ^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1172, in aembedding
wren-ai-service-1 |     raise OpenAIError(
wren-ai-service-1 | litellm.llms.openai.common_utils.OpenAIError: 404 page not found
wren-ai-service-1 |
wren-ai-service-1 | During handling of the above exception, another exception occurred:
wren-ai-service-1 |
wren-ai-service-1 | Traceback (most recent call last):
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
wren-ai-service-1 |     await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
wren-ai-service-1 |     ^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
wren-ai-service-1 |     self._handle_exception(observation, e)
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
wren-ai-service-1 |     result = await func(*args, **kwargs)
wren-ai-service-1 |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/pipelines/indexing/db_schema.py", line 318, in embedding
wren-ai-service-1 |     return await embedder.run(documents=chunk["documents"])
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry
wren-ai-service-1 |     ret = await target(*args, **kwargs)
wren-ai-service-1 |           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/providers/embedder/litellm.py", line 154, in run
wren-ai-service-1 |     embeddings, meta = await self._embed_batch(
wren-ai-service-1 |                        ^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/providers/embedder/litellm.py", line 115, in _embed_batch
wren-ai-service-1 |     responses = await asyncio.gather(
wren-ai-service-1 |                 ^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/providers/embedder/litellm.py", line 101, in embed_single_batch
wren-ai-service-1 |     return await aembedding(
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1597, in wrapper_async
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1448, in wrapper_async
wren-ai-service-1 |     result = await original_function(*args, **kwargs)
wren-ai-service-1 |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3751, in aembedding
wren-ai-service-1 |     raise exception_type(
wren-ai-service-1 |           ^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2301, in exception_type
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 465, in exception_type
wren-ai-service-1 |     raise NotFoundError(
wren-ai-service-1 | litellm.exceptions.NotFoundError: litellm.NotFoundError: NotFoundError: OpenAIException - 404 page not found
wren-ai-service-1 | -------------------------------------------------------------------
wren-ai-service-1 | Oh no an error! Need help with Hamilton?
wren-ai-service-1 | Join our slack and ask for help! https://join.slack.com/t/hamilton-opensource/shared_invite/zt-2niepkra8-DGKGf_tTYhXuJWBTXtIs4g
wren-ai-service-1 | -------------------------------------------------------------------
wren-ai-service-1 |
wren-ai-service-1 | E0930 05:26:56.255 8 wren-ai-service:100] Failed to prepare semantics: litellm.NotFoundError: NotFoundError: OpenAIException - 404 page not found
wren-ai-service-1 | Traceback (most recent call last):
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1127, in aembedding
wren-ai-service-1 |     headers, response = await self.make_openai_embedding_request(
wren-ai-service-1 |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 190, in async_wrapper
wren-ai-service-1 |     result = await func(*args, **kwargs)
wren-ai-service-1 |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1080, in make_openai_embedding_request
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1073, in make_openai_embedding_request
wren-ai-service-1 |     raw_response = await openai_aclient.embeddings.with_raw_response.create(
wren-ai-service-1 |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 381, in wrapped
wren-ai-service-1 |     return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/openai/resources/embeddings.py", line 251, in create
wren-ai-service-1 |     return await self._post(
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1794, in post
wren-ai-service-1 |     return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1594, in request
wren-ai-service-1 |     raise self._make_status_error_from_response(err.response) from None
wren-ai-service-1 | openai.NotFoundError: 404 page not found
wren-ai-service-1 |
wren-ai-service-1 | During handling of the above exception, another exception occurred:
wren-ai-service-1 |
wren-ai-service-1 | Traceback (most recent call last):
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3736, in aembedding
wren-ai-service-1 |     response = await init_response # type: ignore
wren-ai-service-1 |                ^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1172, in aembedding
wren-ai-service-1 |     raise OpenAIError(
wren-ai-service-1 | litellm.llms.openai.common_utils.OpenAIError: 404 page not found
wren-ai-service-1 |
wren-ai-service-1 | During handling of the above exception, another exception occurred:
wren-ai-service-1 |
wren-ai-service-1 | Traceback (most recent call last):
wren-ai-service-1 |   File "/src/web/v1/services/semantics_preparation.py", line 92, in prepare_semantics
wren-ai-service-1 |     await asyncio.gather(*tasks)
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
wren-ai-service-1 |     self._handle_exception(observation, e)
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
wren-ai-service-1 |     result = await func(*args, **kwargs)
wren-ai-service-1 |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/pipelines/indexing/db_schema.py", line 376, in run
wren-ai-service-1 |     return await self._pipe.execute(
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 375, in execute
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 366, in execute
wren-ai-service-1 |     outputs = await self.raw_execute(_final_vars, overrides, display_graph, inputs=inputs)
wren-ai-service-1 |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 326, in raw_execute
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 321, in raw_execute
wren-ai-service-1 |     results = await await_dict_of_tasks(task_dict)
wren-ai-service-1 |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 23, in await_dict_of_tasks
wren-ai-service-1 |     coroutines_gathered = await asyncio.gather(*coroutines)
wren-ai-service-1 |                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 36, in process_value
wren-ai-service-1 |     return await val
wren-ai-service-1 |            ^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 91, in new_fn
wren-ai-service-1 |     fn_kwargs = await await_dict_of_tasks(task_dict)
wren-ai-service-1 |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 23, in await_dict_of_tasks
wren-ai-service-1 |     coroutines_gathered = await asyncio.gather(*coroutines)
wren-ai-service-1 |                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 36, in process_value
wren-ai-service-1 |     return await val
wren-ai-service-1 |            ^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 91, in new_fn
wren-ai-service-1 |     fn_kwargs = await await_dict_of_tasks(task_dict)
wren-ai-service-1 |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 23, in await_dict_of_tasks
wren-ai-service-1 |     coroutines_gathered = await asyncio.gather(*coroutines)
wren-ai-service-1 |                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 36, in process_value
wren-ai-service-1 |     return await val
wren-ai-service-1 |            ^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
wren-ai-service-1 |     await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
wren-ai-service-1 |     ^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
wren-ai-service-1 |     self._handle_exception(observation, e)
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
wren-ai-service-1 |     result = await func(*args, **kwargs)
wren-ai-service-1 |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/pipelines/indexing/db_schema.py", line 318, in embedding
wren-ai-service-1 |     return await embedder.run(documents=chunk["documents"])
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry
wren-ai-service-1 |     ret = await target(*args, **kwargs)
wren-ai-service-1 |           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/providers/embedder/litellm.py", line 154, in run
wren-ai-service-1 |     embeddings, meta = await self._embed_batch(
wren-ai-service-1 |                        ^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/providers/embedder/litellm.py", line 115, in _embed_batch
wren-ai-service-1 |     responses = await asyncio.gather(
wren-ai-service-1 |                 ^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/providers/embedder/litellm.py", line 101, in embed_single_batch
wren-ai-service-1 |     return await aembedding(
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1597, in wrapper_async
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1448, in wrapper_async
wren-ai-service-1 |     result = await original_function(*args, **kwargs)
wren-ai-service-1 |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3751, in aembedding
wren-ai-service-1 |     raise exception_type(
wren-ai-service-1 |           ^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2301, in exception_type
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 465, in exception_type
wren-ai-service-1 |     raise NotFoundError(
wren-ai-service-1 | litellm.exceptions.NotFoundError: litellm.NotFoundError: NotFoundError: OpenAIException - 404 page not found
wren-ai-service-1 |
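For what it's worth, the frames above (`_embed_batch`, `asyncio.gather`, `backoff` retry) suggest the embedder fans documents out in parallel batches with retries, so a single bad endpoint fails every indexing pipeline at once. A rough sketch of that call pattern, with illustrative names of my own rather than the service's real API:

```python
import asyncio

# Illustrative stand-in for the embedder: documents are embedded in parallel
# batches via asyncio.gather, with a retry loop standing in for the `backoff`
# decorator seen in the trace.
async def embed_batch(batch: list[str]) -> list[list[float]]:
    # A real embedder would POST to {api_base}/embeddings here, which is
    # exactly where the 404 above is raised for every batch at once.
    return [[float(len(doc))] for doc in batch]

async def embed_all(docs: list[str], batch_size: int = 2, retries: int = 3):
    batches = [docs[i:i + batch_size] for i in range(0, len(docs), batch_size)]
    for attempt in range(retries):
        try:
            results = await asyncio.gather(*(embed_batch(b) for b in batches))
            return [vec for batch in results for vec in batch]
        except Exception:
            if attempt == retries - 1:
                raise  # after the last retry this surfaces like the NotFoundError above

vectors = asyncio.run(embed_all(["a", "bb", "ccc"]))
print(len(vectors))  # 3
```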
wren-ai-service-1 | ********************************************************************************
wren-ai-service-1 | > embedding [src.pipelines.indexing.table_description.embedding()] encountered an error<
wren-ai-service-1 | > Node inputs:
wren-ai-service-1 | {'chunk': "<Task finished name='Task-245' coro=<AsyncGraphAda...",
wren-ai-service-1 |  'embedder': '<src.providers.embedder.litellm.AsyncDocumentEmbed...'}
wren-ai-service-1 | ********************************************************************************
wren-ai-service-1 | Traceback (most recent call last):
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1127, in aembedding
wren-ai-service-1 |     headers, response = await self.make_openai_embedding_request(
wren-ai-service-1 |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 190, in async_wrapper
wren-ai-service-1 |     result = await func(*args, **kwargs)
wren-ai-service-1 |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1080, in make_openai_embedding_request
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1073, in make_openai_embedding_request
wren-ai-service-1 |     raw_response = await openai_aclient.embeddings.with_raw_response.create(
wren-ai-service-1 |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 381, in wrapped
wren-ai-service-1 |     return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/openai/resources/embeddings.py", line 251, in create
wren-ai-service-1 |     return await self._post(
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1794, in post
wren-ai-service-1 |     return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1594, in request
wren-ai-service-1 |     raise self._make_status_error_from_response(err.response) from None
wren-ai-service-1 | openai.NotFoundError: 404 page not found
wren-ai-service-1 |
wren-ai-service-1 | During handling of the above exception, another exception occurred:
wren-ai-service-1 |
wren-ai-service-1 | Traceback (most recent call last):
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3736, in aembedding
wren-ai-service-1 |     response = await init_response # type: ignore
wren-ai-service-1 |                ^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1172, in aembedding
wren-ai-service-1 |     raise OpenAIError(
wren-ai-service-1 | litellm.llms.openai.common_utils.OpenAIError: 404 page not found
wren-ai-service-1 |
wren-ai-service-1 | During handling of the above exception, another exception occurred:
wren-ai-service-1 |
wren-ai-service-1 | Traceback (most recent call last):
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
wren-ai-service-1 |     await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
wren-ai-service-1 |     ^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
wren-ai-service-1 |     self._handle_exception(observation, e)
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
wren-ai-service-1 |     result = await func(*args, **kwargs)
wren-ai-service-1 |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/pipelines/indexing/table_description.py", line 97, in embedding
wren-ai-service-1 |     return await embedder.run(documents=chunk["documents"])
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry
wren-ai-service-1 |     ret = await target(*args, **kwargs)
wren-ai-service-1 |           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/providers/embedder/litellm.py", line 154, in run
wren-ai-service-1 |     embeddings, meta = await self._embed_batch(
wren-ai-service-1 |                        ^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/providers/embedder/litellm.py", line 115, in _embed_batch
wren-ai-service-1 |     responses = await asyncio.gather(
wren-ai-service-1 |                 ^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/providers/embedder/litellm.py", line 101, in embed_single_batch
wren-ai-service-1 |     return await aembedding(
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1597, in wrapper_async
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1448, in wrapper_async
wren-ai-service-1 |     result = await original_function(*args, **kwargs)
wren-ai-service-1 |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3751, in aembedding
wren-ai-service-1 |     raise exception_type(
wren-ai-service-1 |           ^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2301, in exception_type
wren-ai-service-1 |     raise e
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 465, in exception_type
wren-ai-service-1 |     raise NotFoundError(
wren-ai-service-1 | litellm.exceptions.NotFoundError: litellm.NotFoundError: NotFoundError: OpenAIException - 404 page not found
wren-ai-service-1 |
wren-ai-service-1 | INFO: 172.22.0.6:48548 - "GET /v1/semantics-preparations/946809ff97179a91f426f2be46af94f1681bb14e/status HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:48550 - "POST /v1/question-recommendations HTTP/1.1" 200 OK
wren-ai-service-1 | I0930 05:26:58.071 8 wren-ai-service:187] Request cb95d1e2-a363-41d2-bb0d-b51e48b54fd7: Generate Question Recommendation pipeline is running...
wren-ai-service-1 | I0930 05:26:58.071 8 wren-ai-service:507] Ask Retrieval pipeline is running...
wren-ai-service-1 | INFO: 172.22.0.6:48558 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:48568 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:48570 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:48576 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:48578 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:48592 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:48608 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:49178 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:49186 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:49198 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:49206 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:49218 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:49228 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:49244 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:49246 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:49250 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:49254 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:47472 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:47488 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:47496 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:47500 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:47504 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:47508 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:47514 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:47518 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:47524 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:47536 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:42204 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:42206 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:42220 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:42222 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:42238 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:42252 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:42262 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:42268 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:42276 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:42278 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 | INFO: 172.22.0.6:53438 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
wren-ai-service-1 |
wren-ai-service-1 | ********************************************************************************
wren-ai-service-1 | > construct_retrieval_results [src.pipelines.retrieval.db_schema_retrieval.construct_retrieval_results()] encountered an error<
wren-ai-service-1 | > Node inputs:
wren-ai-service-1 | {'check_using_db_schemas_without_pruning': "<Task finished name='Task-362' "
wren-ai-service-1 | 'coro=<AsyncGraphAda...',
wren-ai-service-1 | 'construct_db_schemas': "<Task finished name='Task-361' "
wren-ai-service-1 | 'coro=<AsyncGraphAda...',
wren-ai-service-1 | 'dbschema_retrieval': "<Task finished name='Task-360' coro=<AsyncGraphAda...",
wren-ai-service-1 | 'filter_columns_in_tables': "<Task finished name='Task-364' "
wren-ai-service-1 | 'coro=<AsyncGraphAda...'}
wren-ai-service-1 | ********************************************************************************
wren-ai-service-1 | Traceback (most recent call last):
wren-ai-service-1 | File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
wren-ai-service-1 | await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
wren-ai-service-1 | ^^^^^^^^^^^^^^^
wren-ai-service-1 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 256, in sync_wrapper
wren-ai-service-1 | self._handle_exception(observation, e)
wren-ai-service-1 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception
wren-ai-service-1 | raise e
wren-ai-service-1 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 254, in sync_wrapper
wren-ai-service-1 | result = func(*args, **kwargs)
wren-ai-service-1 | ^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 | File "/src/pipelines/retrieval/db_schema_retrieval.py", line 349, in construct_retrieval_results
wren-ai-service-1 | columns_and_tables_needed = orjson.loads(
wren-ai-service-1 | ^^^^^^^^^^^^^
wren-ai-service-1 | orjson.JSONDecodeError: unexpected end of data: line 7 column 1819 (char 2043)
wren-ai-service-1 |
wren-ai-service-1 | E0930 05:27:35.916 8 wren-ai-service:58] Failed to parse MDL: unexpected end of data: line 7 column 1819 (char 2043)
wren-ai-service-1 | INFO: 172.22.0.6:53446 - "GET /v1/question-recommendations/cb95d1e2-a363-41d2-bb0d-b51e48b54fd7 HTTP/1.1" 200 OK
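The second failure ("Failed to parse MDL: unexpected end of data") looks like a separate symptom of the same deployment: the LLM's JSON reply appears to have been cut off (perhaps the model hit its output-token limit), and `orjson.loads` then fails partway through the document. A tiny reproduction of that failure mode, using the stdlib `json` module here for portability (orjson raises the same class of "unexpected end of data" error on truncated input):

```python
import json

# Simulate an LLM JSON reply that was truncated mid-stream, e.g. because the
# model ran out of output tokens. Parsing the cut-off text raises
# JSONDecodeError, matching the error in the log above.
complete = '{"results": [{"table": "orders", "columns": ["id", "total"]}]}'
truncated = complete[:40]  # cut off inside the document

parsed = json.loads(complete)
print(parsed["results"][0]["table"])  # orders

try:
    json.loads(truncated)
    outcome = "parsed"
except json.JSONDecodeError:
    outcome = "truncated"
print(outcome)  # truncated
```

So even once the embedding 404 is fixed, it may be worth checking the generation model's context/output limits, since a truncated generation reply would keep producing this second error.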