Replies: 1 comment
You can attach an observer when building the agent:

```php
$agent = MyAgent::make()->observer(
    new LogObserver($logger)
);
```
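For reference, a `LogObserver` along these lines could be a small `\SplObserver` that logs every event the agent emits, which gives you visibility into the messages actually sent to the provider. Neuron AI's observability hooks are built on PHP's `SplObserver`/`SplSubject`, but the extra `$event`/`$data` parameters on `update()` below are an assumption — verify the exact signature against the framework source:

```php
use Psr\Log\LoggerInterface;

// Hypothetical observer sketch: logs every event the agent notifies,
// including the payload data, so prompt content and size can be inspected.
// The optional $event/$data parameters are assumed to match Neuron AI's
// extended SplObserver contract -- check the framework before relying on it.
class LogObserver implements \SplObserver
{
    public function __construct(private LoggerInterface $logger)
    {
    }

    public function update(\SplSubject $subject, ?string $event = null, mixed $data = null): void
    {
        $this->logger->debug('neuron-event', [
            'event' => $event,
            'data'  => \json_encode($data),
        ]);
    }
}
```

With a PSR-3 logger wired in, each agent run then leaves a trail of the events and payloads exchanged with the API, which is usually enough to spot where the 20k+ input tokens are coming from.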
I'm currently using the Neuron AI framework to build a chatbot with the Claude API, and I've noticed unexpectedly high token consumption in my requests (see the attached screenshot; some requests consume 20k+ input tokens).
Looking at the Anthropic Console logs, I can only see basic metrics (timestamp, model, token counts), but not the actual content being sent to the API: which prompts, system messages, or context are included in each request.
My questions:
1. Does Neuron AI have built-in logging or debugging capabilities to show the full prompt structure being sent to the Claude API?
2. Is there a way to inspect exactly what is included in each API call (system prompts, conversation history, context, etc.)?
3. Are there any configuration options to control or optimize how much context/history is sent with each request?
4. Does the framework have any token usage monitoring or debugging tools I might have missed?
I'd like to understand and optimize the token consumption, but I need visibility into what's actually being sent to the API.
Thank you for your help!