Is your feature request related to a problem? Please describe.
I have a customer using Gemini, but not through Vertex AI. They would also prefer not to use manual instrumentation if possible.
Describe the solution you'd like
Creation of a Gemini auto-instrumentor.
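For context, an auto-instrumentor typically works by monkey-patching the SDK's generation method so every call is wrapped in an LLM span without any changes to user code. Below is a minimal, self-contained sketch of that pattern; `FakeModels`, `instrument`, and `captured_spans` are all hypothetical stand-ins, not the actual google-genai SDK or OpenInference API.

```python
import functools

# Stand-in for the real google-genai Models class; the names here are
# illustrative only, not the actual SDK surface.
class FakeModels:
    def generate_content(self, model, contents):
        return f"response to {contents!r} from {model}"

# In a real instrumentor these would be OpenTelemetry spans; a list of
# dicts is enough to show the shape of the pattern.
captured_spans = []

def instrument(models_cls):
    """Monkey-patch generate_content to record an LLM span around each call."""
    original = models_cls.generate_content

    @functools.wraps(original)
    def wrapper(self, model, contents):
        span = {"kind": "LLM", "model": model, "input": contents}
        result = original(self, model, contents)  # delegate to the real method
        span["output"] = result
        captured_spans.append(span)
        return result

    models_cls.generate_content = wrapper

# One call to instrument() at startup; user code below is unchanged.
instrument(FakeModels)
out = FakeModels().generate_content(model="gemini-2.0-flash", contents="hi")
```

Because the span is created at the SDK boundary, it can be marked as an LLM span and carry model name, inputs, outputs, and token counts, which is exactly what the `@tracer.chain` workaround cannot provide.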
Describe alternatives you've considered
I've let them know they can try the @tracer.chain decorator, which captures inputs and outputs, but it doesn't produce an LLM span (and therefore offers no easy Prompt Playground access and no metrics like token count).
Additional context
Code example (this does not get picked up by the Vertex AI auto-instrumentor):

```python
from google import genai

@tracer.chain(name="gemini-override-name")
def generate_ai_response(prompt, model="gemini-2.0-flash"):
    """
    Generates an AI response using Google's Gemini API.

    Args:
        prompt (str): The input prompt to send to the model
        model (str): Model name to use, defaults to gemini-2.0-flash

    Returns:
        str: The generated response text
    """
    try:
        # Initialize the client (GEMINI_API_KEY is defined elsewhere)
        client = genai.Client(api_key=GEMINI_API_KEY)

        # Generate the response
        response = client.models.generate_content(
            model=model,
            contents=prompt,
        )
        return response.text
    except Exception as e:
        return f"Error generating response: {e}"
```