[feature request] Gemini Auto-Instrumentor #1281

Open
Jeffrey-Peng opened this issue Feb 11, 2025 · 1 comment
Labels
enhancement (New feature or request) · language: python (Related to Python integration)

Comments

@Jeffrey-Peng

Is your feature request related to a problem? Please describe.

I have a customer using Gemini, but not through Vertex AI. They would also prefer not to use manual instrumentation if possible.

Describe the solution you'd like

Creation of a Gemini auto-instrumentor.

Describe alternatives you've considered

I've let them know they can try the @tracer.chain decorator, which captures inputs and outputs, but it produces a CHAIN span rather than an LLM span (so there is no easy access to the prompt playground and no LLM metrics like token counts).

Additional context

Code example (this does not get picked up by the Vertex AI auto-instrumentor):

# `tracer` is assumed to be the Phoenix/OpenInference tracer created elsewhere
# (e.g. via phoenix.otel.register()), and GEMINI_API_KEY is read from the environment.
@tracer.chain(name="gemini-override-name")
def generate_ai_response(prompt, model="gemini-2.0-flash"):
    """
    Generates an AI response using Google's Gemini API (google-genai SDK).

    Args:
        prompt (str): The input prompt to send to the model
        model (str): Model name to use, defaults to gemini-2.0-flash

    Returns:
        str: The generated response text
    """
    try:
        from google import genai  # google-genai SDK, not the vertexai SDK

        # Initialize the client
        client = genai.Client(api_key=GEMINI_API_KEY)

        # Generate the response
        response = client.models.generate_content(
            model=model,
            contents=prompt,
        )

        return response.text

    except Exception as e:
        return f"Error generating response: {str(e)}"
Jeffrey-Peng added the enhancement and triage labels on Feb 11, 2025
mikeldking added the language: python label on Feb 20, 2025
@mikeldking
Contributor

mikeldking removed the triage label on Feb 20, 2025