
Conversation

@devin-ai-integration
Contributor

Fix response_format to support Pydantic BaseModel classes (#3959)

Summary

This PR restores backwards compatibility for the response_format parameter in the LLM class, which was broken in PR #3793. Users can now pass Pydantic BaseModel classes directly to response_format when initializing an LLM instance, and the framework will automatically convert them to the json_schema format required by LiteLLM.

Changes:

  • Modified _prepare_completion_params to detect Pydantic BaseModel classes in response_format and convert them to {"type": "json_schema", "json_schema": {...}} format
  • Added response parsing in _handle_non_streaming_response to parse JSON responses back into Pydantic models when response_format is a BaseModel class (a sketch of both steps follows this list)
  • Ensured response_model parameter (passed to call()) takes precedence over response_format (set at init)
  • Added three comprehensive tests covering Pydantic model conversion, dict passthrough, and precedence behavior
  • Fixed unrelated test fixture issue (removed @pytest.mark.vcr decorator from anthropic_llm fixture that was causing test collection errors)
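
A minimal sketch of the two steps described above, assuming Pydantic v2; the helper names and the exact json_schema payload shape are illustrative assumptions, not the PR's actual code, which lives inside _prepare_completion_params and _handle_non_streaming_response.

from typing import Any
from pydantic import BaseModel

def convert_response_format(response_format: Any) -> Any:
    # Hypothetical helper: turn a BaseModel class into LiteLLM's json_schema format.
    if isinstance(response_format, type) and issubclass(response_format, BaseModel):
        return {
            "type": "json_schema",
            "json_schema": {
                "name": response_format.__name__,
                "schema": response_format.model_json_schema(),
            },
        }
    # Dicts such as {"type": "json_object"} pass through unchanged.
    return response_format

def parse_response(text: str, response_format: Any) -> Any:
    # Hypothetical helper: validate the raw JSON text against the model,
    # falling back to the raw string on failure (the PR logs a warning here).
    if isinstance(response_format, type) and issubclass(response_format, BaseModel):
        try:
            return response_format.model_validate_json(text)
        except Exception:
            return text
    return text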

Review & Testing Checklist for Human

  • End-to-end test with real LLM: Test with an actual LLM call using a Pydantic BaseModel as response_format to verify the fix works in production (the tests only use mocks)
  • Complex Pydantic models: Verify the json_schema conversion works correctly with nested models, optional fields, unions, and other complex Pydantic features (a sample model follows this checklist)
  • Error handling: Test what happens when the LLM returns JSON that doesn't match the Pydantic schema; currently this logs a warning and falls back to the raw response
  • Backwards compatibility: Verify existing code using dict-based response_format (e.g., {"type": "json_object"}) still works correctly
  • Streaming responses: Note that this fix only handles non-streaming responses; streaming with a Pydantic response_format may need additional work
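
One hypothetical model a reviewer could use to exercise the schema conversion with nesting, optional fields, and unions; the class names here are made up for illustration.

from typing import List, Optional, Union
from pydantic import BaseModel

class SubAnswer(BaseModel):
    text: str
    score: Optional[float] = None

class ComplexResponse(BaseModel):
    answers: List[SubAnswer]
    source: Union[str, int]
    notes: Optional[str] = None

# Inspect the schema that would be sent to LiteLLM before making a real call.
print(ComplexResponse.model_json_schema())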

Test Plan

from pydantic import BaseModel
from crewai import LLM

class AnswerResponse(BaseModel):
    answer: str
    confidence: float

# Test 1: Basic Pydantic model usage
llm = LLM(model="gpt-4o-mini", response_format=AnswerResponse)
result = llm.call("What is 2+2?")
# Should return a JSON string that can be parsed into AnswerResponse (see the validation check after this plan)

# Test 2: Dict format still works
llm2 = LLM(model="gpt-4o-mini", response_format={"type": "json_object"})
result2 = llm2.call("Return a JSON object")
# Should work as before

# Test 3: response_model takes precedence
class OtherResponse(BaseModel):
    data: str

llm3 = LLM(model="gpt-4o-mini", response_format=AnswerResponse)
result3 = llm3.call("Test", response_model=OtherResponse)
# Should use OtherResponse, not AnswerResponse
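
If result comes back as a raw JSON string, as the comment in Test 1 suggests, one way to check it locally (assuming the model produced valid JSON matching the schema):

parsed = AnswerResponse.model_validate_json(result)
assert isinstance(parsed.answer, str)
assert isinstance(parsed.confidence, float)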

Notes

  • This fix addresses issue #3959 ([BUG] Response format doesn't work for OpenAI LLM), reported by users upgrading from crewAI 1.2 to 1.5
  • The conversion uses model_json_schema(), which is Pydantic v2 syntax (correct for this codebase); a minimal illustration follows these notes
  • Error handling logs a warning if JSON parsing fails but doesn't raise an exception, allowing graceful degradation to raw text response
  • The unrelated fixture fix (removing @pytest.mark.vcr) was necessary to allow test collection to succeed
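
A quick illustration of the v2 API mentioned above; the printed structure shows the typical shape of the generated schema, not verbatim output.

from pydantic import BaseModel

class AnswerResponse(BaseModel):
    answer: str
    confidence: float

# Pydantic v2 exposes the JSON schema via model_json_schema() (v1 used .schema()).
schema = AnswerResponse.model_json_schema()
# Typically: {'properties': {'answer': {'title': 'Answer', 'type': 'string'},
#                            'confidence': {'title': 'Confidence', 'type': 'number'}},
#             'required': ['answer', 'confidence'],
#             'title': 'AnswerResponse', 'type': 'object'}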

Link to Devin run: https://app.devin.ai/sessions/3672f697c80b4ae0bfea447d19ee34a6
Requested by: João ([email protected])

- Add conversion of Pydantic BaseModel classes to json_schema format in _prepare_completion_params
- Add parsing of JSON responses back into Pydantic models in _handle_non_streaming_response
- Ensure response_model parameter takes precedence over response_format
- Add three comprehensive tests covering Pydantic model conversion, dict passthrough, and precedence
- Fix test fixture decorator issue (removed @pytest.mark.vcr from anthropic_llm fixture)

Fixes #3959

Co-Authored-By: João <[email protected]>
@devin-ai-integration
Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

