CHANGELOG.md (3 changes: 2 additions & 1 deletion)
@@ -2,6 +2,8 @@

 - Agent Bridge: Consolidate bridged tools implementation into the existing sandbox model proxy service (eliminate Python requirement for using bridged tools).
 - Anthropic: Correctly replay reasoning when sourced from Inspect cache.
+- OpenAI Compatible: Don't ever send the `background` parameter as this is OpenAI service-specific.
+- OpenAI Compatible: Added support for disabling reasoning history emulation.
 - Grok: Correctly replay tool calling errors in message history.
 - VLLM and SGLang: Don't require API key environment variable to be set when running in local mode.
 - Google: Support `minimal` and `medium` reasoning effort levels for Gemini 3 Flash.
@@ -13,7 +15,6 @@
 - Inspect View: Scale ANSI display in messages view to preserve row/column layout without wrapping.
 - Inspect View: Render custom tool view when viewing messages.
 - Bugfix: Prevent component not found error during Human Agent transition.
-- OpenAI Compatible: Added support for disabling reasoning history emulation.

## 0.3.159 (03 January 2026)

src/inspect_ai/model/_providers/openai_compatible.py (2 changes: 1 addition & 1 deletion)
@@ -161,7 +161,7 @@ async def generate(
         tools=tools,
         tool_choice=tool_choice,
         config=config,
-        background=False,
+        background=None,
         service_tier=None,
         prompt_cache_key=NOT_GIVEN,
         prompt_cache_retention=NOT_GIVEN,
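Why `None` rather than `False`: a minimal sketch of the distinction, assuming (as is typical for request builders of this kind, and not taken from the repo itself) that parameters left as `None` are treated as "not specified" and omitted from the serialized request, while `False` would still be sent verbatim to OpenAI-compatible servers that don't recognize the parameter:

```python
from typing import Any


def build_request_params(background: bool | None) -> dict[str, Any]:
    """Hypothetical sketch, not the repo's actual helper."""
    params: dict[str, Any] = {"model": "example-model", "input": "..."}
    # None means "not specified": the key is omitted entirely.
    # False would still be serialized and sent to servers that
    # don't know the (OpenAI service-specific) parameter.
    if background is not None:
        params["background"] = background
    return params


assert "background" not in build_request_params(None)      # omitted
assert build_request_params(False)["background"] is False  # explicitly sent
```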
src/inspect_ai/model/_providers/openai_responses.py (8 changes: 4 additions & 4 deletions)
@@ -69,14 +69,14 @@ async def generate_responses(
     handle_bad_request: Callable[[APIStatusError], ModelOutput | Exception]
     | None = None,
 ) -> ModelOutput | tuple[ModelOutput | Exception, ModelCall]:
-    # batch mode and background are incompatible
-    if batcher:
-        background = False
-
     # background in extra_body should be applied
     if background is None and config.extra_body:
         background = config.extra_body.pop("background", None)
 
+    # batch mode and background are incompatible
+    if batcher:
+        background = None
+
     # allocate request_id (so we can see it from ModelCall)
     request_id = http_hooks.start_request()
 
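Read together, the two hunks change both the value and the ordering: the `extra_body` override is applied first, and the batcher check now runs afterwards and resets `background` to `None` rather than `False`, so batch mode wins even when a caller requests background execution via `extra_body`. A standalone sketch of that control flow (a hypothetical helper under those assumptions, not the repo's actual function):

```python
from typing import Any


def resolve_background(
    background: bool | None,
    extra_body: dict[str, Any] | None,
    batcher: object | None,
) -> bool | None:
    # Apply a user-supplied extra_body override first...
    if background is None and extra_body:
        background = extra_body.pop("background", None)
    # ...then let batch mode win: batching and background requests are
    # incompatible, and None (rather than False) leaves the parameter
    # unset so it is omitted from the request entirely.
    if batcher:
        background = None
    return background


# With the old ordering, a `background` passed via extra_body survived the
# batcher check (which ran first and forced False); now batching prevails.
assert resolve_background(None, {"background": True}, batcher=object()) is None
assert resolve_background(None, {"background": True}, batcher=None) is True
```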