2 changes: 1 addition & 1 deletion src/platform/configuration/common/configurationService.ts
@@ -813,7 +813,7 @@ export namespace ConfigKey {
 	/** Configure reasoning effort sent to Responses API */
 	export const ResponsesApiReasoningEffort = defineSetting<'low' | 'medium' | 'high' | 'default'>('chat.responsesApiReasoningEffort', ConfigType.ExperimentBased, 'default');
 	/** Configure reasoning summary style sent to Responses API */
-	export const ResponsesApiReasoningSummary = defineSetting<'off' | 'detailed'>('chat.responsesApiReasoningSummary', ConfigType.ExperimentBased, 'detailed');
+	export const ResponsesApiReasoningSummary = defineSetting<'off' | 'concise' | 'detailed'>('chat.responsesApiReasoningSummary', ConfigType.ExperimentBased, 'detailed');
 	export const EnableChatImageUpload = defineSetting<boolean>('chat.imageUpload.enabled', ConfigType.ExperimentBased, true);
 	/** Thinking token budget for Anthropic extended thinking. If set, enables extended thinking. */
 	export const AnthropicThinkingBudget = defineSetting<number>('chat.anthropic.thinking.budgetTokens', ConfigType.ExperimentBased, 4000);
3 changes: 2 additions & 1 deletion src/platform/endpoint/common/chatModelCapabilities.ts
@@ -29,7 +29,8 @@ const VSC_MODEL_HASHES_A = [
 const HIDDEN_MODEL_B_HASHES = [
 	'31a2d5282683edb3a22c565f199aa96fb9ffb3107af35aad92ee1cd567cfc25d',
 	'dd832404e8eeb90793f0369b96ed1702e0e22487a58eb4c1f285a4af5c4f6f21',
-	'131e2083b68bde4fe879efc38ed9651b1623f8735eeb42157fa3b63ef943fdc6'
+	'131e2083b68bde4fe879efc38ed9651b1623f8735eeb42157fa3b63ef943fdc6',
+	'768e03eb8c0003f009745eb72538a435874c5b52a3e4cc71714ca06ceca60d29',
 ];
 
 // Currently empty, will be used in future for a different set of VSC models
24 changes: 21 additions & 3 deletions src/platform/endpoint/node/responsesApi.ts
@@ -23,7 +23,7 @@ import { ChatCompletion, FinishedCompletionReason, TokenLogProb } from '../../ne
 import { IExperimentationService } from '../../telemetry/common/nullExperimentationService';
 import { ITelemetryService } from '../../telemetry/common/telemetry';
 import { TelemetryData } from '../../telemetry/common/telemetryData';
-import { getVerbosityForModelSync } from '../common/chatModelCapabilities';
+import { getVerbosityForModelSync, isHiddenModelB } from '../common/chatModelCapabilities';
 import { getStatefulMarkerAndIndex } from '../common/statefulMarkerContainer';
 import { rawPartAsThinkingData } from '../common/thinkingDataContainer';

@@ -57,8 +57,12 @@ export function createResponsesRequestBody(accessor: ServicesAccessor, options:
 		'disabled';
 	const effortConfig = configService.getExperimentBasedConfig(ConfigKey.ResponsesApiReasoningEffort, expService);
 	const summaryConfig = configService.getExperimentBasedConfig(ConfigKey.ResponsesApiReasoningSummary, expService);
-	const effort = effortConfig === 'default' ? 'medium' : effortConfig;
-	const summary = summaryConfig === 'off' ? undefined : summaryConfig;
+	let effort = effortConfig === 'default' ? 'medium' : effortConfig;
+	let summary = summaryConfig === 'off' ? 'detailed' : summaryConfig;
[Review comment — Copilot AI, Dec 18, 2025]
The logic change on line 61 modifies the behavior when summaryConfig === 'off'. Previously, this would result in summary = undefined, but now it defaults to 'detailed'. This changes the existing behavior: when users explicitly set the config to 'off', they likely expect reasoning summaries to be disabled, not to use 'detailed'.

Consider either:
  1. Keeping the original behavior where 'off' results in undefined
  2. Or, if the behavior change is intentional, documenting it in the PR description as a breaking change

Suggested change:
-	let summary = summaryConfig === 'off' ? 'detailed' : summaryConfig;
+	let summary = summaryConfig === 'off' ? undefined : summaryConfig;
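The behavioral difference this comment describes can be sketched in isolation. This is a minimal sketch, not code from the PR: `summaryBefore` and `summaryAfter` are illustrative names standing in for the old and new ternaries.

```typescript
type SummaryConfig = 'off' | 'concise' | 'detailed';

// Behavior before this diff: 'off' disables reasoning summaries entirely.
function summaryBefore(config: SummaryConfig): Exclude<SummaryConfig, 'off'> | undefined {
	return config === 'off' ? undefined : config;
}

// Behavior after this diff: 'off' silently falls back to 'detailed'.
function summaryAfter(config: SummaryConfig): Exclude<SummaryConfig, 'off'> {
	return config === 'off' ? 'detailed' : config;
}
```

Under the new code, an explicit 'off' and an explicit 'detailed' become indistinguishable downstream, which is the behavior change the comment flags.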
+	const reasoningParams = reasoningParameterValuesBasedOnModel(endpoint.family, effort, summary);
+	effort = reasoningParams?.effort || effort;
+	summary = reasoningParams?.summary || summary;
 
 	if (effort || summary) {
 		body.reasoning = {
 			...(effort ? { effort } : {}),
@@ -71,6 +75,20 @@ export function createResponsesRequestBody(accessor: ServicesAccessor, options:
 	return body;
 }
 
+// for gpt-5.2 + models, changing the default reasoning parameters
[Review comment — Copilot AI, Dec 18, 2025]
The comment "for gpt-5.2 + models" is vague and could be clearer. The function applies to "hidden model B", which may include various models, not just gpt-5.2. Consider updating the comment to be more accurate, such as "Adjusts reasoning parameters for specific model families (hidden model B)", or referring to the actual logic that determines which models are affected.

Suggested change:
-// for gpt-5.2 + models, changing the default reasoning parameters
+// Adjust reasoning parameter defaults for specific model families (hidden model B)
+type ResponsesReasoningEffort = 'low' | 'medium' | 'high';
+type ResponsesReasoningSummary = 'concise' | 'detailed';
[Review comment — Copilot AI, Dec 18, 2025]
The type definition for ResponsesReasoningSummary is missing the 'off' option that is present in the configuration type ConfigKey.ResponsesApiReasoningSummary. The config type is 'off' | 'concise' | 'detailed', but the local type only includes 'concise' | 'detailed'.

While 'off' is handled specially in the code (converting to 'detailed'), the type should accurately reflect all possible config values to maintain type safety and code clarity. Consider adding 'off' to the type definition, or creating a separate type that represents the normalized values after processing the config.

Suggested change:
-type ResponsesReasoningSummary = 'concise' | 'detailed';
+type ResponsesReasoningSummary = 'off' | 'concise' | 'detailed';
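The comment's second suggestion — a separate type for the normalized values — could be sketched as follows. The names here are illustrative, not from the PR:

```typescript
// Raw config values as the user can set them.
type ReasoningSummaryConfig = 'off' | 'concise' | 'detailed';

// Values downstream code sees once 'off' has been normalized away at the boundary.
type NormalizedReasoningSummary = Exclude<ReasoningSummaryConfig, 'off'>;

function normalizeSummaryConfig(config: ReasoningSummaryConfig): NormalizedReasoningSummary {
	// Mapping 'off' in one place keeps the raw union out of the rest of the code.
	return config === 'off' ? 'detailed' : config;
}
```

With this split, a function like the one below can honestly declare that it never receives 'off'.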

+function reasoningParameterValuesBasedOnModel(model: string, effort: ResponsesReasoningEffort, summary: ResponsesReasoningSummary): { effort?: ResponsesReasoningEffort; summary?: ResponsesReasoningSummary } | undefined {
+	if (isHiddenModelB(model)) {
+		return {
+			effort,
+			summary: 'concise',
+		};
+	}
[Review comment on lines +82 to +88 — Copilot AI, Dec 18, 2025]
The function reasoningParameterValuesBasedOnModel always overrides the summary parameter to 'concise' for hidden model B, regardless of the input summary value. However, it returns the effort parameter unchanged. This seems inconsistent: if the function is meant to override defaults for gpt-5.2+, why is only summary being overridden?

Additionally, the function accepts a summary parameter but ignores it when returning, making the parameter misleading. Consider either:
  1. Removing the summary parameter if it's not used
  2. Or using it in some conditional logic if there's a specific reason to accept it
+}
+
 
[Review comment — Copilot AI, Dec 18, 2025]
There are two consecutive blank lines (lines 90-91), which is inconsistent with the rest of the codebase style. According to the coding standards, there should typically be only one blank line between functions.

Suggested change: remove the extra blank line.
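Taken together, the new function and the `||` fallbacks at its call site give model-specific values precedence over the configured ones. A standalone sketch of that precedence, with a plain boolean standing in for the real `isHiddenModelB(endpoint.family)` lookup and illustrative function names:

```typescript
type Effort = 'low' | 'medium' | 'high';
type Summary = 'concise' | 'detailed';

// Mirrors reasoningParameterValuesBasedOnModel: hidden model B keeps the
// configured effort but always pins the summary to 'concise'.
function overridesFor(hiddenModelB: boolean, effort: Effort): { effort?: Effort; summary?: Summary } | undefined {
	return hiddenModelB ? { effort, summary: 'concise' } : undefined;
}

// Mirrors the call site: model-specific values win via `||`, otherwise the
// configured values stand.
function applyModelOverrides(hiddenModelB: boolean, effort: Effort, summary: Summary): { effort: Effort; summary: Summary } {
	const params = overridesFor(hiddenModelB, effort);
	return {
		effort: params?.effort || effort,
		summary: params?.summary || summary,
	};
}
```

This makes concrete what the review comment on lines +82 to +88 observes: for hidden model B the configured summary can never win, while the configured effort always does.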
 function rawMessagesToResponseAPI(modelId: string, messages: readonly Raw.ChatMessage[], ignoreStatefulMarker: boolean): { input: OpenAI.Responses.ResponseInputItem[]; previous_response_id?: string } {
 	const statefulMarkerAndIndex = !ignoreStatefulMarker && getStatefulMarkerAndIndex(modelId, messages);
 	let previousResponseId: string | undefined;