Merged
44 changes: 31 additions & 13 deletions pr_agent/algo/ai_handlers/litellm_ai_handler.py
@@ -292,16 +292,24 @@ async def chat_completion(self, model: str, system: str, user: str, temperature:

         thinking_kwargs_gpt5 = None
         if model.startswith('gpt-5'):


Action required

1. Azure gpt-5 models skip reasoning configuration 🐞 Bug ✓ Correctness

• When using Azure-deployed GPT-5 models, the model name is transformed to 'azure/gpt-5-xxx' at line 272 BEFORE the GPT-5 check at line 294
• The condition model.startswith('gpt-5') will fail for Azure models since 'azure/gpt-5-xxx'.startswith('gpt-5') returns False
• This causes Azure GPT-5 models to skip the reasoning_effort configuration entirely, resulting in API calls without the required reasoning_effort parameter
• GPT-5 models require reasoning_effort to function properly, so this will likely cause API errors or unexpected behavior
Agent prompt
## Issue Description
Azure-deployed GPT-5 models are not receiving the reasoning_effort configuration because the Azure prefix is added before the GPT-5 model detection check. This causes the `model.startswith('gpt-5')` condition to fail for Azure models.

## Issue Context
- Azure transformation happens at line 272: `model = 'azure/' + model`
- GPT-5 check happens at line 294: `if model.startswith('gpt-5'):`
- For Azure GPT-5 models, the check evaluates: `'azure/gpt-5-xxx'.startswith('gpt-5')` which returns False

## Fix Focus Areas

### Option 1: Check for GPT-5 before Azure transformation
- pr_agent/algo/ai_handlers/litellm_ai_handler.py[271-294]

### Option 2: Modify GPT-5 check to handle Azure prefix
- pr_agent/algo/ai_handlers/litellm_ai_handler.py[294-294]

Recommended approach: Store the original model name before Azure transformation and use it for the GPT-5 check, or modify the check to: `if 'gpt-5' in model and (model.startswith('gpt-5') or model.startswith('azure/gpt-5')):`
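As a sketch of that recommendation (the helper name `is_gpt5` is hypothetical, not part of the codebase), the check could tolerate the Azure prefix directly:

```python
def is_gpt5(model: str) -> bool:
    # Accept both bare and Azure-prefixed GPT-5 model names.
    return model.startswith("gpt-5") or model.startswith("azure/gpt-5")

print(is_gpt5("gpt-5-mini"))        # True
print(is_gpt5("azure/gpt-5-mini"))  # True
print(is_gpt5("azure/gpt-4o"))      # False
```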


-            if model.endswith('_thinking'):
-                thinking_kwargs_gpt5 = {
-                    "reasoning_effort": 'low',
-                    "allowed_openai_params": ["reasoning_effort"],
-                }
-            else:
-                thinking_kwargs_gpt5 = {
-                    "reasoning_effort": 'minimal',
-                    "allowed_openai_params": ["reasoning_effort"],
-                }
+            # Use configured reasoning_effort or default to MEDIUM
+            config_effort = get_settings().config.reasoning_effort
+            try:
+                ReasoningEffort(config_effort)
+                effort = config_effort
+            except (ValueError, TypeError):
+                effort = ReasoningEffort.MEDIUM.value
+                if config_effort is not None:
+                    get_logger().warning(
+                        f"Invalid reasoning_effort '{config_effort}' in config. "
+                        f"Using default '{effort}'. Valid values: {[e.value for e in ReasoningEffort]}"
+                    )
+
+            thinking_kwargs_gpt5 = {
+                "reasoning_effort": effort,
+                "allowed_openai_params": ["reasoning_effort"],
+            }
+            get_logger().info(f"Using reasoning_effort='{effort}' for GPT-5 model")
         model = 'openai/'+model.replace('_thinking', '') # remove _thinking suffix


@@ -338,9 +346,19 @@ async def chat_completion(self, model: str, system: str, user: str, temperature:
             del kwargs['temperature']

         # Add reasoning_effort if model supports it
-        if (model in self.support_reasoning_models):
-            supported_reasoning_efforts = [ReasoningEffort.HIGH.value, ReasoningEffort.MEDIUM.value, ReasoningEffort.LOW.value]
-            reasoning_effort = get_settings().config.reasoning_effort if (get_settings().config.reasoning_effort in supported_reasoning_efforts) else ReasoningEffort.MEDIUM.value
+        if model in self.support_reasoning_models:
+            config_effort = get_settings().config.reasoning_effort
+            try:
+                ReasoningEffort(config_effort)
+                reasoning_effort = config_effort
+            except (ValueError, TypeError):
+                reasoning_effort = ReasoningEffort.MEDIUM.value
+                if config_effort is not None:
+                    get_logger().warning(
+                        f"Invalid reasoning_effort '{config_effort}' in config. "
+                        f"Using default '{reasoning_effort}'. Valid values: {[e.value for e in ReasoningEffort]}"
+                    )
+
             get_logger().info(f"Adding reasoning_effort with value {reasoning_effort} to model {model}.")
             kwargs["reasoning_effort"] = reasoning_effort
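The practical effect of replacing the hard-coded list with enum-based validation can be sketched standalone (this re-creates `ReasoningEffort` locally rather than importing the repo's module; logging is omitted):

```python
from enum import Enum

class ReasoningEffort(str, Enum):
    XHIGH = "xhigh"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"
    MINIMAL = "minimal"
    NONE = "none"

def old_check(effort):
    # Previous behavior: only three values pass, everything else is downgraded.
    return effort if effort in ("high", "medium", "low") else "medium"

def new_check(effort):
    # New behavior: any valid enum value passes; invalid or None falls back.
    try:
        ReasoningEffort(effort)
        return effort
    except (ValueError, TypeError):
        return ReasoningEffort.MEDIUM.value

print(old_check("minimal"))  # medium -- silently downgraded before this change
print(new_check("minimal"))  # minimal -- now accepted
print(new_check("bogus"))    # medium -- invalid values still fall back
```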

3 changes: 3 additions & 0 deletions pr_agent/algo/utils.py
@@ -64,9 +64,12 @@ class PRReviewHeader(str, Enum):


 class ReasoningEffort(str, Enum):
+    XHIGH = "xhigh"
     HIGH = "high"
     MEDIUM = "medium"
     LOW = "low"
+    MINIMAL = "minimal"
+    NONE = "none"


 class PRDescriptionHeader(str, Enum):