
[WIP][algo] Migrate and implement the GDPO algorithm into the existing framework.#5422

Draft

Rhetee wants to merge 2 commits into verl-project:main from Rhetee:GDPO

Conversation

@Rhetee commented Feb 27, 2026

What does this PR do?

Add concise overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

This PR references the original paper and adds the logic needed to reproduce its results within the existing verl framework, such as multi-reward scoring.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: [trainer] feat: add support for the GDPO algorithm #5409
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward, fully_async, one_step_off
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

WIP

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces the GDPO algorithm and its associated reward calculation logic. My review focuses on improving code correctness, robustness, and maintainability. I've identified a critical bug in the reward manager that would lead to a crash, a typo in a class name, and several areas for improvement in the new reward scoring script, including fragile parsing logic, use of print statements, and a variable name typo. Addressing these points will make the new implementation more robust and easier to maintain.

Comment on lines +84 to +90
if isinstance(result, dict):
    score = result["score"]
    for key, value in result.items():
        reward_extra_info[key] = value
else:
    score = result
    reward_extra_info["acc"] = score

critical

The current logic for processing the result from compute_score is incorrect when the function returns a tuple, as verl.utils.reward_score.rlla.compute_score does. This will assign a tuple to the score variable, which will cause a runtime error later when it's expected to be a float (e.g., in torch.tensor(scores)). You need to handle the tuple case explicitly to correctly extract the score and other reward components.

Suggested change

Before:

if isinstance(result, dict):
    score = result["score"]
    for key, value in result.items():
        reward_extra_info[key] = value
else:
    score = result
    reward_extra_info["acc"] = score

After:

if isinstance(result, dict):
    score = result["score"]
    reward_extra_info.update(result)
elif isinstance(result, tuple) and len(result) == 4:
    score, format_score, correctness_score, length_score = result
    reward_extra_info["score"] = score
    reward_extra_info["format_score"] = format_score
    reward_extra_info["correctness_score"] = correctness_score
    reward_extra_info["length_score"] = length_score
else:
    score = result
    reward_extra_info["acc"] = score
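As more return shapes appear, this normalization could also be pulled into a small helper to keep the manager readable. A sketch under the assumptions stated above (dict with a "score" key, a 4-tuple of score components, or a bare float); the helper name is illustrative, not existing verl code:

```python
def normalize_score_result(result, reward_extra_info):
    """Normalize the heterogeneous return types of compute_score into a
    single float score, recording any components as extra reward info."""
    if isinstance(result, dict):
        # Dict case: "score" carries the scalar; keep all keys as extra info.
        reward_extra_info.update(result)
        return float(result["score"])
    if isinstance(result, tuple) and len(result) == 4:
        # Tuple case (as returned by rlla.compute_score per the review).
        score, format_score, correctness_score, length_score = result
        reward_extra_info.update({
            "score": score,
            "format_score": format_score,
            "correctness_score": correctness_score,
            "length_score": length_score,
        })
        return float(score)
    # Bare scalar case.
    reward_extra_info["acc"] = result
    return float(result)
```

With this in place, the manager body reduces to `score = normalize_score_result(result, reward_extra_info)` regardless of which score function is configured.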



@register("gdpo")
class GDPOdRewardManager(RewardManagerBase):

high

There appears to be a typo in the class name GDPOdRewardManager. It should likely be GDPORewardManager to align with the algorithm name ("gdpo") it's registered for. This will improve clarity and prevent confusion.

Suggested change
class GDPOdRewardManager(RewardManagerBase):
class GDPORewardManager(RewardManagerBase):

return 1.0

if os.getenv("REFINEDREWARD", 0) == "1":
    print("REFINEDREWARD is set to 1, so strict match is used")

high

This file contains many print statements, likely for debugging. In a library, this is considered bad practice as it pollutes stdout and makes output difficult to control. Please replace these with a proper logging framework, such as Python's built-in logging module. This will allow for configurable log levels (e.g., DEBUG, INFO) and integrate better with the application's overall logging strategy.
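A minimal sketch of that migration, assuming a module-level logger; the `VERL_LOGGING_LEVEL` environment variable here is an illustrative configuration hook, not necessarily verl's actual convention:

```python
import logging
import os

# Module-level logger; setLevel accepts level names as strings ("DEBUG", "INFO", ...).
logger = logging.getLogger(__name__)
logger.setLevel(os.getenv("VERL_LOGGING_LEVEL", "WARNING"))

def strict_match_enabled() -> bool:
    # String default "0" avoids comparing an int default against "1".
    enabled = os.getenv("REFINEDREWARD", "0") == "1"
    if enabled:
        # Replaces: print("REFINEDREWARD is set to 1, so strict match is used")
        logger.info("REFINEDREWARD is set to 1, so strict match is used")
    return enabled
```

Unlike `print`, these messages are suppressed by default at WARNING level and can be turned on per-module without touching the code.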

Comment on lines +288 to +296
exp_name = str(os.getenv("EXPERIMENT_NAME", ""))
if "llama" in exp_name:
    predict_str = (
        solution_str.split("<|start_header_id|>assistant<|end_header_id|>")[-1].split("<|eot_id|>")[0].strip()
    )
elif "qwen" in exp_name:
    predict_str = solution_str.split("<|im_start|>assistant")[-1].split("<|im_end|>")[0].strip()
else:
    raise NotImplementedError(f"Unknown model name: {exp_name}")

high

The logic for parsing the solution_str relies on hardcoded model name checks within an environment variable (EXPERIMENT_NAME). This approach is fragile and not scalable. It will break if the model name in the environment variable changes, or for new models not explicitly handled here. A more robust solution would be to either pass the tokenizer to this function to properly decode the response, or to perform the parsing in the calling context (which has the tokenizer) and pass the clean response string. Relying on string splitting with model-specific tokens is not a maintainable practice.
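One way to decouple the score function from chat templates is to have the caller decode only the unpadded response tokens and pass the clean string in. A sketch; the ids/mask plumbing is an assumption about the calling context, not existing verl code:

```python
def extract_response(response_ids, response_mask, tokenizer) -> str:
    """Decode only the valid (unpadded) response tokens, independent of
    which chat template (llama, qwen, ...) produced them."""
    # Count valid positions from the response attention mask, then decode
    # just those tokens, dropping template markers like <|eot_id|>.
    valid_len = sum(response_mask)
    return tokenizer.decode(response_ids[:valid_len], skip_special_tokens=True)
```

The score function then receives `predict_str` directly and never needs to know which model produced the rollout, removing the `EXPERIMENT_NAME` dependency entirely.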

completions = [[{"role": "assistant", "content": predict_str}]]
answer = [ground_truth]

fomrat_score = customize_format_reward_func(completions, answer, step, format_max_possible, format_min_possible)[0]

high

There is a typo in the variable name fomrat_score. It should be format_score. This typo is used consistently within the compute_score function, which harms readability and maintainability. Please correct it here and in its other usages within this function.

Suggested change
fomrat_score = customize_format_reward_func(completions, answer, step, format_max_possible, format_min_possible)[0]
format_score = customize_format_reward_func(completions, answer, step, format_max_possible, format_min_possible)[0]
