
Conversation

@araina-amd

- Multinode scaling projection from baseline to target node count (a minimal sketch follows this list)
- Automatic config reduction for single-node benchmarking (PP and EP rescaling)
- Integration with pipeline simulation for accurate baseline calculation
- Per-layer communication estimation (TP AllReduce, MoE All-to-All)
- Detailed communication breakdown with message sizes
- Support for overlapped gradient all-reduce (default enabled)
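
A minimal sketch of the projection step described in the list above; all names and the linear communication-scaling rule here are illustrative assumptions, not the PR's actual implementation:

def project_iteration_time(baseline_ms, comm_breakdown_ms, baseline_nodes, target_nodes):
    """Scale a measured baseline iteration to a target node count.

    comm_breakdown_ms: per-collective times measured at the baseline, e.g.
    {"tp_allreduce": 1.2, "moe_alltoall": 3.4, "gradient_allreduce": 5.0}.
    """
    # Compute time is assumed unchanged; communication is rescaled by a simple
    # node-count ratio (placeholder -- a real model re-estimates each collective).
    scale = target_nodes / baseline_nodes
    compute_ms = baseline_ms - sum(comm_breakdown_ms.values())
    projected = {op: t * scale for op, t in comm_breakdown_ms.items()}
    # Overlapped gradient all-reduce is reported but kept off the critical path.
    exposed_ms = sum(t for op, t in projected.items() if op != "gradient_allreduce")
    return compute_ms + exposed_ms, projected

The real code instead rebuilds the per-layer communication from the collective model and the pipeline simulation, as the diff excerpts later in this conversation show.
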
@yuankaichen-amd

Let's separate the style/formatting changes from the actual changes and make it into two PRs (if formatting is actually necessary).

@yuankaichen-amd

As for the actual changes, there are several key things missing from the code. Let's discuss it offline.

@araina-amd marked this pull request as a draft on January 15, 2026 19:20
@araina-amd changed the title from "Multinode projection with different parallelization strategies when single node is benchmarked" to "[WIP] Multinode projection with different parallelization strategies when single node is benchmarked" on Jan 15, 2026
…nce_projection

- Delete primus/core/projection/multinode_projection/ directory
- All multinode projection functionality is now in performance_projection/projection.py
- Communication calculation, hardware config loading, and projection logic consolidated
@araina-amd force-pushed the dev/araina/multinode_performance_model branch from 53259c7 to 23ea7c6 on January 16, 2026 00:46
if protocol == "simple":
    # Simple protocol: one packet, add header
    node_lat = args.write_latency + args.write_resp + args.write_latency
    num_packets = 1
"""
if protocol == "simple":
    pod_lat = args.pod_lat * 3
    num_packets = 1
intra_node_fanout, inter_node_fanout = get_max_fanout(args)
msg_size_per_peer = ceil(msg_size / gpus)
gpus_per_node = min(gpus, args.node_size)
num_nodes = ceil(gpus / gpus_per_node)
# Model parameters
hidden_size = model_config.hidden_size
num_layers = model_config.num_layers
num_experts = model_config.num_experts
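
As a rough illustration of how these model parameters feed the per-layer message-size estimates (the byte counts below follow common Megatron-style accounting and are an assumption, not necessarily the PR's exact formulas):

def per_layer_message_sizes(seq_len, micro_batch, hidden_size, topk, tp, ep, dtype_bytes=2):
    """Approximate per-layer communication volumes in bytes."""
    act_bytes = seq_len * micro_batch * hidden_size * dtype_bytes
    # Tensor parallelism: two activation all-reduces per layer in the forward
    # pass (after attention and after the MLP), each over the TP group.
    tp_allreduce_bytes = 2 * act_bytes if tp > 1 else 0
    # MoE: each token's hidden state goes to its top-k experts, dispatched
    # across the EP group and combined on the way back.
    moe_dispatch_bytes = act_bytes * topk if ep > 1 else 0
    return {
        "tp_allreduce": tp_allreduce_bytes,
        "moe_dispatch": moe_dispatch_bytes,
        "moe_combine": moe_dispatch_bytes,
    }
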
…y accounted in the pipeline simulation model.
a2a_combine = cm.alltoall(coll_args, dispatch_size, ep, groups=['ep'])

# Forward: dispatch + combine, Backward: same
fwd_time = (a2a_dispatch + a2a_combine) / 1000 # Convert to ms

these two numbers are identical. why don't we have a unified variable?
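
One way to address this, assuming the dispatch and combine calls really do use the same message size and EP group, and that cm.alltoall returns microseconds as the division by 1000 in the diff suggests:

def moe_alltoall_times_ms(cm, coll_args, dispatch_size, ep):
    # Compute the all-to-all cost once and reuse it for dispatch and combine.
    a2a_us = cm.alltoall(coll_args, dispatch_size, ep, groups=['ep'])
    fwd_ms = 2 * a2a_us / 1000  # dispatch + combine
    bwd_ms = fwd_ms             # backward mirrors the forward pattern
    return fwd_ms, bwd_ms
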


comm_ops.append({
    'type': 'MoE All-to-All',
    'time_fwd_ms': fwd_time,

same here.


print(f" Forward time: {forward_time:.2f} ms")
print(f" Backward time: {backward_time:.2f} ms")
print(f" Forward time (compute only): {forward_time:.2f} ms")

it's not "compute only" -- the benchmarked layer contains down-scaled all-to-alls.

@yuankaichen-amd

LGTM in general. I left some comments in the code as well as below:

  1. Baseline (time, nodes) in the CLI input and its related code is not very useful. Since it is only used for printing results, I suggest we remove it.

  2. Please make PROJECTION_NNODES=4 a CLI flag; if it is not specified, default to baseline_nodes, which is to be calculated from pp/tp/ep/... in the config (a sketch of such a flag follows this list).

  3. Document an example hardware config at the CLI level and what it should include. If the user doesn't provide one, what are we using? Is the collective model able to select config numbers based on the GPUs/NICs it detects on the node?
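
A sketch of point 2, reusing the --target-nodes flag that appears in the usage example further down; the helper names here are hypothetical, not the PR's actual code:

import argparse

def add_projection_args(parser: argparse.ArgumentParser) -> None:
    parser.add_argument(
        "--target-nodes", type=int, default=None,
        help="Node count to project to; when omitted, fall back to the "
             "baseline node count derived from pp/tp/ep/... in the config.",
    )

def resolve_target_nodes(args: argparse.Namespace, baseline_nodes: int) -> int:
    # baseline_nodes is computed from the parallelism settings in the config.
    return args.target_nodes if args.target_nodes is not None else baseline_nodes
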

…a_parallel_size to use PROJECTION_NNODES, fixed wgrad double-counting (set to 0.0), removed wgrad additions for IO layers, and added zero-bubble scheduler support with 50/50 B/W split when enable_zero_bubble=True.
…d _run_pipeline_simulation_megatron_zb() to use actual Megatron zero-bubble scheduler (ILP-based) instead of simple heuristic scheduler.

Add custom_hardware_example.yaml for hardware configuration.
Also fixes some prints.
Usage:
	bash runner/primus-cli direct --script primus/cli/main.py -- projection performance --config examples/megatron/configs/MI300X/deepseek_v2_lite-BF16-pretrain.yaml --target-nodes 6
Projection accuracy for DeepSeek V2 Lite:
	- PP=3, EP=8 (3 nodes): Projected 6628ms vs Measured 6468ms = +2.5% error
	- PP=1, EP=16 (2 nodes): Projected 5337ms vs Measured 5276ms = +1.2% error
total_gpus = num_nodes * gpus_per_node

if dp == -1:
    dp = total_gpus // (tp * pp * ep * cp)
# Only print from rank 0 to avoid duplicate output
is_rank_0 = not dist.is_initialized() or dist.get_rank() == 0

runtime_config = training_config.runtime_config
@araina-amd changed the title from "[WIP] Multinode projection with different parallelization strategies when single node is benchmarked" to "Multinode projection with different parallelization strategies when single node is benchmarked" on Jan 24, 2026
@araina-amd marked this pull request as ready for review on January 24, 2026 01:47
target_grad_ar = target_breakdown.get('gradient_allreduce', 0)
grad_ar_msg = f"{target_grad_ar:.3f} ms (overlapped - not in critical path)"
else:
    target_grad_ar = 0
)

# Calculate speedup
speedup = benchmarked_time_ms / projected_time_ms if projected_time_ms > 0 else 0

ideal_speedup = dp_target / min_dp if min_dp > 0 else dp_target
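
For intuition, plugging illustrative values (not taken from the PR) into these two formulas:

# Illustrative values only: a benchmarked baseline of 8000 ms against a
# projected 4500 ms at the target scale, with DP growing from 1 to 2.
benchmarked_time_ms, projected_time_ms = 8000.0, 4500.0
min_dp, dp_target = 1, 2

speedup = benchmarked_time_ms / projected_time_ms if projected_time_ms > 0 else 0
ideal_speedup = dp_target / min_dp if min_dp > 0 else dp_target
print(f"speedup {speedup:.2f}x vs ideal {ideal_speedup:.2f}x "
      f"({100 * speedup / ideal_speedup:.0f}% scaling efficiency)")
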
@@ -0,0 +1,659 @@
import numpy as np
from math import ceil
from typing import Tuple