
Add functional tests for init, train step, and inference for every supported released model #15433

Open
pzelasko wants to merge 9 commits into main from model-support-functional-tests

Conversation

@pzelasko
Collaborator

Important

The Update branch button must only be pressed on very rare occasions.
An outdated branch is never blocking the merge of a PR.
Please reach out to the automation team before pressing that button.

What does this PR do ?

Add a one line overview of what this PR aims to accomplish.

Collection: all

Changelog

  • Add specific line by line info of high level changes in this PR.

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this 

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you have read and followed the Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.

Additional Information

  • Related to # (issue)

pzelasko and others added 6 commits February 18, 2026 10:50
Adds CI-level functional tests covering initialization, a training step,
and inference for every non-deprecated model in model-support-table.csv
(~80 models across ASR, TTS, speaker, VAD, diarization, audio codec,
audio enhancement, and SSL categories).

Deliverables:
- scripts/ci/download_model_support_models.py: downloads all models
  to a local cache dir; handles non-standard HF filenames and NGC
  models absent from list_available_models()
- tests/functional_tests/test_model_support.py: parametrized pytest
  suite with init / training_step / inference tests per model
- tests/functional_tests/L2_Model_Support_*.sh: one bash script per
  model for fully parallel CI jobs
- .github/workflows/cicd-main-speech.yml: ~80 new matrix entries

Key implementation notes:
- Models loaded from pre-downloaded .nemo files (NEMO_MODEL_SUPPORT_DIR)
- Single-model cache eviction prevents GPU OOM when running full suite
- SSL models (EncDecDenoiseMaskedTokenPredModel) require noisy_input_signal
- Diarization (SortformerEncLabelModel) uses audio_signal parameter
- vad_multilingual_frame_marblenet loaded as EncDecFrameClassificationModel
  with strict=False (legacy checkpoint / architecture mismatch)
- Frame_VAD_Multilingual_MarbleNet_v2.0 loaded with strict=False
- Training step skipped for TTS / codec / SALM categories (GAN loops)

Test results: 211 passed, 27 skipped, 3 xfailed, 0 failures

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Two root causes fixed:

1. conftest.py: Null out model data configs (train_ds/validation_ds/test_ds)
   before Trainer.fit() to prevent ModelPT.setup() from trying to load
   training data from paths that don't exist in CI, and to prevent
   OmegaConf.to_object() from failing on MISSING (???) config values.

2. multitalker test: Include spk_targets and bg_spk_targets directly in the
   batch tuple (6 elements), matching what the model's training_step expects,
   instead of calling set_speaker_targets separately.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
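The conftest.py fix described in point 1 could be sketched as follows. This uses a plain dict for illustration, while the real code operates on an OmegaConf DictConfig; the function name is hypothetical:

```python
# Sketch of nulling out data configs before Trainer.fit(), so ModelPT.setup()
# never tries to load datasets from paths that don't exist in CI and
# OmegaConf.to_object() never hits MISSING ("???") values.
# Illustrated with a plain dict; the real conftest works on a DictConfig.
DATA_KEYS = ("train_ds", "validation_ds", "test_ds")


def null_data_configs(model_cfg):
    """Return a copy of the config with all dataset sections set to None."""
    cleaned = dict(model_cfg)
    for key in DATA_KEYS:
        cleaned[key] = None
    return cleaned
```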
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
@github-actions github-actions bot added core Changes to NeMo Core ASR CI labels Feb 24, 2026
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Contributor

@github-advanced-security github-advanced-security bot left a comment


CodeQL found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.

Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Contributor

Copilot AI left a comment


Pull request overview

This PR adds comprehensive functional tests for model support across NeMo's entire model catalog. The tests verify model initialization, training step execution, and inference for each supported model, ensuring backward compatibility and catching regressions.

Changes:

  • Added functional test suite with ~60+ model-specific test files following a consistent pattern
  • Added conftest.py with shared utilities for test preparation and trainer stubbing
  • Added shell scripts for running each model's tests with coverage tracking
  • Updated existing test runner scripts to exclude the new functional_tests directory
  • Fixed CUDA version compatibility issue in cuda_python_utils.py

Reviewed changes

Copilot reviewed 167 out of 167 changed files in this pull request and generated no comments.

Summary per file:

  • tests/functional_tests/conftest.py: shared utilities for training-step and transcribe preparation
  • tests/functional_tests/test_model_support_*.py: model-specific test files (60+ files) following a consistent pattern
  • tests/functional_tests/L2_Model_Support_*.sh: shell scripts for running individual model tests with coverage
  • tests/functional_tests/L0_Unit_Tests_*.sh: updated to exclude functional_tests from unit-test runs
  • nemo/core/utils/cuda_python_utils.py: fixed CUDA 13 compatibility by replacing dynamic parameter inspection with a version check


Collaborator

@blisc blisc left a comment


I'm wondering if there is a way for us to matrix all of these tests together into one file rather than having a lot of loose files. I did not check that all shell scripts call the correct Python file (hoping Copilot will do this for me).

TTS tests look good; I did not look at other models. The Hifigan training test is a bit simplistic, but since it's an older model, I'm inclined to give it a pass.

Approved, but we should find a way to make sure that these tests continue to be updated as we add more models.

@@ -0,0 +1,479 @@
[

Is the expectation for future model owners to add to this file as they release? Seems prone to human error

"audio_codes_lens": audio_codes_lens,
}

batch_output = model.process_batch(batch)

I'm surprised this works without context audio. I'll have to see what we do under the hood.

- runner: self-hosted-azure
script: L2_TTS_InferEvaluatelongform_Magpietts_MoE_ZeroShot
# Model support functional tests
- runner: self-hosted-azure

@pzelasko regarding scheduling this, we actually run nightly tests on the main branch already. It looks like one of the CI tests needs to be fixed.
https://github.com/NVIDIA-NeMo/NeMo/actions/runs/22421911989
https://github.com/NVIDIA-NeMo/NeMo/blob/main/.github/workflows/cicd-main.yml#L17

In any case, to separate these out to run nightly only, it may be better to move the tests to a new group called e2e-nightly (or whatever name you prefer) and apply a condition like this:
https://github.com/NVIDIA-NeMo/NeMo/blob/main/.github/workflows/cicd-main.yml#L246

The condition could be something like:

if: ${{ github.event_name == 'schedule' }}
needs: unit-tests
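Put together, the suggested nightly-only group might look something like this (a sketch; the group name and job layout are illustrative, not the actual workflow):

```yaml
# Hypothetical nightly-only job group in cicd-main.yml
e2e-nightly:
  needs: unit-tests
  if: ${{ github.event_name == 'schedule' }}
  # ... matrix of L2_Model_Support_* jobs would go here ...
```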


Labels

CI, core (Changes to NeMo Core)


4 participants