
Remove triton optimization config, causing error for multi gpu inference #2079

Merged
rapids-bot[bot] merged 3 commits into nv-morpheus:branch-25.02 from tzemicheal:tz-sid-triton-fix
Jan 10, 2025

Conversation

@tzemicheal
Contributor

@tzemicheal tzemicheal commented Dec 10, 2024

Description

Running Triton inference for the SID and phishing detection pipelines with multiple GPUs on nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.11 results in a segmentation fault. The TRT optimization block in the models' config.pbtxt causes tritonserver:24.11 to fail with the following error. This PR removes that block so the pipelines run when all GPUs are selected for inference.

2024-12-09 23:24:38.378753895 [E:onnxruntime:, sequential_executor.cc:516 ExecuteKernel] Non-zero status code returned while running TRTKernel_graph_torch_jit_3139280210422962738_0 node. Name:'TensorrtExecutionProvider_TRTKernel_graph_torch_jit_3139280210422962738_0_0' Status Message: TensorRT EP execution context enqueue failed.
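For context, the fix removes the TensorRT execution-accelerator stanza from each model's config.pbtxt, falling back to the default ONNX Runtime execution. The sketch below shows the general shape of such a block in Triton's config format; the exact parameter keys and values in the Morpheus model configs may differ, so treat this as illustrative rather than a copy of the repository file:

```protobuf
# Illustrative optimization block of the kind removed from config.pbtxt.
# Deleting this stanza makes the ONNX Runtime backend run the model
# directly instead of routing it through the TensorRT execution provider.
optimization {
  execution_accelerators {
    gpu_execution_accelerator : [
      {
        name : "tensorrt"
        parameters { key: "precision_mode" value: "FP16" }
        parameters { key: "max_workspace_size_bytes" value: "1073741824" }
      }
    ]
  }
}
```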

Closes #2028

By submitting this PR I confirm:

  • I am familiar with the Contributing Guidelines.
  • When the PR is ready for review, new or existing tests cover these changes.
  • When the PR is ready for review, the documentation is up to date with these changes.

@tzemicheal tzemicheal requested a review from a team as a code owner December 10, 2024 00:03
@copy-pr-bot

copy-pr-bot bot commented Dec 10, 2024

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@tzemicheal tzemicheal added the bug Something isn't working label Dec 10, 2024
@tzemicheal tzemicheal self-assigned this Dec 10, 2024
@tzemicheal tzemicheal added improvement Improvement to existing functionality breaking Breaking change and removed bug Something isn't working labels Dec 10, 2024
@tzemicheal tzemicheal requested a review from a team December 11, 2024 15:51
@tzemicheal tzemicheal added non-breaking Non-breaking change and removed breaking Breaking change labels Dec 13, 2024
@dagardner-nv
Contributor

/ok to test

@dagardner-nv
Contributor

/merge

@rapids-bot rapids-bot bot merged commit e6a1170 into nv-morpheus:branch-25.02 Jan 10, 2025
11 checks passed
@tzemicheal tzemicheal deleted the tz-sid-triton-fix branch January 27, 2025 16:58
rapids-bot bot pushed a commit that referenced this pull request Jan 31, 2025
- Remove the automatic TensorRT optimization block from the `all-MiniLM-L6-v2` config.pbtxt. This was causing a segfault in the `vdb_upload` example.
- Triton issue: triton-inference-server/server#7885
- This [PR](#2079) removed optimization from `sid-minibert-onnx` and `phishing-bert-onnx`

Closes #1649

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/nv-morpheus/Morpheus/blob/main/docs/source/developer_guide/contributing.md).
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.

Authors:
  - Eli Fajardo (https://github.com/efajardo-nv)

Approvers:
  - David Gardner (https://github.com/dagardner-nv)
  - Tad ZeMicheal (https://github.com/tzemicheal)

URL: #2143

Labels

improvement (Improvement to existing functionality), non-breaking (Non-breaking change)

Projects

Archived in project

Development

Successfully merging this pull request may close these issues.

[BUG]: Segfault when using Triton 24.09

3 participants