
Commit 0ba9ee0

Increase time limit for Conda builds in CI to 90 minutes (#2075)
The current time limit is 60 minutes, recent builds have come close to hitting it, and Conda builds are often time sensitive. Closes #2073

## By Submitting this PR I confirm:

- I am familiar with the [Contributing Guidelines](https://github.com/nv-morpheus/Morpheus/blob/main/docs/source/developer_guide/contributing.md).
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.

Authors:
- David Gardner (https://github.com/dagardner-nv)

Approvers:
- Anuradha Karuppiah (https://github.com/AnuradhaKaruppiah)

URL: #2075
Parent commit: 5e1116d

File tree: 10 files changed (+38, −6 lines)


.github/workflows/ci_pipe.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -209,7 +209,7 @@ jobs:
     if: ${{ inputs.conda_run_build }}
     needs: [documentation, test]
     runs-on: linux-amd64-gpu-v100-latest-1
-    timeout-minutes: 60
+    timeout-minutes: 90
     container:
       image: ${{ inputs.base_container }}
       options: --cap-add=sys_nice
```

conda/environments/all_cuda-125_arch-x86_64.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -80,7 +80,7 @@ dependencies:
   - numexpr
   - numpydoc=1.5
   - onnx=1.15
-  - openai=1.13
+  - openai==1.13.*
   - papermill=2.4.0
   - pip
   - pkg-config=0.29
```

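The pin change is not just cosmetic: `dependencies.yaml` drives both the generated conda environment files and the pip requirements file below, and the PEP 440 form `openai==1.13.*` is valid in both ecosystems, whereas conda's fuzzy spec `openai=1.13` is conda-only (this reading of the motivation is an inference; the commit message does not state it). As a toy illustration of the wildcard's semantics, using a hypothetical helper that is not part of the repo:

```python
def matches_wildcard_pin(version: str, pin: str = "1.13.*") -> bool:
    # Hypothetical helper (not in the repo) illustrating what a PEP 440
    # "==1.13.*" pin accepts: any release whose leading components equal
    # 1.13, so 1.13.0 and 1.13.3 match while 1.14.0 does not.
    prefix = pin[:-2].split(".") if pin.endswith(".*") else pin.split(".")
    return version.split(".")[:len(prefix)] == prefix
```

This is a deliberately naive string comparison; real resolvers use proper version parsing, but the matching rule for a `.*` suffix is the same idea.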
conda/environments/examples_cuda-125_arch-x86_64.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -38,7 +38,7 @@ dependencies:
  - numexpr
  - numpydoc=1.5
  - onnx=1.15
- - openai=1.13
+ - openai==1.13.*
  - papermill=2.4.0
  - pip
  - pluggy=1.3
```

dependencies.yaml

Lines changed: 2 additions & 1 deletion

```diff
@@ -436,6 +436,7 @@ dependencies:
   - &langchain-nvidia-ai-endpoints langchain-nvidia-ai-endpoints==0.0.11
   - &langchain-openai langchain-openai==0.1.3
   - milvus==2.3.5 # update to match pymilvus when available
+  - &openai openai==1.13.*
   - pymilvus==2.3.6
   - &nemollm nemollm==0.3.5

@@ -494,7 +495,7 @@ dependencies:
   - newspaper3k=0.2
   - numexpr
   - onnx=1.15
-  - openai=1.13
+  - *openai
   - pypdf=3.17.4
   - *pypdfium2
   - *python-docx
```

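Note the `&openai` / `*openai` pair above: the diff introduces a YAML anchor so the openai version is pinned in exactly one place in `dependencies.yaml` and re-used by alias elsewhere, matching the existing `&langchain-openai` and `*pypdfium2` convention in that file. A minimal standalone illustration of the mechanism (the keys here are invented, not from the repo):

```yaml
# YAML anchors (&name) define a value once; aliases (*name) re-use it.
core_deps:
  - &openai openai==1.13.*   # anchor: the single source of truth
example_deps:
  - *openai                  # alias: resolves to "openai==1.13.*"
```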
docs/source/developer_guide/guides/2_real_world_phishing.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -413,7 +413,7 @@ else:
 pipeline.add_stage(RecipientFeaturesStage(config))
 ```
 
-To tokenize the input data we will use Morpheus' `PreprocessNLPStage`. This stage uses the [cuDF subword tokenizer](https://docs.rapids.ai/api/cudf/stable/user_guide/api_docs/subword_tokenize/#subwordtokenizer) to transform strings into a tensor of numbers to be fed into the neural network model. Rather than split the string by characters or whitespaces, we split them into meaningful subwords based upon the occurrence of the subwords in a large training corpus. You can find more details here: [https://arxiv.org/abs/1810.04805v2](https://arxiv.org/abs/1810.04805v2). All we need to know for now is that the text will be converted to subword token ids based on the vocabulary file that we provide (`vocab_hash_file=vocab file`).
+To tokenize the input data we will use Morpheus' `PreprocessNLPStage`. This stage uses the [cuDF subword tokenizer](https://docs.rapids.ai/api/cudf/legacy/user_guide/api_docs/subword_tokenize/#subwordtokenizer) to transform strings into a tensor of numbers to be fed into the neural network model. Rather than split the string by characters or whitespaces, we split them into meaningful subwords based upon the occurrence of the subwords in a large training corpus. You can find more details here: [https://arxiv.org/abs/1810.04805v2](https://arxiv.org/abs/1810.04805v2). All we need to know for now is that the text will be converted to subword token ids based on the vocabulary file that we provide (`vocab_hash_file=vocab file`).
 
 Let's go ahead and instantiate our `PreprocessNLPStage` and add it to the pipeline:
````

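The only change to the guide is the cuDF docs URL (`stable` → `legacy`), but the paragraph it touches describes subword tokenization. As a rough intuition for that idea (a toy WordPiece-style greedy longest-match sketch, not the cuDF implementation, and with a made-up vocabulary):

```python
# Toy greedy longest-match subword tokenizer. The real PreprocessNLPStage
# loads a hashed vocabulary via vocab_hash_file and runs on the GPU; the
# vocabulary and ids below are invented for illustration only.
TOY_VOCAB = {"phish": 0, "##ing": 1, "email": 2, "##s": 3}

def subword_tokenize(word: str, vocab: dict) -> list:
    """Split one word into the longest matching subwords, left to right."""
    ids = []
    pos = 0
    while pos < len(word):
        # Continuation pieces are prefixed with "##", as in WordPiece vocabs.
        for end in range(len(word), pos, -1):
            piece = word[pos:end] if pos == 0 else "##" + word[pos:end]
            if piece in vocab:
                ids.append(vocab[piece])
                pos = end
                break
        else:
            return []  # unknown piece; a real tokenizer emits an [UNK] id
    return ids
```

For example, `subword_tokenize("phishing", TOY_VOCAB)` splits the word into `phish` + `##ing` because both pieces occur in the vocabulary, which is the behavior the guide's paragraph describes.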
python/morpheus/morpheus/_lib/cudf_helpers/__init__.pyi

Lines changed: 4 additions & 0 deletions

```diff
@@ -1,20 +1,24 @@
 from __future__ import annotations
 import morpheus._lib.cudf_helpers
 import typing
+from cudf.core.column.column import ColumnBase
 from cudf.core.buffer.exposure_tracked_buffer import ExposureTrackedBuffer
 from cudf.core.buffer.spillable_buffer import SpillableBuffer
 from cudf.core.dtypes import StructDtype
 import _cython_3_0_11
 import cudf
+import itertools
 import rmm
 
 __all__ = [
+    "ColumnBase",
     "ExposureTrackedBuffer",
     "SpillableBuffer",
     "StructDtype",
     "as_buffer",
     "bitmask_allocation_size_bytes",
     "cudf",
+    "itertools",
     "rmm"
 ]
```

python/morpheus_llm/morpheus_llm/requirements_morpheus_llm.txt

Lines changed: 1 addition & 0 deletions

```diff
@@ -8,5 +8,6 @@ langchain-openai==0.1.3
 langchain==0.1.16
 milvus==2.3.5
 nemollm==0.3.5
+openai==1.13.*
 pymilvus==2.3.6
 torch==2.4.0+cu124
```

tests/conftest.py

Lines changed: 18 additions & 0 deletions

```diff
@@ -1125,6 +1125,16 @@ def langchain_community_fixture(fail_missing: bool):
                          fail_missing=fail_missing)
 
 
+@pytest.fixture(name="langchain_openai", scope='session')
+def langchain_openai_fixture(fail_missing: bool):
+    """
+    Fixture to ensure langchain_openai is installed
+    """
+    yield import_or_skip("langchain_openai",
+                         reason=OPT_DEP_SKIP_REASON.format(package="langchain_openai"),
+                         fail_missing=fail_missing)
+
+
 @pytest.fixture(name="langchain_nvidia_ai_endpoints", scope='session')
 def langchain_nvidia_ai_endpoints_fixture(fail_missing: bool):
     """

@@ -1145,6 +1155,14 @@ def databricks_fixture(fail_missing: bool):
                          fail_missing=fail_missing)
 
 
+@pytest.fixture(name="numexpr", scope='session')
+def numexpr_fixture(fail_missing: bool):
+    """
+    Fixture to ensure numexpr is installed
+    """
+    yield import_or_skip("numexpr", reason=OPT_DEP_SKIP_REASON.format(package="numexpr"), fail_missing=fail_missing)
+
+
 @pytest.mark.usefixtures("openai")
 @pytest.fixture(name="mock_chat_completion")
 def mock_chat_completion_fixture():
```

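Both new fixtures use the repo's `import_or_skip` optional-dependency pattern. A simplified, self-contained stand-in for that helper (the real one lives in the Morpheus test utilities and takes the same three arguments, but this sketch is not its actual implementation):

```python
import pytest


def import_or_skip(modname, reason, fail_missing=False):
    """Simplified sketch of the optional-dependency pattern: import the
    module, skip the test when it is absent, or fail outright when
    fail_missing is set (e.g. in CI where the dependency must be present)."""
    try:
        return __import__(modname)
    except ImportError as exc:
        if fail_missing:
            raise RuntimeError(reason) from exc
        pytest.skip(reason)
```

A session-scoped fixture then `yield`s the result, so every test requesting the fixture either gets the module or is skipped with a uniform reason string.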
tests/morpheus_llm/llm/conftest.py

Lines changed: 8 additions & 0 deletions

```diff
@@ -61,6 +61,14 @@ def langchain_community_fixture(langchain_community: types.ModuleType):
     yield langchain_community
 
 
+@pytest.fixture(name="langchain_openai", scope='session', autouse=True)
+def langchain_openai_fixture(langchain_openai: types.ModuleType):
+    """
+    Fixture to ensure langchain_openai is installed
+    """
+    yield langchain_openai
+
+
 @pytest.fixture(name="langchain_nvidia_ai_endpoints", scope='session', autouse=True)
 def langchain_nvidia_ai_endpoints_fixture(langchain_nvidia_ai_endpoints: types.ModuleType):
     """
```

tests/morpheus_llm/llm/test_agents_simple_pipe.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -127,7 +127,7 @@ def test_agents_simple_pipe_integration_openai(config: Config, questions: list[s
     assert float(response_match.group(1)) >= 3.7
 
 
-@pytest.mark.usefixtures("openai", "restore_environ")
+@pytest.mark.usefixtures("langchain_community", "langchain_openai", "numexpr", "openai", "restore_environ")
 @mock.patch("langchain_community.utilities.serpapi.SerpAPIWrapper.aresults")
 @mock.patch("langchain_openai.OpenAI._agenerate",
             autospec=True)  # autospec is needed as langchain will inspect the function
```
