Commit 34effa4

fix tips (#17249)
1 parent 6090744 commit 34effa4

1 file changed: +5 -7 lines changed

docs/version3.x/pipeline_usage/PaddleOCR-VL.en.md

Lines changed: 5 additions & 7 deletions
@@ -91,13 +91,11 @@ Currently, PaddleOCR-VL offers four inference methods, with varying levels of su
 
 > [!TIP]
 > 1. When using NVIDIA GPU for inference, ensure that the Compute Capability (CC) and CUDA version meet the requirements:
->
-> - PaddlePaddle: CC ≥ 7.0, CUDA ≥ 11.8
-> - vLLM: CC ≥ 8.0, CUDA ≥ 12.6
-> - SGLang: 8.0 ≤ CC < 12.0, CUDA ≥ 12.6
-> - FastDeploy: 8.0 ≤ CC < 12.0, CUDA ≥ 12.6
-> - Common GPUs with CC ≥ 8 include RTX 30/40/50 series and A10/A100, etc. For more models, refer to [CUDA GPU Compute Capability](https://developer.nvidia.com/cuda-gpus)
->
+> > - PaddlePaddle: CC ≥ 7.0, CUDA ≥ 11.8
+> > - vLLM: CC ≥ 8.0, CUDA ≥ 12.6
+> > - SGLang: 8.0 ≤ CC < 12.0, CUDA ≥ 12.6
+> > - FastDeploy: 8.0 ≤ CC < 12.0, CUDA ≥ 12.6
+> > - Common GPUs with CC ≥ 8 include RTX 30/40/50 series and A10/A100, etc. For more models, refer to [CUDA GPU Compute Capability](https://developer.nvidia.com/cuda-gpus)
 > 2. vLLM compatibility note: Although vLLM can be launched on NVIDIA GPUs with CC 7.x such as T4/V100, timeout or OOM issues may occur, and its use is not recommended.
 > 3. Currently, PaddleOCR-VL does not support ARM architecture CPUs. More hardware support will be expanded based on actual needs in the future, so stay tuned!
 > 4. vLLM, SGLang, and FastDeploy cannot run natively on Windows or macOS. Please use the Docker images we provide.
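As an aside (not part of this commit), the CC/CUDA thresholds quoted in the tip can be checked locally. A minimal sketch, assuming a CUDA-enabled build of PaddlePaddle 2.x is installed; the threshold tuples simply mirror the list above:

```python
# Illustrative sketch only, not part of this commit.
# Checks the local GPU against the CC / CUDA thresholds quoted in the tip above.
# Assumes a CUDA-enabled build of PaddlePaddle (2.x) is installed.
import paddle

if paddle.device.is_compiled_with_cuda() and paddle.device.cuda.device_count() > 0:
    cc = paddle.device.cuda.get_device_capability()  # e.g. (8, 6) for an RTX 30-series card
    cuda = paddle.version.cuda()                     # CUDA version Paddle was built with, e.g. "12.6"
    print(f"Compute Capability: {cc[0]}.{cc[1]}, CUDA: {cuda}")
    print("PaddlePaddle backend (CC >= 7.0):", cc >= (7, 0))
    print("vLLM backend (CC >= 8.0):", cc >= (8, 0))
    print("SGLang / FastDeploy (8.0 <= CC < 12.0):", (8, 0) <= cc < (12, 0))
else:
    print("No CUDA device visible to PaddlePaddle.")
```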
