Commit e0ef491
update
Signed-off-by: Roger Wang <[email protected]>
1 parent 93ea815 commit e0ef491

1 file changed: +38 −66 lines

_posts/2025-10-28-run-multimodal-reasoning-agents-nvidia-nemotron.md renamed to _posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md

title: "Run Multimodal Reasoning Agents with NVIDIA Nemotron on vLLM"
author: "NVIDIA Nemotron Team"
---

# Run Multimodal Reasoning Agents with NVIDIA Nemotron on vLLM

We are excited to release [NVIDIA Nemotron Nano 2 VL](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-BF16), supported by vLLM. This open vision language model ([VLM](https://www.nvidia.com/en-us/glossary/vision-language-models/)) is built for video understanding and document intelligence.

In this blog post, we’ll explore how Nemotron Nano 2 VL advances video understanding and document intelligence, showcase real-world use cases and benchmark results, and guide you through getting started with vLLM for inference to unlock high-efficiency multimodal AI at scale.

## Leading multimodal model for efficient video understanding and document intelligence

NVIDIA Nemotron Nano 2 VL brings both video understanding and document intelligence capabilities together in a single, highly efficient model. Built on the hybrid Transformer–Mamba architecture, it combines the reasoning strength of Transformer models with the compute efficiency of Mamba, achieving high throughput and low latency, which allows it to process multi-image inputs faster.

Trained on NVIDIA-curated, high-quality multimodal data, [Nemotron Nano 2 VL](https://huggingface.co/blog/nvidia/nemotron-vlm-dataset-v2) leads in video understanding and document intelligence benchmarks such as MMMU, MathVista, AI2D, OCRBench, OCRBench-v2, OCR-Reasoning, ChartQA, DocVQA, and Video-MME, delivering top-tier accuracy in multimodal [reasoning](https://www.nvidia.com/en-us/glossary/ai-reasoning/), character recognition, chart reasoning, and visual question answering. This makes it ideal for building multimodal applications that automate data extraction and comprehension across videos, documents, forms, and charts with enterprise-grade precision.

<p align="center">
<picture>
<img src="/assets/figures/2025-multimodal-nvidia-nemotron/figure1.png" width="100%">
</picture>
<br>
Figure 1: Nemotron Nano 2 VL provides leading accuracy on various video understanding and document intelligence benchmarks
</p>

### Improving Efficiency with EVS

With Efficient Video Sampling (EVS), the model achieves higher throughput and faster response times without sacrificing accuracy. EVS prunes redundant frames, preserving semantic richness while allowing longer videos to be processed efficiently. As a result, enterprises can analyze hours of footage, from meetings and training sessions to customer calls, in minutes, gaining actionable insights faster and at lower cost.

<p align="center">
<picture>
<img src="/assets/figures/2025-multimodal-nvidia-nemotron/figure2.png" width="100%">
</picture>
<br>
Figure 2: Accuracy trend of the Nemotron Nano 2 VL model across various token-drop thresholds using efficient video sampling on Video-MME and LongVideo benchmarks
</p>
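
To build intuition for what pruning redundant frames means, below is a small, self-contained sketch of the general idea: drop any frame that is nearly identical to the last frame kept. This is only a toy illustration, not the model's actual EVS implementation; the difference metric, the threshold, and the synthetic video are arbitrary choices for this example. (The vLLM serve commands later in this post expose a `--video-pruning-rate` flag, set there to `0`.)

```python
import numpy as np

def prune_redundant_frames(frames: np.ndarray, threshold: float = 0.02) -> list[int]:
    """Toy illustration of redundant-frame pruning.

    frames: array of shape (num_frames, H, W, C) with values in [0, 1].
    Returns indices of frames to keep: a frame is kept only if it differs
    enough (mean absolute pixel difference) from the last kept frame.
    """
    kept = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        diff = np.abs(frames[i] - frames[kept[-1]]).mean()
        if diff > threshold:
            kept.append(i)
    return kept

# Example: a synthetic "video" where each distinct frame is repeated 4 times
rng = np.random.default_rng(0)
distinct = rng.random((8, 32, 32, 3))
video = np.concatenate([np.repeat(distinct[i:i + 1], 4, axis=0) for i in range(8)])
print(prune_redundant_frames(video))  # keeps one frame per group of identical frames
```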
## About Nemotron Nano 2 VL

* Architecture:

## Run optimized inference with vLLM

This guide demonstrates how to run Nemotron Nano 2 VL on vLLM, achieving accelerated [inference](https://www.nvidia.com/en-us/glossary/ai-inference/) and serving concurrent requests efficiently with BF16, FP8, and FP4 precision support.

### Install vLLM

Support for Nemotron Nano 2 VL is available in the nightly version of vLLM. Run the commands below to install it:

```bash
uv venv
uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly --prerelease=allow
```
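
Optionally, you can sanity-check the install before serving. A minimal sketch that imports the freshly installed package and confirms a GPU is visible (it assumes a CUDA-capable machine; `torch` is already pulled in as a vLLM dependency):

```python
# Quick sanity check: confirm the nightly vLLM build imports and a GPU is visible
import torch
import vllm

print("vLLM version:", vllm.__version__)
print("CUDA available:", torch.cuda.is_available())
```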

### Deploy and query the inference server

Deploy an OpenAI-compatible inference server with vLLM by running one of the following commands, depending on whether you want BF16, FP8, or FP4 precision:

```bash
# BF16
vllm serve nvidia/Nemotron-Nano-12B-v2-VL-BF16 --trust-remote-code --dtype bfloat16 --video-pruning-rate 0

# FP8
vllm serve nvidia/Nemotron-Nano-VL-12B-V2-FP8 --trust-remote-code --quantization modelopt --video-pruning-rate 0

# FP4
vllm serve nvidia/Nemotron-Nano-VL-12B-V2-FP4-QAD --trust-remote-code --quantization modelopt_fp4 --video-pruning-rate 0
```
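
Model weights can take a while to load, so it is worth confirming the server is ready before sending traffic. A minimal check using the same OpenAI client as the snippet below (this assumes the default port `8000`; adjust the base URL if you changed it):

```python
# Minimal readiness check against the OpenAI-compatible endpoint
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="null")

# Lists the models the server is serving; the Nemotron model ID should
# appear once the weights have finished loading.
for model in client.models.list():
    print(model.id)
```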

Once the server is up and running, you can prompt the model using the code snippet below:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="null")

# Simple chat completion (the prompt here is a placeholder example)
resp = client.chat.completions.create(
    model="nvidia/Nemotron-Nano-12B-v2-VL-BF16",
    messages=[{"role": "user", "content": "Summarize what Nemotron Nano 2 VL is designed for."}],
)
print(resp.choices[0].message.content)
```
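
Since this is a vision language model, you will usually pass images or video along with text. The sketch below extends the snippet above to multimodal prompts using OpenAI-style content parts, which the vLLM server accepts (`video_url` is a vLLM-specific extension to the OpenAI schema); the media URLs are placeholders to replace with your own files:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="null")

# Image understanding: placeholder URL, replace with your own document or chart image
resp = client.chat.completions.create(
    model="nvidia/Nemotron-Nano-12B-v2-VL-BF16",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Extract the key figures from this chart."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(resp.choices[0].message.content)

# Video understanding: "video_url" is a vLLM extension to the OpenAI message schema
resp = client.chat.completions.create(
    model="nvidia/Nemotron-Nano-12B-v2-VL-BF16",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize what happens in this clip."},
                {"type": "video_url", "video_url": {"url": "https://example.com/clip.mp4"}},
            ],
        }
    ],
)
print(resp.choices[0].message.content)
```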

For more examples, check out our vLLM cookbook: [Nemotron-Nano2-VL/vllm_cookbook.ipynb](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-Nano2-VL/vllm_cookbook.ipynb).

[*Share your ideas*](http://nemotron.ideas.nvidia.com/?ncid=so-othe-692335) *and vote on what matters to help shape the future of Nemotron.*

*Stay up to date on [NVIDIA Nemotron](https://developer.nvidia.com/nemotron) by subscribing to NVIDIA news and following NVIDIA AI on [LinkedIn](https://www.linkedin.com/showcase/nvidia-ai/posts/?feedView=all), [X](https://x.com/NVIDIAAIDev), [YouTube](https://www.youtube.com/@NVIDIADeveloper), and the [Nemotron channel](https://discord.com/channels/1019361803752456192/1407781691698708682) on [Discord](https://discord.com/invite/nvidiadeveloper).*
