---
layout: post
title: "Run Multimodal Reasoning Agents with NVIDIA Nemotron on vLLM"
author: "NVIDIA Nemotron Team"
---

We are excited to release [NVIDIA Nemotron Nano 2 VL](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-BF16), supported by vLLM. This open vision language model ([VLM](https://www.nvidia.com/en-us/glossary/vision-language-models/)) is built for video understanding and document intelligence.

Nemotron Nano 2 VL uses a hybrid Transformer–Mamba design and delivers higher throughput while maintaining state-of-the-art multimodal reasoning accuracy. The model also features [**Efficient Video Sampling (EVS)**](https://arxiv.org/abs/2510.14624), a new technique that reduces redundant [token](https://blogs.nvidia.com/blog/ai-tokens-explained/) generation for video workloads, allowing more videos to be processed with higher efficiency.

In this blog post, we’ll explore how Nemotron Nano 2 VL advances video understanding and document intelligence, showcase real-world use cases and benchmark results, and guide you through getting started with vLLM for inference to unlock high-efficiency multimodal AI at scale.

## Leading multimodal model for efficient video understanding and document intelligence

NVIDIA Nemotron Nano 2 VL brings both video understanding and document intelligence capabilities together in a single, highly efficient model. Built on the hybrid Transformer–Mamba architecture, it combines the reasoning strength of Transformer models with the compute efficiency of Mamba, achieving high throughput and low latency, which allows it to process multi-image inputs faster.

Trained on NVIDIA-curated, high-quality multimodal data, [Nemotron Nano 2 VL](https://huggingface.co/blog/nvidia/nemotron-vlm-dataset-v2) leads in video understanding and document intelligence benchmarks such as MMMU, MathVista, AI2D, OCRBench, OCRBench-v2, OCR-Reasoning, ChartQA, DocVQA, and Video-MME, delivering top-tier accuracy in multimodal [reasoning](https://www.nvidia.com/en-us/glossary/ai-reasoning/), character recognition, chart reasoning, and visual question answering. This makes it ideal for building multimodal applications that automate data extraction and comprehension across videos, documents, forms, and charts with enterprise-grade precision.

<p align="center">
<picture>
<img src="/assets/figures/2025-multimodal-nvidia-nemotron/figure1.png" width="100%">
</picture>
<br>
Figure 1: Nemotron Nano 2 VL provides leading accuracy on various video understanding and document intelligence benchmarks
</p>

### Improving Efficiency with EVS

With EVS, the model achieves higher throughput and faster response times without sacrificing accuracy. The EVS technique prunes redundant frames, preserving semantic richness while enabling efficient processing of longer videos. As a result, enterprises can analyze hours of footage, from meetings and training sessions to customer calls, in minutes, gaining actionable insights faster and at lower cost.

<p align="center">
<picture>
<img src="/assets/figures/2025-multimodal-nvidia-nemotron/figure2.png" width="100%">
</picture>
<br>
Figure 2: Accuracy trend of the Nemotron Nano 2 VL model across various token-drop thresholds using efficient video sampling on the Video-MME and LongVideo benchmarks
</p>

## About Nemotron Nano 2 VL

* Architecture:
  * [C-RADIOv2-H](https://huggingface.co/nvidia/C-RADIOv2-H)-based vision encoder
  * Efficient Video Sampling (EVS) as the token compression module
  * Hybrid Transformer–Mamba architecture: [Nemotron Nano 2 LLM](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2) backbone with reasoning
* Accuracy:
  * Leading accuracy on OCRBench v2
  * Average score of 74 (compared to 64.2 for the current top VL model) across the following benchmarks: MMMU, MathVista, AI2D, OCRBench, OCRBench-v2, OCR-Reasoning, ChartQA, DocVQA, and Video-MME
* Model size: 12B
* Context length: 128k
* Model input: Multi-image documents, videos, text
* Model output: Text
* Get started:
  * Download model weights from Hugging Face: [BF16](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-BF16), [FP8](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-FP8), [FP4-QAD](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-FP4-QAD)
  * Run with vLLM for inference
  * Read the [technical report](https://research.nvidia.com/labs/adlr/files/NVIDIA-Nemotron-Nano-V2-VL-report.pdf) to build custom, optimized models with Nemotron techniques

## Run optimized inference with vLLM

This guide demonstrates how to run Nemotron Nano 2 VL on vLLM, achieving accelerated [inference](https://www.nvidia.com/en-us/glossary/ai-inference/) and serving concurrent requests efficiently, with BF16, FP8, and FP4 precision support.

### Install vLLM

Support for Nemotron Nano 2 VL is available in the nightly version of vLLM. Run the commands below to install vLLM:

```bash
uv venv
source .venv/bin/activate
uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly --prerelease=allow
```
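
As an optional sanity check, you can confirm that the nightly build installed correctly by importing vLLM and printing its version (the exact version string will vary from nightly to nightly):

```bash
# Quick check that vLLM imports cleanly and report the installed version
python -c "import vllm; print(vllm.__version__)"
```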

### Deploy and query the inference server

Deploy an OpenAI-compatible inference server with vLLM by running one of the following commands for BF16, FP8, or FP4 precision:

```bash
# BF16
vllm serve nvidia/Nemotron-Nano-12B-v2-VL-BF16 --trust-remote-code --dtype bfloat16 --video-pruning-rate 0

# FP8
vllm serve nvidia/Nemotron-Nano-12B-v2-VL-FP8 --trust-remote-code --quantization modelopt --video-pruning-rate 0

# FP4
vllm serve nvidia/Nemotron-Nano-12B-v2-VL-FP4-QAD --trust-remote-code --quantization modelopt_fp4 --video-pruning-rate 0
```
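
The commands above set `--video-pruning-rate 0`, which leaves EVS token pruning off. As a rough sketch, assuming the flag accepts a fractional pruning rate between 0 and 1, you could enable EVS for video-heavy workloads like this (the 0.75 value is purely illustrative, not a recommendation):

```bash
# Illustrative only: enable Efficient Video Sampling by pruning a fraction of
# redundant video tokens (here 75%); tune the rate for your accuracy/throughput needs
vllm serve nvidia/Nemotron-Nano-12B-v2-VL-BF16 --trust-remote-code --dtype bfloat16 --video-pruning-rate 0.75
```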

Once the server is up and running, you can prompt the model using the code snippet below:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="null")

# Simple chat completion
resp = client.chat.completions.create(
    model="nvidia/Nemotron-Nano-12B-v2-VL-BF16",
    messages=[
        {"role": "system", "content": "/no_think"},
        {"role": "user", "content": [
            {"type": "text", "text": "Give me 3 interesting facts about this image."},
            {"type": "image_url", "image_url": {"url": "https://blogs.nvidia.com/wp-content/uploads/2025/08/gamescom-g-assist-nv-blog-1280x680-1.jpg"}},
        ]},
    ],
    temperature=0.0,
    max_tokens=1024,
)
print(resp.choices[0].message.content)
```
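
Since Nemotron Nano 2 VL is built for video understanding, you may also want to send a video. Here is a minimal sketch against the same endpoint using curl; it assumes your vLLM build accepts the `video_url` content type, and the video URL is a placeholder you should replace with your own:

```bash
# Illustrative only: ask the model to summarize a short clip
# Assumes the server started above is listening on localhost:8000 and that this
# vLLM build supports the video_url content type; swap in your own video URL
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "nvidia/Nemotron-Nano-12B-v2-VL-BF16",
        "messages": [
          {"role": "user", "content": [
            {"type": "text", "text": "Summarize the key events in this video."},
            {"type": "video_url", "video_url": {"url": "https://example.com/sample-video.mp4"}}
          ]}
        ],
        "max_tokens": 512
      }'
```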

For more examples, check out our [vLLM cookbook](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-Nano2-VL/vllm_cookbook.ipynb) and the [vLLM recipe for Nemotron Nano 2 VL](https://docs.vllm.ai/projects/recipes/en/latest/NVIDIA/Nemotron-Nano-12B-v2-VL.html).

[*Share your ideas*](http://nemotron.ideas.nvidia.com/?ncid=so-othe-692335) *and vote on what matters to help shape the future of Nemotron.*

*Stay up to date on [NVIDIA Nemotron](https://developer.nvidia.com/nemotron) by subscribing to NVIDIA news and following NVIDIA AI on [LinkedIn](https://www.linkedin.com/showcase/nvidia-ai/posts/?feedView=all), [X](https://x.com/NVIDIAAIDev), [YouTube](https://www.youtube.com/@NVIDIADeveloper), and the [Nemotron channel](https://discord.com/channels/1019361803752456192/1407781691698708682) on [Discord](https://discord.com/invite/nvidiadeveloper).*