From 392bbebfe6a4e063e3e0b7c75feb4730994c88a7 Mon Sep 17 00:00:00 2001
From: Chris Alexiuk
Date: Tue, 28 Oct 2025 16:42:57 -0400
Subject: [PATCH 01/12] Nano V2 VLM Blog

Signed-off-by: Chris Alexiuk
---
 ...imodal-reasoning-agents-nvidia-nemotron.md | 136 ++++++++++++++++++
 .../figure1.png                               | Bin 0 -> 23328 bytes
 .../figure2.png                               | Bin 0 -> 21826 bytes
 3 files changed, 136 insertions(+)
 create mode 100644 _posts/2025-10-28-run-multimodal-reasoning-agents-nvidia-nemotron.md
 create mode 100644 assets/figures/2025-multimodal-nvidia-nemotron/figure1.png
 create mode 100644 assets/figures/2025-multimodal-nvidia-nemotron/figure2.png

diff --git a/_posts/2025-10-28-run-multimodal-reasoning-agents-nvidia-nemotron.md b/_posts/2025-10-28-run-multimodal-reasoning-agents-nvidia-nemotron.md
new file mode 100644
index 0000000..3718b34
--- /dev/null
+++ b/_posts/2025-10-28-run-multimodal-reasoning-agents-nvidia-nemotron.md
@@ -0,0 +1,136 @@
---
layout: post
title: "Run Multimodal Reasoning Agents with NVIDIA Nemotron on vLLM"
author: "NVIDIA Nemotron Team"
---

# Run Multimodal Reasoning Agents with NVIDIA Nemotron on vLLM

We are excited to release [NVIDIA Nemotron Nano 2 VL](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-BF16), supported by vLLM. This open vision language model ([VLM](https://www.nvidia.com/en-us/glossary/vision-language-models/)) is built for video understanding and document intelligence.

Nemotron Nano 2 VL uses a hybrid Transformer–Mamba design and delivers higher throughput while maintaining state-of-the-art multimodal reasoning accuracy. The model also features Efficient Video Sampling (EVS), a new technique that reduces redundant [token](https://blogs.nvidia.com/blog/ai-tokens-explained/) generation for video workloads, allowing you to process more videos with higher efficiency.
In this blog post, we'll explore how Nemotron Nano 2 VL advances video understanding and document intelligence, showcase real-world use cases and benchmark results, and guide you through getting started with vLLM for inference to unlock high-efficiency multimodal AI at scale.

## About Nemotron Nano 2 VL

* Architecture:
  * [C-RADIOv2-H](https://huggingface.co/nvidia/C-RADIOv2-H)-based vision encoder
  * Efficient Video Sampling (EVS) as the token compression module
  * Hybrid Transformer–Mamba architecture: [Nemotron Nano 2 LLM](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2) backbone with reasoning
* Accuracy:
  * Leading accuracy on OCRBench v2
  * Average score of 74 (compared to 64.2 for the current top VLM) across the following benchmarks: MMMU, MathVista, AI2D, OCRBench, OCRBench-v2, OCR-Reasoning, ChartQA, DocVQA, and Video-MME
* Model size: 12B
* Context length: 128K
* Model input: multi-image documents, videos, text
* Model output: text
* Get started:
  * Download model weights from Hugging Face: [BF16](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-BF16), [FP8](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-FP8), [FP4-QAD](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-FP4-QAD)
  * Run with vLLM for inference
  * Read the [technical report](https://www.overleaf.com/project/68d1d48c83696e11ba669f70) to build custom, optimized models with Nemotron techniques

## Run optimized inference with vLLM

Nemotron Nano 2 VL achieves accelerated [inference](https://www.nvidia.com/en-us/glossary/ai-inference/) and serves more requests on the same GPU with BF16, FP8, and FP4 precision support. Follow these instructions to get started.

Create a fresh conda environment:

```bash
conda create -n nemotron-vllm-env python=3.10 -y
conda activate nemotron-vllm-env
```

Make sure to use the main branch of vLLM. Run the command below to install it:

```bash
VLLM_USE_PRECOMPILED=1 pip install git+https://github.com/vllm-project/vllm.git@main
```

We can then serve the model via an OpenAI-compatible API:

```bash
# BF16
vllm serve nvidia/Nemotron-Nano-12B-v2-VL-BF16 --trust-remote-code --dtype bfloat16 --video-pruning-rate 0

# FP8
vllm serve nvidia/Nemotron-Nano-12B-v2-VL-FP8 --trust-remote-code --quantization modelopt --video-pruning-rate 0

# FP4
vllm serve nvidia/Nemotron-Nano-12B-v2-VL-FP4-QAD --trust-remote-code --quantization modelopt_fp4 --video-pruning-rate 0
```

Once the server is up and running, you can prompt the model using the code snippet below:

```python
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="null")

# Simple chat completion
resp = client.chat.completions.create(
    model="nvidia/Nemotron-Nano-12B-v2-VL-BF16",
    messages=[
        {"role": "system", "content": "/no_think"},
        {"role": "user", "content": [
            {"type": "text", "text": "Give me 3 interesting facts about this image."},
            {"type": "image_url", "image_url": {"url": "https://blogs.nvidia.com/wp-content/uploads/2025/08/gamescom-g-assist-nv-blog-1280x680-1.jpg"}},
        ]},
    ],
    temperature=0.0,
    max_tokens=1024,
)
print(resp.choices[0].message.content)
```

For an easier setup with vLLM, refer to our getting started cookbook, available here.

## Leading multimodal model for efficient video understanding and document intelligence

NVIDIA Nemotron Nano 2 VL brings video understanding and document intelligence together in a single, highly efficient model. Built on the hybrid Transformer–Mamba architecture, it combines the reasoning strength of Transformer models with the compute efficiency of Mamba, achieving high throughput and low latency and allowing it to process multi-image inputs faster.
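Because the server speaks the OpenAI chat format, multi-image requests are simply additional `image_url` entries in the user message. The helper below is an illustrative sketch (the function name and placeholder URLs are ours, not part of the vLLM or OpenAI APIs); it mirrors the structure of the single-image snippet above.

```python
# Illustrative helper: build an OpenAI-style chat payload with several images.
# The message structure matches the single-image example above; only the number
# of image_url entries changes.
def build_multi_image_request(prompt: str, image_urls: list[str]) -> dict:
    content = [{"type": "text", "text": prompt}]
    content += [{"type": "image_url", "image_url": {"url": u}} for u in image_urls]
    return {
        "model": "nvidia/Nemotron-Nano-12B-v2-VL-BF16",
        "messages": [
            {"role": "system", "content": "/no_think"},
            {"role": "user", "content": content},
        ],
        "temperature": 0.0,
        "max_tokens": 1024,
    }

req = build_multi_image_request(
    "Compare the trends shown in these two charts.",
    [
        "https://example.com/chart-page-1.png",  # placeholder URL
        "https://example.com/chart-page-2.png",  # placeholder URL
    ],
)
```

With the server from the previous section running, `client.chat.completions.create(**req)` sends the request.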
Trained on NVIDIA-curated, high-quality multimodal data, [Nemotron Nano 2 VL](https://huggingface.co/blog/nvidia/nemotron-vlm-dataset-v2) leads in video understanding and document intelligence benchmarks such as MMMU, MathVista, AI2D, OCRBench, OCRBench-v2, OCR-Reasoning, ChartQA, DocVQA, and Video-MME, delivering top-tier accuracy in multimodal [reasoning](https://www.nvidia.com/en-us/glossary/ai-reasoning/), character recognition, chart reasoning, and visual question answering. This makes it ideal for building multimodal applications that automate data extraction and comprehension across videos, documents, forms, and charts with enterprise-grade precision.

<p align="center">
<img src="/assets/figures/2025-multimodal-nvidia-nemotron/figure1.png" width="80%">
</p>
<p align="center">
Figure 1: Nemotron Nano 2 VL provides leading accuracy on various video understanding and document intelligence benchmarks
</p>
## Improving Efficiency with EVS

With EVS, the model achieves higher throughput and faster response times without sacrificing accuracy. The EVS technique prunes redundant frames, preserving semantic richness while enabling efficient processing of longer videos. As a result, enterprises can analyze hours of footage, from meetings and training sessions to customer calls, in minutes, gaining actionable insights faster and at lower cost.

<p align="center">
<img src="/assets/figures/2025-multimodal-nvidia-nemotron/figure2.png" width="80%">
</p>
<p align="center">
Figure 2: Accuracy trend of the Nemotron Nano 2 VL model across various token-drop thresholds using Efficient Video Sampling on Video-MME and LongVideo benchmarks
</p>
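The serve commands above pass `--video-pruning-rate 0`, which disables EVS. To enable it, supply a nonzero pruning rate at serve time; the value below is purely illustrative (not a tuned recommendation), and the right setting depends on your accuracy/throughput trade-off.

```bash
# Enable EVS at serve time; 0.75 is an illustrative pruning rate, not a tuned value
vllm serve nvidia/Nemotron-Nano-12B-v2-VL-BF16 \
  --trust-remote-code \
  --dtype bfloat16 \
  --video-pruning-rate 0.75
```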
## Get Started

To summarize, Nemotron Nano 2 VL helps you build scalable, cost-efficient agentic AI systems that truly understand documents and video. With open weights, training datasets, and recipes, developers gain full transparency and flexibility to fine-tune and deploy the model across any environment, from on-premises to cloud, for maximum security and privacy.

Ready to build enterprise-ready agents?

* Download Nemotron Nano 2 VL model weights from Hugging Face: [BF16](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-BF16), [FP8](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-FP8), [FP4-QAD](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-FP4-QAD)
* Run with vLLM for inference using this Jupyter Notebook

[*Share your ideas*](http://nemotron.ideas.nvidia.com/?ncid=so-othe-692335) *and vote on what matters to help shape the future of Nemotron.*

*Stay up to date on [NVIDIA Nemotron](https://developer.nvidia.com/nemotron) by subscribing to NVIDIA news and following NVIDIA AI on [LinkedIn](https://www.linkedin.com/showcase/nvidia-ai/posts/?feedView=all), [X](https://x.com/NVIDIAAIDev), [YouTube](https://www.youtube.com/@NVIDIADeveloper), and the [Nemotron channel](https://discord.com/channels/1019361803752456192/1407781691698708682) on [Discord](https://discord.com/invite/nvidiadeveloper).*
\ No newline at end of file
diff --git a/assets/figures/2025-multimodal-nvidia-nemotron/figure1.png b/assets/figures/2025-multimodal-nvidia-nemotron/figure1.png
new file mode 100644
index 0000000000000000000000000000000000000000..2e34c19753e112654cdb0a092d26e4aa23cef234
GIT binary patch
literal 23328
zJP2opQ>XMI?pWUm&{y$rB9z(^y7gG3@yl|C4NzQLl)Kt)mKhjw8HB+XgX>v9y*DGd zv?(K};I}3S!(Kvy+ZszW2F<)Cd2mdvBTvsScg?$>)Anh`_jNEE3rf{{rt%H+@v>ct zA+}Q8l<0{-5mW+HDTl}!zul#5A?DuAG$UkYw7#PgpZ0{iA&8_2z-Q9f1&%UU3!E89 zmYjp341QK@(h1Yi!?hus-H$k0S)ww3iSj_%vT;Em%_#2{!O}><#;OCUH7)2=Q{_U6 z1nxy-LDf8v6UcLxaR)Pnv47O!B24sv>3kPE40kdsjJ6xxY$CiXhlbZ)Aj!v(h%EOg z1imR}TcF^$nf2HoI(nrD&y(mc3G3-biGXe!y8nW=6?*3fvlM9TZRC#ljr5zg5uTp? z;!fbNE2}`@cP^(*ToGg-C1UCA{y+nuCVnkZfWl(!k-}^gIZ7&|NKZChiOs6CWHG5= zl_sR|4^UaCzjYLB5P1{uN>OhnhMfhz{F(>CRv@kq22++gyJp)ku;?N-SpMrtEVorE zeUhUzAn{+!s^mEzwjBg6J$~P#|&?%|59Rzw!(uX7V(wejgJ2-ltodzVje&% zeAokz$K}&%XZ54cl(^XqgeFNcZh41cf6Y>F`c*TC1+_HpCy4%Vi{tg2U8Q-CuEP<_ zwvD@Cd$;D8(#!4bQwK{_fpA7?Dkn%wMZJM?`}cV~M<86rdlylKH1tO|sTm7ZRJ-&^ zbra#0wo4r2SiD|!@Ua>K!igM6K&*PMCwy|Y(3QC(O{qMA;99;YoDo{-$$Sf+Ch1ZB zqdhTPXro`GMRUB9Zp7Sa1GNyM5Ic0oGAI&+AqoZXP2s3>@lFDa@aTK^M-x%Z5CwPI zVRlS|F_meX5fSrS1e2g?^t=Tfu)=0m>cX9iCqHziuoAwv?`6emv0Yi2boQv;?YRJV z{DqQ)H>e*Y1%-;|CVRT$s=dCFWad=Eif45g&XlMpsspEAa$3r|8Q~|C`g|A=R6uq7 z-p{<7#?a-TKM{X;*<8UMF30=fUTcjkFu{=Sebg@pkYGbF2yl)n>4FFNOEdYNC?~*k z`jtp?svtDe6w%7|at7ka?(NuBlnS;{lthJN<=wY){E{siJi_>*z27!uj&wz3?lzdY zN;yVems$B$p0=5BAIKOMX?K>)pJ;!v>rK}bMhKr;TQJdQ;@lx=5D`cc5g`QKLFtCf z**LR}7Qrit=N2{ilBJ-y*Z{vlHqsZJ?JTc1jWbE$TORXCd>Yh8n@PJL5o@JP?Yb$a zu%sxBwCf}Ns<$;e!HuFvrt?$R4v>WdcyrXExHTNs7BlwD!Mi5j10}ywhf4wtrJOzv zThYsCGIl9-^+_N3;qvn-J**Z82qYx3eO1?k+-V$v6@q%cqkufV?kh@{0>Q%>)krq- z$j=x$e$|Bqn&oCuUXC)zy@U|2F#12t@T?lx@beHM6%yja7Ubg-(0-$gx)I>d@#47~ zT23z=P<)C@5f|JC08x&!)?0Ai1hS?q6eEh{woBZ4N7uK*PaA6Iv!5CGE}Lm?iB=Dw zZQbjB;LQCD(>6^Jqc1q0z!^j~lK*>P* z?ZM0=M(JJIA5p7eV3lb6^G4zB>7zBBJDJrjfh{$GMSrxIV)=a#p&)DsoZMici+UEBf#YtJ|se3j)Q? zG4$X5pbaAdP7ZutR~d9SAENRPh`Bz&%9AiZY02P>5+kLrr08yi3Y6ZSfag-4kKe2p zFn*WPXEDaC@McPqN%4fnmB;)6T!svm%*3u?)8#;dtCFxv<{z3I^b~X)iq-ST;RFrY zGT@KaTEM@YJPR?H6e+OTk3$OAKw*4W7T>`V*mY!;aA%Kk0 zB6JgL0j1t(T@$cVCfm$%*52G}@m&`B#k5G`IOucko$v6-7aP8Jr8->GbS{_PDr}3& zUfyfcO@6{iDYetl9|R2%ll9nB{<#{o7HwT!J?ktwQ++! 
z2Xs;-qJ$cPWl6ou?tU#ueE>vj&@jX}F&oj7T6iETg+z@7TjeY$?KMyhwod7sitOn~ z$#v7^8xM}d-1JMD`erea?yb{GznXkzp>Oz>)GKYxY8W}pBpZd2Z{7=4g=q&E<)Bl7 z8n`2#U-4)QKaa&*^>RYS`yvf=xV#}cE&8G~lgTU4eU%*_gkFg}Jtw z_0x8zjRzU>1b9C^0|^+UIvSmk^ij_gz!zYYL=&yMY?qvRulxLIUtk&WG=1BZv#sl7 z`++Vn4+O(s#oOv;JyAZ^&&EE86BPr}XA~M_YI$sJ!o5s%2gJ>EK2lyBC)p~rNWEGY zB~ojzcFPwYBd@CCoCLk zDF_zHWl+ZmRudcY77Kw}&Q16T^Z|a19h06cYE4-qe7iMd2q$u|pgh~2^9$TYRhA1T zW#u;r9g??TIE9;K7<5Lc;}myON79S2bISz} z1CRb?%-vFwAmKBmK@?l4F;5E!J*Em@u)Wy|U*k87h?^RJ;3NIzV5h#uDU=hXtiz!Q zII;@}gS&bS`~~+zdZCqecl4K^EQcKl=P;&&lM++w&|hf~K=eVE&&MYGaHw8Rjy(`- zkG+9>Kdu25K`VJ&IoeOq7_(cd_c^uSD{B{9mZ=6`3&dW%s z$LyuAj;*2HRSrj$L=>jt6lt_9wSH`qc!!(a)S=>^YJmGY-%v~xS6#@iV#`+KGWm>~ z)DUzWf;KwFcmR)K&1>whyK+kF>RE+`Qg~ae7x<@$d3A-~oR=UyKn}*x=1!?W54_fc ztOmb(Z2&KC+1#1|6GSZV_?JKeA=S(Km67kF6)k zw`#g>Jlh7ef{@|KT@CD)vS|ctQa|W%fPj21;pH%zReP_4xcZNFR@)UlBY+Rma4HPQ z9=4{aMSbuEM?^-Z2oc2+K{P$p=;Ndv<*2IAfRPe>jD1fv`;DQ9Zp%cvB=?_v6{R{%l~z3-1<#Aq<=bO88nF<#pbumEYV zmKRMnc@VTI75slu4mvR8IXOUI3H7aJ)*m_*hwuZ_=(7Jv=_ZK1jvXlgKGL$Qa!W}iS` zo;&LJf1appKZJ*$fARkj|5X0JuNmOV$NM|_k$7+?_NUZ%b0uh&2>Ls8gpY$}^3rX~ z3bQK?&wj4we*gA_d-RS3Qb%AynWImSA&DPa81VP~(Dw;$!)~X2-ZE%<^KTcWksVPd z>LJmpb z)UlNU1ju;tdH8lU_*d&g0w@bAA(!4@JWPJ2+M{V@ZUn9KR-9J=%5XO5117zMu4&+! 
zt76GoPKHA_;()9O0{1GSdE`FW_+V^v9$$pUqK!21WM+ z^4jP~i*`-%Tdii=G*4wQXPET1<#;ewlBp`^R$c-VoY0RnKg5j`{=s4k_dMTOug5t- zgKb?5m95_c;@1b-+e~;3q6l`Wba3A=gBmm`bqgKKN29-1Y<&us(Dndthov#C@{Wz~ z;!32eomqGYPIE#$(EkztRQ|ub=bS!5wWCWNdZTx92|2p0e1nVL4sMmFf+bz15bp1} zEZO|8>ug5l>C-or=v*fjeal4{zN2C-**C>*wVP5z-rC65mmbz<&E{XNJ`@i*wRsy-j!tcUXSAMQl#$^Td$EM_h^uR-MLGb z!5B;YfV8o%!oOMK)ZD zkqi8Z;vj%Ig+Gh5D-L9Yx7kR$cB7FOC^ zr7&C43NCRTqA9_LcULrAnE5=N=xUpKL&N9p4qu0bceN3>&JN`x*OEL@3k#2qfo05s zES4Ak_!zx5P`q1dq5s$*?#7CjKdsFzf|ljnC{>M+Lk_ZX^dI$X0FoS~7M5ivEB?$` zcL?o(v&a_aBW_tzjYGZH7g9YG&Zmm?qHxFktAEBA3b^-)2{a1+ zs|(_Q;w(X0oM*0^l>$D_KeSkT&1Ow4dD+|hu$cUn4(jKkB)%gHvd=d6hoM}#Huu%( z%y^Z9Rv%sxQQ`Wgi2Xta3-Ni@rs?e8Hw|Y$2FTrVwrG3>yx+X4a3NSKmn_DV_v?Q{ zYg6tw+F#vs59-W*T1*mfcj@75?Xp6YRy@5&OnJ3OGV(l($?c+Zs*fz(Qtsk^1hh$7 zopHk2$`3-89g`(-NKjrwan4)O099$BqB9INmfnw(sqK;PN+#AD2E_MsQrV~?LW@yuMx>D?|k!#+~g!UpO+@!vq>xraRr;*ky4vI-~r-{`#DQb0p}aG84sm z(Dv%Ad>l9O>yh8MCrs2KhKSI=!Hz=w8+8SP#EHIbWiH98@CIrUE)DgBbh>TW&<$5; z6#28YOK3`Lf;;Ds*kPw~+(D%Dk1;dXUfUOyAX^Reu)-?zh8XA#_gixU9zPrs`!eCx zAt};CuUi|_hn-6nay>xW-!RSPw)S52XIdkpqEWDVXtT)4fSVVizzCpO940?|IFROq z_i8A0u(Aj74POQg7HUa8NMWx!(VioCpX{Yb$*w9*46lrQ32n_z41I(A&u;yr_?|+^ zMSbKZGiHmq^i1EZ3eItoeCx{YIjdbhJ?01Hrprh&n2w`b72r1>bDK>*Y5Vr}>oJ7k z!*C7;xvjf(v%w!7&g#>K3E*fRCw5ERM#iWBEy>9kg1P!Hh1rvKmj@X3x3m=nAZT-v z6z|Ql`asGrnI83eDm!|H3rnhdlZYZjQ0qRiSe($p0tuMc4$VYZCul5w0Ww`Lk{$5E z{k&9z8f}?Kf5-8yzEop4b^fqb9)XeuSS{Sa@_3xlx{;aWE&)N*5iZmoiM%HOzEAyq zcDl@Q#ZOrnSrcJ@<@nDVuVCp&cO|PfUZS_2%1~MLpf*vdi27xGeN!v7;y$V>L^N3B z{-KKtWQ9eWIjJ>9U;*z}Y%0^=!nU`Vv2-qE`fWN}tbYU7s#oCu$n!Dk^dge=+n1a) zLLN4)=`Y57)@KHMu#{;Ofh&_kxpf^_A4r${VZ$K{9b7p5`0a?H?{&lul_8^+Op%X^rFne~ zo^N0$A)e=Z_CfiHc4lq;68;%2puc{Q3N8;^dYCPXp!_28%{`PIus$MUYzYI21?B~O zV_ibAb`5aZJX-*~72Oa8$Vnsd*&$2_!=d6|+(EyU+0$y2$26zTY&`fr?f!$~EDlY0 zh4F~MEpJY5P02AyCPSWfJ!2NtqC~f6B zcNbz|a7=neBRga&n1Y`vfOg*WF4SiII!#^kwT{)q(-f~}U)D9z%D_D>Ay#ZmjvK?3 z6PxOjed1Rddsvo}a1(WI{%3YZEYPNpis5B!x3$qV z^3Z!e~M=Z+Ke5>uK02Gf+eX6r!DgGR=iBf=*=n>7y<%mwY!Qhv4XJoj&q2+pVYvhqCXLD<;7>^4 
zEi5t}@=jJrQnS5Wa5uHtEE#|F;b)HenVAVwl`FKf(e$eDZ8sjgiG;Ss+&bcSioJ_ zh{qDKw9~AUPA#_?41GEV1(03)D}Vq7eJuY9Q-AV05czb@zk_EQ7piOm2sSb?1Gw@@oe#A!6p%Q$g_pmARGbtpCv z7YAWnQXpLV9bwf&A|KrlQk-+FVRsdcjXt=<)+k$;xzDxM1>o*5Vg-c6LWSqcZ>%C> zBD(>z_mE3MNdFi*Zr9i9nArscS&pV%#uf$fL--AMy;IX1w!)KvIOkt%x)LB&D6GHb zhd39$l7XPPsb5~hKQLl;Km^ehjXye7(i2~f04Brrf9Z&n}pxU);4-TiF_>#xzZlHQjE-V$R#0Z zlk?Y(s!YAfvf3W)KN=LnuA5NME-|&T@W~E5&&f6-^4}@L;vc&d^qVmR)zz15Ker)m2WS4pT5=o62q##MP?Qo)$L- zD4?COH<47iHPgrD$RvU-#ubxHN>-qYTFJ_zdhot&kQ7~P{ebGR&gxMv6bu+KT-Ogn z9uk)Z*moaIlKUpkdx)vT`Us$V^{7WF^}hA%=chR>t!KL=t=+D6?EuUaJTxd`GKu!3 z@PNzop$BsQx~&a=A6vg!PX`^pU@s6h;=-Cn)An9Fv)aXwAfbwb_u-(o;Nttv@O43N z5w(4DBSmw&xlLs3JH=4Q_EEMpCL4TbR?h7(QZ2S(&Yn|lS4DBVx4Bb>M8z);g!km< zHd_nvOYQaKr`*h>JmcyF^i<-XdA|i-ymbyneykqYPiJQzq%>tA;CAgzPZiY+P&2Iz%000003V-=_ literal 0 HcmV?d00001 From 8d2482125859e4cff8aca5df9ba0b64225863429 Mon Sep 17 00:00:00 2001 From: Chris Alexiuk Date: Tue, 28 Oct 2025 18:31:49 -0400 Subject: [PATCH 02/12] Nano V2 VLM Blog Signed-off-by: Chris Alexiuk --- ...0-28-run-multimodal-reasoning-agents-nvidia-nemotron.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/_posts/2025-10-28-run-multimodal-reasoning-agents-nvidia-nemotron.md b/_posts/2025-10-28-run-multimodal-reasoning-agents-nvidia-nemotron.md index 3718b34..3ae049b 100644 --- a/_posts/2025-10-28-run-multimodal-reasoning-agents-nvidia-nemotron.md +++ b/_posts/2025-10-28-run-multimodal-reasoning-agents-nvidia-nemotron.md @@ -128,8 +128,11 @@ To summarize, Nemotron Nano 2 VL helps build scalable, cost-efficient agentic AI Ready to build enterprise-ready agents? 
-* Download Nemotron Nano 2 VL model weights from Hugging Face \- [BF16](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-BF16), [FP8](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-FP8), [FP4-QAD](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-FP4-QAD) -* Run with vLLM for inference using this Jupyter Notebook +- Download model weights from Hugging Face - [BF16](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2-VL-BF16), [FP8](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2-VL-FP8), [FP4-QAD](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2-VL-NVFP4-QAD) +- Run with vLLM for inference with [this notebook](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-Nano2-VL/vllm_cookbook.ipynb) +- [Technical report](https://research.nvidia.com/labs/adlr/files/NVIDIA-Nemotron-Nano-V2-VL-report.pdf) to build custom, optimized models with Nemotron techniques. +- [Training dataset](https://huggingface.co/datasets/nvidia/Nemotron-VLM-Dataset-v2) is hosted on Hugging Face. Learn more [here](https://huggingface.co/blog/nvidia/nemotron-vlm-dataset-v2). 
+ [*Share your ideas*](http://nemotron.ideas.nvidia.com/?ncid=so-othe-692335) *and vote on what matters to help shape the future of Nemotron.* From e0ef49106f2fc677aa547817b770aa97818f66fe Mon Sep 17 00:00:00 2001 From: Roger Wang Date: Wed, 29 Oct 2025 19:05:52 -0700 Subject: [PATCH 03/12] update Signed-off-by: Roger Wang --- ...modal-reasoning-agents-nvidia-nemotron.md} | 104 +++++++----------- 1 file changed, 38 insertions(+), 66 deletions(-) rename _posts/{2025-10-28-run-multimodal-reasoning-agents-nvidia-nemotron.md => 2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md} (74%) diff --git a/_posts/2025-10-28-run-multimodal-reasoning-agents-nvidia-nemotron.md b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md similarity index 74% rename from _posts/2025-10-28-run-multimodal-reasoning-agents-nvidia-nemotron.md rename to _posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md index 3ae049b..db9fe60 100644 --- a/_posts/2025-10-28-run-multimodal-reasoning-agents-nvidia-nemotron.md +++ b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md @@ -4,7 +4,6 @@ title: "Run Multimodal Reasoning Agents with NVIDIA Nemotron on vLLM" author: "NVIDIA Nemotron Team" --- - # Run Multimodal Reasoning Agents with NVIDIA Nemotron on vLLM We are excited to release [NVIDIA Nemotron Nano 2 VL](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-BF16), supported by vLLM. This open vision language model ([VLM](https://www.nvidia.com/en-us/glossary/vision-language-models/)) is built for video understanding and document intelligence. @@ -13,6 +12,33 @@ Nemotron Nano 2 VL uses a hybrid Transformer–Mamba design and delivers higher In this blog post, we’ll explore how Nemotron Nano 2 VL advances video understanding and document intelligence, showcase real-world use cases and benchmark results, and guide you through getting started with vLLM for inference to unlock high-efficiency multimodal AI at scale. 
+## Leading multimodal model for efficient video understanding and document intelligence + +NVIDIA Nemotron Nano 2 VL brings both video understanding and document intelligence capabilities together in a single, highly efficient model. Built on the hybrid Transformer–Mamba architecture, it combines the reasoning strength of Transformer models with the compute efficiency of Mamba, achieving high throughput and low latency, allowing it to process multi-image inputs faster. + +Trained on NVIDIA-curated, high-quality multimodal data, [Nemotron Nano 2 VL](https://huggingface.co/blog/nvidia/nemotron-vlm-dataset-v2) leads in video understanding and document intelligence benchmarks such as MMMU, MathVista, AI2D, OCRBench, OCRBench-v2, OCR-Reasoning, ChartQA, DocVQA, and Video-MME, delivering top-tier accuracy in multimodal [reasoning](https://www.nvidia.com/en-us/glossary/ai-reasoning/), character recognition, chart reasoning, and visual question answering. This makes it ideal for building multimodal applications that automate data extraction and comprehension across videos, documents, forms, and charts with enterprise-grade precision. + + +

+ + + +
+![Figure 1](/assets/figures/2025-multimodal-nvidia-nemotron/figure1.png)
+Figure 1: Nemotron Nano 2 VL provides leading accuracy on various video understanding and document intelligence benchmarks
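The efficiency argument for the hybrid Transformer–Mamba design above can be made concrete with a rough back-of-envelope sketch: attention layers carry a KV cache that grows linearly with context length, while Mamba layers keep a fixed-size recurrent state. All layer counts and dimensions below are illustrative assumptions, not the model's actual configuration.

```python
def kv_cache_bytes(attn_layers, kv_heads, head_dim, context_len, bytes_per_val=2):
    # Keys + values, one entry per token, per attention layer (BF16 = 2 bytes).
    return 2 * attn_layers * kv_heads * head_dim * context_len * bytes_per_val

def mamba_state_bytes(mamba_layers, d_model, state_dim, bytes_per_val=2):
    # Fixed-size recurrent state: independent of how long the context is.
    return mamba_layers * d_model * state_dim * bytes_per_val

# Illustrative hybrid: a few attention layers, the rest Mamba-style layers.
for ctx in (8_192, 131_072):  # up to the 128k context the model supports
    attn = kv_cache_bytes(attn_layers=6, kv_heads=8, head_dim=128, context_len=ctx)
    mamba = mamba_state_bytes(mamba_layers=50, d_model=4480, state_dim=128)
    print(f"ctx={ctx}: attention cache {attn / 2**20:.0f} MiB, "
          f"mamba state {mamba / 2**20:.0f} MiB")
```

The attention cache scales 16x when the context grows from 8k to 128k tokens, while the recurrent state stays constant — which is why pushing most layers to Mamba raises throughput at long context.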

+ +Improving Efficiency with EVS +With EVS, the model achieves higher throughput and faster response times without sacrificing accuracy. EVS technique prunes redundant frames, preserving semantic richness while enabling longer video processing efficiently. As a result, enterprises can analyze hours of footage, from meetings and training sessions to customer calls, in minutes, gaining actionable insights faster and at lower cost. + + +

+ + + +
+![Figure 2](/assets/figures/2025-multimodal-nvidia-nemotron/figure2.png)
+Figure 2: Accuracy trend of the Nemotron Nano 2 VL model across various token-drop thresholds using efficient video sampling on Video-MME and LongVideo benchmarks
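As a toy illustration of the idea behind EVS (not the published algorithm): drop video frames whose features barely change from the last kept frame, so near-static spans contribute far fewer tokens. The feature vectors, similarity metric, and threshold here are all made up for the sketch.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def prune_redundant_frames(frames, threshold=0.98):
    """Keep a frame only if it differs enough from the last kept frame.

    `frames` is a list of per-frame feature vectors; a higher threshold
    drops more near-duplicate frames, trading tokens for accuracy.
    """
    if not frames:
        return []
    kept = [0]
    for i in range(1, len(frames)):
        if cosine(frames[i], frames[kept[-1]]) < threshold:
            kept.append(i)
    return kept

# A mostly static clip: frames 0-3 nearly identical, frame 4 is a scene cut.
frames = [
    [1.0, 0.0], [0.999, 0.01], [0.998, 0.02], [0.997, 0.03],
    [0.0, 1.0],
]
print(prune_redundant_frames(frames))
```

Only the first frame and the scene cut survive, so a downstream vision encoder would tokenize two frames instead of five — the same trade-off the token-drop thresholds in Figure 2 sweep over.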

+ ## About Nemotron Nano 2 VL * Architecture: @@ -33,49 +59,35 @@ In this blog post, we’ll explore how Nemotron Nano 2 VL advances video underst ## Run optimized inference with vLLM -Nemotron Nano 2 VL, achieves accelerated [inference](https://www.nvidia.com/en-us/glossary/ai-inference/) and serves more requests on the same GPU with BF16, FP8 and FP4 precision support. Follow these instructions to get started: +This guide demonstrates how to run Nemotron Nano 2 VL on vLLM, achieving accelerated [inference](https://www.nvidia.com/en-us/glossary/ai-inference/) and serving concurrent requests efficiently with BF16, FP8 and FP4 precision support. -`Create a fresh conda environment` +### Install vLLM -````shell +The support for Nemotron Nano 2 VL is available in the nightly version of vLLM. Run the command below to install vLLM: ```bash -conda create -n nemotron-vllm-env python=3.10 -y -conda activate nemotron-vllm-env +uv venv +uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly --prerelease=allow ``` -```` - -`Make sure to use the main branch of the vLLM. 
Run the command below to install vLLM` - -````shell -```bash -!VLLM_USE_PRECOMPILED=1 pip install git+https://github.com/vllm-project/vllm.git@main -```` -`We can then serve this model via an OpenAI-compatible API` -````shell +### Deploy and query the inference server +Deploy an OpenAI-compatible inference server with vLLM by running the following commands for BF16, FP8 and FP4 precision: ```bash vllm serve nvidia/Nemotron-Nano-12B-v2-VL-BF16 --trust-remote-code --dtype bfloat16 --video-pruning-rate 0 -``` # FP8 -```bash vllm serve nvidia/Nemotron-Nano-VL-12B-V2-FP8 --trust-remote-code --quantization modelopt --video-pruning-rate 0 -``` # FP4 -```bash vllm serve nvidia/Nemotron-Nano-VL-12B-V2-FP4-QAD --trust-remote-code --quantization modelopt_fp4 --video-pruning-rate 0 ``` -```` - -`Once the server is up and running, you can prompt the model using the below code snippets` +Once the server is up and running, you can prompt the model using the below code snippet: ```python from openai import OpenAI -client = OpenAI(base_url="http://127.0.0.1:8033/v1", api_key="null") +client = OpenAI(base_url="http://localhost:8000/v1", api_key="null") # Simple chat completion resp = client.chat.completions.create( model="nvidia/Nemotron-Nano-12B-v2-VL-BF16", @@ -92,48 +104,8 @@ resp = client.chat.completions.create( ) print(resp.choices[0].message.content) ``` - -For an easier setup with vLLM, refer to our getting started cookbook, available here. - -## Leading multimodal model for efficient video understanding and document intelligence - -NVIDIA Nemotron Nano 2 VL brings both video understanding and document intelligence capabilities together in a single, highly efficient model. Built on the hybrid Transformer–Mamba architecture, it combines the reasoning strength of Transformer models with the compute efficiency of Mamba, achieving high throughput and low latency, allowing it to process multi-image inputs faster. 
- -Trained on NVIDIA-curated, high-quality multimodal data, [Nemotron Nano 2 VL](https://huggingface.co/blog/nvidia/nemotron-vlm-dataset-v2) leads in video understanding and document intelligence benchmarks such as MMMU, MathVista, AI2D, OCRBench, OCRBench-v2, OCR-Reasoning, ChartQA, DocVQA, and Video-MME, delivering top-tier accuracy in multimodal [reasoning](https://www.nvidia.com/en-us/glossary/ai-reasoning/), character recognition, chart reasoning, and visual question answering. This makes it ideal for building multimodal applications that automate data extraction and comprehension across videos, documents, forms, and charts with enterprise-grade precision. - - -

- - - -
-Figure 1: Nemotron Nano 2 VL provides leading accuracy on various video understanding and document intelligence benchmarks -

- -Improving Efficiency with EVS -With EVS, the model achieves higher throughput and faster response times without sacrificing accuracy. EVS technique prunes redundant frames, preserving semantic richness while enabling longer video processing efficiently. As a result, enterprises can analyze hours of footage, from meetings and training sessions to customer calls, in minutes, gaining actionable insights faster and at lower cost. - - -

- - - -
-Figure 2: Accuracy trend of the Nemotron Nano 2 VL model across various token-drop thresholds using efficient video sampling on Video-MME and LongVideo benchmarks -

- -## Get Started - -To summarize, Nemotron Nano 2 VL helps build scalable, cost-efficient agentic AI systems that truly understand documents and video. With open weights, training datasets, and recipes, developers gain full transparency and flexibility to fine-tune and deploy the model across any environment, from on-premise to cloud, for maximum security and privacy. - -Ready to build enterprise-ready agents? - -- Download model weights from Hugging Face - [BF16](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2-VL-BF16), [FP8](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2-VL-FP8), [FP4-QAD](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2-VL-NVFP4-QAD) -- Run with vLLM for inference with [this notebook](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-Nano2-VL/vllm_cookbook.ipynb) -- [Technical report](https://research.nvidia.com/labs/adlr/files/NVIDIA-Nemotron-Nano-V2-VL-report.pdf) to build custom, optimized models with Nemotron techniques. -- [Training dataset](https://huggingface.co/datasets/nvidia/Nemotron-VLM-Dataset-v2) is hosted on Hugging Face. Learn more [here](https://huggingface.co/blog/nvidia/nemotron-vlm-dataset-v2). 
- +For more examples, check out our vLLM cookbook: [Nemotron-Nano2-VL/vllm_cookbook.ipynb](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-Nano2-VL/vllm_cookbook.ipynb) [*Share your ideas*](http://nemotron.ideas.nvidia.com/?ncid=so-othe-692335) *and vote on what matters to help shape the future of Nemotron.* -*Stay up to date on [NVIDIA Nemotron](https://developer.nvidia.com/nemotron) by subscribing to NVIDIA news and following NVIDIA AI on [LinkedIn](https://www.linkedin.com/showcase/nvidia-ai/posts/?feedView=all), [X](https://x.com/NVIDIAAIDev), [YouTube](https://www.youtube.com/@NVIDIADeveloper)*, *and the [Nemotron channel](https://discord.com/channels/1019361803752456192/1407781691698708682) on [Discord](https://discord.com/invite/nvidiadeveloper).* \ No newline at end of file +*Stay up to date on [NVIDIA Nemotron](https://developer.nvidia.com/nemotron) by subscribing to NVIDIA news and following NVIDIA AI on [LinkedIn](https://www.linkedin.com/showcase/nvidia-ai/posts/?feedView=all), [X](https://x.com/NVIDIAAIDev), [YouTube](https://www.youtube.com/@NVIDIADeveloper)*, *and the [Nemotron channel](https://discord.com/channels/1019361803752456192/1407781691698708682) on [Discord](https://discord.com/invite/nvidiadeveloper).* From e7626f9770a300ab0485555e15be87fd4ee24e49 Mon Sep 17 00:00:00 2001 From: Roger Wang Date: Wed, 29 Oct 2025 19:16:34 -0700 Subject: [PATCH 04/12] update Signed-off-by: Roger Wang --- ...25-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md index db9fe60..4c09ddf 100644 --- a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md +++ b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md @@ -4,8 +4,6 @@ title: "Run Multimodal Reasoning Agents with NVIDIA Nemotron on vLLM" 
author: "NVIDIA Nemotron Team" --- -# Run Multimodal Reasoning Agents with NVIDIA Nemotron on vLLM - We are excited to release [NVIDIA Nemotron Nano 2 VL](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-BF16), supported by vLLM. This open vision language model ([VLM](https://www.nvidia.com/en-us/glossary/vision-language-models/)) is built for video understanding and document intelligence. Nemotron Nano 2 VL uses a hybrid Transformer–Mamba design and delivers higher throughput while maintaining state-of-the-art multimodal reasoning accuracy. The model also features Efficient Video Sampling (EVS), a new technique that reduces redundant [tokens](https://blogs.nvidia.com/blog/ai-tokens-explained/) generation for video workloads , allowing processing of more videos with higher efficiency. @@ -106,6 +104,7 @@ print(resp.choices[0].message.content) ``` For more examples, check out our vLLM cookbook: [Nemotron-Nano2-VL/vllm_cookbook.ipynb](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-Nano2-VL/vllm_cookbook.ipynb) + [*Share your ideas*](http://nemotron.ideas.nvidia.com/?ncid=so-othe-692335) *and vote on what matters to help shape the future of Nemotron.* *Stay up to date on [NVIDIA Nemotron](https://developer.nvidia.com/nemotron) by subscribing to NVIDIA news and following NVIDIA AI on [LinkedIn](https://www.linkedin.com/showcase/nvidia-ai/posts/?feedView=all), [X](https://x.com/NVIDIAAIDev), [YouTube](https://www.youtube.com/@NVIDIADeveloper)*, *and the [Nemotron channel](https://discord.com/channels/1019361803752456192/1407781691698708682) on [Discord](https://discord.com/invite/nvidiadeveloper).* From 194b929f30ef8d0ad7d2da4ce968dd260dd2f2b8 Mon Sep 17 00:00:00 2001 From: Roger Wang Date: Wed, 29 Oct 2025 19:41:38 -0700 Subject: [PATCH 05/12] add activation Signed-off-by: Roger Wang --- ...2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md | 1 + 1 file changed, 1 insertion(+) diff --git 
a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md index 4c09ddf..6b98f75 100644 --- a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md +++ b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md @@ -64,6 +64,7 @@ This guide demonstrates how to run Nemotron Nano 2 VL on vLLM, achieving acceler The support for Nemotron Nano 2 VL is available in the nightly version of vLLM. Run the command below to install vLLM: ```bash uv venv +source .venv/bin/activate uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly --prerelease=allow ``` From 8230019eb88de37752421f2ba340e9685572c689 Mon Sep 17 00:00:00 2001 From: Roger Wang Date: Wed, 29 Oct 2025 19:44:15 -0700 Subject: [PATCH 06/12] clean up Signed-off-by: Roger Wang --- ...5-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md index 6b98f75..8412fd4 100644 --- a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md +++ b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md @@ -6,7 +6,7 @@ author: "NVIDIA Nemotron Team" We are excited to release [NVIDIA Nemotron Nano 2 VL](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-BF16), supported by vLLM. This open vision language model ([VLM](https://www.nvidia.com/en-us/glossary/vision-language-models/)) is built for video understanding and document intelligence. -Nemotron Nano 2 VL uses a hybrid Transformer–Mamba design and delivers higher throughput while maintaining state-of-the-art multimodal reasoning accuracy. 
The model also features Efficient Video Sampling (EVS), a new technique that reduces redundant [tokens](https://blogs.nvidia.com/blog/ai-tokens-explained/) generation for video workloads , allowing processing of more videos with higher efficiency. +Nemotron Nano 2 VL uses a hybrid Transformer–Mamba design and delivers higher throughput while maintaining state-of-the-art multimodal reasoning accuracy. The model also features **Efficient Video Sampling (EVS)**, a new technique that reduces redundant [tokens](https://blogs.nvidia.com/blog/ai-tokens-explained/) generation for video workloads, allowing processing of more videos with higher efficiency. In this blog post, we’ll explore how Nemotron Nano 2 VL advances video understanding and document intelligence, showcase real-world use cases and benchmark results, and guide you through getting started with vLLM for inference to unlock high-efficiency multimodal AI at scale. @@ -25,7 +25,7 @@ Trained on NVIDIA-curated, high-quality multimodal data, [Nemotron Nano 2 VL](ht Figure 1: Nemotron Nano 2 VL provides leading accuracy on various video understanding and document intelligence benchmarks

-Improving Efficiency with EVS +### Improving Efficiency with EVS With EVS, the model achieves higher throughput and faster response times without sacrificing accuracy. EVS technique prunes redundant frames, preserving semantic richness while enabling longer video processing efficiently. As a result, enterprises can analyze hours of footage, from meetings and training sessions to customer calls, in minutes, gaining actionable insights faster and at lower cost. From a1edfd473b580ff09a84b9630cd2cd191e7218c9 Mon Sep 17 00:00:00 2001 From: Roger Wang Date: Thu, 30 Oct 2025 09:09:53 -0700 Subject: [PATCH 07/12] Update _posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com> Signed-off-by: Roger Wang --- ...25-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md index 8412fd4..f9d3eb9 100644 --- a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md +++ b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md @@ -103,7 +103,8 @@ resp = client.chat.completions.create( ) print(resp.choices[0].message.content) ``` -For more examples, check out our vLLM cookbook: [Nemotron-Nano2-VL/vllm_cookbook.ipynb](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-Nano2-VL/vllm_cookbook.ipynb) + +For more examples, check out our vLLM cookbook: [Nemotron-Nano2-VL/vllm_cookbook.ipynb](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-Nano2-VL/vllm_cookbook.ipynb). 
[*Share your ideas*](http://nemotron.ideas.nvidia.com/?ncid=so-othe-692335) *and vote on what matters to help shape the future of Nemotron.*

From 05d18cc5dab52003967a69f167c494d97ce3f488 Mon Sep 17 00:00:00 2001
From: Roger Wang
Date: Thu, 30 Oct 2025 09:10:04 -0700
Subject: [PATCH 08/12] Update
 _posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md

Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Roger Wang
---
 ...025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md
index f9d3eb9..32734e0 100644
--- a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md
+++ b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md
@@ -43,7 +43,7 @@ Figure 2: Accuracy trend of the Nemotron Nano 2 VL model across various token-dr
   * [CRADIOH-V2](https://huggingface.co/nvidia/C-RADIOv2-H) based Vision Encoder
   * Efficient video sampling as token compression module
   * Hybrid Transformer-Mamba Architecture- [Nemotron Nano 2 LLM](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2) backbone with reasoning.
-* Accuracy
+* Accuracy:
   * Leading accuracy on OCRBench v2
   * 74 on average score (compared to 64.2 with current top VL model) on the following benchmarks: MMMU, MathVista, AI2D, OCRBench, OCRBench-v2, OCR-Reasoning, ChartQA, DocVQA, and Video-MME
 * Model size: 12B

From 2f090f179ae3fab4077713074483bd3474ef1374 Mon Sep 17 00:00:00 2001
From: Roger Wang
Date: Thu, 30 Oct 2025 09:10:14 -0700
Subject: [PATCH 09/12] Update
 _posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md

Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Roger Wang
---
 ...025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md
index 32734e0..efaeb5a 100644
--- a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md
+++ b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md
@@ -42,7 +42,7 @@ Figure 2: Accuracy trend of the Nemotron Nano 2 VL model across various token-dr
 * Architecture:
   * [CRADIOH-V2](https://huggingface.co/nvidia/C-RADIOv2-H) based Vision Encoder
   * Efficient video sampling as token compression module
-  * Hybrid Transformer-Mamba Architecture- [Nemotron Nano 2 LLM](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2) backbone with reasoning.
+  * Hybrid Transformer-Mamba Architecture - [Nemotron Nano 2 LLM](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2) backbone with reasoning.
 * Accuracy:
   * Leading accuracy on OCRBench v2
   * 74 on average score (compared to 64.2 with current top VL model) on the following benchmarks: MMMU, MathVista, AI2D, OCRBench, OCRBench-v2, OCR-Reasoning, ChartQA, DocVQA, and Video-MME

From 1662ee5d8786964ae51229b1ef116654ea4b0ff8 Mon Sep 17 00:00:00 2001
From: Roger Wang
Date: Fri, 31 Oct 2025 00:13:46 -0700
Subject: [PATCH 10/12] add

Signed-off-by: Roger Wang
---
 ...5-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md
index efaeb5a..6fb862a 100644
--- a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md
+++ b/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md
@@ -6,7 +6,7 @@ author: "NVIDIA Nemotron Team"

 We are excited to release [NVIDIA Nemotron Nano 2 VL](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-BF16), supported by vLLM. This open vision language model ([VLM](https://www.nvidia.com/en-us/glossary/vision-language-models/)) is built for video understanding and document intelligence.

-Nemotron Nano 2 VL uses a hybrid Transformer–Mamba design and delivers higher throughput while maintaining state-of-the-art multimodal reasoning accuracy. The model also features **Efficient Video Sampling (EVS)**, a new technique that reduces redundant [tokens](https://blogs.nvidia.com/blog/ai-tokens-explained/) generation for video workloads, allowing processing of more videos with higher efficiency.
+Nemotron Nano 2 VL uses a hybrid Transformer–Mamba design and delivers higher throughput while maintaining state-of-the-art multimodal reasoning accuracy. The model also features [**Efficient Video Sampling (EVS)**](https://arxiv.org/abs/2510.14624), a new technique that reduces redundant [tokens](https://blogs.nvidia.com/blog/ai-tokens-explained/) generation for video workloads, allowing processing of more videos with higher efficiency.

 In this blog post, we’ll explore how Nemotron Nano 2 VL advances video understanding and document intelligence, showcase real-world use cases and benchmark results, and guide you through getting started with vLLM for inference to unlock high-efficiency multimodal AI at scale.
@@ -53,7 +53,7 @@ Figure 2: Accuracy trend of the Nemotron Nano 2 VL model across various token-dr
 * Get started:
   * Download model weights from Hugging Face \- [BF16](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-BF16), [FP8](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-FP8), [FP4-QAD](https://huggingface.co/nvidia/Nemotron-Nano-12B-v2-VL-FP4-QAD)
   * Run with vLLM for inference
-  * [Technical report](https://www.overleaf.com/project/68d1d48c83696e11ba669f70) to build custom, optimized models with Nemotron techniques..
+  * [Technical report](https://research.nvidia.com/labs/adlr/files/NVIDIA-Nemotron-Nano-V2-VL-report.pdf) to build custom, optimized models with Nemotron techniques..

 ## Run optimized inference with vLLM

From 56723df18e355a75e42ca3b2533e36b97addf903 Mon Sep 17 00:00:00 2001
From: Roger Wang
Date: Fri, 31 Oct 2025 00:17:47 -0700
Subject: [PATCH 11/12] update date and add recipe link

Signed-off-by: Roger Wang
---
 ...25-10-31-run-multimodal-reasoning-agents-nvidia-nemotron.md} | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
 rename _posts/{2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md => 2025-10-31-run-multimodal-reasoning-agents-nvidia-nemotron.md} (97%)

diff --git a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md b/_posts/2025-10-31-run-multimodal-reasoning-agents-nvidia-nemotron.md
index 6fb862a..edfbfd2 100644
--- a/_posts/2025-10-29-run-multimodal-reasoning-agents-nvidia-nemotron.md
+++ b/_posts/2025-10-31-run-multimodal-reasoning-agents-nvidia-nemotron.md
@@ -104,7 +104,7 @@ resp = client.chat.completions.create(
 print(resp.choices[0].message.content)
 ```

-For more examples, check out our vLLM cookbook: [Nemotron-Nano2-VL/vllm_cookbook.ipynb](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-Nano2-VL/vllm_cookbook.ipynb).
+For more examples, check out our vLLM cookbook: [Nemotron-Nano2-VL/vllm_cookbook.ipynb](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-Nano2-VL/vllm_cookbook.ipynb) and vLLM recipe for [Nemotron Nano 2 VL](https://docs.vllm.ai/projects/recipes/en/latest/NVIDIA/Nemotron-Nano-12B-v2-VL.html).

 [*Share your ideas*](http://nemotron.ideas.nvidia.com/?ncid=so-othe-692335) *and vote on what matters to help shape the future of Nemotron.*

From c9ac55f8b6017e91ecc3c9b55e29662a06e26236 Mon Sep 17 00:00:00 2001
From: Roger Wang
Date: Fri, 31 Oct 2025 00:20:38 -0700
Subject: [PATCH 12/12] update link

Signed-off-by: Roger Wang
---
 ...025-10-31-run-multimodal-reasoning-agents-nvidia-nemotron.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2025-10-31-run-multimodal-reasoning-agents-nvidia-nemotron.md b/_posts/2025-10-31-run-multimodal-reasoning-agents-nvidia-nemotron.md
index edfbfd2..3d2f63c 100644
--- a/_posts/2025-10-31-run-multimodal-reasoning-agents-nvidia-nemotron.md
+++ b/_posts/2025-10-31-run-multimodal-reasoning-agents-nvidia-nemotron.md
@@ -104,7 +104,7 @@ resp = client.chat.completions.create(
 print(resp.choices[0].message.content)
 ```

-For more examples, check out our vLLM cookbook: [Nemotron-Nano2-VL/vllm_cookbook.ipynb](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-Nano2-VL/vllm_cookbook.ipynb) and vLLM recipe for [Nemotron Nano 2 VL](https://docs.vllm.ai/projects/recipes/en/latest/NVIDIA/Nemotron-Nano-12B-v2-VL.html).
+For more examples, check out our [vLLM cookbook](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-Nano2-VL/vllm_cookbook.ipynb) and [vLLM recipe for Nemotron Nano 2 VL](https://docs.vllm.ai/projects/recipes/en/latest/NVIDIA/Nemotron-Nano-12B-v2-VL.html).

 [*Share your ideas*](http://nemotron.ideas.nvidia.com/?ncid=so-othe-692335) *and vote on what matters to help shape the future of Nemotron.*
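Reviewer note: the hunks above show the tail of the blog's inference example (`resp = client.chat.completions.create(...)` / `print(resp.choices[0].message.content)`) but not how the multimodal request is assembled. For context, a minimal sketch of the OpenAI-style message payload that vLLM's OpenAI-compatible server accepts; the prompt text and image URL here are illustrative placeholders, not taken from the patches:

```python
# Sketch: build an OpenAI-style multimodal chat request for a vLLM server.
# The prompt and image URL are placeholders; the message shape follows the
# OpenAI chat-completions content-parts convention that vLLM accepts.
def build_vision_message(prompt: str, image_url: str) -> dict:
    """Return one user message that mixes text with an image reference."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

payload = {
    "model": "nvidia/Nemotron-Nano-12B-v2-VL-BF16",
    "messages": [
        build_vision_message("Describe this chart.", "https://example.com/chart.png")
    ],
}
print(payload["messages"][0]["content"][0]["text"])
```

A `messages` list shaped like this is what the cookbook's `client.chat.completions.create(...)` call would receive; video inputs follow the same content-parts pattern with additional frames or a video URL, per the vLLM recipe linked in the final hunk.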