From e994234d999f69c29483c85bec4747b3a7af399b Mon Sep 17 00:00:00 2001 From: Matt Cossins Date: Sun, 8 Mar 2026 15:08:32 +0000 Subject: [PATCH 1/6] Project updates --- .../Always-On-AI-with-Ethos-U85-NPU.md | 8 +++- Projects/Projects/Edge-AI-On-Mobile.md | 1 + .../Projects/Ethos-U85-NPU-Applications.md | 39 ++++++++----------- ...v-Using-Neural-Graphics-&-Unreal-Engine.md | 13 ++++--- 4 files changed, 32 insertions(+), 29 deletions(-) diff --git a/Projects/Projects/Always-On-AI-with-Ethos-U85-NPU.md b/Projects/Projects/Always-On-AI-with-Ethos-U85-NPU.md index a9a9e521..9c04b0ef 100644 --- a/Projects/Projects/Always-On-AI-with-Ethos-U85-NPU.md +++ b/Projects/Projects/Always-On-AI-with-Ethos-U85-NPU.md @@ -67,10 +67,16 @@ You should either be familiar with, or willing to learn about, the following: ## Resources from Arm and our partners - Arm Developer: [Edge AI](https://developer.arm.com/edge-ai) -- Learning Path: [Navigating Machine Learning with Ethos-U processors](https://learn.arm.com/learning-paths/microcontrollers/nav-mlek/) +- Arm Alif Code-along: [Advanced AI on Arm Embedded Systems](https://developer.arm.com/code-along/advanced-ai-on-arm-embedded-systems) +- Code-along repo: [Ethos-U Workshop](https://github.com/ArmDeveloperEcosystem/workshop-ethos-u) +- Learning Path: [Navigating Machine Learning with Ethos-U processors](https://learn.arm.com/learning-paths/microcontrollers/nav-mlek/) +- Learning Path: [Visualize Ethos-U NPU performance with ExecuTorch on Arm FVPs](https://learn.arm.com/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance) - Repository: [AI on Arm course](https://github.com/arm-university/AI-on-Arm) - Example Board: [Alif Ensemble DevKit E8](https://www.keil.arm.com/boards/alif-semiconductor-devkit-e8-gen-1-2558a7b/features/) +- Documentation: [TOSA Specification](https://www.mlplatform.org/tosa/), [TOSA Model Explorer](https://github.com/arm/tosa-adapter-model-explorer), and [TOSA Reference 
Model](https://gitlab.arm.com/tosa/tosa-reference-model) +- [Model Explorer](https://ai.google.dev/edge/model-explorer) - PyTorch Blog: [ExecuTorch support for Ethos-U85](https://pytorch.org/blog/pt-executorch-ethos-u85/) +- PyTorch Tutorial: [Arm Ethos-U NPU Backend Tutorial](https://docs.pytorch.org/executorch/1.0/tutorial-arm-ethos-u.html) ## Support Level diff --git a/Projects/Projects/Edge-AI-On-Mobile.md b/Projects/Projects/Edge-AI-On-Mobile.md index b1dc74db..e42290ee 100644 --- a/Projects/Projects/Edge-AI-On-Mobile.md +++ b/Projects/Projects/Edge-AI-On-Mobile.md @@ -61,6 +61,7 @@ Utilise the resources and learning paths below and create an exciting and challe ## Resources from Arm and our partners - Arm Developer: [Launchpad - Mobile AI](https://developer.arm.com/mobile-graphics-and-gaming/ai-mobile) +- Learning Path: [Profile ExecuTorch models with SME2 on Arm](https://learn.arm.com/learning-paths/cross-platform/sme-executorch-profiling/) - Learning Path: [Mobile AI/ML Performance Profiling](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/) - Learning Path: [Build an Android chat app with Llama, KleidiAI, ExecuTorch, and XNNPACK](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/) - Learning Path: [Vision LLM Inference on Android with KleidiAI](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vision-llm-inference-on-android-with-kleidiai-and-mnn/) diff --git a/Projects/Projects/Ethos-U85-NPU-Applications.md b/Projects/Projects/Ethos-U85-NPU-Applications.md index 1f943477..5c96a3b9 100644 --- a/Projects/Projects/Ethos-U85-NPU-Applications.md +++ b/Projects/Projects/Ethos-U85-NPU-Applications.md @@ -42,74 +42,67 @@ This project challenges you to explore the boundaries of what’s possible on Et **Project Summary** -Using hardware such as the Alif Ensemble E4/E6/E8 DevKits (all include Ethos-U85) or a comparable platform or Arm Fixed 
Virtual Platform Corstone-320, your task is to design and benchmark an advanced edge inference application that exploits the Ethos-U85’s compute and transformer capabilities. +Using hardware such as the Alif Ensemble E4/E6/E8 DevKits (all include Ethos-U85), your task is to design and benchmark an advanced edge inference application that exploits the Ethos-U85’s compute and transformer capabilities. + +You can utilise the Arm Fixed Virtual Platform Corstone-320 to prototype and test your application functionally, without access to Alif hardware. You can use this to prove functional correctness, and can then later test performance on actual silicon. We are interested in seeing projects both in simulation and on final hardware. Your project should include: -1. Model Deployment and Optimization +**Model Deployment and Optimization** Select a computationally intensive model — ideally transformer-based or multi-branch convolutional — and deploy it on the Ethos-U85 using: - - The TOSA Model Explorer extension to inspect and adapt unsupported or experimental models for TOSA compliance. + - Model Explorer to inspect models and identify problem layers that prevent optimal delegation to the Ethos-U backend. - The Vela compiler for optimization. These tools can be used to: - Convert and visualize model graphs in TOSA format. - Identify unsupported operators. - - Modify or substitute layers for compatibility using the Flatbuffers schema before re-exporting. - - Run Vela for optimized compilation targeting Ethos-U85. -2. Application Demonstration +**Application Demonstration** Implement a working example that highlights the Ethos-U85’s strengths in real-world inference. Possible categories include: - Transformers on Edge: lightweight BERT, ViT, or audio transformers (e.g. speech or sound event classification). - High-resolution Vision: semantic segmentation, object detection on large input sizes, or multi-head perception networks. 
- Multi-modal Fusion: combining audio, image, or sensor streams for contextual understanding. -3. Analysis and Benchmarking +**Analysis and Benchmarking** Report quantitative results on: - Inference latency, throughput (FPS or tokens/s), and memory footprint. - Power efficiency under load (optional). - Comparative performance versus Ethos-U55/U65 (use available benchmarks for reference or utilise the other Ethos-U NPUs provided in the Alif DevKits). - - The effect of TOSA optimization — demonstrate measurable improvements from graph conversion and operator fusion. - ---- ## What kind of projects should you target? To clearly demonstrate the leap from Ethos-U55/U65 to U85, choose projects that meet at least one of the following criteria: - Transformer-heavy architectures: e.g. attention blocks, transformer encoders, ViTs, or hybrid CNN+transformer models. - - *Example:* an audio event detection transformer that must process longer sequences or higher-resolution spectrograms. -- High-resolution or multi-branch networks: models with high input dimensionality or multiple processing paths that saturate NPU throughput. - - *Example:* 512×512 semantic segmentation or multi-object detection. +- High-resolution or multi-branch networks: models with high input dimensionality or multiple processing paths that saturate NPU throughput. - Dense post-processing or large fully connected layers: cases where U55/U65 memory limits or MAC bandwidth previously restricted performance. - - *Example:* large MLP heads or transformer token mixers. - Multi-modal pipelines: combining multiple sensor inputs (e.g. image + IMU + audio) where the NPU must maintain concurrency or shared intermediate representations. The Ethos-U85 is ideal for projects where model performance is constrained by attention layers, large activations, or operator types that previously required fallback to the CPU. Use the Ethos-U85 to eliminate those fallbacks and achieve full-NPU execution of advanced topologies. 
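For the "Analysis and Benchmarking" step above, a simple host-side timing harness is often enough to get started. The sketch below is illustrative only: `benchmark` and `run_inference` are hypothetical names, and `run_inference` stands in for however you invoke your deployed model (for example, a call into the ExecuTorch runtime or a wrapper around an FVP run).

```python
import statistics
import time

def benchmark(run_inference, warmup=10, iters=100):
    """Time a zero-argument inference callable; report latency and throughput."""
    for _ in range(warmup):  # discard cold-start runs (caches, allocator, etc.)
        run_inference()
    samples_ms = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run_inference()
        samples_ms.append((time.perf_counter() - t0) * 1e3)
    mean_ms = statistics.mean(samples_ms)
    return {
        "mean_ms": mean_ms,
        "p95_ms": statistics.quantiles(samples_ms, n=20)[18],  # 95th percentile
        "fps": 1000.0 / mean_ms,
    }
```

On target you would swap the Python timer for a hardware cycle counter or the NPU performance counters, but the structure — warmup runs, repeated measurement, percentile reporting — carries over.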
---- - ## What will you use? You should be familiar with, or willing to learn about: - Programming: Python, C/C++ -- ExecuTorch or TensorFlow Lite (Micro/LiteRT) +- ExecuTorch or LiteRT - Techniques for optimising AI models for the edge (quantization, pruning, etc.) - Optimization Tools: - - TOSA Model Explorer - - .tflite to .tosa converter (if using Tensorflow rather than ExecuTorch) + - Model Explorer with TOSA adapter (and PTE adapter for ExecuTorch) - Vela compiler for Ethos-U - Bare-metal or RTOS (e.g., Zephyr) ---- - ## Resources from Arm and our partners - Arm Developer: [Edge AI](https://developer.arm.com/edge-ai) -- Learning Path: [Navigating Machine Learning with Ethos-U processors](https://learn.arm.com/learning-paths/microcontrollers/nav-mlek/) +- Arm Alif Code-along: [Advanced AI on Arm Embedded Systems](https://developer.arm.com/code-along/advanced-ai-on-arm-embedded-systems) +- Code-along repo: [Ethos-U Workshop](https://github.com/ArmDeveloperEcosystem/workshop-ethos-u) +- Learning Path: [Navigating Machine Learning with Ethos-U processors](https://learn.arm.com/learning-paths/microcontrollers/nav-mlek/) +- Learning Path: [Visualize Ethos-U NPU performance with ExecuTorch on Arm FVPs](https://learn.arm.com/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance) - Repository: [AI on Arm course](https://github.com/arm-university/AI-on-Arm) - Example Board: [Alif Ensemble DevKit E8](https://www.keil.arm.com/boards/alif-semiconductor-devkit-e8-gen-1-2558a7b/features/) - Documentation: [TOSA Specification](https://www.mlplatform.org/tosa/), [TOSA Model Explorer](https://github.com/arm/tosa-adapter-model-explorer), and [TOSA Reference Model](https://gitlab.arm.com/tosa/tosa-reference-model) +- [Model Explorer](https://ai.google.dev/edge/model-explorer) - PyTorch Blog: [ExecuTorch support for Ethos-U85](https://pytorch.org/blog/pt-executorch-ethos-u85/) +- PyTorch Tutorial: [Arm Ethos-U NPU Backend 
Tutorial](https://docs.pytorch.org/executorch/1.0/tutorial-arm-ethos-u.html) ---- ## Support Level diff --git a/Projects/Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md b/Projects/Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md index 94fab5b5..725beee0 100644 --- a/Projects/Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md +++ b/Projects/Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md @@ -1,6 +1,6 @@ --- title: "Game development using Arm Neural Graphics with Unreal Engine" -description: "Build a playable Unreal Engine 5 game demo that utilises Arm’s Neural Graphics SDK UE plugin for features such as Neural Super Sampling (NSS). Showcase near-identical image quality at lower resolution by driving neural rendering directly in the graphics pipeline." +description: "Build a playable Unreal Engine 5 game demo that utilises Arm’s Neural Graphics SDK UE plugin for features such as Neural Super Sampling (NSS). Showcase improved graphical fidelity at lower resolution by driving neural rendering directly in the graphics pipeline." subjects: - "ML" - "Gaming" @@ -28,7 +28,7 @@ badges: donation: --- -![educate_on_arm](../../images/Educate_on_Arm_banner.png) +![learn_on_arm](../../images/Learn_on_Arm_banner.png) ## Description @@ -47,7 +47,7 @@ Future SDK support will be provided for Neural Frame Rate Upscaling (NFRU) - so ### Project Summary Create a small game scene utilising the Arm Neural Graphics UE plugin to demonstrate: -- **Near-identical visuals at lower resolution** (render low → upscale with NSS) +- **Improved graphical fidelity despite lower resolution** (render low → upscale with NSS) Document your progress and findings and consider alternative applications of the neural technology within games development. @@ -60,6 +60,9 @@ Attempt different environments and objects. For example: Make your scenes dynamic with particle effects, shadows, physics and motion. 
+### Beyond the plugin
+
+Want to go further and start experimenting more with Neural Graphics? After building your game with the NSS Unreal plugin, try out the [Vulkan ML Extensions learning path](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/) to explore how neural inference runs directly through the Vulkan API. This provides lower-level control over ML workloads in the graphics pipeline, and allows for prototyping custom neural effects or optimising performance beyond what’s exposed through the engine plugin. You may also want to explore [fine-tuning your own neural models with the Arm Neural Graphics Model Gym](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/model-training-gym/) and how to [apply different quantization strategies](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/quantize-neural-upscaling-models/) for optimisation of memory and latency. --- ## Pre-requisites @@ -72,11 +75,11 @@ Make your scenes dynamic with particle effects, shadows, physics and motion. 
- Get Started Blog: [Start experimenting with NSS today](https://developer.arm.com/community/arm-community-blogs/b/mobile-graphics-and-gaming-blog/posts/how-to-access-arm-neural-super-sampling) - Deep Dive Blog: [How NSS works](https://developer.arm.com/community/arm-community-blogs/b/mobile-graphics-and-gaming-blog/posts/how-arm-neural-super-sampling-works) - Arm Developer: [Neural Graphics Development Kit](https://developer.arm.com/mobile-graphics-and-gaming/neural-graphics) -- Learning Path: [Fine-tuning neural graphics models with Model Gym](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/model-training-gym/) - Learning Path: [Neural Super Sampling in Unreal Engine](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/nss-unreal/) - Learning Path: [Getting started with Arm Accuracy Super Resolution (Arm ASR)](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/get-started-with-arm-asr/) - Unreal Engine Intro by Epic Games: [Understanding the basics](https://dev.epicgames.com/documentation/en-us/unreal-engine/understanding-the-basics-of-unreal-engine) - Repo: [Arm Neural Graphics SDK](https://github.com/arm/neural-graphics-sdk-for-game-engines) +- Repo: [Arm Neural Graphics for Unreal](https://github.com/arm/neural-graphics-for-unreal) - Repo: [Arm Neural Graphics Model Gym](https://github.com/arm/neural-graphics-model-gym) - Documentation: [Arm Neural Graphics SDK for Game Engines Developer guide](https://developer.arm.com/documentation/111167/latest/) @@ -88,7 +91,7 @@ This project is designed to be self-serve but comes with opportunity of some com ## Benefits -Standout project contributions to the community will earn digital badges. These badges can support CV or resumé building and demonstrate earned recognition. +Standout project contributions to the community will earn digital badges. These badges can support CV or resumé building and demonstrate earned recognition. 
Contributions may also be highlighted in case studies or newsletters. To receive the benefits, you must show us your project through our [online form](https://forms.office.com/e/VZnJQLeRhD). Please do not include any confidential information in your contribution. Additionally if you are affiliated with an academic institution, please ensure you have the right to share your material. From 0fece9fb778fe7373c5cbec745169f33b150e5a3 Mon Sep 17 00:00:00 2001 From: ci-bot Date: Sun, 8 Mar 2026 15:12:59 +0000 Subject: [PATCH 2/6] docs: auto-update --- docs/_data/navigation.yml | 2 +- ...5-11-27-Always-On-AI-with-Ethos-U85-NPU.md | 16 +++- docs/_posts/2025-11-27-Edge-AI-On-Mobile.md | 2 + .../2025-11-27-Ethos-U85-NPU-Applications.md | 78 ++++++++----------- ...v-Using-Neural-Graphics-&-Unreal-Engine.md | 24 +++--- 5 files changed, 64 insertions(+), 58 deletions(-) diff --git a/docs/_data/navigation.yml b/docs/_data/navigation.yml index 037a258d..39c3307c 100644 --- a/docs/_data/navigation.yml +++ b/docs/_data/navigation.yml @@ -233,7 +233,7 @@ projects: - title: Game-Dev-Using-Neural-Graphics-&-Unreal-Engine description: "Build a playable Unreal Engine 5 game demo that utilises Arm\u2019\ s Neural Graphics SDK UE plugin for features such as Neural Super Sampling (NSS).\ - \ Showcase near-identical image quality at lower resolution by driving neural\ + \ Showcase improved graphical fidelity at lower resolution by driving neural\ \ rendering directly in the graphics pipeline." 
url: /2025/11/27/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.html subjects: diff --git a/docs/_posts/2025-11-27-Always-On-AI-with-Ethos-U85-NPU.md b/docs/_posts/2025-11-27-Always-On-AI-with-Ethos-U85-NPU.md index c459b5fc..ffd7b1b8 100644 --- a/docs/_posts/2025-11-27-Always-On-AI-with-Ethos-U85-NPU.md +++ b/docs/_posts/2025-11-27-Always-On-AI-with-Ethos-U85-NPU.md @@ -69,10 +69,16 @@ full_description: |- ## Resources from Arm and our partners - Arm Developer: [Edge AI](https://developer.arm.com/edge-ai) - - Learning Path: [Navigating Machine Learning with Ethos-U processors](https://learn.arm.com/learning-paths/microcontrollers/nav-mlek/) + - Arm Alif Code-along: [Advanced AI on Arm Embedded Systems](https://developer.arm.com/code-along/advanced-ai-on-arm-embedded-systems) + - Code-along repo: [Ethos-U Workshop](https://github.com/ArmDeveloperEcosystem/workshop-ethos-u) + - Learning Path: [Navigating Machine Learning with Ethos-U processors](https://learn.arm.com/learning-paths/microcontrollers/nav-mlek/) + - Learning Path: [Visualize Ethos-U NPU performance with ExecuTorch on Arm FVPs](https://learn.arm.com/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance) - Repository: [AI on Arm course](https://github.com/arm-university/AI-on-Arm) - Example Board: [Alif Ensemble DevKit E8](https://www.keil.arm.com/boards/alif-semiconductor-devkit-e8-gen-1-2558a7b/features/) + - Documentation: [TOSA Specification](https://www.mlplatform.org/tosa/), [TOSA Model Explorer](https://github.com/arm/tosa-adapter-model-explorer), and [TOSA Reference Model](https://gitlab.arm.com/tosa/tosa-reference-model) + - [Model Explorer](https://ai.google.dev/edge/model-explorer) - PyTorch Blog: [ExecuTorch support for Ethos-U85](https://pytorch.org/blog/pt-executorch-ethos-u85/) + - PyTorch Tutorial: [Arm Ethos-U NPU Backend Tutorial](https://docs.pytorch.org/executorch/1.0/tutorial-arm-ethos-u.html) ## Support Level @@ -124,10 +130,16 @@ You should either be familiar 
with, or willing to learn about, the following: ## Resources from Arm and our partners - Arm Developer: [Edge AI](https://developer.arm.com/edge-ai) -- Learning Path: [Navigating Machine Learning with Ethos-U processors](https://learn.arm.com/learning-paths/microcontrollers/nav-mlek/) +- Arm Alif Code-along: [Advanced AI on Arm Embedded Systems](https://developer.arm.com/code-along/advanced-ai-on-arm-embedded-systems) +- Code-along repo: [Ethos-U Workshop](https://github.com/ArmDeveloperEcosystem/workshop-ethos-u) +- Learning Path: [Navigating Machine Learning with Ethos-U processors](https://learn.arm.com/learning-paths/microcontrollers/nav-mlek/) +- Learning Path: [Visualize Ethos-U NPU performance with ExecuTorch on Arm FVPs](https://learn.arm.com/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance) - Repository: [AI on Arm course](https://github.com/arm-university/AI-on-Arm) - Example Board: [Alif Ensemble DevKit E8](https://www.keil.arm.com/boards/alif-semiconductor-devkit-e8-gen-1-2558a7b/features/) +- Documentation: [TOSA Specification](https://www.mlplatform.org/tosa/), [TOSA Model Explorer](https://github.com/arm/tosa-adapter-model-explorer), and [TOSA Reference Model](https://gitlab.arm.com/tosa/tosa-reference-model) +- [Model Explorer](https://ai.google.dev/edge/model-explorer) - PyTorch Blog: [ExecuTorch support for Ethos-U85](https://pytorch.org/blog/pt-executorch-ethos-u85/) +- PyTorch Tutorial: [Arm Ethos-U NPU Backend Tutorial](https://docs.pytorch.org/executorch/1.0/tutorial-arm-ethos-u.html) ## Support Level diff --git a/docs/_posts/2025-11-27-Edge-AI-On-Mobile.md b/docs/_posts/2025-11-27-Edge-AI-On-Mobile.md index a33fcd67..ff6159d4 100644 --- a/docs/_posts/2025-11-27-Edge-AI-On-Mobile.md +++ b/docs/_posts/2025-11-27-Edge-AI-On-Mobile.md @@ -63,6 +63,7 @@ full_description: |- ## Resources from Arm and our partners - Arm Developer: [Launchpad - Mobile AI](https://developer.arm.com/mobile-graphics-and-gaming/ai-mobile) + - 
Learning Path: [Profile ExecuTorch models with SME2 on Arm](https://learn.arm.com/learning-paths/cross-platform/sme-executorch-profiling/) - Learning Path: [Mobile AI/ML Performance Profiling](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/) - Learning Path: [Build an Android chat app with Llama, KleidiAI, ExecuTorch, and XNNPACK](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/) - Learning Path: [Vision LLM Inference on Android with KleidiAI](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vision-llm-inference-on-android-with-kleidiai-and-mnn/) @@ -121,6 +122,7 @@ Utilise the resources and learning paths below and create an exciting and challe ## Resources from Arm and our partners - Arm Developer: [Launchpad - Mobile AI](https://developer.arm.com/mobile-graphics-and-gaming/ai-mobile) +- Learning Path: [Profile ExecuTorch models with SME2 on Arm](https://learn.arm.com/learning-paths/cross-platform/sme-executorch-profiling/) - Learning Path: [Mobile AI/ML Performance Profiling](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/) - Learning Path: [Build an Android chat app with Llama, KleidiAI, ExecuTorch, and XNNPACK](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/) - Learning Path: [Vision LLM Inference on Android with KleidiAI](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vision-llm-inference-on-android-with-kleidiai-and-mnn/) diff --git a/docs/_posts/2025-11-27-Ethos-U85-NPU-Applications.md b/docs/_posts/2025-11-27-Ethos-U85-NPU-Applications.md index a5424714..10abbb81 100644 --- a/docs/_posts/2025-11-27-Ethos-U85-NPU-Applications.md +++ b/docs/_posts/2025-11-27-Ethos-U85-NPU-Applications.md @@ -44,74 +44,67 @@ full_description: |- **Project Summary** - Using hardware such as the Alif Ensemble E4/E6/E8 DevKits 
(all include Ethos-U85) or a comparable platform or Arm Fixed Virtual Platform Corstone-320, your task is to design and benchmark an advanced edge inference application that exploits the Ethos-U85’s compute and transformer capabilities. + Using hardware such as the Alif Ensemble E4/E6/E8 DevKits (all include Ethos-U85), your task is to design and benchmark an advanced edge inference application that exploits the Ethos-U85’s compute and transformer capabilities. + + You can utilise the Arm Fixed Virtual Platform Corstone-320 to prototype and test your application functionally, without access to Alif hardware. You can use this to prove functional correctness, and can then later test performance on actual silicon. We are interested in seeing projects both in simulation and on final hardware. Your project should include: - 1. Model Deployment and Optimization + **Model Deployment and Optimization** Select a computationally intensive model — ideally transformer-based or multi-branch convolutional — and deploy it on the Ethos-U85 using: - - The TOSA Model Explorer extension to inspect and adapt unsupported or experimental models for TOSA compliance. + - Model Explorer to inspect models and identify problem layers that prevent optimal delegation to the Ethos-U backend. - The Vela compiler for optimization. These tools can be used to: - Convert and visualize model graphs in TOSA format. - Identify unsupported operators. - - Modify or substitute layers for compatibility using the Flatbuffers schema before re-exporting. - - Run Vela for optimized compilation targeting Ethos-U85. - 2. Application Demonstration + **Application Demonstration** Implement a working example that highlights the Ethos-U85’s strengths in real-world inference. Possible categories include: - Transformers on Edge: lightweight BERT, ViT, or audio transformers (e.g. speech or sound event classification). 
- High-resolution Vision: semantic segmentation, object detection on large input sizes, or multi-head perception networks. - Multi-modal Fusion: combining audio, image, or sensor streams for contextual understanding. - 3. Analysis and Benchmarking + **Analysis and Benchmarking** Report quantitative results on: - Inference latency, throughput (FPS or tokens/s), and memory footprint. - Power efficiency under load (optional). - Comparative performance versus Ethos-U55/U65 (use available benchmarks for reference or utilise the other Ethos-U NPUs provided in the Alif DevKits). - - The effect of TOSA optimization — demonstrate measurable improvements from graph conversion and operator fusion. - - --- ## What kind of projects should you target? To clearly demonstrate the leap from Ethos-U55/U65 to U85, choose projects that meet at least one of the following criteria: - Transformer-heavy architectures: e.g. attention blocks, transformer encoders, ViTs, or hybrid CNN+transformer models. - - *Example:* an audio event detection transformer that must process longer sequences or higher-resolution spectrograms. - - High-resolution or multi-branch networks: models with high input dimensionality or multiple processing paths that saturate NPU throughput. - - *Example:* 512×512 semantic segmentation or multi-object detection. + - High-resolution or multi-branch networks: models with high input dimensionality or multiple processing paths that saturate NPU throughput. - Dense post-processing or large fully connected layers: cases where U55/U65 memory limits or MAC bandwidth previously restricted performance. - - *Example:* large MLP heads or transformer token mixers. - Multi-modal pipelines: combining multiple sensor inputs (e.g. image + IMU + audio) where the NPU must maintain concurrency or shared intermediate representations. 
The Ethos-U85 is ideal for projects where model performance is constrained by attention layers, large activations, or operator types that previously required fallback to the CPU. Use the Ethos-U85 to eliminate those fallbacks and achieve full-NPU execution of advanced topologies. - --- - ## What will you use? You should be familiar with, or willing to learn about: - Programming: Python, C/C++ - - ExecuTorch or TensorFlow Lite (Micro/LiteRT) + - ExecuTorch or LiteRT - Techniques for optimising AI models for the edge (quantization, pruning, etc.) - Optimization Tools: - - TOSA Model Explorer - - .tflite to .tosa converter (if using Tensorflow rather than ExecuTorch) + - Model Explorer with TOSA adapter (and PTE adapter for ExecuTorch) - Vela compiler for Ethos-U - Bare-metal or RTOS (e.g., Zephyr) - --- - ## Resources from Arm and our partners - Arm Developer: [Edge AI](https://developer.arm.com/edge-ai) - - Learning Path: [Navigating Machine Learning with Ethos-U processors](https://learn.arm.com/learning-paths/microcontrollers/nav-mlek/) + - Arm Alif Code-along: [Advanced AI on Arm Embedded Systems](https://developer.arm.com/code-along/advanced-ai-on-arm-embedded-systems) + - Code-along repo: [Ethos-U Workshop](https://github.com/ArmDeveloperEcosystem/workshop-ethos-u) + - Learning Path: [Navigating Machine Learning with Ethos-U processors](https://learn.arm.com/learning-paths/microcontrollers/nav-mlek/) + - Learning Path: [Visualize Ethos-U NPU performance with ExecuTorch on Arm FVPs](https://learn.arm.com/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance) - Repository: [AI on Arm course](https://github.com/arm-university/AI-on-Arm) - Example Board: [Alif Ensemble DevKit E8](https://www.keil.arm.com/boards/alif-semiconductor-devkit-e8-gen-1-2558a7b/features/) - Documentation: [TOSA Specification](https://www.mlplatform.org/tosa/), [TOSA Model Explorer](https://github.com/arm/tosa-adapter-model-explorer), and [TOSA Reference 
Model](https://gitlab.arm.com/tosa/tosa-reference-model) - [Model Explorer](https://ai.google.dev/edge/model-explorer) - PyTorch Blog: [ExecuTorch support for Ethos-U85](https://pytorch.org/blog/pt-executorch-ethos-u85/) + - PyTorch Tutorial: [Arm Ethos-U NPU Backend Tutorial](https://docs.pytorch.org/executorch/1.0/tutorial-arm-ethos-u.html) - --- ## Support Level @@ -139,74 +132,67 @@ This project challenges you to explore the boundaries of what’s possible on Et **Project Summary** -Using hardware such as the Alif Ensemble E4/E6/E8 DevKits (all include Ethos-U85) or a comparable platform or Arm Fixed Virtual Platform Corstone-320, your task is to design and benchmark an advanced edge inference application that exploits the Ethos-U85’s compute and transformer capabilities. +Using hardware such as the Alif Ensemble E4/E6/E8 DevKits (all include Ethos-U85), your task is to design and benchmark an advanced edge inference application that exploits the Ethos-U85’s compute and transformer capabilities. + +You can utilise the Arm Fixed Virtual Platform Corstone-320 to prototype and test your application functionally, without access to Alif hardware. You can use this to prove functional correctness, and can then later test performance on actual silicon. We are interested in seeing projects both in simulation and on final hardware. Your project should include: -1. Model Deployment and Optimization +**Model Deployment and Optimization** Select a computationally intensive model — ideally transformer-based or multi-branch convolutional — and deploy it on the Ethos-U85 using: - - The TOSA Model Explorer extension to inspect and adapt unsupported or experimental models for TOSA compliance. + - Model Explorer to inspect models and identify problem layers that prevent optimal delegation to the Ethos-U backend. - The Vela compiler for optimization. These tools can be used to: - Convert and visualize model graphs in TOSA format. - Identify unsupported operators. 
- - Modify or substitute layers for compatibility using the Flatbuffers schema before re-exporting. - - Run Vela for optimized compilation targeting Ethos-U85. -2. Application Demonstration +**Application Demonstration** Implement a working example that highlights the Ethos-U85’s strengths in real-world inference. Possible categories include: - Transformers on Edge: lightweight BERT, ViT, or audio transformers (e.g. speech or sound event classification). - High-resolution Vision: semantic segmentation, object detection on large input sizes, or multi-head perception networks. - Multi-modal Fusion: combining audio, image, or sensor streams for contextual understanding. -3. Analysis and Benchmarking +**Analysis and Benchmarking** Report quantitative results on: - Inference latency, throughput (FPS or tokens/s), and memory footprint. - Power efficiency under load (optional). - Comparative performance versus Ethos-U55/U65 (use available benchmarks for reference or utilise the other Ethos-U NPUs provided in the Alif DevKits). - - The effect of TOSA optimization — demonstrate measurable improvements from graph conversion and operator fusion. - ---- ## What kind of projects should you target? To clearly demonstrate the leap from Ethos-U55/U65 to U85, choose projects that meet at least one of the following criteria: - Transformer-heavy architectures: e.g. attention blocks, transformer encoders, ViTs, or hybrid CNN+transformer models. - - *Example:* an audio event detection transformer that must process longer sequences or higher-resolution spectrograms. -- High-resolution or multi-branch networks: models with high input dimensionality or multiple processing paths that saturate NPU throughput. - - *Example:* 512×512 semantic segmentation or multi-object detection. +- High-resolution or multi-branch networks: models with high input dimensionality or multiple processing paths that saturate NPU throughput. 
- Dense post-processing or large fully connected layers: cases where U55/U65 memory limits or MAC bandwidth previously restricted performance. - - *Example:* large MLP heads or transformer token mixers. - Multi-modal pipelines: combining multiple sensor inputs (e.g. image + IMU + audio) where the NPU must maintain concurrency or shared intermediate representations. The Ethos-U85 is ideal for projects where model performance is constrained by attention layers, large activations, or operator types that previously required fallback to the CPU. Use the Ethos-U85 to eliminate those fallbacks and achieve full-NPU execution of advanced topologies. ---- - ## What will you use? You should be familiar with, or willing to learn about: - Programming: Python, C/C++ -- ExecuTorch or TensorFlow Lite (Micro/LiteRT) +- ExecuTorch or LiteRT - Techniques for optimising AI models for the edge (quantization, pruning, etc.) - Optimization Tools: - - TOSA Model Explorer - - .tflite to .tosa converter (if using Tensorflow rather than ExecuTorch) + - Model Explorer with TOSA adapter (and PTE adapter for ExecuTorch) - Vela compiler for Ethos-U - Bare-metal or RTOS (e.g., Zephyr) ---- - ## Resources from Arm and our partners - Arm Developer: [Edge AI](https://developer.arm.com/edge-ai) -- Learning Path: [Navigating Machine Learning with Ethos-U processors](https://learn.arm.com/learning-paths/microcontrollers/nav-mlek/) +- Arm Alif Code-along: [Advanced AI on Arm Embedded Systems](https://developer.arm.com/code-along/advanced-ai-on-arm-embedded-systems) +- Code-along repo: [Ethos-U Workshop](https://github.com/ArmDeveloperEcosystem/workshop-ethos-u) +- Learning Path: [Navigating Machine Learning with Ethos-U processors](https://learn.arm.com/learning-paths/microcontrollers/nav-mlek/) +- Learning Path: [Visualize Ethos-U NPU performance with ExecuTorch on Arm FVPs](https://learn.arm.com/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance) - Repository: [AI on Arm 
course](https://github.com/arm-university/AI-on-Arm) - Example Board: [Alif Ensemble DevKit E8](https://www.keil.arm.com/boards/alif-semiconductor-devkit-e8-gen-1-2558a7b/features/) - Documentation: [TOSA Specification](https://www.mlplatform.org/tosa/), [TOSA Model Explorer](https://github.com/arm/tosa-adapter-model-explorer), and [TOSA Reference Model](https://gitlab.arm.com/tosa/tosa-reference-model) +- [Model Explorer](https://ai.google.dev/edge/model-explorer) - PyTorch Blog: [ExecuTorch support for Ethos-U85](https://pytorch.org/blog/pt-executorch-ethos-u85/) +- PyTorch Tutorial: [Arm Ethos-U NPU Backend Tutorial](https://docs.pytorch.org/executorch/1.0/tutorial-arm-ethos-u.html) ---- ## Support Level diff --git a/docs/_posts/2025-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md b/docs/_posts/2025-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md index dc87178b..7b86aef7 100644 --- a/docs/_posts/2025-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md +++ b/docs/_posts/2025-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md @@ -1,6 +1,6 @@ --- title: Game development using Arm Neural Graphics with Unreal Engine -description: Build a playable Unreal Engine 5 game demo that utilises Arm’s Neural Graphics SDK UE plugin for features such as Neural Super Sampling (NSS). Showcase near-identical image quality at lower resolution by driving neural rendering directly in the graphics pipeline. +description: Build a playable Unreal Engine 5 game demo that utilises Arm’s Neural Graphics SDK UE plugin for features such as Neural Super Sampling (NSS). Showcase improved graphical fidelity at lower resolution by driving neural rendering directly in the graphics pipeline. 
subjects:
  - ML
  - Gaming
@@ -30,7 +30,7 @@ layout: article
 sidebar:
   nav: projects
 full_description: |-
-  
+
   ## Description
 
@@ -49,7 +49,7 @@ full_description: |-
   ### Project Summary
 
   Create a small game scene utilising the Arm Neural Graphics UE plugin to demonstrate:
 
-  - **Near-identical visuals at lower resolution** (render low → upscale with NSS)
+  - **Improved graphical fidelity despite lower resolution** (render low → upscale with NSS)
 
   Document your progress and findings and consider alternative applications of the neural technology within games development.
 
@@ -62,6 +62,9 @@ full_description: |-
 
   Make your scenes dynamic with particle effects, shadows, physics and motion.
 
+  ### Beyond the plugin
+
+  Want to go further and start experimenting more with Neural Graphics? After building your game with the NSS Unreal plugin, try out the [Vulkan ML Extensions learning path](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/) to explore how neural inference runs directly through the Vulkan API. This provides lower-level control over ML workloads in the graphics pipeline, and allows for prototyping custom neural effects or optimising performance beyond what’s exposed through the engine plugin. You may also want to explore [fine-tuning your own neural models with the Arm Neural Graphics Model Gym](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/model-training-gym/) and how to [apply different quantization strategies](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/quantize-neural-upscaling-models/) for optimisation of memory and latency.
 ---
 
 ## Pre-requisites
@@ -74,11 +77,11 @@ full_description: |-
   - Get Started Blog: [Start experimenting with NSS today](https://developer.arm.com/community/arm-community-blogs/b/mobile-graphics-and-gaming-blog/posts/how-to-access-arm-neural-super-sampling)
   - Deep Dive Blog: [How NSS works](https://developer.arm.com/community/arm-community-blogs/b/mobile-graphics-and-gaming-blog/posts/how-arm-neural-super-sampling-works)
   - Arm Developer: [Neural Graphics Development Kit](https://developer.arm.com/mobile-graphics-and-gaming/neural-graphics)
-  - Learning Path: [Fine-tuning neural graphics models with Model Gym](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/model-training-gym/)
   - Learning Path: [Neural Super Sampling in Unreal Engine](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/nss-unreal/)
   - Learning Path: [Getting started with Arm Accuracy Super Resolution (Arm ASR)](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/get-started-with-arm-asr/)
   - Unreal Engine Intro by Epic Games: [Understanding the basics](https://dev.epicgames.com/documentation/en-us/unreal-engine/understanding-the-basics-of-unreal-engine)
   - Repo: [Arm Neural Graphics SDK](https://github.com/arm/neural-graphics-sdk-for-game-engines)
+  - Repo: [Arm Neural Graphics for Unreal](https://github.com/arm/neural-graphics-for-unreal)
   - Repo: [Arm Neural Graphics Model Gym](https://github.com/arm/neural-graphics-model-gym)
   - Documentation: [Arm Neural Graphics SDK for Game Engines Developer guide](https://developer.arm.com/documentation/111167/latest/)
@@ -90,13 +93,13 @@ full_description: |-
 
   ## Benefits
 
-  Standout project contributions to the community will earn digital badges. These badges can support CV or resumé building and demonstrate earned recognition.
+  Standout project contributions to the community will earn digital badges. These badges can support CV or resumé building and demonstrate earned recognition. 
Contributions may also be highlighted in case studies or newsletters. To receive the benefits, you must show us your project through our [online form](https://forms.office.com/e/VZnJQLeRhD). Please do not include any confidential information in your contribution. Additionally, if you are affiliated with an academic institution, please ensure you have the right to share your material.
 
 ---
 
-
+
## Description
@@ -115,7 +118,7 @@ Future SDK support will be provided for Neural Frame Rate Upscaling (NFRU) - so
 ### Project Summary
 
 Create a small game scene utilising the Arm Neural Graphics UE plugin to demonstrate:
 
-- **Near-identical visuals at lower resolution** (render low → upscale with NSS)
+- **Improved graphical fidelity despite lower resolution** (render low → upscale with NSS)
 
 Document your progress and findings and consider alternative applications of the neural technology within games development.
 
@@ -128,6 +131,9 @@ Attempt different environments and objects. For example:
 
 Make your scenes dynamic with particle effects, shadows, physics and motion.
 
+### Beyond the plugin
+
+Want to go further and start experimenting more with Neural Graphics? After building your game with the NSS Unreal plugin, try out the [Vulkan ML Extensions learning path](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/) to explore how neural inference runs directly through the Vulkan API. This provides lower-level control over ML workloads in the graphics pipeline, and allows for prototyping custom neural effects or optimising performance beyond what’s exposed through the engine plugin.
You may also want to explore [fine-tuning your own neural models with the Arm Neural Graphics Model Gym](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/model-training-gym/) and how to [apply different quantization strategies](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/quantize-neural-upscaling-models/) for optimisation of memory and latency.
 
 ---
 
 ## Pre-requisites
@@ -140,11 +146,11 @@ Make your scenes dynamic with particle effects, shadows, physics and motion.
 - Get Started Blog: [Start experimenting with NSS today](https://developer.arm.com/community/arm-community-blogs/b/mobile-graphics-and-gaming-blog/posts/how-to-access-arm-neural-super-sampling)
 - Deep Dive Blog: [How NSS works](https://developer.arm.com/community/arm-community-blogs/b/mobile-graphics-and-gaming-blog/posts/how-arm-neural-super-sampling-works)
 - Arm Developer: [Neural Graphics Development Kit](https://developer.arm.com/mobile-graphics-and-gaming/neural-graphics)
-- Learning Path: [Fine-tuning neural graphics models with Model Gym](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/model-training-gym/)
 - Learning Path: [Neural Super Sampling in Unreal Engine](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/nss-unreal/)
 - Learning Path: [Getting started with Arm Accuracy Super Resolution (Arm ASR)](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/get-started-with-arm-asr/)
 - Unreal Engine Intro by Epic Games: [Understanding the basics](https://dev.epicgames.com/documentation/en-us/unreal-engine/understanding-the-basics-of-unreal-engine)
 - Repo: [Arm Neural Graphics SDK](https://github.com/arm/neural-graphics-sdk-for-game-engines)
+- Repo: [Arm Neural Graphics for Unreal](https://github.com/arm/neural-graphics-for-unreal)
 - Repo: [Arm Neural Graphics Model Gym](https://github.com/arm/neural-graphics-model-gym)
 - Documentation: [Arm Neural Graphics SDK for Game Engines Developer
guide](https://developer.arm.com/documentation/111167/latest/)
@@ -156,7 +162,7 @@ This project is designed to be self-serve but comes with opportunity of some com
 
 ## Benefits
 
-Standout project contributions to the community will earn digital badges. These badges can support CV or resumé building and demonstrate earned recognition.
+Standout project contributions to the community will earn digital badges. These badges can support CV or resumé building and demonstrate earned recognition. 
 
 Contributions may also be highlighted in case studies or newsletters. To receive the benefits, you must show us your project through our [online form](https://forms.office.com/e/VZnJQLeRhD). Please do not include any confidential information in your contribution. Additionally, if you are affiliated with an academic institution, please ensure you have the right to share your material.
\ No newline at end of file

From 9dba98fe92e7a4c355d0a8227f567a705732dc28 Mon Sep 17 00:00:00 2001
From: Matt Cossins
Date: Sun, 8 Mar 2026 15:15:59 +0000
Subject: [PATCH 3/6] Formatting

---
 .../Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/Projects/Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md b/Projects/Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md
index 725beee0..4369408e 100644
--- a/Projects/Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md
+++ b/Projects/Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md
@@ -63,7 +63,6 @@ Make your scenes dynamic with particle effects, shadows, physics and motion.
 ### Beyond the plugin
 
 Want to go further and start experimenting more with Neural Graphics? After building your game with the NSS Unreal plugin, try out the [Vulkan ML Extensions learning path](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/) to explore how neural inference runs directly through the Vulkan API.
This provides lower-level control over ML workloads in the graphics pipeline, and allows for prototyping custom neural effects or optimising performance beyond what’s exposed through the engine plugin. You may also want to explore [fine-tuning your own neural models with the Arm Neural Graphics Model Gym](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/model-training-gym/) and how to [apply different quantization strategies](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/quantize-neural-upscaling-models/) for optimisation of memory and latency.
----
 ## Pre-requisites
 - Laptop/PC/Mobile for Android Unreal Engine game development

From 1b2b6705e2e960dfcd7fa5f8641ec07c5a1264eb Mon Sep 17 00:00:00 2001
From: ci-bot
Date: Sun, 8 Mar 2026 15:18:04 +0000
Subject: [PATCH 4/6] docs: auto-update

---
 ...2025-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md | 2 --
 1 file changed, 2 deletions(-)

diff --git a/docs/_posts/2025-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md b/docs/_posts/2025-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md
index 7b86aef7..e33bb304 100644
--- a/docs/_posts/2025-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md
+++ b/docs/_posts/2025-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md
@@ -65,7 +65,6 @@ full_description: |-
   ### Beyond the plugin
 
   Want to go further and start experimenting more with Neural Graphics? After building your game with the NSS Unreal plugin, try out the [Vulkan ML Extensions learning path](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/) to explore how neural inference runs directly through the Vulkan API. This provides lower-level control over ML workloads in the graphics pipeline, and allows for prototyping custom neural effects or optimising performance beyond what’s exposed through the engine plugin.
You may also want to explore [fine-tuning your own neural models with the Arm Neural Graphics Model Gym](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/model-training-gym/) and how to [apply different quantization strategies](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/quantize-neural-upscaling-models/) for optimisation of memory and latency.
-
 ---
 ## Pre-requisites
 - Laptop/PC/Mobile for Android Unreal Engine game development
@@ -134,7 +133,6 @@ Make your scenes dynamic with particle effects, shadows, physics and motion.
 ### Beyond the plugin
 
 Want to go further and start experimenting more with Neural Graphics? After building your game with the NSS Unreal plugin, try out the [Vulkan ML Extensions learning path](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/) to explore how neural inference runs directly through the Vulkan API.
----
 ## Pre-requisites
 - Laptop/PC/Mobile for Android Unreal Engine game development

From 953f2885eb94efdb0fe48bf33a2d2c204ea4724d Mon Sep 17 00:00:00 2001
From: Matt Cossins
Date: Sun, 8 Mar 2026 15:21:08 +0000
Subject: [PATCH 5/6] Final formatting

---
 .../Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Projects/Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md b/Projects/Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md
index 4369408e..29445e9b 100644
--- a/Projects/Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md
+++ b/Projects/Projects/Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md
@@ -60,7 +60,7 @@ Attempt different environments and objects. For example:
 
 Make your scenes dynamic with particle effects, shadows, physics and motion.
 
-### Beyond the plugin
+**Beyond the plugin**
 
 Want to go further and start experimenting more with Neural Graphics? After building your game with the NSS Unreal plugin, try out the [Vulkan ML Extensions learning path](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/) to explore how neural inference runs directly through the Vulkan API. This provides lower-level control over ML workloads in the graphics pipeline, and allows for prototyping custom neural effects or optimising performance beyond what’s exposed through the engine plugin. You may also want to explore [fine-tuning your own neural models with the Arm Neural Graphics Model Gym](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/model-training-gym/) and how to [apply different quantization strategies](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/quantize-neural-upscaling-models/) for optimisation of memory and latency.
From 231d9c63333e66d04f9966cead85c1206ab30a82 Mon Sep 17 00:00:00 2001
From: ci-bot
Date: Sun, 8 Mar 2026 15:22:32 +0000
Subject: [PATCH 6/6] docs: auto-update

---
 ...25-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/_posts/2025-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md b/docs/_posts/2025-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md
index e33bb304..3dd10bf7 100644
--- a/docs/_posts/2025-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md
+++ b/docs/_posts/2025-11-27-Game-Dev-Using-Neural-Graphics-&-Unreal-Engine.md
@@ -62,7 +62,7 @@ full_description: |-
 
   Make your scenes dynamic with particle effects, shadows, physics and motion.
 
-  ### Beyond the plugin
+  **Beyond the plugin**
 
   Want to go further and start experimenting more with Neural Graphics? After building your game with the NSS Unreal plugin, try out the [Vulkan ML Extensions learning path](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/) to explore how neural inference runs directly through the Vulkan API. This provides lower-level control over ML workloads in the graphics pipeline, and allows for prototyping custom neural effects or optimising performance beyond what’s exposed through the engine plugin. You may also want to explore [fine-tuning your own neural models with the Arm Neural Graphics Model Gym](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/model-training-gym/) and how to [apply different quantization strategies](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/quantize-neural-upscaling-models/) for optimisation of memory and latency.
@@ -130,7 +130,7 @@ Attempt different environments and objects. For example:
 
 Make your scenes dynamic with particle effects, shadows, physics and motion.
 
-### Beyond the plugin
+**Beyond the plugin**
 
 Want to go further and start experimenting more with Neural Graphics?
After building your game with the NSS Unreal plugin, try out the [Vulkan ML Extensions learning path](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/) to explore how neural inference runs directly through the Vulkan API. This provides lower-level control over ML workloads in the graphics pipeline, and allows for prototyping custom neural effects or optimising performance beyond what’s exposed through the engine plugin. You may also want to explore [fine-tuning your own neural models with the Arm Neural Graphics Model Gym](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/model-training-gym/) and how to [apply different quantization strategies](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/quantize-neural-upscaling-models/) for optimisation of memory and latency.
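Editor's note on the benchmarking deliverables in the patches above: both the Ethos-U85 project (latency, FPS/tokens-per-second) and the NSS demo benefit from a consistent timing harness when reporting results. The sketch below is illustrative only, assuming a host-side Python environment; `benchmark` and the stand-in workload are hypothetical names, not part of any Arm SDK, and on a real target you would swap in the actual inference call.

```python
import statistics
import time

def benchmark(infer, warmup=10, iters=100):
    """Time a zero-argument callable and report latency/throughput figures."""
    for _ in range(warmup):            # let caches and any lazy init settle
        infer()
    samples_ms = []
    for _ in range(iters):
        start = time.perf_counter()
        infer()
        samples_ms.append((time.perf_counter() - start) * 1e3)  # ms
    samples_ms.sort()
    mean_ms = statistics.mean(samples_ms)
    return {
        "mean_ms": mean_ms,
        "p95_ms": samples_ms[max(0, int(0.95 * len(samples_ms)) - 1)],
        "fps": 1e3 / mean_ms,          # inferences (or frames) per second
    }

# Stand-in CPU workload; replace with the real NPU/GPU inference call.
stats = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"mean {stats['mean_ms']:.3f} ms, p95 {stats['p95_ms']:.3f} ms, {stats['fps']:.1f} inf/s")
```

Reporting a percentile alongside the mean (as the dict above does) makes run-to-run jitter visible, which matters when comparing Ethos-U55/U65/U85 numbers or low-resolution-plus-NSS against native rendering.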