From 774708a3d771d186b1ef67755887fe00c0989b8b Mon Sep 17 00:00:00 2001 From: MillyWei Date: Wed, 22 Oct 2025 20:49:20 +0800 Subject: [PATCH 1/7] update doc --- docs/AddingSamples.md | 209 +++++++++++++++++++++++++++++++--------- docs/Telemetry-Guide.md | 157 ++++++++++++++++++++++++++++++ 2 files changed, 323 insertions(+), 43 deletions(-) create mode 100644 docs/Telemetry-Guide.md diff --git a/docs/AddingSamples.md b/docs/AddingSamples.md index b214517d..7a76bbf6 100644 --- a/docs/AddingSamples.md +++ b/docs/AddingSamples.md @@ -8,33 +8,58 @@ All samples go in the `Samples` folder. The samples are loaded dynamically, so a A sample is defined by a `.xaml` and a `.xaml.cs` file, where the class in your `.xaml.cs` needs to be annotated with the [GallerySample] attribute. The [GallerySample] attribute contains metadata about the sample, and the `.xaml.cs`/`.xaml` file is the entry point to the sample. You can add your files anywhere within the Samples folder. Try to group them properly. -If your sample uses a Model that hasn't been used before in the gallery app, you might also need to include a new model file (`.model.json`)inside the `Samples\ModelsDefinitions\` directory. The `.model.json` file is used to define a collection of samples that use a specific model family, and the `apis.json` file is used to define a collection of samples that use a specific API. You can also add a `.modelgroup.json` file to organize the model family within a model group. The `.model.json`, `.modelgroup.json`, and `apis.json` files are used to group samples together in the app and they show up as a single page where all samples are rendered at the same time. +If your sample uses a Model that hasn't been used before in the gallery app, you might also need to include a new model definition file inside the `Samples\Definitions\Models\` directory. Two formats are supported: +- `.model.json` - A simpler format for defining a single model family +- `.modelgroup.json` - Used to define a group of related model families with additional grouping metadata + +If your sample uses a Windows AI API, you need to add it to the `apis.json` file inside the `Samples\Definitions\WcrApis\` directory. These files are used to group samples together in the app and they show up as a single page where all samples are rendered at the same time. A sample folder that is not inside a model or api folder renders as a single sample in the app. These are good for samples that are higher level concepts (such as guides), or samples that combine multiple models or APIs. ### `Sample.xaml` and `Sample.xaml.cs` This is the entry point to the sample - whatever you put here will show up when the sample is loaded. -If you are creating a sample for a model, and your sample is part of a model collection, the model downloading is handled for you. To get the path of the model on disk, override the OnNavigatedTo method in your Sample.xaml.cs file: +If you are creating a sample for a model, your sample should inherit from `BaseSamplePage` and override the `LoadModelAsync` method. The model downloading is handled for you. 
To load the model, override the LoadModelAsync method in your Sample.xaml.cs file: ```csharp -protected override void OnNavigatedTo(NavigationEventArgs e) +protected override async Task LoadModelAsync(SampleNavigationParameters sampleParams) { - base.OnNavigatedTo(e); - if (e.Parameter is SampleNavigationParameters params) + try + { + // For language models, get an IChatClient instance + chatClient = await sampleParams.GetIChatClientAsync(); + + // sampleParams.NotifyCompletion() should be called when the model is ready + // This will hide the loading spinner in the UI + } + catch (Exception ex) { - // path to the model on disk is params.CachedModel.Path - // if params.CachedModel.IsFile is true, the Path points to a file - // otherwise, it points to a folder - // this is determined by the model url in the model.json - // params.CachedModel.Details contains the model details as defined in model.json - // such as the url, description, and hardware requirements - // - // params.RequestWaitForCompletion() is an optional method - // to indicate to the SampleContainer that you are waiting for the model to load - // this will show a loading spinner in the UI - // once the model is loaded, call params.NotifyCompletion() + ShowException(ex); } + + sampleParams.NotifyCompletion(); +} +``` + +If your sample requires two models (for example, a language model plus an embeddings model), override the multi-model overload instead: + +```csharp +protected override async Task LoadModelAsync(MultiModelSampleNavigationParameters sampleParams) +{ + try + { + // Example: initialize second model using sampleParams.ModelPaths[1] + // ... your model-2 init code ... + + // For language models, still use the chat client helper + chatClient = await sampleParams.GetIChatClientAsync(); + } + catch (Exception ex) + { + ShowException(ex); + } + + sampleParams.NotifyCompletion(); } ``` @@ -58,65 +83,163 @@ This is an attribute that contains the sample icon, links and more that show up ```csharp [GallerySample( - ModelType = ModelType.ResNet, - Scenario = ScenarioType.ImageClassifyImage, - SharedCode = [ - SharedCodeEnum.Prediction, - SharedCodeEnum.LabelMap, - SharedCodeEnum.BitmapFunctions + Name = "Paraphrase", + Model1Types = [ModelType.LanguageModels, ModelType.PhiSilica], + Scenario = ScenarioType.TextParaphraseText, + NugetPackageReferences = [ + "Microsoft.Extensions.AI" ], - Name = "Image Classification", - Id = "09d73ba7-b877-45f9-9df7-41898ab4d339", - Icon = "\uE8B9")] + Id = "9e006e82-8e3f-4401-8a83-d4c4c59cc20c", + Icon = "\uE8D4")] ``` -The `Scenario` is used to group samples together by scenario. If you have multiple samples that are demonstrating the same scenario (for example: summarizing text), you can give them the same `Scenario` and they will show up together in the app. The scenario must be first defined in `scenarios.json` in the root of the `Samples` folder. +Key properties: +- `Name`: Display name of the sample +- `Model1Types`: Array of model types this sample supports (required) +- `Model2Types`: Optional second model type array if the sample uses multiple models +- `Scenario`: The scenario type defined in `scenarios.json` +- `SharedCode`: Optional array of shared code items for sample export; values come from `SharedCodeEnum` (auto-generated from files under `Samples\SharedCode` and `AIDevGallery\Utils\DeviceUtils`). Most basic language model samples can omit this. 
+- `NugetPackageReferences`: Array of NuGet package names needed +- `AssetFilenames`: Array of asset files needed by this sample +- `Id`: Unique identifier (GUID) +- `Icon`: Icon glyph for display + +The `Scenario` is used to group samples together by scenario. If you have multiple samples that are demonstrating the same scenario (for example: summarizing text), you can give them the same `Scenario` and they will show up together in the app. The scenario must be first defined in `Samples\scenarios.json`. This file is embedded at build time by a source generator — edit it and rebuild to take effect (it is not loaded dynamically at runtime). +Prompt templates for language models are defined in `Samples\promptTemplates.json` and are also embedded by a source generator. -### `.model.json` -This is a metadata file that contains the model details. The model details are used to group samples together by model that might have different hardware requirements. + +### `.model.json` and `.modelgroup.json` + +Both `.model.json` and `.modelgroup.json` formats are supported for defining models. The main difference is: +- `.model.json` - Used for a single model family +- `.modelgroup.json` - Used for grouping multiple related model families with additional metadata like display order + +#### `.model.json` format +This format defines a single model family with its variations: + +```json +{ + "Phi3Mini": { + "Id": "phi3mini", + "Name": "Phi 3 Mini", + "Description": "Phi-3 Mini is a 3.8B parameter, lightweight, state-of-the-art open language model", + "DocsUrl": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx", + "Models": { + "Phi3MiniDirectML": { + "Id": "202d6eaa-7a65-4d40-b2c3-53f1ef12a4eb", + "Name": "Phi 3 Mini DirectML", + "Url": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx/tree/main/directml/directml-int4-awq-block-128", + "Description": "Phi 3 Mini DirectML will run on your GPU", + "HardwareAccelerator": "GPU", + "Size": 2135763374, + "Icon": "Microsoft.svg", + "ParameterSize": "3.8B", + "PromptTemplate": "Phi3", + "License": "mit" + }, + "Phi3MiniCPU": { + ... 
+ } + }, + "ReadmeUrl": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx/blob/main/README.md" + } +} +``` + +#### `.modelgroup.json` format +This format groups multiple related model families together with additional metadata like display order: ```json { "LanguageModels": { "Id": "615e2f1c-ea95-4c34-9d22-b1e8574fe476", - "Name": "Language Models", + "Name": "Language", "Icon": "\uE8BD", + "Order": 1, "Models": { - "Phi3": { - "Id": "phi3", - "Name": "Phi 3", - "Icon": "\uE8BD", - "Description": "Phi 3", + "Phi3Mini": { + "Id": "phi3mini", + "Name": "Phi 3 Mini", + "Description": "Phi-3 Mini is a 3.8B parameter, lightweight, state-of-the-art open language model", "DocsUrl": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx", - "PromptTemplate": "Phi3", "Models": { "Phi3MiniDirectML": { "Id": "202d6eaa-7a65-4d40-b2c3-53f1ef12a4eb", "Name": "Phi 3 Mini DirectML", "Url": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx/tree/main/directml/directml-int4-awq-block-128", - "Description": "Phi 3 Mini DirectML", - "HardwareAccelerator": "DML", + "Description": "Phi 3 Mini DirectML will run on your GPU", + "HardwareAccelerator": "GPU", "Size": 2135763374, - "ParameterSize": "3.8B" + "Icon": "Microsoft.svg", + "ParameterSize": "3.8B", + "PromptTemplate": "Phi3", + "License": "mit", + "AIToolkitActions": ["Playground", "PromptBuilder"], + "AIToolkitId": "Phi-3-mini-4k-directml-int4-awq-block-128-onnx" }, "Phi3MiniCPU": { ... } - } + }, + "ReadmeUrl": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx/blob/main/README.md" } } } } ``` -The Models dictionary contains the different variations of the model that are available. The Url of the model is used to download the model from HuggingFace and it could point to the root of the HF repo, directly to a subfolder, or a single file. +#### Key fields for both formats: + +**Model Family fields** (applies to both `.model.json` and models within `.modelgroup.json`): +- `Id`: Unique identifier for the model family +- `Name`: Display name +- `Description`: Description of the model family +- `DocsUrl`: Link to documentation +- `ReadmeUrl`: Link to README file +- `Models`: Dictionary of model variations with different hardware requirements + - `Url`: HuggingFace URL - can point to repo root, a subfolder, or a single file + - `HardwareAccelerator`: One or more of: "CPU", "GPU", "DML", "QNN", "NPU", "WCRAPI", "OLLAMA", "OPENAI", "VitisAI", "OpenVINO" (or an array like ["CPU", "GPU"]) + - `Size`: Model size in bytes + - `License`: License type (e.g., "mit", "apache-2.0") + - `PromptTemplate`: Prompt template to use (for language models) + - `SupportedOnQualcomm`: Whether the model is supported on Qualcomm devices (optional) + - `Icon`: Icon filename shown for the model (optional) + - `FileFilter`: A string or array of strings to filter selectable files (optional) + - `AIToolkitActions`: Actions available in AI Toolkit (optional) + - `AIToolkitId`: AI Toolkit identifier (optional) + - `AIToolkitFinetuningId`: AI Toolkit finetuning identifier (optional) + - `InputDimensions` / `OutputDimensions`: Model input/output dimensions (optional) + +**Additional fields for `.modelgroup.json` only:** +- `Order`: Display order in the app (optional) - this field is only available in `.modelgroup.json` +- `Models`: Dictionary of model families within this group (in `.modelgroup.json`, this wraps multiple model families) ## Adding a sample -To add a sample, first identify the model or API that the sample is using. 
If there is no `.model.json`/`.modelgroup.json` for the model or API, create a new `.model.json`/`.modelgroup.json` under the `ModelsDefinitions` folder. +To add a sample, follow these steps: + +1. **Identify the model or API**: Determine which model or Windows AI API your sample will use. + +2. **Create or update model definitions** (if needed): + - For Open Source Models: If the model doesn't exist, create either: + - A new `.model.json` file for a single model family, or + - A new `.modelgroup.json` file for grouping multiple related model families + - Place these files under `Samples\Definitions\Models\` + - For Windows AI APIs: Add your API to the `apis.json` file under `Samples\Definitions\WcrApis\` + +3. **Create the sample files**: Add a new WinUI Blank Page in the appropriate folder: + - For Open Source Models: `Samples\Open Source Models\[Model Category]\[SampleName].xaml` + - For Windows AI APIs: `Samples\WCRAPIs\[SampleName].xaml` -Add a new WinUI Blank Page in the desired folder called `[YourSampleName].xaml`, and add the proper `[GallerySample]` attribute with the appropriate metadata as defined above. If adding a sample for a model, override the OnNavigatedTo method to get the path of the model on disk, as described above. +4. **Implement the sample**: + - Make your sample class inherit from `BaseSamplePage` + - Add the `[GallerySample]` attribute with appropriate metadata (use `Model1Types` array, `Scenario`, etc.) + - Override `LoadModelAsync(SampleNavigationParameters sampleParams)` for single-model samples, or `LoadModelAsync(MultiModelSampleNavigationParameters sampleParams)` for dual-model samples + - Use `await sampleParams.GetIChatClientAsync()` for language models or appropriate methods for other model types + - For WinML-based models, use `sampleParams.WinMlSampleOptions` for EP policy, device type, and compile options + - Call `sampleParams.NotifyCompletion()` when initialization is complete + - Register cleanup code in the `Unloaded` event handler -Run the app - the sample should show up as part of the model or API collection, or as a standalone page if it's not part of a collection. \ No newline at end of file +5. **Test**: Run the app - the sample should show up as part of the model or API collection, or as a standalone page if it's not part of a collection. \ No newline at end of file diff --git a/docs/Telemetry-Guide.md b/docs/Telemetry-Guide.md new file mode 100644 index 00000000..9e7a3244 --- /dev/null +++ b/docs/Telemetry-Guide.md @@ -0,0 +1,157 @@ +### AI Dev Gallery Telemetry Guide + +This guide summarizes the telemetry implementation in this repo, the event catalog and how to view it locally. + +--- + +## Overview +- **Provider name**: `Microsoft.Windows.AIDevGallery` +- **Implementation**: `EventSource` with Windows ETW TraceLogging +- **Code locations**: + - `AIDevGallery/Telemetry/Telemetry.cs` (core logic) + - `AIDevGallery/Telemetry/TelemetryFactory.cs` (singleton factory) + - `AIDevGallery/Telemetry/ITelemetry.cs`, `AIDevGallery/Telemetry/LogLevel.cs` + - `AIDevGallery/Telemetry/TelemetryEventSource.cs` (keywords/tags) + - `AIDevGallery/Telemetry/PrivacyConsentHelpers.cs` (privacy-sensitive regions) + - `AIDevGallery/Telemetry/Events/*.cs` (event data contracts and logging entry points) + +--- + +## Collection and Write Flow +1) Product code calls static `Log(...)`/`LogError(...)`/`LogCritical(...)` methods defined under `Telemetry/Events/*.cs`; parameters are the public properties on the event type. 
+2) Calls go through the `ITelemetry` implementation (`Telemetry.cs`), which performs sensitive string replacement (see “Sensitive Information Handling”). +3) Based on `LogLevel` and `IsDiagnosticTelemetryOn`, events may upload or be logged locally only: + - When `IsDiagnosticTelemetryOn == false`, non-error `Info`/`Measure` events are downgraded to `Local` (local-only). + - `Critical` and error-level events are not downgraded. +4) Finally, events are written via `EventSource.Write(...)` using TraceLogging: + - Provider: `Microsoft.Windows.AIDevGallery` + - Keywords: `TelemetryKeyword` / `MeasuresKeyword` / `CriticalDataKeyword` + +Key code snippet: + +```text +// Provider name +private const string ProviderName = "Microsoft.Windows.AIDevGallery"; // in Telemetry.cs + +// Downgrade (when diagnostics off, non-error Info/Measure → Local) +if (!IsDiagnosticTelemetryOn) +{ + if (!isError && (level == LogLevel.Measure || level == LogLevel.Info)) + { + level = LogLevel.Local; + } +} + +// Final write to ETW +TelemetryEventSourceInstance.Write(eventName, ref telemetryOptions, ref activityId, ref relatedActivityId, ref data); +``` + +--- + +## View Telemetry Locally (Debug/Verification) + +### Method 1: PerfView (recommended) +1) Run PerfView as Administrator. +2) Collect → Collect: + - Uncheck Thread Time, CPU, .NET/CLR, Kernel (set Kernel Events to None under Advanced). + - In Additional Providers, manually enter: + - `Microsoft.Windows.AIDevGallery:0xFFFFFFFFFFFFFFFF:Verbose` +3) Start → exercise the app → Stop. +4) Open the produced etl.zip → Events; filter by Provider `Microsoft.Windows.AIDevGallery` to inspect events and their public properties. + +Tip: If the provider is not in the dropdown, manually type the provider string above. + +### Method 2: PerfView Listen (live) +- Run → Listen → Providers `Microsoft.Windows.AIDevGallery:0xFFFFFFFFFFFFFFFF:Verbose` → Start → exercise the app to see live events. + +### Method 3: logman (command line, minimal) +```bat +logman start AIGalleryOnly -p Microsoft.Windows.AIDevGallery 0xFFFFFFFFFFFFFFFF 5 -ets +rem After exercising the app +logman stop AIGalleryOnly -ets +``` +The resulting .etl can be opened with PerfView/WPA; or convert to CSV via `tracerpt`. 
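+For example, a minimal conversion command (assuming the trace above was written to `AIGalleryOnly.etl`; the file names here are illustrative):
+
+```bat
+rem Convert the binary trace to CSV for quick inspection
+tracerpt AIGalleryOnly.etl -o AIGalleryOnly.csv -of CSV
+```
+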
+### Method 4: dotnet-trace (attach to process)
+```bat
+dotnet-trace collect --process-id <pid> --providers Microsoft.Windows.AIDevGallery:0xFFFFFFFFFFFFFFFF:Verbose
+```
+
+---
+
+## Event Catalog (by topic)
+
+- Navigation (Log)
+  - `NavigatedToPage_Event`
+  - `NavigatedToSample_Event`
+  - `NavigatedToSampleLoaded_Event`
+
+- Interaction/UI (Log)
+  - `ButtonClicked_Event`
+  - `ToggleCodeButton_Event`
+  - `AIToolkitActionClicked_Event`
+
+- Search & Result (Log)
+  - `SearchModel_Event`
+  - `DownloadSearchedModel_Event`
+
+- Deep link / Links / Files (Log)
+  - `DeepLinkActivated_Event`
+  - `ModelDetailsLinkClicked_Event`
+  - `OpenModelFolder_Event`
+
+- Model download / model ops
+  - Log: `ModelDownloadEnqueue_Event`, `ModelDownloadStart_Event`, `ModelDownloadComplete_Event`, `ModelDownloadCancel_Event`, `ModelDeleted_Event`
+  - LogError: `ModelDownloadFailed_Event`
+
+- WCR API
+  - Log: `WcrApiDownloadRequested_Event`
+  - LogError: `WcrApiDownloadFailed_Event`
+
+- Cache maintenance (LogCritical)
+  - `ModelCacheMoved_Event`
+  - `ModelCacheDeleted_Event`
+
+- Samples/Project (Log)
+  - `SampleInteraction_Event`
+  - `SampleProjectGenerated_Event`
+
+- Others (via `LogException`, event name `ExceptionThrown`, includes exception name/message/stack; see next sections)
+
+---
+
+## Event Data & Sensitive Information Handling
+
+- Event “public properties” are the fields written; they are defined on `[EventData]` classes under `Telemetry/Events/*.cs`.
+- On singleton initialization, “well-known sensitive strings” are added for replacement:
+  - The user directory (e.g., `C:\Users\<username>`) is replaced with a placeholder token
+  - The user name / current user are likewise replaced with placeholder tokens
+- `Log`/`LogError` call `ReplaceSensitiveStrings(...)` on the event. Most events implement no-op replacement; fields such as `ModelUrl`/`Uri`/`Query`/`Link` are only replaced when they contain known sensitive substrings.
+- `LogException` replaces sensitive substrings in `Message` and `InnerException.Message`; type names and stack traces are recorded as-is for diagnostics.
+
+---
+
+## Region and Diagnostic Switch
+
+- `AppData.IsDiagnosticDataEnabled` default: `true` for non-privacy-sensitive regions; `false` for privacy-sensitive regions (see `PrivacyConsentHelpers.IsPrivacySensitiveRegion()`).
+- On app start, its value is assigned to `Telemetry.IsDiagnosticTelemetryOn`:
+  - Non-error `Info`/`Measure` events downgrade to `Local` when `false`.
+  - `Critical` and error events are not downgraded.
+
+---
+
+## FAQ
+
+- Q: The PerfView provider list does not show `Microsoft.Windows.AIDevGallery`.
+  - A: This is expected. EventSource-based custom providers often don’t appear in the dropdown; manually type `Microsoft.Windows.AIDevGallery:0xFFFFFFFFFFFFFFFF:Verbose` in Additional Providers.
+
+- Q: I see other providers’ events in the etl.
+  - A: Additional Providers is an “add” subscription; PerfView also captures Kernel/CLR by default. Disable defaults (Kernel=None) before collection, or filter by Provider while viewing.
+
+---
+
+## Change and Maintenance
+- Adding a new event: create a type under `Telemetry/Events/` with `[EventData]`; public properties become event fields. Provide `public static void Log(...)` and use `TelemetryFactory.Get<ITelemetry>()` to write.
+- Changing event fields can impact downstream processing/queries; avoid when possible. If required, update release notes and this document accordingly.
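+
+As a sketch of that pattern, a hypothetical event might look like this; the class name, field, base-class members (`EventBase`, the privacy tag), and namespaces are illustrative assumptions modeled on the existing event classes, not repo code:
+
+```csharp
+// Hypothetical event type under Telemetry/Events/ (illustrative only)
+using System;
+using System.Diagnostics.Tracing;
+
+namespace AIDevGallery.Telemetry.Events;
+
+[EventData]
+internal sealed class SampleExported_Event : EventBase
+{
+    private SampleExported_Event(string sampleId)
+    {
+        SampleId = sampleId;
+    }
+
+    // Public properties become the event's fields
+    public string SampleId { get; }
+
+    // Assumed overrides, following the sibling event classes
+    public override PartA_PrivTags PartA_PrivTags => PrivTags.ProductAndServiceUsage;
+
+    public override void ReplaceSensitiveStrings(Func<string, string> replaceSensitiveStrings)
+    {
+        // No-op: SampleId is not expected to contain user paths or names
+    }
+
+    public static void Log(string sampleId)
+    {
+        TelemetryFactory.Get<ITelemetry>().Log("SampleExported_Event", LogLevel.Measure, new SampleExported_Event(sampleId));
+    }
+}
+```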
+ + From 60a550fe006b9dcd29e0b2ac00ca51e5213741fc Mon Sep 17 00:00:00 2001 From: MillyWei Date: Thu, 23 Oct 2025 12:01:42 +0800 Subject: [PATCH 2/7] Update doc --- docs/AddingSamples.md | 27 +++++++++++++++++++++++---- 1 file changed, 23 insertions(+), 4 deletions(-) diff --git a/docs/AddingSamples.md b/docs/AddingSamples.md index 7a76bbf6..428eb938 100644 --- a/docs/AddingSamples.md +++ b/docs/AddingSamples.md @@ -195,8 +195,8 @@ This format groups multiple related model families together with additional meta - `Id`: Unique identifier for the model family - `Name`: Display name - `Description`: Description of the model family -- `DocsUrl`: Link to documentation -- `ReadmeUrl`: Link to README file +- `DocsUrl`: External link to the model family's documentation homepage/repository overview, used for external link and source display. +- `ReadmeUrl`: Direct link to the model family's README markdown that the app fetches and renders in-app - `Models`: Dictionary of model variations with different hardware requirements - `Url`: HuggingFace URL - can point to repo root, a subfolder, or a single file - `HardwareAccelerator`: One or more of: "CPU", "GPU", "DML", "QNN", "NPU", "WCRAPI", "OLLAMA", "OPENAI", "VitisAI", "OpenVINO" (or an array like ["CPU", "GPU"]) @@ -237,9 +237,28 @@ To add a sample, follow these steps: - Make your sample class inherit from `BaseSamplePage` - Add the `[GallerySample]` attribute with appropriate metadata (use `Model1Types` array, `Scenario`, etc.) - Override `LoadModelAsync(SampleNavigationParameters sampleParams)` for single-model samples, or `LoadModelAsync(MultiModelSampleNavigationParameters sampleParams)` for dual-model samples - - Use `await sampleParams.GetIChatClientAsync()` for language models or appropriate methods for other model types + - Use `await sampleParams.GetIChatClientAsync()` for language models or appropriate methods for other model types (See [Common entry points cheat sheet](#common-entry-points)) - For WinML-based models, use `sampleParams.WinMlSampleOptions` for EP policy, device type, and compile options - Call `sampleParams.NotifyCompletion()` when initialization is complete - Register cleanup code in the `Unloaded` event handler -5. **Test**: Run the app - the sample should show up as part of the model or API collection, or as a standalone page if it's not part of a collection. \ No newline at end of file +5. **Test**: Run the app - the sample should show up as part of the model or API collection, or as a standalone page if it's not part of a collection. + +### Common entry points + +- Language models + - Use from navigation params: `await sampleParams.GetIChatClientAsync()` + - Factory (direct use): `await OnnxRuntimeGenAIChatClientFactory.CreateAsync(modelDir, LlmPromptTemplate?)` + - Phi Silica: `await PhiSilicaClient.CreateAsync()` + +- Embeddings + - Create: `var embeddings = await EmbeddingGenerator.CreateAsync(modelPath, sampleParams.WinMlSampleOptions)` + - Use: `await embeddings.GenerateAsync(values)` or `await foreach (var v in embeddings.GenerateStreamingAsync(values)) { ... 
}`

- Speech (Whisper)
  - Create: `var whisper = await WhisperWrapper.CreateAsync(modelPath, sampleParams.WinMlSampleOptions)`
  - Use: `await whisper.TranscribeAsync(pcmBytes, language, WhisperWrapper.TaskType.Transcribe)`

- Image generation (Stable Diffusion)
  - Init: `var sd = new StableDiffusion(modelFolder); await sd.InitializeAsync(sampleParams.WinMlSampleOptions);`
  - Inference: `var image = sd.Inference(prompt, cancellationToken);`
\ No newline at end of file

From f662715370e3392b82dc273de4cbb56761988ca3 Mon Sep 17 00:00:00 2001
From: MillyWei
Date: Tue, 28 Oct 2025 10:53:39 +0800
Subject: [PATCH 3/7] Update

---
 docs/AddingSamples.md | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/docs/AddingSamples.md b/docs/AddingSamples.md
index 428eb938..258ca86a 100644
--- a/docs/AddingSamples.md
+++ b/docs/AddingSamples.md
@@ -12,7 +12,7 @@ If your sample uses a Model that hasn't been used before in the gallery app, you
 - `.model.json` - A simpler format for defining a single model family
 - `.modelgroup.json` - Used to define a group of related model families with additional grouping metadata
 
-If your sample uses a Windows AI API, you need to add it to the `apis.json` file inside the `Samples\Definitions\WcrApis\` directory. These files are used to group samples together in the app and they show up as a single page where all samples are rendered at the same time.
+If your sample uses a Windows AI API, add it to an `apis.json` file anywhere under the `Samples\Definitions\` directory. These files are used to group samples together in the app and they show up as a single page where all samples are rendered at the same time.
 
 A sample folder that is not inside a model or api folder renders as a single sample in the app. These are good for samples that are higher level concepts (such as guides), or samples that combine multiple models or APIs.
 
@@ -223,15 +223,13 @@ To add a sample, follow these steps:
 1. **Identify the model or API**: Determine which model or Windows AI API your sample will use.
 
 2. **Create or update model definitions** (if needed):
-   - For Open Source Models: If the model doesn't exist, create either:
-     - A new `.model.json` file for a single model family, or
-     - A new `.modelgroup.json` file for grouping multiple related model families
-   - Place these files under `Samples\Definitions\Models\`
-   - For Windows AI APIs: Add your API to the `apis.json` file under `Samples\Definitions\WcrApis\`
+   If your sample uses a Model that hasn't been used before in the gallery app, you might also need to include a new model file (`.model.json`) inside the `Samples\Definitions\` directory.
+   - The `.model.json` file is used to define a collection of samples that use a specific model family
+   - You can also add a `.modelgroup.json` file to organize the model family within a model group
+   - The `apis.json` file is used to define a collection of samples that use a specific API
 
-3. **Create the sample files**: Add a new WinUI Blank Page in the appropriate folder:
-   - For Open Source Models: `Samples\Open Source Models\[Model Category]\[SampleName].xaml`
-   - For Windows AI APIs: `Samples\WCRAPIs\[SampleName].xaml`
+3. **Create the sample files**: Add a new WinUI Blank Page in the appropriate folder: `Samples\[CategoryName]\[SampleName].xaml`
 
 4. 
**Implement the sample**: - Make your sample class inherit from `BaseSamplePage` From 623d53f097129ac86fda9ea9cbc1c99a0dfca09d Mon Sep 17 00:00:00 2001 From: MillyWei Date: Tue, 28 Oct 2025 12:09:39 +0800 Subject: [PATCH 4/7] update --- docs/AddingSamples.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/docs/AddingSamples.md b/docs/AddingSamples.md index 258ca86a..8e6a087e 100644 --- a/docs/AddingSamples.md +++ b/docs/AddingSamples.md @@ -12,7 +12,7 @@ If your sample uses a Model that hasn't been used before in the gallery app, you - `.model.json` - A simpler format for defining a single model family - `.modelgroup.json` - Used to define a group of related model families with additional grouping metadata -If your sample uses a Windows AI API, add it to an `apis.json` file anywhere under the `Samples\Definitions\` directory. These files are used to group samples together in the app and they show up as a single page where all samples are rendered at the same time. +If your sample uses a specific API, add it to an `apis.json` file anywhere under the `Samples\Definitions\` directory. The `.model.json`, `.modelgroup.json`, and `apis.json` files are used to group samples together in the app and they show up as a single page where all samples are rendered at the same time. A sample folder that is not inside a model or api folder renders as a single sample in the app. These are good for samples that are higher level concepts (such as guides), or samples that combine multiple models or APIs. @@ -227,7 +227,6 @@ To add a sample, follow these steps: - The `.model.json` file is used to define a collection of samples that use a specific model family - You can also add a `.modelgroup.json` file to organize the model family within a model group - The `apis.json` file is used to define a collection of samples that use a specific API - - The `.model.json`, `.modelgroup.json`, and `apis.json` files are used to group samples together in the app and they show up as a single page where all samples are rendered at the same time. 3. **Create the sample files**: Add a new WinUI Blank Page in the appropriate folder: `Samples\[CategoryName]\[SampleName].xaml` From 2c2a59ecf90f8a66bdeb0fa25ea8d9829c28e7dc Mon Sep 17 00:00:00 2001 From: MillyWei Date: Tue, 28 Oct 2025 14:18:21 +0800 Subject: [PATCH 5/7] update --- docs/AddingSamples.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/AddingSamples.md b/docs/AddingSamples.md index 8e6a087e..f620db91 100644 --- a/docs/AddingSamples.md +++ b/docs/AddingSamples.md @@ -12,7 +12,9 @@ If your sample uses a Model that hasn't been used before in the gallery app, you - `.model.json` - A simpler format for defining a single model family - `.modelgroup.json` - Used to define a group of related model families with additional grouping metadata -If your sample uses a specific API, add it to an `apis.json` file anywhere under the `Samples\Definitions\` directory. The `.model.json`, `.modelgroup.json`, and `apis.json` files are used to group samples together in the app and they show up as a single page where all samples are rendered at the same time. +If your sample uses a specific API, add it to an `apis.json` file anywhere under the `Samples\Definitions\` directory. + +The `.model.json`, `.modelgroup.json`, and `apis.json` files are used to group samples together in the app and they show up as a single page where all samples are rendered at the same time. 
 A sample folder that is not inside a model or api folder renders as a single sample in the app. These are good for samples that are higher level concepts (such as guides), or samples that combine multiple models or APIs.

From 4b4e808b12c74d59b4fe03d281465b74d75ad3da Mon Sep 17 00:00:00 2001
From: MillyWei
Date: Tue, 28 Oct 2025 16:42:15 +0800
Subject: [PATCH 6/7] Add WinML-Beginner-Guide-Create-Sample

---
 docs/WinML-Beginner-Guide-Create-Sample.md | 265 +++++++++++++++++++++
 1 file changed, 265 insertions(+)
 create mode 100644 docs/WinML-Beginner-Guide-Create-Sample.md

diff --git a/docs/WinML-Beginner-Guide-Create-Sample.md b/docs/WinML-Beginner-Guide-Create-Sample.md
new file mode 100644
index 00000000..9a5a95e9
--- /dev/null
+++ b/docs/WinML-Beginner-Guide-Create-Sample.md
@@ -0,0 +1,265 @@
+### Beginner Guide to Create a WinML Sample in AI Dev Gallery

+## What you’ll build
+
+- Learn how samples are organized and displayed in the AI Dev Gallery app
+- Understand WinML, ONNX, and Execution Providers (EPs)
+- Add `[GallerySample]` metadata and link your page to a scenario (`scenarios.json`)
+- Initialize a WinML model the “project-approved” way in `LoadModelAsync`
+- Dispose resources when the page unloads
+- Use the Minimal Sample skeleton (at the end) to get started quickly
+
+---
+
+## Core concepts
+
+- **What is ONNX?**
+  - ONNX is a common file format for AI models. Export your model (from PyTorch, etc.) to `.onnx` and you can run it on Windows using WinML/ONNX Runtime.
+
+- **What is WinML?**
+  - Windows Machine Learning (WinML) is the Windows-native way to run ONNX models. It offers high-performance inference, hardware acceleration (e.g., GPU), and tight integration with Windows app packaging and devices.
+
+- **ONNX Runtime (ORT) and Execution Providers (EPs)**
+  - ORT is the inference engine. EPs are backend plugins that execute model ops on different hardware.
+  - Typical EPs:
+    - CPU: pure CPU inference
+    - DirectML (DML): GPU acceleration on Windows via DirectML
+    - QNN / NPU: accelerators on specific hardware
+  - In this project, EP choice and options come from `WinMlSampleOptions` (policy, EP name, device type, compile option).
+
+- **Tensors (inputs/outputs)**
+  - Models take tensors (multi-dimensional arrays), e.g., images as `[N, C, H, W]` or `[N, H, W, C]`.
+  - You must preprocess raw data (resize, normalize, layout conversion) into the model’s expected input tensor, and postprocess outputs (softmax, top-K, parsing) for human-readable results.
+
+---
+
+## How samples appear in the app (See [AddingSamples.md](AddingSamples.md) for the original quick guide)
+
+- All samples live under `AIDevGallery/Samples`. Organization is flexible but grouping by model or feature is recommended.
+- Each sample is a WinUI page defined by a pair: `Sample.xaml` + `Sample.xaml.cs`.
+- The page class in `Sample.xaml.cs` must carry the `[GallerySample]` attribute.
+  - This attribute provides display metadata (name, icon, scenario, NuGet packages, etc.). The app dynamically loads it and shows your page.
+  - For a reference, see [ImageClassification.xaml.cs](../AIDevGallery/Samples/Open%20Source%20Models/Image%20Models/ImageNet/ImageClassification.xaml.cs).
+
+---
+
+## Link to a Scenario (ScenarioType)
+
+- [scenarios.json](../AIDevGallery/Samples/scenarios.json) defines scenario groups and UI copy (e.g., “Classify Image”, “Detect Objects”).
+- In `[GallerySample]`, set `Scenario = ScenarioType.YourScenario` to categorize your sample. 
+- If you need a new scenario, add it to [scenarios.json](../AIDevGallery/Samples/scenarios.json) and rebuild (it’s embedded via a source generator at compile time, not loaded at runtime). + +--- + +## Link to model definitions (optional) + +- If your sample introduces a new model family not previously used in the gallery, add under `Samples/Definitions/Models/`: + - `.model.json`: a single model family definition + - `.modelgroup.json`: multiple related families with display order, etc. +- If your sample demonstrates a particular API, add an `apis.json` under `Samples/Definitions/`. +- These definitions group samples on a single collection page. For a one-off page, you can skip this. + +--- + +## What files to create + +1) Create a new page at a suitable location, e.g.: + - `AIDevGallery/Samples/MyModel/MySample.xaml` + - `AIDevGallery/Samples/MyModel/MySample.xaml.cs` + +2) In `MySample.xaml.cs`: + - Inherit from `BaseSamplePage` + - Add the `[GallerySample]` attribute with minimal metadata + - Override `LoadModelAsync(SampleNavigationParameters sampleParams)` to initialize the model; call `sampleParams.NotifyCompletion()` when ready + - Register `Unloaded` to dispose `InferenceSession` and other resources + +--- + +## WinML initialization flow (project convention) + +1) Ensure and register certified EPs: + - Purpose: install/register certified hardware acceleration packages (e.g., DML) when available. + - If it fails, gracefully fall back to CPU. + +2) Create `SessionOptions` and register ORT extensions: + - `sessionOptions.RegisterOrtExtensions()` enables additional operators many models need. + +3) Honor `sampleParams.WinMlSampleOptions` for EP selection: + - `Policy`: let the system pick the best EP + - Or `EpName` + `DeviceType`: explicitly request a specific EP and device + - `CompileModel`: pre-compile for faster subsequent runs (first-time cost, later speedup) + +4) Create the `InferenceSession`: + - `new InferenceSession(modelPath, sessionOptions)` + - If compilation is enabled, the path may be replaced by a compiled artifact. + +5) Call `sampleParams.NotifyCompletion()` when initialization finishes: + - UI hides the loading spinner; the page is ready for interaction. + +6) Dispose resources on page unload: + - `this.Unloaded += (s, e) => _inferenceSession?.Dispose();` + +--- + +## I/O, preprocessing, and postprocessing (image classification example) + +- Inspect input name and shape (e.g., `NCHW`) and set `N=1`. +- Resize the image to the expected resolution (e.g., `224x224`), normalize per the model’s spec, and fill a `Tensor`. +- Run `_inferenceSession.Run(inputs)`, then postprocess (softmax/top-K) and bind the results to your UI. + +For concrete code, refer to `ImageClassification.xaml.cs` (`InitModel` / `ClassifyImage`). + +--- + +## `[GallerySample]` minimal fields + +- `Name`: display name +- `Model1Types`: required array of model types your sample supports +- `Scenario`: from `ScenarioType` generated by `scenarios.json` +- `NugetPackageReferences`: additional packages (e.g., `Microsoft.ML.OnnxRuntime.Extensions`) +- `Id`: unique GUID +- `Icon`: optional Segoe MDL2 glyph + +Other optional fields (e.g., `SharedCode`) can be added as needed—see existing samples for patterns. + +--- + +## Dev tips + +- Use `ShowException(ex, "...message...")` to display a helpful dialog with copyable details. +- `SendSampleInteractedEvent(...)` can record user interactions for telemetry. +- First-time init/inference can be slow—`CompileModel` significantly speeds up subsequent runs. 
+- Input layout and preprocessing vary by model—check the model’s README or similar samples. + +--- + +## Minimal Sample skeleton (copy, rename, and adapt) + +> Replace namespace/class name/model type/scenario with yours. Comments map to the steps above. + +```csharp +using AIDevGallery.Models; +using AIDevGallery.Samples; +using AIDevGallery.Samples.Attributes; +using Microsoft.ML.OnnxRuntime; +using Microsoft.UI.Xaml; +using System; +using System.Diagnostics; +using System.Threading.Tasks; + +namespace AIDevGallery.Samples.MySamples +{ + [GallerySample( + Name = "My Minimal WinML Sample", + Model1Types = [ModelType.SqueezeNet], // pick a model type that matches your sample + Scenario = ScenarioType.ImageClassifyImage, // pick a scenario defined in scenarios.json + NugetPackageReferences = [ + "Microsoft.ML.OnnxRuntime.Extensions" + ], + Id = "00000000-0000-0000-0000-000000000000", // sample GUID + Icon = "\uE8B9" + )] + internal sealed partial class MyMinimalSample : BaseSamplePage + { + private InferenceSession? _session; + + public MyMinimalSample() + { + // Step 6: ensure resources are released when page unloads + this.Unloaded += (s, e) => _session?.Dispose(); + + this.InitializeComponent(); + } + + protected override async Task LoadModelAsync(SampleNavigationParameters sampleParams) + { + try + { + // Step 1/2/3: initialize session options and EP based on WinMlSampleOptions + await InitializeSessionAsync(sampleParams.ModelPath, sampleParams.WinMlSampleOptions); + + // Step 5: notify the UI that model is ready (hide loading spinner) + sampleParams.NotifyCompletion(); + + // (Optional) Run a first inference or prepare UI state + } + catch (Exception ex) + { + ShowException(ex, "Failed to initialize model."); + } + } + + private Task InitializeSessionAsync(string modelPath, WinMlSampleOptions options) + { + return Task.Run(async () => + { + if (_session != null) + { + return; // already initialized + } + + // Step 1: ensure certified EPs are present (e.g., DirectML) + var catalog = Microsoft.Windows.AI.MachineLearning.ExecutionProviderCatalog.GetDefault(); + try + { + var _ = await catalog.EnsureAndRegisterCertifiedAsync(); + } + catch (Exception installEx) + { + Debug.WriteLine($"WARNING: Failed to install packages: {installEx.Message}"); + } + + // Step 2: create session options and register ORT extensions + SessionOptions so = new(); + so.RegisterOrtExtensions(); + + // Step 3: choose EP policy or a specific EP + if (options.Policy != null) + { + so.SetEpSelectionPolicy(options.Policy.Value); + } + else if (options.EpName != null) + { + so.AppendExecutionProviderFromEpName(options.EpName, options.DeviceType); + + // Step 3 (optional): pre-compile the model for faster subsequent runs + if (options.CompileModel) + { + modelPath = so.GetCompiledModel(modelPath, options.EpName) ?? modelPath; + } + } + + // Step 4: create inference session + _session = new InferenceSession(modelPath, so); + }); + } + + // (Optional) Add methods for preprocessing, inference, and postprocessing + } +} +``` + +--- + +## FAQ + +- Is first inference slow? + - Yes. Enabling `CompileModel` makes subsequent runs fast. + +- How do I know the input size and preprocessing? + - Check the model’s README and look at similar samples in this repo. + +- Can I run without a dedicated GPU? + - Yes. DirectML supports many integrated GPUs. If unavailable, CPU fallback still works. + +- Why doesn’t my modified `scenarios.json` show up immediately? + - It’s embedded via a source generator. Rebuild the app. 
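+
+---
+
+## Appendix: preprocessing and top-K sketch
+
+As a companion to the “I/O, preprocessing, and postprocessing” section above, here is a minimal sketch of filling an `NCHW` input tensor and reading out the top-K classes. The `224x224` size, the normalization constants, and the helper/class names are assumptions; follow your model’s documentation.
+
+```csharp
+// Hedged sketch: HWC -> NCHW preprocessing plus softmax/top-K postprocessing.
+// Sizes, mean/std values, and names below are illustrative; check your model's spec.
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using Microsoft.ML.OnnxRuntime;
+using Microsoft.ML.OnnxRuntime.Tensors;
+
+internal static class ClassifierHelper
+{
+    public static IReadOnlyList<(int Index, float Score)> ClassifyTopK(
+        InferenceSession session, float[] rgbPixels /* HWC, values in 0..1 */, int k = 5)
+    {
+        const int size = 224;
+        float[] mean = { 0.485f, 0.456f, 0.406f };
+        float[] std = { 0.229f, 0.224f, 0.225f };
+
+        // Fill the [1, 3, H, W] input tensor, normalizing each channel
+        var input = new DenseTensor<float>(new[] { 1, 3, size, size });
+        for (int y = 0; y < size; y++)
+        {
+            for (int x = 0; x < size; x++)
+            {
+                for (int c = 0; c < 3; c++)
+                {
+                    input[0, c, y, x] = (rgbPixels[(y * size + x) * 3 + c] - mean[c]) / std[c];
+                }
+            }
+        }
+
+        // Bind by the model's first input name and run inference
+        string inputName = session.InputMetadata.Keys.First();
+        using var results = session.Run(new[] { NamedOnnxValue.CreateFromTensor(inputName, input) });
+        float[] logits = results.First().AsEnumerable<float>().ToArray();
+
+        // Numerically stable softmax, then take the top-K scores
+        float max = logits.Max();
+        float[] exp = logits.Select(v => MathF.Exp(v - max)).ToArray();
+        float sum = exp.Sum();
+        return exp.Select((v, i) => (Index: i, Score: v / sum))
+                  .OrderByDescending(t => t.Score)
+                  .Take(k)
+                  .ToList();
+    }
+}
+```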
+ +--- + +## Next steps + +- Copy the minimal sample, change the namespace/class/model type/scenario, and run the app to see your page appear. +- If you need grouped presentation or a new model family, add `.model.json` / `.modelgroup.json` or `apis.json` under `Samples/Definitions`. + + From c3e92e5a520effc4e1cbbbaa4e94b16ebe54a6aa Mon Sep 17 00:00:00 2001 From: MillyWei Date: Tue, 28 Oct 2025 16:47:45 +0800 Subject: [PATCH 7/7] Add Related links --- docs/WinML-Beginner-Guide-Create-Sample.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/docs/WinML-Beginner-Guide-Create-Sample.md b/docs/WinML-Beginner-Guide-Create-Sample.md index 9a5a95e9..3b36b0c0 100644 --- a/docs/WinML-Beginner-Guide-Create-Sample.md +++ b/docs/WinML-Beginner-Guide-Create-Sample.md @@ -262,4 +262,7 @@ namespace AIDevGallery.Samples.MySamples - Copy the minimal sample, change the namespace/class/model type/scenario, and run the app to see your page appear. - If you need grouped presentation or a new model family, add `.model.json` / `.modelgroup.json` or `apis.json` under `Samples/Definitions`. +## Related links +- [Use ONNX APIs in Windows ML](https://learn.microsoft.com/en-us/windows/ai/new-windows-ml/use-onnx-apis) +- [Get started with Windows ML](https://learn.microsoft.com/en-us/windows/ai/new-windows-ml/get-started) \ No newline at end of file