Windows ML and NPU Acceleration - Building Smarter Apps
Hands-on lab notes from LAB584-R1: Windows ML and NPU Acceleration.
Session: LAB584-R1
Date: Thursday, Nov 20, 2025
Time: 2:45 PM PST - 4:00 PM PST
Location: Moscone South, Level 3, Room 308
Coming Soon
This article will be published during or after Microsoft Ignite 2025 (Nov 18-21). A full lab walkthrough of Windows ML and NPU acceleration is coming soon.
Lab Overview
What We're Building: A smarter image classification app with Windows ML, now generally available. This hands-on lab covers:
- Dynamically downloading execution providers (EPs) for NPUs (see the EP-selection sketch after this list)
- Compiling models for hardware-specific EPs
- Running inference locally with optimized performance
- Deploying, debugging, and optimizing using WinML APIs
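Windows ML is built on ONNX Runtime, so the EP ideas above map onto the ONNX Runtime API. Windows ML adds on-demand EP download on top of this; the sketch below shows the adjacent, lower-level concept of selecting among registered EPs and running local inference, using the ONNX Runtime Python API. The model path classifier.onnx, the provider order, and the input shape are illustrative assumptions, not lab code:

```python
import numpy as np
import onnxruntime as ort

# Prefer an NPU-backed EP when present, falling back to CPU.
# "QNNExecutionProvider" targets Qualcomm NPUs; adjust for your hardware.
preferred = ["QNNExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available]

# classifier.onnx is a hypothetical image-classification model.
session = ort.InferenceSession("classifier.onnx", providers=providers)

# Run local inference on a dummy 1x3x224x224 image batch.
input_name = session.get_inputs()[0].name
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
logits = session.run(None, {input_name: image})[0]
print("Predicted class:", int(logits.argmax()))
```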
Technologies:
- Windows ML (generally available)
- NPU (Neural Processing Unit) acceleration
- Execution Providers (EPs)
- Hardware-specific model compilation (sketched after this list)
- Local inference optimization
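One way hardware-specific compilation surfaces in ONNX Runtime (and therefore underneath Windows ML) is EP context caching: the first session compiles the model for the NPU and persists the result, so later sessions skip the expensive compile step. A minimal sketch, assuming ONNX Runtime's QNN EP and its ep.context_enable / ep.context_file_path session-config keys; the file names are hypothetical:

```python
import onnxruntime as ort

sess_options = ort.SessionOptions()
# Ask ONNX Runtime to persist the hardware-compiled model ("EP context")
# so subsequent sessions load it directly instead of recompiling.
sess_options.add_session_config_entry("ep.context_enable", "1")
sess_options.add_session_config_entry("ep.context_file_path", "classifier_ctx.onnx")

session = ort.InferenceSession(
    "classifier.onnx",              # hypothetical source model
    sess_options=sess_options,
    providers=["QNNExecutionProvider"],
)
# Later runs can open classifier_ctx.onnx for much faster session startup.
```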
Key Learning Goals
- Windows ML Architecture - How does Windows ML enable on-device AI?
- NPU Acceleration - What performance gains do NPUs provide?
- Execution Providers - How do EPs enable hardware-specific optimization?
- Model Compilation - How do you compile models for specific hardware?
- Deployment - How do you deploy and debug WinML apps? (A profiling sketch follows this list.)
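For the debugging and optimization goal, ONNX Runtime's built-in profiler is a useful starting point: it records per-node execution times, including which EP ran each node, to a JSON trace. A minimal sketch; the model path is again a placeholder:

```python
import numpy as np
import onnxruntime as ort

sess_options = ort.SessionOptions()
sess_options.enable_profiling = True  # write a Chrome-trace JSON profile

session = ort.InferenceSession(
    "classifier.onnx",              # hypothetical model
    sess_options=sess_options,
    providers=["CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
session.run(None, {input_name: image})

# end_profiling() flushes the trace and returns its file path; open it in
# chrome://tracing or Perfetto to inspect per-node timings and EP assignment.
print("Profile written to:", session.end_profiling())
```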
Stay Tuned
The full lab walkthrough, code samples, and a performance optimization guide are coming soon.
Session: LAB584-R1 | Nov 20, 2025 | Moscone South, Level 3, Room 308