Microsoft Agent Framework: The migration path nobody asked for
Microsoft's agent framework consolidation answers a question enterprises didn't know they needed to ask: how do we migrate existing agent code when the platform keeps changing underneath us? The unified Microsoft Agent Framework, announced at Ignite 2025, merges Semantic Kernel, AutoGen, and custom implementations into a single .NET and Python framework.
The pitch: billions of agents deployed across 25,000+ organizations in Azure AI Foundry, now with a migration path. The reality: one-click migration tools that "optimize your codebase" and "run all tests" whilst you review pull requests. Whether this represents platform maturity or admission of fragmentation depends on how much technical debt you're carrying.
Session context
BRK197 - AI-powered automation & multi-agent orchestration in Microsoft Foundry
Speakers:
- Christof Gebhart (BMW)
- Shawn Henry
- Tina Manghnani
- Mark Wallace
When: November 19, 2025, 10:15 - 11:00 AM
Where: Marriott Marquis, Yerba Buena Ballroom, BO2
Level: Expert (400)
Session description:
"Build multi-agent systems the right way with Microsoft Foundry. Go from single-agent prototypes to fleet-level orchestration using the Foundry Agent Framework (Semantic Kernel + AutoGen), shared state, Human in the loop, OpenTel, MCP toolchains, A2A, and the Activity Protocol."
Key topics:
- Foundry Agent Framework (Semantic Kernel + AutoGen merger)
- Multi-agent orchestration
- Shared state management
- Human-in-the-loop workflows
- OpenTelemetry integration
- Model Context Protocol (MCP) toolchains
- Agent-to-Agent (A2A) communication
- Activity Protocol
The consolidation: Three frameworks become one
Microsoft spent the year merging agent frameworks that proliferated when everyone realized LLMs needed structured workflows.
What merged
Semantic Kernel:
- Model orchestration and tool calling
- Context management
- Prompt engineering patterns
AutoGen:
- Multi-agent communication
- Agent-to-agent coordination
- Conversation patterns
Custom implementations:
- Enterprise-specific agent patterns
- Internal Microsoft tooling
- Partner-built frameworks
The result:
Microsoft Agent Framework - "the umbrella" for all of Microsoft's agent infrastructure work.
Platform status
Generally available: .NET and Python versions
Public preview: TypeScript/JavaScript version (GA early 2026)
What this means:
If you're a .NET or Python shop, the framework is production-ready. JavaScript developers wait until next year, which matters if you're building web-native agent experiences.
The value proposition
Model flexibility:
- Plug in different model providers
- Model routers (announced yesterday - use Anthropic models deployed on Azure AI Foundry)
- Swap providers without rewriting agent logic
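The provider-swap claim can be sketched as a small interface boundary. This is an illustrative pattern with hypothetical names (`ChatModel`, `AzureOpenAIModel`, `AnthropicOnFoundryModel`), not the framework's actual API: agent logic depends on a provider-neutral protocol, so changing model backends is a construction-time choice rather than a rewrite.

```python
# Hypothetical sketch: agent logic coded against a provider-neutral
# interface, so the model backend is swappable. Names are illustrative.
from typing import Protocol


class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class AzureOpenAIModel:
    def complete(self, prompt: str) -> str:
        return f"[azure-openai] {prompt}"  # stand-in for a real API call


class AnthropicOnFoundryModel:
    def complete(self, prompt: str) -> str:
        return f"[anthropic-foundry] {prompt}"  # stand-in for a real API call


class Agent:
    def __init__(self, model: ChatModel) -> None:
        self.model = model

    def run(self, task: str) -> str:
        # Agent logic never mentions a concrete provider.
        return self.model.complete(f"Plan the steps for: {task}")


# Swapping providers changes one constructor argument, not the agent logic.
answer = Agent(AnthropicOnFoundryModel()).run("summarize telemetry")
```

The same `Agent` class runs unchanged against either backend; only the object passed in differs.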
Context protocols:
- Model Context Protocol (MCP) for tool integration
- Retrieval Augmented Generation (RAG) for grounding
- Agent-to-agent communication via standardized protocols
Observability and monitoring:
- Built-in telemetry
- Agent behavior tracking
- Integration with Azure monitoring stack
BMW case study: 12× faster test data analysis
Christof Gebhart of BMW demonstrated what multi-agent orchestration enables at automotive scale.
The problem: Test data access bottleneck
Traditional process:
- Engineers needed test data from vehicle testing
- Data stored on physical hard drives
- Wait time: At least one day to get fresh data
- Manual data retrieval and processing
Why this mattered:
Speed to market for new features is a competitive advantage. Waiting days for test data to validate code changes is unacceptable when competitors iterate faster.
The solution: NVR data platform with agent orchestration
NVR system evolution:
BMW's NVR (telemetry) system became "aware of itself" - no longer passive data storage, now an active participant in engineering workflows.
Architecture:
Orchestrator agent:
- Receives engineer questions
- Routes to specialized sub-agents
- Coordinates response assembly
Specialized agents:
- Data retrieval agent: Accesses telemetry data from vehicles
- Preprocessing agent: Structures and cleans raw data
- Analysis agent: Performs domain-specific analysis
- Modular design allows adding new agents without disrupting existing ones
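The orchestrator-plus-specialists architecture above can be sketched as a capability registry. This is a hypothetical illustration, not BMW's code: agents register under a capability name, and the orchestrator routes a question through a pipeline of them, so adding a new agent is a registration call rather than a change to existing agents.

```python
# Illustrative sketch of the orchestrator pattern: a registry of
# specialized agents keyed by capability. Names are hypothetical.
from typing import Callable

AgentFn = Callable[[str], str]


class Orchestrator:
    def __init__(self) -> None:
        self._agents: dict[str, AgentFn] = {}

    def register(self, capability: str, agent: AgentFn) -> None:
        # New agents plug in without disrupting existing ones.
        self._agents[capability] = agent

    def handle(self, question: str, pipeline: list[str]) -> str:
        # Route the question through each specialized agent in turn,
        # feeding one agent's output into the next.
        result = question
        for capability in pipeline:
            result = self._agents[capability](result)
        return result


orch = Orchestrator()
orch.register("retrieve", lambda q: f"raw telemetry for: {q}")
orch.register("preprocess", lambda d: f"cleaned({d})")
orch.register("analyze", lambda d: f"analysis of {d}")

report = orch.handle("brake latency on test fleet",
                     ["retrieve", "preprocess", "analyze"])
```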
Integration:
Digital twin data from vehicles now flows directly to engineers via agent interface. Real-time access replaces day-long waits.
The results
12× acceleration in test-fleet data analysis:
- Days → minutes for fresh test data (terabytes of telemetry)
- Engineers query sophisticated questions in natural language
- Agents provide contextual answers grounded in actual vehicle telemetry
Gebhart's framing:
"We moved from access to agents. Agents access all telemetry data for all engineers. This empowers specialized agents to analyze using knowledge and context. Engineers focus on innovation whilst agents handle data complexity."
The automotive future
BMW's vision:
"We move closer to a future where automotive is smarter and richer than ever."
What this actually means:
Multi-agent systems analyzing real-time vehicle data enable:
- Faster feature development cycles
- Data-driven engineering decisions
- Reduced time from concept to road testing
The operational insight:
BMW didn't just build an agent. They built an agent orchestration platform where specialized agents coordinate to solve complex engineering workflows. This is the "multi-agent system" concept moving from conference demos to production automotive engineering.
The migration story: One click to rewrite your codebase
Microsoft's answer to "we have existing agents built on your old frameworks" is automation that rewrites your code.
The migration tools
Visual Studio and VS Code extensions:
- Available now on marketplace
- One-click migration from Semantic Kernel/AutoGen to unified framework
What happens when you click:
Step 1: Analysis
- Extension analyzes your codebase
- Identifies migration requirements
- Generates migration plan
Step 2: Automated migration
- Rewrites code to new framework patterns
- Runs all existing tests against migrated code
- Performs static analysis on new codebase
Step 3: Review and merge
- Creates new git branch with all changes
- Documents what changed and why
- Developer reviews pull request
- Merge when satisfied
The promise:
"Everything will be documented and what will be left for you to do is create a pull request, review all of the code, and then have it merged."
The skeptical view
What could go wrong:
1. Tests passing doesn't mean behavior preserved
Automated migration might maintain test coverage whilst introducing subtle behavioral changes. Agents aren't deterministic - small changes in orchestration logic can produce different outcomes.
2. Custom patterns don't migrate cleanly
If you built custom agent patterns on top of Semantic Kernel, the migration tool may not understand your abstractions. "Optimize your codebase" is a euphemism for "we'll guess at what you meant."
3. Documentation doesn't replace understanding
Automated documentation of code changes helps, but someone still needs to understand why the migration tool made specific choices. This isn't fire-and-forget.
4. The pull request review burden
"Review all of the code" for a large agent codebase rewritten by automation is non-trivial work. How many developers will actually review every change versus trusting the tool?
The pragmatic view
When this makes sense:
If your agent code is:
- Relatively simple orchestration
- Well-tested with comprehensive coverage
- Using standard Semantic Kernel/AutoGen patterns
- Small enough to review migration changes
When this is risky:
If your agent code:
- Has complex custom orchestration
- Relies on undocumented framework behavior
- Has sparse test coverage
- Is business-critical with zero tolerance for behavioral changes
The operational question:
Do you trust automated migration more than manual rewrite? For simple agents, probably yes. For complex multi-agent systems orchestrating critical workflows, this warrants extensive validation.
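One hedged way to make that validation concrete (a generic sketch, not a framework feature): sample the pre- and post-migration agents many times and compare outcome distributions, since a single passing run proves little when agents are non-deterministic. The agent functions and the 3-point drift threshold here are illustrative stand-ins.

```python
# Illustrative sketch: compare outcome distributions of the original and
# migrated agents rather than trusting one passing test run.
import random
from collections import Counter


def sample_outcomes(agent_fn, n: int = 1000, seed: int = 7) -> Counter:
    rng = random.Random(seed)  # seeded so runs are reproducible
    return Counter(agent_fn(rng) for _ in range(n))


def original_agent(rng):  # stand-in for the pre-migration agent
    return "approve" if rng.random() < 0.9 else "escalate"


def migrated_agent(rng):  # stand-in for the post-migration agent
    return "approve" if rng.random() < 0.9 else "escalate"


before = sample_outcomes(original_agent)
after = sample_outcomes(migrated_agent)

# Flag drift when any outcome's frequency shifts by more than 3 points
# (an arbitrary threshold for illustration).
drift = {k: abs(before[k] - after[k]) / 1000 for k in before | after}
behavior_preserved = all(d <= 0.03 for d in drift.values())
```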
Durable agents: Checkpoints for long-running workflows
New durable agents extension enables checkpointing for workflows that span hours or days.
The problem: Agents crash, workflows fail
Traditional approach:
Agent runs workflow. Agent crashes. Workflow fails. Start over from beginning.
Why this matters for long-running processes:
If your agent orchestrates a workflow requiring human approval that takes hours or days to arrive, crashing means losing all progress.
Durable agents solution
Checkpointing:
Agent creates checkpoints as it progresses through workflow. If something crashes, agent restarts from last checkpoint, not from beginning.
Human-in-the-loop workflows:
Example scenario:
- Agent starts workflow requiring human approval
- Agent reaches approval gate, creates checkpoint
- Agent spins down whilst waiting (could be hours/days)
- Human provides approval
- Agent spins back up from checkpoint
- Agent continues workflow from where it stopped
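The checkpoint-and-resume sequence above can be illustrated with plain Python. This is a minimal sketch of the idea, not the durable agents extension's real API: the workflow persists its position after each step, parks at the human gate instead of busy-waiting, and a later invocation skips completed work.

```python
# Minimal checkpoint/resume illustration (generic Python, hypothetical
# step names; not the durable agents extension's actual API).
import json
import tempfile
from pathlib import Path

STEPS = ["find_venue", "await_approval", "book_venue"]


def run_workflow(state_file: Path, approvals: set[str]) -> list[str]:
    state = json.loads(state_file.read_text()) if state_file.exists() else {"done": []}
    executed = []
    for step in STEPS:
        if step in state["done"]:
            continue  # already completed before a crash or spin-down
        if step == "await_approval" and "event" not in approvals:
            # Human gate: checkpoint and spin down instead of busy-waiting.
            state_file.write_text(json.dumps(state))
            return executed
        executed.append(step)
        state["done"].append(step)
        state_file.write_text(json.dumps(state))  # checkpoint after each step
    return executed


state = Path(tempfile.mkdtemp()) / "checkpoint.json"
first = run_workflow(state, approvals=set())       # runs find_venue, then parks
second = run_workflow(state, approvals={"event"})  # resumes from the checkpoint
```

The second invocation never repeats `find_venue`; it resumes at the approval gate, which is the whole efficiency argument for spinning down during the wait.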
The efficiency gain:
Traditional approach: Keep agent running and waiting (costs accrue, resources held)
Durable approach: Spin down during wait, spin up when needed (pay only for active processing)
The architectural implication
This enables asynchronous agent workflows where humans operate on their timeline whilst agents operate on compute-efficient timelines.
Use cases:
- Approval workflows in enterprise processes
- Long-running data analysis with manual verification gates
- Multi-day orchestrations with human decision points
Live demo: Multi-agent workflow orchestration with visualization
Mark Wallace demonstrated the framework's capabilities with a practical multi-agent workflow.
The scenario: Event planning with multiple specialized agents
Objective: Plan a corporate event for 50 people with $10,000 budget in Seattle.
Agent architecture:
Coordinator agent:
- Receives high-level request
- Routes to specialized agents
- Orchestrates workflow execution
Specialized agents:
- Venue agent: Finds and evaluates venues
- Logistics agent: Coordinates logistics requirements
- Budget agent: Tracks costs and budget constraints
All agents communicate with each other - not just coordinator-to-agent, but agent-to-agent collaboration.
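Direct agent-to-agent messaging can be sketched with a shared bus. This is an illustration of the collaboration pattern, not the Activity Protocol or A2A wire format: agents address named peers directly, so the budget agent can answer the venue agent without the coordinator relaying every message.

```python
# Illustrative peer-to-peer messaging sketch (hypothetical Bus class,
# not the framework's A2A implementation).
from collections import defaultdict


class Bus:
    def __init__(self) -> None:
        # One inbox per agent; each entry is (sender, message).
        self.inboxes: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def send(self, sender: str, recipient: str, message: str) -> None:
        self.inboxes[recipient].append((sender, message))


bus = Bus()
# The venue agent asks the budget agent directly, peer to peer.
bus.send("venue", "budget", "Is $4,000 for the ballroom within budget?")
sender, msg = bus.inboxes["budget"].pop(0)
reply = f"To {sender}: yes, $4,000 fits the $10,000 budget."
bus.send("budget", "venue", reply)
```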
Workflow visualization: Real-time execution graph
What the visualization shows:
The framework provides automatic workflow visualization - no custom code required.
Visual elements:
- Coordinator receives question
- Request routed to venue agent
- Venue agent status: "working on this at the moment"
- Timeline of agent interactions
- Back-and-forth communication between agents
- Human-in-the-loop approval gates
The operational insight:
You can see the entire workflow timeline - which agent did what, when, and how they coordinated. This isn't post-hoc logging, it's real-time visualization during execution.
Human-in-the-loop approval checkpoint
The workflow paused at a human approval gate:
Status: "It needs human approval. So here's the approval. I'll approve it so it can continue now."
What happened:
- Agent reached decision point requiring human input
- Workflow created checkpoint
- Agent spun down whilst waiting
- Human provided approval
- Agent resumed from checkpoint
- Workflow continued
The durable agent pattern in action - exactly what was described earlier, now demonstrated live.
OpenTelemetry integration: Deep observability
The framework uses OpenTelemetry (GA, not preview) for comprehensive tracing.
What you can observe:
Agent execution details:
- Which LLM was called (example: GPT-4)
- Input sent to the model
- Response received
- Token usage and pricing
Tool calling visibility:
- Web search tool invoked
- Query: "Identify venues in Seattle"
- Responses returned
- Full request/response cycle traced
The architectural advantage:
Standard observability tooling (OpenTelemetry) means you can use existing monitoring infrastructure. Not proprietary Microsoft telemetry - industry standard tracing.
Agent introspection during execution
You can interact with individual agents whilst workflow runs:
- View agent state
- See what agent is currently doing
- Inspect agent's context and memory
- Debug agent behavior in real-time
Example from demo:
Clicked into individual agent, saw:
- Background context agent is using
- Current task agent is working on
- Tools agent has available
This enables live debugging - not waiting for workflow to complete to understand what went wrong.
Microsoft Purview integration: Data governance at runtime
The challenge:
Workflow complete, ready to deploy to production. Need to ensure no sensitive data leaks through agent responses.
Microsoft Purview solution:
What Purview provides:
- Data governance policies
- Data security controls
- DLP (Data Loss Prevention) at agent runtime
Implementation in framework:
Middleware pattern:
# Pseudocode from demo
middleware = create_purview_middleware(
    application_name="event_planning_agent"
)
agent = create_agent(
    name="coordinator",
    middleware=[middleware]
)
How middleware works:
Before agent execution:
- Middleware intercepts request
- Checks for sensitive data in input
- Applies Purview policies
- Blocks or sanitizes if needed
After agent execution:
- Middleware intercepts response
- Scans for sensitive data in output
- Applies DLP policies
- Blocks or redacts sensitive information before returning to user
The operational advantage:
Data governance enforced at runtime, not reliant on developers remembering to implement security checks. Purview policies apply automatically to all agents using the middleware.
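The intercept-before/intercept-after mechanics can be shown with a generic wrapper. This is a toy sketch of the middleware pattern, not Purview's actual API or policy engine: the wrapper blocks input containing sensitive data and redacts it from output, using a US-SSN regex as a stand-in policy.

```python
# Generic DLP-middleware sketch (illustrative; not Purview's API).
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy stand-in for a real policy


def dlp_middleware(agent_fn):
    def wrapped(user_input: str) -> str:
        # Before execution: block requests carrying sensitive data.
        if SSN.search(user_input):
            return "Request blocked: input contains sensitive data."
        response = agent_fn(user_input)
        # After execution: redact sensitive data from the output.
        return SSN.sub("[REDACTED]", response)
    return wrapped


@dlp_middleware
def agent(prompt: str) -> str:
    # Stand-in agent that leaks a fake SSN into its response.
    return f"Attendee record: {prompt}, SSN 123-45-6789"


blocked = agent("look up 987-65-4321")
redacted = agent("look up Jane Doe")
```

Because the policy lives in the wrapper, every agent constructed with it gets the same enforcement, which is the uniformity argument the session made.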
The demo's architectural lessons
1. Visualization is built-in, not custom
You don't build workflow visualization. Framework provides it automatically via OpenTelemetry integration.
2. Observability is standard, not proprietary
OpenTelemetry means you can use Datadog, New Relic, Azure Monitor, or any observability platform that understands OpenTel.
3. Security via middleware, not manual checks
Purview integration as middleware means governance applies uniformly across all agents, not dependent on individual agent implementations.
4. Agent-to-agent communication is native
Agents communicate directly with each other, not just through coordinator. This enables more sophisticated collaboration patterns.
The family tree: How frameworks relate
Microsoft Agent Framework is the foundation.
Built on top:
- Azure AI Foundry Agent Service: Hosted agent execution environment
- Copilot Studio: No-code/low-code agent builder
- Custom enterprise implementations
Integration points:
- Model Context Protocol (MCP): Tool integration standard
- Agent-to-agent communication protocols: Standardized messaging
- Azure monitoring and observability: Telemetry and tracking
The positioning:
Framework is for developers building custom agents with code. Copilot Studio is for business users building agents without code. Both use same underlying framework, different abstractions.
What wasn't addressed
Several operational questions remain unanswered:
1. Migration validation beyond tests
How do you validate that migrated agent behavior matches original when agents aren't deterministic? Passing tests is necessary but not sufficient.
2. Version compatibility guarantees
Will Microsoft maintain backward compatibility, or will future framework versions require more migrations? The consolidation suggests platform instability.
3. Performance characteristics
How does unified framework performance compare to specialized Semantic Kernel or AutoGen implementations? Consolidation often trades performance for generality.
4. Multi-language agent coordination
Can .NET agents coordinate with Python agents, and with JavaScript agents once that version reaches GA? Or does language choice fragment your agent fleet?
5. Durable agent cost model
Checkpoint storage, spin-down/spin-up overhead, state management - what's the actual cost model for durable agents versus always-on?
The honest assessment
Microsoft Agent Framework addresses real fragmentation in their agent ecosystem.
What's genuinely useful:
Unified development experience: One framework instead of choosing between Semantic Kernel, AutoGen, or custom patterns eliminates decision paralysis for new projects.
Model flexibility: Plug in different providers (including yesterday's Anthropic announcement) without rewriting agent logic provides vendor optionality.
BMW validation: Real automotive engineering use case demonstrates multi-agent orchestration working at scale, not just conference demos.
Durable agents: Checkpointing for long-running workflows solves real problem of human-in-the-loop asynchronous processes.
What's concerning:
Migration automation risk: One-click rewrites inspire less confidence than incremental manual migration for complex systems.
Platform churn: Need for migration tools suggests Microsoft hasn't stabilized their agent platform yet. Will there be another consolidation next year?
JavaScript GA delay: Web developers wait until 2026 for production-ready framework whilst .NET/Python developers build now.
Orchestration complexity hidden: BMW's multi-agent system works, but session didn't address operational complexity of managing specialized agent coordination at scale.
The verdict
Microsoft Agent Framework represents platform maturity - or admission that the previous approach fragmented too quickly.
For new agent development, the unified framework makes sense. Model flexibility, protocol standardization, and observability built-in provide solid foundation.
For existing agent systems, the migration decision depends on code complexity and risk tolerance. Simple agents with good test coverage: automated migration probably works. Complex multi-agent orchestrations: manual migration with extensive validation warranted.
The BMW case study proves multi-agent orchestration delivers operational value (12× acceleration, days → minutes for test-fleet data access). Whether the unified framework makes building such systems easier than previous approaches remains to be validated outside controlled demos.
What's clear: Microsoft is betting on agent frameworks becoming as fundamental as web frameworks. The consolidation suggests they're learning from past platform fragmentation. Whether this consolidation holds or fragments again next year determines if enterprises can build on stable foundation.
What to watch
Migration adoption: How many enterprises actually use one-click migration versus manual rewrites? Adoption patterns reveal trust in automation.
JavaScript GA timeline: Does TypeScript/JavaScript version ship early 2026 as promised, or slip? Delays suggest platform instability.
Durable agent pricing: Actual cost models for checkpoint storage and spin-down/spin-up cycles will determine economic viability.
Framework stability: Does Microsoft maintain this consolidation, or introduce new patterns that require another migration? Track breaking changes.
Multi-agent orchestration patterns: BMW showed one pattern. Do best practices emerge, or does every enterprise reinvent orchestration?
Performance benchmarks: Independent validation of unified framework performance versus specialized implementations.
Learn More
Official Resources:
- Microsoft Agent Framework documentation (links TBD)
- Migration tooling on Visual Studio Marketplace
- Durable agents extension
Related Coverage:
- Foundry Control Plane: AI fleet operations
- Azure SRE Agent Deep Dive
- Building Multi-Agent Systems with Azure AI Foundry