Interwhen Verifies AI Agent Actions Before They Become Irreversible
Microsoft Research’s Amit Sharma presents Interwhen as a framework for moving AI agents from post-hoc checking to verified execution while they are still acting. The open-source library uses LLMs to decompose natural-language instructions, policies, and partial responses into smaller, verifiable properties, then applies symbolic or model-based verifiers to tool calls and intermediate behavior. Sharma argues that this lets agents continue normally when checks pass and interrupts them when a verifier detects a violation, addressing risks that final-output review may catch too late.
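The core idea can be sketched as a verify-before-execute loop: each proposed tool call is checked against a set of properties derived from the policy, and execution is interrupted before any violating action runs. The sketch below is illustrative only; all names (`ToolCall`, `run_verified`, the example properties) are hypothetical and are not the Interwhen API.

```python
# A minimal verify-before-execute loop. Hypothetical names throughout;
# not the Interwhen API. Properties here are hand-written predicates
# standing in for verifiable properties extracted from a policy.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolCall:
    name: str
    args: dict = field(default_factory=dict)

# A "verifiable property": True means the call satisfies the policy.
Property = Callable[[ToolCall], bool]

def no_deletes(call: ToolCall) -> bool:
    # Policy fragment: the agent may never delete files.
    return call.name != "delete_file"

def small_transfer(call: ToolCall) -> bool:
    # Policy fragment: fund transfers are capped at 100.
    return call.name != "transfer_funds" or call.args.get("amount", 0) <= 100

def run_verified(call: ToolCall,
                 properties: list[Property],
                 execute: Callable[[ToolCall], str]) -> str:
    # Interrupt before execution if any property is violated;
    # otherwise let the agent proceed normally.
    for prop in properties:
        if not prop(call):
            return f"blocked: {call.name} violates {prop.__name__}"
    return execute(call)

def execute(call: ToolCall) -> str:
    return f"executed: {call.name}"

props = [no_deletes, small_transfer]
print(run_verified(ToolCall("read_file", {"path": "notes.txt"}), props, execute))
# executed: read_file
print(run_verified(ToolCall("transfer_funds", {"amount": 5000}), props, execute))
# blocked: transfer_funds violates small_transfer
```

In the framework described, the properties themselves would be generated by an LLM from natural-language instructions and then checked by symbolic or model-based verifiers, rather than hand-coded as above.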
Microsoft Research · May 14, 2026 · 6 min read