🤯PAN: The Future of Agentic AI 🚀



Summary

Researchers at Asari AI, MIT CSAIL, and Caltech have identified a need for revised architectural standards to manage agentic workflows within enterprises. Their investigation introduced a programming model, Probabilistic Angelic Nondeterminism (PAN), and a Python implementation, ENCOMPASS. This framework enables developers to define the core, predictable path of an agent's operations, while deferring complex inference strategies to a dedicated runtime engine. Using a branchpoint() primitive, programmers can designate areas of potential uncertainty. The research suggests that optimized search algorithms can achieve superior outcomes with reduced reliance on extensive feedback loops, ultimately leading to more stable enterprise architectures.

INSIGHTS


PROGRAM-IN-CONTROL AGENTS: A NEW ARCHITECTURAL STANDARD
The transition from generative AI prototypes to production-grade agents introduces a specific engineering hurdle: reliability. LLMs are stochastic by nature. A prompt that works once may fail on the second attempt. To mitigate this, development teams often wrap core business logic in complex error-handling loops, retries, and branching paths. This approach creates a maintenance problem. The code defining what an agent should do becomes inextricably mixed with the code defining how to handle the model’s unpredictability.

THE ENTANGLEMENT PROBLEM IN AGENT DESIGN
When business logic and reliability handling are combined, the resulting codebase becomes brittle. Implementing a strategy like "best-of-N" sampling requires wrapping the entire agent function in a loop. Moving to a more complex strategy, such as tree search or iterative refinement, typically requires a complete structural rewrite of the agent's code. The researchers argue that each such rewrite is error-prone and makes the agent's behavior harder to predict and maintain.
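To illustrate the entanglement, here is a minimal Python sketch (all names hypothetical, not from the paper): a deterministic stub stands in for a stochastic LLM call, and the best-of-N loop wraps the entire business step, so switching to another inference strategy means rewriting this function.

```python
from itertools import cycle

# Deterministic stand-in for a stochastic LLM call (hypothetical helper):
# some responses are malformed, as a real model's sometimes are.
_responses = cycle(["N/A", "42.50", "garbage"])

def llm_extract_total(invoice_text):
    return next(_responses)

def is_valid(answer):
    """Crude verifier standing in for a scoring or reward function."""
    try:
        float(answer)
        return True
    except ValueError:
        return False

def process_invoice_best_of_n(invoice_text, n=3):
    # The sampling strategy wraps the whole business step: to move to
    # tree search or refinement, this function must be restructured.
    candidates = [llm_extract_total(invoice_text) for _ in range(n)]
    return max(candidates, key=is_valid)

total = process_invoice_best_of_n("Invoice #7: total due 42.50")
```

Note that the "what" (extract a total) and the "how" (sample N times, keep the best) live in the same function body, which is precisely the coupling the researchers criticize.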

BRANCHPOINTS: LOCATING UNRELIABILITY
The ENCOMPASS framework addresses this by allowing programmers to mark “locations of unreliability” within their code using a primitive called branchpoint(). These markers indicate where an LLM call occurs and where execution might diverge. The developer writes the code as if the operation will succeed. At runtime, the framework interprets these branch points to construct a search tree of possible execution paths.
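The idea can be sketched in plain Python without the framework itself (this is a toy reimplementation, not the ENCOMPASS API): the agent is written as straight-line code, each `yield` plays the role of branchpoint(), and a small driver replays the agent to enumerate the induced search tree of execution paths.

```python
def agent():
    # Straight-line workflow code; each `yield` plays the role of
    # branchpoint(), marking where an LLM call could diverge.
    draft = yield ["plan A", "plan B"]            # e.g. an LLM drafting step
    final = yield [draft + "/v1", draft + "/v2"]  # e.g. an LLM revision step
    return final

def enumerate_paths(agent_factory, prefix=()):
    """Depth-first walk of the search tree induced by the branchpoints:
    replay the agent under each recorded prefix of choices, then fork
    on the first fresh branchpoint."""
    gen = agent_factory()
    try:
        candidates = gen.send(None)          # run to the first branchpoint
        for choice in prefix:                # replay earlier decisions
            candidates = gen.send(candidates[choice])
    except StopIteration as stop:
        return [stop.value]                  # a complete execution path
    results = []
    for i in range(len(candidates)):         # fork on each alternative
        results.extend(enumerate_paths(agent_factory, prefix + (i,)))
    return results
```

The agent body never mentions search: it reads as if every step succeeds, while the driver materializes all four execution paths of this two-branchpoint workflow.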

PROGRAM-IN-CONTROL: A SEARCH-BASED APPROACH
Unlike “LLM-in-control” systems, where the model decides the entire sequence of operations, program-in-control agents operate within a workflow defined by code. The LLM is invoked only to perform specific subtasks. This structure is generally preferred in enterprise environments for its higher predictability and auditability compared to fully autonomous agents.
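A minimal sketch of the program-in-control pattern (the function names and stubbed responses are illustrative, not from the paper): ordinary code fixes the sequence of steps, and the model is consulted only for narrow subtasks.

```python
def llm(prompt):
    """Stub for a model call; in a program-in-control agent the LLM
    answers narrow subtasks and never chooses the workflow itself."""
    return {"Summarize: quarterly report": "Revenue up 8%.",
            "Classify sentiment: Revenue up 8%.": "positive"}[prompt]

def review_document(text):
    # The control flow is ordinary code: a fixed, auditable order of steps.
    summary = llm(f"Summarize: {text}")
    sentiment = llm(f"Classify sentiment: {summary}")
    return {"summary": summary, "sentiment": sentiment}
```

Because the step order is plain code, it can be logged, tested, and audited like any other program, which is the predictability argument made above.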

CONTROLLING THE COST OF INFERENCE
By treating inference strategies as a search over execution paths, the framework allows developers to apply different algorithms – such as depth-first search, beam search, or Monte Carlo tree search – without altering the underlying business logic. Without this separation, the researchers found, such code becomes difficult to read or lint: implementing beam search by hand required the programmer to break the workflow into individual steps and explicitly manage state across a dictionary of variables.
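The separation can be sketched in plain Python (a toy model, not the ENCOMPASS API): the same branchpoint-style agent, written once, is driven by two interchangeable search strategies, so swapping strategies touches the driver rather than the business logic.

```python
def agent():
    # Business logic written once; each `yield` marks a branchpoint
    # where a stochastic LLM call could return any of several candidates.
    style = yield ["formal", "casual"]
    reply = yield [style + " reply v1", style + " reply v2"]
    return reply

def greedy_search(agent_factory):
    """Take the first candidate at every branchpoint (no backtracking)."""
    gen = agent_factory()
    try:
        candidates = gen.send(None)
        while True:
            candidates = gen.send(candidates[0])
    except StopIteration as stop:
        return stop.value

def best_path_search(agent_factory, score):
    """Exhaustively enumerate all paths and keep the highest-scoring result."""
    def paths(prefix):
        gen = agent_factory()
        try:
            candidates = gen.send(None)
            for i in prefix:                 # replay recorded choices
                candidates = gen.send(candidates[i])
        except StopIteration as stop:
            return [stop.value]
        out = []
        for i in range(len(candidates)):     # fork on each alternative
            out += paths(prefix + (i,))
        return out
    return max(paths(()), key=score)
```

Both drivers consume the identical `agent` function; a beam or Monte Carlo variant would likewise replace only the driver, which is the decoupling the paragraph describes.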

This article is AI-synthesized from public sources and may not reflect original reporting.