A position paper with 30 co-authors from leading AI research institutions argues that agentic AI systems should apply Bayesian principles at the orchestration layer rather than within LLM parameters themselves. The paper, titled "Position: agentic AI orchestration should be Bayes-consistent," was published on arXiv on May 1, 2026, and has been accepted at ICML 2026. Authors include Theodore Papamarkou, Pierre Alquier, Matthias Bauer, Wray Buntine, Andrew Davison, and 25 other prominent researchers in probabilistic machine learning and AI safety.
LLMs Excel at Prediction But Struggle With Decision-Making Under Uncertainty
The paper contrasts LLMs' strengths in prediction and complex reasoning with their difficulties in making decisions under uncertainty. High-value agentic deployments hinge on critical decisions: which tool to call, which expert to consult, and how many resources to invest. The authors note that making LLMs themselves explicitly Bayesian remains "computationally intensive and conceptually nontrivial," and argue instead for applying Bayesian principles at the control layer that manages agents.
Bayesian Orchestration Framework Enables Calibrated Decision-Making
The proposed framework involves three core components:
- Calibrated belief maintenance: Maintain calibrated beliefs over task-relevant latent quantities
- Evidence-based updates: Update beliefs from observed agentic and human-AI interactions
- Utility-aware policies: Choose actions using utility-aware policies informed by those beliefs
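The three components above can be sketched as a minimal orchestration loop. The example below is an illustrative assumption, not code from the paper: it models each tool's success probability with a Beta posterior (conjugate to Bernoulli outcomes), updates it from observed interactions, and picks a tool by expected utility. The tool names and costs are hypothetical.

```python
class ToolBelief:
    """Calibrated belief: a Beta posterior over a tool's success probability."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha = alpha  # pseudo-count of observed successes
        self.beta = beta    # pseudo-count of observed failures

    def update(self, success: bool) -> None:
        # Evidence-based update: conjugate Beta-Bernoulli update from one observation.
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def mean(self) -> float:
        # Posterior mean of the success probability.
        return self.alpha / (self.alpha + self.beta)


def choose_tool(beliefs: dict, costs: dict) -> str:
    """Utility-aware policy: maximize expected success value minus invocation cost."""
    return max(beliefs, key=lambda name: beliefs[name].mean() - costs[name])


# Orchestration loop over two hypothetical tools.
beliefs = {"search": ToolBelief(), "calculator": ToolBelief()}
costs = {"search": 0.2, "calculator": 0.05}
beliefs["search"].update(False)      # observed a failed search call
beliefs["calculator"].update(True)   # observed a successful calculator call
print(choose_tool(beliefs, costs))   # → calculator
```

A production control layer would maintain beliefs over richer latent quantities than per-tool success rates, but the same maintain-update-decide cycle applies.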
According to the paper, "While the usefulness and feasibility of Bayesian approaches remain unclear for LLM inference, this position paper argues that the control layer of an agentic AI system (that orchestrates LLMs and tools) is a clear case where Bayesian principles should shine."
Framework Provides Practical Implementation Patterns
The paper includes concrete examples of Bayesian orchestration in practice, design patterns for implementation, and a framework for coherent decision-making in agentic systems. The proposed approach offers calibrated belief maintenance, utility-aware action selection, principled handling of uncertainty, and compatibility with modern agentic AI systems and human-AI collaboration.
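One decision such patterns must handle is resource allocation: whether to pay for another expert consultation given the current belief about the answer. The sketch below is a hypothetical myopic value-of-information rule, not a pattern quoted from the paper; it also makes the simplifying (and labeled) assumption that a repeat query fully corrects an error.

```python
def worth_another_query(p_correct: float, answer_value: float, query_cost: float) -> bool:
    """Utility-aware stopping rule for expert consultation.

    Continue only if the expected gain from one more query exceeds its cost.
    Simplifying assumption: an extra query recovers the full answer value
    whenever the current answer is wrong.
    """
    expected_gain = (1.0 - p_correct) * answer_value
    return expected_gain > query_cost


# With 60% confidence, a unit-value answer, and a 0.3 query cost, consult again.
print(worth_another_query(0.6, 1.0, 0.3))  # → True
# At 90% confidence the expected gain (0.1) no longer covers the cost.
print(worth_another_query(0.9, 1.0, 0.3))  # → False
```

Chaining this check after each belief update gives the orchestrator a principled answer to "how many resources to invest" rather than a fixed retry budget.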
30 Co-Authors Represent Significant Consensus in Bayesian ML Community
The 30-author roster amounts to a consensus statement from the Bayesian machine learning community about the future direction of agentic AI. It spans prominent researchers in probabilistic ML, decision theory, and AI safety, signaling broad academic support for this architectural approach to AI orchestration.
Key Takeaways
- 30 researchers from leading AI institutions published a position paper arguing for Bayesian principles at the orchestration layer of agentic AI systems, accepted at ICML 2026
- The framework proposes maintaining calibrated beliefs, updating from observations, and using utility-aware policies rather than making LLMs themselves Bayesian
- High-value agentic deployments require principled decision-making under uncertainty for tool selection, resource allocation, and expert consultation
- The paper provides concrete implementation patterns and examples for Bayesian orchestration in production systems
- The large number of co-authors signals significant consensus in the Bayesian ML research community about this architectural direction