A technical essay arguing that AI agents require deterministic control flow rather than elaborate prompt chains has sparked extensive debate in the developer community. Published on May 7, 2026, by bsuh on bearblog.dev, the article 'Agents need control flow, not more prompts' reached the front page of Hacker News with 309 points and 171 comments.
Prompt-Based Approaches Fail at Scale
The author identifies three fundamental problems with relying solely on prompts to control agent behavior:
- Non-deterministic execution: Prompts lack the predictable behavior necessary for complex systems. Even emphatic instructions such as 'MANDATORY' or 'DO NOT SKIP' eventually fail as complexity increases.
- Lack of composability: Unlike software modules and functions, prompt chains cannot be reasoned about locally and don't scale through recursive composition.
- Silent failures: LLM-based systems can reach incorrect conclusions without detecting errors, with no equivalent to compile-time or runtime checking.
The core thesis states that you cannot reason about system behavior when control flow is expressed in natural language rather than code.
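The silent-failure problem above is easiest to see with a concrete check. The sketch below is illustrative, not from the article: `fake_llm_extract` is a hypothetical stand-in for a real model call, and the invariant (line items must sum to the stated total) is a runtime check that code can enforce but a prompt cannot guarantee.

```python
import json

def fake_llm_extract(invoice_text):
    """Hypothetical stand-in for a model call that extracts invoice data."""
    return json.dumps({"items": [12.5, 7.5], "total": 20.0})

def extract_invoice(invoice_text):
    """Validation checkpoint: parse the model output, then verify an
    invariant in code so a wrong extraction fails loudly, not silently."""
    data = json.loads(fake_llm_extract(invoice_text))
    if abs(sum(data["items"]) - data["total"]) > 1e-9:
        raise ValueError("silent failure caught: items do not sum to total")
    return data
```

The point is not the arithmetic but where the check lives: in the runtime, where a violation raises an exception, rather than in a prompt the model may ignore.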
Deterministic Scaffolding as the Solution
Rather than relying solely on prompting, the author advocates deterministic scaffolding: explicit state transitions and validation checkpoints that treat the LLM as one component within a larger system. Programmatic verification with aggressive error detection is built into the runtime rather than depending on human oversight.
The author presents three inadequate alternatives to control flow: human babysitters, exhaustive audits, or accepting outputs on faith. These workarounds underscore why structural control flow is essential for reliable agent systems.
Industry Movement Toward Programmatic Control
This debate connects to broader trends in agent development. Frameworks like LangGraph, Pydantic AI's structured outputs, and the rise of agent harnesses all reflect movement toward programmatic control flow over pure prompt engineering. The 171 comments on Hacker News indicate this touched a nerve in the developer community, reflecting a fundamental tension: should we push LLMs to be more reliable through better prompting, or constrain them with programmatic control flow?
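The structured-output idea behind those frameworks can be approximated with the standard library alone. This sketch is not Pydantic AI's actual API; the `fake_llm` stub, `REQUIRED` schema, and retry count are all illustrative assumptions showing the general pattern of rejecting free-form text at a boundary.

```python
import json

# Hypothetical schema: required keys and the types they must have.
REQUIRED = {"city": str, "temp_c": (int, float)}

def fake_llm(prompt, attempt):
    """Hypothetical model stub: junk on the first try, valid JSON after."""
    if attempt == 0:
        return "Sure! The weather is nice."
    return json.dumps({"city": "Oslo", "temp_c": -3})

def structured_call(prompt, max_retries=3):
    """Only well-typed JSON leaves this function; anything else is retried."""
    for attempt in range(max_retries):
        raw = fake_llm(prompt, attempt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # retry instead of passing junk downstream
        if all(isinstance(data.get(k), t) for k, t in REQUIRED.items()):
            return data
    raise RuntimeError("no valid structured output after retries")
```

Libraries like Pydantic AI automate this loop with real schema models, but the architectural choice is the same: the program, not the prompt, decides what counts as a valid answer.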
Key Takeaways
- Prompt-based agent control fails at scale due to non-deterministic behavior, lack of composability, and silent failures
- Deterministic control flow with explicit state transitions and validation checkpoints offers a more reliable approach to complex agent systems
- The developer community is actively debating whether to improve prompting techniques or embrace programmatic control structures
- Industry frameworks like LangGraph and Pydantic AI increasingly reflect movement toward programmatic control over pure prompt engineering
- The article generated 309 points and 171 comments on Hacker News, indicating significant community interest in agent architecture approaches