Modern AI has been dominated by one idea: predict the next token. But what if intelligence doesn’t have to work that way?
In this episode of The Neuron, we’re joined by Eve Bodnia, Founder and CEO of Logical Intelligence, to explore energy-based models (EBMs)—a radically different approach to AI reasoning that doesn’t rely on language, tokens, or next-word prediction.
With a background in theoretical physics and quantum information, Eve explains how EBMs operate over an energy landscape, where lower energy marks a better, more internally consistent solution. That lets a model weigh many candidate solutions at once instead of guessing one token at a time. We discuss why this matters for tasks like spatial reasoning, planning, robotics, and safety-critical systems, and where large language models begin to show their limits.
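If you want a concrete feel for the idea before listening, here is a minimal, hypothetical sketch in Python. It is not Logical Intelligence's actual system; it only illustrates the core intuition: an energy function scores complete candidate answers (here, by counting violated constraints), and "reasoning" means searching for the lowest-energy candidate rather than generating an answer one token at a time.

```python
# Toy illustration of energy-based reasoning (not Logical Intelligence's
# actual model): score whole candidate solutions and keep the one with
# the lowest energy, instead of building an answer token by token.

import itertools

def energy(candidate):
    """Energy = number of violated constraints; 0 means fully consistent.
    The constraints are made up purely for illustration:
    pick (x, y, z) from {0..4} with x + y == z and x < y."""
    x, y, z = candidate
    violations = 0
    if x + y != z:
        violations += 1
    if not x < y:
        violations += 1
    return violations

# Score every complete candidate, then select the minimum-energy one.
candidates = itertools.product(range(5), repeat=3)
best = min(candidates, key=energy)
print(best, "energy:", energy(best))  # e.g. (0, 1, 1) with energy 0
```

Real EBMs learn the energy function from data and minimize it with gradient-based or sampling methods rather than brute-force enumeration, but the shape of the computation is the same: verify whole solutions against constraints instead of committing to one next token.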
You’ll learn:
What energy-based models are (in plain English)
Why token-free architectures change how AI reasons
How EBMs reduce hallucinations through constraints and verification
Why EBMs and LLMs may work best together, not in competition
What this approach reveals about the future of AI systems
To learn more about Eve’s work, visit https://logicalintelligence.com.
For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.