
Apple researchers unveil LaDiR framework that uses parallel diffusion to improve LLM reasoning

In a new research paper, researchers from Apple and the University of California, San Diego, introduce a framework called LaDiR that improves how large language models reason through complex tasks before producing a final answer. The study, titled "LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning," proposes combining diffusion models with autoregressive generation.

During the reasoning process, the model runs multiple diffusion paths in parallel, each starting from noise and gradually refining into a coherent reasoning step, with a diversity mechanism that pushes the paths to explore different possibilities. Once the model determines it has completed enough reasoning, it switches to generating the final answer autoregressively, one token at a time.

The researchers emphasize that LaDiR is not a standalone model but a framework that builds on top of existing language models. In experiments, the team applied LaDiR to Meta's Llama 3.1 8B for math reasoning and puzzle planning, and to Qwen3-8B-Base for code generation.

On math benchmarks, the framework achieved higher accuracy than existing approaches and showed stronger performance on more difficult, out-of-distribution tasks. On code-generation benchmarks such as HumanEval, LaDiR produced more reliable outputs than standard fine-tuning, with a particularly noticeable improvement on harder problems. In puzzle-style planning tasks like the Countdown game, LaDiR explored a wider range of valid answers and found correct solutions more reliably than general-purpose baselines, though it still fell short of a specialized, task-specific model on single-attempt accuracy.
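To make the described flow concrete, here is a minimal toy sketch of that two-phase loop: several latent "reasoning" vectors start as pure noise, are denoised in parallel with a repulsion term keeping them spread apart, and the most refined one is then handed to a token-by-token decoder. All names and the denoiser itself are illustrative stand-ins, not Apple's implementation or the paper's actual models.

```python
# Toy sketch of a LaDiR-style inference loop (hypothetical, illustrative only).
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 16   # size of each latent reasoning block (assumed for the toy)
NUM_PATHS = 4     # parallel diffusion paths explored at once
NUM_STEPS = 50    # denoising iterations
TARGET = rng.normal(size=LATENT_DIM)  # stand-in for a "coherent" latent

def denoise_step(latents, step, total):
    """Toy denoiser: nudge each latent toward the target, with noise that
    shrinks as diffusion progresses (stands in for a learned model)."""
    alpha = step / total  # 0 = pure noise, 1 = fully refined
    noise = rng.normal(scale=1.0 - alpha, size=latents.shape)
    return latents + 0.1 * (TARGET - latents) + 0.05 * noise

def diversity_push(latents, strength=0.02):
    """Repel each path from the group mean so the parallel paths keep
    exploring different possibilities, as the article describes."""
    center = latents.mean(axis=0, keepdims=True)
    return latents + strength * (latents - center)

# Phase 1: start every path from noise and refine all of them in parallel.
latents = rng.normal(size=(NUM_PATHS, LATENT_DIM))
for step in range(NUM_STEPS):
    latents = denoise_step(latents, step, NUM_STEPS)
    latents = diversity_push(latents)

# Phase 2: pick the most refined path (here: closest to the toy target)
# and switch to autoregressive, one-token-at-a-time answer generation.
best = latents[np.argmin(np.linalg.norm(latents - TARGET, axis=1))]

def autoregressive_answer(latent, length=5):
    """Stub decoder: emits one 'token' at a time conditioned on the latent."""
    tokens = []
    for i in range(length):
        tokens.append(f"tok{int(abs(latent[i % LATENT_DIM]) * 10) % 100}")
    return " ".join(tokens)

print(autoregressive_answer(best))
```

The point of the sketch is the structure, not the math: diffusion refines many candidate reasoning blocks at once (and can revise them globally), while the final answer is still produced left to right by a conventional autoregressive decoder.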
Sources
Published by Tech & Business, a media brand covering technology and business. This story was sourced from 9to5Mac and reviewed by the T&B editorial agent team.