[IJCNN 2025] PanelTR: Zero-Shot Table Reasoning Through Multi-Agent Scientific Discussion
Ever wonder how AI can tackle complex table reasoning, like answering questions from financial reports or verifying facts in scientific papers, without any prior training? That's exactly what I explore in this paper with PanelTR!
Instead of relying on heavy data annotation or complex neural architectures, I designed a multi-agent framework where five LLM-powered scientist personas—like Einstein, Newton, and Curie—work together to analyze, debate, and refine answers through a structured scientific process:
- 🔍 Investigation: Each scientist breaks down the problem.
- 👁️ Self-Review: They critique their own reasoning.
- 🧠 Peer-Review: They discuss and vote on the best answer.
The result? PanelTR outperforms standard LLMs and even rivals supervised models, all in a zero-shot setting (no task-specific training required). It's like having a mini-science panel inside your computer.
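To make the three-phase flow concrete, here is a minimal Python sketch of a PanelTR-style loop. The five persona names follow the paper; everything else (the `panel_tr` function, the `llm` callable, the exact prompts, and the majority-vote aggregation) is my own illustrative assumption, not the paper's reference implementation.

```python
from collections import Counter
from typing import Callable

# Five scientist personas, as in the paper (Einstein, Newton, Curie, plus
# two more assumed here for illustration).
PERSONAS = ["Einstein", "Newton", "Curie", "Darwin", "Turing"]

def panel_tr(question: str, table: str, llm: Callable[[str], str]) -> str:
    """Sketch of investigation -> self-review -> peer-review; returns the
    panel's majority answer. Prompts and voting scheme are assumptions."""
    # 1) Investigation: each scientist analyzes the table independently.
    drafts = {
        p: llm(f"You are {p}. Analyze the table step by step and answer.\n"
               f"Table:\n{table}\nQuestion: {question}")
        for p in PERSONAS
    }
    # 2) Self-review: each scientist critiques and revises their own draft.
    revised = {
        p: llm(f"You are {p}. Critically review your reasoning below and "
               f"reply with a short final answer only.\nReasoning:\n{drafts[p]}")
        for p in PERSONAS
    }
    # 3) Peer-review: scientists see every answer and vote on the best one.
    panel = "\n".join(f"{p}: {a}" for p, a in revised.items())
    votes = [
        llm(f"You are {p}. Given the panel answers below, vote for the best "
            f"final answer (answer text only).\n{panel}\nQuestion: {question}")
        for p in PERSONAS
    ]
    answer, _ = Counter(v.strip() for v in votes).most_common(1)[0]
    return answer
```

To try it, wrap any chat-model client as a `str -> str` function and pass it in as `llm`.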
Abstract:
Table reasoning, including tabular QA and fact verification, often depends on annotated data or complex data augmentation, limiting flexibility and generalization. LLMs, despite their versatility, often underperform simple supervised models. To address these issues, we introduce PanelTR, a framework utilizing LLM agent scientists for robust table reasoning through a structured scientific approach. PanelTR's workflow involves agent scientists conducting individual investigations, engaging in self-review, and participating in collaborative peer-review discussions. This process, driven by five scientist personas, enables semantic-level transfer without relying on data augmentation or parametric optimization. Experiments across four benchmarks show that PanelTR outperforms vanilla LLMs and rivals fully supervised models, all while remaining independent of training data. Our findings indicate that structured scientific methodology can effectively handle complex tasks beyond table reasoning with flexible semantic understanding in a zero-shot context.