The Workshop on Post-AI Formal Methods at AAAI-26

Talks

Complete Schedule Coming Soon


Invited Speakers

Dr. Soonho Kong

Amazon Web Services

Lean into Verifiable Intelligence


Abstract


The future of AI lies in "Verifiable Intelligence" through self-verifying and self-improving systems enabled by formal verification tools like Lean that can validate reasoning steps, not just final answers. Current reasoning models are flawed because they focus only on answer correctness while ignoring intermediate reasoning validity, a critical limitation for agentic AI systems that must take reliable actions based on their reasoning. This talk demonstrates that large language models fail as judges of mathematical reasoning, performing barely better than random chance when distinguishing correct from incorrect proofs. This violates Rich Sutton's "Verification Principle" that AI systems can only create knowledge to the extent they can verify it. We argue that Lean offers the solution by enabling complete verification of reasoning steps, creating a virtuous cycle where better AI accelerates formalization of human knowledge, which in turn enables better verifiable AI systems. This represents the next major revolution in knowledge creation, analogous to how digitization and the internet transformed knowledge access and sharing.
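The claim that Lean can validate reasoning steps, not just final answers, can be illustrated with a minimal sketch (illustrative only, not drawn from the talk):

```lean
-- Every step below is checked by Lean's kernel; substituting an
-- invalid step is rejected at compile time, so the reasoning
-- itself is verified, not merely the final answer.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

example (n : Nat) : n + 1 > 0 := Nat.succ_pos n
```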

Bio


Soonho Kong is a Principal Applied Scientist at Amazon Web Services in the Neuro-Symbolic AI Lab. His research focuses on designing and implementing AI systems with formal reasoning capabilities, particularly combining LLMs with interactive and automated theorem-proving techniques. He received the ACM SIGPLAN Programming Languages Software Award and the Skolem Award for his contributions to the Lean theorem prover.

Prof. Hana Chockler

King's College London

Using causality for debugging and explainability


Abstract


 --TBA--

Bio


Hana Chockler is a full professor at King's College London. Her research focuses on formal verification, causality, and explainable AI, with applications to hardware and software systems. She pioneered the use of causality in software engineering, leading to the first industrial applications at IBM and Intel.

Prof. DONG Jin Song

National University of Singapore

Trusted AI and Reasoning Beyond LLM


Abstract


The rise of code-centric Large Language Models (LLMs) has reshaped the software engineering world with tools like Copilot, DeepSeek, and OpenAI GPT-5 that can easily generate code. However, there is no correctness guarantee for the code generated by LLMs, which suffer from the hallucination problem. The first part of this talk shows that the program refinement calculus can be used as a formal chain of thought to guide LLMs and verify the correctness of LLM-generated code [POPL'25]. The second part investigates LLM-aided system design, proposing LLM-enhanced formal-methods agents [ICML'25]. The third part shows the weaknesses of LLMs in solving planning and strategy-analysis problems, where formal reasoning techniques are needed.
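As a minimal sketch of the general idea (illustrative only, not the POPL'25 system): an LLM-proposed implementation can be checked in Lean against a formal specification, so its correctness is proved rather than trusted.

```lean
-- Specification: the sum of a list, stated via the standard foldr.
def spec (xs : List Nat) : Nat := xs.foldr (· + ·) 0

-- A hypothetical LLM-generated candidate implementation.
def impl : List Nat → Nat
  | []      => 0
  | x :: xs => x + impl xs

-- The proof obligation is discharged by Lean's kernel, supplying a
-- correctness guarantee the LLM alone cannot provide.
theorem impl_correct (xs : List Nat) : impl xs = spec xs := by
  induction xs with
  | nil => rfl
  | cons x xs ih => simp [impl, spec, ih]
```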

Bio


DONG Jin Song is a professor at the School of Computing at the National University of Singapore. His research focuses on formal methods, safety and security systems, probabilistic reasoning, and trusted machine learning. He co-founded the PAT verification system, "Silas: Trusted Machine Learning", and the Dependable Intelligence company.