ICLR 2026, held April 24-28 in Singapore with 3,462 papers accepted from 11,617 submissions, gave its best paper award to "LLMs Get Lost In Multi-Turn Conversation," a Microsoft Research study. The study finds that every frontier model tested drops an average of 39% in performance across six generation tasks when a fully specified single-turn instruction is replaced by a multi-turn exchange, largely because models commit to early wrong assumptions and then fail to recover. A second outstanding paper, "Transformers are Inherently Succinct," gives a theoretical account of why Transformer architectures encode concepts more compactly than RNNs.
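
The headline comparison is between a model that receives a fully specified instruction in one turn and a model that receives the same information spread across several turns, with only its final answer scored. The sketch below shows one way such a comparison could be set up; the `chat` stub, the scoring function, and the idea of splitting an instruction into per-turn fragments are illustrative assumptions, not the paper's actual evaluation harness.

```python
# Illustrative sketch: single-turn vs. multi-turn evaluation of one task.
# `chat` and `score` are stand-ins to be replaced with a real model client
# and a real task metric; they are not taken from the paper.
from typing import Dict, List

Message = Dict[str, str]

def chat(messages: List[Message]) -> str:
    """Stand-in for a call to the LLM under evaluation."""
    raise NotImplementedError("plug in a real model client here")

def score(answer: str, reference: str) -> float:
    """Stand-in task metric, e.g. exact match or unit-test pass rate."""
    return float(answer.strip() == reference.strip())

def single_turn(full_instruction: str, reference: str) -> float:
    """All requirements arrive up front in one fully specified prompt."""
    answer = chat([{"role": "user", "content": full_instruction}])
    return score(answer, reference)

def multi_turn(fragments: List[str], reference: str) -> float:
    """The same requirements arrive one fragment per turn; only the
    model's final answer is scored."""
    history: List[Message] = []
    answer = ""
    for fragment in fragments:
        history.append({"role": "user", "content": fragment})
        answer = chat(history)
        history.append({"role": "assistant", "content": answer})
    return score(answer, reference)

# An aggregate degradation figure like the reported ~39% would come from
# averaging (single_turn - multi_turn) / single_turn over many tasks and models.
```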