Michal Valko Challenges How the Industry Defines Intelligence at Machines Can Think Summit 2026

Dubai, UAE – Michal Valko, Chief Models Officer at a stealth AI startup and one of the most respected researchers in modern machine learning, offered a grounded and forward-looking perspective on artificial intelligence during the Machines Can Think Summit 2026. Valko shared these insights in an on-stage interview with Katie Jensen following his keynote address.

Valko began by addressing a common misconception about the pace of AI progress. Although public attention has surged only in recent years, he noted, AI research spans more than seven decades, with foundational work dating back to the 1950s and reinforcement learning research emerging in the late 1990s.

“AI feels new because people started using it,” Valko said. “Research existed for decades. The difference today is practical value.”

When asked whether modern AI systems demonstrate real intelligence or merely strong imitation, Valko urged a broader definition. He described intelligence as a collection of capabilities rather than a single trait, noting that current large language models excel at organizing and summarizing information at a global scale.

“They compress knowledge from across humanity into a structure people can interact with,” he said. “That ability represents one form of intelligence.”

He also identified a critical limitation. Humans reason well under uncertainty, combining conflicting signals and acting without complete information. Current AI models lack this skill and tend to respond with certainty even when evidence conflicts.

“Reasoning under uncertainty defines the next challenge,” Valko said. “Confidence without uncertainty awareness creates risk.”

Valko connected this limitation to his long-standing research focus on reducing human supervision during model training. He argued that systems requiring extensive labeling and instruction fail to scale meaningfully.

“If learning effort exceeds value, usefulness drops,” he said.

He used human learning as a reference point. Infants learn through observation long before formal instruction. They recognize faces and patterns without repeated labels or explicit guidance.

“Observation drives early intelligence,” Valko said. “AI systems need stronger observational learning.”

He emphasized that intelligence emerges when systems extract structure independently rather than repeat human input.

“If humans inject all knowledge, models mirror humans,” he said. “Independent learning creates real progress.”

Drawing on experience across academia, Meta, and startup research, Valko discussed where the next major breakthroughs are likely to emerge. He pointed to the rising cost and diminishing returns of scaling single massive models.

“Scaling rules are well known,” he said. “More data and compute raise cost quickly.”

He suggested future gains will come from modular design, specialization, and collective intelligence. Human intelligence operates through groups, specialized roles, and coordination rather than a single dominant unit.

“Societies function through collaboration,” Valko said. “Biology works the same way. AI systems need similar structure.”

He described active research into modular architectures, mixtures of experts, and collaborative systems designed to work together efficiently. This approach aims to improve reasoning while controlling cost and complexity.

The conversation highlighted a shift underway in AI research, with the focus moving from sheer size toward structure, uncertainty handling, and learning efficiency.

The interview reflected central themes of the Machines Can Think Summit 2026: realistic assessment of intelligence, long-term research priorities, and the evolution from scale-driven progress toward structured intelligence systems.

Media Contact

Breaking AI News – https://breakingai.news/
contact@breakingai.news
Interview conducted by Katie Jensen at Machines Can Think Summit 2026