Baidu hits the turbo button to get back into AI race

One month after initial release, the company has launched upgrades to its foundation model, ERNIE 4.5, and reasoning model, ERNIE X1, at its developer conference.

An industry analyst Friday offered a lukewarm response to a series of announcements from Chinese tech giant Baidu around upgrades to its multimodal foundation model, ERNIE 4.5, and reasoning model, ERNIE X1, first released last month.

During his keynote at the firm’s annual developer conference in Wuhan, China, CEO Robin Li launched ERNIE 4.5 Turbo and ERNIE X1 Turbo, which, according to a release, feature “enhanced multimodal capabilities, strong reasoning, low costs and are available for users to access on Ernie Bot now free of charge.”

Li said, “The releases aim to empower developers to build the best applications — without having to worry about model capabilities, costs, or development tools. Without practical applications, neither advanced chips nor sophisticated models hold value.”

At the launch of the new models’ predecessors last month, Baidu said in a release that the introduction of the two offerings “pushes the boundaries of multimodal and reasoning models,” adding that ERNIE X1 “delivers performance on par with DeepSeek R1 at only half the price.”

Analyst unimpressed

Paul Smith-Goodson, vice president and principal analyst for quantum computing, AI and robotics at Moor Insights & Strategy, was unimpressed.

“[Baidu’s] announcement that the P800 Kunlun chip clusters were ‘illuminated’ only means they were turned on in preparation for training models with hundreds of billions of parameters,” he said. “While that is a technical advancement for China, it is the norm for companies such as OpenAI, Google, IBM, Anthropic, Microsoft, and Meta to train their models with hundreds of billions of parameters.”

Also, said Smith-Goodson, “Baidu’s statement that it used 30,000 Kunlun chips is nothing exceptional when compared to the number of GPUs the US uses to train large models. Kunlun chips are also inferior to US GPUs. For next-gen AI, we will be using something on the order of 100,000 GPUs. Because there is a lack of benchmarks, I have to be skeptical about the performance of this model compared to global leaders.”

Smith-Goodson pointed out, “It boils down to a race between China and the US to build the first Artificial General Intelligence (AGI) model. The US still holds a lead, but China is pressing hard to catch up.”

Source: https://www.infoworld.com