
Meta has reportedly recruited one of the co-founders of artificial intelligence (AI) startup Thinking Machines.
Andrew Tulloch, a high-profile AI researcher, confirmed his departure in a message to employees on Friday (Oct. 10), The Wall Street Journal (WSJ) reported, citing sources familiar with the matter.
“Andrew has decided to pursue a different path for personal reasons,” a spokeswoman for Thinking Machines told the news outlet.
According to the report, Tulloch worked at Meta for 11 years before leaving the company in 2023 to join OpenAI. He co-founded Thinking Machines with Mira Murati — another OpenAI vet — at the beginning of this year.
The report characterized Tulloch’s recruitment as the latest in a series of hiring coups for Meta, which has been on a recruiting spree as it shifts focus to its new AI teams in pursuit of so-called “superintelligence.”
An earlier WSJ report said Tulloch declined a pay package from Meta that, with top bonuses and extraordinary stock performance, could have been worth up to $1.5 billion. A spokesperson for Meta called the description of the offer “inaccurate and ridiculous.”
The news comes days after Thinking Machines debuted its first product, Tinker, a training application programming interface (API) designed to give organizations complete control over model training and fine-tuning while the startup manages the underlying infrastructure.
“With Tinker, Thinking Machines joins a growing number of firms building tools aimed to help organizations train and deploy models faster, at lower cost and with greater control than major providers like OpenAI or Anthropic,” PYMNTS wrote last week.
In a recent press release, the company said its goal was to allow “more people to do research on cutting-edge models and customize them to their needs.”
The launch comes months after the company’s $2 billion seed round, among the biggest on record for the AI sector. Until now, little was known about what the firm was building.
According to the company’s announcement, Tinker lets users “fine-tune a range of large and small open-weight models” by adapting an existing model to a specific task, like identifying fraud or analyzing transactions, without needing to retrain it from scratch.
“It takes care of the heavy lifting behind AI training: distributing workloads, handling compute resources and maintaining reliability,” PYMNTS wrote. “This approach removes a major operational barrier for smaller research teams, startups and enterprise developers that want to adapt open models for their own data.”
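To make the idea concrete, the sketch below shows the general technique this kind of service automates: adapting an open-weight model to a narrow task (here, flagging fraudulent transaction descriptions) by training a small LoRA adapter rather than retraining the full model. It uses Hugging Face’s transformers and peft libraries purely for illustration; it is not Thinking Machines’ actual Tinker API, and the model name, dataset file and hyperparameters are placeholder assumptions.

```python
# Minimal sketch: task-specific fine-tuning of an open-weight model with LoRA.
# Assumes a CSV of labeled examples with "text" and "label" columns; all names
# and settings are illustrative, not Tinker's real interface.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "distilbert-base-uncased"  # placeholder open-weight model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# LoRA trains a small set of extra weights instead of updating the full model.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_lin", "v_lin"],
                  task_type="SEQ_CLS")
model = get_peft_model(model, lora)

# Hypothetical labeled dataset of transaction descriptions (fraud vs. legitimate).
dataset = load_dataset("csv", data_files={"train": "transactions.csv"})["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True,
                         padding="max_length", max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fraud-adapter",
                           per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=dataset,
)
trainer.train()                          # only the LoRA adapter weights are updated
model.save_pretrained("fraud-adapter")   # the small adapter can be shared on its own
```

In this local sketch the user still owns the GPU, the data pipeline and the training loop; a hosted training API of the kind described above would keep the same fine-tuning logic in the user’s hands while taking over the workload distribution, compute provisioning and reliability concerns.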
Source: https://www.pymnts.com/