AI Startup Thinking Machines Unveils ‘Tinker’ to Cut AI Training Costs

Thinking Machines, the artificial intelligence (AI) startup founded by former OpenAI executive Mira Murati, has released its first product, Tinker, a training application programming interface (API) that gives organizations full control over model training and fine-tuning while the company manages the underlying infrastructure.

With Tinker, Thinking Machines joins a growing number of firms building tools aimed at helping organizations train and deploy models faster, at lower cost and with greater control than they would get from major providers like OpenAI or Anthropic. In a recent press release, the company said its mission is “enabling more people to do research on cutting-edge models and customize them to their needs.”

The launch comes months after the company’s $2 billion seed round, one of the largest in AI history. Until now, little was known about what the firm was developing. PYMNTS previously reported on the funding round.

All About Tinker

According to the company’s announcement, “Tinker lets you fine-tune a range of large and small open-weight models.” Fine-tuning involves adapting an existing model to a specific task, such as identifying fraud or analyzing transactions, without retraining it from scratch.
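For illustration only, the sketch below shows the general shape of that process in PyTorch: a pretrained backbone is frozen so its existing knowledge is reused, and only a small new classification head is trained on task data. The tiny model, synthetic transaction features and fraud-versus-legitimate labels are stand-ins for demonstration, not part of Tinker's workflow.

```python
# Illustrative only: a toy fine-tuning sketch, not Tinker's interface.
import torch
import torch.nn as nn

# Stand-in for a pretrained model's body (in practice, a large open-weight model).
backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 256))
for p in backbone.parameters():
    p.requires_grad = False  # freeze the pretrained weights; reuse what was already learned

head = nn.Linear(256, 2)  # new task-specific layer: fraud vs. legitimate
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic "transaction" features and labels, purely for illustration.
features = torch.randn(64, 128)
labels = torch.randint(0, 2, (64,))

for _ in range(10):  # a few quick passes over the toy batch
    logits = head(backbone(features))  # only the head produces trainable outputs
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the backbone stays frozen, only the small head's weights are updated, which is what makes adapting an existing model far cheaper than training one from scratch.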

Today, that process remains one of AI’s most expensive and time-consuming challenges. Training large models often requires thousands of GPUs, weeks of continuous compute and extensive data preparation. Even when using open models, many organizations lack the infrastructure, software expertise or budget to manage the training.

Tinker is designed to address those obstacles. It takes care of the heavy lifting behind AI training: distributing workloads, handling compute resources and maintaining reliability. This approach removes a major operational barrier for smaller research teams, startups and enterprise developers that want to adapt open models for their own data.

The platform works with open-weight systems, which are publicly released models that anyone can modify or improve. Unlike proprietary AI services from companies such as OpenAI or Anthropic, open-weight models do not carry per-token fees or restrictions on customization. However, they typically require significant engineering to fine-tune effectively, a gap Tinker aims to close by automating much of the setup and providing scalable infrastructure.

Tinker uses a technique called low-rank adaptation (LoRA), which updates only a small fraction of a model’s parameters rather than retraining the entire system. This allows organizations to reach comparable accuracy at a fraction of the cost and compute.
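The sketch below illustrates the low-rank idea in PyTorch, independent of Tinker itself: a frozen linear layer is augmented with two small trainable matrices, so well under 1% of the parameters are updated. The layer size, rank and scaling factor are hypothetical choices for demonstration.

```python
# A minimal, illustrative LoRA sketch; it does not reflect Tinker's actual API.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer and learns a low-rank additive update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the original weights stay frozen
        # Only these two small matrices are trained.
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen output plus a scaled low-rank correction.
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale

base = nn.Linear(4096, 4096)        # ~16.8M frozen parameters
adapted = LoRALinear(base, rank=8)  # ~65K trainable parameters (~0.4%)
trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```

Because gradients are computed only for the two small matrices, memory use and training cost drop sharply while the full pretrained weights remain untouched.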

In a review published by Forbes, the product was described as “useful, though not a big-time blockbuster.” The article noted that while Tinker may not redefine the industry overnight, its value lies in reducing the cost and complexity of training specialized AI systems.

Tinker also includes the Tinker Cookbook, a set of ready-made post-training templates. Early testers from Princeton, Stanford, Berkeley and Redwood Research have used it for experiments in mathematics, chemistry and reinforcement learning.

Thinking Machines said Tinker is in private beta, with a free trial period during the rollout. Usage-based pricing will follow once the service becomes publicly available.

Source: https://www.pymnts.com/