AI Models and Tools: OpenAI Enables Creation of Shopify AI Assistants

OpenAI unveiled a way for retailers to create artificial intelligence (AI) shopping assistants in Shopify, according to a post on X.

With a few clicks, developers can now connect the Shopify Storefront MCP (Model Context Protocol) server directly to the OpenAI Responses API to build agents that can search for products, add items to a cart and generate checkout links, all without requiring authentication.

To create the assistant, go to the OpenAI Playground and, under Tools, add an MCP server. Click Shopify and enter your store’s URL.

Now when a shopper types, “I am looking for a lightweight men’s button-up shirt for a vacation,” for example, the assistant will search your inventory and show options. If the shopper picks one, the assistant can automatically move it to checkout.
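The wiring described above can be sketched in code. The snippet below builds the request payload for the Responses API with an MCP tool entry; the store URL, the `/api/mcp` endpoint path, the `server_label` and the model name are illustrative assumptions, not details from the article.

```python
# Sketch of a Responses API request that points at a Shopify Storefront
# MCP server. The store URL and /api/mcp path below are placeholders.
def build_shopping_request(store_url: str, query: str) -> dict:
    """Build a Responses API payload wiring in a storefront MCP server."""
    return {
        "model": "gpt-4.1",  # illustrative model choice
        "tools": [
            {
                "type": "mcp",
                "server_label": "shopify_storefront",  # assumed label
                "server_url": f"{store_url.rstrip('/')}/api/mcp",
                "require_approval": "never",  # no auth per the article
            }
        ],
        "input": query,
    }

payload = build_shopping_request(
    "https://example-store.myshopify.com",
    "I am looking for a lightweight men's button-up shirt for a vacation.",
)
print(payload["tools"][0]["server_url"])
```

In practice this payload would be sent via the OpenAI client (for example, `client.responses.create(**payload)`), and the model would call the MCP server’s product-search, cart and checkout tools on the shopper’s behalf.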

The move reflects OpenAI’s broader push into shopping capabilities.

Rival Perplexity already offers a shopping assistant within its AI chatbot. Pro users can also check out with one click directly in the chatbot through Buy with Pro. Shopify is one of the merchants whose products can be found through Perplexity.

Perplexity also rolled out a free merchant program to enable retailers to share their product specifications so shoppers can find their products. Payment integrations include the one-click Buy with Pro checkout.


Google Unveils Robotics Model

Google has introduced Gemini Robotics On-Device, a new version of its most advanced vision-language-action (VLA) model designed to run directly on robots without needing to connect to the cloud.

Because the model does not need to exchange data with the cloud, Google said it is well-suited for latency-sensitive applications and works well in areas with weak or no internet access.

Gemini Robotics On-Device is a foundation model for bi-arm robots, designed for fast experimentation with dexterous manipulation. It can adapt to new tasks through fine-tuning.

It can also follow natural language instructions to perform tasks such as unzipping bags, folding clothes or uncapping a marker, and developers can adapt it to new tasks and domains with as few as 50 to 100 demonstrations.

Developers can try it out through the Gemini Robotics SDK; access to the model and SDK is available via Google’s trusted tester program.


Amazon’s DeepFleet

Amazon also rolled out a new generative AI foundation model for its robots called DeepFleet.

The model is designed to make Amazon’s warehouse robots “smarter and more efficient,” according to a company blog post. It coordinates the movements of the robot fleet at Amazon’s fulfillment centers and has cut the robots’ travel time by 10%, enabling faster, cheaper shipping.

“Think of DeepFleet as an intelligent traffic management system for a city filled with cars moving through congested streets,” according to Amazon.

“Just as a smart traffic system could reduce wait times and create better routes for drivers, DeepFleet coordinates our robots’ movements to optimize how they navigate our fulfillment centers,” the company said. “This means less congestion, more efficient paths, and faster processing of customer orders.”

The model also continually learns to improve over time.
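As a toy illustration of the traffic-management idea described above (not Amazon’s actual DeepFleet model, whose internals are not public), a coordinator can reduce congestion by having robots claim grid cells one at a time, with later robots routing or waiting around cells already claimed:

```python
# Toy congestion-aware coordinator on a grid: each robot greedily steps
# one cell toward its goal, avoiding cells claimed by earlier robots.
def plan_step(pos, goal, claimed):
    """Return the next cell for a robot, or its current cell if blocked."""
    x, y = pos
    gx, gy = goal
    candidates = []
    if gx != x:
        candidates.append((x + (1 if gx > x else -1), y))
    if gy != y:
        candidates.append((x, y + (1 if gy > y else -1)))
    for nxt in candidates:
        if nxt not in claimed:
            return nxt
    return pos  # wait in place if every preferred cell is congested

# Two robots: (start, goal) pairs on a small grid.
robots = {"A": ((0, 0), (2, 0)), "B": ((1, 0), (1, 2))}
claimed = set()
moves = {}
for name, (pos, goal) in robots.items():
    nxt = plan_step(pos, goal, claimed)
    claimed.add(nxt)
    moves[name] = nxt
print(moves)  # → {'A': (1, 0), 'B': (1, 1)}
```

A learned model like DeepFleet presumably replaces this greedy rule with predictions over the whole fleet’s movement, but the coordination objective, fewer blocked cells and shorter paths, is the same.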

Source: https://www.pymnts.com/