Big Tech Broadens AI Footprint from Local PC Agents to Super Factories

A new wave of announcements from big technology companies shows how quickly the artificial intelligence (AI) landscape is shifting. This week’s updates reflect a broader push to extend AI’s footprint, from agents running on local PCs to large-scale AI super factories.

Nvidia Introduces a Local AI Agent for PCs

Nvidia introduced a new on-device AI tool through its RTX AI Garage program. The tool, called Hyperlink by Nexa, runs directly on the user’s computer and indexes local files, slides, PDFs and images. It allows users to search and generate content without sending data to the cloud.

Nvidia said in its blog post that the approach improves speed and privacy while bringing more intelligence to everyday computing devices. The agent uses the GPU inside RTX PCs to run generative models locally and supports tasks such as creating study summaries, preparing presentations or organizing research material. The launch reflects a broader shift toward distributing AI capability across both cloud systems and personal devices.
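
Nvidia’s post does not describe Hyperlink’s internals, but the general pattern it gestures at is a local semantic index that is queried entirely on-device. The sketch below illustrates that pattern using the open-source sentence-transformers library; the model name, folder layout and file types are illustrative assumptions, not details of Nexa’s or Nvidia’s actual stack.

```python
# Minimal sketch of the on-device "index local files, then search" pattern.
# Illustrative only: Hyperlink's implementation is not public, and the model
# and libraries here are assumptions, not Nexa's or Nvidia's stack.
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# A small open-source embedding model that runs locally (on GPU if available).
model = SentenceTransformer("all-MiniLM-L6-v2")

def build_index(folder: str) -> tuple[list[Path], np.ndarray]:
    """Embed every text file under `folder` so it can be searched offline."""
    paths = list(Path(folder).rglob("*.txt"))
    texts = [p.read_text(errors="ignore")[:2000] for p in paths]  # truncate long docs
    embeddings = model.encode(texts, normalize_embeddings=True)
    return paths, embeddings

def search(query: str, paths: list[Path], embeddings: np.ndarray, k: int = 3):
    """Print the k files most similar to the query; nothing leaves the machine."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = embeddings @ q  # cosine similarity (vectors are normalized)
    for i in np.argsort(scores)[::-1][:k]:
        print(f"{scores[i]:.3f}  {paths[i]}")

paths, embeddings = build_index("./notes")
search("study summary on transformer architectures", paths, embeddings)
```

Because both the index and the query stay in local memory, nothing is transmitted off the machine, which is the privacy property Nvidia emphasizes.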

Nvidia and Microsoft Move Ahead on AI Super Factories

Nvidia and Microsoft expanded their work on what they call AI super factories, which link large data centers to train models and run enterprise AI applications. According to the Nvidia blog, the companies are pairing Microsoft cloud infrastructure with Nvidia GPUs and microservices to support AI workloads in industries such as manufacturing, logistics and finance. 

Microsoft described these distributed systems as connected facilities tied together by high-speed networks to deliver reliable compute at scale. A release from Microsoft detailed how the company is linking sites from Wisconsin to Atlanta to build its first AI superfactory. The project combines energy-efficient data centers, long-haul fiber links and Nvidia accelerated computing to support training and inference across corporate applications.

AMD and HPE Push Open Rack-Scale AI Systems

AMD and HPE expanded their partnership to build a new large-scale AI computing system called Helios. AMD said the setup combines its processors and graphics chips with HPE’s networking hardware so the machines can move data quickly and handle heavier AI workloads. HPE expects to begin shipping the racks globally in 2026.

The companies say the design is “open,” meaning organizations can mix and match components rather than rely on a single supplier. The system is built to handle the demands of training big AI models and running simulations. AMD said the work draws on its long experience in supercomputing and gives businesses a more flexible way to expand their AI infrastructure.

Meta Releases SAM 3D for 3D Object Creation

Meta advanced its work in foundation models with the release of SAM 3D. The model converts 2D images into 3D representations, allowing developers and creators to produce objects that can be used in games, metaverse environments or design tools. Meta described the model on its AI blog as an evolution of its Segment Anything family. The company said the new version uses depth estimation and learned 3D structure to create cleaner and more consistent shapes than earlier methods. Meta framed the update as part of its push to make spatial content creation accessible without requiring advanced modeling skills.
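
Meta’s post stops short of architectural detail, but the core idea it references, recovering 3D structure from a 2D image via estimated depth, can be illustrated with a standard pinhole back-projection. The snippet below is a generic sketch of that geometric step, not SAM 3D’s method; the camera intrinsics and the synthetic depth map are assumed values for the example.

```python
# Illustrative only: SAM 3D's architecture is not detailed in Meta's post.
# This shows the generic idea behind depth-based 2D-to-3D lifting: given a
# per-pixel depth map, back-project pixels into 3D with a pinhole camera model.
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project an (H, W) depth map into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a synthetic 4x4 depth map standing in for a model's output.
depth = np.full((4, 4), 2.0)            # every pixel 2 meters away
points = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(points.shape)  # (16, 3)
```

In a full pipeline, the depth map would come from a learned estimator and the resulting points would be meshed and textured; this sketch covers only the geometric lifting.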

Amazon Tests AI Voices in Streaming Content

Amazon’s Prime Video team tested AI-generated voice dubs for the anime series Banana Fish. Forbes reported that the dubs were widely criticized by early viewers, who described the voices as stiff and mismatched with on-screen emotion. The publication said the experiment shows the limits of current voice synthesis in entertainment, where tone, pacing and acting remain difficult for AI to reproduce. The test was part of Amazon’s broader effort to integrate AI into production and localization workflows, but it drew immediate backlash from fans who argued the audio lacked the expression needed for dramatic scenes.

Source: https://www.pymnts.com/