    Breaking AI News
    Technology & Innovation

    NVIDIA Dynamo: Scaling AI inference with open-source efficiency

By Art Ryan · March 20, 2025

    NVIDIA has launched Dynamo, an open-source inference software designed to accelerate and scale reasoning models within AI factories.

Efficiently managing and coordinating AI inference requests across a fleet of GPUs is critical to ensuring that AI factories operate cost-effectively and maximise token revenue generation.

    As AI reasoning becomes increasingly prevalent, each AI model is expected to generate tens of thousands of tokens with every prompt, essentially representing its “thinking” process. Enhancing inference performance while simultaneously reducing its cost is therefore crucial for accelerating growth and boosting revenue opportunities for service providers.

    A new generation of AI inference software

    NVIDIA Dynamo, which succeeds the NVIDIA Triton Inference Server, represents a new generation of AI inference software specifically engineered to maximise token revenue generation for AI factories deploying reasoning AI models.

    Dynamo orchestrates and accelerates inference communication across potentially thousands of GPUs. It employs disaggregated serving, a technique that separates the processing and generation phases of large language models (LLMs) onto distinct GPUs. This approach allows each phase to be optimised independently, catering to its specific computational needs and ensuring maximum utilisation of GPU resources.
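The split described above can be illustrated with a minimal sketch: prefill (prompt processing) runs on one GPU pool and decode (token generation) on another, so each pool can be sized for its own bottleneck. All names here (`GpuPool`, `serve`) are hypothetical illustrations, not the Dynamo API.

```python
# Sketch of disaggregated serving: prefill and decode phases of one
# request are placed on separate GPU pools, assuming the two phases
# have different resource profiles (compute-bound vs bandwidth-bound).
from dataclasses import dataclass


@dataclass
class GpuPool:
    name: str
    gpu_ids: list

    def pick(self, request_id: int) -> int:
        # Trivial round-robin placement, purely for illustration.
        return self.gpu_ids[request_id % len(self.gpu_ids)]


prefill_pool = GpuPool("prefill", [0, 1])      # compute-bound phase
decode_pool = GpuPool("decode", [2, 3, 4, 5])  # memory-bandwidth-bound phase


def serve(request_id: int, prompt: str) -> dict:
    # Phase 1: prefill runs on a prefill GPU, producing the KV cache.
    prefill_gpu = prefill_pool.pick(request_id)
    # Phase 2: the KV cache is handed off and decode runs elsewhere.
    decode_gpu = decode_pool.pick(request_id)
    return {"prefill_gpu": prefill_gpu, "decode_gpu": decode_gpu}


placement = serve(7, "Explain disaggregated serving")
```

Because the pools are independent, an operator could give decode twice as many GPUs as prefill, as in this toy configuration, without touching the other phase.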

    “Industries around the world are training AI models to think and learn in different ways, making them more sophisticated over time,” stated Jensen Huang, founder and CEO of NVIDIA. “To enable a future of custom reasoning AI, NVIDIA Dynamo helps serve these models at scale, driving cost savings and efficiencies across AI factories.”

Using the same number of GPUs, Dynamo has demonstrated the ability to double the performance and revenue of AI factories serving Llama models on NVIDIA's current Hopper platform. Furthermore, when running the DeepSeek-R1 model on a large cluster of GB200 NVL72 racks, NVIDIA Dynamo's intelligent inference optimisations have been shown to boost the number of tokens generated per GPU by more than 30 times.

    To achieve these improvements in inference performance, NVIDIA Dynamo incorporates several key features designed to increase throughput and reduce operational costs.

Dynamo can dynamically add, remove, and reallocate GPUs in real time to adapt to fluctuating request volumes and types. It can also pinpoint the specific GPUs within large clusters that are best suited to minimise response computations and route queries efficiently, and it can offload inference data to more cost-effective memory and storage devices, retrieving it rapidly when required to minimise overall inference costs.
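A scaling decision of this kind can be sketched as a simple policy function: grow the fleet when per-GPU queue depth exceeds a target and shrink it when the fleet is underutilised. The thresholds and the function name are assumptions for illustration; Dynamo's actual planning policy is not public in this article.

```python
# Hypothetical autoscaling rule of the kind a GPU planner might apply.
# Not Dynamo's real policy; thresholds are illustrative assumptions.
def plan_gpu_count(active_gpus: int, queued_requests: int,
                   max_queue_per_gpu: int = 8, min_gpus: int = 1) -> int:
    load = queued_requests / active_gpus
    if load > max_queue_per_gpu:
        # Scale out: enough GPUs to bring queue depth back under target.
        return -(-queued_requests // max_queue_per_gpu)  # ceiling division
    if load < max_queue_per_gpu / 4 and active_gpus > min_gpus:
        # Scale in when lightly loaded, but never below the floor.
        return max(min_gpus, -(-queued_requests // max_queue_per_gpu))
    return active_gpus  # load is in the acceptable band; hold steady
```

For example, 64 queued requests on 4 GPUs (queue depth 16) would trigger a scale-out to 8 GPUs, while 4 requests on 8 GPUs would shrink the pool back to the minimum.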

    NVIDIA Dynamo is being released as a fully open-source project, offering broad compatibility with popular frameworks such as PyTorch, SGLang, NVIDIA TensorRT-LLM, and vLLM. This open approach supports enterprises, startups, and researchers in developing and optimising novel methods for serving AI models across disaggregated inference infrastructures.

    NVIDIA expects Dynamo to accelerate the adoption of AI inference across a wide range of organisations, including major cloud providers and AI innovators like AWS, Cohere, CoreWeave, Dell, Fireworks, Google Cloud, Lambda, Meta, Microsoft Azure, Nebius, NetApp, OCI, Perplexity, Together AI, and VAST.

    NVIDIA Dynamo: Supercharging inference and agentic AI

    A key innovation of NVIDIA Dynamo lies in its ability to map the knowledge that inference systems hold in memory from serving previous requests, known as the KV cache, across potentially thousands of GPUs.

    The software then intelligently routes new inference requests to the GPUs that possess the best knowledge match, effectively avoiding costly recomputations and freeing up other GPUs to handle new incoming requests. This smart routing mechanism significantly enhances efficiency and reduces latency.
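The routing idea above can be sketched as a prefix-matching problem: send a request to the GPU whose cached prompts share the longest prefix with it, so the prefill work already done is reused rather than recomputed. The helper names and the token-list cache model are illustrative assumptions, not Dynamo's data structures.

```python
# Sketch of KV-cache-aware routing: pick the GPU whose cached prompt
# prefixes overlap the incoming request most, avoiding recomputation.
def shared_prefix_len(a: list, b: list) -> int:
    """Length of the common leading run of two token sequences."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n


def route(request_tokens: list, gpu_caches: dict) -> str:
    # gpu_caches maps a GPU id to the token prefixes held in its KV cache.
    def best_overlap(gpu):
        return max((shared_prefix_len(request_tokens, prefix)
                    for prefix in gpu_caches[gpu]), default=0)
    return max(gpu_caches, key=best_overlap)


caches = {
    "gpu0": [[1, 2, 3, 4]],    # holds a long matching prefix
    "gpu1": [[9, 9], [1, 5]],  # little overlap with the request
}
target = route([1, 2, 3, 7, 8], caches)
```

Here the request shares three leading tokens with gpu0's cache but only one with gpu1's, so the router sends it to gpu0 and the first three tokens of prefill are effectively free.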

    “To handle hundreds of millions of requests monthly, we rely on NVIDIA GPUs and inference software to deliver the performance, reliability and scale our business and users demand,” said Denis Yarats, CTO of Perplexity AI.

    “We look forward to leveraging Dynamo, with its enhanced distributed serving capabilities, to drive even more inference-serving efficiencies and meet the compute demands of new AI reasoning models.”

    AI platform Cohere is already planning to leverage NVIDIA Dynamo to enhance the agentic AI capabilities within its Command series of models.

    “Scaling advanced AI models requires sophisticated multi-GPU scheduling, seamless coordination and low-latency communication libraries that transfer reasoning contexts seamlessly across memory and storage,” explained Saurabh Baji, SVP of engineering at Cohere.

    “We expect NVIDIA Dynamo will help us deliver a premier user experience to our enterprise customers.”

    Support for disaggregated serving

    The NVIDIA Dynamo inference platform also features robust support for disaggregated serving. This advanced technique assigns the different computational phases of LLMs – including the crucial steps of understanding the user query and then generating the most appropriate response – to different GPUs within the infrastructure.

    Disaggregated serving is particularly well-suited for reasoning models, such as the new NVIDIA Llama Nemotron model family, which employs advanced inference techniques for improved contextual understanding and response generation. By allowing each phase to be fine-tuned and resourced independently, disaggregated serving improves overall throughput and delivers faster response times to users.

    Together AI, a prominent player in the AI Acceleration Cloud space, is also looking to integrate its proprietary Together Inference Engine with NVIDIA Dynamo. This integration aims to enable seamless scaling of inference workloads across multiple GPU nodes. Furthermore, it will allow Together AI to dynamically address traffic bottlenecks that may arise at various stages of the model pipeline.

    “Scaling reasoning models cost effectively requires new advanced inference techniques, including disaggregated serving and context-aware routing,” stated Ce Zhang, CTO of Together AI.

    “The openness and modularity of NVIDIA Dynamo will allow us to seamlessly plug its components into our engine to serve more requests while optimising resource utilisation—maximising our accelerated computing investment. We’re excited to leverage the platform’s breakthrough capabilities to cost-effectively bring open-source reasoning models to our users.”

    Four key innovations of NVIDIA Dynamo

    NVIDIA has highlighted four key innovations within Dynamo that contribute to reducing inference serving costs and enhancing the overall user experience:

    • GPU Planner: A sophisticated planning engine that dynamically adds and removes GPUs based on fluctuating user demand. This ensures optimal resource allocation, preventing both over-provisioning and under-provisioning of GPU capacity.
    • Smart Router: An intelligent, LLM-aware router that directs inference requests across large fleets of GPUs. Its primary function is to minimise costly GPU recomputations of repeat or overlapping requests, thereby freeing up valuable GPU resources to handle new incoming requests more efficiently.
    • Low-Latency Communication Library: An inference-optimised library designed to support state-of-the-art GPU-to-GPU communication. It abstracts the complexities of data exchange across heterogeneous devices, significantly accelerating data transfer speeds.
    • Memory Manager: An intelligent engine that manages the offloading and reloading of inference data to and from lower-cost memory and storage devices. This process is designed to be seamless, ensuring no negative impact on the user experience.
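The Memory Manager's offload-and-reload behaviour resembles a two-tier cache: when the fast GPU tier fills, the least recently used entries move to a cheaper host tier and are pulled back on access. The class below is a minimal sketch of that pattern under assumed names; it models memory tiers as plain dictionaries and is not Dynamo's implementation.

```python
# Sketch of tiered KV-cache offloading: an LRU-bounded "GPU" tier
# spills to a cheaper "host" tier, and entries reload on access.
from collections import OrderedDict


class TieredKvCache:
    def __init__(self, gpu_capacity: int):
        self.capacity = gpu_capacity
        self.gpu = OrderedDict()  # fast, bounded tier (LRU order)
        self.host = {}            # cheaper, effectively unbounded tier

    def put(self, key, value):
        self.gpu[key] = value
        self.gpu.move_to_end(key)  # mark as most recently used
        while len(self.gpu) > self.capacity:
            # Offload the least recently used entry to host memory.
            old_key, old_val = self.gpu.popitem(last=False)
            self.host[old_key] = old_val

    def get(self, key):
        if key in self.gpu:
            self.gpu.move_to_end(key)
            return self.gpu[key]
        if key in self.host:
            # Reload on access; this may offload something else.
            value = self.host.pop(key)
            self.put(key, value)
            return value
        return None


cache = TieredKvCache(gpu_capacity=2)
cache.put("a", "kv-a")
cache.put("b", "kv-b")
cache.put("c", "kv-c")  # "a" is now offloaded to the host tier
```

After the third insert, "a" has been silently moved to the host tier; a later `cache.get("a")` brings it back and offloads the new least-recent entry instead, which is the seamless movement the feature description refers to.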

    NVIDIA Dynamo will be made available within NIM microservices and will be supported in a future release of the company’s AI Enterprise software platform. 

    Source: https://www.artificialintelligence-news.com/
