    Breaking AI News
    Technology & Innovation

    Vectara Launches Open Source Framework for RAG Evaluation

    By Art Ryan | April 9, 2025

    Vectara, a platform for enterprise Retrieval-Augmented Generation (RAG) and AI-powered agents and assistants, today announced the launch of Open RAG Eval, its open-source RAG evaluation framework.

    The framework, developed with researchers from the University of Waterloo, lets enterprise users evaluate response quality for each component and configuration of their RAG systems, so they can quickly and consistently optimize the accuracy and reliability of their AI agents and other tools.

    Vectara Founder and CEO Amr Awadallah said, “AI implementations – especially for agentic RAG systems – are growing more complex by the day. Sophisticated workflows, mounting security and observability concerns along with looming regulations are driving organizations to deploy bespoke RAG systems on the fly in increasingly ad hoc ways. To avoid putting their entire AI strategies at risk, these organizations need a consistent, rigorous way to evaluate
    performance and quality. By collaborating with Professor Jimmy Lin and his exceptional team at the University of Waterloo, Vectara is proactively tackling this challenge with our Open RAG Eval.”

    Professor Jimmy Lin is the David R. Cheriton Chair in the School of Computer Science at the University of Waterloo. He and members of his team are pioneers in creating world-class benchmarks and datasets for information retrieval evaluation.

    Professor Lin said, “AI agents and other systems are becoming increasingly central to how enterprises operate today and how they plan to grow in the future. In order to capitalize on the promise these technologies offer, organizations need robust evaluation methodologies that combine scientific rigor and practical utility in order to continually assess and optimize their RAG systems. My team and I have been thrilled to work with Vectara to bring our research findings to the enterprise in a way that will advance the accuracy and reliability of AI systems around the world.”

    Open RAG Eval is designed to determine the accuracy and usefulness of the responses provided to user prompts, depending on the components and configuration of an enterprise RAG stack. The framework assesses response quality according to two major metric categories: retrieval metrics and generation metrics.
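    To make the two metric categories concrete, here is a minimal sketch of what a per-query retrieval metric and generation metric might look like. The function names and scoring rules below are illustrative assumptions, not Vectara's actual API or the metrics Open RAG Eval ships with:

    ```python
    # Hypothetical sketch of the two metric categories a RAG evaluator assesses.
    # Names and scoring rules are illustrative, not Open RAG Eval's actual API.

    def retrieval_relevance(retrieved_ids, relevant_ids):
        """Fraction of retrieved passages judged relevant (precision-style)."""
        if not retrieved_ids:
            return 0.0
        relevant = set(relevant_ids)
        hits = sum(1 for pid in retrieved_ids if pid in relevant)
        return hits / len(retrieved_ids)

    def generation_groundedness(answer_tokens, source_tokens):
        """Fraction of answer tokens that also appear in the retrieved
        sources -- a crude proxy for how grounded the answer is."""
        if not answer_tokens:
            return 0.0
        source_vocab = set(source_tokens)
        grounded = sum(1 for tok in answer_tokens if tok in source_vocab)
        return grounded / len(answer_tokens)

    # Example: 2 of 3 retrieved passages are relevant; 3 of 4 answer tokens
    # are grounded in the retrieved text.
    ret = retrieval_relevance(["p1", "p2", "p3"], ["p1", "p3", "p7"])
    gen = generation_groundedness(
        "vectara evaluates rag hallucinations".split(),
        "vectara evaluates rag pipelines and systems".split())
    print(round(ret, 2), round(gen, 2))  # 0.67 0.75
    ```

    Real frameworks typically use LLM-based judges or learned models rather than token overlap, but the split into a retrieval score and a generation score mirrors the two categories described above.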

    This first iteration of Open RAG Eval shows developers how a RAG pipeline performs along selected metrics. By inspecting these metric categories, an evaluator can compare otherwise ‘black-box’ systems on separate or aggregate scores.

    A low relevance score, for example, may indicate that the user should upgrade or reconfigure the system’s retrieval pipeline, or that there is no relevant information in the dataset. Lower-than-expected generation scores, meanwhile, may mean that the system should use a stronger LLM – in cases where, for example, the generated response includes hallucinations – or that the user should update their RAG prompts.

    The new framework is designed to seamlessly evaluate any RAG pipeline, including Vectara’s own GenAI platform or any other custom RAG solution.

    Open RAG Eval helps AI teams solve such real-world deployment and configuration challenges as:
    ● Whether to use fixed-token chunking or semantic chunking;
    ● Whether to use hybrid or vector search, and what value to use for lambda in hybrid search deployments;
    ● Which LLM to use and how to optimize RAG prompts;
    ● Which threshold to use for hallucination detection and correction, and more.
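    The lambda parameter in hybrid search is typically a blend weight between vector similarity and lexical (keyword) scoring. The formula below is the common generic form, shown only to illustrate the tuning decision; Vectara's exact implementation may differ:

    ```python
    # Generic hybrid-search blend: lambda weights vector similarity against
    # the lexical (keyword) score. Any given platform's formula may differ.

    def hybrid_score(vector_score: float, keyword_score: float, lam: float) -> float:
        """lam=1.0 -> pure vector search; lam=0.0 -> pure keyword search."""
        return lam * vector_score + (1.0 - lam) * keyword_score

    # A document that matches keywords strongly but is semantically weaker
    # ranks very differently depending on lambda:
    print(round(hybrid_score(0.40, 0.90, lam=0.75), 3))  # 0.525, vector-leaning
    print(round(hybrid_score(0.40, 0.90, lam=0.25), 3))  # 0.775, keyword-leaning
    ```

    An evaluation framework lets teams sweep lambda and compare retrieval scores across settings rather than picking the weight by intuition.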

    Vectara’s decision to launch Open RAG Eval as an open-source, Apache 2.0-licensed tool reflects its track record of establishing industry standards: the company’s open-source Hughes Hallucination Evaluation Model (HHEM), which targets hallucination mitigation, has been downloaded over 3.5 million times on Hugging Face.

    As AI systems continue to grow rapidly in complexity – especially with agentic AI on the rise – and as RAG techniques continue to evolve, organizations will need open and extendable AI evaluation frameworks to help them make the right choices. Such frameworks also let organizations leverage their own data, add their own metrics, and measure their existing systems against emerging alternatives. Vectara’s open-source and extendable approach will help Open RAG Eval keep pace with these dynamics by enabling ongoing contributions from the AI community, while ensuring that the implementation of each suggested and contributed evaluation metric is well understood and open for review and improvement.

    Source: https://insideainews.com/

    © 2026 Breaking AI News.