    Breaking AI News
    Technology & Innovation

    Meta plans to replace humans with AI to assess privacy and societal risks

By Art Ryan, June 1, 2025

    For years, when Meta launched new features for Instagram, WhatsApp and Facebook, teams of reviewers evaluated possible risks: Could it violate users’ privacy? Could it cause harm to minors? Could it worsen the spread of misleading or toxic content?

    Until recently, what are known inside Meta as privacy and integrity reviews were conducted almost entirely by human evaluators.

    But now, according to internal company documents obtained by NPR, up to 90% of all risk assessments will soon be automated.

    In practice, this means things like critical updates to Meta’s algorithms, new safety features and changes to how content is allowed to be shared across the company’s platforms will be mostly approved by a system powered by artificial intelligence — no longer subject to scrutiny by staffers tasked with debating how a platform change could have unforeseen repercussions or be misused.

Inside Meta, the change is being viewed as a win for product developers, who will now be able to release app updates and features more quickly. But current and former Meta employees fear the new automation push comes at the cost of allowing AI to make tricky determinations about how Meta’s apps could lead to real-world harm.

    “Insofar as this process functionally means more stuff launching faster, with less rigorous scrutiny and opposition, it means you’re creating higher risks,” said a former Meta executive who requested anonymity out of fear of retaliation from the company. “Negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”

    Meta said in a statement that it has invested billions of dollars to support user privacy.

    Since 2012, Meta has been under the watch of the Federal Trade Commission after the agency reached an agreement with the company over how it handles users’ personal information. As a result, privacy reviews for products have been required, according to current and former Meta employees.

    In its statement, Meta said the product risk review changes are intended to streamline decision-making, adding that “human expertise” is still being used for “novel and complex issues,” and that only “low-risk decisions” are being automated.

    But internal documents reviewed by NPR show that Meta is considering automating reviews for sensitive areas including AI safety, youth risk and a category known as integrity that encompasses things like violent content and the spread of falsehoods.

    Former Meta employee: ‘engineers are not privacy experts’

    A slide describing the new process says product teams will now in most cases receive an “instant decision” after completing a questionnaire about the project. That AI-driven decision will identify risk areas and requirements to address them. Before launching, the product team has to verify it has met those requirements.

Meta Founder and CEO Mark Zuckerberg speaks at LlamaCon 2025, an AI developer conference, in Menlo Park, Calif., Tuesday, April 29, 2025. (AP Photo/Jeff Chiu)

Under the prior system, product and feature updates could not be sent to billions of users until they received the blessing of risk assessors. Now, engineers building Meta products are empowered to make their own judgments about risks.

In some cases, including projects involving new risks or where a product team wants additional feedback, projects will be given a manual review by humans, the slide says. But that review will no longer be the default, as it used to be; now, the teams building products will make that call.

    “Most product managers and engineers are not privacy experts and that is not the focus of their job. It’s not what they are primarily evaluated on and it’s not what they are incentivized to prioritize,” said Zvika Krieger, who was director of responsible innovation at Meta until 2022. Product teams at Meta are evaluated on how quickly they launch products, among other metrics.

    “In the past, some of these kinds of self-assessments have become box-checking exercises that miss significant risks,” he added.

    Krieger said while there is room for improvement in streamlining reviews at Meta through automation, “if you push that too far, inevitably the quality of review and the outcomes are going to suffer.”

    Meta downplayed concerns that the new system will introduce problems into the world, pointing out that it is auditing the decisions the automated systems make for projects that are not assessed by humans.

    The Meta documents suggest its users in the European Union could be somewhat insulated from these changes. An internal announcement says decision making and oversight for products and user data in the European Union will remain with Meta’s European headquarters in Ireland. The EU has regulations governing online platforms, including the Digital Services Act, which requires companies including Meta to more strictly police their platforms and protect users from harmful content.

    Some of the changes to the product review process were first reported by The Information, a tech news site. The internal documents seen by NPR show that employees were notified about the revamping not long after the company ended its fact-checking program and loosened its hate speech policies.

    Taken together, the changes reflect a new emphasis at Meta in favor of more unrestrained speech and more rapidly updating its apps — a dismantling of various guardrails the company has enacted over the years to curb the misuse of its platforms. The big shifts at the company also follow efforts by CEO Mark Zuckerberg to curry favor with President Trump, whose election victory Zuckerberg has called a “cultural tipping point.”

    Is moving faster to assess risks ‘self-defeating’?

    Another factor driving the changes to product reviews is a broader, years-long push to tap AI to help the company move faster amid growing competition from TikTok, OpenAI, Snap and other tech companies.

    Meta said earlier this week it is relying more on AI to help enforce its content moderation policies.

    “We are beginning to see [large language models] operating beyond that of human performance for select policy areas,” the company wrote in its latest quarterly integrity report. It said it’s also using those AI models to screen some posts that the company is “highly confident” don’t break its rules.

    “This frees up capacity for our reviewers allowing them to prioritize their expertise on content that’s more likely to violate,” Meta said.

    Katie Harbath, founder and CEO of the tech policy firm Anchor Change, who spent a decade working on public policy at Facebook, said using automated systems to flag potential risks could help cut down on duplicative efforts.

    “If you want to move quickly and have high quality you’re going to need to incorporate more AI, because humans can only do so much in a period of time,” she said. But she added that those systems also need to have checks and balances from humans.

    Another former Meta employee, who spoke on condition of anonymity because they also fear reprisal from the company, questioned whether moving faster on risk assessments is a good strategy for Meta.

    “This almost seems self-defeating. Every time they launch a new product, there is so much scrutiny on it — and that scrutiny regularly finds issues the company should have taken more seriously,” the former employee said.

    Michel Protti, Meta’s chief privacy officer for product, said in a March post on its internal communications tool, Workplace, that the company is “empowering product teams” with the aim of “evolving Meta’s risk management processes.”

    The automation roll-out has been ramping up through April and May, said one current Meta employee familiar with product risk assessments who was not authorized to speak publicly about internal operations.

    Protti said automating risk reviews and giving product teams more say about the potential risks posed by product updates in 90% of cases is intended to “simplify decision-making.” But some insiders say that rosy summary of removing humans from the risk assessment process greatly downplays the problems the changes could cause.

    “I think it’s fairly irresponsible given the intention of why we exist,” said the Meta employee close to the risk review process. “We provide the human perspective of how things can go wrong.”

    Source: https://www.npr.org/
