    Breaking AI News
    Technology & Innovation

    Microsoft is taking down AI hackers

    By Art Ryan | May 9, 2025 (Updated: May 9, 2025)

    It was a slow Friday afternoon in July when a seemingly isolated problem appeared on the radar of Phillip Misner, head of Microsoft’s AI Incident Detection and Response team. Someone had stolen a customer’s unique access code for an AI image generator and was going around safeguards to create sexualized images of celebrities. 

    Misner and his coworkers revoked the code but soon saw more stolen customer credentials, or API keys, pop up on an anonymous message board known for spreading hateful material. They escalated the issue into a company-wide security response in what has now become Microsoft’s first legal case to stop people from creating harmful AI content.  

    “We take the misuse of AI very seriously and recognize the harm of abusive images to victims,” Misner says.  

    Court documents detail how Microsoft is dismantling a global network alleged to have created thousands of abusive AI images of celebrities, women and people of color. Many of the images were sexually explicit, misogynistic, violent or hateful.  

    The company says the network, dubbed Storm-2139, includes six people who built tools to break into Azure OpenAI Service and other companies’ AI platforms in a “hacking-as-a-service scheme.” Four of those people — located in Iran, England, Hong Kong and Vietnam — are named as defendants in Microsoft’s civil complaint filed in the U.S. District Court for the Eastern District of Virginia. The complaint alleges another 10 people used the tools to bypass AI safeguards and create images in violation of Microsoft’s terms of use.  

    “This case sends a clear message that we do not tolerate the abuse of our AI technology,” says Richard Boscovich, assistant general counsel for the company’s Digital Crimes Unit (DCU). “We are taking down their operation and serving notice that if anyone abuses our tools, we will go after you.”  

    Keeping people safe online 

    The lawsuit is part of the company’s longtime work in fostering digital safety, from responding to cyberthreats and disrupting criminals to building safe and secure AI systems. The efforts include working with lawmakers, advocates and victims to protect people from explicit images shared without their consent — regardless of whether the images are real, or made or modified with AI.  

    “This kind of image abuse disproportionately targets women and girls, and the era of AI has fundamentally changed the scale at which it can happen,” says Courtney Gregoire, vice president and chief digital safety officer at Microsoft. “Core to our approach in digital safety is listening to those who’ve been impacted negatively by technology and taking a multi-layered approach to mitigate harm.”

    Soon after DCU filed its initial complaint in December, it seized a website, blocked the activity and continued building its case. The lawsuit prompted network members to turn on each other, share the case lawyers’ emails and send anonymous tips casting blame. That helped investigators name defendants in court in a public strategy to deter other AI abusers. An amended complaint in February led to more network chatter and evidence for the team’s ongoing investigation. 

    “The pressure heated up on this group, and they started giving up information on each other,” says Maurice Mason, a principal investigator with DCU.  


    Thousands of malicious AI prompts 

    Investigators say the defendants built and promoted a suite of software for illicitly accessing image-generating models and a reverse proxy service that hid the activity and saved images to a computer in Virginia. The stolen credentials used to authenticate access belonged to Azure customers who had left them exposed on a public platform.  
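The exposed-credential failure described above is one that defenders can often catch with simple scanning of public text for key-shaped strings. A minimal illustrative sketch follows — the 32-character hexadecimal pattern is a generic placeholder, not Azure's actual key format:

```python
import re

# Hypothetical pattern for a 32-character hex API key. Real cloud providers
# use varying key formats, so this regex is illustrative only.
KEY_PATTERN = re.compile(r"\b[0-9a-f]{32}\b")

def find_exposed_keys(text: str) -> list[str]:
    """Return candidate API keys found in a blob of public text."""
    return KEY_PATTERN.findall(text)

# A config line accidentally committed to a public repository
snippet = 'OPENAI_SERVICE_KEY = "0123456789abcdef0123456789abcdef"'
print(find_exposed_keys(snippet))  # -> ['0123456789abcdef0123456789abcdef']
```

Real-world secret scanners pair patterns like this with entropy checks and provider-specific prefixes to cut false positives.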

    People using the tools went out of their way to bypass Microsoft’s content safety filters. They iterated on blocked prompts, shared bypassing techniques and entered thousands of malicious prompts designed to manipulate AI models into ignoring safeguards to get what they wanted, investigators say. 

    When content filters rejected prompts for celebrity images — a safeguard against deepfakes — the users substituted physical descriptions of celebrities in some cases. When filters barred prompts for harmful content, users replaced letters and words with technical notations like subscripts to trick the AI model. Microsoft has addressed those bypassing methods and enhanced its AI safeguards in response to the incident.  

    “One of the takeaways we found in analyzing prompts and images is that it’s very clear their intention was to skirt guardrails and produce images that were prohibited,” says Michael McDonald, a senior data analyst with DCU. 

    Disrupting and deterring abuse 

    The company also helped affected customers improve their security — part of an ongoing investment in safeguards and security against evolving AI risks and harmful content. One safeguard, provenance metadata called Content Credentials, helped investigators establish the origin of many of the discovered images. Microsoft attaches the metadata to images made with its AI to provide information transparency and combat deepfakes. The company is also a longtime leader of the industry group that created Content Credentials.  

    “The misuse of AI has real, lasting impacts,” says Sarah Bird, chief product officer for Responsible AI at Microsoft. “We are continuously innovating to build strong guardrails and implement security measures to ensure our AI technologies are safe, secure and reliable.”  

    Microsoft alleges the defendants violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act and other U.S. laws. Investigators are referring other people in the U.S. and abroad to law enforcement agencies for criminal charges. The team has shared information about the images and legal case with known victims.  
     
    “We will continue to monitor this network and identify additional defendants as needed to stop the abuse,” Boscovich says.  

    Source: https://news.microsoft.com/


    © 2026 Breaking AI News.