AI-Powered Development is Creating a Non-Human Identity Crisis: Here’s What CISOs Need to Know in 2025
While coding assistants like GitHub Copilot have revolutionized developer productivity, they’ve simultaneously created an explosion of machine identities that are overwhelming traditional security approaches.
Between 2023 and 2024 alone, the number of repositories using Copilot increased by 27%, confirming that developers are increasingly relying on AI tools to enhance their productivity. This acceleration shows no signs of slowing in 2025, as GitHub now includes Copilot in its free tier, further lowering barriers to adoption.

However, this AI revolution comes with significant security implications. According to GitGuardian’s State of Secrets Sprawl 2025, repositories where Copilot is active exhibit a 40% higher incidence of secret leaks compared to the average public repository. This alarming statistic reveals that as AI accelerates development, it’s simultaneously accelerating security risks—particularly those related to non-human identities.
Understanding Non-Human Identities in the Age of AI#
Non-Human Identities (NHIs) refer to machine-based identities that require authentication and authorization to access enterprise systems, data, and applications. These identities include service accounts, API keys, cloud workloads, IoT devices, bots, and automation scripts, all of which operate without direct human intervention but play a critical role in business processes. Unlike human users, NHIs authenticate via API tokens, cryptographic certificates, and automated trust mechanisms rather than traditional usernames and passwords. They’re also created at a vastly higher rate, often without proper governance or oversight.
The rise of AI-powered development has dramatically accelerated this proliferation. AI agents in particular introduce unique challenges, as they require broader permissions than traditional deterministic bots to accomplish their tasks. Since AI agents autonomously determine the best path for completing assignments, they often necessitate expansive access rights that increase the attack surface.
This problem extends beyond just the quantity of credentials. When developers integrate AI assistants into their workflows, they frequently:
- Generate API keys directly within code drafts
- Create temporary credentials that never get rotated
- Deploy machine identities with excessive permissions
- Lose track of which AI systems have access to which resources
The result is a sprawling ecosystem of credentials that are difficult to track, manage, and secure—creating an ideal target for attackers looking to gain unauthorized access to critical systems.
The Data on AI-Generated Secrets: A Concerning Reality#
GitGuardian’s analysis of approximately 20,000 repositories with active Copilot usage revealed a troubling pattern: over 1,200 repositories leaked at least one secret, representing 6.4% of the sample. This rate is 40% higher than the average across all public repositories, which stands at 4.6%.

This disparity points to two critical factors. First, the code generated by Large Language Models (LLMs) may inherently contain more security vulnerabilities. Second, and perhaps more concerning, is that developers using AI assistants may be prioritizing speed over security, inadvertently creating more opportunities for credential exposure.
This reality presents a stark contrast to developer expectations. GitHub’s survey revealed that 99% of U.S. respondents expect AI coding tools to improve security. The contradiction between that optimistic perception and the reality of increased secret leaks suggests a dangerous disconnect: despite continuous advances, AI coding assistants aren’t delivering security gains when it comes to credential management. In fact, they may be exacerbating existing problems even as developers increasingly trust them for security.
Three Critical Vulnerabilities in AI-Accelerated Development#
1. AI Agents and Permission Sprawl#
AI agents require broader access permissions than traditional automation to accomplish their tasks effectively. Since they determine their own execution paths rather than following fixed, predetermined logic, they’re typically granted wider read, write, and even creation/deletion permissions.
This creates a significant security challenge: how do you scope permissions for a system that chooses its own execution path? Many organizations err on the side of permissiveness to avoid blocking the AI’s work. The result is a growing number of over-privileged machine identities that could grant attackers extensive access if compromised.
Consider an AI-powered procurement agent that analyzes needs, compares vendors, negotiates with other AI systems, and places orders. Each secure communication requires separate credentials across multiple systems. If any single credential is leaked, attackers could potentially execute unauthorized purchases or access sensitive vendor data.
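One way to bound the blast radius in a scenario like this is to hand the agent short-lived, narrowly scoped credentials for each task rather than a standing key. Below is a minimal sketch, assuming the agent runs against AWS and uses an STS session policy to narrow its role; the role ARN and bucket name are purely illustrative:

```python
import json
import boto3

# Session policy that narrows whatever the underlying role allows down to
# the single bucket this agent actually needs (illustrative resource).
session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::procurement-quotes/*",
    }],
}

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/procurement-agent",  # illustrative
    RoleSessionName="procurement-agent-run",
    Policy=json.dumps(session_policy),  # effective permissions = role AND session policy
    DurationSeconds=900,                # 15-minute credentials, nothing long-lived to leak
)["Credentials"]

# The agent operates only with these short-lived, scoped credentials.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```

Because the token expires in minutes and can only touch the resources it names, a leak exposes far less than a long-lived, broadly scoped credential would.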
2. AI-Generated Code and Embedded Secrets#
Perhaps the most direct security risk comes from AI coding assistants like GitHub Copilot generating code that contains hardcoded credentials. Since these assistants are trained on vast repositories of code—including code with poor security practices—they often replicate these same problematic patterns.
For example, when a developer asks an AI assistant to generate API calls to cloud services, it might produce code like this:
```python
import requests

API_KEY = "sk_live_ABC123XYZ"

response = requests.get(
    "https://api.example.com/data",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
```
While experienced developers might immediately recognize the security risk of hardcoded credentials, newer developers or those under time pressure might simply replace the placeholder with a real API key. Once committed to a repository, these credentials become vulnerable to extraction by malicious actors.
GitGuardian’s findings confirm this isn’t just a theoretical concern. The 40% higher incidence of leaked secrets in AI-assisted repositories demonstrates that this is happening at scale, creating significant risk for organizations embracing AI development tools.
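The remediation reviewers should insist on is keeping the secret out of the source entirely. Here is a minimal sketch of the same call with the credential injected at runtime through an environment variable; the variable name is illustrative, and the guardrails below argue for going a step further and reading it from a secrets vault:

```python
import os
import requests

# The credential is supplied by the runtime environment, never committed;
# EXAMPLE_API_KEY is an illustrative variable name.
API_KEY = os.environ["EXAMPLE_API_KEY"]

response = requests.get(
    "https://api.example.com/data",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
```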
3. Prompt-Based Architecture and Data Leakage#
The prompt-based architecture of AI systems introduces another vector for credential exposure. When developers interact with AI assistants by sharing context, commands, and data through prompts, sensitive information can inadvertently be exposed.
This risk extends beyond development teams. As AI adoption spreads throughout organizations, even non-technical teams may use AI assistants in ways that expose credentials. For instance, a finance team might use an AI chatbot with a prompt like “Find all invoices over $100,000 using API key ABC123,” inadvertently exposing the key in logs or training data.
AI agents often ingest, process, and store data from various sources, including:
- Cloud storage such as AWS S3 and Google Drive
- Enterprise applications like Jira, Confluence, and Salesforce
- Messaging systems, including Slack and Microsoft Teams
- Internal knowledge bases and documentation
- Code repositories and version control systems
The broad access required by these AI agents creates multiple paths for credential exposure. Any system the agent can reach becomes a potential location for leaked credentials, significantly expanding the attack surface: if the agent’s identity is compromised, an attacker inherits that same reach.
The risk of data leakage through prompts is particularly concerning because it can happen even when teams follow security best practices in their actual code. The interaction layer with AI itself becomes a new attack surface that traditional security tools aren’t designed to monitor.
Building Security Guardrails for AI-Driven Development#
As AI continues to transform development practices, organizations need new approaches to secure the growing ecosystem of non-human identities. Here are the critical strategies CISOs should implement in 2025:
1. Implementing NHI Governance for the AI Era#
The first step in addressing AI-driven NHI risks is establishing clear governance frameworks that account for the unique challenges of machine identities in AI environments:
- Create accountability structures: Define clear ownership for every non-human identity, including those created by or for AI systems. This prevents credentials from becoming orphaned when team members leave or projects conclude.
- Enforce least privilege by default: Implement technical controls that ensure AI systems and agents receive only the minimum permissions necessary to complete their tasks, even if this requires more granular permission management.
- Implement regular credential rotation: Establish automated rotation schedules for all machine identities, with shorter lifespans for credentials with higher privileges.
- Mandate secrets vaulting: Require all machine credentials to be stored in secure vaults rather than configuration files, environment variables, or code; a minimal retrieval sketch follows this list.
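As a minimal sketch of that vaulting requirement, assuming AWS Secrets Manager as the vault (other vaults expose similar read APIs) and an illustrative secret name:

```python
import boto3

# The credential lives only in the vault; code, config files, and
# environment variables never hold the value itself.
secrets = boto3.client("secretsmanager")
api_key = secrets.get_secret_value(SecretId="prod/example-api-key")["SecretString"]
```

Rotation then becomes a vault-side operation rather than a code change, which also supports the rotation schedules described above.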
The goal is to create a framework that accommodates the speed and flexibility of AI-powered development while maintaining rigorous security standards for all machine identities.
2. Security Technologies for the AI-Accelerated Enterprise#
Traditional security tools weren’t designed for the volume or velocity of non-human identities generated in AI-powered environments. GitHub’s survey found that 52% of U.S. respondents already use automation tools for security reviews throughout the development process. While that automation is becoming standard, in 2025 organizations also need specialized solutions that specifically address NHI security:
- Automated NHI discovery tools: Implement solutions that can automatically discover AI-agent credentials across enterprise environments, including those outside of designated vaults.
- Secret analyzers: Deploy tools that analyze discovered credentials to identify those with excessive permissions, helping prioritize remediation efforts.
- AI-aware secret scanning: Implement secret scanning tools that can detect credentials in AI prompts, logs, and generated code, not just traditional code repositories.
- Prompt sanitization: Consider solutions that can sanitize AI prompts to prevent sensitive data from being shared with large language models; a lightweight sketch follows this list.
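That sanitization step can be as simple as a regex pass over outgoing prompts, as in the minimal sketch below. The patterns are illustrative; a production deployment would rely on a dedicated secrets-detection engine with far broader coverage:

```python
import re

# Illustrative patterns for common credential shapes; real deployments
# should use a maintained detection engine, not a handful of regexes.
SECRET_PATTERNS = [
    re.compile(r"sk_live_[0-9a-zA-Z]+"),             # payment-style live keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS access key IDs
    re.compile(r"(?i)bearer\s+[0-9a-z\-._~+/]+=*"),  # bearer tokens
]

def sanitize_prompt(prompt: str) -> str:
    """Redact anything that looks like a credential before the prompt is sent."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(sanitize_prompt("Pull usage data with key sk_live_ABC123XYZ"))
# -> Pull usage data with key [REDACTED]
```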

These technologies create a security mesh that can identify, analyze, and secure non-human identities across the entire AI lifecycle, from development to deployment.
3. Developer Education in the Age of AI#
Technology alone isn’t sufficient to address AI-driven security challenges. Organizations must also evolve their security culture and developer education:
- AI-specific security training: Develop training programs that address the unique security challenges of AI-powered development, including proper credential management in AI workflows.
- Security champions: Identify and empower security champions within development teams who can advocate for secure AI practices and provide guidance on credential management.
- AI-generated testing: GitHub’s survey found that 92% of U.S. developers report using AI coding tools to generate test cases at least some of the time. Leverage this capability to include security-specific test cases that can identify credential leaks and improper NHI management.
- Pre-commit reviews: Implement peer review processes specifically focused on identifying hardcoded credentials or other security issues in AI-generated code; a lightweight automated backstop is sketched after this list.
- Secure default templates: Create secure templates for AI interactions that include placeholders reminding developers to use credential vaults rather than hardcoded secrets.
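As that backstop, here is a minimal sketch of a pre-commit hook that blocks a commit when staged changes contain credential-like strings. The patterns are illustrative, and a dedicated scanner such as GitGuardian’s ggshield covers far more secret types:

```python
#!/usr/bin/env python3
"""Illustrative .git/hooks/pre-commit script: abort the commit if staged
changes contain anything that looks like a hardcoded credential."""
import re
import subprocess
import sys

PATTERNS = [
    re.compile(r"sk_live_[0-9a-zA-Z]+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]+['\"]"),
]

# Only inspect lines being added in this commit.
staged = subprocess.run(
    ["git", "diff", "--cached", "--unified=0"],
    capture_output=True, text=True, check=True,
).stdout

hits = [line for line in staged.splitlines()
        if line.startswith("+") and any(p.search(line) for p in PATTERNS)]

if hits:
    print("Possible hardcoded credentials in staged changes:")
    for line in hits:
        print("  " + line)
    sys.exit(1)  # a non-zero exit aborts the commit
```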
Building a security culture that accommodates AI’s speed while maintaining security standards is essential for managing non-human identity risks.
Source: https://thehackernews.com/