The year 2025 presents a critical juncture for cybersecurity leaders. While the promise of Artificial Intelligence (AI) for productivity and innovation is undeniable, its rapid adoption, coupled with the proliferation of non-human identities (NHIs) and low-code/no-code platforms, is fueling an unprecedented surge in secrets sprawl across the enterprise. For CISOs, the challenge is clear: harness AI’s potential while establishing robust, technology-first defenses against an expanding and increasingly complex attack surface.
The New Threat Landscape: Beyond Human Error
The traditional security perimeter is dissolving, replaced by a distributed environment where secrets are exposed in unexpected places, often outside the direct control and visibility of security teams.
- Exploding Secrets Sprawl: In 2024 alone, 23.8 million new hardcoded secrets were added to public GitHub repositories, marking a 25% year-over-year increase. More alarmingly, 70% of valid secrets detected in 2022 remain active today, providing attackers with prolonged access.
- The Rise of Non-Human Identities (NHIs): These machine identities, such as API keys, service accounts, and AI agents, now vastly outnumber human identities and are crucial for modern DevOps and cloud-native environments. However, NHI secrets are prone to sprawl, often lack proper offboarding plans, and commonly have excessive permissions (e.g., 96% of GitHub tokens had write access, 95% full repository access).
- Shadow AI and Unmanaged Usage: Employees are freely downloading and using unauthorized AI applications, browser extensions, and mobile tools—“Shadow AI”—without IT approval. They routinely paste sensitive information, including passwords, company credentials, customer records, financial data, and trade secrets, into public AI services like ChatGPT. This creates an invisible data exposure risk, with credential leaks from shadow AI stretching the median remediation time to 94 days.
- AI-Generated Code Risks: AI coding assistants like GitHub Copilot, while boosting productivity, are contributing to the problem. Repositories using Copilot show a 40% higher incidence rate of secrets (6.4%) compared to all public repositories (4.6%), suggesting either less secure generated code or developers prioritizing speed over security.
- No-Code/Low-Code Platforms: Tools like Zapier, n8n, Airtable, and Supabase are seeing a rise in leaks. These platforms are increasingly used by “low coders or no coders” who may be less trained in secret security, accelerating secrets sprawl within “shadow IT” environments.
- Overlooked Collaboration and Artifacts: Secrets are not confined to code repositories. Collaboration tools like Jira, Slack, and Confluence are “overlooked frontiers” where secrets are shared, with 38% of incidents classified as highly critical or urgent. Similarly, container images on Docker Hub contained over 100,000 valid secrets, including 7,000 active AWS keys, often embedded in image layers.
- Escalating Regulatory Pressure: Regulators are sharpening their enforcement tools, with 59 AI regulations issued by U.S. agencies in 2024 alone, more than double the previous year. Companies are often blind to their own AI usage, unknowingly violating provisions of GDPR, CCPA, HIPAA, and SOX on a daily basis, with severe financial and legal consequences.
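The sprawl figures above come from automated scanning. The core idea can be illustrated with a minimal pattern-based scanner; the patterns below are illustrative only, since production scanners such as gitleaks or GitGuardian combine hundreds of provider-specific patterns with entropy analysis and validity checks:

```python
import re

# Illustrative patterns only -- real scanners use far richer detection.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_assignment": re.compile(
        r"(?i)\b(api_key|secret|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

if __name__ == "__main__":
    sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\npassword = "hunter2hunter2"'
    for name, value in scan_text(sample):
        print(f"{name}: {value}")
```

Running the same logic in a pre-commit hook or CI pipeline is what makes the "shift-left" controls discussed later practical.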
Strategic Imperatives for CISOs: Bridging the Remediation Gap
Addressing this pervasive secrets sprawl demands a holistic, technology-first approach that integrates prevention, detection, and rapid remediation across the entire digital ecosystem.
- End the Security Delusion and Face Reality:
- Conduct an honest assessment of actual AI usage within your organization, not just theoretical frameworks. Over 83% of companies lack automatic controls to prevent sensitive data uploads to public AI tools, and many overestimate their AI governance capabilities by over threefold.
- Prioritize tracking and controlling inputs into AI systems as much as monitoring outputs.
- Deploy Technology-First Controls:
- Automated blocking and scanning are the bare minimum. Relying on human-dependent measures like training and warning emails has consistently failed.
- Implement real-time monitoring for leaked credentials across all environments—code repositories, collaboration tools, container images, and AI systems.
- Establish Data Governance Command Centers:
- Create unified governance platforms that track every data movement, enforce data classification policies (a fundamental requirement for compliance), and maintain audit trails across all AI touchpoints.
- Gain total visibility into real-time AI monitoring across cloud, on-premises, and shadow IT. Data lineage tracking from creation through AI processing to final outputs is no longer optional.
- Implement Comprehensive Secrets Management and NHI Governance:
- Vault Everything: Consistently use centralized secrets managers like HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, CyberArk, or Akeyless.
- Combat “Vault Sprawl”: Develop strategies to centralize and consolidate secrets management solutions, as juggling multiple tools across teams can lead to fragmentation and inconsistent practices.
- Automate Rotation and Revocation: This is crucial. Since 70% of leaked secrets remain active for years, automated (or at minimum semi-automated) secret rotation policies are essential to eliminate long-lived credentials.
- Enforce Least Privilege: Grant only the minimum necessary permissions to NHIs. Over-privileging, often done for convenience, dramatically amplifies the impact of a compromised credential.
- NHI Lifecycle Management: Develop clear decommissioning and offboarding plans for NHIs and their associated secrets, just as you would for human employees.
- Shift-Left Security: Integrate secret detection tools (e.g., pre-commit hooks, push protection) directly into developer workflows to prevent secrets from entering the codebase in the first place.
- Embrace “Secretless” Approaches: Explore alternative authentication and authorization mechanisms that minimize reliance on traditional secrets, such as dynamic, just-in-time, and ephemeral credentials.
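The vaulting and rotation practices above can be sketched in code. This is a minimal, hypothetical pattern, not a definitive implementation: it uses `os.environ` as a stand-in backing store, where a real deployment would call a secrets manager SDK (e.g., hvac for HashiCorp Vault or boto3 for AWS Secrets Manager) inside `_fetch()`. Application code fetches secrets at use time through a short-lived cache, so a rotation performed in the vault propagates without redeploying the service.

```python
import os
import time

class SecretCache:
    """Fetch secrets from a backing store and re-fetch after a short TTL,
    so rotated credentials are picked up without restarting the service.

    The backend here is os.environ as a stand-in; a real deployment would
    call a secrets manager SDK inside _fetch().
    """

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._cache: dict[str, tuple[str, float]] = {}

    def _fetch(self, name: str) -> str:
        value = os.environ.get(name)
        if value is None:
            raise KeyError(f"secret {name!r} not found in backing store")
        return value

    def get(self, name: str) -> str:
        entry = self._cache.get(name)
        now = time.monotonic()
        if entry is None or now - entry[1] > self.ttl:
            value = self._fetch(name)
            self._cache[name] = (value, now)
            return value
        return entry[0]
```

The design choice is that callers never hold a credential longer than one TTL window; code calls `cache.get("DB_PASSWORD")` at the point of use instead of reading the secret once at startup.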
- Secure AI Data Sources and Logs:
- Scrub Knowledge Bases: Before connecting Large Language Models (LLMs) to internal data sources like Confluence, Jira, Slack, or internal wikis, scan and clean every knowledge base for secrets. An LLM with access to an unscrubbed knowledge base can turn a chatbot into an “internal secrets-leaking engine”.
- Monitor and Sanitize AI Logs: Treat AI system logs as sensitive infrastructure. Monitor, sanitize, and audit them regularly to prevent multiple copies of leaked secrets from proliferating across third-party logging tools.
- Role-Based Access for RAG: Implement role-based access to Retrieval-Augmented Generation (RAG) systems, restricting document retrieval based on user roles and document sensitivity.
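The role-based RAG restriction above can be sketched as a retrieval-time filter. This is a toy keyword retriever under assumed data structures (`Document` and `allowed_roles` are illustrative names, not a real framework's API); a production stack would push the same role filter into the vector store query, before ranking:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    doc_id: str
    text: str
    allowed_roles: frozenset  # roles permitted to retrieve this document

def retrieve(query: str, corpus: list[Document], user_roles: set) -> list[Document]:
    """Toy retriever: keyword match restricted to documents the user's
    roles may see. Filtering happens BEFORE retrieval, so restricted
    content never reaches the LLM context window."""
    visible = [d for d in corpus if d.allowed_roles & user_roles]
    return [d for d in visible if query.lower() in d.text.lower()]
```

The key point is ordering: access control is applied before documents enter the candidate set, so a prompt can never coax the model into summarizing something the user was not entitled to retrieve.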
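Log sanitization can likewise be automated at the emission point rather than left to manual review. A minimal sketch using Python's standard `logging.Filter` hook follows; the redaction patterns are illustrative, not exhaustive:

```python
import logging
import re

# Illustrative patterns -- a real deployment would reuse its secret
# scanner's full pattern set here.
REDACTION_PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),               # AWS-style key IDs
    re.compile(r"(?i)\b(?:password|token|secret)=\S+"),  # key=value pairs
]

class SecretRedactingFilter(logging.Filter):
    """Logging filter that masks likely credentials before a record is
    emitted, so a leaked secret does not proliferate into third-party
    log aggregators."""

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        for pattern in REDACTION_PATTERNS:
            msg = pattern.sub("[REDACTED]", msg)
        record.msg, record.args = msg, None
        return True  # always emit the (sanitized) record
```

Attaching the filter to the root logger sanitizes every record an application produces, which is the "treat AI logs as sensitive infrastructure" posture applied mechanically.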
- Invest in Human Capital and Collaboration:
- Comprehensive Developer Training: Provide clear guidelines for developers on secure vault usage and the significant risks of hardcoding secrets. Foster a culture of compliance where security is a shared organizational priority.
- Vet Third-Party Vendors: Third-party involvement in breaches doubled from 15% to 30% in one year. Regularly audit vendors and enforce strict security controls, especially for managed file transfer systems used for AI data exchange.
- Collaboration with Law Enforcement: Involving law enforcement during extortion attacks can reduce breach costs by nearly $1 million.
The window for action is rapidly closing. Explosive AI adoption, surging security incidents, and accelerating regulation are colliding: model training contamination is permanent, and every piece of sensitive data shared today can become tomorrow’s compliance violation and competitive disadvantage. Companies will divide into two groups: those who secured their AI usage, and those explaining their failures to regulators, customers, and shareholders. By adopting a proactive, technology-first approach to secrets management and NHI governance, CISOs can build resilient, secure systems that confidently navigate the complexities of 2025 and beyond.