Gartner: 50% of Incident Response Will Be AI-Related by 2028 — Here’s How to Prepare Your Career
If you’re building a cybersecurity career in 2026, here’s the number that should shape your next two years of skill development: by 2028, at least half of all enterprise incident response efforts will focus on security incidents involving custom-built AI applications. That’s not a fringe prediction from a startup’s marketing deck — it’s from Gartner, the world’s largest technology research firm, announced at their Security & Risk Management Summit in Sydney.
“AI is evolving quickly, yet many tools — especially custom-built AI applications — are being deployed before they’re fully tested,” warned Christopher Mixter, VP Analyst at Gartner. “These systems are complex, dynamic, and difficult to secure over time. Most security teams still lack clear processes for handling AI-related incidents, which means issues can take longer to resolve and require far more effort.”
Translation: the incident response team of 2028 will spend as much time dealing with AI problems as they do with traditional network intrusions, ransomware, and data breaches combined. The career implications are enormous.
What “AI-Related Incidents” Actually Means
This isn’t about AI being used to attack you (though that’s happening too). Gartner’s prediction focuses on incidents caused by your own AI applications — the custom-built models, agents, RAG pipelines, and AI-powered tools that organizations are deploying at breakneck speed.
Categories of AI Incidents
Prompt Injection and Manipulation Attackers feed malicious inputs to AI systems to make them behave in unintended ways — leaking data, bypassing controls, or executing unauthorized actions. As AI agents gain more autonomy and tool access, prompt injection becomes a pathway to real-world damage.
Data Poisoning and Training Data Compromise If the data used to train or fine-tune AI models is tampered with, the model’s outputs become unreliable or actively harmful. Detecting and responding to data poisoning requires skills most IR teams don’t currently have.
AI Supply Chain Attacks Models downloaded from public repositories, third-party integrations, and AI service dependencies all create supply chain risk. A compromised model or a malicious MCP server can exfiltrate data or inject backdoors.
Shadow AI Gartner previously predicted that by 2025, 40% of firms would experience shadow AI security incidents. Employees deploying AI tools without IT approval creates unmonitored, unmanaged AI systems that security teams only discover during incident response.
Machine Identity Explosion A Sysdig report found that machine identities now outnumber human users by 40,000 to one and present 7.5 times more risk. Over-permissioned AI agents are particularly concerning — when an AI agent has access to databases, APIs, and internal tools, compromising its credentials gives an attacker broad access.
AI Model Misbehavior Models that hallucinate, produce harmful outputs, or behave unpredictably in production create incidents that traditional IR playbooks aren’t designed to handle. What do you do when your customer-facing AI starts providing incorrect medical information or leaking internal pricing?
The Skills Gap
Here’s the career opportunity: most security teams don’t have the skills to handle these incidents today.
Traditional incident response training covers:
- Network forensics
- Malware analysis
- Log analysis and SIEM correlation
- Endpoint detection and response
- Ransomware containment and recovery
None of that prepares you for:
- Analyzing whether a model’s outputs were manipulated via prompt injection
- Determining if training data was poisoned
- Investigating AI agent behavior logs to identify unauthorized actions
- Assessing whether a RAG pipeline is leaking sensitive documents
- Responding to a compromised MCP server configuration
This is a green field for career development. The professionals who build these skills now — while the field is still forming — will be the ones leading AI incident response teams in 2028.
Career Paths to Build Now
1. AI Security Incident Responder
What it is: The evolution of traditional IR, focused on investigating and containing AI-specific security incidents.
Skills to develop:
- Understanding of LLM architectures, fine-tuning, and RAG systems
- Prompt injection detection and analysis
- AI agent behavior analysis and audit log interpretation
- Machine identity and non-human identity (NHI) governance
- Familiarity with AI frameworks (LangChain, LlamaIndex, CrewAI, MCP)
How to start:
- Take OWASP’s LLM Top 10 training
- Practice prompt injection techniques on CTF platforms
- Set up Ollama locally and learn how LLM APIs work
- Study real AI security incidents (Samsung code leak, Air Canada chatbot case, etc.)
Expected demand: High. Every organization deploying custom AI will need this capability.
2. AI Red Team Specialist
What it is: Proactively testing AI systems for vulnerabilities before attackers find them. Gartner specifically recommends organizations “shift left” by involving security teams early in AI development.
Skills to develop:
- Prompt injection and jailbreak technique development
- Model extraction and inversion attacks
- Training data extraction
- AI agent privilege escalation testing
- Adversarial machine learning
Certifications and training:
- NVIDIA AI Red Team certification
- MITRE ATLAS framework knowledge
- AI Village (DEF CON) participation
- Practical experience with tools like Garak, PyRIT, and AI Verify
Expected demand: Very high. Gartner predicts that by 2028, half of organizations will use AI security platforms — someone needs to test them.
3. AI Governance and Compliance Analyst
What it is: Ensuring AI deployments meet security, privacy, and regulatory requirements. This is where security meets policy.
Skills to develop:
- AI regulation knowledge (EU AI Act, NIST AI RMF, ISO 42001)
- Risk assessment for AI systems
- AI model inventory and lifecycle management
- Data sovereignty requirements for AI training data
- Third-party AI vendor risk assessment
Why it matters: Gartner also predicts that by 2027, nearly 30% of organizations will demand “comprehensive sovereignty” of cloud security controls. AI governance sits at the intersection of security, compliance, and the growing sovereignty trend.
4. Machine Identity and NHI Security Specialist
What it is: Managing the security of non-human identities — API keys, service accounts, AI agent credentials, tokens, and certificates — that now outnumber human users by 40,000:1.
Skills to develop:
- Secrets management platforms (HashiCorp Vault, CyberArk, etc.)
- Certificate and token lifecycle management
- Zero-trust architecture for machine identities
- AI agent credential management and least-privilege design
- Automated secret rotation and remediation
Why it’s growing: The GitGuardian report found 29 million secrets leaked on GitHub in 2025. As AI agents proliferate, each with their own credentials and permissions, machine identity security becomes critical.
5. AI Security Platform Engineer
What it is: Building and operating the security platforms that protect AI deployments. Gartner predicts that by 2028, half of organizations will use AI security platforms.
Skills to develop:
- AI observability and monitoring
- Guardrail implementation for LLM applications
- Content filtering and output validation
- AI-aware WAF and API gateway configuration
- Integration of AI security into CI/CD pipelines
How to Get Started This Week
-
Read the OWASP LLM Top 10 — It’s the baseline vocabulary for AI security. Understand each risk category and how it maps to incident response.
-
Set up a local AI environment — Install Ollama, run a model, and experiment with the API. Understanding how these systems work at a technical level is prerequisite to securing them.
-
Study the MITRE ATLAS framework — ATLAS (Adversarial Threat Landscape for AI Systems) is to AI security what ATT&CK is to traditional security. Learn the tactics and techniques.
-
Follow AI security researchers — Simon Willison, Johann Rehberger, Joseph Thacker, Daniel Miessler, and others regularly publish AI security research and practical attacks.
-
Practice — Platforms like Gandalf (by Lakera), Prompt Airlines, and AI-specific CTF challenges let you develop hands-on AI security skills.
-
Start documenting AI in your current environment — What AI tools are deployed? What credentials do they use? What data do they access? This inventory exercise is the foundation of AI incident preparedness.
The Timeline
Gartner’s 2028 prediction gives the cybersecurity workforce roughly two years to upskill. That sounds like plenty of time until you consider:
- AI deployments are accelerating, not slowing down
- IR team sizes aren’t growing proportionally
- The skills required are genuinely new — not extensions of existing knowledge
- Organizations that wait until 2028 to build AI IR capability will be responding to incidents they don’t understand with tools they haven’t built
The professionals who invest in AI security skills now — while the specialization is still forming and the competition is low — will have a significant career advantage. By 2028, when half of IR is AI-related, the demand for these skills will far outstrip supply.
The Gartner prediction isn’t a warning. It’s a career roadmap. Start building.
Sources
- Infosecurity Magazine, “AI Issues Will Drive Half of Incident Response Efforts by 2028, Says Gartner,” March 18, 2026
- Communications Today, “AI to drive half of enterprise cyber incident response by 2028,” March 18, 2026
- TechNadu, “Commonwealth Bank Builds Custom AI Threat Hunter,” March 17, 2026
- IT Brief Asia, “Custom AI to drive half of cyber incidents by 2028,” March 16, 2026
- Sysdig, “Machine identities outnumber human users 40,000 to 1,” 2025
- Gartner Security & Risk Management Summit, Sydney, March 2026



