
Cybercriminals aren’t just using AI – they’re weaponizing it. Deepfakes, automated phishing and AI-written malware are emerging as some of the fastest growing threats on enterprise radar. According to Foundry’s 2025 Security Priorities Study, AI-enabled attacks are now one of the top concerns for security buyers, even as most organizations are investing or planning to invest in AI-powered security. The battle lines are clear: AI vs. AI.
Recent CSO reporting already paints a disturbing picture of what is happening. Autonomous AI agents are learning to execute full attack chains – from reconnaissance and exploitation to evasion and data theft – without human direction. Researchers have documented AI models used to generate extortion emails, launch ransomware, and discover new vulnerabilities in minutes. As one expert put it, attackers are now “operating at the speed and scale of computers,” threatening to tilt the balance of power decisively in their favor.
For defenders, the answer is not to blindly match automation with automation. Interviews with security leaders conducted by CSO describe a growing focus on treating AI as “not an autopilot, but a co-pilot.” Well-governed AI can accelerate detection, testing, and containment, but it still depends on strong human oversight to remain effective. “The real win isn’t just speed,” one CISO explained. “It’s handling routine things so that analysts can focus on the complex and strategic problems that machines can’t do.”

