As companies rush to adopt AI tools to improve operational efficiency, the same technology is quietly opening the door to attackers. Trend Micro recently released its 2026 cybersecurity forecast report, pointing out that AI is moving from a simple auxiliary tool to a highly autonomous "industrialized" stage. This means that future cybersecurity battles will be a race between AI and AI, and if companies neglect governance, their internal automated processes could even become springboards for attackers.
Trend 1: AI Hacking "Automation" and Professional Division of Labor in the Underground Economy
One of the boldest predictions in the report is that 2026 will be a key turning point for AI-driven "automation hackers".
With the proliferation of AI agents, future attacks will no longer require extensive human collaboration. Malicious organizations will be able to leverage AI to automatically search for vulnerabilities, chain APIs, and even mimic the behavior of corporate employees, independently executing continuous attacks across multiple platforms.
Furthermore, the underground economy will also give rise to a new "Access-as-a-Service" model. The attack chain will be broken down into more fragmented, specialized parts: one party seizes initial access rights, while another handles the subsequent operations, forming a highly efficient threat ecosystem centered on AI.
Trend 2: Supply Chains and GPU Computing Power Become Prime Targets
At the infrastructure level, supply chains, cloud computing, and hybrid multi-cloud environments remain the main battlegrounds. Trend Micro predicts that attackers will continue to target supply chain vulnerabilities such as open-source suites, CI/CD processes, and AI model libraries.
It's worth noting that as enterprises increasingly rely on GPUs for AI training, the computing power resources of this hybrid architecture have become prime targets for hackers. Attackers may attempt to steal computing power, launch cross-tenant attacks, or even exploit GPU-level vulnerabilities for penetration.
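One basic defense against the supply-chain tampering described above is verifying that an artifact, such as a downloaded AI model file, still matches a digest pinned at review time. The sketch below is a minimal, hypothetical illustration (the file name and helper names are invented for the example), not a description of any specific Trend Micro control:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path, expected: str) -> bool:
    """Refuse to use an artifact whose digest differs from the pinned one."""
    return sha256_of(path) == expected

# Demo: pin a digest at "review time", then detect a later swap.
artifact = Path("model.bin")                      # illustrative placeholder name
artifact.write_bytes(b"trusted model weights")
pinned = sha256_of(artifact)                      # recorded in a lockfile / CI config
assert verify_artifact(artifact, pinned)

artifact.write_bytes(b"tampered weights")         # simulated supply-chain swap
assert not verify_artifact(artifact, pinned)
```

In a CI/CD pipeline, the same idea appears as pinned dependency versions and checksum-verified downloads, so a compromised upstream package or model library fails the build instead of entering production.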
Trend 3: Hidden Dangers of "Vibe Coding": Automated Processes May Become Internal Threats
In addition to external threats, changes in development culture within enterprises also pose risks. Trend Micro specifically points out that as developers embrace "Vibe Coding" (over-relying on AI to intuitively generate code in order to accelerate development), hidden weaknesses are accumulating rapidly.
Such AI-generated code, which rarely undergoes rigorous review, is likely to become an entry point for automated attacks. More dangerous still, if a company's systems are compromised, the automated processes originally built to improve efficiency can be hijacked and turned into "insider" attacks, forcing the company to contend simultaneously with external hackers and its own manipulated AI systems.
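A minimal, hypothetical illustration of the kind of weakness quickly generated code often carries: splicing user input directly into an SQL query instead of using parameter binding. All table and function names below are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_unsafe(name: str):
    # Typical unreviewed, AI-style snippet: the input is spliced into the
    # query string, so a crafted name can rewrite the query's logic.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Reviewed version: parameter binding keeps the input as data only.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # every row leaks: injection succeeded
print(find_user_safe(payload))    # no rows: input treated as plain data
```

The two functions differ by a single line, which is exactly why such flaws slip through when generated code is merged without auditing.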
Expert Recommendations: Visibility Is Key, and a Risk-Oriented Framework Should Be Established
In response to the challenges brought about by the industrialization of AI, Trend Micro General Manager Hung Wei-kan emphasized that AI is reshaping the cybersecurity landscape and blurring the lines of defense. Enterprises that do not prioritize attack surface management, governance, and identity security will find it difficult to build resilience.
Chien Sheng-tsai, senior technical consultant at Trend Micro, suggests that in an era where AI agents possess reasoning and autonomous capabilities, enterprises must prioritize improving the visibility of their AI operational tools and processes. By adopting platforms such as Trend Vision One with Cyber Risk Exposure Management (CREM), enterprises can gain a fundamental understanding of the state of their digital assets and stop attacks before they take shape.