Most security tools were designed for attacks that produced visible, repeatable artifacts. Signature-based detection could catalog malicious payloads. Rule engines could match known attack sequences. Anomaly detection could flag statistical outliers. The defensive architecture worked because attacks remained consistent long enough for detection logic to adapt.
That consistency no longer exists. AI-driven tooling allows every component of an attack to change in seconds: payloads rewrite themselves, command infrastructure rotates continuously, and techniques vary between executions. Threat actors are now using LLM-driven code regeneration to rewrite malware source code hourly to evade detection. Traditional approaches were not built for this level of volatility. AI did not just accelerate attacks. It gave adversaries the ability to iterate faster than defenders can respond.
Why detection rules break down
Rules are precise, and precision is fragile. A detection rule fires only when a specific sequence occurs exactly as written. If lateral movement uses PowerShell with particular command-line arguments, the rule matches those arguments. If credential dumping invokes a known LSASS access pattern, the rule triggers on that pattern. The logic is deterministic: match the signature, raise the alert.
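To make that fragility concrete, here is a minimal sketch of deterministic matching. The event format, field name, and regex are hypothetical stand-ins for illustration, not any product's actual rule syntax:

```python
import re

# A minimal sketch of a deterministic detection rule, assuming a
# hypothetical event format with a "cmdline" field. The pattern is
# illustrative, not taken from any specific product.
LATERAL_MOVEMENT_RULE = re.compile(
    r"powershell(\.exe)?\s+-enc\s+[A-Za-z0-9+/=]+", re.IGNORECASE
)

def rule_matches(event: dict) -> bool:
    """Fire only when the command line matches the signature exactly."""
    return bool(LATERAL_MOVEMENT_RULE.search(event.get("cmdline", "")))

# The rule fires on the variant it was written for...
assert rule_matches({"cmdline": "powershell.exe -enc SQBFAFgA"})

# ...but a functionally equivalent, regenerated variant slips past it.
assert not rule_matches(
    {"cmdline": "pwsh -EncodedCommand SQBFAFgA"}  # same behavior, new syntax
)
```

The second command does exactly what the first does, yet the rule never sees it. Multiply that by every rule in the content pipeline and every regeneration cycle, and the coverage gap compounds.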
Modern attack frameworks regenerate their components between uses: lateral movement scripts rewrite command syntax after each execution, credential access tools randomize their process injection techniques, and exfiltration logic varies its network patterns based on observed monitoring posture. AI can continuously rewrite a malware codebase to evade defenses, and polymorphic families like Ghost ransomware rotate payload variants rapidly. Every intrusion uses functionally equivalent but technically distinct infrastructure and payloads. The content pipeline that feeds rule- and signature-based systems cannot update fast enough to keep pace.
The result is that precision turns into blindness. Rules designed to catch specific threats now miss most of them because the specificity that made them accurate also made them narrow. Broadening rules to catch variants introduces false positives that make the alerts unusable. The fundamental trade-off between precision and coverage becomes untenable when attacks deliberately avoid producing consistent artifacts.
Why network inspection tools lost their edge
Deep packet inspection filled the gaps left by signature-based detection. Rather than matching known malware samples, network tools analyzed traffic content to identify suspicious patterns: unusual protocols, malformed packets, command and control communication embedded in seemingly legitimate requests. These tools operated on the assumption that defenders could see what attackers were doing if they looked closely enough at network data.
Almost all enterprise traffic now uses strong encryption by default: over 95% of global web traffic was encrypted with HTTPS as of 2025, Firefox reports that more than 80% of the web is encrypted, and Google reports 95% encryption across its services. This shift improved security posture in many ways, but it also eliminated the visibility that network inspection tools depended on. Research indicates that 93% of malware now lurks in encrypted traffic, with attackers using the same encryption standards to hide their activity. Command channels now run over legitimate cloud APIs and collaboration platforms that security teams cannot inspect without breaking functionality the business depends on.
Metadata and timing analysis can provide some detection value, but they rarely offer enough context to distinguish malicious activity from legitimate use when both travel over the same encrypted channels to the same trusted destinations. An attacker exfiltrating data through a cloud storage API produces metadata that looks nearly identical to an employee uploading files as part of their normal workflow. Network visibility was built for an era when most data traveled in cleartext. That era ended, and the tools designed for it lost most of their detection surface.
Why anomaly detection cannot compensate
Anomaly detection promised to catch the unexpected rather than match known threats. The logic made sense: build models of normal operations, flag deviations from those models, investigate the outliers. This approach should theoretically catch novel attacks that rule-based systems miss because it does not depend on prior knowledge of attack techniques. In practice, modern attacks are designed to look routine rather than anomalous.
An adversary using valid credentials to access legitimate endpoints at normal volumes fits comfortably within statistical boundaries that anomaly models consider acceptable. The individual actions are not unusual. A user logging in is normal. A process accessing the registry is normal. A network connection to a cloud service is normal. Attackers chain together sequences of normal-looking actions that collectively achieve malicious objectives. Each step avoids triggering anomaly thresholds because each step matches expected patterns when evaluated in isolation.
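A toy example makes the blind spot visible. The baseline statistics and event counts below are fabricated; the point is that every step of a plausible intrusion scores inside a standard three-sigma threshold:

```python
# Toy illustration of why per-event anomaly scoring misses chained
# attacks. Baseline statistics and the event stream are invented;
# what matters is that every step scores inside the threshold.
BASELINE = {  # hypothetical per-action (mean, std) of hourly counts
    "login":         (40.0, 10.0),
    "registry_read": (200.0, 50.0),
    "cloud_upload":  (15.0, 5.0),
}

def z_score(action: str, count: float) -> float:
    mean, std = BASELINE[action]
    return abs(count - mean) / std

# One hour of an intrusion: valid credentials, routine volumes.
attack_chain = [("login", 42), ("registry_read", 230), ("cloud_upload", 18)]

for action, count in attack_chain:
    assert z_score(action, count) < 3.0  # every step looks normal in isolation
```

No single event deviates, so no alert fires, even though the ordered chain is a credential-driven exfiltration.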
Cloud and identity systems change constantly: continuous scaling, shifting workloads, and frequent updates make it difficult to establish a stable baseline for normal behavior. Configuration drift, where infrastructure deviates from its intended baseline, is common in multi-cloud setups because of manual changes, varying tools, and regional rule differences. New users join, roles change, applications get deployed, services migrate to different infrastructure. Anomaly models must continuously retrain to avoid flagging legitimate changes as suspicious, and that retraining teaches them to accept an increasingly broad range of behaviors as normal. The detection boundaries expand until they are too loose to catch adaptive attacks. The alternative, keeping baselines static, produces alert fatigue as every operational change triggers false positives. Either way, anomaly detection lacks the context to understand whether a sequence of individually normal actions represents malicious intent or legitimate business activity.
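The drift mechanism can be sketched in a few lines. Assuming a simple rolling baseline with an alert threshold at 150% of the trailing mean (both invented for illustration), gradual creep never fires because each retrain absorbs the behavior it just accepted:

```python
# Toy illustration of baseline drift under continuous retraining.
# All numbers are fabricated; the mechanism is what matters: each
# retrain absorbs the latest behavior, so "normal" keeps widening.
window = [100.0] * 20  # hourly event counts the model last trained on

def threshold(w: list[float]) -> float:
    return (sum(w) / len(w)) * 1.5  # alert above 150% of the trailing mean

for hour in range(10):
    observed = 100.0 + hour * 5  # activity creeps upward: drift, or attack?
    assert observed < threshold(window)   # never fires
    window = window[1:] + [observed]      # retrain on what was just accepted

# After ten hours the model quietly accepts a 45% increase as normal.
```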
Why behavioral AI and UEBA miss the point
User and entity behavior analytics was meant to bridge the gap between rules and context by learning patterns of legitimate user activity and raising alerts when those patterns changed. The concept addressed a real problem: traditional detection tools treated each event independently, missing attacks that unfolded gradually across multiple actions. UEBA promised to track activity over time and detect subtle shifts that indicated compromise.
The technology inherits the same fundamental limitations as other pattern-matching approaches. UEBA systems learn what users typically do and flag deviations from those patterns. Modern enterprises change faster than behavioral models stabilize. Employees shift roles, adopt new tools, collaborate with different teams, and adjust their workflows based on evolving business needs. Every change requires model updates. Continuous retraining and alert suppression become operational necessities just to keep the system functional.
More critically, most UEBA implementations treat each identity or entity as independent. They track what a specific user does but lack awareness of relationships between actions across time and across multiple entities. Attacks rarely involve just one compromised account, and APT actors now use AI across the attack lifecycle: researching infrastructure, performing reconnaissance on target organizations, finding vulnerabilities, developing payloads, and assisting with malicious scripting and evasion techniques. An adversary gains initial access through one identity, escalates privileges using another, moves laterally through several accounts, and exfiltrates data from a service account that rarely logs in. Each step might look unremarkable when evaluated in isolation. UEBA systems flag individual deviations but miss the sequence that connects them into a coherent attack.
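Here is a compressed sketch of that blind spot, with fabricated identities and deviation scores. Each account is scored independently, as most UEBA deployments do, so a chain spread across four identities never crosses any per-entity threshold:

```python
from collections import defaultdict

# Sketch of the per-entity blind spot, using invented events and a
# made-up cumulative deviation threshold. Each identity is scored on
# its own, so the cross-identity chain never accumulates signal.
events = [
    {"identity": "contractor-7", "action": "login",          "deviation": 0.4},
    {"identity": "svc-backup",   "action": "role_change",    "deviation": 0.5},
    {"identity": "dev-lead",     "action": "remote_session", "deviation": 0.3},
    {"identity": "svc-reports",  "action": "bulk_export",    "deviation": 0.6},
]

ALERT_THRESHOLD = 1.0  # per-identity cumulative deviation before alerting

scores = defaultdict(float)
for e in events:
    scores[e["identity"]] += e["deviation"]

alerts = [who for who, score in scores.items() if score > ALERT_THRESHOLD]
assert alerts == []  # no single identity crosses the bar...

# ...even though the ordered sequence across identities reads as a
# textbook intrusion: initial access -> privilege escalation ->
# lateral movement -> exfiltration.
```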
The deeper problem: architectural assumptions
All of these technologies share one foundational assumption: that an attack will be visible as a discrete, identifiable event or pattern. A rule assumes a match will occur. A network inspection tool assumes traffic will be readable rather than encrypted. An anomaly model assumes malicious activity will deviate measurably from normal operations. These assumptions made sense when adversaries operated in bursts with consistent infrastructure.
The assumption fails when attacks are continuous, distributed, and context-aware, often actively managed by AI. Modern intrusions do not produce obvious bursts of malicious traffic. They blend into routine operations. Commands look like administrative tasks. Lateral movement resembles legitimate remote access. Data exfiltration follows the same paths as normal file transfers. The revealing pattern exists in the communication structure between IP pairs: which systems talk to which other systems, in what sequence, with what timing characteristics, and how these communication relationships differ from legitimate operational patterns between the same endpoints.
Detection tools were never designed to reason about communication structure in this way. They were built to match, inspect, or compare individual events or packets. Matching requires consistency. Inspection requires visibility. Comparison requires measurable deviation. None of these capabilities address the core challenge of recognizing malicious communication patterns when individual flows appear normal and stay within expected operational boundaries.
What modern detection must understand
Effective detection in an AI-driven threat landscape must understand the structure of communication sequences rather than the content of individual events. A login event, a role change, and a data export operation are all routine when viewed independently. The detection opportunity exists in how systems communicate with each other: which IP pairs exchange traffic, in what order connections occur, with what timing relationships and volume patterns.
This is fundamentally a reasoning problem about communication structure rather than a pattern-matching problem about event content. Flow records capture this structure. They show which systems communicate, in what order, with what volume patterns, and how those relationships evolve over time. When lateral movement logic rewrites its command syntax, the underlying communication pattern between IP pairs maintains recognizable characteristics: timing relationships between connections, data volume patterns, and the order in which different systems are contacted. These structural features of the communication persist even when the attack's surface artifacts change.
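As a rough illustration, the features below are the kind of structure a flow record sequence exposes for one IP pair. The schema and feature set are simplified inventions, not any specific product's telemetry format:

```python
from dataclasses import dataclass

# A simplified sketch of the structure flow records expose. Field
# names and the feature set are illustrative inventions.
@dataclass
class Flow:
    src: str
    dst: str
    ts: float       # start time, seconds
    byte_count: int
    dst_port: int

def pair_features(flows: list[Flow]) -> dict:
    """Summarize how one IP pair communicates: order, timing, volume."""
    flows = sorted(flows, key=lambda f: f.ts)
    gaps = [b.ts - a.ts for a, b in zip(flows, flows[1:])]
    return {
        "flow_count": len(flows),
        "total_bytes": sum(f.byte_count for f in flows),
        "mean_gap_s": sum(gaps) / len(gaps) if gaps else 0.0,
        "distinct_ports": len({f.dst_port for f in flows}),
        "contact_order": [f.dst_port for f in flows],  # sequence, not content
    }
```

Payloads can be regenerated endlessly, but these features describe the relationship between the endpoints, which is far harder for an attacker to vary without abandoning the objective.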
DeepTempo analyzes sequences of flow records between IP pairs and projects their behavioral representation into a multi-dimensional space. In this space, malicious communication sequences cluster separately from benign operational traffic because their structural patterns differ at a fundamental level. The embedding space preserves the characteristics of how two endpoints communicate. The structural features of IP pair communication remain stable even when command strings, payloads, and specific tools change.

Classifiers trained on these embeddings can identify malicious intent without depending on specific signatures or command strings. The platform detects reconnaissance not by matching port scan patterns, but by recognizing the flow sequence structure that emerges when one IP probes multiple other IPs in rapid succession with characteristic timing and volume patterns. It identifies credential abuse by the structural signature of authentication flows followed by access patterns between IP pairs that differ behaviorally from how those same endpoints normally communicate, regardless of which specific credentials were used.
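The general shape of that approach can be sketched with stand-in data. The code below is not DeepTempo's model; it only illustrates classifying sequence embeddings rather than content, using random vectors in place of real learned embeddings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic sketch of classifying flow-sequence embeddings, with random
# stand-in vectors. It shows the shape of the approach, not any
# actual model or embedding space.
rng = np.random.default_rng(0)
DIM = 64

# Pretend embeddings: benign sequences cluster near one region of the
# space, malicious sequences near another.
benign = rng.normal(loc=0.0, scale=1.0, size=(500, DIM))
malicious = rng.normal(loc=2.0, scale=1.0, size=(50, DIM))

X = np.vstack([benign, malicious])
y = np.array([0] * len(benign) + [1] * len(malicious))

clf = LogisticRegression(max_iter=1000).fit(X, y)

# A new sequence is scored by where its embedding lands, not by any
# signature, command string, or payload content.
new_embedding = rng.normal(loc=2.0, scale=1.0, size=(1, DIM))
print(clf.predict_proba(new_embedding)[0, 1])  # probability of malicious
```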
Closing note
The existing defensive stack remains valuable for handling threats that produce consistent, visible artifacts. Signatures still catch known malware. Network tools still block obviously malicious traffic. Anomaly detection still flags clear operational mistakes. These tools are not obsolete, but they are no longer sufficient. They were engineered for a level of visibility that no longer exists and attack consistency that adversaries no longer provide.
Modern attacks are contextual, low-noise, and constantly regenerating. They succeed not by being loud but by staying within normal boundaries while producing communication patterns between IP pairs that reveal malicious intent. Detection in this environment depends on reasoning about the structure of network communications rather than matching the content of individual events or recognizing predefined event progressions. The shift from content matching to communication structure analysis is not optional. It is necessary.
Related reading:
- Anomalies are not enough
- Rules, rules everywhere: Why signature-based detection falls short against AI threats
- The great security detection illusion: Why your "AI-powered" tools are still just playing by rules
- From packets to patterns: How foundation models detect network threats