MITRE: Multiple tactics (Defense Evasion, Command and Control, Credential Access)
Detection engineers face a structural problem: the indicators they rely on are built from past attacks, while adversaries now use AI to generate attacks that have never existed before. This creates a detection gap that widens with each new AI capability.
Indicator-based detection relies on known patterns: hash values, IP addresses, domain names, file signatures, registry keys. These indicators work when attackers repeat techniques or reuse infrastructure. But AI-driven attacks change the operational logic. Adversaries can now generate unique variants at scale, rendering indicator databases obsolete before they are published.
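To make the matching logic concrete, here is a minimal sketch of indicator-based detection in Python. The field names and feed contents are illustrative assumptions, not any particular product's schema:

```python
# A toy indicator set, as it might arrive in a threat intelligence feed.
KNOWN_BAD_HASHES = {"deadbeef" * 8}    # placeholder SHA-256, not a real sample
KNOWN_BAD_IPS = {"203.0.113.7"}        # documentation-range IP, placeholder
KNOWN_BAD_DOMAINS = {"c2.evil.example"}

def matches_indicator(event: dict) -> bool:
    """Return True if any atomic field of the event matches a known indicator."""
    return (
        event.get("sha256") in KNOWN_BAD_HASHES
        or event.get("dst_ip") in KNOWN_BAD_IPS
        or event.get("domain") in KNOWN_BAD_DOMAINS
    )

# An AI-generated variant ships with a fresh hash, IP, and domain: no match.
variant = {"sha256": "f00d" * 16, "dst_ip": "198.51.100.9", "domain": "cdn.fresh.example"}
print(matches_indicator(variant))  # False -- first contact goes undetected
```

The logic is exact-match by construction, which is precisely why a never-before-seen variant sails through.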
How indicators are created and why they lag
Security teams extract indicators from incidents after they occur. An attacker compromises a network, analysts investigate, they find malicious files or network connections, and they document the indicators. These get shared through threat intelligence feeds and integrated into detection rules. The problem is not the sharing speed but the detection speed.
Organizations take an average of 194 days to identify that a breach occurred, according to IBM's 2024 Cost of a Data Breach Report. By the time analysts extract IOCs from an incident, attackers have often been present for months. When those indicators finally get shared through threat intelligence feeds, they describe infrastructure and techniques the attacker used weeks or months ago.
This worked when attacks followed predictable patterns. When Emotet used specific C2 domains, blocking those domains provided value. When ransomware groups reused encryption binaries, signature-based detection caught subsequent infections. The assumption was that attackers would repeat their methods because building new tools required time and skill.
That assumption no longer holds. Recent reports from security researchers show that AI models can now design complete attack sequences, generate polymorphic malware, and adapt tactics in real time. Each attack can be unique while achieving the same objective. If an attacker uses AI to generate 100 variants of a credential theft tool, each with different file hashes and network behaviors, indicator-based detection catches none of them on first contact.
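The hash side of this is easy to demonstrate. In the sketch below (toy byte strings standing in for binaries, not real malware), two functionally equivalent variants one junk byte apart produce completely unrelated SHA-256 digests:

```python
import hashlib

# Two "variants" of the same tool: identical behavior, trivially different bytes.
variant_a = b"fetch_credentials(); exfiltrate();"
variant_b = b"fetch_credentials(); exfiltrate(); # v2"

print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
# The digests share nothing: an indicator for variant A says nothing about B.
```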
The indicator obsolescence problem versus AI generation speed
The operational reality is worse than a simple publication delay. Many IOCs become obsolete within hours or days of being identified. Attackers rotate infrastructure constantly. A malicious IP address identified on Monday might be abandoned by Tuesday. A C2 domain extracted from malware on Wednesday could be replaced by Thursday. Community threat intelligence feeds can publish indicators within hours of discovery, but by then the attacker has often moved to new infrastructure.
AI accelerates this rotation dramatically. An adversary can generate a thousand attack variants in minutes, each with different file hashes and network signatures. They can spin up new command servers in cloud environments that exist for hours before being replaced. The infrastructure changes faster than human analysts can document it.
This is not theoretical. Security vendors track polymorphic malware families that change their signatures on every deployment, and attackers use code obfuscation, packing, and encryption to ensure that binaries never match known hashes. Indicators become stale before they can be operationalized.
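One operational consequence is that feeds need aging logic before they are useful at all. A minimal sketch, assuming a first_seen timestamp on each indicator and an arbitrary 48-hour cutoff (real TTLs vary by indicator type):

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=48)  # arbitrary cutoff; tune per indicator type

def live_indicators(feed: list[dict], now: datetime | None = None) -> list[dict]:
    """Keep only indicators young enough to plausibly still be in use."""
    now = now or datetime.now(timezone.utc)
    return [ioc for ioc in feed if now - ioc["first_seen"] <= MAX_AGE]

feed = [
    {"value": "203.0.113.7", "first_seen": datetime.now(timezone.utc) - timedelta(hours=6)},
    {"value": "198.51.100.9", "first_seen": datetime.now(timezone.utc) - timedelta(days=5)},
]
print([ioc["value"] for ioc in live_indicators(feed)])  # only the 6-hour-old IP survives
```

Aging keeps the blocklist honest, but it also shrinks it: the faster the rotation, the less of the feed is ever actionable.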
Living off the land makes indicators irrelevant
The shift to living off the land techniques compounds the problem. Attackers now use legitimate system tools to achieve their objectives: PowerShell, WMI, legitimate remote access software. These generate no malicious indicators because the tools themselves are not malicious. The file hash for powershell.exe is the same whether the binary is run by IT or by an attacker; the process itself provides no signal.
Detection teams try to compensate by building behavioral rules around these tools. "Alert if PowerShell connects to an external IP." "Flag if WMI spawns suspicious child processes." These rules create noise because legitimate administrative activity triggers them constantly. Security operations teams spend time investigating false positives instead of real threats.
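The brittleness is easy to see when such a rule is written out. Here is a sketch of the PowerShell rule quoted above, with assumed event fields; routine administration trips it exactly as an attacker would:

```python
import ipaddress

INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"), ipaddress.ip_network("192.168.0.0/16")]

def powershell_external_alert(event: dict) -> bool:
    """'Alert if PowerShell connects to an external IP' -- the rule from above."""
    if event["process"].lower() != "powershell.exe":
        return False
    dst = ipaddress.ip_address(event["dst_ip"])
    return not any(dst in net for net in INTERNAL_NETS)

# A sysadmin pulling a module from a public gallery trips the rule.
admin = {"process": "powershell.exe", "dst_ip": "93.184.216.34"}
print(powershell_external_alert(admin))  # True: a false positive, every time
```

The rule has no way to encode intent, so every legitimate external PowerShell connection becomes an investigation.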
AI-driven attacks exploit this gap systematically. An attacker can probe a network, identify which administrative tools are used routinely, and craft an attack sequence that mimics normal operations. The attack uses the same tools, at the same time of day, with similar traffic volumes. No indicators fire because nothing looks anomalous at the atomic level. The attack succeeds because detection systems cannot distinguish intent from indicators alone.
The timelines tell the story. Median dwell time for ransomware dropped to 5 days in 2024, down from 9 days in 2022, according to Sophos Active Adversary and Mandiant M-Trends reports. Attackers know detection capabilities have improved, so they move faster. But the mean time to identify any breach remains 194 days. The gap between attacker speed and defender detection continues to widen. By the time indicators are extracted and shared, the infrastructure they describe has often been abandoned.
Why adding more indicators does not solve the problem
Some organizations respond by collecting more indicators. They subscribe to dozens of threat intelligence feeds, ingest millions of IOCs, and build correlation rules to connect them. This creates operational overhead without improving detection efficacy. The core issue is not the quantity of indicators but the nature of indicator-based logic itself.
Indicators describe what has been seen, not what could be done. They are reactive by definition. An adversary who understands this can simply avoid known indicators. If hash-based detection is prevalent, use polymorphic code. If IP reputation is enforced, rotate infrastructure. If domain names are blocked, use DGA or dead-drop resolvers. Each defensive measure has a known counter-move, and AI accelerates the adversary's ability to implement it.
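A toy domain generation algorithm illustrates the counter-move. In this simplified sketch (real DGAs vary widely), domains derive deterministically from a seed and the date, so a blocklist built from yesterday's domains never matches today's:

```python
import hashlib
from datetime import date

def dga(seed: str, day: date, count: int = 3) -> list[str]:
    """Toy domain generation algorithm: deterministic, date-dependent domains."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}-{day.isoformat()}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".example")
    return domains

print(dga("campaign42", date(2024, 6, 1)))
print(dga("campaign42", date(2024, 6, 2)))  # an entirely different set the next day
```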
More indicators also increase false negatives. When detection rules rely on long lists of IOCs, they become brittle. A single changed character in a domain name, a slight modification to a file, or a new IP address causes the match to fail. Detection engineers tune rules to be more permissive, which increases false positives. The system degrades in both directions.
What detection requires instead of indicators
Effective detection against AI-driven attacks requires understanding intent, not matching patterns. This shifts the detection model from "have we seen this before" to "what is this sequence of actions trying to accomplish."
Flow-based detection captures this intent at the network level. Network flows reveal who communicates with whom, when, and with what rhythm. These patterns reflect operational behavior rather than specific artifacts. An attacker moving laterally through a network generates flow sequences that differ structurally from normal service-to-service traffic, even if they use legitimate tools and protocols.
Flow data provides several advantages over indicators. Flows cannot be trivially randomized. An attacker can change IP addresses and ports, but the sequence of connections, timing relationships, and traffic volumes reflect the underlying intent. A credential theft operation creates a recognizable pattern of authentication attempts followed by lateral movement, regardless of which specific tools are used.
Deep learning models trained on flow sequences learn to recognize these patterns without requiring labeled indicators. The model observes normal operational behavior and identifies sequences that deviate in structure, timing, or progression. When an AI-generated attack creates a novel variant, the flow sequence still reveals the intent because the underlying objectives (reconnaissance, access, exfiltration) impose constraints on network behavior.
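As a rough illustration of the modeling setup, the sketch below (PyTorch, with illustrative features and architecture, not DeepTempo's actual model) encodes each flow as a feature vector and classifies the sequence as a whole:

```python
import torch
import torch.nn as nn

class FlowSequenceClassifier(nn.Module):
    """Scores a sequence of flow feature vectors as benign vs. malicious intent."""
    def __init__(self, n_features: int = 6, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # benign / malicious

    def forward(self, flows: torch.Tensor) -> torch.Tensor:
        # flows: (batch, seq_len, n_features) -- e.g. duration, bytes in/out,
        # packets, port entropy, inter-flow gap, each normalized
        _, (h_n, _) = self.encoder(flows)
        return self.head(h_n[-1])  # classify from the final hidden state

model = FlowSequenceClassifier()
batch = torch.randn(4, 32, 6)  # 4 sequences of 32 flows each
print(model(batch).shape)      # torch.Size([4, 2])
```

The key design point is that the unit of classification is the sequence, not the individual flow: structure, ordering, and timing carry the signal.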
The transition from reactive to structural detection
This represents a fundamental change in detection philosophy. Instead of asking "do we have an indicator for this," detection systems should ask "does this flow sequence match the structural anatomy of malicious activity."
Sequence-based detection works because each type of attack action has inherent structural characteristics. Reconnaissance scanning creates connection patterns with specific timing, target diversity, and port usage that differs from application behavior. Lateral movement produces authentication and connection sequences unlike normal system administration. Data staging generates file access patterns distinct from routine operations. These structural signatures exist in the network flows themselves, independent of which specific tools or techniques the attacker uses.
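Even a crude structural check makes the point. This sketch flags scanning by target fan-out within a time window, with no indicators involved; the threshold is an assumption to tune per environment:

```python
from collections import defaultdict

FANOUT_THRESHOLD = 50  # distinct targets per window; tune per environment

def scan_sources(flows: list[dict], window_s: int = 60) -> set[str]:
    """Flag sources contacting unusually many distinct host:port pairs per window."""
    buckets: dict[tuple, set] = defaultdict(set)
    for f in flows:
        bucket = (f["src_ip"], f["ts"] // window_s)
        buckets[bucket].add((f["dst_ip"], f["dst_port"]))
    return {src for (src, _), targets in buckets.items()
            if len(targets) >= FANOUT_THRESHOLD}

# A scanner sweeping one port across a /24 in a single minute stands out.
flows = [{"src_ip": "10.0.0.5", "dst_ip": f"10.0.1.{i}", "dst_port": 445, "ts": i}
         for i in range(60)]
print(scan_sources(flows))  # {'10.0.0.5'}
```

A sequence model learns far richer versions of the same idea, but the principle is identical: the structure betrays the action, whatever tool produced it.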
The critical insight is that detection operates on individual sequences representing discrete attack steps. When a model identifies reconnaissance activity, it does so based on the structural anatomy of that specific reconnaissance sequence, not because it observed the complete attack progression from initial access through exfiltration. Each flow sequence contains sufficient structural information to assess whether it represents malicious activity.
Detection engineers can implement this by prioritizing deep learning over signature matching. This does not mean abandoning indicators entirely. Known bad IPs and malware hashes still provide value for catching careless attackers and known threats. But they cannot be the primary detection mechanism when facing adaptive adversaries.
The operational shift involves investing in systems that understand the structural anatomy of both benign and malicious network activity. Each flow sequence is assessed independently. The system flags a reconnaissance pattern not because it knows what comes next in the attack chain, but because the reconnaissance itself violates structural expectations for legitimate network behavior. This requires learning what normal operational patterns look like and identifying sequences whose structure indicates malicious intent.
Deep learning detection operates differently
Intent-based detection using deep learning foundation models addresses these limitations through a different approach. Instead of learning what is "normal" and flagging deviations, the model learns the structural patterns of both malicious and legitimate activity. The distinction is critical.
When DeepTempo's foundation model processes flow sequences, it learns to recognize the structural anatomy of different types of activity. Reconnaissance has inherent characteristics in how it probes targets, regardless of whether that reconnaissance happens quickly or slowly, during business hours or overnight. Lateral movement creates connection patterns distinct from legitimate administration, even when the attacker uses legitimate tools and credentials. The model identifies intent based on structure, not deviation from baseline.
This approach handles several scenarios where traditional behavioral detection fails. An attacker moving slowly to avoid anomaly detection still creates reconnaissance sequences with recognizable structure. The speed changes, but the fundamental pattern of probing multiple targets in exploratory ways remains. A legitimate administrator working unusual hours does not trigger alerts because the structural pattern of their activity matches legitimate administration, even if the timing is anomalous. The system assesses "what is this trying to accomplish" rather than "is this different."
Zero-shot detection capability emerges from this approach. The foundation model learns general patterns of malicious and legitimate activity from training data. When deployed in a new environment, it can identify threats it has never seen before because it recognizes the structural anatomy of the attack action, not a specific implementation. An AI-generated malware variant using novel infrastructure still performs reconnaissance, lateral movement, or exfiltration in ways that create recognizable structural patterns.
The operational difference is substantial. Traditional UEBA requires months of baseline learning before providing useful detection, and that baseline requires continuous retuning as the environment changes. Deep learning models can detect threats immediately upon deployment because they assess intent based on learned structural patterns rather than environment-specific baselines. False positive rates remain low because the model distinguishes between structural patterns of malicious activity and legitimate operations, not between normal and anomalous behavior.
What this means for detection engineering practice
The shift from indicators to intent changes daily detection work. Instead of updating indicator feeds and tuning signature rules, detection engineers focus on understanding what legitimate operational patterns look like in their environment and recognizing sequences whose structure indicates malicious intent. This is less about maintaining lists and more about modeling behavior.
When investigating an alert, the question changes from "which indicator matched" to "what structural characteristics flagged this sequence as malicious." An analyst sees a flagged flow sequence identified as reconnaissance activity. The system provides rich context: the specific flow patterns that triggered detection, timing characteristics, connection sequences, MITRE ATT&CK mapping, and how this structure differs from normal operational patterns in the environment. Even without specific indicators like known-bad IPs, the structural analysis and contextual information provide clear investigative direction. The analyst can pivot to endpoint telemetry to understand what processes were involved, check authentication logs for credential usage, and assess whether the activity pattern makes sense for legitimate operations.
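The shape of that context matters for triage speed. Here is a sketch of what a structure-first alert might carry; the field names and values are hypothetical, not DeepTempo's schema:

```python
from dataclasses import dataclass, field

@dataclass
class StructuralAlert:
    """Context a structure-first detection can attach to a flagged sequence."""
    verdict: str                    # what the sequence appears to be doing
    mitre_technique: str            # mapped ATT&CK technique
    triggering_patterns: list[str]  # structural features that fired
    pivot_hints: list[str] = field(default_factory=list)

alert = StructuralAlert(
    verdict="reconnaissance",
    mitre_technique="T1046 Network Service Discovery",
    triggering_patterns=[
        "fan-out to 60 distinct hosts in 45s",
        "uniform inter-flow gap (~0.7s)",
        "single destination port across targets",
    ],
    pivot_hints=["check endpoint telemetry on 10.0.0.5", "review auth logs for the window"],
)
print(alert.verdict, "->", alert.mitre_technique)
```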
This investigative approach works because it focuses on the structural anatomy of specific attack actions rather than their surface characteristics. Detection does not require observing a complete attack chain. Each type of malicious activity creates a distinct structural signature in network flow data. Reconnaissance probing has recognizable patterns in connection timing, target selection, and port sequences. Lateral movement creates flow structures that differ from normal service-to-service communication. Credential abuse generates authentication patterns unlike legitimate user behavior.
AI can generate countless implementation variants, but it cannot eliminate the structural requirements of the underlying action. An attacker performing reconnaissance must probe targets in ways that reveal intent, regardless of which tools are used. Someone moving laterally must create connection sequences that differ from application logic. These structural characteristics exist in individual flow sequences, independent of what comes before or after in the broader attack.
The rich contextual information provided by structural detection accelerates investigation. Instead of pursuing a single indicator match, analysts receive a complete picture: what the sequence was attempting to accomplish, which specific structural patterns triggered detection, how it maps to known attacker techniques, and how it differs from normal operational patterns. This context enables faster triage and more informed decisions, even when facing novel attack variants that have never been documented before.
Closing observations
Indicator-based detection served the security industry when attacks were manually crafted and infrastructure was reused. Those conditions no longer hold. AI enables attackers to generate unique variants at scale and rotate infrastructure continuously.

The transition to structural detection requires different tools and a fundamental shift in how security teams assess threats. Instead of asking "have we seen this indicator," the question becomes "does this flow sequence match the structural anatomy of malicious activity." Organizations cannot eliminate indicator-based detection immediately, nor should they. Known indicators still provide value for opportunistic threats. But detection strategies must evolve to prioritize structural analysis of individual attack steps.

The alternative is falling further behind as AI accelerates the attacker's ability to evade static defenses. Detection systems that recognize the structural anatomy of malicious actions in network flows can surface threats that indicator systems miss. Each sequence stands on its own merit based on structural characteristics. Against AI-driven threats that never repeat themselves, indicator-based logic's fundamental limitation is fatal: it requires seeing something before it can be detected.
Want to see how intent-based detection works in your environment? Talk to DeepTempo about deploying deep learning detection that identifies attacks based on what they're trying to accomplish, not what they look like.